Updates from: 03/22/2023 02:13:15
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Customize Application Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/customize-application-attributes.md
Previously updated : 03/20/2023 Last updated : 03/21/2023
Along with this property, attribute-mappings also support the following attribut
The Azure AD provisioning service can be deployed in both "green field" scenarios (where users don't exist in the target system) and "brownfield" scenarios (where users already exist in the target system). To support both scenarios, the provisioning service uses the concept of matching attributes. Matching attributes allow you to determine how to uniquely identify a user in the source and match the user in the target. As part of planning your deployment, identify the attribute that can be used to uniquely identify a user in the source and target systems. Things to note: - **Matching attributes should be unique:** Customers often use attributes such as userPrincipalName, mail, or object ID as the matching attribute.-- **Multiple attributes can be used as matching attributes:** You can define multiple attributes to be evaluated when matching users and the order in which they are evaluated (defined as matching precedence in the UI). If for example, you define three attributes as matching attributes, and a user is uniquely matched after evaluating the first two attributes, the service will not evaluate the third attribute. The service will evaluate matching attributes in the order specified and stop evaluating when a match is found. -- **The value in the source and the target do not have to match exactly:** The value in the target can be some simple function of the value in the source. So, one could have an emailAddress attribute in the source and the userPrincipalName in the target, and match by a function of the emailAddress attribute that replaces some characters with some constant value. -- **Matching based on a combination of attributes is not supported:** Most applications do not support querying based on two properties. Therefore, it is not possible to match based on a combination of attributes. It is possible to evaluate single properties on after another.
+- **Multiple attributes can be used as matching attributes:** You can define multiple attributes to be evaluated when matching users and the order in which they're evaluated (defined as matching precedence in the UI). If for example, you define three attributes as matching attributes, and a user is uniquely matched after evaluating the first two attributes, the service won't evaluate the third attribute. The service will evaluate matching attributes in the order specified and stop evaluating when a match is found.
+- **The value in the source and the target don't have to match exactly:** The value in the target can be a function of the value in the source. So, one could have an emailAddress attribute in the source and the userPrincipalName in the target, and match by a function of the emailAddress attribute that replaces some characters with some constant value.
+- **Matching based on a combination of attributes isn't supported:** Most applications don't support querying based on two properties. Therefore, it's not possible to match based on a combination of attributes. It's possible to evaluate single properties one after another.
- **All users must have a value for at least one matching attribute:** If you define one matching attribute, all users must have a value for that attribute in the source system (a quick audit sketch follows this list). If for example, you define userPrincipalName as the matching attribute, all users must have a userPrincipalName. If you define multiple matching attributes (for example, both extensionAttribute1 and mail), not all users have to have the same matching attribute. One user could have an extensionAttribute1 but not mail while another user could have mail but no extensionAttribute1. - **The target application must support filtering on the matching attribute:** Application developers allow filtering for a subset of attributes on their user or group API. For applications in the gallery, we ensure that the default attribute mapping is for an attribute that the target application's API does support filtering on. When changing the default matching attribute for the target application, check the third-party API documentation to ensure that the attribute can be filtered on.
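For example, before enabling provisioning you can audit whether every user has a value for the attribute you plan to match on. A minimal sketch, assuming the Microsoft Graph PowerShell SDK and a permission such as User.Read.All (mail is used here as the example matching attribute):

```powershell
# Connect with a permission that can read user profiles.
Connect-MgGraph -Scopes "User.Read.All"

# List users that have no value for the chosen matching attribute (mail here).
Get-MgUser -All -Property Id,DisplayName,Mail |
    Where-Object { [string]::IsNullOrEmpty($_.Mail) } |
    Select-Object Id, DisplayName
```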
Group provisioning can be optionally enabled or disabled by selecting the group
The attributes provisioned as part of Group objects can be customized in the same manner as User objects, described previously. > [!TIP]
-> Provisioning of group objects (properties and members) is a distinct concept from [assigning groups](../manage-apps/assign-user-or-group-access-portal.md) to an application. It is possible to assign a group to an application, but only provision the user objects contained in the group. Provisioning of full group objects is not required to use groups in assignments.
+> Provisioning of group objects (properties and members) is a distinct concept from [assigning groups](../manage-apps/assign-user-or-group-access-portal.md) to an application. It is possible to assign a group to an application, but only provision the user objects contained in the group. Provisioning of full group objects isn't required to use groups in assignments.
## Editing the list of supported attributes
Applications and systems that support customization of the attribute list includ
- SuccessFactors to Active Directory / SuccessFactors to Azure Active Directory - Azure Active Directory ([Azure AD Graph API default attributes](/previous-versions/azure/ad/graph/api/entity-and-complex-type-reference#user-entity) and custom directory extensions are supported). Learn more about [creating extensions](./user-provisioning-sync-attributes-for-mapping.md) and [known limitations](./known-issues.md). - Apps that support [SCIM 2.0](https://tools.ietf.org/html/rfc7643)-- For Azure Active Directory writeback to Workday or SuccessFactors, it is supported to update relevant metadata for supported attributes (XPATH and JSONPath), but it is not supported to add new Workday or SuccessFactors attributes beyond those included in the default schema
+- For Azure Active Directory writeback to Workday or SuccessFactors, it is supported to update relevant metadata for supported attributes (XPATH and JSONPath), but it isn't supported to add new Workday or SuccessFactors attributes beyond those included in the default schema
> [!NOTE]
-> Editing the list of supported attributes is only recommended for administrators who have customized the schema of their applications and systems, and have first-hand knowledge of how their custom attributes have been defined or if a source attribute is not automatically displayed in the Azure Portal UI. This sometimes requires familiarity with the APIs and developer tools provided by an application or system. The ability to edit the list of supported attributes is locked down by default, but customers can enable the capability by navigating to the following URL: https://portal.azure.com/?Microsoft_AAD_Connect_Provisioning_forceSchemaEditorEnabled=true . You can then navigate to your application to view the attribute list as described [above](#editing-the-list-of-supported-attributes).
+> Editing the list of supported attributes is only recommended for administrators who have customized the schema of their applications and systems, and have first-hand knowledge of how their custom attributes have been defined or if a source attribute isn't automatically displayed in the Azure portal UI. This sometimes requires familiarity with the APIs and developer tools provided by an application or system. The ability to edit the list of supported attributes is locked down by default, but customers can enable the capability by navigating to the following URL: `https://portal.azure.com/?Microsoft_AAD_Connect_Provisioning_forceSchemaEditorEnabled=true`. You can then navigate to your application to view the attribute list as described [above](#editing-the-list-of-supported-attributes).
> [!NOTE] > When a directory extension attribute in Azure AD does not show up automatically in your attribute mapping drop-down, you can manually add it to the "Azure AD attribute list". When manually adding Azure AD directory extension attributes to your provisioning app, note that directory extension attribute names are case-sensitive. For example: If you have a directory extension attribute named `extension_53c9e2c0exxxxxxxxxxxxxxxx_acmeCostCenter`, make sure you enter it in the same format as defined in the directory.
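Because directory extension attribute names are case-sensitive, it can help to list them and copy the exact name rather than retyping it. A sketch using the Microsoft Graph getAvailableExtensionProperties action (assumes the Microsoft Graph PowerShell SDK; returned names include the `extension_<appId>_` prefix):

```powershell
# Return the directory extension properties registered in the tenant.
$response = Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/directoryObjects/getAvailableExtensionProperties" `
    -Body @{ isSyncedFromOnPremises = $false }

# Print the exact, case-sensitive attribute names to use in the mapping UI.
$response.value | ForEach-Object { $_.name }
```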
For SCIM applications, the attribute name must follow the pattern shown in the e
These instructions are only applicable to SCIM-enabled applications. Applications such as ServiceNow and Salesforce are not integrated with Azure AD using SCIM, and therefore they don't require this specific namespace when adding a custom attribute.
-Custom attributes can't be referential attributes, multi-value or complex-typed attributes. Custom multi-value and complex-typed extension attributes are currently supported only for applications in the gallery. The custom extension schema header is omitted in the example below as it is not sent in requests from the Azure AD SCIM client. This issue will be fixed in the future and the header will be sent in the request.
+Custom attributes can't be referential attributes, multi-value or complex-typed attributes. Custom multi-value and complex-typed extension attributes are currently supported only for applications in the gallery. The custom extension schema header is omitted in the example below as it isn't sent in requests from the Azure AD SCIM client. This issue will be fixed in the future and the header will be sent in the request.
**Example representation of a user with an extension attribute:**
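The payload shape is roughly as follows. This is a hypothetical sketch (the extension schema name and attribute are illustrative, not the values used by your app), shown as a PowerShell here-string so it can be inspected locally:

```powershell
# Hypothetical SCIM user with a custom extension attribute (illustrative names only).
$scimUser = @"
{
  "schemas": [
    "urn:ietf:params:scim:schemas:core:2.0:User",
    "urn:ietf:params:scim:schemas:extension:CustomExtensionName:2.0:User"
  ],
  "userName": "[email protected]",
  "active": true,
  "urn:ietf:params:scim:schemas:extension:CustomExtensionName:2.0:User": {
    "CustomAttribute": "701984"
  }
}
"@

# Parse it to confirm the extension namespace and attribute name line up.
($scimUser | ConvertFrom-Json).'urn:ietf:params:scim:schemas:extension:CustomExtensionName:2.0:User'
```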
Use the steps below to provision roles for a user to your application. Note that
![Add SingleAppRoleAssignment](./media/customize-application-attributes/edit-attribute-singleapproleassignment.png) - **Things to consider** - Ensure that multiple roles are not assigned to a user. We cannot guarantee which role will be provisioned.
- - SingleAppRoleAssignments is not compatible with setting scope to "Sync All users and groups."
+ - SingleAppRoleAssignments isn't compatible with setting scope to "Sync All users and groups."
- **Example request (POST)** ```json
The request formats in the PATCH and POST differ. To ensure that POST and PATCH
- **Things to consider** - All roles will be provisioned as primary = false. - The POST contains the role type. The PATCH request does not contain type. We are working on sending the type in both POST and PATCH requests.
- - AppRoleAssignmentsComplex is not compatible with setting scope to "Sync All users and groups."
+ - AppRoleAssignmentsComplex isn't compatible with setting scope to "Sync All users and groups."
- **Example output**
Selecting this option will effectively force a resynchronization of all users wh
- Microsoft Azure AD provides an efficient implementation of a synchronization process. In an initialized environment, only objects requiring updates are processed during a synchronization cycle. - Updating attribute-mappings has an impact on the performance of a synchronization cycle. An update to the attribute-mapping configuration requires all managed objects to be reevaluated. - A recommended best practice is to keep the number of consecutive changes to your attribute-mappings at a minimum.-- Adding a photo attribute to be provisioned to an app is not supported today as you cannot specify the format to sync the photo. You can request the feature on [User Voice](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789)-- The attribute IsSoftDeleted is often part of the default mappings for an application. IsSoftdeleted can be true in one of four scenarios (the user is out of scope due to being unassigned from the application, the user is out of scope due to not meeting a scoping filter, the user has been soft deleted in Azure AD, or the property AccountEnabled is set to false on the user). It is not recommended to remove the IsSoftDeleted attribute from your attribute mappings.
+- Adding a photo attribute to be provisioned to an app isn't supported today as you can't specify the format to sync the photo. You can request the feature on [User Voice](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789).
+- The attribute IsSoftDeleted is often part of the default mappings for an application. IsSoftDeleted can be true in one of four scenarios (the user is out of scope due to being unassigned from the application, the user is out of scope due to not meeting a scoping filter, the user has been soft deleted in Azure AD, or the property AccountEnabled is set to false on the user). It isn't recommended to remove the IsSoftDeleted attribute from your attribute mappings.
- The Azure AD provisioning service does not support provisioning null values. - The primary key, typically "ID", should not be included as a target attribute in your attribute mappings. - The role attribute typically needs to be mapped using an expression, rather than a direct mapping. See section above for more details on role mapping. -- While you can disable groups from your mappings, disabling users is not supported.
+- While you can disable groups from your mappings, disabling users isn't supported.
## Next steps
active-directory Concept Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-registration-mfa-sspr-combined.md
Before combined registration, users registered authentication methods for Azure AD Multi-Factor Authentication and self-service password reset (SSPR) separately. People were confused that similar methods were used for multifactor authentication and SSPR but they had to register for both features. Now, with combined registration, users can register once and get the benefits of both multifactor authentication and SSPR. We recommend this video on [How to enable and configure SSPR in Azure AD](https://www.youtube.com/watch?v=rA8TvhNcCvQ)
-> [!NOTE]
-> Effective Oct. 1st, 2022, we will begin to enable combined registration for all users in Azure AD tenants created before August 15th, 2020. Tenants created after this date are enabled with combined registration.
-
-This article outlines what combined security registration is. To get started with combined security registration, see the following article:
-
-> [!div class="nextstepaction"]
-> [Enable combined security registration](howto-registration-mfa-sspr-combined.md)
![My Account showing registered Security info for a user](media/concept-registration-mfa-sspr-combined/combined-security-info-defaults-registered.png)
Before enabling the new experience, review this administrator-focused documentat
Azure AD combined security information registration is available for Azure US Government but not Azure China 21Vianet.
-> [!IMPORTANT]
-> Users that are enabled for both the original preview and the enhanced combined registration experience see the new behavior. Users that are enabled for both experiences see only the My Account experience. The *My Account* aligns with the look and feel of combined registration and provides a seamless experience for users. Users can see My Account by going to [https://myaccount.microsoft.com](https://myaccount.microsoft.com).
->
-> You can set **Require users to register when signing in** to **Yes** to require all users to register when signing in, ensuring that all users are protected.
->
-> You might encounter an error message while trying to access the Security info option, such as, "Sorry, we can't sign you in". Confirm that you don't have any configuration or group policy object that blocks third-party cookies on the web browser.
- *My Account* pages are localized based on the language settings of the computer accessing the page. Microsoft stores the most recent language used in the browser cache, so subsequent attempts to access the pages continue to render in the last language used. If you clear the cache, the pages re-render. If you want to force a specific language, you can add `?lng=<language>` to the end of the URL, where `<language>` is the code of the language you want to render.
Users can access manage mode by going to [https://aka.ms/mysecurityinfo](https:/
## Key usage scenarios
+### Protect Security info registration with Conditional Access
+To secure when and how users register for Azure AD Multi-Factor Authentication and self-service password reset, you can use user actions in Conditional Access policy. This functionality may be enabled in organizations that want users to register for Azure AD Multi-Factor Authentication and SSPR from a central location, such as a trusted network location during HR onboarding. Learn more about how to configure [common Conditional Access policies for securing security info registration](../conditional-access/howto-conditional-access-policy-registration.md).
++ ### Set up security info during sign-in An admin has enforced registration.
active-directory Concept Sspr Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-policy.md
The two-gate policy requires two pieces of authentication data, such as an email
* A custom domain has been configured for your Azure AD tenant, such as *contoso.com*; or * Azure AD Connect is synchronizing identities from your on-premises directory
-You can disable the use of SSPR for administrator accounts using the [Set-MsolCompanySettings](/powershell/module/msonline/set-msolcompanysettings) PowerShell cmdlet. The `-SelfServePasswordResetEnabled $False` parameter disables SSPR for administrators. Policy changes to disable or enable SSPR for administrator accounts can take up to 60 minutes to take effect.
+You can disable the use of SSPR for administrator accounts using the [Update-MgPolicyAuthorizationPolicy](/powershell/module/microsoft.graph.identity.signins/update-mgpolicyauthorizationpolicy) PowerShell cmdlet. The `-AllowedToUseSspr:$true|$false` parameter enables or disables SSPR for administrators. Policy changes to enable or disable SSPR for administrator accounts can take up to 60 minutes to take effect.
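For example, a minimal sketch (assumes the Microsoft Graph PowerShell SDK and a role that can update the authorization policy; depending on your SDK version the cmdlet may also require the authorization policy ID):

```powershell
# Connect with a permission that can update the authorization policy.
Connect-MgGraph -Scopes "Policy.ReadWrite.Authorization"

# Disable SSPR for administrator accounts.
Update-MgPolicyAuthorizationPolicy -AllowedToUseSspr:$false

# Re-enable SSPR for administrator accounts later if needed.
# Update-MgPolicyAuthorizationPolicy -AllowedToUseSspr:$true
```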
### Exceptions
active-directory Concept System Preferred Multifactor Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-system-preferred-multifactor-authentication.md
When a user signs in, the authentication process checks which authentication met
1. [Telephony](concept-authentication-phone-options.md)<sup>2</sup> <sup>1</sup> Includes hardware or software TOTP from Microsoft Authenticator, Authenticator Lite, or third-party applications.
-<sup>2</sup> Includes SMS and voice calls.
+<sup>2</sup> Includes SMS and voice calls.
### How does system-preferred MFA affect AD FS or NPS extension?
System-preferred MFA doesn't affect users who sign in by using Active Directory
### What happens for users who aren't specified in the Authentication methods policy but enabled in the legacy MFA tenant-wide policy? The system-preferred MFA also applies to users who are enabled for MFA in the legacy MFA policy.+ :::image type="content" border="true" source="./media/how-to-mfa-number-match/legacy-settings.png" alt-text="Screenshot of legacy MFA settings."::: ## Next steps
active-directory Howto Mfa Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-getstarted.md
A major step in every multifactor authentication deployment is getting users reg
### Combined registration for SSPR and Azure AD MFA
-> [!NOTE]
-> Starting on August 15th 2020, all new Azure AD tenants will be automatically enabled for combined registration. Tenants created after this date will be unable to utilize the legacy registration workflows.
-> After Sept. 30th, 2022, all existing Azure AD tenants will be automatically enabled for combined registration.
-
-We recommend that organizations use the [combined registration experience for Azure AD Multi-Factor Authentication and self-service password reset (SSPR)](howto-registration-mfa-sspr-combined.md). SSPR allows users to reset their password in a secure way using the same methods they use for Azure AD Multi-Factor Authentication. Combined registration is a single step for end users. To make sure you understand the functionality and end-user experience, see the [Combined security information registration concepts](concept-registration-mfa-sspr-combined.md).
+[The combined registration experience for Azure AD Multi-Factor Authentication and self-service password reset (SSPR)](howto-registration-mfa-sspr-combined.md) enables users to register for both MFA and SSPR in a unified experience. SSPR allows users to reset their password in a secure way using the same methods they use for Azure AD Multi-Factor Authentication. To make sure you understand the functionality and end-user experience, see the [Combined security information registration concepts](concept-registration-mfa-sspr-combined.md).
It's critical to inform users about upcoming changes, registration requirements, and any necessary user actions. We provide [communication templates](https://aka.ms/mfatemplates) and [user documentation](https://support.microsoft.com/account-billing/set-up-security-info-from-a-sign-in-page-28180870-c256-4ebf-8bd7-5335571bf9a8) to prepare your users for the new experience and help to ensure a successful rollout. Send users to https://myprofile.microsoft.com to register by selecting the **Security Info** link on that page.
active-directory Howto Registration Mfa Sspr Combined Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-registration-mfa-sspr-combined-troubleshoot.md
The following table lists all audit events generated by combined registration:
| | | | I don't have the option to add a particular method. | 1. Determine whether the method is enabled for Multi-Factor Authentication or for SSPR. <br> 2. If the method is enabled, save the policies again and wait 1-2 hours before testing again. <br> 3. If the method is enabled, ensure that the user hasn't already set up the maximum number of that method that they're allowed to set up.|
-## Disable combined registration
-
-When a user registers a phone number and/or mobile app in the combined registration experience, our service stamps a set of flags (StrongAuthenticationMethods) for those methods on that user. This functionality allows the user to perform Multi-Factor Authentication with those methods whenever Multi-Factor Authentication is required.
-
-If an admin enables combined registration, users register through the combined registration experience, and then the admin disables combined registration, users might unknowingly be registered for Multi-Factor Authentication also.
-
-If a user who has completed combined registration goes to the legacy self-service password reset (SSPR) registration page at [https://aka.ms/ssprsetup](https://aka.ms/ssprsetup), the user will be prompted to perform Multi-Factor Authentication before they can access that page. This step is expected from a technical standpoint, but it's new for users who were previously registered for SSPR only. Though this extra step does improve the user's security posture by providing another level of security, admins might want to roll back their users so that they're no longer able to perform Multi-Factor Authentication.
### How to roll back users
In a PowerShell window, run the following command, providing the script and user
`<script location> -path <user file location>`
-### Disable the updated experience
-
-To disable the updated experience for your users, complete these steps:
-
-1. Sign in to the Azure portal as a user administrator.
-2. Go to **Azure Active Directory** > **User settings** > **Manage user feature settings**.
-3. Under **Users can use the combined security information registration experience**, set the selector to **None**, and then select **Save**.
-
-Users will no longer be prompted to register by using the updated experience.
- ## Next steps * [Learn more about combined registration for self-service password reset and Azure AD Multi-Factor Authentication](concept-registration-mfa-sspr-combined.md)
active-directory Howto Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-registration-mfa-sspr-combined.md
# Enable combined security information registration in Azure Active Directory
-Before combined registration, users registered authentication methods for Azure AD Multi-Factor Authentication and self-service password reset (SSPR) separately. People were confused that similar methods were used for Azure AD Multi-Factor Authentication and SSPR but they had to register for both features. Now, with combined registration, users can register once and get the benefits of both Azure AD Multi-Factor Authentication and SSPR.
+Before combined registration, users registered authentication methods for Azure AD Multi-Factor Authentication and self-service password reset (SSPR) separately. Users were confused that similar methods were used for Azure AD Multi-Factor Authentication and SSPR but they had to register for both features. Now, with combined registration, users can register once and get the benefits of both Azure AD Multi-Factor Authentication and SSPR.
> [!NOTE]
-> Effective Oct. 1st, 2022, we will begin to enable combined registration for all users in Azure AD tenants created before August 15th, 2020. Tenants created after this date are enabled with combined registration.
+> Effective Mar. 14th, 2023, combined registration is now the default MFA and SSPR registration experience for all organizations.
-To make sure you understand the functionality and effects before you enable the new experience, see the [Combined security information registration concepts](concept-registration-mfa-sspr-combined.md).
+To help you understand the functionality and effects of the new experience, see the [Combined security information registration concepts](concept-registration-mfa-sspr-combined.md).
![Combined security information registration enhanced experience](media/howto-registration-mfa-sspr-combined/combined-security-info-more-required.png)
-## Enable combined registration
-
-To enable combined registration, complete these steps:
-
-1. Sign in to the Azure portal as a user administrator or global administrator.
-2. Go to **Azure Active Directory** > **User settings** > **Manage user feature settings**.
-3. Under **Users can use the combined security information registration experience**, choose to enable for a **Selected** group of users or for **All** users.
-
- ![Enable the combined security info experience for users](media/howto-registration-mfa-sspr-combined/enable-the-combined-security-info.png)
-
- > [!IMPORTANT]
- > If your Azure tenant has already been enabled for combined registration, you might not see the configuration option for **Users can use the combined security information registration experience** or even see it grayed out.
-
-> [!NOTE]
-> After you enable combined registration, users who register or confirm their phone number or mobile app through the new experience can use them for Azure AD Multi-Factor Authentication and SSPR, if those methods are enabled in the Azure AD Multi-Factor Authentication and SSPR policies.
->
-> If you then disable this experience, users who go to the previous SSPR registration page at `https://aka.ms/ssprsetup` are required to perform multi-factor authentication before they can access the page.
-
-If you have configured the *Site to Zone Assignment List* in Internet Explorer, the following sites have to be in the same zone:
-
-* *[https://login.microsoftonline.com](https://login.microsoftonline.com)*
-* *[https://Login.windows.net](https://login.windows.net)*
-* *[https://mysignins.microsoft.com](https://mysignins.microsoft.com)*
-* *[https://account.activedirectory.windowsazure.com](https://account.activedirectory.windowsazure.com)*
- ## Conditional Access policies for combined registration To secure when and how users register for Azure AD Multi-Factor Authentication and self-service password reset, you can use user actions in Conditional Access policy. This functionality may be enabled in organizations that want users to register for Azure AD Multi-Factor Authentication and SSPR from a central location, such as a trusted network location during HR onboarding.
Complete the following steps to create a policy that applies to all selected use
If you need help, see [troubleshoot combined security info registration](howto-registration-mfa-sspr-combined-troubleshoot.md) or learn [What is the location condition in Azure AD Conditional Access?](../conditional-access/location-condition.md)
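If you prefer to create the policy programmatically, the following is a rough sketch using the Microsoft Graph PowerShell SDK. The group and named location IDs are placeholders, and the policy is created in report-only mode so you can validate it before enforcing it:

```powershell
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policy = @{
    displayName = "Securing security info registration"
    state       = "enabledForReportingButNotEnforced"   # report-only while you validate
    conditions  = @{
        users        = @{
            includeUsers = @("All")
            excludeUsers = @("<break-glass-account-object-id>")
        }
        applications = @{
            # Target the "Register security information" user action.
            includeUserActions = @("urn:user:registersecurityinfo")
        }
        locations    = @{
            includeLocations = @("All")
            excludeLocations = @("<trusted-named-location-id>")
        }
    }
    grantControls = @{ operator = "OR"; builtInControls = @("block") }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```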
-Once users are enabled for combined registration, you can then [enable self-service password reset](tutorial-enable-sspr.md) and [enable Azure AD Multi-Factor Authentication](tutorial-enable-azure-mfa.md).
+Review how you can [enable self-service password reset](tutorial-enable-sspr.md) and [enable Azure AD Multi-Factor Authentication](tutorial-enable-azure-mfa.md) in your tenant.
If needed, learn how to [force users to re-register authentication methods](howto-mfa-userdevicesettings.md#manage-user-authentication-options).
active-directory Howto Sspr Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-reporting.md
In the Azure portal experience, we have improved the way that you can view passw
### Combined registration
-If you have enabled [combined registration](./concept-registration-mfa-sspr-combined.md), information regarding user activity in the audit logs will be found under **Security** > **Authentication Methods**.
+[Combined registration](./concept-registration-mfa-sspr-combined.md) security information registration and management events can be found in the audit logs under **Security** > **Authentication Methods**.
## Description of the report columns in the Azure portal
active-directory Tutorial Pilot Aadc Aadccp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/tutorial-pilot-aadc-aadccp.md
The following are prerequisites required for completing this tutorial
As a minimum, you should have [Azure AD connect](https://www.microsoft.com/download/details.aspx?id=47594) 1.4.32.0. To update Azure AD Connect sync, complete the steps in [Azure AD Connect: Upgrade to the latest version](../hybrid/how-to-upgrade-previous-version.md). ## Back up your Azure AD Connect configuration
-Before making any changes, you should back up your Azure AD Connect configuration. This way, you can role-back. See [Import and export Azure AD Connect configuration settings](../hybrid/how-to-connect-import-export-config.md) for more information.
+Before making any changes, you should back up your Azure AD Connect configuration. This way, you can roll back to your previous configuration. See [Import and export Azure AD Connect configuration settings](../hybrid/how-to-connect-import-export-config.md) for more information.
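For example, Azure AD Connect exports its applied settings as JSON files that you can copy off the server before making changes. A minimal sketch, assuming the default export directory and a backup share of your choosing:

```powershell
# Copy the most recent applied-settings export to a safe location.
$latest = Get-ChildItem "$env:ProgramData\AADConnect\Applied-SynchronizationPolicy-*.json" |
    Sort-Object LastWriteTime -Descending |
    Select-Object -First 1
Copy-Item $latest.FullName -Destination "\\backupserver\aadconnect-config\"
```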
## Stop the scheduler
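On the Azure AD Connect server, the scheduler can be paused from the ADSync PowerShell module, for example:

```powershell
# Check the current scheduler settings, then pause the sync cycle for the pilot.
Get-ADSyncScheduler
Set-ADSyncScheduler -SyncCycleEnabled $false
```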
active-directory Concept Conditional Access Users Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-users-groups.md
A Conditional Access policy must include a user, group, or workload identity ass
> [!VIDEO https://www.youtube.com/embed/5DsW1hB3Jqs]
-> [!NOTE]
-> Some Conditional Access features are currently in public preview and might not be supported or have limited capabilities. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## Include users This list of users typically includes all of the users an organization is targeting in a Conditional Access policy.
The following options are available to include when creating a Conditional Acces
- All users - All users that exist in the directory including B2B guests. - Select users and groups
- - Guest or external users (preview)
+ - Guest or external users
- This selection provides several choices that can be used to target Conditional Access policies to specific guest or external user types and specific tenants containing those types of users. There are [several different types of guest or external users that can be selected](../external-identities/authentication-conditional-access.md#conditional-access-for-external-users), and multiple selections can be made: - B2B collaboration guest users - B2B collaboration member users
active-directory Howto Conditional Access Session Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-session-lifetime.md
A persistent browser session allows users to remain signed in after closing and
The Azure AD default for browser session persistence allows users on personal devices to choose whether to persist the session by showing a "Stay signed in?" prompt after successful authentication. If browser persistence is configured in AD FS using the guidance in the article [AD FS single sign-on settings](/windows-server/identity/ad-fs/operations/ad-fs-single-sign-on-settings#enable-psso-for-office-365-users-to-access-sharepoint-online), we'll comply with that policy and persist the Azure AD session as well. You can also configure whether users in your tenant see the "Stay signed in?" prompt by changing the appropriate setting in the [company branding pane](../fundamentals/customize-branding.md).
+In persistent browsers, cookies stay stored in the user's device even after a user closes the browser. These cookies could have access to Azure Active Directory artifacts, and those artifacts are useable until token expiry regardless of the Conditional Access policies placed on the resource environment. So, token caching can be in direct violation of desired security policies for authentication. While it may seem convenient to store tokens beyond the current session, doing so can create a security vulnerability by allowing unauthorized access to Azure Active Directory artifacts.
+ ## Configuring authentication session controls Conditional Access is an Azure AD Premium capability and requires a premium license. If you would like to learn more about Conditional Access, see [What is Conditional Access in Azure Active Directory?](overview.md#license-requirements)
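As an illustration, sign-in frequency and persistent browser session controls can also be set when creating a policy through the Microsoft Graph PowerShell SDK. A rough sketch (the group ID is a placeholder, and the policy is created in report-only mode):

```powershell
$policy = @{
    displayName = "Admins: 4-hour sign-in frequency, never persist browser"
    state       = "enabledForReportingButNotEnforced"
    conditions  = @{
        users        = @{ includeGroups = @("<admin-group-object-id>") }
        applications = @{ includeApplications = @("All") }
    }
    sessionControls = @{
        signInFrequency   = @{ isEnabled = $true; type = "hours"; value = 4 }
        persistentBrowser = @{ isEnabled = $true; mode = "never" }
    }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```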
We factor for five minutes of clock skew, so that we don't prompt users more o
## Next steps
-* If you're ready to configure Conditional Access policies for your environment, see the article [Plan a Conditional Access deployment](plan-conditional-access.md).
+* If you're ready to configure Conditional Access policies for your environment, see the article [Plan a Conditional Access deployment](plan-conditional-access.md).
active-directory Workload Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/workload-identity.md
Create a location based Conditional Access policy that applies to service princi
1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**.
- 1. Under **What does this policy apply to?**, select **Workload identities (Preview)**.
+ 1. Under **What does this policy apply to?**, select **Workload identities**.
1. Under **Include**, choose **Select service principals**, and select the appropriate service principals from the list. 1. Under **Cloud apps or actions**, select **All cloud apps**. The policy will apply only when a service principal requests a token. 1. Under **Conditions** > **Locations**, include **Any location** and exclude **Selected locations** where you want to allow access.
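To find the IDs of the named locations you want to exclude in that step, you can list them with the Microsoft Graph PowerShell SDK (a sketch assuming the Policy.Read.All permission):

```powershell
# List named locations so you can pick the trusted ones to exclude from the policy.
Connect-MgGraph -Scopes "Policy.Read.All"
Get-MgIdentityConditionalAccessNamedLocation | Select-Object Id, DisplayName
```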
Create a risk-based Conditional Access policy that applies to service principals
1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**.
- 1. Under **What does this policy apply to?**, select **Workload identities (Preview)**.
+ 1. Under **What does this policy apply to?**, select **Workload identities**.
1. Under **Include**, choose **Select service principals**, and select the appropriate service principals from the list. 1. Under **Cloud apps or actions**, select **All cloud apps**. The policy will apply only when a service principal requests a token.
-1. Under **Conditions** > **Service principal risk (Preview)**
+1. Under **Conditions** > **Service principal risk**
1. Set the **Configure** toggle to **Yes**. 1. Select the levels of risk where you want this policy to trigger. 1. Select **Done**.
active-directory B2b Quickstart Invite Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-quickstart-invite-powershell.md
Previously updated : 02/16/2022 Last updated : 03/21/2023 -+ #Customer intent: As a tenant admin, I want to walk through the B2B invitation workflow so that I can understand how to add a user through PowerShell.
Remove-MgUser -UserId '3f80a75e-750b-49aa-a6b0-d9bf6df7b4c6'
## Next steps
-In this quickstart, you invited and added a single guest user to your directory using PowerShell. Next, learn how to invite guest users in bulk using PowerShell.
+In this quickstart, you invited and added a single guest user to your directory using PowerShell. Next, learn how to [invite guest users in bulk using PowerShell](tutorial-bulk-invite.md).
-> [!div class="nextstepaction"]
-> [Tutorial: Bulk invite Azure AD B2B collaboration users](tutorial-bulk-invite.md)
active-directory Invite Internal Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/invite-internal-users.md
Title: Invite internal users to B2B collaboration
-description: If you have internal user accounts for partners, distributors, suppliers, vendors, and other guests, you can change to Azure AD B2B collaboration by inviting them to sign in with their own external credentials or login. Use either PowerShell or the Microsoft Graph invitation API.
+description: If you have internal user accounts for partners, distributors, suppliers, vendors, and other guests, you can change to Azure AD B2B collaboration by inviting them to sign in with their own external credentials or sign-in. Use either PowerShell or the Microsoft Graph invitation API.
Previously updated : 03/02/2022 Last updated : 03/21/2023 -++
+# Customer intent: As a tenant administrator, I want to know how to invite internal users to B2B collaboration.
# Invite internal users to B2B collaboration
-Before the availability of Azure AD B2B collaboration, organizations could collaborate with distributors, suppliers, vendors, and other guest users by setting up internal credentials for them. If you have internal guest users like these, you can invite them to use B2B collaboration instead. These B2B guest users will be able to use their own identities and credentials to sign in, and you won't need to maintain passwords or manage account lifecycles.
+Before the availability of Azure AD B2B collaboration, organizations could collaborate with distributors, suppliers, vendors, and other guest users by setting up internal credentials for them. If you have internal guest users like these, you can invite them to use B2B collaboration instead. These B2B guest users will be able to sign in using their own identities and credentials, eliminating the need for password maintenance or account lifecycle management.
+ Sending an invitation to an existing internal account lets you retain that user's object ID, UPN, group memberships, and app assignments. You don't need to manually delete and re-invite the user or reassign resources. To invite the user, you use the invitation API to pass both the internal user object and the guest user's email address along with the invitation. When the user accepts the invitation, the B2B service changes the existing internal user object to a B2B user. Going forward, the user must sign in to cloud resource services using their B2B credentials.
Sending an invitation to an existing internal account lets you retain that user
- **Access to on-premises resources**: After the user is invited to B2B collaboration, they can still use their internal credentials to access on-premises resources. You can prevent this by resetting or changing the password on the internal account. The exception is [email one-time passcode authentication](one-time-passcode.md); if the user's authentication method is changed to one-time passcode, they won't be able to use their internal credentials anymore. -- **Billing**: This feature doesn't change the UserType for the user, so it doesn't automatically switch the user's billing model to [External Identities monthly active user (MAU) pricing](external-identities-pricing.md). To activate MAU pricing for the user, change the UserType for the user to `guest`. Also note that your Azure AD tenant must be [linked to an Azure subscription](external-identities-pricing.md#link-your-azure-ad-tenant-to-a-subscription) to activate MAU billing.
+- **Billing**: This feature doesn't change the UserType for the user, so it doesn't automatically switch the user's billing model to [External Identities monthly active user (MAU) pricing](external-identities-pricing.md). To activate MAU pricing for the user, change the UserType for the user to `guest`. Also note that your Azure AD tenant must be linked to an Azure subscription to activate MAU billing.
- **Invitation is one-way**: You can invite internal users to use B2B collaboration, but you can't remove the B2B credentials once they're added. To change the user back to an internal-only user, you'll need to delete the user object and create a new one.
Sending an invitation to an existing internal account lets you retain that user
- **On-premises synced users**: For user accounts that are synced between on-premises and the cloud, the on-premises directory remains the source of authority after they're invited to use B2B collaboration. Any changes you make to the on-premises account will sync to the cloud account, including disabling or deleting the account. Therefore, you can't prevent the user from signing into their on-premises account while retaining their cloud account by simply deleting the on-premises account. Instead, you can set the on-premises account password to a random GUID or other unknown value. > [!NOTE]
-> In Azure AD Connect sync, there's a default rule that writes the [onPremisesUserPrincipalName attribute](../hybrid/reference-connect-sync-attributes-synchronized.md#notes) to the user object. Because the presence of this attribute can prevent a user from signing in using external credentials, we block internal-to-external conversions for user objects with this attribute. If you're using Azure AD Connect and you want to be able to invite internal users to B2B collaboration, you'll need to [modify the default rule](../hybrid/how-to-connect-sync-change-the-configuration.md) so the onPremisesUserPrincipalName attribute isn't written to the user object.
+> In Azure AD Connect sync, there's a default rule that writes the onPremisesUserPrincipalName attribute to the user object. Because the presence of this attribute can prevent a user from signing in using external credentials, we block internal-to-external conversions for user objects with this attribute. If you're using Azure AD Connect and you want to be able to invite internal users to B2B collaboration, you'll need to [modify the default rule](../hybrid/how-to-connect-sync-change-the-configuration.md) so the onPremisesUserPrincipalName attribute isn't written to the user object.
## How to invite internal users to B2B collaboration You can use the Azure portal, PowerShell, or the invitation API to send a B2B invitation to the internal user. Some things to note: -- Before you invite the user, make sure the `User.Mail` property of the internal user object (the user's **Email** property in the Azure portal) is set to the external email address they'll use for B2B collaboration.
+- Before you invite the user, make sure the `User.Mail` property of the internal user object (the user's **Email** property in the Azure portal) is set to the external email address they'll use for B2B collaboration. If the internal user has an existing mailbox, you can't change this property to an external email address. You must update their attributes in the [Exchange admin center](/exchange/exchange-admin-center).
- When you invite the user, an invitation is sent to the user via email. If you're using PowerShell or the invitation API, you can suppress this email by setting `SendInvitationMessage` to `False`. Then you can notify the user in another way. [Learn more about the invitation API](customize-invitation-api.md).
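As a rough sketch, the invitation can be sent by calling the invitation API directly. The object ID, email address, and redirect URL below are placeholders; adjust the scope and API version to your environment:

```powershell
Connect-MgGraph -Scopes "User.Invite.All"

$body = @{
    invitedUserEmailAddress = "[email protected]"     # external email set on User.Mail
    inviteRedirectUrl       = "https://myapps.microsoft.com"
    sendInvitationMessage   = $false                           # suppress the email, as noted above
    invitedUser             = @{ id = "<internal-user-object-id>" }
}

Invoke-MgGraphRequest -Method POST -Uri "https://graph.microsoft.com/v1.0/invitations" -Body $body
```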
active-directory Azure Ad Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/azure-ad-data-residency.md
Previously updated : 01/31/2023 Last updated : 03/21/2023
Learn more: [Azure Active Directory, Product overview](https://www.microsoft.com
|Azure AD Connect Health|Azure AD Connect Health generates alerts and reports in Azure Tables storage and blob storage.|In geo location| |Azure AD dynamic membership for groups, Azure AD self-service group management|Azure Tables storage holds dynamic membership rule definitions.|In geo location| |Azure AD Application Proxy|Azure AD Application Proxy stores metadata about the tenant, connector machines, and configuration data in Azure SQL.|In geo location|
-|Azure AD password reset |Azure AD password reset is a back-end service using Redis Cache to track session state. To learn more, go to redis.com to see [Introduction to Redis](https://redis.io/docs/about/).|See, Intro to Redis link in center column.|
|Azure AD password writeback in Azure AD Connect|During initial configuration, Azure AD Connect generates an asymmetric keypair, using the RivestΓÇôShamirΓÇôAdleman (RSA) cryptosystem. It then sends the public key to the self-service password reset (SSPR) cloud service, which performs two operations: </br></br>1. Creates two Azure Service Bus relays for the Azure AD Connect on-premises service to communicate securely with the SSPR service </br> 2. Generates an Advanced Encryption Standard (AES) key, K1 </br></br> The Azure Service Bus relay locations, corresponding listener keys, and a copy of the AES key (K1) goes to Azure AD Connect in the response. Future communications between SSPR and Azure AD Connect occur over the new ServiceBus channel and are encrypted using SSL. </br> New password resets, submitted during operation, are encrypted with the RSA public key generated by the client during onboarding. The private key on the Azure AD Connect machine decrypts them, which prevents pipeline subsystems from accessing the plaintext password. </br> The AES key encrypts the message payload (encrypted passwords, more data, and metadata), which prevents malicious ServiceBus attackers from tampering with the payload, even with full access to the internal ServiceBus channel. </br> For password writeback, Azure AD Connect need keys and data: </br></br> - The AES key (K1) that encrypts the reset payload, or change requests from the SSPR service to Azure AD Connect, via the ServiceBus pipeline </br> - The private key, from the asymmetric key pair that decrypts the passwords, in reset or change request payloads </br> - The ServiceBus listener keys </br></br> The AES key (K1) and the asymmetric keypair rotate a minimum of every 180 days, a duration you can change during certain onboarding or offboarding configuration events. An example is a customer disables and re-enables password writeback, which might occur during component upgrade during service and maintenance. </br> The writeback keys and data stored in the Azure AD Connect database are encrypted by data protection application programming interfaces (DPAPI) (CALG_AES_256). The result is the master ADSync encryption key stored in the Windows Credential Vault in the context of the ADSync on-premises service account. The Windows Credential Vault supplies automatic secret re-encryption as the password for the service account changes. To reset the service account password invalidates secrets in the Windows Credential Vault for the service account. Manual changes to a new service account might invalidate the stored secrets.</br> By default, the ADSync service runs in the context of a virtual service account. The account might be customized during installation to a least-privileged domain service account, a managed service account (MSA), or a group managed service account (gMSA). While virtual and managed service accounts have automatic password rotation, customers manage password rotation for a custom provisioned domain account. As noted, to reset the password causes loss of stored secrets. |In geo location| |Azure AD Device Registration Service |Azure AD Device Registration Service has computer and device lifecycle management in the directory, which enable scenarios such as device-state conditional access, and mobile device management.|In geo location| |Azure AD provisioning|Azure AD provisioning creates, removes, and updates users in systems, such as software as service (SaaS) applications. 
It manages user creation in Azure AD and on-premises AD from cloud HR sources, like Workday. The service stores its configuration in an Azure Cosmos DB, which stores the group membership data for the user directory it keeps. Cosmos DB replicates the database to multiple datacenters in the same region as the tenant, which isolates the data, according to the Azure AD cloud solution model. Replication creates high availability and multiple reading and writing endpoints. Cosmos DB has encryption on the database information, and the encryption keys are stored in the secrets storage for Microsoft.|In geo location|
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
Azure AD recommendations help you improve your tenant posture by surfacing oppor
This release includes our first 3 recommendations: - Convert from per-user MFA to Conditional Access MFA-- Migration applications from AD FS to AAD
+- Migrate applications from AD FS to Azure AD
- Minimize MFA prompts from known devices
Unfamiliar sign-in properties risk detection now provides risk reasons as to whi
Identity Protection now surfaces the unfamiliar properties in the Azure portal on UX and in API as *Additional Info* with a user-friendly description explaining that *the following properties are unfamiliar for this sign-in of the given user*.
-There is no additional work to enable this feature, the unfamiliar properties will be shown by default. For more information, see: [Sign-in risk](../identity-protection/concept-identity-protection-risks.md#sign-in-risk).
+There's no additional work to enable this feature, the unfamiliar properties are shown by default. For more information, see: [Sign-in risk](../identity-protection/concept-identity-protection-risks.md#sign-in-risk).
In the **All Devices** options under the registered column, you can now select a
**Service category:** Identity Protection **Product capability:** Identity Security & Protection
-In the January 2023 release of Authenticator for iOS, there's no companion app for watchOS due to it being incompatible with Authenticator security features, meaning you won't be able to install or use Authenticator on Apple Watch. This change only impacts Apple Watch, so you can still use Authenticator on your other devices. For more information, see: [Common questions about the Microsoft Authenticator app](https://support.microsoft.com/account-billing/common-questions-about-the-microsoft-authenticator-app-12d283d1-bcef-4875-9ae5-ac360e2945dd).
+In the January 2023 release of Authenticator for iOS, there's no companion app for watchOS due to it being incompatible with Authenticator security features, meaning you aren't able to install or use Authenticator on Apple Watch. This change only impacts Apple Watch, so you can still use Authenticator on your other devices. For more information, see: [Common questions about the Microsoft Authenticator app](https://support.microsoft.com/account-billing/common-questions-about-the-microsoft-authenticator-app-12d283d1-bcef-4875-9ae5-ac360e2945dd).
The Converged Authentication Methods Policy enables you to manage all authentica
**Service category:** Directory Management **Product capability:** AuthZ/Access Delegation
-You can now use administrative units to delegate management of specified devices in your tenant by adding devices to an administrative unit. You are also able to assign built-in, and custom device management roles, scoped to that administrative unit. For more information, see: [Device management](../roles/administrative-units.md#device-management).
+You can now use administrative units to delegate management of specified devices in your tenant by adding devices to an administrative unit. You're also able to assign built-in, and custom device management roles, scoped to that administrative unit. For more information, see: [Device management](../roles/administrative-units.md#device-management).
Azure AD supports provisioning users into applications hosted on-premises or in
In December 2022 we've added the following 44 new applications in our App gallery with Federation support:
-[Bionexo IDM](https://login.bionexo.com/), [SMART Meeting Pro](https://www.smarttech.com/en/business/software/meeting-pro), [Venafi Control Plane ΓÇô Datacenter](../saas-apps/venafi-control-plane-tutorial.md), [HighQ](../saas-apps/highq-tutorial.md), [Drawboard PDF](https://pdf.drawboard.com/), [ETU Skillsims](../saas-apps/etu-skillsims-tutorial.md), [TencentCloud IDaaS](../saas-apps/tencent-cloud-idaas-tutorial.md), [TeamHeadquarters Email Agent OAuth](https://thq.entry.com/), [Verizon MDM](https://verizonmdm.vzw.com/), [QRadar SOAR](../saas-apps/qradar-soar-tutorial.md), [Tripwire Enterprise](../saas-apps/tripwire-enterprise-tutorial.md), [Cisco Unified Communications Manager](../saas-apps/cisco-unified-communications-manager-tutorial.md), [Howspace](https://login.in.howspace.com/), [Flipsnack SAML](../saas-apps/flipsnack-saml-tutorial.md), [Albert](http://www.albertinvent.com/), [Altinget.no](https://www.altinget.no/), [Coveo Hosted Services](../saas-apps/coveo-hosted-services-tutorial.md), [Cybozu(cybozu.com)](../saas-apps/cybozu-tutorial.md), [BombBomb](https://app.bombbomb.com/app), [VMware Identity Service](../saas-apps/vmware-identity-service-tutorial.md), [Cimmaron Exchange Sync - Delegated](https://cimmaronsoftware.com/Mortgage-CRM-Exchange-Sync.aspx), [HexaSync](https://app-az.hexasync.com/login), [Trifecta Teams](https://app.trifectateams.net/), [VerosoftDesign](https://verosoft-design.vercel.app/), [Mazepay](https://app.mazepay.com/), [Wistia](../saas-apps/wistia-tutorial.md), [Begin.AI](https://app.begin.ai/), [WebCE](../saas-apps/webce-tutorial.md), [Dream Broker Studio](https://dreambroker.com/studio/login/), [PKSHA Chatbot](../saas-apps/pksha-chatbot-tutorial.md), [PGM-BCP](https://ups-pgm-bcp.4gfactor.com/azure/), [ChartDesk SSO](../saas-apps/chartdesk-sso-tutorial.md), [Elsevier SP](../saas-apps/elsevier-sp-tutorial.md), [GreenCommerce IdentityServer](https://identity.jem-id.nl/Account/Login), [Fullview](https://app.fullview.io/sign-in), [Aqua Platform](../saas-apps/aqua-platform-tutorial.md), [SpedTrack](../saas-apps/spedtrack-tutorial.md), [Pinpoint](https://pinpoint.ddiworld.com/psg2?sso=true), [Darzin Outlook Add-in](https://outlook.darzin.com/graph-login.html), [Simply Stakeholders Outlook Add-in](https://outlook.simplystakeholders.com/graph-login.html), [tesma](../saas-apps/tesma-tutorial.md), [Parkable](../saas-apps/parkable-tutorial.md), [Unite Us](../saas-apps/unite-us-tutorial.md)
+[Bionexo IDM](https://login.bionexo.com/), [SMART Meeting Pro](https://www.smarttech.com/en/business/software/meeting-pro), [Venafi Control Plane ΓÇô Datacenter](../saas-apps/venafi-control-plane-tutorial.md), [HighQ](../saas-apps/highq-tutorial.md), [Drawboard PDF](https://pdf.drawboard.com/), [ETU Skillsims](../saas-apps/etu-skillsims-tutorial.md), [TencentCloud IDaaS](../saas-apps/tencent-cloud-idaas-tutorial.md), [TeamHeadquarters Email Agent OAuth](https://thq.entry.com/), [Verizon MDM](https://verizonmdm.vzw.com/), [QRadar SOAR](../saas-apps/qradar-soar-tutorial.md), [Tripwire Enterprise](../saas-apps/tripwire-enterprise-tutorial.md), [Cisco Unified Communications Manager](../saas-apps/cisco-unified-communications-manager-tutorial.md), [Howspace](https://login.in.howspace.com/), [Flipsnack SAML](../saas-apps/flipsnack-saml-tutorial.md), [Albert](http://www.albertinvent.com/), [Altinget.no](https://www.altinget.no/), [Coveo Hosted Services](../saas-apps/coveo-hosted-services-tutorial.md), [Cybozu(cybozu.com)](../saas-apps/cybozu-tutorial.md), [BombBomb](https://app.bombbomb.com/app), [VMware Identity Service](../saas-apps/vmware-identity-service-tutorial.md), [HexaSync](https://app-az.hexasync.com/login), [Trifecta Teams](https://app.trifectateams.net/), [VerosoftDesign](https://verosoft-design.vercel.app/), [Mazepay](https://app.mazepay.com/), [Wistia](../saas-apps/wistia-tutorial.md), [Begin.AI](https://app.begin.ai/), [WebCE](../saas-apps/webce-tutorial.md), [Dream Broker Studio](https://dreambroker.com/studio/login/), [PKSHA Chatbot](../saas-apps/pksha-chatbot-tutorial.md), [PGM-BCP](https://ups-pgm-bcp.4gfactor.com/azure/), [ChartDesk SSO](../saas-apps/chartdesk-sso-tutorial.md), [Elsevier SP](../saas-apps/elsevier-sp-tutorial.md), [GreenCommerce IdentityServer](https://identity.jem-id.nl/Account/Login), [Fullview](https://app.fullview.io/sign-in), [Aqua Platform](../saas-apps/aqua-platform-tutorial.md), [SpedTrack](../saas-apps/spedtrack-tutorial.md), [Pinpoint](https://pinpoint.ddiworld.com/psg2?sso=true), [Darzin Outlook Add-in](https://outlook.darzin.com/graph-login.html), [Simply Stakeholders Outlook Add-in](https://outlook.simplystakeholders.com/graph-login.html), [tesma](../saas-apps/tesma-tutorial.md), [Parkable](../saas-apps/parkable-tutorial.md), [Unite Us](../saas-apps/unite-us-tutorial.md)
You can also find the documentation for all the applications here: https://aka.ms/AppsTutorial.
In addition to the Azure Active Directory Authentication Library to Microsoft Au
**Service category:** N/A **Product capability:** User Authentication
-The Temporary Access Pass can now be used to recover Azure AD-joined PCs when the EnableWebSignIn policy is enabled on the device. This is useful for when your users do not know, or have, a password. For more information, see: [Authentication/EnableWebSignIn](/windows/client-management/mdm/policy-csp-authentication#authentication-enablewebsignin).
+The Temporary Access Pass can now be used to recover Azure AD-joined PCs when the EnableWebSignIn policy is enabled on the device. This is useful when your users don't know or don't have a password. For more information, see: [Authentication/EnableWebSignIn](/windows/client-management/mdm/policy-csp-authentication#authentication-enablewebsignin).
In the Azure portal, the required permissions list is under API Permissions with
**Service category:** Conditional Access **Product capability:** User Authentication
-We are announcing Public preview of Authentication strength, a Conditional Access control that allows administrators to specify which authentication methods can be used to access a resource. For more information, see: [Conditional Access authentication strength (preview)](../authentication/concept-authentication-strengths.md). You can use custom authentication strengths to restrict access by requiring specific FIDO2 keys using the Authenticator Attestation GUIDs (AAGUIDs), and apply this through conditional access policies. For more information, see: [FIDO2 security key advanced options](../authentication/concept-authentication-strengths.md#fido2-security-key-advanced-options).
+We're announcing the public preview of authentication strength, a Conditional Access control that allows administrators to specify which authentication methods can be used to access a resource. For more information, see: [Conditional Access authentication strength (preview)](../authentication/concept-authentication-strengths.md). You can use custom authentication strengths to restrict access by requiring specific FIDO2 keys using the Authenticator Attestation GUIDs (AAGUIDs), and apply this through Conditional Access policies. For more information, see: [FIDO2 security key advanced options](../authentication/concept-authentication-strengths.md#fido2-security-key-advanced-options).
For more information, see: [How to use additional context in Microsoft Authentic
In October 2022 we've added the following 15 new applications in our App gallery with Federation support:
-[Unifii](https://www.unifii.com.au/), [WaitWell Staff App](https://waitwell.c)
+[Unifii](https://www.unifii.com.au/), [WaitWell Staff App](https://waitwell.c)
You can also find the documentation for all the applications here: https://aka.ms/AppsTutorial.
For more information about how to better secure your organization by using autom
-## September 2022
-
-### General Availability - SSPR writeback is now available for disconnected forests using Azure AD Connect cloud sync
---
-**Type:** New feature
-**Service category:** Azure AD Connect Cloud Sync
-**Product capability:** Identity Lifecycle Management
-
-Azure AD Connect Cloud Sync Password writeback now provides customers the ability to synchronize Azure AD password changes made in the cloud to an on-premises directory in real time. This can be accomplished using the lightweight Azure AD cloud provisioning agent. For more information, see: [Tutorial: Enable cloud sync self-service password reset writeback to an on-premises environment](../authentication/tutorial-enable-cloud-sync-sspr-writeback.md).
---
-### General Availability - Device-based conditional access on Linux Desktops
---
-**Type:** New feature
-**Service category:** Conditional Access
-**Product capability:** SSO
---
-This feature empowers users on Linux clients to register their devices with Azure AD, enroll into Intune management, and satisfy device-based Conditional Access policies when accessing their corporate resources.
--- Users can register their Linux devices with Azure AD.-- Users can enroll in Mobile Device Management (Intune), which can be used to provide compliance decisions based upon policy definitions to allow device based conditional access on Linux Desktops.-- If compliant, users can use Microsoft Edge Browser to enable Single-Sign on to M365/Azure resources and satisfy device-based Conditional Access policies.-
-For more information, see:
--- [Azure AD registered devices](../devices/concept-azure-ad-register.md)-- [Plan your Azure Active Directory device deployment](../devices/plan-device-deployment.md)---
-### General Availability - Azure AD SCIM Validator
---
-**Type:** New feature
-**Service category:** Provisioning
-**Product capability:** Outbound to SaaS Applications
---
-Independent Software Vendors(ISVs) and developers can self-test their SCIM endpoints for compatibility: We have made it easier for ISVs to validate that their endpoints are compatible with the SCIM-based Azure AD provisioning services. This is now in general availability (GA) status.
-
-For more information, see: [Tutorial: Validate a SCIM endpoint](../app-provisioning/scim-validator-tutorial.md)
---
-### General Availability - prevent accidental deletions
---
-**Type:** New feature
-**Service category:** Provisioning
-**Product capability:** Outbound to SaaS Applications
---
-Accidental deletion of users in any system could be disastrous. We're excited to announce the general availability of the accidental deletions prevention capability as part of the Azure AD provisioning service. When the number of deletions to be processed in a single provisioning cycle spikes above a customer-defined threshold, the Azure AD provisioning service pauses, provides you with visibility into the potential deletions, and allows you to accept or reject the deletions. This functionality has historically been available for Azure AD Connect, and Azure AD Connect Cloud Sync. It's now available across the various provisioning flows, including both HR-driven provisioning and application provisioning.
-
-For more information, see: [Enable accidental deletions prevention in the Azure AD provisioning service](../app-provisioning/accidental-deletions.md)
---
-### General Availability - Identity Protection Anonymous and Malicious IP for ADFS on-premises logins
---
-**Type:** New feature
-**Service category:** Identity Protection
-**Product capability:** Identity Security & Protection
---
-Identity protection expands its Anonymous and Malicious IP detections to protect ADFS sign-ins. This automatically applies to all customers who have AD Connect Health deployed and enabled, and show up as the existing "Anonymous IP" or "Malicious IP" detections with a token issuer type of "AD Federation Services".
-
-For more information, see: [What is risk?](../identity-protection/concept-identity-protection-risks.md)
----
-### New Federated Apps available in Azure AD Application gallery - September 2022
---
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** 3rd Party Integration
---
-In September 2022 we've added the following 15 new applications in our App gallery with Federation support:
-
-[RocketReach SSO](../saas-apps/rocketreach-sso-tutorial.md), [Arena EU](../saas-apps/arena-eu-tutorial.md), [Zola](../saas-apps/zola-tutorial.md), [FourKites SAML2.0 SSO for Tracking](../saas-apps/fourkites-tutorial.md), [Syniverse Customer Portal](../saas-apps/syniverse-customer-portal-tutorial.md), [Rimo](https://rimo.app/), [Q Ware CMMS](https://qware.app/), [Mapiq (OIDC)](https://app.mapiq.com/), [NICE Cxone](../saas-apps/nice-cxone-tutorial.md), [dominKnow|ONE](../saas-apps/dominknowone-tutorial.md), [Waynbo for Azure AD](https://webportal-eu.waynbo.com/Login), [innDex](https://web.inndex.co.uk/azure/authorize), [Profiler Software](https://www.profiler.net.au/), [Trotto go links](https://trot.to/_/auth/login), [AsignetSSOIntegration](../saas-apps/asignet-sso-tutorial.md).
-
-You can also find the documentation of all the applications from here https://aka.ms/AppsTutorial,
-
-For listing your application in the Azure AD app gallery, read the details here: https://aka.ms/AzureADAppRequest
-----
active-directory Entitlement Management Logic Apps Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-logic-apps-integration.md
Title: Trigger custom logic apps with entitlement management
+ Title: Trigger Logic Apps with custom extensions in entitlement management (Preview)
description: Learn how to configure and use custom logic app workflows in entitlement management. documentationCenter: ''
na Previously updated : 01/25/2023 Last updated : 03/20/2023
#Customer intent: As an administrator, I want detailed information about how I can configure and add custom logic apps to my catalogs and access packages in entitlement management.
-# Trigger custom logic apps with entitlement management
+# Trigger Logic Apps with custom extensions in entitlement management (Preview)
-[Azure Logic Apps](../../logic-apps/logic-apps-overview.md) can be used to automate custom workflows and connect apps and services in one place. Users can integrate Azure Logic Apps with entitlement management to broaden their governance workflows beyond the core entitlement management use cases.
+[Azure Logic Apps](../../logic-apps/logic-apps-overview.md) can be used to automate custom workflows and connect apps and services in one place. Users can integrate Logic Apps with entitlement management to broaden their governance workflows beyond the core entitlement management use cases.
-These logic app workflows can then be triggered to run in accordance with entitlement management use cases such as when an access package is granted or requested. For example, an admin could create and link a custom logic app workflow to entitlement management so that when a user requests an access package, the logic app workflow is triggered to ensure that the user is also assigned certain characteristics in a 3rd party SAAS app (like Salesforce) or is sent a custom email.
+These Logic Apps can then be triggered to run in accordance with entitlement management use cases such as when an access package is granted or requested. For example, an admin could create and link a custom Logic App to entitlement management, so that when a user requests an access package, a Logic App is triggered that ensures the user is also assigned certain characteristics in a third-party SaaS app (like Salesforce) or is sent a custom email.
-Entitlement management use cases that can be integrated with Azure Logic Apps include:
+Entitlement management use cases that can be integrated with Logic Apps include the following stages. These are the triggers associated with an access package that can launch the custom extension Logic App:
-- when an access package is requested
+- When an access package request is created
-- when an access package request is granted
+- When an access package request is approved
-- when an access package assignment expires
+- When an access package assignment is granted
-These triggers in logic app workflows are controlled in a new tab within access package policies called **Rules**. Additionally, a **Custom Extensions** tab on the Catalog page will show all added logic app resources for a given Catalog. This article describes how to create and add logic apps to catalogs and access packages in entitlement management.
+- When an access package assignment is removed
-## Create and add a logic app workflow to a catalog for use in entitlement management
+- 14 days before an access package assignment auto expires
+
+- One day before an access package assignment auto expires
++
+These triggers to Logic Apps are controlled in a tab within access package policies called **Rules**. Additionally, a **Custom Extensions** tab on the Catalog page shows all added Logic App extensions for a given Catalog. This article describes how to create and add Logic Apps to catalogs and access packages in entitlement management.
+
+## Create and add a Logic App workflow to a catalog for use in entitlement management
**Prerequisite roles:** Global administrator, Identity Governance administrator, Catalog owner or Resource Group Owner
These triggers in logic app workflows are controlled in a new tab within access
1. In the header navigation bar, select **Add a Custom Extension**.
-1. In the **Basics** tab, enter the name of the custom extension (linked logic app that you are adding) and description of the workflow. These fields will show up in the **Custom Extensions** tab of the Catalog going forward.
+1. In the **Basics** tab, enter the name of the custom extension, which should be the name of the Logic App you're linking, and a description of the workflow. These fields show up in the **Custom Extensions** tab of the Catalog.
![Pane to create a custom extension](./media/entitlement-management-logic-apps/create-custom-extension.png)
-1. Then go on to the **Details** tab.
+1. The **Extension Type** tab defines what type of access package policies you can use the custom extension with. The "**Request workflow**" type supports these policy stages: when an access package request is created, when the request is approved, when assignment is granted, and when assignment is removed. This type also supports the [Launch and wait](entitlement-management-logic-apps-integration.md#configuring-custom-extensions-that-pause-entitlement-management-processes) capabilities.
-1. In the **Create new logic app** field, select **Yes**. Otherwise, select **No** and move on to step 9 if you are going to use an existing logic app. If you selected yes, select one of the options below and move on to step 9:
+1. The pre-expiration workflow supports the policy stages: 14 days until access package assignment expiry, and 1 day until access package assignment expiration. This extension type doesn't support Launch and Wait.
- 1. Select **create new Azure AD application** if you want to use a new application as the basis for the new logic app, or
-
- ![Pane to select new app for logic app](./media/entitlement-management-logic-apps/new-app-selection.png)
+ :::image type="content" source="media/entitlement-management-logic-apps/extension-configuration-launch-wait.png" alt-text="Screenshot of launch and wait configuration options.":::
- 1. select **an existing Azure AD Application** if you want to use an existing application as the basis for the new logic app.
+1. The **Extension Configuration** tab allows you to decide if your extension has "launch and continue" or "launch and wait" behavior. With "Launch and continue", the linked policy action on the access package, such as a request, triggers the Logic App attached to the custom extension. After the Logic App is triggered, the entitlement management process associated with the access package continues. With "Launch and wait", the associated access package action is paused until after the Logic App linked to the extension completes its task and the admin sends a resume action to continue the process. If no response is sent back in the defined wait time period, this process is considered a failure. This process is further described below in its own section [Configuring custom extensions that pause entitlement management processes](entitlement-management-logic-apps-integration.md#configuring-custom-extensions-that-pause-entitlement-management-processes).
++
+1. In the **Details** tab, choose whether you'd like to use an existing Logic App. Selecting Yes in the "Create new logic app" field (the default) creates a new blank Logic App that is already linked to this custom extension. Either way, you need to provide:
+
+ 1. An Azure subscription.
+
+ 1. A resource group that has Logic App resource creation permissions if creating a new Logic App.
+
+    1. Select "Create Logic App" if you're creating a new Logic App.
- ![Pane to select existing app for logic app](./media/entitlement-management-logic-apps/existing-app-selection.png)
+ :::image type="content" source="media/entitlement-management-logic-apps/create-logic-app-extension-details.png" alt-text="Screenshot of creating logic app detail selections.":::
> [!NOTE]
- > Later, you can edit what your logic app workflow does in workflow designer. To do so, in the **Custom Extensions** tab of **Catalogs**, select the logic app you created.
+ > When creating a new Logic App in this modal, the length of "/subscriptions/{SubscriptionId}/resourceGroups/{RG Name}/providers/Microsoft.Logic/workflows/{Logicapp Name}" cannot exceed 150 characters.
+
+1. In **Review and Create**, review the summary of your custom extension and make sure the details for your Logic App callout are correct. Then select **Create**.
+
+1. This custom extension to the linked Logic App now appears in your Custom Extensions tab under Catalogs. You're able to call on this in access package policies.
+
+## Configuring custom extensions that pause entitlement management processes
+
+A new update to the custom extensions feature is the ability to pause the access package policy process associated with a custom extension until after that Logic App completes, and a resume request payload is sent back to entitlement management. For example, if a custom extension for a Logic App is triggered from an access package grant policy, and "launch and wait" is enabled, once the Logic App is triggered the grant process won't resume until after the Logic App completes, and a resume request is sent back to entitlement management.
+
+This pause process allows admins to have control of workflows they'd like to run before continuing with access lifecycle tasks in entitlement management. The only exception to this is if a timeout occurs. Launch and wait processes require a timeout of up to 14 days, specified in minutes, hours, or days. If a resume response isn't sent back to entitlement management by the time the "timeout" period elapses, the entitlement management process automatically continues.
+
+The admin is responsible for configuring an automated process that can send the API **resume request** payload back to entitlement management once the Logic App workflow has completed. To send back the resume request payload, follow the instructions in the Microsoft Graph documentation for the [resume request](/graph/api/accesspackageassignmentrequest-resume).
+
+Specifically, when an access package policy has been enabled to call out a custom extension and the request processing is waiting for the callback from the customer, the customer can initiate a resume action. It's performed on an [accessPackageAssignmentRequest](/graph/api/resources/accesspackageassignmentrequest) object whose **requestStatus** is in a **WaitingForCallback** state.
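For example, before sending the resume payload you can list the assignment requests that are currently waiting on a callback. The following is a minimal sketch using the Azure CLI `az rest` command against the beta Graph endpoint; the JMESPath filter assumes `requestStatus` is returned as `WaitingForCallback` for these requests, and the signed-in account is assumed to have permission to call the entitlement management APIs.

```azurecli
# List access package assignment requests and show the ones waiting for a custom extension callback.
az rest --method GET \
  --uri "https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackageAssignmentRequests" \
  --query "value[?requestStatus=='WaitingForCallback'].{id:id, requestStatus:requestStatus}" \
  --output table
```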
+
+The resume request can be sent back for the following stages:
+
+```
+microsoft.graph.accessPackageCustomExtensionStage.assignmentRequestCreated
+microsoft.graph.accessPackageCustomExtensionStage.assignmentRequestApproved
+microsoft.graph.accessPackageCustomExtensionStage.assignmentRequestGranted
+microsoft.graph.accessPackageCustomExtensionStage.assignmentRequestRemoved
+```
+
+An example of a resume request payload is:
+
+``` http
+POST https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackageAssignmentRequests/0e60f18c-b2a0-4887-9da8-da2e30a39d99/resume
+```
-1. Next, enter the **Subscription ID**, **Resource group**, **Logic app name**.
+``` http
+Content-Type: application/json
-1. Then, select **Validate and Create**.
+{
+ "source": "Contoso.SodCheckProcess",
+ "type": "microsoft.graph.accessPackageCustomExtensionStage.assignmentRequestCreated",
+ "data": {
+ "@odata.type": "microsoft.graph.accessPackageAssignmentRequestCallbackData",
+ "stage": "assignmentRequestCreated",
+ "customExtensionStageInstanceId": "957d0c50-466b-4840-bb5b-c92cea7141ff",
+ "customExtensionStageInstanceDetail": "This user is all verified"
+ }
+}
+```
-1. Review the summary of your custom extension and make sure the details for your logic app callout are correct. Then select **Create**.
+With Launch and wait, admins also have the ability to deny a request if the extension is linked to the access package stages "*request is created*" or "*request is approved*". In these cases, the Logic App can send back a *"deny"* message to entitlement management, which would end the process before the end user receives the access package.
- ![Example of custom extension summary](./media/entitlement-management-logic-apps/custom-extension-summary.png)
+As mentioned, custom extensions created with the request workflow type, which includes four associated policy stages, can be enabled with "*Launch and wait*" if desired.
-This custom extension to the linked logic app will now appear in your Custom Extensions tab under Catalogs. You will be able to call on this in access package policies.
-## Edit a linked logic app
+The following is an example of resuming the processing of an access package assignment request by denying the request that's waiting for a callback. A request can't be denied at the **assignmentRequestCreated** stage of the callout.
+
+``` http
+POST https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackageAssignmentRequests/9e60f18c-b2a0-4887-9da8-da2e30a39d99/resume
+```
+
+``` http
+Content-Type: application/json
+
+{
+ "source": "Contoso.SodCheckProcess",
+ "type": "microsoft.graph.accessPackageCustomExtensionStage.assignmentRequestCreated",
+ "data": {
+ "@odata.type": "microsoft.graph.accessPackageAssignmentRequestCallbackData",
+ "stage": "AssignmentRequestCreated",
+ "customExtensionStageInstanceId": "857d0c50-466b-4840-bb5b-c92cea7141ff",
+ "state": "denied",
+ "customExtensionStageInstanceDetail": "Potential risk user based on the SOD check"
+ }
+}
+```
+
+## Edit a linked Logic App's workflow definition 
**Prerequisite roles:** Global administrator, Identity Governance administrator, or Catalog owner
-1. Sign in to the [Azure portal](https://portal.azure.com)l.
+Newly created Logic Apps linked to custom extensions begin blank. To create the workflow that the extension triggers when the linked access package policy condition occurs, you need to edit the Logic App workflow definition in the Logic App designer. To accomplish this, follow these steps:
-1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
+1. Navigate to the Custom Extensions tab within a Catalog as mentioned in the above section.
-1. In the left menu, select **Catalogs**.
+1. Select the custom extension whose Logic App you want to edit.
+
+1. Select the Logic App under the Logic app column for the associated custom extension row. This allows you to edit or create the workflow in Logic App designer.
+
+For more information on creating logic app workflows, see [Quickstart: Create an example Consumption workflow in multi-tenant Azure Logic Apps with the Azure portal](../../logic-apps/quickstart-create-example-consumption-workflow.md).
+
+## View and Edit Existing Custom Extensions for a Catalog
+
+**Prerequisite roles:** Global administrator, Identity Governance administrator, or Catalog owner
+
+1. Navigate to the Custom Extensions tab within a Catalog as mentioned earlier.
+
+1. Here, you can view all the custom extensions you've created, along with the associated Logic App and information about the custom extension type.
+ :::image type="content" source="media/entitlement-management-logic-apps/custom-extension-list.png" alt-text="Screenshot of a list of custom extensions.":::
+1. Along with the Logic App name, the column Type dictates whether the custom extension was created in the new V2 auth model (after March 17, 2023), or the original model. If a custom extension was created in the new model, the Type column matches the selected type from the configuration modal that is either "*assignment request*" or "*pre-expiration*". For older custom extensions, the type shows "*custom access package*".
+
+1. The Token Security column shows the associated auth security framework used when creating the custom extension. New V2 custom extensions show "*proof-of-possession*" (PoP) as the token security type. Older custom extensions show "regular".
+
+1. Old style custom extensions can no longer be created from the UI; however, existing ones can be converted to new style custom extensions from the UI.
+ :::image type="content" source="media/entitlement-management-logic-apps/convert-token-security-extension.png" alt-text="Screenshot of converting old security token to new.":::
-1. In the left menu, select **Custom Extensions**.
+1. Selecting the three dots at the end of the row of an old custom extension allows you to update the custom extension to a new type quickly.
+ > [!NOTE]
+ > Custom extensions can only be converted to the new type if they are not in use, or if they are in use exclusively for policy stages of one specific extension type (assignment request stages or pre expiration stages).
+1. You can also edit any custom extension. This allows you to update the name, description, and other field values. This can be accomplished by selecting **Edit** inside the three-dot pane for any custom extension.
-1. Here, you can view all custom extensions (logic apps) that you have added to this Catalog. To edit a logic app workflow, or to create a workflow for a newly-added logic app, select the Azure Logic Apps custom extension under **Endpoint**. This will open the workflow designer and allow you to create your workflow.
+1. Old style custom extensions can continue to be used and edited even if they aren't converted, even though new ones can no longer be created.
-For more information on creating logic app workflows, see [Create an example Consumption workflow with Azure Logic Apps in the Azure portal](../../logic-apps/quickstart-create-example-consumption-workflow.md).
+1. If an old style custom extension can't be updated to the new type because it's being used for policy stages of **BOTH** the assignment request and pre-expiration types, then to update it you must either remove it from all linked policies or ensure it's only used for policy stages associated with **ONE** type (assignment request, or pre-expiration).
## Add custom extension to a policy in an access package
For more information on creating logic app workflows, see [Create an example Con
> [!NOTE] > Select **New access package** if you want to create a new access package.
- > For more information about how to create an access package see [Create a new access package in entitlement management](entitlement-management-access-package-create.md). For more information about how to edit an existing access package, see [Change request settings for an access package in Azure AD entitlement management](entitlement-management-access-package-request-policy.md#open-and-edit-an-existing-policys-request-settings).
+ > For more information about how to create an access package, see [Create a new access package in entitlement management](entitlement-management-access-package-create.md). For more information about how to edit an existing access package, see [Change request settings for an access package in Azure AD entitlement management](entitlement-management-access-package-request-policy.md#open-and-edit-an-existing-policys-request-settings).
1. Change to the policy tab, select the policy and select **Edit**. 1. In the policy settings, go to the **Custom Extensions (Preview)** tab.
-1. In the menu below **Stage**, select the access package event you wish to use as trigger for this custom extension (logic app). For example, if you only want to trigger the custom extension logic app workflow when a user requests the access package, select **Request is created**.
+1. In the menu below **Stage**, select the access package event you wish to use as trigger for this custom extension (Logic App). For example, if you only want to trigger the custom extension Logic App workflow when a user requests the access package, select **Request is created**.
-1. In the menu below **Custom Extension**, select the custom extension (logic app) you want to add to the access package. The do action you select will execute when the event selected in the when field occurs.
+1. In the menu below **Custom Extension**, select the custom extension (Logic App) you want to add to the access package. The action you select executes when the event selected in the *when* field occurs.
1. Select **Update** to add it to an existing access package's policy.
- ![Add a logic app to access package](./media/entitlement-management-logic-apps/add-logic-apps-access-package.png)
+ ![Add a Logic App to access package](./media/entitlement-management-logic-apps/add-logic-apps-access-package.png)
## Troubleshooting and Validation
-To verify that your custom extension has correctly triggered the associated logic app when called upon by the access package **Do** option, you can view the Azure Logic Apps logs.
+For custom extensions associated with a request, you can view details about the custom extension process (and the launch and wait process, if enabled) from the Request history details link within the request details page of the associated access package.
-The overview page for a specific logic app will show timestamps of when the logic app was last executed. Also, the Resource Group overview for a resource group with a linked custom extension will show the name of that custom extension in the overview if it has been configured correctly.
+
+For example, here you can see the time the request was submitted, and the time the launch and wait process (waiting for callback) began. The request was approved, and the entitlement management stage "resumed" once the Logic App executed and the resume request was returned at 12:15 PM.
+
+Additionally, a new **Custom extension instances** link within request details shows information about the custom extension associated with the access package for the request.
+ :::image type="content" source="media/entitlement-management-logic-apps/extension-selection-details-list.png" alt-text="Screenshot of selection details list items.":::
+
+This shows the custom extension ID and the status. This information changes based on whether there's an associated launch and wait callback.
+
+To verify that your custom extension has correctly triggered the associated Logic App, you can also view the Logic App logs, which have a timestamp of when the Logic App was last executed.
## Next steps
+- [Delegation and roles in entitlement management](entitlement-management-delegate.md)
+- [Create and manage a catalog of resources in entitlement management](entitlement-management-catalog-create.md)
active-directory Tutorial Password Hash Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tutorial-password-hash-sync.md
Before you install Windows Server AD, run a script that installs prerequisites:
$ipprefix = "24" $ipgw = "10.0.1.1" $ipdns = "10.0.1.117"
- $ipdns2 = "8.8.8.8"
+ $ipdns2 = "4.2.2.2"
$ipif = (Get-NetAdapter).ifIndex $featureLogPath = "c:\poshlog\featurelog.txt" $newname = "DC1"
active-directory Delegate App Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/delegate-app-roles.md
By default in Azure AD, all users can register applications and manage all aspec
### Grant individual permissions to create and consent to applications when the default ability is disabled
-Assign the Application Developer role to grant the ability to create application registrations when the **Users can register applications** setting is set to No. This role also grants permission to consent on one's own behalf when the **Users can consent to apps accessing company data on their behalf** setting is set to No.
+Assign the [Application Developer role](../roles/permissions-reference.md#application-developer) to grant the ability to create application registrations when the **Users can register applications** setting is set to No. This role also grants permission to consent on one's own behalf when the **Users can consent to apps accessing company data on their behalf** setting is set to No.
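If you prefer to script this assignment, the following is a minimal sketch that calls the Microsoft Graph role assignment API through the Azure CLI. The user principal name is a placeholder, and the role definition ID shown is assumed to be the Application Developer role template ID; verify it in your tenant before relying on it.

```azurecli
# Assign the Application Developer role to a user at tenant ("/") scope.
# user@contoso.com is a placeholder; cf1c38e5-3621-4004-a7cb-879624dced7c is assumed to be the
# Application Developer role template ID.
userId=$(az ad user show --id "user@contoso.com" --query id --output tsv)

az rest --method POST \
  --uri "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments" \
  --headers "Content-Type=application/json" \
  --body "{\"@odata.type\": \"#microsoft.graph.unifiedRoleAssignment\", \"principalId\": \"$userId\", \"roleDefinitionId\": \"cf1c38e5-3621-4004-a7cb-879624dced7c\", \"directoryScopeId\": \"/\"}"
```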
## Assign application owners
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
This administrator manages federation between Azure AD organizations and externa
## Global Administrator
-Users with this role have access to all administrative features in Azure Active Directory, as well as services that use Azure Active Directory identities like the Microsoft 365 Defender portal, the Microsoft Purview compliance portal, Exchange Online, SharePoint Online, and Skype for Business Online. Furthermore, Global Administrators can [elevate their access](../../role-based-access-control/elevate-access-global-admin.md) to manage all Azure subscriptions and management groups. This allows Global Administrators to get full access to all Azure resources using the respective Azure AD Tenant. The person who signs up for the Azure AD organization becomes a Global Administrator. There can be more than one Global Administrator at your company. Global Administrators can reset the password for any user and all other administrators. A Global Administrator cannot remove their own Global Administrator assignment. This is to prevent a situation where an organization has zero Global Administrators.
+Users with this role have access to all administrative features in Azure Active Directory, as well as services that use Azure Active Directory identities like the Microsoft 365 Defender portal, the Microsoft Purview compliance portal, Exchange Online, SharePoint Online, and Skype for Business Online. Global Administrators can view Directory Activity logs. Furthermore, Global Administrators can [elevate their access](../../role-based-access-control/elevate-access-global-admin.md) to manage all Azure subscriptions and management groups. This allows Global Administrators to get full access to all Azure resources using the respective Azure AD Tenant. The person who signs up for the Azure AD organization becomes a Global Administrator. There can be more than one Global Administrator at your company. Global Administrators can reset the password for any user and all other administrators. A Global Administrator cannot remove their own Global Administrator assignment. This is to prevent a situation where an organization has zero Global Administrators.
> [!NOTE] > As a best practice, Microsoft recommends that you assign the Global Administrator role to fewer than five people in your organization. For more information, see [Best practices for Azure AD roles](best-practices.md).
Users with this role have global permissions to manage settings within Microsoft
## Knowledge Administrator
-Users in this role have full access to all knowledge, learning and intelligent features settings in the Microsoft 365 admin center. They have a general understanding of the suite of products, licensing details and has responsibility to control access. Knowledge Administrator can create and manage content, like topics, acronyms and learning resources. Additionally, these users can create content centers, monitor service health, and create service requests.
+Users in this role have full access to all knowledge, learning and intelligent features settings in the Microsoft 365 admin center. They have a general understanding of the suite of products, licensing details and have responsibility to control access. Knowledge Administrator can create and manage content, like topics, acronyms and learning resources. Additionally, these users can create content centers, monitor service health, and create service requests.
> [!div class="mx-tableFixed"] > | Actions | Description |
Users with this role **cannot** do the following:
Users with this role can do the following tasks: - Manage and configure all aspects of Virtual Visits in Bookings in the Microsoft 365 admin center, and in the Teams EHR connector-- View usage reports for Virtual Visits in the Teams admin center, Microsoft 365 admin center, and PowerBI
+- View usage reports for Virtual Visits in the Teams admin center, Microsoft 365 admin center, and Power BI
- View features and settings in the Microsoft 365 admin center, but can't edit any settings Virtual Visits are a simple way to schedule and manage online and video appointments for staff and attendees. For example, usage reporting can show how sending SMS text messages before appointments can reduce the number of people who don't show up for appointments.
active-directory Nist Authenticator Assurance Level 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/nist-authenticator-assurance-level-1.md
Before you begin authenticator assurance level 1 (AAL1), you can review the foll
## Permitted authenticator types
-To achieve AAL1, you can use any NIST single-factor or multifactor [permitted authenticator](nist-authenticator-types.md). Note that Password and Phone (SMS) are not covered in [AAL2](nist-authenticator-assurance-level-2.md) or [AAL3](nist-authenticator-assurance-level-2.md).
+To achieve AAL1, you can use any NIST single-factor or multifactor [permitted authenticator](nist-authenticator-types.md).
|Azure AD authentication method|NIST authenticator type | | - | - |
-|Password |memorized secret |
-|Phone (SMS)|out-of-band |
-|FIDO 2 security key <br> Microsoft Authenticator app for iOS (passwordless) <br> Windows Hello for Business with software TPM <br> Smartcard (Active Directory Federation Services) | Multi-factor crypto software |
+|Password |Memorized Secret |
+|Phone (SMS): Not recommended | Out-of-band |
+|Microsoft Authenticator App for iOS (Passwordless) <br> Microsoft Authenticator App for Android (Passwordless)|Multi-factor Out-of-band |
+|Single-factor certificate | Single-factor crypto software |
+|Multi-factor Software Certificate (PIN Protected) <br> Windows Hello for Business with software TPM <br> | Multi-factor crypto software |
+|Windows Hello for Business with hardware TPM <br> Hardware protected certificate (smartcard/security key/TPM) <br> FIDO 2 security key | Multi-factor crypto hardware
+ > [!TIP]
-> We recommend you meet at least AAL2. If necessary, meet AAL3 for business reasons, industry standards, or compliance requirements.
+> We recommend you select, at a minimum, phishing-resistant AAL2 authenticators. Select AAL3 authenticators as necessary for business reasons, industry standards, or compliance requirements.
## FIPS 140 validation
active-directory Presentation Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/presentation-request-api.md
Authorization: Bearer <token>
{     "includeQRCode": true, "callback": {
-      "url": "https://www.contoso.com/api/verifier/presentationCallbac",
+      "url": "https://www.contoso.com/api/verifier/presentationCallback",
      "state": "11111111-2222-2222-2222-333333333333",       "headers": {         "api-key": "an-api-key-can-go-here"
aks Auto Upgrade Node Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-node-image.md
Last updated 02/03/2023
# Automatically upgrade Azure Kubernetes Service cluster node operating system images (preview)
-AKS now supports an exclusive channel dedicated to controlling node-level OS security updates. This channel, referred to as the node OS auto-upgrade channel, works in tandem with the existing [auto-upgrade][Autoupgrade] channel which is used for Kubernetes version upgrades.
+AKS now supports an exclusive channel dedicated to controlling node-level OS security updates. This channel, referred to as the node OS auto-upgrade channel, works in tandem with the existing [auto-upgrade][Autoupgrade] channel, which is used for Kubernetes version upgrades.
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
az provider register --namespace Microsoft.ContainerService
## Limitations
-If using the `node-image` cluster auto-upgrade channel or the `NodeImage` node OS auto-upgrade channel, Linux [unattended upgrades][unattended-upgrades] will be disabled by default. You can't change node OS auto-upgrade channel value if your cluster auto-upgrade channel is `node-image`. In order to set the node OS auto-upgrade channel values , make sure the [cluster auto-upgrade channel][Autoupgrade] is not `node-image`.
+If using the `node-image` cluster auto-upgrade channel or the `NodeImage` node OS auto-upgrade channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default. You can't change node OS auto-upgrade channel value if your cluster auto-upgrade channel is `node-image`. In order to set the node OS auto-upgrade channel values, make sure the [cluster auto-upgrade channel][Autoupgrade] isn't `node-image`.
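As a quick check, you can confirm the cluster auto-upgrade channel before configuring the node OS channel. This is a sketch with placeholder resource names; it assumes the channel is reported under `autoUpgradeProfile` in the `az aks show` output.

```azurecli
# Verify the cluster auto-upgrade channel; it must not be node-image before you set a node OS channel.
az aks show \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --query "autoUpgradeProfile.upgradeChannel" \
  --output tsv
```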
-The nodeosupgradechannel is not supported on Mariner and Windows OS nodepools.
+The node OS auto-upgrade channel isn't supported on Mariner and Windows OS node pools.
## Using node OS auto-upgrade
-Automatically completed upgrades are functionally the same as manual upgrades. The timing of upgrades is determined by the selected channel. When making changes to auto-upgrade, allow 24 hours for the changes to take effect. By default, a cluster's node OS auto-upgrade channel is set to `Unmanaged`.
+Automatically completed upgrades are functionally the same as manual upgrades. The selected channel determines the timing of upgrades. When making changes to auto-upgrade, allow 24 hours for the changes to take effect. By default, a cluster's node OS auto-upgrade channel is set to `Unmanaged`.
> [!NOTE] > Node OS image auto-upgrade won't affect the cluster's Kubernetes version, but it still requires the cluster to be in a supported version to function properly.
+> When changing channels to `NodeImage` or `SecurityPatch`, the unattended upgrades will only be disabled when the image gets applied in the next cycle and not immediately.
The following upgrade channels are available: |Channel|Description|OS-specific behavior| ||| | `None`| Your nodes won't have security updates applied automatically. This means you're solely responsible for your security updates|N/A|
-| `Unmanaged`|OS updates will be applied automatically through the OS built-in patching infrastructure. Newly allocated machines will be unpatched initially and will be patched at some point by the OS's infrastructure|Ubuntu applies security patches through unattended upgrade roughly once a day around 06:00 UTC. Windows and Mariner don't apply security patches automatically, so this option behaves equivalently to `None`|
-| `SecurityPatch`|AKS will update the node's virtual hard disk (VHD) with patches from the image maintainer labeled "security only" on a regular basis. Some patches, such as kernel patches, can't be applied to existing nodes without disruption. For such patches, the VHD will be updated and existing machines will be upgraded to that VHD following maintenance windows and surge settings. This option incurs the extra cost of hosting the VHDs in your node resource group. If you use this channel, Linux [unattended upgrades][unattended-upgrades] will be disabled by default.|N/A|
-| `NodeImage`|AKS will update the nodes with a newly patched VHD containing security fixes and bug fixes on a weekly cadence. The update to the new VHD is disruptive, following maintenance windows and surge settings. No extra VHD cost is incurred when choosing this option. If you use this channel, Linux [unattended upgrades][unattended-upgrades] will be disabled by default.|
+| `Unmanaged`|OS updates are applied automatically through the OS built-in patching infrastructure. Newly allocated machines are unpatched initially and will be patched at some point by the OS's infrastructure|Ubuntu applies security patches through unattended upgrade roughly once a day around 06:00 UTC. Windows and Mariner don't apply security patches automatically, so this option behaves equivalently to `None`|
+| `SecurityPatch`|AKS updates the node's virtual hard disk (VHD) with patches from the image maintainer labeled "security only" on a regular basis. There may be disruptions when the security patches are applied to the nodes. When the patches are applied, the VHD is updated and existing machines are upgraded to that VHD, honoring maintenance windows and surge settings. This option incurs the extra cost of hosting the VHDs in your node resource group. If you use this channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default.|N/A|
+| `NodeImage`|AKS updates the nodes with a newly patched VHD containing security fixes and bug fixes on a weekly cadence. The update to the new VHD is disruptive, following maintenance windows and surge settings. No extra VHD cost is incurred when choosing this option. If you use this channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default.|
To set the node OS auto-upgrade channel when creating a cluster, use the *node-os-upgrade-channel* parameter, similar to the following example.
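The following is a minimal sketch of that command, assuming the preview CLI extension that exposes the parameter is installed; the resource names are placeholders, and `SecurityPatch` is just one of the channel values from the table above.

```azurecli
# Create a cluster with the node OS auto-upgrade channel set at creation time (preview).
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-os-upgrade-channel SecurityPatch \
  --generate-ssh-keys
```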
az aks update --resource-group myResourceGroup --name myAKSCluster --node-os-upg
## Using node OS auto-upgrade with Planned Maintenance
-If you're using Planned Maintenance and node OS auto-upgrade, your upgrade will start during your specified maintenance window.
+If you're using Planned Maintenance and node OS auto-upgrade, your upgrade starts during your specified maintenance window.
> [!NOTE] > To ensure proper functionality, use a maintenance window of four hours or more.
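As a sketch, a weekly maintenance window can be defined with the Azure CLI. The schedule name, day, and times below are placeholders, and the parameters assume the dedicated maintenance schedule support in the current `aks-preview` extension; adjust them to the maintenance configuration that applies to your scenario.

```azurecli
# Define a weekly, four-hour maintenance window for node OS auto-upgrades.
az aks maintenanceconfiguration add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name aksManagedNodeOSUpgradeSchedule \
  --schedule-type Weekly \
  --day-of-week Friday \
  --interval-weeks 1 \
  --start-time 20:00 \
  --duration 4 \
  --utc-offset +00:00
```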
For more information on Planned Maintenance, see [Use Planned Maintenance to sch
[upgrade-aks-cluster]: upgrade-cluster.md [unattended-upgrades]: https://help.ubuntu.com/community/AutomaticSecurityUpdates [Autoupgrade]: auto-upgrade-cluster.md
-[kured]: node-updates-kured.md
+[kured]: node-updates-kured.md
aks Cluster Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-autoscaler.md
Both the horizontal pod autoscaler and cluster autoscaler can also decrease the
The cluster autoscaler may be unable to scale down if pods can't move, such as in the following situations: * A pod is directly created and isn't backed by a controller object, such as a deployment or replica set.
-* A pod disruption budget (PDB) is too restrictive and doesn't allow the number of pods to be fall below a certain threshold.
+* A pod disruption budget (PDB) is too restrictive and doesn't allow the number of pods to fall below a certain threshold.
* A pod uses node selectors or anti-affinity that can't be honored if scheduled on a different node. For more information about how the cluster autoscaler may be unable to scale down, see [What types of pods can prevent the cluster autoscaler from removing a node?][autoscaler-scaledown].
aks Ingress Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-tls.md
In the following example, traffic is routed as such:
spec: ingressClassName: nginx tls:
- - hosts:
+ - hosts:
- hello-world-ingress.MY_CUSTOM_DOMAIN secretName: tls-secret rules:
- - host: hello-world-ingress.MY_CUSTOM_DOMAIN
+ - host: hello-world-ingress.MY_CUSTOM_DOMAIN
http: paths: - path: /hello-world-one(/|$)(.*)
In the following example, traffic is routed as such:
spec: ingressClassName: nginx tls:
- - hosts:
+ - hosts:
- hello-world-ingress.MY_CUSTOM_DOMAIN secretName: tls-secret rules:
- - host: hello-world-ingress.MY_CUSTOM_DOMAIN
+ - host: hello-world-ingress.MY_CUSTOM_DOMAIN
http: paths: - path: /static(/|$)(.*)
aks Node Upgrade Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-upgrade-github-actions.md
This article also assumes you have a [GitHub][github] account to create your act
`cron` is a utility that allows you to run a set of commands, or job, on an automated schedule. To create job to update your AKS nodes on an automated schedule, you'll need a repository to host your actions. Usually, GitHub actions are configured in the same repository as your application, but you can use any repository. For this article we'll be using your [profile repository][profile-repository]. If you don't have one, create a new repository with the same name as your GitHub username. 1. Navigate to your repository on GitHub
-1. Click on the **Actions** tab at the top of the page.
-1. If you already set up a workflow in this repository, you'll be directed to the list of completed runs, in this case, click on the **New Workflow** button. If this is your first workflow in the repository, GitHub will present you with some project templates, click on the **Set up a workflow yourself** link below the description text.
-1. Change the workflow `name` and `on` tags similar to the below. GitHub Actions use the same [POSIX cron syntax][cron-syntax] as any Linux-based system. In this schedule, we're telling the workflow to run every 15 days at 3am.
+2. Select the **Actions** tab at the top of the page.
+3. If you already set up a workflow in this repository, you'll be directed to the list of completed runs; in this case, select the **New Workflow** button. If this is your first workflow in the repository, GitHub presents you with some project templates; select the **Set up a workflow yourself** link below the description text.
+4. Change the workflow `name` and `on` tags similar to the below. GitHub Actions use the same [POSIX cron syntax][cron-syntax] as any Linux-based system. In this schedule, we're telling the workflow to run every 15 days at 3am.
```yml name: Upgrade cluster node images
This article also assumes you have a [GitHub][github] account to create your act
- cron: '0 3 */15 * *' ```
-1. Create a new job using the below. This job is named `upgrade-node`, runs on an Ubuntu agent, and will connect to your Azure CLI account to execute the needed steps to upgrade the nodes.
+5. Create a new job using the below. This job is named `upgrade-node`, runs on an Ubuntu agent, and will connect to your Azure CLI account to execute the needed steps to upgrade the nodes.
```yml name: Upgrade cluster node images
In the `steps` key, you'll define all the work the workflow will execute to upgr
Download and sign in to the Azure CLI. 1. On the right-hand side of the GitHub Actions screen, find the *marketplace search bar* and type **"Azure Login"**.
-1. You'll get as a result, an Action called **Azure Login** published **by Azure**:
+2. As a result, you'll get an action called **Azure Login**, published **by Azure**:
:::image type="content" source="media/node-upgrade-github-actions/azure-login-search.png" alt-text="Search results showing two lines, the first action is called 'Azure Login' and the second 'Azure Container Registry Login'":::
-1. Click on **Azure Login**. On the next screen, click the **copy icon** in the top right of the code sample.
+3. Select **Azure Login**. On the next screen, select the **copy icon** in the top right of the code sample.
- :::image type="content" source="media/node-upgrade-github-actions/azure-login.png" alt-text="Azure Login action result pane with code sample below, red square around a copy icon highlights the click spot":::
+    :::image type="content" source="media/node-upgrade-github-actions/azure-login.png" alt-text="Azure Login action result pane with code sample below, red square around a copy icon highlights where to select":::
-1. Paste the following under the `steps` key:
+4. Paste the following under the `steps` key:
```yml name: Upgrade cluster node images
Download and sign in to the Azure CLI.
creds: ${{ secrets.AZURE_CREDENTIALS }} ```
-1. From the Azure CLI, run the following command to generate a new username and password.
+5. From the Azure CLI, run the following command to generate a new username and password.
+
+ > [!NOTE]
+ > This example creates the `Contributor` role at the *Subscription* scope. You may provide the role and scope that meets your needs. For more information, see [Azure built-in roles][azure-built-in-roles] and [Azure RBAC scope levels][azure-rbac-scope-levels].
```azurecli-interactive az ad sp create-for-rbac --role Contributor --scopes /subscriptions/{subscriptionID} -o json
Download and sign in to the Azure CLI.
} ```
-1. **In a new browser window** navigate to your GitHub repository and open the **Settings** tab of the repository. Click **Secrets** then, click on **New Repository Secret**.
-1. For *Name*, use `AZURE_CREDENTIALS`.
-1. For *Value*, add the entire contents from the output of the previous step where you created a new username and password.
+6. **In a new browser window**, navigate to your GitHub repository and open the **Settings** tab of the repository. Select **Secrets**, then select **New Repository Secret**.
+7. For *Name*, use `AZURE_CREDENTIALS`.
+8. For *Value*, add the entire contents from the output of the previous step where you created a new username and password.
:::image type="content" source="media/node-upgrade-github-actions/azure-credential-secret.png" alt-text="Form showing AZURE_CREDENTIALS as secret title, and the output of the executed command pasted as JSON":::
-1. Click **Add Secret**.
+9. Select **Add Secret**.
The CLI used by your action will be logged to your Azure account and ready to run commands.
To create the steps to execute Azure CLI commands.
:::image type="content" source="media/node-upgrade-github-actions/azure-cli-action.png" alt-text="Search result for 'Azure CLI Action' with first result being shown as made by Azure":::
-1. Click the copy button on the *GitHub marketplace result* and paste the contents of the action in the main editor, below the *Azure Login* step, similar to the following:
+1. Select the copy button on the *GitHub marketplace result* and paste the contents of the action in the main editor, below the *Azure Login* step, similar to the following:
```yml name: Upgrade cluster node images
To create the steps to execute Azure CLI commands.
> You can decouple the `-g` and `-n` parameters from the command by adding them to secrets similar to the previous steps. Replace the `{resourceGroupName}` and `{aksClusterName}` placeholders by their secret counterparts, for example `${{secrets.RESOURCE_GROUP_NAME}}` and `${{secrets.AKS_CLUSTER_NAME}}` 1. Rename the file to `upgrade-node-images`.
-1. Click **Start Commit**, add a message title, and save the workflow.
+1. Select **Start Commit**, add a message title, and save the workflow.
Once you create the commit, the workflow will be saved and ready for execution.
jobs:
[system-pools]: use-system-pools.md [spot-pools]: spot-node-pool.md [use-multiple-node-pools]: use-multiple-node-pools.md
-[auto-upgrade-node-image]: auto-upgrade-node-image.md
+[auto-upgrade-node-image]: auto-upgrade-node-image.md
+[azure-built-in-roles]: ../role-based-access-control/built-in-roles.md
+[azure-rbac-scope-levels]: ../role-based-access-control/scope-overview.md#scope-format
aks Operator Best Practices Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-network.md
The *ingress resource* is a YAML manifest of `kind: Ingress`. It defines the hos
The following example YAML manifest would distribute traffic for *myapp.com* to one of two services, *blogservice* or *storeservice*. The customer is directed to one service or the other based on the URL they access. ```yaml
+apiVersion: networking.k8s.io/v1
kind: Ingress metadata: name: myapp-ingress
- annotations: kubernetes.io/ingress.class: "PublicIngress"
spec:
+ ingressClassName: PublicIngress
tls: - hosts: - myapp.com
spec:
paths: - path: /blog backend:
- serviceName: blogservice
- servicePort: 80
+          service:
+            name: blogservice
+            port:
+              number: 80
- path: /store backend:
- serviceName: storeservice
- servicePort: 80
+          service:
+            name: storeservice
+            port:
+              number: 80
``` ### Ingress controller
aks Use Group Managed Service Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-group-managed-service-accounts.md
credspec:
HostAccountConfig: PluginGUID: '{CCC2A336-D7F3-4818-A213-272B7924213E}' PortableCcgVersion: "1"
- PluginInput: ObjectId=$MANAGED_ID;SecretUri=$SECRET_URI # SECRET_URI takes the form https://$akvName.vault.azure.net/secrets/$akvSecretName
+ PluginInput: "ObjectId=$MANAGED_ID;SecretUri=$SECRET_URI" # SECRET_URI takes the form https://$akvName.vault.azure.net/secrets/$akvSecretName
CmsPlugins: - ActiveDirectory DomainJoinConfig:
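As a minimal sketch of how the `SECRET_URI` value referenced in the credential spec might be populated, assuming `akvName` and `akvSecretName` are the key vault and secret names used elsewhere in the article (both values below are hypothetical):

```azurecli
# Hypothetical vault and secret names; build the secret URI in the form the ccg plugin expects.
akvName="myGmsaKeyVault"
akvSecretName="gmsa-domain-credential"
SECRET_URI="https://${akvName}.vault.azure.net/secrets/${akvSecretName}"
echo "$SECRET_URI"
```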
aks Use Pod Sandboxing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-pod-sandboxing.md
Title: Pod Sandboxing (preview) with Azure Kubernetes Service (AKS) description: Learn about and deploy Pod Sandboxing (preview), also referred to as Kernel Isolation, on an Azure Kubernetes Service (AKS) cluster. Previously updated : 03/01/2023 Last updated : 03/07/2023
Use the following command to enable Pod Sandboxing (preview) by creating a node
## Deploy a trusted application
-To demonstrate the isolation of an application on the AKS cluster, perform the following steps.
+To demonstrate deployment of a trusted application on the shared kernel in the AKS cluster, perform the following steps.
1. Create a file named *trusted-app.yaml* to describe a trusted pod, and then paste the following manifest.
To demonstrate the isolation of an application on the AKS cluster, perform the f
## Deploy an untrusted application
-To demonstrate the deployed application on the AKS cluster isn't isolated and is on the untrusted shim, perform the following steps.
+To demonstrate the deployment of an untrusted application into the pod sandbox on the AKS cluster, perform the following steps.
1. Create a file named *untrusted-app.yaml* to describe an untrusted pod, and then paste the following manifest.
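After both manifests are deployed, one quick check is to compare the kernel each pod reports; the pod names below are placeholders for whatever names the two manifests use:

```bash
# The sandboxed (untrusted) pod should report a different, isolated guest kernel
# than the trusted pod, which shares the node's kernel.
kubectl exec -it <trusted-pod-name> -- uname -r
kubectl exec -it <untrusted-pod-name> -- uname -r
```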
aks Virtual Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/virtual-nodes.md
Virtual nodes functionality is heavily dependent on ACI's feature set. In additi
* Virtual nodes support scheduling Linux pods. You can manually install the open source [Virtual Kubelet ACI](https://github.com/virtual-kubelet/azure-aci) provider to schedule Windows Server containers to ACI. * Virtual nodes require AKS clusters with Azure CNI networking. * Using API server authorized IP ranges for AKS.
-* Volume mounting Azure Files share support [General-purpose V2](../storage/common/storage-account-overview.md#types-of-storage-accounts) and [General-purpose V1](../storage/common/storage-account-overview.md#types-of-storage-accounts). Follow the instructions for mounting [a volume with Azure Files share](azure-files-csi.md).
+* Volume mounting Azure Files share support [General-purpose V2](../storage/common/storage-account-overview.md#types-of-storage-accounts) and [General-purpose V1](../storage/common/storage-account-overview.md#types-of-storage-accounts). However, virtual nodes currently don't support [Persistent Volumes](concepts-storage.md#persistent-volumes) and [Persistent Volume Claims](concepts-storage.md#persistent-volume-claims). Follow the instructions for mounting [a volume with Azure Files share as an inline volume](azure-csi-files-storage-provision.md#mount-file-share-as-an-inline-volume).
* Using IPv6 isn't supported. * Virtual nodes don't support the [Container hooks](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/) feature.
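For the inline Azure Files scenario, the pod references a Kubernetes secret that holds the storage account credentials. A minimal sketch of creating that secret, with placeholder account values:

```bash
# Inline azureFile volumes read the storage account name and key from a secret.
kubectl create secret generic azure-secret \
    --from-literal=azurestorageaccountname=<storage-account-name> \
    --from-literal=azurestorageaccountkey=<storage-account-key>
```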
aks Workload Identity Migrate From Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-migrate-from-pod-identity.md
Serviceaccount/workload-identity-sa created
Use the [az identity federated-credential create][az-identity-federated-credential-create] command to create the federated identity credential between the managed identity, the service account issuer, and the subject. Replace the values `resourceGroupName`, `userAssignedIdentityName`, `federatedIdentityName`, `serviceAccountNamespace`, and `serviceAccountName`. ```azurecli
-az identity federated-credential create --name federatedIdentityName --identity-name userAssignedIdentityName --resource-group resourceGroupName --issuer ${AKS_OIDC_ISSUER} --subject system:serviceaccount:${SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME}
+az identity federated-credential create --name federatedIdentityName --identity-name userAssignedIdentityName --resource-group resourceGroupName --issuer ${AKS_OIDC_ISSUER} --subject system:serviceaccount:${SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME} --audience api://AzureADTokenExchange
``` > [!NOTE]
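The `AKS_OIDC_ISSUER` variable referenced in the command can be captured from the cluster beforehand; the cluster and resource group names here are placeholders:

```azurecli
# Query the cluster's OIDC issuer URL (sketch; substitute your own names).
export AKS_OIDC_ISSUER="$(az aks show --name myAKSCluster --resource-group myResourceGroup \
    --query "oidcIssuerProfile.issuerUrl" --output tsv)"
```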
api-management Workspaces Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/workspaces-overview.md
Last updated 03/10/2023 -+ + # Workspaces in Azure API Management In API Management, *workspaces* allow decentralized API development teams to manage and productize their own APIs, while a central API platform team maintains the API Management infrastructure. Each workspace contains APIs, products, subscriptions, and related entities that are accessible only to the workspace collaborators. Access is controlled through Azure role-based access control (RBAC).
Therefore, the following sample scenarios aren't currently supported in workspac
* Using the authorizations feature * Specifying API authorization server information (for example, for the developer portal)
+
+All resources in an API Management service need to have unique names, even if they are located in different workspaces.
## Next steps
app-service Manage Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-backup.md
There are two types of backups in App Service. Automatic backups made for your a
|Feature|Automatic backups | Custom backups | |-|-|-|
-| Pricing tiers | **Basic**, **Standard**, **Premium**. | **Basic**, **Standard**, **Premium**, **Isolated**. |
+| Pricing tiers | **Basic**, **Standard**, **Premium**, **Isolated**. | **Basic**, **Standard**, **Premium**, **Isolated**. |
| Configuration required | No. | Yes. | | Backup size | 30 GB. | 10 GB, 4 GB of which can be the linked database. | | Linked database | Not backed up. | The following linked databases can be backed up: [SQL Database](/azure/azure-sql/database/), [Azure Database for MySQL](../mysql/index.yml), [Azure Database for PostgreSQL](../postgresql/index.yml), [MySQL in-app](https://azure.microsoft.com/blog/mysql-in-app-preview-app-service/). |
application-gateway Application Gateway Create Probe Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-create-probe-portal.md
Probes are configured in a two-step process through the portal. The first step i
|**Path**|/ or any valid path|The remainder of the full URL for the custom probe. A valid path starts with '/'. For the default path of http:\//contoso.com, just use '/'. You can also input a server path to a file for a static health check instead of a web-based one. Use file paths when you use a public or private IP, or a public IP DNS entry, as the hostname entry.| |**Interval (secs)**|30|How often the probe is run to check for health. It isn't recommended to set this lower than 30 seconds.| |**Timeout (secs)**|30|The amount of time the probe waits before timing out. If a valid response isn't received within this time-out period, the probe is marked as failed. The timeout interval needs to be high enough that an HTTP call can be made to ensure the backend health page is available. The time-out value shouldn't be more than the 'Interval' value used in this probe setting or the 'Request timeout' value in the HTTP setting, which will be associated with this probe.|
- |**Unhealthy threshold**|3|Number of consecutive failed attempts to be considered unhealthy. The threshold can be set to 1 or more.|
+ |**Unhealthy threshold**|3|Number of consecutive failed attempts to be considered unhealthy. The threshold can be set to 0 or more.|
|**Use probe matching conditions**|Yes or No|By default, an HTTP(S) response with status code between 200 and 399 is considered healthy. You can change the acceptable range of backend response code or backend response body. [Learn more](./application-gateway-probe-overview.md#probe-matching)| |**HTTP Settings**|selection from dropdown|Probe will get associated with the HTTP settings selected here and therefore, will monitor the health of that backend pool, which is associated with the selected HTTP setting. It will use the same port for the probe request as the one being used in the selected HTTP setting. You can only choose those HTTP settings, which aren't associated with any other custom probe. <br>The only HTTP settings that are available for association are those that have the same protocol as the protocol chosen in this probe configuration, and have the same state for the *Pick Host Name From Backend HTTP setting* switch.|
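If you prefer the CLI over the portal, a custom probe with settings similar to the table above can be sketched roughly as follows; the gateway, resource group, probe, and host names are illustrative placeholders:

```azurecli
# Create a custom health probe with an explicit status-code match condition (sketch).
az network application-gateway probe create \
    --gateway-name myAppGateway \
    --resource-group myResourceGroupAG \
    --name customHealthProbe \
    --protocol Http \
    --host contoso.com \
    --path / \
    --interval 30 \
    --timeout 30 \
    --threshold 3 \
    --match-status-codes 200-399
```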
application-gateway Application Gateway Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-diagnostics.md
Previously updated : 02/25/2022 Last updated : 03/21/2022
The access log is generated only if you've enabled it on each Application Gatewa
|host| Address listed in the host header of the request. If rewritten using header rewrite, this field contains the updated host name| |originalRequestUriWithArgs| This field contains the original request URL | |requestUri| This field contains the URL after the rewrite operation on Application Gateway |
+|upstreamSourcePort| The source port used by Application Gateway when initiating a connection to the backend target|
|originalHost| This field contains the original request host name| ```json {
The access log is generated only if you've enabled it on each Application Gatewa
"serverRouted": "52.239.221.65:443", "serverStatus": "200", "serverResponseLatency": "0.028",
+ "upstreamSourcePort": "21564",
"originalHost": "20.110.30.194", "host": "20.110.30.194" }
application-gateway Application Gateway Troubleshooting 502 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-troubleshooting-502.md
Similarly, the presence of a custom DNS in the VNet could also cause issues. An
Validate NSG, UDR, and DNS configuration by going through the following steps:
-1. Check NSGs associated with the application gateway subnet. Ensure that communication to backend isn't blocked.
+1. Check NSGs associated with the application gateway subnet. Ensure that communication to backend isn't blocked. For more information, see [Network security groups](/azure/application-gateway/configuration-infrastructure#network-security-groups).
2. Check UDR associated with the application gateway subnet. Ensure that the UDR isn't directing traffic away from the backend subnet. For example, check for routing to network virtual appliances or default routes being advertised to the application gateway subnet via ExpressRoute/VPN. ```azurepowershell
application-gateway Ingress Controller Letsencrypt Certificate Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-letsencrypt-certificate-application-gateway.md
Follow the steps below to install [cert-manager](https://docs.cert-manager.io) o
#!/bin/bash # Install the CustomResourceDefinition resources separately
- kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.8/deploy/manifests/00-crds.yaml
+ kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.10.1/cert-manager.crds.yaml
# Create the namespace for cert-manager kubectl create namespace cert-manager # Label the cert-manager namespace to disable resource validation
- kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true
+ kubectl label namespace cert-manager cert-manager.io/disable-validation=true
# Add the Jetstack Helm repository helm repo add jetstack https://charts.jetstack.io
Follow the steps below to install [cert-manager](https://docs.cert-manager.io) o
helm install \ cert-manager jetstack/cert-manager \ --namespace cert-manager \
- --version v1.0.4 \
+ --version v1.10.1 \
# --set installCRDs=true-
- # Helm v2
- helm install \
- --name cert-manager \
- --namespace cert-manager \
- --version v1.0.4 \
- jetstack/cert-manager \
- # --set installCRDs=true
-
+
#To automatically install and manage the CRDs as part of your Helm release, # you must add the --set installCRDs=true flag to your Helm installation command. ```
Follow the steps below to install [cert-manager](https://docs.cert-manager.io) o
```bash #!/bin/bash kubectl apply -f - <<EOF
- apiVersion: certmanager.k8s.io/v1alpha1
+ apiVersion: cert-manager.io/v1
kind: ClusterIssuer metadata:
- name: letsencrypt-staging
+ name: letsencrypt-staging
spec:
- acme:
+ acme:
# You must replace this email address with your own. # Let's Encrypt will use this to contact you about expiring # certificates, and issues related to your account.
Follow the steps below to install [cert-manager](https://docs.cert-manager.io) o
# before moving to production server: https://acme-staging-v02.api.letsencrypt.org/directory privateKeySecretRef:
- # Secret resource used to store the account's private key.
- name: example-issuer-account-key
+ # Secret resource used to store the account's private key.
+ name: example-issuer-account-key
# Enable the HTTP-01 challenge provider # you prove ownership of a domain by ensuring that a particular # file is present at the domain
- http01: {}
+ solvers:
+ - http01:
+ ingress:
+ class: azure/application-gateway
EOF ```
Follow the steps below to install [cert-manager](https://docs.cert-manager.io) o
name: guestbook-letsencrypt-staging annotations: kubernetes.io/ingress.class: azure/application-gateway
- certmanager.k8s.io/cluster-issuer: letsencrypt-staging
+ cert-manager.io/cluster-issuer: letsencrypt-staging
spec: tls: - hosts:
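Once the ingress is annotated with the staging ClusterIssuer, you can watch cert-manager work through the issuance; the certificate and namespace names below depend on your ingress, so treat them as placeholders:

```bash
# Follow the certificate request as cert-manager solves the HTTP-01 challenge (sketch).
kubectl get certificate,certificaterequest,order --all-namespaces
kubectl describe certificate <certificate-name> --namespace <namespace>
```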
application-gateway Tutorial Multiple Sites Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-multiple-sites-cli.md
az network application-gateway create \
--frontend-port 80 \ --http-settings-port 80 \ --http-settings-protocol Http \
- --public-ip-address myAGPublicIPAddress
+ --public-ip-address myAGPublicIPAddress \
+ --priority 10
``` It may take several minutes for the application gateway to be created. After the application gateway is created, you can see these new features of it:
az network application-gateway rule create \
--resource-group myResourceGroupAG \ --http-listener contosoListener \ --rule-type Basic \
- --address-pool contosoPool
+ --address-pool contosoPool \
+ --priority 200
az network application-gateway rule create \ --gateway-name myAppGateway \
az network application-gateway rule create \
--resource-group myResourceGroupAG \ --http-listener fabrikamListener \ --rule-type Basic \
- --address-pool fabrikamPool
+ --address-pool fabrikamPool \
+ --priority 100
az network application-gateway rule delete \ --gateway-name myAppGateway \
applied-ai-services Concept Add On Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-add-on-capabilities.md
recommendations: false
<!-- markdownlint-disable MD033 -->
-# Azure Form Recognizer add-on capabilities
+# Azure Form Recognizer add-on capabilities (preview)
**This article applies to:** ![Form Recognizer v3.0 checkmark](media/yes-icon.png) **Form Recognizer v3.0**.
+> [!IMPORTANT]
+>
+> * The Form Recognizer Studio add-on capabilities feature is currently in gated preview. Features, approaches, and processes may change prior to General Availability (GA), based on user feedback.
+> * Complete and submit the [**Form Recognizer private preview request form**](https://aka.ms/form-recognizer/preview/survey) to request access.
+ > [!NOTE] > > Add-on capabilities for Form Recognizer Studio are only available within the Read and Layout models for the `2023-02-28-preview` release.
The `ocr.font` capability extracts all font properties of text extracted in the
> [!div class="nextstepaction"] > Learn more:
-> [**Read model**](concept-read.md) [**Layout model**](concept-layout.md).
+> [**Read model**](concept-read.md) [**Layout model**](concept-layout.md).
applied-ai-services Concept Insurance Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-insurance-card.md
monikerRange: 'form-recog-3.0.0'
recommendations: false
-# Azure Form Recognizer health insurance card model
+# Azure Form Recognizer health insurance card model (preview)
**This article applies to:** ![Form Recognizer v3.0 checkmark](media/yes-icon.png) **Form Recognizer v3.0**.
+> [!IMPORTANT]
+>
+> * The Form Recognizer Studio health insurance card model is currently in gated preview. Features, approaches, and processes may change prior to General Availability (GA), based on user feedback.
+> * Complete and submit the [**Form Recognizer private preview request form**](https://aka.ms/form-recognizer/preview/survey) to request access.
+ The Form Recognizer health insurance card model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key information from US health insurance cards. A health insurance card is a key document for care processing and can be digitally analyzed for patient onboarding, financial coverage information, cashless payments, and insurance claim processing. The health insurance card model analyzes health card images; extracts key information such as insurer, member, prescription, and group number; and returns a structured JSON representation. Health insurance cards can be presented in various formats and quality including phone-captured images, scanned documents, and digital PDFs. ***Sample health insurance card processed using Form Recognizer Studio***
applied-ai-services Concept Query Fields https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-query-fields.md
recommendations: false
<!-- markdownlint-disable MD033 -->
-# Azure Form Recognizer query field extraction
+# Azure Form Recognizer query field extraction (preview)
**This article applies to:** ![Form Recognizer v3.0 checkmark](media/yes-icon.png) **Form Recognizer v3.0**.
For query field extraction, specify the fields you want to extract and Form Reco
## Next steps > [!div class="nextstepaction"]
-> [Try the Form Recognizer Studio quickstart](./quickstarts/try-form-recognizer-studio.md)
+> [Try the Form Recognizer Studio quickstart](./quickstarts/try-form-recognizer-studio.md)
azure-arc Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/cluster-connect.md
Before you begin, review the [conceptual overview of the cluster connect feature
- For an Azure AD user account: ```azurepowershell
- $AAD_ENTITY_OBJECT_ID = (az ad signed-in-user show --query objectId -o tsv)
+ $AAD_ENTITY_OBJECT_ID = (az ad signed-in-user show --query id -o tsv)
``` - For an Azure AD application:
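For the Azure AD application case, the lookup typically resolves the service principal's object ID. A sketch, assuming `$CLIENT_ID` holds the application's client ID (a hypothetical variable name used only for illustration):

```azurecli
# Resolve the service principal's object ID from the application's client ID (sketch).
AAD_ENTITY_OBJECT_ID=$(az ad sp show --id "$CLIENT_ID" --query id --output tsv)
```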
azure-arc Conceptual Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-configurations.md
Title: "GitOps Flux v1 configurations with Azure Arc-enabled Kubernetes" Previously updated : 12/02/2022 Last updated : 03/21/2023 description: "This article provides a conceptual overview of GitOps and configurations capability of Azure Arc-enabled Kubernetes."
description: "This article provides a conceptual overview of GitOps and configur
# GitOps Flux v1 configurations with Azure Arc-enabled Kubernetes > [!IMPORTANT]
-> The documents in this section are for GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [learn about GitOps with Flux v2](./conceptual-gitops-flux2.md). Eventually Azure will stop supporting GitOps with Flux v1, so we recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible.
+> The documents in this section are for GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [learn about GitOps with Flux v2](./conceptual-gitops-flux2.md). We recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible.
+>
+> Support for Flux v1-based cluster configuration resources created prior to May 1, 2023 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on May 1, 2023, you won't be able to create new Flux v1-based cluster configuration resources.
In relation to Kubernetes, GitOps is the practice of declaring the desired state of Kubernetes cluster configurations (deployments, namespaces, etc.) in a Git repository. This declaration is followed by a polling and pull-based deployment of these cluster configurations using an operator. The Git repository can contain:
azure-arc Conceptual Gitops Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-gitops-ci-cd.md
Title: "CI/CD Workflow using GitOps - Azure Arc-enabled Kubernetes" Previously updated : 12/02/2022 Last updated : 03/21/2023 description: "This article provides a conceptual overview of a CI/CD workflow using GitOps with Flux"
description: "This article provides a conceptual overview of a CI/CD workflow us
# CI/CD workflow using GitOps - Azure Arc-enabled Kubernetes > [!IMPORTANT]
-> The workflow described in this document uses GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [learn about CI/CD workflow using GitOps with Flux v2](./conceptual-gitops-flux2-ci-cd.md). Eventually Azure will stop supporting GitOps with Flux v1, so we recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible.
+> The workflow described in this document uses GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [learn about CI/CD workflow using GitOps with Flux v2](./conceptual-gitops-flux2-ci-cd.md). We recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible.
+>
+> Support for Flux v1-based cluster configuration resources created prior to May 1, 2023 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on May 1, 2023, you won't be able to create new Flux v1-based cluster configuration resources.
Modern Kubernetes deployments house multiple applications, clusters, and environments. With GitOps, you can manage these complex setups more easily, tracking the desired state of the Kubernetes environments declaratively with Git. Using common Git tooling to track cluster state, you can increase accountability, facilitate fault investigation, and enable automation to manage environments.
azure-arc Conceptual Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-gitops-flux2.md
GitOps is enabled in an Azure Arc-enabled Kubernetes or AKS cluster as a `Micros
The most recent version of the Flux v2 extension and the two previous versions (N-2) are supported. We generally recommend that you use the [most recent version](extensions-release.md#flux-gitops) of the extension. > [!NOTE]
-> Eventually Azure will stop supporting GitOps with Flux v1, so we recommend [migrating to Flux v2](#migrate-from-flux-v1) as soon as possible.
+> If you have been using Flux v1, we recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible.
+>
+> Support for Flux v1-based cluster configuration resources created prior to May 1, 2023 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on May 1, 2023, you won't be able to create new Flux v1-based cluster configuration resources.
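If you're still on Flux v1, migration essentially means recreating each configuration as a Flux v2 resource. A minimal sketch, with placeholder resource names and repository URL:

```azurecli
# Create a Flux v2 (GitOps) configuration on an Arc-enabled cluster (sketch; adjust names).
az k8s-configuration flux create \
    --resource-group myResourceGroup \
    --cluster-name myArcCluster \
    --cluster-type connectedClusters \
    --name cluster-config \
    --namespace cluster-config \
    --scope cluster \
    --url https://github.com/<org>/<repo> \
    --branch main \
    --kustomization name=infra path=./infrastructure prune=true
```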
### Controllers
azure-arc Tutorial Gitops Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-gitops-ci-cd.md
Title: 'Tutorial: Implement CI/CD with GitOps using Azure Arc-enabled Kubernetes clusters' description: This tutorial walks through setting up a CI/CD solution using GitOps with Azure Arc-enabled Kubernetes clusters. Previously updated : 12/02/2022 Last updated : 03/21/2023 # Tutorial: Implement CI/CD with GitOps using Azure Arc-enabled Kubernetes clusters > [!IMPORTANT]
-> This tutorial uses GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [go to the tutorial that uses GitOps with Flux v2](./tutorial-gitops-flux2-ci-cd.md). Eventually Azure will stop supporting GitOps with Flux v1, so we recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible.
+> This tutorial uses GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [go to the tutorial that uses GitOps with Flux v2](./tutorial-gitops-flux2-ci-cd.md). We recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible.
+>
+> Support for Flux v1-based cluster configuration resources created prior to May 1, 2023 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on May 1, 2023, you won't be able to create new Flux v1-based cluster configuration resources.
In this tutorial, you'll set up a CI/CD solution using GitOps with Azure Arc-enabled Kubernetes clusters. Using the sample Azure Vote app, you'll:
azure-arc Tutorial Gitops Flux2 Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-gitops-flux2-ci-cd.md
In this tutorial, you'll set up a CI/CD solution using GitOps with Flux v2 and A
> * Deploy the `dev` and `stage` environments. > * Test the application environments.
-> [!NOTE]
-> Eventually Azure will stop supporting GitOps with Flux v1, so begin using Flux v2 as soon as possible.
- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. [!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
azure-arc Tutorial Use Gitops Connected Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md
Title: 'Tutorial: Deploy configurations using GitOps on an Azure Arc-enabled Kubernetes cluster' description: This tutorial demonstrates applying configurations on an Azure Arc-enabled Kubernetes cluster. Previously updated : 02/16/2023 Last updated : 03/21/2023 # Tutorial: Deploy configurations using GitOps on an Azure Arc-enabled Kubernetes cluster > [!IMPORTANT]
-> This tutorial is for GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [go to the tutorial for GitOps with Flux v2](./tutorial-use-gitops-flux2.md). Eventually Azure will stop supporting GitOps with Flux v1, so begin using Flux v2 as soon as possible.
+> This tutorial is for GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [go to the tutorial for GitOps with Flux v2](./tutorial-use-gitops-flux2.md). We recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible.
+>
+> Support for Flux v1-based cluster configuration resources created prior to May 1, 2023 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on May 1, 2023, you won't be able to create new Flux v1-based cluster configuration resources.
In this tutorial, you will apply configurations using GitOps on an Azure Arc-enabled Kubernetes cluster. You'll learn how to:
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
Before you dive in, take a moment to [learn how GitOps with Flux works conceptua
> You can also create Flux configurations by using Bicep, ARM templates, or Terraform AzAPI provider. For more information, see [Microsoft.KubernetesConfiguration fluxConfigurations](/azure/templates/microsoft.kubernetesconfiguration/fluxconfigurations). > [!IMPORTANT]
-> The `microsoft.flux` extension released major version 1.0.0. This includes the [multi-tenancy feature](conceptual-gitops-flux2.md#multi-tenancy). If you have existing GitOps Flux v2 configurations that use a previous version of the `microsoft.flux` extension, you can upgrade to the latest extension manually using the Azure CLI: `az k8s-extension create -g <RESOURCE_GROUP> -c <CLUSTER_NAME> -n flux --extension-type microsoft.flux -t <CLUSTER_TYPE>` (use `-t connectedClusters` for Arc clusters and `-t managedClusters` for AKS clusters).
+> The `microsoft.flux` extension released major version 1.0.0. This includes the [multi-tenancy feature](conceptual-gitops-flux2.md#multi-tenancy). If you have existing GitOps Flux v2 configurations that use a previous version of the `microsoft.flux` extension, you can upgrade to the [latest version](extensions-release.md#flux-gitops) manually using the Azure CLI: `az k8s-extension create -g <RESOURCE_GROUP> -c <CLUSTER_NAME> -n flux --extension-type microsoft.flux -t <CLUSTER_TYPE>` (use `-t connectedClusters` for Arc clusters and `-t managedClusters` for AKS clusters).
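Before upgrading manually, you can check which `microsoft.flux` version a cluster is currently running; the resource group and cluster names below are placeholders:

```azurecli
# Show the installed Flux extension version on an Arc-enabled cluster (sketch).
az k8s-extension show \
    --resource-group <RESOURCE_GROUP> \
    --cluster-name <CLUSTER_NAME> \
    --cluster-type connectedClusters \
    --name flux \
    --query version --output tsv
```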
> [!TIP] > When using this extension with [AKS hybrid clusters provisioned from Azure](extensions.md#aks-hybrid-clusters-provisioned-from-azure-preview) you must set `--cluster-type` to use `provisionedClusters` and also add `--cluster-resource-provider microsoft.hybridcontainerservice` to the command. Installing Azure Arc extensions on AKS hybrid clusters provisioned from Azure is currently in preview.
-> [!NOTE]
-> Eventually Azure will stop supporting GitOps with Flux v1, so we recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible.
- ## Prerequisites To deploy applications using GitOps with Flux v2, you need the following:
azure-arc Use Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/use-azure-policy.md
Title: "Apply Flux v1 configurations at-scale using Azure Policy" Previously updated : 12/02/2022 Last updated : 03/21/2023 description: "Apply Flux v1 configurations at-scale using Azure Policy"
description: "Apply Flux v1 configurations at-scale using Azure Policy"
You can use Azure Policy to apply Flux v1 configurations (`Microsoft.KubernetesConfiguration/sourceControlConfigurations` resource type) at scale on Azure Arc-enabled Kubernetes clusters (`Microsoft.Kubernetes/connectedclusters`). > [!IMPORTANT]
-> This article is for GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [go to the article for using policy with Flux v2](./use-azure-policy-flux-2.md). Eventually Azure will stop supporting GitOps with Flux v1, so we recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible.
+> This article is for GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; learn about [using Azure Policy with Flux v2](./use-azure-policy-flux-2.md). We recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible.
+>
+> Support for Flux v1-based cluster configuration resources created prior to May 1, 2023 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on May 1, 2023, you won't be able to create new Flux v1-based cluster configuration resources.
To use Azure Policy, select a built-in GitOps policy definition and create a policy assignment. When creating the policy assignment: 1. Set the scope for the assignment.
azure-arc Use Gitops With Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/use-gitops-with-helm.md
Title: "Deploy Helm Charts using GitOps on Azure Arc-enabled Kubernetes cluster" Previously updated : 05/24/2022 Last updated : 03/21/2023 description: "Use GitOps with Helm for an Azure Arc-enabled cluster configuration"
description: "Use GitOps with Helm for an Azure Arc-enabled cluster configuratio
# Deploy Helm Charts using GitOps on an Azure Arc-enabled Kubernetes cluster > [!IMPORTANT]
-> This article is for GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [go to the tutorial for GitOps with Flux v2](./tutorial-use-gitops-flux2.md). Eventually Azure will stop supporting GitOps with Flux v1, so we recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible.
-
+> This article is for GitOps with Flux v1. GitOps with Flux v2 is now available for Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters; [go to the tutorial for GitOps with Flux v2](./tutorial-use-gitops-flux2.md). We recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible.
+>
+> Support for Flux v1-based cluster configuration resources created prior to May 1, 2023 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on May 1, 2023, you won't be able to create new Flux v1-based cluster configuration resources.
Helm is an open-source packaging tool that helps you install and manage the lifecycle of Kubernetes applications. Similar to Linux package managers like APT and Yum, Helm is used to manage Kubernetes charts, which are packages of pre-configured Kubernetes resources. This article shows you how to configure and use Helm with Azure Arc-enabled Kubernetes.
azure-arc System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/system-requirements.md
If you're deploying on VMware, a x64 Python environment is required. The [pip](h
If you're deploying on Azure Stack HCI, the x32 Azure CLI installer can be used to install Azure CLI. +
+The Arc Appliance CLI extension, `arcappliance`, must be installed in the Azure CLI. Install it by running: `az extension add --name arcappliance`
+ ## Minimum resource requirements Arc resource bridge has the following minimum resource requirements:
When deploying Arc resource bridge with AKS on Azure Stack HCI (AKS Hybrid), the
- Review the [Azure Arc resource bridge (preview) overview](overview.md) to understand more about requirements and technical details. - Learn about [security configuration and considerations for Azure Arc resource bridge (preview)](security-overview.md).+
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Competitive Innovations, LLC](https://www.cillc.com)| |[CompuNet Inc.](https://compunet.biz/)| |[Computer Solutions Inc.](http://cs-inc.co/)|
-|[Computex Technology Solutions](http://www.computex-inc.com/)|
|[Communication Square LLC](https://www.communicationsquare.com)| |[ConvergeOne](https://www.convergeone.com)| |[Copper River Technologies, LLC](http://www.copperrivertech.com/)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Dell Federal Services](https://www.dellemc.com/en-us/industry/federal/federal-government-it.htm#)| |[Dell Marketing LP](https://www.dell.com/)| |[Delphi Technology Solutions](https://delphi-ts.com/)|
-|[Derek Coleman & Associates Corporation](https://www.dcassociatesgroup.com/https://docsupdatetracker.net/index.html)|
+|[Derek Coleman & Associates Corporation](https://www.dcassociatesgroup.com/)|
|[Developing Today LLC](https://www.developingtoday.net/)| |[DevHawk, LLC](https://www.devhawk.io)| |Diamond Capture Associates LLC|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[PA-Group](https://pa-group.us/)| |[Palecek Consulting Group](https://www.pcgit.net)| |[Pangea Group Inc.](http://www.pangea-group.com)|
-|[Parachute Technology](https://www.parachutech.com)|
|[Paragon Software Solutions, Inc.](http://www.paragonhq.com/)| |[Patrocinium Systems, Inc.](https://www.patrocinium.com)| |[PCM](https://www.pcm.com/)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Pax8](https://www.pax8.com/en-us/microsoft/)| |[Synnex](https://www.synnexcorp.com)| |[Tech Data Corporation](https://www.techdata.com/)|
-|[Tech Data Government Solutions, LLC](https://gov.as.techdata.com/en-us/Pages/default.aspx)|
- ## Approved LSPs
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
In addition to collecting [Metrics](standard-metrics.md) and application [Teleme
The [log trace](asp-net-trace-logs.md) is associated with other telemetry to give a detailed view of the activity. Adding trace logging to existing apps only requires providing a destination for the logs; the logging framework rarely needs to be changed. + Application Insights provides other features including, but not limited to: - [Live Metrics](live-stream.md) ΓÇô observe activity from your deployed application in real time with no effect on the host environment
azure-monitor Data Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/data-platform.md
Today's complex computing environments run distributed applications that rely on
[Azure Monitor](overview.md) collects and aggregates data from various sources into a common data platform where it can be used for analysis, visualization, and alerting. It provides a consistent experience on top of data from multiple sources. You can gain deep insights across all your monitored resources and even with data from other services that store their data in Azure Monitor.
-![Diagram that shows an overview of Azure Monitor with data sources on the left sending data to a central data platform and features of Azure Monitor on the right that use the collected data.](media/overview/azure-monitor-overview-2022_10_15-add-prometheus-opt.svg)
## Observability data in Azure Monitor
-Metrics, logs, and distributed traces are commonly referred to as the three pillars of observability. A monitoring tool must collect and analyze these three different kinds of data to provide sufficient observability of a monitored system. Observability can be achieved by correlating data from multiple pillars and aggregating data across the entire set of resources being monitored. Because Azure Monitor stores data from multiple sources together, the data can be correlated and analyzed by using a common set of tools. It also correlates data across multiple Azure subscriptions and tenants, in addition to hosting data for other services.
+Metrics, logs, and distributed traces are commonly referred to as the three pillars of observability. A monitoring tool must collect and analyze these three different kinds of data to provide sufficient observability of a monitored system. Observability can be achieved by correlating data from multiple pillars and aggregating data across the entire set of resources being monitored. Because Azure Monitor stores data from multiple sources together, the data can be correlated and analyzed by using a common set of tools. It also correlates data across multiple Azure subscriptions and tenants, in addition to hosting data for other services.
Azure resources generate a significant amount of monitoring data. Azure Monitor consolidates this data along with monitoring data from other sources into either a Metrics or Logs platform. Each is optimized for particular monitoring scenarios, and each supports different features in Azure Monitor. Features such as data analysis, visualizations, or alerting require you to understand the differences so that you can implement your required scenario in the most efficient and cost effective manner. Insights in Azure Monitor such as [Application Insights](app/app-insights-overview.md) or [Container insights](containers/container-insights-overview.md) have analysis tools that allow you to focus on the particular monitoring scenario without having to understand the differences between the two types of data. - ### Metrics [Metrics](essentials/data-platform-metrics.md) are numerical values that describe some aspect of a system at a particular point in time. They're collected at regular intervals and are identified with a timestamp, a name, a value, and one or more defining labels. Metrics can be aggregated by using various algorithms. They can be compared to other metrics and analyzed for trends over time.
azure-monitor Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/data-sources.md
Some of these data sources use the [new data ingestion pipeline](essentials/data
Sources of monitoring data from Azure applications can be organized into tiers, the highest tiers being your application itself and the lower tiers being components of Azure platform. The method of accessing data from each tier varies. The application tiers are summarized in the table below, and the sources of monitoring data in each tier are presented in the following sections. See [Monitoring data locations in Azure](monitor-reference.md) for a description of each data location and how you can access its data. ### Azure
azure-monitor Change Pricing Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/change-pricing-tier.md
Each Log Analytics workspace in Azure Monitor can have a different [pricing tier
## Permissions required To change the pricing tier for a workspace, you must be assigned to one of the following roles: -- Log Analytics Contributor role
+- Log Analytics Contributor role.
- A custom role with `Microsoft.OperationalInsights/workspaces/*/write` permissions. ## Changing pricing tier
Use the following steps to change the pricing tier of your workspace using the A
1. From the **Log Analytics workspaces** menu, select your workspace, and open **Usage and estimated costs**. This displays a list of each of the pricing tiers available for this workspace.
-2. Review the estimated costs for each pricing tier. This estimate assumes that the last 31 days of your usage is typical. In the example below, based on the data patterns from the previous 31 days, this workspace would cost less in the Pay-As-You-Go tier (#1) compared to the 100 GB/day commitment tier (#2).
+2. Review the estimated costs for each pricing tier. This estimate assumes that the last 31 days of your usage is typical. Choose the tier with the lowest estimated cost.
:::image type="content" source="media/manage-cost-storage/pricing-tier-estimated-costs.png" alt-text="Pricing tiers":::
azure-monitor Migrate Splunk To Azure Monitor Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/migrate-splunk-to-azure-monitor-logs.md
The benefits of migrating to Azure Monitor include:
|Splunk offering|Azure offering| |||
-|Splunk Observability|[Azure Monitor](../overview.md) is an end-to-end solution for collecting, analyzing, and acting on telemetry from your cloud, multicloud, and on-premises environments, built over a powerful data ingestion pipeline that's shared with Microsoft Sentinel. Azure Monitor offers enterprises a comprehensive solution for monitoring cloud, hybrid, and on-premises environments, with [network isolation](../logs/private-link-security.md), [resilience features and protection from data center failures](../logs/availability-zones.md), [reporting](../overview.md#insights-and-visualizations), and [alerts and response](../overview.md#respond) capabilities.|
+|Splunk Observability|[Azure Monitor](../overview.md) is an end-to-end solution for collecting, analyzing, and acting on telemetry from your cloud, multicloud, and on-premises environments, built over a powerful data ingestion pipeline that's shared with Microsoft Sentinel. Azure Monitor offers enterprises a comprehensive solution for monitoring cloud, hybrid, and on-premises environments, with [network isolation](../logs/private-link-security.md), [resilience features and protection from data center failures](../logs/availability-zones.md), [reporting](../overview.md#insights), and [alerts and response](../overview.md#respond) capabilities.|
|Splunk Security|[Microsoft Sentinel](../../sentinel/overview.md) is a cloud-native solution that runs over the Azure Monitor platform to provide intelligent security analytics and threat intelligence across the enterprise.| ## Introduction to key concepts
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md
Title: Azure Monitor overview description: Overview of Microsoft services and functionalities that contribute to a complete monitoring strategy for your Azure services and applications. -+ Previously updated : 02/08/2023 Last updated : 03/20/2023 # Azure Monitor overview
Azure Monitor collects and aggregates the data from every layer and component of
Azure Monitor also includes Azure Monitor SCOM Managed Instance, which allows you to move your on-premises System Center Operation Manager (Operations Manager) installation to the cloud in Azure.
-Use Azure Monitor to monitor these types of resources in Azure, other clouds, or on-premises:
- - Applications
+Use Azure Monitor to monitor these types of resources in Azure, other clouds, or on-premises:
+ - Applications
- Virtual machines
- - Guest operating systems
- - Containers
+ - Guest operating systems
+ - Containers including Prometheus metrics
- Databases - Security events in combination with Azure Sentinel - Networking events and health in combination with Network Watcher - Custom sources that use the APIs to get data into Azure Monitor
- - Collect your Prometheus metrics with Azure Managed Prometheus and analyze them using PromQL in Azure Managed Grafana.
-
+ You can also export monitoring data from Azure Monitor into other systems so you can: - Integrate with other third-party and open-source monitoring and visualization tools - Integrate with ticketing and other ITSM systems
Azure Monitor achieves observability by correlating data from multiple pillars a
The following diagram gives a high-level view of Azure Monitor. The diagram depicts the Azure Monitor system components: - The **[data sources](data-sources.md)** are the types of data collected from each monitored resource. The data is collected and routed to the **data platform**. - The **[data platform](data-platform.md)** is made up of the data stores for collected data. Azure Monitor's data platform has stores for metrics, logs, traces, and changes. - The functions and components that consume data include analysis, visualizations, insights, and responses.-- Services that integrate with Azure Monitor to provide additional functionality and are integrated throughout the system.
+- Services that integrate with Azure Monitor and provide additional functionality are marked with an asterisk * in the diagram.
## Data sources
-Azure Monitor can collect data from multiple sources, including from your application, operating systems, the services they rely on, and from the platform itself.
-You can integrate monitoring data from sources outside Azure, including on-premises and other non-Microsoft clouds, using the application, infrastructure, and custom data sources.
+Azure Monitor can collect data from multiple sources, including from your application, operating systems, the services they rely on, and from the platform itself. The diagram below shows an expanded version of the data source types gathered by Azure Monitor.
-Azure Monitor collects these types of data:
-|Data Type |Description |
-|||
-|Application|Data about the performance and functionality of your application code on any platform.|
-|Infrastructure|**- Container.** Data about containers, such as [Azure Kubernetes Service](../aks/intro-kubernetes.md), [Prometheus](./essentials/prometheus-metrics-overview.md), and about the applications running inside containers.<br>**- Operating system.** Data about the guest operating system on which your application is running.|
-|Azure Platform|**- Azure resource**. The operation of an Azure resource.<br>**- Azure subscription.** The operation and management of an Azure subscription, and data about the health and operation of Azure itself.<br>**- Azure tenant.** Data about the operation of tenant-level Azure services, such as Azure Active Directory.<br>**- Azure resource changes.** Data about changes within your Azure resources and how to address and triage incidents and issues. |
-|Custom Sources|Use the Azure Monitor REST API to send customer metric or log data to Azure Monitor and incorporate monitoring of resources that donΓÇÖt expose monitoring data through other methods.|
-For detailed information about each of the data sources, see [data sources](./data-sources.md).
-## Data collection and routing
+Select the picture to see a larger version of the data sources diagram in context.
-Azure Monitor collects and routes monitoring data using several mechanisms, depending on the data being routed and the destination data platform stores.
+You can integrate monitoring data from sources outside Azure, including on-premises and other non-Microsoft clouds, using the application, infrastructure, and custom data sources.
-|Collection method|Description |
-|||
-|Direct data routing|Platform metrics are sent automatically to Azure Monitor Metrics by default and without configuration.|
-|[Diagnostic settings](essentials/diagnostic-settings.md)|Use diagnostic settings to determine where to send resource and activity log data on the data platform.|
-|[Data collection rules](essentials/data-collection-rule-overview.md)|Use data collection rules to specify what data should be collected, how to transform that data, and where to send that data.|
-|[Application SDK](app/app-insights-overview.md)|Add the Application Insights SDK to your application code to receive, store, and explore your monitoring data. The SDK pre-processes telemetry and metrics before sending the data to Azure where it's ingested and processed further before being stored in Azure Monitor Logs.|
-|[Azure Monitor REST API](logs/logs-ingestion-api-overview.md)|The Logs Ingestion API in Azure Monitor lets you send data to a Log Analytics workspace from any REST API client.|
-|[Azure Monitor Agents](agents/agents-overview.md)|Azure Monitor Agent (AMA) collects monitoring data from the guest operating system of Azure and hybrid virtual machines and delivers it to Azure Monitor for use by features, insights, and other services, such as Microsoft Sentinel and Microsoft Defender for Cloud.|
-|[Azure Monitor managed service for Prometheus](./essentials/prometheus-metrics-overview.md)|Azure Monitor managed service for Prometheus lets you to collect and analyze metrics at scale using a Prometheus-compatible monitoring solution, based on the [Prometheus](https://aka.ms/azureprometheus-promio) project from the Cloud Native Compute Foundation.
+Azure Monitor collects these types of data:
-For detailed information about data collection, see [data collection](./best-practices-data-collection.md).
+|Data Type|Description and subtypes|
+|--|--|
+|Application|Data about the performance and functionality of your application code on any platform.|
+|Infrastructure|**Container** - Data about containers, such as [Azure Kubernetes Service](../aks/intro-kubernetes.md), [Prometheus](./essentials/prometheus-metrics-overview.md), and the applications running inside containers.<br><br>**Operating system** - Data about the guest operating system on which your application is running.|
+|Azure Platform|**Azure resource** - Data about the operation of an Azure resource from inside the resource, including changes. Resource Logs are one example.<br><br>**Azure subscription** - The operation and management of an Azure subscription, and data about the health and operation of Azure itself. The activity log is one example.<br><br>**Azure tenant** - Data about the operation of tenant-level Azure services, such as Azure Active Directory.|
+|Custom Sources|Data sent into the Azure Monitor data platform using the Azure Monitor REST API.|
+
+For detailed information about each of the data sources, see [data sources](./data-sources.md).
## Data platform Azure Monitor stores data in data stores for each of the pillars of observability: metrics, logs, distributed traces, and changes. Each store is optimized for specific types of data and monitoring scenarios. +
+Select the picture to see a larger version of the data platform diagram in context.
|Pillar of Observability/<br>Data Store|Description| ||| |[Azure Monitor Metrics](essentials/data-platform-metrics.md)|Metrics are numerical values that describe an aspect of a system at a particular point in time. [Azure Monitor Metrics](./essentials/data-platform-metrics.md) is a time-series database, optimized for analyzing time-stamped data. Azure Monitor collects metrics at regular intervals. Metrics are identified with a timestamp, a name, a value, and one or more defining labels. They can be aggregated using algorithms, compared to other metrics, and analyzed for trends over time. It supports native Azure Monitor metrics and [Prometheus metrics](essentials/prometheus-metrics-overview.md).| |[Azure Monitor Logs](logs/data-platform-logs.md)|Logs are recorded system events. Logs can contain different types of data, be structured or free-form text, and they contain a timestamp. Azure Monitor stores structured and unstructured log data of all types in [Azure Monitor Logs](./logs/data-platform-logs.md). You can route data to [Log Analytics workspaces](./logs/log-analytics-overview.md) for querying and analysis.|
-|Traces|Distributed traces identify the series of related events that follow a user request through a distributed system. A trace measures the operation and performance of your application across the entire set of components in your system. Traces can be used to determine the behavior of application code and the performance of different transactions. Azure Monitor gets distributed trace data from the Application Insights SDK. The trace data is stored in a separate workspace in Azure Monitor Logs.|
+|Traces|[Distributed traces](app/distributed-tracing.md) identify the series of related events that follow a user request through a distributed system. A trace measures the operation and performance of your application across the entire set of components in your system. Traces can be used to determine the behavior of application code and the performance of different transactions. Azure Monitor gets distributed trace data from the Application Insights SDK. The trace data is stored in a separate workspace in Azure Monitor Logs.|
|Changes|Changes are a series of events in your application and resources. They're tracked and stored when you use the [Change Analysis](./change/change-analysis.md) service, which uses [Azure Resource Graph](../governance/resource-graph/overview.md) as its store. Change Analysis helps you understand which changes, such as deploying updated code, may have caused issues in your systems.|
-## The Azure portal
-The Azure portal is a web-based, unified console that provides an alternative to command-line tools. With the Azure portal, you can manage your Azure subscription using a graphical user interface. You can build, manage, and monitor everything from simple web apps to complex cloud deployments in the portal.
-The Monitor section of the Azure portal provides a visual interface that gives you access to the data collected for Azure resources and an easy way to access the tools, insights, and visualizations in Azure Monitor.
+For less expensive, long-term archival of monitoring data for auditing or compliance purposes, you can export to [Azure Storage](/azure/storage/).
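One way to set up that archival path is a Log Analytics data export rule; a hedged sketch follows, where the workspace, table, and storage account values are placeholders:

```azurecli
# Continuously export a Log Analytics table to a storage account for archival (sketch).
az monitor log-analytics workspace data-export create \
    --resource-group myResourceGroup \
    --workspace-name myWorkspace \
    --name export-to-storage \
    --tables Heartbeat \
    --destination <storage-account-resource-id>
```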
+
+## Data collection and routing
-## Insights and Visualizations
+Azure Monitor collects and routes monitoring data using several mechanisms, depending on the data being routed and the destination. Much like a road system built over time, not all routes lead to all locations: some collection methods are legacy, some are newer, and some are a better choice than others. For more information, see **[data sources](data-sources.md)**.
-Insights and visualizations help increase your visibility into the operation of your computing environment. Some Azure resource providers have curated visualizations that provide a customized monitoring experience and require minimal configuration.
+
+Select the picture to see a larger version of the data collection diagram in context.
++
+|Collection method|Description |
+|||
+|[Application SDK](app/app-insights-overview.md)| You can add the Application Insights SDK to your application code to receive, store, and explore your monitoring data. The SDK preprocesses telemetry and metrics before sending the data to Azure where it's ingested and processed further before being stored in Azure Monitor Logs.|
+|[Agents](agents/agents-overview.md)|Agents can collect monitoring data from applications, the guest operating system of Azure, and hybrid virtual machines.|
+|[Data collection rules](essentials/data-collection-rule-overview.md)|Use data collection rules to specify what data should be collected, how to transform it, and where to send it.|
+|Internal| Data is automatically sent to a destination without user configuration. |
+|[Diagnostic settings](essentials/diagnostic-settings.md)|Use diagnostic settings to determine where to send resource log and activity log data on the data platform.|
+|[Azure Monitor REST API](logs/logs-ingestion-api-overview.md)|The Logs Ingestion API in Azure Monitor lets you send data to a Log Analytics workspace in Azure Monitor Logs. You can also send metrics into the Azure Monitor Metrics store using the custom metrics API.|
+
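For example, the following sketch shows one way to check which diagnostic settings and data collection rules from the table above already apply, by using Azure PowerShell. This is a minimal sketch, assuming the Az.Monitor module; the resource ID and resource group name are placeholders, not values from this article.

```powershell
# A minimal sketch, assuming the Az.Monitor module and a signed-in session.
Connect-AzAccount

# Diagnostic settings configured on a specific resource (placeholder resource ID).
$resourceId = "/subscriptions/<subscription-id>/resourceGroups/myRG/providers/Microsoft.KeyVault/vaults/myVault"
Get-AzDiagnosticSetting -ResourceId $resourceId

# Data collection rules defined in a resource group (placeholder name).
Get-AzDataCollectionRule -ResourceGroupName "myRG"
```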
+A common way to route monitoring data to non-Microsoft tools is by using *Event Hubs*. For more information, see the [Integrate](#integrate) section later in this article.
+
+For detailed information about data collection, see [data collection](./best-practices-data-collection.md).
+
+## Consumption
+
+The following sections outline methods and services that consume monitoring data from the Azure Monitor data platform.
+
+All areas in the *consumption* section of the diagram have a user interface that appears in the Azure portal.
+
+### The Azure portal
+
+The Azure portal is a web-based, unified console that provides an alternative to command-line tools. With the Azure portal, you can manage your Azure subscription using a graphical user interface. You can build, manage, and monitor everything from simple web apps to complex cloud deployments in the portal. The *Monitor* section of the Azure portal provides a visual interface that gives you access to the data collected for Azure resources and an easy way to access the tools, insights, and visualizations in Azure Monitor.
+ ### Insights
-Insights are large, scalable, curated visualizations. For more information, see List of insights and curated visualizations using Azure Monitor.
-The following table describes the three major insights:
+Insights are large, scalable, curated visualizations. Some Azure resource providers offer them to provide a customized monitoring experience that requires minimal configuration.
++
+The following table describes some of the larger insights:
|Insight |Description | ||| |[Application Insights](app/app-insights-overview.md)|Application Insights takes advantage of the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application's operations. Application Insights monitors the availability, performance, and usage of your web applications whether they're hosted in the cloud or on-premises. You can use it to diagnose errors without waiting for a user to report them. Application Insights includes connection points to various development tools and integrates with Visual Studio to support your DevOps processes.| |[Container Insights](containers/container-insights-overview.md)|Container Insights gives you performance visibility into container workloads that are deployed to managed Kubernetes clusters hosted on Azure Kubernetes Service. Container Insights collects container logs and metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. After you enable monitoring from Kubernetes clusters, these metrics and logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux.| |[VM Insights](vm/vminsights-overview.md)|VM Insights monitors your Azure VMs. It analyzes the performance and health of your Windows and Linux VMs and identifies their different processes and interconnected dependencies on external processes. The solution includes support for monitoring performance and application dependencies for VMs hosted on-premises or another cloud provider.|
+|[Network Insights](../network-watcher/network-insights-overview.md)|Network Insights provides a comprehensive visual representation, through topologies, of the health and metrics for all deployed network resources, without requiring any configuration. It also provides access to network monitoring capabilities like Connection Monitor, flow logging for network security groups (NSGs), and Traffic Analytics, as well as other diagnostic features.|
+
+For more information, see the [list of insights and curated visualizations in the Azure Monitor Insights overview](insights/insights-overview.md).
### Visualize
-Visualizations such as charts and tables are effective tools for summarizing monitoring data and presenting it to different audiences. Azure Monitor has its own features for visualizing monitoring data and uses other Azure services for publishing it to different audiences.
+
+Visualizations such as charts and tables are effective tools for summarizing monitoring data and presenting it to different audiences. Azure Monitor has its own features for visualizing monitoring data and uses other Azure services for publishing it to different audiences. Power BI and Grafana aren't officially part of the Azure Monitor product, but they're core integrations and part of the Azure Monitor story.
|Visualization|Description | ||| |[Dashboards](visualize/tutorial-logs-dashboards.md)|Azure dashboards allow you to combine different kinds of data into a single pane in the Azure portal. You can optionally share the dashboard with other Azure users. You can add the output of any log query or metrics chart to an Azure dashboard. For example, you could create a dashboard that combines tiles that show a graph of metrics, a table of activity logs, a usage chart from Application Insights, and the output of a log query.| |[Workbooks](visualize/workbooks-overview.md)|Workbooks provide a flexible canvas for data analysis and the creation of rich visual reports in the Azure portal. You can use them to query data from multiple data sources. Workbooks can combine and correlate data from multiple data sets in one visualization giving you easy visual representation of your system. Workbooks are interactive and can be shared across teams with data updating in real time. Use workbooks provided with Insights, utilize the library of templates, or create your own.|
-|[Power BI](logs/log-powerbi.md)|Power BI is a business analytics service that provides interactive visualizations across various data sources. It's an effective means of making data available to others within and outside your organization. You can configure Power BI to automatically import log data from Azure Monitor to take advantage of these visualizations.|
+|[Power BI](logs/log-powerbi.md)|Power BI is a business analytics service that provides interactive visualizations across various data sources. It's an effective means of making data available to others within and outside your organization. You can configure Power BI to automatically import log data from Azure Monitor to take advantage of these visualizations. |
|[Grafana](visualize/grafana-plugin.md)|Grafana is an open platform that excels in operational dashboards. Grafana has popular plug-ins and dashboard templates for APM tools such as Dynatrace, New Relic, and AppDynamics. You can use these resources to visualize Azure platform data alongside other metrics from higher in the stack collected by other tools. It also has AWS CloudWatch and GCP BigQuery plug-ins for multicloud monitoring in a single pane of glass. All versions of Grafana include the Azure Monitor data source plug-in to visualize your Azure Monitor metrics and logs. Azure Managed Grafana also optimizes this experience for Azure-native data stores such as Azure Monitor and Azure Data Explorer. In this way, you can easily connect to any resource in your subscription and view all resulting monitoring data in a familiar Grafana dashboard. It also supports pinning charts from Azure Monitor metrics and logs to Grafana dashboards.|
-## Analyze
+### Analyze
+ The Azure portal contains built-in tools that allow you to analyze monitoring data. + |Tool |Description | ||| |[Metrics explorer](essentials/metrics-getting-started.md)|Use the Azure Monitor metrics explorer user interface in the Azure portal to investigate the health and utilization of your resources. Metrics explorer helps you plot charts, visually correlate trends, and investigate spikes and dips in metric values. Metrics explorer contains features for applying dimensions and filtering, and for customizing charts. These features help you analyze exactly the data you need in a visually intuitive way.|
-|[Log Analytics](logs/log-analytics-overview.md)|The Log Analytics user interface in the Azure portal helps you query the log data collected by Azure Monitor so that you can quickly retrieve, consolidate, and analyze collected data. After creating test queries, you can then directly analyze the data with Azure Monitor tools, or you can save the queries for use with visualizations or alert rules. Log Analytics workspaces are based on Azure Data Explorer, using a powerful analysis engine and the rich Kusto query language (KQL). Azure Monitor Logs uses a version of the Kusto Query Language suitable for simple log queries, and advanced functionality such as aggregations, joins, and smart analytics. You can [get started with KQL](logs/get-started-queries.md) quickly and easily.|
-|[Change Analysis](change/change-analysis.md)| The Change Analysis user interface in the Azure portal gives you insight into the cause of live site issues, outages, or component failures. Change Analysis uses the power of [Azure Resource Graph](../governance/resource-graph/overview.md) to detect various types of changes, from the infrastructure layer through application deployment. Change Analysis is a subscription-level Azure resource provider that checks resource changes in the subscription and provides data for diagnostic tools to help users understand what changes might have caused issues.|
-
+|[Log Analytics](logs/log-analytics-overview.md)|The Log Analytics user interface in the Azure portal helps you query the log data collected by Azure Monitor so that you can quickly retrieve, consolidate, and analyze collected data. After creating test queries, you can then directly analyze the data with Azure Monitor tools, or you can save the queries for use with visualizations or alert rules. Log Analytics workspaces are based on Azure Data Explorer, using a powerful analysis engine and the rich Kusto Query Language (KQL). Azure Monitor Logs uses a version of the Kusto Query Language suitable for simple log queries, and advanced functionality such as aggregations, joins, and smart analytics. You can [get started with KQL](logs/get-started-queries.md) quickly and easily (see the example after this table). NOTE: The term "Log Analytics" is sometimes used to mean both the Azure Monitor Logs data platform store and the UI that accesses that store. Prior to 2019, the term "Log Analytics" referred to both. It's still common to find content using that framing in various blogs and documentation on the internet. |
+|[Change Analysis](change/change-analysis.md)| Change Analysis is a subscription-level Azure resource provider that checks resource changes in the subscription and provides data for diagnostic tools to help users understand what changes might have caused issues. The Change Analysis user interface in the Azure portal gives you insight into the cause of live site issues, outages, or component failures. Change Analysis uses the [Azure Resource Graph](../governance/resource-graph/overview.md) to detect various types of changes, from the infrastructure layer through application deployment.|
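To illustrate the Log Analytics row above, the following sketch runs a KQL log query against a workspace from PowerShell instead of the portal. It assumes the Az.OperationalInsights module; the workspace ID is a placeholder.

```powershell
# A minimal sketch, assuming the Az.OperationalInsights module; the workspace ID is a placeholder.
$workspaceId = "00000000-0000-0000-0000-000000000000"   # Log Analytics workspace (customer) ID

# KQL: latest heartbeat per computer reporting to the workspace.
$query = @"
Heartbeat
| summarize LastHeartbeat = max(TimeGenerated) by Computer
| order by LastHeartbeat desc
"@

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query
$result.Results | Format-Table Computer, LastHeartbeat
```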
-## Respond
+### Respond
An effective monitoring solution proactively responds to critical events, without the need for an individual or team to notice the issue. The response could be a text or email to an administrator, or an automated process that attempts to correct an error condition. -- **[Alerts](alerts/alerts-overview.md)** notify you of critical conditions and can take corrective action. Alert rules can be based on metric or log data. Metric alert rules provide near-real-time alerts based on collected metrics. Log alerts rules based on logs allow for complex logic across data from multiple sources. +
+**[Alerts](alerts/alerts-overview.md)** notify you of critical conditions and can take corrective action. Alert rules can be based on metric or log data. Metric alert rules provide near-real-time alerts based on collected metrics. Log alert rules allow for complex logic across data from multiple sources.
Alert rules use action groups, which can perform actions like sending email or SMS notifications. Action groups can send notifications using webhooks to trigger external processes or to integrate with your IT service management tools. Action groups, actions, and sets of recipients can be shared across multiple rules.-- **[Autoscale](autoscale/autoscale-overview.md)** allows you to dynamically control the number of resources running to handle the load on your application. You can create rules that use Azure Monitor metrics to determine when to automatically add resources when the load increases or remove resources that are sitting idle. You can specify a minimum and maximum number of instances, and the logic for when to increase or decrease resources to save money and to increase performance.
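As a rough illustration of alert rules and action groups, the following sketch creates an email action group and a metric alert rule with Azure PowerShell. This is a minimal sketch, assuming the Az.Monitor module; the names, email address, and target resource ID are placeholders, and cmdlet parameters can differ between module versions.

```powershell
# A minimal sketch, assuming the Az.Monitor module; all names and IDs are placeholders.
$rg = "myRG"

# Action group: who or what gets notified when an alert fires.
$email       = New-AzActionGroupReceiver -Name "notify-admin" -EmailReceiver -EmailAddress "admin@contoso.com"
$actionGroup = Set-AzActionGroup -ResourceGroupName $rg -Name "ops-actions" -ShortName "ops" -Receiver $email

# Metric alert rule: fire when average CPU exceeds 80% over a 5-minute window.
$condition = New-AzMetricAlertRuleV2Criteria -MetricName "Percentage CPU" `
    -TimeAggregation Average -Operator GreaterThan -Threshold 80

Add-AzMetricAlertRuleV2 -Name "high-cpu" -ResourceGroupName $rg `
    -TargetResourceId "/subscriptions/<subscription-id>/resourceGroups/myRG/providers/Microsoft.Compute/virtualMachines/myVM" `
    -WindowSize 00:05:00 -Frequency 00:01:00 -Severity 2 `
    -Condition $condition -ActionGroupId $actionGroup.Id
```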
-## Integrate
+
+**[Autoscale](autoscale/autoscale-overview.md)** allows you to dynamically control the number of resources running to handle the load on your application. You can create rules that use Azure Monitor metrics to determine when to automatically add resources when the load increases or remove resources that are sitting idle. You can specify a minimum and maximum number of instances, and the logic for when to increase or decrease resources to save money and to increase performance.
+
-You may need to integrate Azure Monitor with other systems or to build custom solutions that use your monitoring data. These Azure services work with Azure Monitor to provide integration capabilities.
+**[Azure Logic Apps](../logic-apps/logic-apps-overview.md)** is a service where you can create and run automated workflows with little to no code. While not a part of the Azure Monitor product, it's a core part of the story. You can use Logic Apps to [customize responses and perform other actions in response to Azure Monitor alerts](alerts/alerts-logic-apps.md). You can also use Logic Apps to perform other [more complex actions](logs/logicapp-flow-connector.md) if the Azure Monitor infrastructure doesn't have a built-in method.
+
+## Integrate
+You may need to integrate Azure Monitor with other systems or build custom solutions that use your monitoring data. These Azure services work with Azure Monitor to provide integration capabilities. The following are only a few of the possible integrations.
|Azure service |Description | ||| |[Event Hubs](../event-hubs/event-hubs-about.md)|Azure Event Hubs is a streaming platform and event ingestion service. It can transform and store data by using any real-time analytics provider or batching/storage adapters. Use Event Hubs to stream Azure Monitor data to partner SIEM and monitoring tools.| |[Logic Apps](../logic-apps/logic-apps-overview.md)|Azure Logic Apps is a service you can use to automate tasks and business processes by using workflows that integrate with different systems and services. Activities are available that read and write metrics and logs in Azure Monitor.| |[API](/rest/api/monitor/)|Multiple APIs are available to read and write metrics and logs to and from Azure Monitor in addition to accessing generated alerts. You can also configure and retrieve alerts. With APIs, you have unlimited possibilities to build custom solutions that integrate with Azure Monitor.|
+|[Hosted Partners](partners.md) | Many external partners integrate with Azure Monitor. Some integrations are [hosted on the Azure platform itself](/azure/partner-solutions/) to make integration faster and easier. |
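For example, one common pattern from the Event Hubs row above is to point a resource's diagnostic setting at an event hub so a partner SIEM can consume the stream. The following is a minimal sketch only; the resource ID and authorization rule ID are placeholders, and the diagnostic settings cmdlet and its parameter names vary between Az.Monitor versions.

```powershell
# A minimal sketch of streaming a resource's platform logs to an event hub for a partner SIEM.
# Placeholders throughout; verify cmdlet and parameter names against your Az.Monitor version.
$resourceId   = "/subscriptions/<subscription-id>/resourceGroups/myRG/providers/Microsoft.KeyVault/vaults/myVault"
$eventHubRule = "/subscriptions/<subscription-id>/resourceGroups/myRG/providers/Microsoft.EventHub/namespaces/myNamespace/authorizationRules/RootManageSharedAccessKey"

Set-AzDiagnosticSetting -ResourceId $resourceId -Name "stream-to-siem" `
    -EventHubAuthorizationRuleId $eventHubRule -Enabled $true
```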
## Next steps - [Getting started with Azure Monitor](getting-started.md)
azure-monitor Vminsights Change Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-change-analysis.md
Title: Change analysis in VM insights
-description: VM insights integration with Application Change Analysis integration allows you to view any changes made to a virtual machine that may have affected it performance.
+description: VM insights integration with Application Change Analysis allows you to view any changes made to a virtual machine that might have affected its performance.
Last updated 06/08/2022 # Change analysis in VM insights
-VM insights integration with [Application Change Analysis](../app/change-analysis.md) integration allows you to view any changes made to a virtual machine that may have affected it performance.
+VM insights integration with [Application Change Analysis](../app/change-analysis.md) allows you to view any changes made to a virtual machine that might have affected its performance.
## Overview
-Suppose you have a VM that beings running slow and want to investigate whether recent changes to its configuration could have affected its performance. You view the performance of the VM using VM insights and find that there is an increase in memory usage in the past hour. Change analysis can help you determine whether any configuration changes made around this time were the cause of this increase.
+Suppose you have a VM that's running slow and you want to investigate whether recent changes to its configuration could have affected its performance. You view the performance of the VM by using VM insights and find that there's an increase in memory usage in the past hour. Change analysis can help you determine whether any configuration changes made around this time were the cause of the increase.
-The Application Change Analysis service aggregates changes from [Azure Resource Graph](../../governance/resource-graph/how-to/get-resource-changes.md) as well as nested properties changes such as network security rules from Azure Resource Manager.
+The Application Change Analysis service aggregates changes from [Azure Resource Graph](../../governance/resource-graph/how-to/get-resource-changes.md) and nested properties changes like network security rules from Azure Resource Manager.
-## Enabling change analysis
-To onboard change analysis in VM insights, you must register the *Microsoft.ChangeAnalysis* resource provider. The first time you launch VM insights or Application Change Analysis in the Azure portal, this resource provider will automatically be registered for you. Application Change Analysis is a free service that has no performance overhead on resources.
+## Enable change analysis
+To onboard change analysis in VM insights, you must register the *Microsoft.ChangeAnalysis* resource provider. The first time you start VM insights or Application Change Analysis in the Azure portal, this resource provider is automatically registered for you. Application Change Analysis is a free service that has no performance overhead on resources.
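If you prefer to register the resource provider yourself, for example before the first portal visit, a PowerShell sketch such as the following works; it assumes the Az.Resources module and a signed-in session.

```powershell
# A minimal sketch, assuming the Az.Resources module and a signed-in session (Connect-AzAccount).
Register-AzResourceProvider -ProviderNamespace "Microsoft.ChangeAnalysis"

# Registration can take a few minutes; check its state.
Get-AzResourceProvider -ProviderNamespace "Microsoft.ChangeAnalysis" |
    Select-Object ProviderNamespace, RegistrationState
```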
## View change analysis
-Change analysis is available from the **Performance** or **Map** tab of VM insights by selecting the **Change** option.
+Change analysis is available from the **Performance** or **Map** tab of VM insights by selecting the **Change** option.
-[![Investigate changes](media/vminsights-change-analysis/investigate-changes-screenshot.png)](media/vminsights-change-analysis/investigate-changes-screenshot-zoom.png#lightbox)
+[![Screenshot that shows investigating changes.](media/vminsights-change-analysis/investigate-changes-screenshot.png)](media/vminsights-change-analysis/investigate-changes-screenshot-zoom.png#lightbox)
+Select **Investigate Changes** to open the Application Change Analysis page filtered for the VM. Review the listed changes to see if there are any that could have caused the issue. If you're unsure about a particular change, look at the **Changed by** column to identify the person who made the change.
-Click the **Investigate changes** button to launch the Application Change Analysis page filtered for the VM. You can review the listed changes to see if there are any that could have caused the issue. If you are unsure about a particular change, you can reference the **Change by** column to determine the person who made the change.
-
-[![Change details](media/vminsights-change-analysis/change-details-screenshot.png)](media/vminsights-change-analysis/change-details-screenshot.png#lightbox)
+[![Screenshot that shows the Change details screen.](media/vminsights-change-analysis/change-details-screenshot.png)](media/vminsights-change-analysis/change-details-screenshot.png#lightbox)
## Next steps - Learn more about [Application Change Analysis](../app/change-analysis.md).-- Learn more about [Azure Resource Graph](../../governance/resource-graph/how-to/get-resource-changes.md).
+- Learn more about [Azure Resource Graph](../../governance/resource-graph/how-to/get-resource-changes.md).
azure-monitor Vminsights Enable Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-powershell.md
Title: Enable VM insights using PowerShell
-description: Describes how to enable VM insights for Azure virtual machines or Virtual Machine Scale Sets using Azure PowerShell.
+ Title: Enable VM insights by using PowerShell
+description: This article describes how to enable VM insights for Azure virtual machines or virtual machine scale sets by using Azure PowerShell.
Last updated 06/08/2022
-# Enable VM insights using PowerShell
-This article describes how to enable VM insights on Azure virtual machines using PowerShell. This procedure can be used for the following:
+# Enable VM insights by using PowerShell
+This article describes how to enable VM insights on Azure virtual machines by using PowerShell. This procedure can be used for:
-- Azure virtual machine-- Azure Virtual Machine Scale Set
+- Azure Virtual Machines
+- Azure Virtual Machine Scale Sets
> [!NOTE]
-> This article only applies to the Log Analytics agent. To enable VM insights with the Azure monitor agent, use other installation methods described in [Enable VM insights overview](vminsights-enable-overview.md).
+> This article only applies to the Log Analytics agent. To enable VM insights with the Azure Monitor agent, use other installation methods described in [Enable VM insights overview](vminsights-enable-overview.md).
## Prerequisites -- [Log Analytics workspace configured for VM insights](../vm/vminsights-configure-workspace.md). -- See [Supported operating systems](./vminsights-enable-overview.md#supported-operating-systems) to ensure that the operating system of the virtual machine or Virtual Machine Scale Set you're enabling is supported. -- See [Manage the Azure Monitor agent](../agents/azure-monitor-agent-manage.md#prerequisites) for prerequisites related to Azure Monitor agent.
+You need to:
+
+- [Configure a Log Analytics workspace for VM insights](../vm/vminsights-configure-workspace.md).
+- See [Supported operating systems](./vminsights-enable-overview.md#supported-operating-systems) to ensure that the operating system of the virtual machine or virtual machine scale set you're enabling is supported.
+- See [Manage Azure Monitor Agent](../agents/azure-monitor-agent-manage.md#prerequisites) for prerequisites related to Azure Monitor Agent.
## PowerShell script
-To enable VM insights for multiple VMs or Virtual Machine Scale Sets, use the PowerShell script [Install-VMInsights.ps1](https://www.powershellgallery.com/packages/Install-VMInsights), which is available from the Azure PowerShell Gallery. This script iterates through:
+To enable VM insights for multiple VMs or virtual machine scale sets, use the PowerShell script [Install-VMInsights.ps1](https://www.powershellgallery.com/packages/Install-VMInsights). The script is available from the Azure PowerShell Gallery. This script iterates through:
-- Every virtual machine and Virtual Machine Scale Set in your subscription.
+- Every virtual machine and virtual machine scale set in your subscription.
- The scoped resource group that's specified by *ResourceGroup*.-- A single VM or Virtual Machine Scale Set that's specified by *Name*.
+- A single VM or virtual machine scale set that's specified by *Name*.
-For each virtual machine or Virtual Machine Scale Set, the script verifies whether the VM extension for the Log Analytics agent and Dependency agent is already installed. If both extensions are installed, the script tries to reinstall it. If both extensions aren't installed, the script installs them.
+For each virtual machine or virtual machine scale set, the script verifies whether the VM extensions for the Log Analytics agent and Dependency agent are already installed. If both extensions are installed, the script tries to reinstall them. If both extensions aren't installed, the script installs them.
-Verify you're using Azure PowerShell module Az version 1.0.0 or later with `Enable-AzureRM` compatibility aliases enabled. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+Verify that you're using Azure PowerShell module Az version 1.0.0 or later with `Enable-AzureRM` compatibility aliases enabled. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
To get a list of the script's argument details and example usage, run `Get-Help`.
Failed: (0)
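As a rough illustration of these setup steps, the following sketch checks the module version, signs in, downloads the script from the PowerShell Gallery, and lists its parameters. The download path is a placeholder.

```powershell
# A minimal sketch of the setup steps described above.
Get-Module -ListAvailable Az        # confirm Az 1.0.0 or later is available
Connect-AzAccount                   # required when running PowerShell locally

# Download Install-VMInsights.ps1 from the PowerShell Gallery into the current folder.
Save-Script -Name Install-VMInsights -Path .

# Show the script's parameters and example usage.
Get-Help .\Install-VMInsights.ps1 -Detailed
```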
## Next steps
-* See [Use VM insights Map](vminsights-maps.md) to view discovered application dependencies.
+* See [Use VM insights Map](vminsights-maps.md) to view discovered application dependencies.
* See [View Azure VM performance](vminsights-performance.md) to identify bottlenecks, overall utilization, and your VM's performance.
azure-monitor Vminsights Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-performance.md
Title: How to chart performance with VM insights
-description: Performance is a feature of the VM insights that automatically discovers application components on Windows and Linux systems and maps the communication between services. This article provides details on how to use it in a variety of scenarios.
+ Title: Chart performance with VM insights
+description: This article discusses the VM insights Performance feature that discovers application components on Windows and Linux systems and maps the communication between services.
Last updated 06/08/2022
-# How to chart performance with VM insights
+# Chart performance with VM insights
-VM insights includes a set of performance charts that target several key performance indicators (KPIs) to help you determine how well a virtual machine is performing. The charts show resource utilization over a period of time so you can identify bottlenecks, anomalies, or switch to a perspective listing each machine to view resource utilization based on the metric selected. While there are numerous elements to consider when dealing with performance, VM insights monitors key operating system performance indicators related to processor, memory, network adapter, and disk utilization. Performance complements the health monitoring feature and helps expose issues that indicate a possible system component failure, support tuning and optimization to achieve efficiency, or support capacity planning.
+VM insights includes a set of performance charts that target several key performance indicators to help you determine how well a virtual machine is performing. The charts show resource utilization over a period of time. You can use them to identify bottlenecks and anomalies. You can also switch to a perspective that lists each machine to view resource utilization based on the metric selected.
+
+VM insights monitors key operating system performance indicators related to processor, memory, network adapter, and disk utilization. Performance complements the health monitoring feature and helps to:
+
+- Expose issues that indicate a possible system component failure.
+- Support tuning and optimization to achieve efficiency.
+- Support capacity planning.
## Limitations
-Following are limitations in performance collection with VM insights.
+Limitations in performance collection with VM insights:
-- **Available memory** is not available in all Linux versions including Red Hat Linux (RHEL) 6 and CentOS 6. It will be available in Linux versions that use [kernel version 3.14](http://www.man7.org/linux/man-pages/man1/free.1.html) or higher. It may be available in some kernel versions between 3.0 and 3.14.-- Metrics are only available for data disks on Linux virtual machines using XFS filesystem or EXT filesystem family (EXT2, EXT3, EXT4).
+- Available memory isn't available in all Linux versions, including Red Hat Linux (RHEL) 6 and CentOS 6. It will be available in Linux versions that use [kernel version 3.14](http://www.man7.org/linux/man-pages/man1/free.1.html) or higher. It might be available in some kernel versions between 3.0 and 3.14.
+- Metrics are only available for data disks on Linux virtual machines that use XFS filesystem or EXT filesystem family (EXT2, EXT3, EXT4).
## Multi-VM perspective from Azure Monitor
-From Azure Monitor, the Performance feature provides a view of all monitored VMs deployed across workgroups in your subscriptions or in your environment. To access from Azure Monitor, perform the following steps.
+From Azure Monitor, the Performance feature provides a view of all monitored VMs deployed across work groups in your subscriptions or in your environment.
+
+To access from Azure Monitor:
-1. In the Azure portal, select **Monitor**.
-2. Choose **Virtual Machines** in the **Solutions** section.
-3. Select the **Performance** tab.
+1. In the Azure portal, select **Monitor**.
+1. In the **Solutions** section, select **Virtual Machines**.
+1. Select the **Performance** tab.
-![VM insights Performance Top N List view](media/vminsights-performance/vminsights-performance-aggview-01.png)
+![Screenshot that shows a VM insights Performance Top N List view.](media/vminsights-performance/vminsights-performance-aggview-01.png)
-On the **Top N Charts** tab, if you have more than one Log Analytics workspace, choose the workspace enabled with the solution from the **Workspace** selector at the top of the page. The **Group** selector will return subscriptions, resource groups, [computer groups](../logs/computer-groups.md), and virtual machine scale sets of computers related to the selected workspace that you can use to further filter results presented in the charts on this page and across the other pages. Your selection only applies to the Performance feature and does not carry over to Health or Map.
+On the **Top N Charts** tab, if you have more than one Log Analytics workspace, select the workspace enabled with the solution from the **Workspace** selector at the top of the page. The **Group** selector returns subscriptions, resource groups, [computer groups](../logs/computer-groups.md), and virtual machine scale sets of computers related to the selected workspace that you can use to further filter results presented in the charts on this page and across the other pages. Your selection only applies to the Performance feature and doesn't carry over to Health or Map.
-By default, the charts show the last 24 hours. Using the **TimeRange** selector, you can query for historical time ranges of up to 30 days to show how performance looked in the past.
+By default, the charts show the last 24 hours. By using the **TimeRange** selector, you can query for historical time ranges of up to 30 days to show how performance looked in the past.
-The five capacity utilization charts shown on the page are:
+Five capacity utilization charts are shown on the page:
-* CPU Utilization % - shows the top five machines with the highest average processor utilization
-* Available Memory - shows the top five machines with the lowest average amount of available memory
-* Logical Disk Space Used % - shows the top five machines with the highest average disk space used % across all disk volumes
-* Bytes Sent Rate - shows the top five machines with highest average of bytes sent
-* Bytes Receive Rate - shows the top five machines with highest average of bytes received
+* **CPU Utilization %**: Shows the top five machines with the highest average processor utilization.
+* **Available Memory**: Shows the top five machines with the lowest average amount of available memory.
+* **Logical Disk Space Used %**: Shows the top five machines with the highest average disk space used percent across all disk volumes.
+* **Bytes Sent Rate**: Shows the top five machines with the highest average of bytes sent.
+* **Bytes Receive Rate**: Shows the top five machines with the highest average of bytes received.
-Clicking on the pin icon at the upper right-hand corner of any one of the five charts will pin the selected chart to the last Azure dashboard you last viewed. From the dashboard, you can resize and reposition the chart. Selecting the chart from the dashboard will redirect you to VM insights and load the correct scope and view.
+Selecting the pushpin icon in the upper-right corner of a chart pins it to the last Azure dashboard you viewed. From the dashboard, you can resize and reposition the chart. Selecting the chart from the dashboard redirects you to VM insights and loads the correct scope and view.
-Clicking on the icon located to the left of the pin icon on any one of the five charts opens the **Top N List** view. Here you see the resource utilization for that performance metric by individual VM in a list view and which machine is trending highest.
+Select the icon to the left of the pushpin icon on a chart to open the **Top N List** view. This list view shows the resource utilization for a performance metric by individual VM. It also shows which machine is trending the highest.
-![Top N List view for a selected performance metric](media/vminsights-performance/vminsights-performance-topnlist-01.png)
+![Screenshot that shows a Top N List view for a selected performance metric.](media/vminsights-performance/vminsights-performance-topnlist-01.png)
-When you click on the virtual machine, the **Properties** pane is expanded on the right to show the properties of the item selected, such as system information reported by the operating system, properties of the Azure VM, etc. Clicking on one of the options under the **Quick Links** section will redirect you to that feature directly from the selected VM.
+When you select the virtual machine, the **Properties** pane opens on the right side. It shows properties like system information reported by the operating system and the properties of the Azure VM. Selecting an option under the **Quick Links** section redirects you to that feature directly from the selected VM.
-![Virtual Machine Properties pane](./media/vminsights-performance/vminsights-properties-pane-01.png)
+![Screenshot that shows a virtual machine Properties pane.](./media/vminsights-performance/vminsights-properties-pane-01.png)
-Switch to the **Aggregated Charts** tab to view the performance metrics filtered by average or percentiles measures.
+You can switch to the **Aggregated Charts** tab to view the performance metrics filtered by average or percentiles measured.
-![VM insights Performance Aggregate view](./media/vminsights-performance/vminsights-performance-aggview-02.png)
+![Screenshot that shows a VM insights Performance Aggregate view.](./media/vminsights-performance/vminsights-performance-aggview-02.png)
The following capacity utilization charts are provided:
-* CPU Utilization % - defaults showing the average and top 95th percentile
-* Available Memory - defaults showing the average, top 5th, and 10th percentile
-* Logical Disk Space Used % - defaults showing the average and 95th percentile
-* Bytes Sent Rate - defaults showing average bytes sent
-* Bytes Receive Rate - defaults showing average bytes received
+* **CPU Utilization %**: Defaults show the average and top 95th percentile.
+* **Available Memory**: Defaults show the average, top 5th, and 10th percentile.
+* **Logical Disk Space Used %**: Defaults show the average and 95th percentile.
+* **Bytes Sent Rate**: Defaults show the average bytes sent.
+* **Bytes Receive Rate**: Defaults show the average bytes received.
You can also change the granularity of the charts within the time range by selecting **Avg**, **Min**, **Max**, **50th**, **90th**, and **95th** in the percentile selector.
-To view the resource utilization by individual VM in a list view and see which machine is trending with highest utilization, select the **Top N List** tab. The **Top N List** page shows the top 20 machines sorted by the most utilized by 95th percentile for the metric *CPU Utilization %*. You can see more machines by selecting **Load More**, and the results expand to show the top 500 machines.
+To view the resource utilization by individual VM and see which machine is trending with highest utilization, select the **Top N List** tab. The **Top N List** page shows the top 20 machines sorted by the most utilized by 95th percentile for the metric *CPU Utilization %*. To see more machines, select **Load More**. The results expand to show the top 500 machines.
>[!NOTE]
->The list cannot show more than 500 machines at a time.
+>The list can't show more than 500 machines at a time.
>
-![Top N List page example](./media/vminsights-performance/vminsights-performance-topnlist-01.png)
+![Screenshot that shows a Top N List page example.](./media/vminsights-performance/vminsights-performance-topnlist-01.png)
-To filter the results on a specific virtual machine in the list, enter its computer name in the **Search by name** textbox.
+To filter the results on a specific virtual machine in the list, enter its computer name in the **Search by name** text box.
-If you would rather view utilization from a different performance metric, from the **Metric** drop-down list select **Available Memory**, **Logical Disk Space Used %**, **Network Received Byte/s**, or **Network Sent Byte/s** and the list updates to show utilization scoped to that metric.
+If you want to view utilization from a different performance metric, from the **Metric** dropdown list, select **Available Memory**, **Logical Disk Space Used %**, **Network Received Byte/s**, or **Network Sent Byte/s**. The list updates to show utilization scoped to that metric.
-Selecting a virtual machine from the list opens the **Properties** panel on the right-side of the page and from here you can select **Performance detail**. The **Virtual Machine Detail** page opens and is scoped to that VM, similar in experience when accessing VM Insights Performance directly from the Azure VM.
+Selecting a virtual machine from the list opens the **Properties** pane on the right side of the page. From here, you can select **Performance detail**. The **Virtual Machine Detail** page opens and is scoped to that VM. The experience is similar to accessing VM Insights Performance directly from the Azure VM.
## View performance directly from an Azure VM
-To access directly from a virtual machine, perform the following steps.
+To access directly from a virtual machine:
-1. In the Azure portal, select **Virtual Machines**.
-2. From the list, choose a VM and in the **Monitoring** section choose **Insights**.
-3. Select the **Performance** tab.
+1. In the Azure portal, select **Virtual Machines**.
+1. From the list, select a VM. In the **Monitoring** section, select **Insights**.
+1. Select the **Performance** tab.
-This page not only includes performance utilization charts, but also a table showing for each logical disk discovered, its capacity, utilization, and total average by each measure.
+This page shows performance utilization charts. It also shows a table for each logical disk discovered with its capacity, utilization, and total average by each measure.
The following capacity utilization charts are provided:
-* CPU Utilization % - defaults showing the average and top 95th percentile
-* Available Memory - defaults showing the average, top 5th, and 10th percentile
-* Logical Disk Space Used % - defaults showing the average and 95th percentile
-* Logical Disk IOPS - defaults showing the average and 95th percentile
-* Logical Disk MB/s - defaults showing the average and 95th percentile
-* Max Logical Disk Used % - defaults showing the average and 95th percentile
-* Bytes Sent Rate - defaults showing average bytes sent
-* Bytes Receive Rate - defaults showing average bytes received
+* **CPU Utilization %**: Defaults show the average and top 95th percentile.
+* **Available Memory**: Defaults show the average, top 5th, and 10th percentile.
+* **Logical Disk Space Used %**: Defaults show the average and 95th percentile.
+* **Logical Disk IOPS**: Defaults show the average and 95th percentile.
+* **Logical Disk MB/s**: Defaults show the average and 95th percentile.
+* **Max Logical Disk Used %**: Defaults show the average and 95th percentile.
+* **Bytes Sent Rate**: Defaults show the average bytes sent.
+* **Bytes Receive Rate**: Defaults show the average bytes received.
-Clicking on the pin icon at the upper right-hand corner of any one of the charts pins the selected chart to the last Azure dashboard you viewed. From the dashboard, you can resize and reposition the chart. Selecting the chart from the dashboard redirects you to VM insights and loads the performance detail view for the VM.
+Selecting the pushpin icon in the upper-right corner of a chart pins it to the last Azure dashboard you viewed. From the dashboard, you can resize and reposition the chart. Selecting the chart from the dashboard redirects you to VM insights and loads the performance detail view for the VM.
-![VM insights Performance directly from VM view](./media/vminsights-performance/vminsights-performance-directvm-01.png)
+![Screenshot that shows VM insights Performance directly from the VM view.](./media/vminsights-performance/vminsights-performance-directvm-01.png)
## View performance directly from an Azure virtual machine scale set
-To access directly from an Azure virtual machine scale set, perform the following steps.
+To access directly from an Azure virtual machine scale set:
1. In the Azure portal, select **Virtual machine scale sets**.
-2. From the list, choose a VM and in the **Monitoring** section choose **Insights** to view the **Performance** tab.
+1. From the list, select a VM.
+1. In the **Monitoring** section, select **Insights** to view the **Performance** tab.
-This page loads the Azure Monitor performance view, scoped to the selected scale set. This enables you to see the Top N Instances in the scale set across the set of monitored metrics, view the aggregate performance across the scale set, and see the trends for selected metrics across the individual instances n the scale set. Selecting an instance from the list view lets you load it's map or navigate into a detailed performance view for that instance.
+This page loads the Azure Monitor performance view scoped to the selected scale set. This view enables you to see the Top N instances in the scale set across the set of monitored metrics. You can also view the aggregate performance across the scale set. And you can see the trends for selected metrics across the individual instances in the scale set. Selecting an instance from the list view lets you load its map or move into a detailed performance view for that instance.
-Clicking on the pin icon at the upper right-hand corner of any one of the charts pins the selected chart to the last Azure dashboard you viewed. From the dashboard, you can resize and reposition the chart. Selecting the chart from the dashboard redirects you to VM insights and loads the performance detail view for the VM.
+Selecting the pushpin icon in the upper-right corner of a chart pins it to the last Azure dashboard you viewed. From the dashboard, you can resize and reposition the chart. Selecting the chart from the dashboard redirects you to VM insights and loads the performance detail view for the VM.
-![VM insights Performance directly from virtual machine scale set view](./media/vminsights-performance/vminsights-performance-directvmss-01.png)
+![Screenshot that shows VM insights Performance directly from the virtual machine scale set view.](./media/vminsights-performance/vminsights-performance-directvmss-01.png)
>[!NOTE]
->You can also access a detailed performance view for a specific instance from the Instances view for your scale set. Navigate to **Instances** under the **Settings** section, and then choose **Insights**.
--
+>You can also access a detailed performance view for a specific instance from the **Instances** view for your scale set. Under the **Settings** section, go to **Instances** and select **Insights**.
## Next steps -- Learn how to use [Workbooks](vminsights-workbooks.md) that are included with VM insights to further analyze performance and network metrics. -
+- Learn how to use [workbooks](vminsights-workbooks.md) that are included with VM insights to further analyze performance and network metrics.
- To learn about discovered application dependencies, see [View VM insights Map](vminsights-maps.md).
azure-monitor Vminsights Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-workbooks.md
Title: Create interactive reports VM insights with workbooks
+ Title: Create interactive reports with VM insights workbooks
description: Simplify complex reporting with predefined and custom parameterized workbooks for VM insights.
Last updated 05/27/2022
-# Create interactive reports VM insights with workbooks
+# Create interactive reports with VM insights workbooks
-Workbooks combine text,ΓÇ»[log queries](/azure/data-explorer/kusto/query/), metrics, and parameters into rich interactive reports. Workbooks are editable by any other team members who have access to the same Azure resources.
+Workbooks combine text,ΓÇ»[log queries](/azure/data-explorer/kusto/query/), metrics, and parameters into rich interactive reports. Workbooks can be edited by any other team members who have access to the same Azure resources.
-Workbooks are helpful for scenarios such as:
+Workbooks help you to:
-* Exploring the usage of your virtual machine when you don't know the metrics of interest in advance: CPU utilization, disk space, memory, network dependencies, etc. Unlike other usage analytics tools, workbooks let you combine multiple kinds of visualizations and analyses, making them great for this kind of free-form exploration.
-* Explaining to your team how a recently provisioned VM is performing, by showing metrics for key counters and other log events.
-* Sharing the results of a resizing experiment of your VM with other members of your team. You can explain the goals for the experiment with text, then show each usage metric and analytics queries used to evaluate the experiment, along with clear call-outs for whether each metric was above- or below-target.
-* Reporting the impact of an outage on the usage of your VM, combining data, text explanation, and a discussion of next steps to prevent outages in the future.
+* Explore the usage of your virtual machine (VM) when you don't know the metrics of interest in advance, like CPU utilization, disk space, memory, and network dependencies. Unlike other usage analytics tools, workbooks let you combine multiple kinds of visualizations and analyses, making them great for this kind of freeform exploration.
+* Explain to your team how a recently provisioned VM is performing by showing metrics for key counters and other log events.
+* Share the results of a resizing experiment of your VM with other members of your team. You can explain the goals for the experiment with text. Then you can show each usage metric and the analytics queries used to evaluate the experiment, along with clear call-outs for whether each metric was above or below target.
+* Report the impact of an outage on the usage of your VM. You can combine data, text explanation, and a discussion of next steps to prevent outages in the future.
## VM insights workbooks
-VM insights includes the following workbooks. You can use these workbooks or use them as a start to create custom workbooks to address your particular requirements.
+VM insights includes the following workbooks. You can use these workbooks or use them as a starting point to create custom workbooks to address your particular requirements.
### Single virtual machine | Workbook | Description | |-|-|
-| Performance | Provides a customizable version of the Performance view that leverages all of the Log Analytics performance counters that you have enabled. |
-| Connections | Connections provides an in-depth view of the inbound and outbound connections from your VM. |
+| Performance | Provides a customizable version of the **Performance** view that uses all the Log Analytics performance counters that you've enabled. |
+| Connections | Provides an in-depth view of the inbound and outbound connections from your VM. |
### Multiple virtual machines | Workbook | Description | |-|-|
-| Performance | Provides a customizable version of the Top N List and Charts view in a single workbook that leverages all of the Log Analytics performance counters that you have enabled.|
-| Performance counters | A Top N chart view across a wide set of performance counters. |
-| Connections | Connections provides an in-depth view of the inbound and outbound connections from your monitored VMs. |
-| Active Ports | Provides a list of the processes that have bound to the ports on the monitored VMs and their activity in the chosen timeframe. |
+| Performance | Provides a customizable version of the **Top N List and Charts** view in a single workbook that uses all the Log Analytics performance counters that you've enabled.|
+| Performance counters | Provides a **Top N Chart** view across a wide set of performance counters. |
+| Connections | Provides an in-depth view of the inbound and outbound connections from your monitored VMs. |
+| Active Ports | Provides a list of the processes that have bound to the ports on the monitored VMs and their activity in the chosen time frame. |
| Open Ports | Provides the number of ports open on your monitored VMs and the details on those open ports. |
-| Failed Connections | Display the count of failed connections on your monitored VMs, the failure trend, and if the percentage of failures is increasing over time. |
-| Security and Audit | An analysis of your TCP/IP traffic that reports on overall connections, malicious connections, where the IP endpoints reside globally. To enable all features, you will need to enable Security Detection. |
+| Failed Connections | Displays the count of failed connections on your monitored VMs, the failure trend, and whether the percentage of failures is increasing over time. |
+| Security and Audit | An analysis of your TCP/IP traffic that reports on overall connections, malicious connections, and where the IP endpoints reside globally. To enable all features, you must enable **Security Detection**. |
| TCP Traffic | A ranked report for your monitored VMs and their sent, received, and total network traffic in a grid and displayed as a trend line. |
-| Traffic Comparison | This workbook lets you compare network traffic trends for a single machine or a group of machines. |
+| Traffic Comparison | Lets you compare network traffic trends for a single machine or a group of machines. |
-## Creating a new workbook
+## Create a new workbook
-A workbook is made up of sections consisting of independently editable charts, tables, text, and input controls. To better understand workbooks, let's start by opening a template and walk through creating a custom workbook.
+A workbook is made up of sections that consist of independently editable charts, tables, text, and input controls. To better understand workbooks, let's open a template and create a custom workbook.
1. Go to the **Monitor** menu in the Azure portal.
-2. Select a virtual machine.
+1. Select a VM.
-3. On the VM insights page, select **Performance** or **Maps** tab and then select **View Workbooks** from the link on the page. From the drop-down list, select **Go to Gallery**.
+1. On the VM insights page, select the **Performance** or **Map** tab and then select **View Workbooks** from the link on the page. From the dropdown list, select **Go To Gallery**.
- :::image type="content" source="media/vminsights-workbooks/workbook-dropdown-gallery-01.png" lightbox="media/vminsights-workbooks/workbook-dropdown-gallery-01.png" alt-text="Screenshot of workbook drop-down list in V M insights.":::
+ :::image type="content" source="media/vminsights-workbooks/workbook-dropdown-gallery-01.png" lightbox="media/vminsights-workbooks/workbook-dropdown-gallery-01.png" alt-text="Screenshot that shows a workbook dropdown list in V M insights.":::
- This launches the workbook gallery with a number of prebuilt workbooks to help you get started.
+ The workbook gallery opens with prebuilt workbooks to help you get started.
-7. Create a new workbook by selecting **New**.
+1. Create a new workbook by selecting **New**.
+ :::image type="content" source="media/vminsights-workbooks/workbook-gallery-01.png" lightbox="media/vminsights-workbooks/workbook-gallery-01.png" alt-text="Screenshot that shows a workbook gallery in V M insights.":::
-## Editing workbook sections
+## Edit workbook sections
-Workbooks have two modes: **editing mode**, and **reading mode**. When a new workbook is first launched, it opens in **editing mode**. It shows all the content of the workbook, including any steps and parameters that are otherwise hidden. **Reading mode** presents a simplified report style view. Reading mode allows you to abstract away the complexity that went into creating a report while still having the underlying mechanics only a few clicks away when needed for modification.
+Workbooks have two modes: editing and reading. A new workbook opens in editing mode. This mode shows all the content of the workbook, including any steps and parameters that are otherwise hidden. Reading mode presents a simplified report-style view. Reading mode allows you to abstract away the complexity that went into creating a report while still having the underlying mechanics only a few clicks away when needed for modification.
-![Screenshot of the Virtual Machines Workbook section in Azure Monitor showing a new workbook in editing mode with editing controls highlighted.](media/vminsights-workbooks/workbook-new-workbook-editor-01.png)
+![Screenshot that shows the Virtual Machines Workbook section in Azure Monitor showing a new workbook in editing mode with editing controls highlighted.](media/vminsights-workbooks/workbook-new-workbook-editor-01.png)
-1. When you're done editing a section, click **Done Editing** in the bottom-left corner of the section.
+1. After you finish editing a section, select **Done Editing** in the lower-left corner of the section.
-2. To create a duplicate of a section, click the **Clone this section** icon. Creating duplicate sections is a great to way to iterate on a query without losing previous iterations.
+1. To create a duplicate of a section, select the **Clone this section** icon. Creating duplicate sections is a great way to iterate on a query without losing previous iterations.
-3. To move up a section in a workbook, click the **Move up** or **Move down** icon.
+1. To move up a section in a workbook, select the **Move up** or **Move down** icon.
-4. To remove a section permanently, click the **Remove** icon.
+1. To remove a section permanently, select the **Remove** icon.
-## Adding text and Markdown sections
+## Add text and Markdown sections
-Adding headings, explanations, and commentary to your workbooks helps turn a set of tables and charts into a narrative. Text sections in workbooks support the [Markdown syntax](https://daringfireball.net/projects/markdown/) for text formatting, like headings, bold, italics, and bulleted lists.
+Adding headings, explanations, and commentary to your workbooks helps turn a set of tables and charts into a narrative. Text sections in workbooks support the [Markdown syntax](https://daringfireball.net/projects/markdown/) for text formatting like headings, bold, italics, and bulleted lists.
-To add a text section to your workbook, use the **Add text** button at the bottom of the workbook, or at the bottom of any section.
+To add a text section to your workbook, select **Add text** in the lower left of the workbook or section.
-## Adding query sections
+## Add query sections
-![Query section in Workbooks](media/vminsights-workbooks/005-workbook-query-section.png)
+![Screenshot that shows the Query section in workbooks.](media/vminsights-workbooks/005-workbook-query-section.png)
-To add query section to your workbook, use the **Add query** button at the bottom of the workbook, or at the bottom of any section.
+To add a query section to your workbook, select **Add query** in the lower left of the workbook or section.
Query sections are highly flexible and can be used to answer questions like:

* How was my CPU utilization during the same time period as an increase in network traffic?
* What was the trend in available disk space over the last month?
-* How many network connection failures did my VM experience over the last two weeks?
+* How many network connection failures did my VM experience over the last two weeks?
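For example, a minimal sketch of a query that could answer the last question, assuming the workspace contains the VMConnection data that VM insights collects (the `LinksFailed` column and the 14-day window are illustrative):

```kusto
// Sum failed connection attempts per VM per day over the last two weeks.
VMConnection
| where TimeGenerated > ago(14d)
| where LinksFailed > 0
| summarize FailedConnections = sum(LinksFailed) by bin(TimeGenerated, 1d), Computer
```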
-You also aren't only limited to querying from the context of the virtual machine you launched the workbook from. You can query across multiple virtual machines, as well as Log Analytics workspaces, as long as you have access permission to those resources.
+You also aren't limited to querying from the context of the VM from which you opened the workbook. To query across multiple VMs and Log Analytics workspaces, you must have access permission to those resources.
-To include data from other Log Analytics workspaces or from a specific Application Insights app using the **workspace** identifier. To learn more about cross-resource queries, refer to the [official guidance](../logs/cross-workspace-query.md).
+To include data from other Log Analytics workspaces or from a specific Application Insights app, use the workspace identifier. To learn more about cross-resource queries, see the [official guidance](../logs/cross-workspace-query.md).
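For example, here's a sketch of a cross-workspace query. The workspace name `ContosoRetailLogs` is a placeholder; replace it with a workspace you have access to.

```kusto
// Combine Perf data from the current scope with data from another workspace.
union
    Perf,
    workspace("ContosoRetailLogs").Perf
| where CounterName == "% Processor Time"
| summarize AvgCpu = avg(CounterValue) by bin(TimeGenerated, 1h), Computer
```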
### Advanced analytic query settings
-Each section has its own advanced settings, which are accessible via the settings ![Workbooks section editing controls](media/vminsights-workbooks/006-settings.png) icon located to the right of the **Add parameters** button.
+Each section has its own advanced settings, which are accessible via the settings ![Workbooks section editing controls](media/vminsights-workbooks/006-settings.png) icon located to the right of **Add parameters**.
-![Screenshot of the Advanced Settings dialog in the Virtual Machines Workbook section of Azure Monitor. The icon that opens the dialog is highlighted.](media/vminsights-workbooks/007-settings-expanded.png)
+![Screenshot that shows the Advanced Settings dialog with the icon highlighted in the Virtual Machines Workbook section of Azure Monitor.](media/vminsights-workbooks/007-settings-expanded.png)
| Setting | Description |
| - |:--|
-| Custom width | Makes an item an arbitrary size, so you can fit many items on a single line allowing you to better organize your charts and tables into rich interactive reports. |
-| Conditionally visible | Specify to hide steps based on a parameter when in reading mode. |
-| Export a parameter| Allow a selected row in the grid or chart to cause later steps to change values or become visible. |
-| Show query when not editing | Displays the query above the chart or table even when in reading mode.
-| Show open in analytics button when not editing | Adds the blue Analytics icon to the right-hand corner of the chart to allow one-click access.|
+| Custom width | Makes an item an arbitrary size so that you can fit many items on a single line to organize your charts and tables into rich interactive reports. |
+| Conditionally visible | Specifies to hide steps based on a parameter when in reading mode. |
+| Export a parameter| Allows a selected row in the grid or chart to cause later steps to change values or become visible. |
+| Show query when not editing | Displays the query that precedes the chart or table even when in reading mode. |
+| Show open in analytics button when not editing | Adds the **Analytics** icon to the right corner of the chart to allow one-click access.|
-Most of these settings are fairly intuitive, but to understand **Export a parameter** it is better to examine a workbook that makes use of this functionality.
+Most of these settings are fairly intuitive, but to understand **Export a parameter** it's better to examine a workbook that makes use of this functionality.
-One of the prebuilt workbooks - **TCP Traffic**, provides information on connection metrics from a VM.
+One of the prebuilt workbooks, **TCP Traffic**, provides information on connection metrics from a VM.
-The first section of the workbook is based on log query data. The second section is also based on log query data, but selecting a row in the first table will interactively update the contents of the charts:
+The first section of the workbook is based on log query data. The second section is also based on log query data, but selecting a row in the first table interactively updates the contents of the charts.
-![Screenshot of the Virtual Machines section in Azure Monitor showing the prebuilt workbook TCP Traffic.](media/vminsights-workbooks/008-workbook-tcp-traffic.png)
+![Screenshot that shows the Virtual Machines section in Azure Monitor showing the prebuilt workbook TCP Traffic.](media/vminsights-workbooks/008-workbook-tcp-traffic.png)
This behavior is possible through use of the **When an item is selected, export a parameter** advanced setting, which is enabled in the table's log query.
-![Screenshot of the Advanced Settings dialog for a Virtual Machines workbook with the "When an item is selected, export a parameter" option checked.](media/vminsights-workbooks/009-settings-export.png)
+![Screenshot that shows the Advanced Settings dialog for a Virtual Machines workbook with the "When an item is selected, export a parameter" option checked.](media/vminsights-workbooks/009-settings-export.png)
-The second log query then utilizes the exported values when a row is selected to create a set of values that are then used by the section heading and charts. If no row is selected, it hides the section heading and charts.
+The second log query then utilizes the exported values when a row is selected to create a set of values that are used by the section heading and charts. If no row is selected, it hides the section heading and charts.
For example, the hidden parameter in the second section uses the following reference from the row selected in the grid:
```
VMConnection
| summarize Sent = sum(BytesSent), Received = sum(BytesReceived) by bin(TimeGenerated, {TimeRange:grain})
```
-## Adding metrics sections
+## Add metrics sections
-Metrics sections give you full access to incorporate Azure Monitor metrics data into your interactive reports. In VM insights, the prebuilt workbooks will typically contain analytic query data rather than metric data. You may choose to create workbooks with metric data, allowing you to take full advantage of the best of both features all in one place. You also have the ability to pull in metric data from resources in any of the subscriptions you have access to.
+Metrics sections give you full access to incorporate Azure Monitor metrics data into your interactive reports. In VM insights, the prebuilt workbooks typically contain analytic query data rather than metric data. You can create workbooks with metric data, which allows you to take full advantage of the best of both features all in one place. You also have the ability to pull in metric data from resources in any of the subscriptions to which you have access.
-Here is an example of virtual machine data being pulled into a workbook to provide a grid visualization of CPU performance:
+Here's an example of VM data being pulled into a workbook to provide a grid visualization of CPU performance.
-![Screenshot of the metrics section of a virtual machine workbook in Azure Monitor. The CPU performance for each virtual machine is shown graphically.](media/vminsights-workbooks/010-metrics-grid.png)
+![Screenshot that shows the metrics section of a virtual machine workbook in Azure Monitor. The C P U performance for each virtual machine is shown graphically.](media/vminsights-workbooks/010-metrics-grid.png)
-## Adding parameter sections
+## Add parameter sections
Workbook parameters allow you to change values in the workbook without having to manually edit the query or text sections. This removes the need to understand the underlying analytics query language and greatly expands the potential audience of workbook-based reporting.
-The values of parameters are replaced in query, text or other parameter sections by putting the name of the parameter in braces, like ``{parameterName}``. Parameter names are limited to similar rules as JavaScript identifiers, alphabetic characters or underscores, followed by alphanumeric characters or underscores. For example, **a1** is allowed, but **1a** is not allowed.
+The values of parameters are replaced in query, text, or other parameter sections by putting the name of the parameter in braces, like ``{parameterName}``. Parameter names follow rules similar to JavaScript identifiers: an alphabetic character or underscore, followed by alphanumeric characters or underscores. For example, *a1* is allowed but *1a* isn't allowed.
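As an illustration, the following sketch uses a hypothetical text parameter named `CpuThreshold`; the workbook substitutes its value wherever `{CpuThreshold}` appears before the query runs:

```kusto
// Show VMs whose average CPU exceeds the value supplied by the CpuThreshold parameter.
Perf
| where ObjectName == "Processor" and CounterName == "% Processor Time"
| summarize AvgCpu = avg(CounterValue) by Computer
| where AvgCpu > {CpuThreshold}
```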
-Parameters are linear, starting from the top of a workbook and flowing down to later steps. Parameters declared later in a workbook can override parameters that were declared earlier. This also lets parameters that use queries to access the values from parameters defined earlier. Within a parameter's step itself, parameters are also linear, left to right, where parameters to the right can depend on a parameter declared earlier in that same step.
-
-There are four different types of parameters, which are currently supported:
+Parameters are linear, starting from the top of a workbook and flowing down to later steps. Parameters declared later in a workbook can override parameters that were declared earlier. Parameters that use queries can access the values from parameters defined earlier. Within a parameter's step itself, parameters are also linear, left to right, where parameters to the right can depend on a parameter declared earlier in that same step.
+
+Four different types of parameters are currently supported:
| Parameter | Description |
| - |:--|
-| Text | Allows the user to edit a text box, and you can optionally supply a query to fill in the default value. |
+| Text | Allows the user to edit a text box. You can optionally supply a query to fill in the default value. |
| Drop down | Allows the user to choose from a set of values. |
-| Time range picker| Allows the user to choose from a predefined set of time range values, or pick from a custom time range.|
+| Time range picker| Allows the user to choose from a predefined set of time range values or pick from a custom time range.|
| Resource picker | Allows the user to choose from the resources selected for the workbook.|
-### Using a text parameter
+### Use a text parameter
-The value a user types in the text box is replaced directly in the query, with no escaping or quoting. If the value you need is a string, the query should have quotes around the parameter (like **'{parameter}'**).
+The value a user enters in the text box is replaced directly in the query, with no escaping or quoting. If the value you need is a string, the query should have single quotation marks around the parameter. An example is **'{parameter}'**.
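Here's a minimal sketch, assuming a hypothetical text parameter named `ComputerName` that holds a string value:

```kusto
// The single quotation marks are part of the query because the substituted value is a string.
Heartbeat
| where Computer == '{ComputerName}'
| summarize LastHeartbeat = max(TimeGenerated)
```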
-The text parameter allows the value in a text box to be used anywhere. It can be a table name, column name, function name, operator, etc. The text parameter type has a setting **Get default value from analytics query**, which allows the workbook author to use a query to populate the default value for that text box.
+The text parameter allows the value in a text box to be used anywhere. It can be a table name, column name, function name, or operator. The text parameter type has a setting **Get default value from analytics query**, which allows the workbook author to use a query to populate the default value for that text box.
-When using the default value from a log query, only the first value of the first row (row 0, column 0) is used as the default value. Therefore it is recommended to limit your query to return just one row and one column. Any other data returned by the query is ignored.
+When the default value is used from a log query, only the first value of the first row (row 0, column 0) is used as the default value. For this reason, we recommend that you limit your query to return only one row and one column. Any other data returned by the query is ignored.
-Whatever value the query returns will be replaced directly with no escaping or quoting. If the query returns no rows, the result of the parameter is either an empty string (if the parameter is not required) or undefined (if the parameter is required).
+Whatever value the query returns is replaced directly with no escaping or quoting. If the query returns no rows, the result of the parameter is either an empty string (if the parameter isn't required) or undefined (if the parameter is required).
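For example, a default-value query might look like the following sketch, which returns exactly one row and one column (the choice of the Heartbeat table is illustrative):

```kusto
// Use the most recently reporting computer as the default value.
Heartbeat
| top 1 by TimeGenerated desc
| project Computer
```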
-### Using a drop-down
+### Use a dropdown
-The dropdown parameter type lets you create a drop-down control, allowing the selection of one or many values.
+The dropdown parameter type lets you create a dropdown control, which is used to select one or many values.
-The dropdown is populated by a log query or JSON. If the query returns one column, the values in that column are both the value and the label in the drop-down control. If the query returns two columns, the first column is the value, and the second column is the label shown in the drop-down. If the query returns three columns, the third column is used to indicate the default selection in that drop-down. This column can be any type, but the simplest is to use bool or numeric types, where 0 is false, and 1 is true.
+The dropdown is populated by a log query or JSON. If the query returns one column, the values in that column are both the value and the label in the dropdown control. If the query returns two columns, the first column is the value, and the second column is the label shown in the dropdown. If the query returns three columns, the third column is used to indicate the default selection in that dropdown. This column can be any type, but the simplest is to use bool or numeric types, where 0 is false and 1 is true.
-If the column is a string type, null/empty string is considered false, and any other value is considered true. For single selection drop-downs, the first value with a true value is used as the default selection. For multiple selection drop-downs, all values with a true value are used as the default selected set. The items in the drop-down are shown in whatever order the query returned rows.
+If the column is a string type, null/empty string is considered false. Any other value is considered true. For single-selection dropdowns, the first value with a true value is used as the default selection. For multiple-selection dropdowns, all values with a true value are used as the default selected set. The items in the dropdown are shown in whatever order the query returned rows.
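Here's a sketch of a three-column dropdown query, where the first column is the value, the second is the label, and the third marks the default selection (the Heartbeat table and the alphabetical default are illustrative):

```kusto
// Value, label, and default-selection columns for a dropdown parameter.
Heartbeat
| distinct Computer
| order by Computer asc
| extend Label = Computer, IsDefault = iff(row_number() == 1, 1, 0)
| project Computer, Label, IsDefault
```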
-Let's look at the parameters present in the Connections Overview report. Click the edit symbol next to **Direction**.
+Let's look at the parameters present in the Connections Overview report. Select the edit symbol next to **Direction**.
-![Screenshot of the section for adding and editing report parameters in Azure Monitor. The Edit icon for the Direction parameter is selected.](media/vminsights-workbooks/011-workbook-using-dropdown.png)
+![Screenshot that shows the section for adding and editing report parameters in Azure Monitor. The Edit icon for the Direction parameter is selected.](media/vminsights-workbooks/011-workbook-using-dropdown.png)
-This will launch the **Edit Parameter** menu item.
+This action opens the **Edit Parameter** pane.
-![Screenshot of the Edit Parameter dialog. The Parameter name is Direction, the Parameter type is Drop down, and Get data from JSON is selected.](media/vminsights-workbooks/012-workbook-edit-parameter.png)
+![Screenshot that shows the Edit Parameter pane. The Parameter name is Direction, the Parameter type is Drop down, and Get data from JSON is selected.](media/vminsights-workbooks/012-workbook-edit-parameter.png)
-The JSON lets you generate an arbitrary table populated with content. For example, the following JSON generates two values in the drop-down:
+The JSON lets you generate an arbitrary table populated with content. For example, the following JSON generates two values in the dropdown:
```
[
    ...
]
```
-A more applicable example is using a drop-down to pick from a set of performance counters by name:
+A more applicable example is using a dropdown list to pick from a set of performance counters by name:
```
Perf
| project Counter = pack('counter', CounterName, 'object', ObjectName), CounterName, group = ObjectName
```
-The query will display results as follows:
+The query shows the following results:
-![Perf counter dropdown](media/vminsights-workbooks/013-workbook-edit-parameter-perf-counters.png)
+![Screenshot that shows the Perf counter dropdown.](media/vminsights-workbooks/013-workbook-edit-parameter-perf-counters.png)
-Drop-downs are incredibly powerful tools for customizing and creating interactive reports.
+Dropdown lists are powerful tools you can use to customize and create interactive reports.
### Time range parameters
-While you can make your own custom time range parameter via the dropdown parameter type, you can also use the out-of-box time range parameter type if you don't need the same degree of flexibility.
+You can make your own custom time range parameter via the dropdown parameter type. You can also use the out-of-box time range parameter type if you don't need the same degree of flexibility.
-Time range parameter types have 15 default ranges that go from five minutes to the last 90 days. There is also an option to allow custom time range selection, which allows the operator of the report to choose explicit start and stop values for the time range.
+Time range parameter types have 15 default ranges that go from five minutes to the last 90 days. There's also an option to allow custom time range selection. The operator of the report can choose explicit start and stop values for the time range.
### Resource picker
-The resource picker parameter type gives you the ability to scope your report to certain types of resources. An example of a prebuilt workbook that leverages the resource picker type is the **Performance** workbook.
+The resource picker parameter type gives you the ability to scope your report to certain types of resources. An example of a prebuilt workbook that uses the resource picker type is the **Performance** workbook.
-![Workspaces dropdown](media/vminsights-workbooks/014-workbook-edit-parameter-workspaces.png)
+![Screenshot that shows the Workspaces dropdown.](media/vminsights-workbooks/014-workbook-edit-parameter-workspaces.png)
-## Saving and sharing workbooks with your team
+## Save and share workbooks with your team
-Workbooks are saved within a Log Analytics Workspace or a virtual machine resource, depending on how you access the workbooks gallery. The workbook can be saved to the **My Reports** section that's private to you or in the **Shared Reports** section that's accessible to everyone with access to the resource. To view all the workbooks in the resource, click the **Open** button in the action bar.
+Workbooks are saved within a Log Analytics workspace or a VM resource, depending on how you access the workbooks gallery. The workbook can be saved to the **My Reports** section that's private to you or in the **Shared Reports** section that's accessible to everyone with access to the resource. To view all the workbooks in the resource, select **Open**.
To share a workbook that's currently in **My Reports**:
-1. Click **Open** in the action bar
-2. Click the "..." button beside the workbook you want to share
-3. Click **Move to Shared Reports**.
+1. Select **Open**.
+1. Select the ellipsis (**...**) next to the workbook you want to share.
+1. Select **Move to Shared Reports**.
-To share a workbook with a link or via email, click **Share** in the action bar. Keep in mind that recipients of the link need access to this resource in the Azure portal to view the workbook. To make edits, recipients need at least Contributor permissions for the resource.
+To share a workbook with a link or via email, select **Share**. Keep in mind that recipients of the link need access to this resource in the Azure portal to view the workbook. To make edits, recipients need at least Contributor permissions for the resource.
-To pin a link to a workbook to an Azure Dashboard:
+To pin a link to a workbook to an Azure dashboard:
-1. Click **Open** in the action bar
-2. Click the "..." button beside the workbook you want to pin
-3. Click **Pin to dashboard**.
+1. Select **Open**.
+1. Select the ellipsis (**...**) next to the workbook you want to pin.
+1. Select **Pin to dashboard**.
## Next steps -- To identify limitations and overall VM performance, see [View Azure VM Performance](vminsights-performance.md).--- To learn about discovered application dependencies, see [View VM insights Map](vminsights-maps.md).
+- To identify limitations and overall VM performance, see [View Azure VM performance](vminsights-performance.md).
+- To learn about discovered application dependencies, see [View VM insights map](vminsights-maps.md).
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
In the topology illustrated above, the on-premises network is connected to a hub
* [Delegate a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md) * [Configure network features for an Azure NetApp Files volume](configure-network-features.md) * [Virtual network peering](../virtual-network/virtual-network-peering-overview.md)
+* [Configure Virtual WAN for Azure NetApp Files](configure-virtual-wan.md)
azure-portal How To Manage Azure Support Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/supportability/how-to-manage-azure-support-request.md
Title: Manage an Azure support request
description: Learn about viewing support requests and how to send messages, upload files, and manage options. tags: billing Previously updated : 09/01/2022 Last updated : 03/21/2023 # Manage an Azure support request
View the details and status of support requests by going to **Help + support** >
:::image type="content" source="media/how-to-manage-azure-support-request/all-requests-lower.png" alt-text="All support requests":::
-On this page, you can search, filter, and sort support requests. Select a support request to view details, including its severity and any messages associated with the request.
+On this page, you can search, filter, and sort support requests. By default, you may only see recent open requests. Change the filter options to select a longer period of time or to include support requests that have already been closed.
+
+To view details about a support request, including its severity and any messages associated with the request, select it from the list.
## Send a message
On this page, you can search, filter, and sort support requests. Select a suppor
1. On the **All support requests** page, select the support request.
-1. On the **Support Request** page, select **Change**.
+1. On the **Support Request** page, select **Change severity**.
1. The Azure portal shows one of two screens, depending on whether your request is already assigned to a support engineer:
To change your **Advanced diagnostic information** selection after the request h
1. On the **All support requests** page, select the support request.
-1. On the **Support Request** page, look for **Advanced diagnostic information** and then select **Change**.
+1. On the **Support Request** page, select **Advanced diagnostic information** near the top of the screen.
-1. Select **Yes** or **No**, then select **OK** to confirm.
+1. Select **Yes** or **No**, then select **Submit**.
:::image type="content" source="media/how-to-manage-azure-support-request/grant-permission-manage.png" alt-text="Grant permissions for diagnostic information":::
You can use the file upload option to upload a diagnostic file, such as a [brows
1. On the **All support requests** page, select the support request.
-1. On the **Support Request** page, select the **File upload** box, then browse to find your file and select **Upload**.
+1. On the **Support Request** page, select the **Upload file** box, then browse to find your file and select **Upload**.
### File upload guidelines
Follow these guidelines when you use the file upload option:
## Close a support request
-To close a support request, [send a message](#send-a-message) and let us know you'd like to close the request.
+To close a support request, select the **Close request** option near the top of the screen. When prompted to confirm, select **Close**. You'll receive a confirmation email when your request is closed.
## Reopen a closed request
-To reopen a closed support request, create a [new message](#send-a-message), which automatically reopens the request.
+To reopen a closed support request, select **Reopen request** near the top of the screen. When prompted to confirm, select **Reopen request**. Your support request will then be active again.
## Cancel a support plan
azure-resource-manager Bicep Config Linter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config-linter.md
Title: Linter settings for Bicep config description: Describes how to customize configuration values for the Bicep linter Previously updated : 01/30/2023 Last updated : 03/16/2023 # Add linter settings in the Bicep config file
The following example shows the rules that are available for configuration.
"simplify-interpolation": { "level": "warning" },
+ "simplify-json-null": {
+ "level": "warning"
+ },
"use-parent-property": { "level": "warning" },
azure-resource-manager Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/install.md
Title: Set up Bicep development and deployment environments description: How to configure Bicep development and deployment environments Previously updated : 01/10/2023 Last updated : 03/17/2023
If you'd like to try the latest pre-release bits of Bicep before they're release
> [!WARNING] > These pre-release builds are much more likely to have known or unknown bugs.
+## Install the NuGet package
+
+The Bicep team has made the [Azure.Bicep.Core NuGet package](https://www.nuget.org/packages/Azure.Bicep.Core) publicly available on nuget.org. While it's public, it isn't a supported package. Any dependency you take on this package is at your own risk, and we reserve the right to push breaking changes to it at any time.
+
+For more information about installing and consuming NuGet packages, see [Consume packages](/nuget/consume-packages/overview-and-workflow).
+ ## Next steps For more information about using Visual Studio Code and the Bicep extension, see [Quickstart: Create Bicep files with Visual Studio Code](./quickstart-create-bicep-use-visual-studio-code.md).
azure-resource-manager Linter Rule Simplify Json Null https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-simplify-json-null.md
+
+ Title: Linter rule - simplify JSON null
+description: Linter rule - simplify JSON null
+ Last updated : 03/20/2023++
+# Linter rule - simplify JSON null
+
+This rule finds `json('null')`.
+
+## Linter rule code
+
+Use the following value in the [Bicep configuration file](bicep-config-linter.md) to customize rule settings:
+
+`simplify-json-null`
+
+## Solution
+
+The following example fails this test because `json('null')` is used:
+
+```bicep
+@description('The name of the API Management service instance')
+param apiManagementServiceName string = 'apiservice${uniqueString(resourceGroup().id)}'
+
+@description('The email address of the owner of the service')
+@minLength(1)
+param publisherEmail string
+
+@description('The name of the owner of the service')
+@minLength(1)
+param publisherName string
+
+@description('The pricing tier of this API Management service')
+@allowed([
+ 'Premium'
+])
+param sku string = 'Premium'
+
+@description('The instance size of this API Management service.')
+param skuCount int = 3
+
+@description('Location for all resources.')
+param location string = resourceGroup().location
+
+@description('Zone numbers e.g. 1,2,3.')
+param availabilityZones array = [
+ '1'
+ '2'
+ '3'
+]
+
+resource apiManagementService 'Microsoft.ApiManagement/service@2022-08-01' = {
+ name: apiManagementServiceName
+ location: location
+ zones: ((length(availabilityZones) == 0) ? json('null') : availabilityZones)
+ sku: {
+ name: sku
+ capacity: skuCount
+ }
+ identity: {
+ type: 'SystemAssigned'
+ }
+ properties: {
+ publisherEmail: publisherEmail
+ publisherName: publisherName
+ }
+}
+```
+
+You can simplify the syntax by replacing `json('null')` by `null`:
+
+```bicep
+@description('The name of the API Management service instance')
+param apiManagementServiceName string = 'apiservice${uniqueString(resourceGroup().id)}'
+
+@description('The email address of the owner of the service')
+@minLength(1)
+param publisherEmail string
+
+@description('The name of the owner of the service')
+@minLength(1)
+param publisherName string
+
+@description('The pricing tier of this API Management service')
+@allowed([
+ 'Premium'
+])
+param sku string = 'Premium'
+
+@description('The instance size of this API Management service.')
+param skuCount int = 3
+
+@description('Location for all resources.')
+param location string = resourceGroup().location
+
+@description('Zone numbers e.g. 1,2,3.')
+param availabilityZones array = [
+ '1'
+ '2'
+ '3'
+]
+
+resource apiManagementService 'Microsoft.ApiManagement/service@2022-08-01' = {
+ name: apiManagementServiceName
+ location: location
+ zones: ((length(availabilityZones) == 0) ? null : availabilityZones)
+ sku: {
+ name: sku
+ capacity: skuCount
+ }
+ identity: {
+ type: 'SystemAssigned'
+ }
+ properties: {
+ publisherEmail: publisherEmail
+ publisherName: publisherName
+ }
+}
+```
+
+You can also simplify the syntax by selecting **Quick Fix**, as shown in the following screenshot:
++
+## Next steps
+
+For more information about the linter, see [Use Bicep linter](./linter.md).
azure-resource-manager Linter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter.md
Title: Use Bicep linter description: Learn how to use Bicep linter. Previously updated : 01/30/2023 Last updated : 03/16/2023 # Use Bicep linter
The default set of linter rules is minimal and taken from [arm-ttk test cases](.
- [secure-params-in-nested-deploy](./linter-rule-secure-params-in-nested-deploy.md) - [secure-secrets-in-params](./linter-rule-secure-secrets-in-parameters.md) - [simplify-interpolation](./linter-rule-simplify-interpolation.md)
+- [simplify-json-null](./linter-rule-simplify-json-null.md)
- [use-parent-property](./linter-rule-use-parent-property.md) - [use-protectedsettings-for-commandtoexecute-secrets](./linter-rule-use-protectedsettings-for-commandtoexecute-secrets.md) - [use-recent-api-versions](./linter-rule-use-recent-api-versions.md)
azure-resource-manager Deploy Service Catalog Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/deploy-service-catalog-quickstart.md
description: Describes how to deploy a service catalog's managed application for
Previously updated : 03/16/2023 Last updated : 03/21/2023 # Quickstart: Deploy a service catalog managed application
You also need to create a name for the managed application resource group. The r
Run the following commands to create the managed resource group's name.

```azurepowershell
-$mrgprefix = 'rg-sampleManagedApplication-'
+$mrgprefix = 'mrg-sampleManagedApplication-'
$mrgtimestamp = Get-Date -UFormat "%Y%m%d%H%M%S"
$mrgname = $mrgprefix + $mrgtimestamp
$mrgname
```
-The `$mrgprefix` and `$mrgtimestamp` variables are concatenated to create a managed resource group name like _rg-sampleManagedApplication-20230310100148_ that's stored in the `$mrgname` variable. The name's format `rg-{definitionName}-{dateTime}` is the same format as the portal's default value. You use the `$mrgname` variable's value when you deploy the managed application.
+The `$mrgprefix` and `$mrgtimestamp` variables are concatenated to create a managed resource group name like _mrg-sampleManagedApplication-20230310100148_ that's stored in the `$mrgname` variable. The name's format `mrg-{definitionName}-{dateTime}` is the same format as the portal's default value. You use the `$mrgname` variable's value when you deploy the managed application.
You need to provide several parameters to the deployment command for the managed application. You can use a JSON formatted string or create a JSON file. In this example, we use a JSON formatted string. The PowerShell escape character for the quote marks is the backtick (`` ` ``) character. The backtick is also used for line continuation so that commands can use multiple lines.
You also need to create a name and path for the managed application resource gro
Run the following commands to create the managed resource group's path.

```azurecli
-mrgprefix='rg-sampleManagedApplication-'
+mrgprefix='mrg-sampleManagedApplication-'
mrgtimestamp=$(date +%Y%m%d%H%M%S)
mrgname="${mrgprefix}${mrgtimestamp}"
subid=$(az account list --query [].id --output tsv)
mrgpath="/subscriptions/$subid/resourceGroups/$mrgname"
```
-The `mrgprefix` and `mrgtimestamp` variables are concatenated to create a managed resource group name like _rg-sampleManagedApplication-20230310100148_ that's stored in the `mrgname` variable. The name's format:`rg-{definitionName}-{dateTime}` is the same format as the portal's default value. The `mrgname` and `subid` variable's are concatenated to create the `mrgpath` variable value that creates the managed resource group during the deployment.
+The `mrgprefix` and `mrgtimestamp` variables are concatenated to create a managed resource group name like _mrg-sampleManagedApplication-20230310100148_ that's stored in the `mrgname` variable. The name's format `mrg-{definitionName}-{dateTime}` is the same format as the portal's default value. The `mrgname` and `subid` variables are concatenated to create the `mrgpath` variable value that creates the managed resource group during the deployment.
You need to provide several parameters to the deployment command for the managed application. You can use a JSON formatted string or create a JSON file. In this example, we use a JSON formatted string. The Bash escape character for the quote marks is the backslash (`\`) character. The backslash is also used for line continuation so that commands can use multiple lines.
The parameters to create the managed resources:
- **Resource group**: Select the resource group. For this example, create a resource group named _applicationGroup_. - **Region**: Select the location where you want to deploy the resource. - **Application Name**: Enter a name for your managed application. For this example, use _demoManagedApplication_.
- - **Application resources Resource group name**: The name of the managed resource group that contains the resources that are deployed for the managed application. The default name is in the format `rg-{definitionName}-{dateTime}` but you can change the name.
+ - **Managed Resource Group**: The name of the managed resource group that contains the resources that are deployed for the managed application. The default name is in the format `mrg-{definitionName}-{dateTime}` but you can change the name.
1. Provide values for the **Web App settings** tab and select **Next: Storage settings**.
To review the managed resource group's deny assignments, use the Azure portal or
# [Portal](#tab/azure-portal)
-Go to the managed resource group with the name prefix **rg-sampleManagedApplication** and select **Overview** to display the resources that were deployed. The resource group contains an App Service, App Service plan, and storage account.
+Go to the managed resource group with the name prefix **mrg-sampleManagedApplication** and select **Overview** to display the resources that were deployed. The resource group contains an App Service, App Service plan, and storage account.
:::image type="content" source="./media/deploy-service-catalog-quickstart/view-managed-resource-group.png" alt-text="Screenshot that shows the managed resource group that contains the resources deployed by the managed application definition.":::
The managed resource group and each resource created by the managed application
To see the role assignment from the Azure portal:
-1. Go to your **rg-sampleManagedApplication** resource group.
+1. Go to your **mrg-sampleManagedApplication** resource group.
1. Select **Access Control (IAM)** > **Role assignments**. You can also view the resource's **Deny assignments**.
The role assignment gives the application's publisher access to manage the stora
## Clean up resources
-When you're finished with the managed application, you can delete the resource groups and that removes all the resources you created. For example, in this quickstart you created the resource groups _applicationGroup_ and a managed resource group with the prefix _rg-sampleManagedApplication_.
+When you're finished with the managed application, you can delete the resource groups and that removes all the resources you created. For example, in this quickstart you created the resource groups _applicationGroup_ and a managed resource group with the prefix _mrg-sampleManagedApplication_.
# [PowerShell](#tab/azure-powershell)
azure-resource-manager Publish Service Catalog App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-service-catalog-app.md
Title: Create and publish Azure Managed Application in service catalog
-description: Describes how to create and publish an Azure Managed Application in your service catalog that's intended for members of your organization.
+description: Describes how to create and publish an Azure Managed Application in your service catalog using Azure PowerShell, Azure CLI, or Azure portal.
Previously updated : 03/01/2023 Last updated : 03/17/2023 # Quickstart: Create and publish an Azure Managed Application definition
To publish a managed application to your service catalog, do the following tasks
- Create an Azure Resource Manager template (ARM template) that defines the resources to deploy with the managed application. - Define the user interface elements for the portal when deploying the managed application. - Create a _.zip_ package that contains the required JSON files. The _.zip_ package file has a 120-MB limit for a service catalog's managed application definition.-- Deploy the managed application definition so it's available in your service catalog.
+- Publish the managed application definition so it's available in your service catalog.
If your managed application definition is more than 120 MB or if you want to use your own storage account for your organization's compliance reasons, go to [Quickstart: Bring your own storage to create and publish an Azure Managed Application definition](publish-service-catalog-bring-your-own-storage.md).
Upload _app.zip_ to an Azure storage account so you can use it when you deploy t
# [PowerShell](#tab/azure-powershell)
+In Visual Studio Code, open a new PowerShell terminal and sign in to your Azure subscription.
+
+```azurepowershell
+Connect-AzAccount
+```
+
+The command opens your default browser and prompts you to sign in to Azure. For more information, go to [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
+ ```azurepowershell New-AzResourceGroup -Name packageStorageGroup -Location westus3
Set-AzStorageBlobContent `
# [Azure CLI](#tab/azure-cli)
+In Visual Studio Code, open a new Bash terminal session and sign in to your Azure subscription. If you have Git installed, select Git Bash.
+
+```azurecli
+az login
+```
+
+The command opens your default browser and prompts you to sign in to Azure. For more information, go to [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli).
+ ```azurecli az group create --name packageStorageGroup --location westus3
az storage blob upload \
For more information about storage authentication, see [Choose how to authorize access to blob data with Azure CLI](../../storage/blobs/authorize-data-operations-cli.md).
+# [Portal](#tab/azure-portal)
+
+Create a storage account in a new resource group:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Select **Create a resource** from the portal's **Home** page.
+
+ :::image type="content" source="./media/publish-service-catalog-app/create-resource.png" alt-text="Screenshot of Azure portal home page with create a resource highlighted.":::
+
+1. Search for _storage account_ and select it from the available options.
+1. Select **Create** on the **Storage accounts** page.
+
+ :::image type="content" source="./media/publish-service-catalog-app/create-storage-account.png" alt-text="Screenshot of the storage accounts page with the create button highlighted.":::
+
+1. On the **Basics** tab, enter the required information.
+
+ :::image type="content" source="./media/publish-service-catalog-app/create-storage-account-basics.png" alt-text="Screenshot of the Basics tab on the Azure form to create a storage account.":::
+
+ - **Resource group**: Select **Create new** to create the _packageStorageGroup_ resource group.
+ - **Storage account name**: Enter a unique storage account name. The storage account name must be globally unique across Azure and the length must be 3-24 characters with only lowercase letters and numbers.
+ - **Region**: _West US3_
+ - **Performance**: _Standard_
+ - **Redundancy**: _Locally-redundant storage (LRS)_.
+
+1. Accept the defaults on the other tabs.
+1. Select **Review** and then select **Create**.
+1. Select **Go to resource** to go to the storage account.
+
+Create a container and upload the _app.zip_ file:
+
+1. Go to **Data storage** and select **Containers**.
+
+ :::image type="content" source="./media/publish-service-catalog-app/create-new-container.png" alt-text="Screenshot of the storage account's screen to create a new container.":::
+
+1. Configure the container's properties and select **Create**.
+
+ :::image type="content" source="./media/publish-service-catalog-app/create-new-container-properties.png" alt-text="Screenshot of the new container screen to enter a name and public access level.":::
+
+ - **Name**: _appcontainer_.
+ - **Public access level**: Select _Blob_.
+
+1. Select _appcontainer_.
+1. Select **Upload** and follow the prompts to upload your _app.zip_ file to the container.
+
+ :::image type="content" source="./media/publish-service-catalog-app/upload-zip-file.png" alt-text="Screenshot of the appcontainer to upload the zip file to your storage account.":::
+
+ You can drag and drop the file to the portal or browse to the file's location on your computer.
+
+1. After the file is uploaded, select _app.zip_ in the container.
+1. Copy the _app.zip_ file's URL from the **Overview** > **URL**.
+
+ :::image type="content" source="./media/publish-service-catalog-app/copy-file-url.png" alt-text="Screenshot of the zip file's URL with copy button highlighted.":::
+
+Make a note of the _app.zip_ file's URL because you need it to create the managed application definition.
+ ## Create the managed application definition
-In this section you get identity information from Azure Active Directory, create a resource group, and create the managed application definition.
+In this section, you get identity information from Azure Active Directory, create a resource group, and deploy the managed application definition.
### Get group ID and role definition ID The next step is to select a user, security group, or application for managing the resources for the customer. This identity has permissions on the managed resource group according to the assigned role. The role can be any Azure built-in role like Owner or Contributor. +
+# [PowerShell](#tab/azure-powershell)
+ This example uses a security group, and your Azure Active Directory account should be a member of the group. To get the group's object ID, replace the placeholder `managedAppDemo` with your group's name. You use this variable's value when you deploy the managed application definition. To create a new Azure Active Directory group, go to [Manage Azure Active Directory groups and group membership](../../active-directory/fundamentals/how-to-manage-groups.md).
-# [PowerShell](#tab/azure-powershell)
```azurepowershell
$principalid=(Get-AzADGroup -DisplayName managedAppDemo).Id
```
-# [Azure CLI](#tab/azure-cli)
+Next, get the role definition ID of the Azure built-in role you want to grant access to the user, group, or application. You use this variable's value when you deploy the managed application definition.
-```azurecli
-principalid=$(az ad group show --group managedAppDemo --query id --output tsv)
+```azurepowershell
+$roleid=(Get-AzRoleDefinition -Name Owner).Id
``` -
+# [Azure CLI](#tab/azure-cli)
-Next, get the role definition ID of the Azure built-in role you want to grant access to the user, group, or application. You use this variable's value when you deploy the managed application definition.
+This example uses a security group, and your Azure Active Directory account should be a member of the group. To get the group's object ID, replace the placeholder `managedAppDemo` with your group's name. You use this variable's value when you deploy the managed application definition.
-# [PowerShell](#tab/azure-powershell)
+To create a new Azure Active Directory group, go to [Manage Azure Active Directory groups and group membership](../../active-directory/fundamentals/how-to-manage-groups.md).
-```azurepowershell
-$roleid=(Get-AzRoleDefinition -Name Owner).Id
+```azurecli
+principalid=$(az ad group show --group managedAppDemo --query id --output tsv)
```
-# [Azure CLI](#tab/azure-cli)
+Next, get the role definition ID of the Azure built-in role you want to grant access to the user, group, or application. You use this variable's value when you deploy the managed application definition.
```azurecli
roleid=$(az role definition list --name Owner --query [].name --output tsv)
```
+# [Portal](#tab/azure-portal)
-### Create the managed application definition
+In the portal, the group ID and role ID are configured when you publish the managed application definition.
-Create a resource group for your managed application definition.
++
+### Publish the managed application definition
# [PowerShell](#tab/azure-powershell)
+Create a resource group for your managed application definition.
+ ```azurepowershell New-AzResourceGroup -Name appDefinitionGroup -Location westus3 ```
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli
-az group create --name appDefinitionGroup --location westus3
-```
---
-Create the managed application definition in the resource group.
-
-The `blob` command that's run from Azure PowerShell or Azure CLI creates a variable that's used to get the URL for the package _.zip_ file. That variable is used in the command that creates the managed application definition.
-
-# [PowerShell](#tab/azure-powershell)
+The `blob` command creates a variable to store the URL for the package _.zip_ file. That variable is used in the command that creates the managed application definition.
```azurepowershell $blob = Get-AzStorageBlob -Container appcontainer -Blob app.zip -Context $ctx
New-AzManagedApplicationDefinition `
-PackageFileUri $blob.ICloudBlob.StorageUri.PrimaryUri.AbsoluteUri ```
+When the command completes, you have a managed application definition in your resource group.
+
+Some of the parameters used in the preceding example are:
+
+- `ResourceGroupName`: The name of the resource group where the managed application definition is created.
+- `LockLevel`: The `lockLevel` on the managed resource group prevents the customer from performing undesirable operations on this resource group. Currently, `ReadOnly` is the only supported lock level. `ReadOnly` specifies that the customer can only read the resources present in the managed resource group. The publisher identities that are granted access to the managed resource group are exempt from the lock level.
+- `Authorization`: Describes the principal ID and the role definition ID that are used to grant permission to the managed resource group.
+ - `"${principalid}:$roleid"` or you can use curly braces for each variable `"${principalid}:${roleid}"`.
+ - Use a comma to separate multiple values: `"${principalid1}:$roleid1", "${principalid2}:$roleid2"`.
+- `PackageFileUri`: The location of a _.zip_ package file that contains the required files.
+ # [Azure CLI](#tab/azure-cli)
-In the `blob` command's `account-name` parameter, replace the placeholder `demostorageaccount` with your unique storage account name.
+Create a resource group for your managed application definition.
+
+```azurecli
+az group create --name appDefinitionGroup --location westus3
+```
+
+In the `blob` command's `account-name` parameter, replace the placeholder `demostorageaccount` with your unique storage account name. The `blob` command creates a variable to store the URL for the package _.zip_ file. That variable is used in the command that creates the managed application definition.
```azurecli blob=$(az storage blob url \
az managedapp definition create \
--package-file-uri "$blob" ``` -- When the command completes, you have a managed application definition in your resource group. Some of the parameters used in the preceding example are: -- **resource group**: The name of the resource group where the managed application definition is created.-- **lock level**: The `lockLevel` on the managed resource group prevents the customer from performing undesirable operations on this resource group. Currently, `ReadOnly` is the only supported lock level. `ReadOnly` specifies that the customer can only read the resources present in the managed resource group. The publisher identities that are granted access to the managed resource group are exempt from the lock level.-- **authorizations**: Describes the principal ID and the role definition ID that are used to grant permission to the managed resource group.
+- `resource-group`: The name of the resource group where the managed application definition is created.
+- `lock-level`: The `lockLevel` on the managed resource group prevents the customer from performing undesirable operations on this resource group. Currently, `ReadOnly` is the only supported lock level. `ReadOnly` specifies that the customer can only read the resources present in the managed resource group. The publisher identities that are granted access to the managed resource group are exempt from the lock level.
+- `authorizations`: Describes the principal ID and the role definition ID that are used to grant permission to the managed resource group.
+ - `"$principalid:$roleid"` or you can use curly braces like `"${principalid}:${roleid}"`.
+ - Use a space to separate multiple values: `"$principalid1:$roleid1" "$principalid2:$roleid2"`.
+- `package-file-uri`: The location of a _.zip_ package file that contains the required files.
+
+# [Portal](#tab/azure-portal)
+
+To publish a managed application definition from the Azure portal, use the following steps.
- - **Azure PowerShell**: `"${principalid}:$roleid"` or you can use curly braces for each variable `"${principalid}:${roleid}"`. Use a comma to separate multiple values: `"${principalid1}:$roleid1", "${principalid2}:$roleid2"`.
- - **Azure CLI**: `"$principalid:$roleid"` or you can use curly braces as shown in PowerShell. Use a space to separate multiple values: `"$principalid1:$roleid1" "$principalid2:$roleid2"`.
+1. Select **Create a resource** from the portal's **Home** page.
+1. Search for _Service Catalog Managed Application Definition_ and select it from the available options.
+1. Select **Create** from the **Service Catalog Managed Application Definition** page.
-- **package file URI**: The location of a _.zip_ package file that contains the required files.
+ :::image type="content" source="./media/publish-service-catalog-app/create-service-catalog-definition.png" alt-text="Screenshot of the Service Catalog Managed Application Definition page with the create button highlighted.":::
+
+1. On the **Basics** tab, enter the following information and select **Next: Package**:
+
+ :::image type="content" source="./media/publish-service-catalog-app/create-service-catalog-definition-basics.png" alt-text="Screenshot of the Basics tab on the form to create a service catalog definition. ":::
+
+ - **Project details**:
+ - Select your subscription name.
+ - Create a new resource group named _appDefinitionGroup_.
+ - **Instance details**:
+ - **Name**: Enter a name like _instance-name_. The name isn't used in the definition but the form requires an entry.
+ - **Region**: _West US3_
+ - **Application details**:
+ - **Name**: _sampleManagedApplication_
+ - **Display name**: _Sample managed application_
+ - **Description**: _Sample managed application that deploys web resources_
+
+1. On the **Package** tab, enter the **Package file uri** for your _app.zip_ file.
+1. Ignore the **Management settings** tab.
+1. On the **Authentication and lock level** tab, enter the following information and then select **Review + create**.
+
+ :::image type="content" source="./media/publish-service-catalog-app/create-service-catalog-definition-authentication.png" alt-text="Screenshot of the authentication and lock level for the managed application definition.":::
+
+ - **Lock level**: Select _Read Only_.
+ - Select **Add members**.
+ - **Roles**: Select _Owner_.
+ - **Select principals**: Select your group's name like _managedAppDemo_.
+
+ The **Lock level** on the managed resource group prevents the customer from performing undesirable operations on this resource group. Currently, `Read Only` is the only supported lock level. `Read Only` specifies that the customer can only read the resources present in the managed resource group. The publisher identities that are granted access to the managed resource group are exempt from the lock level.
+
+1. After **Validation Passed** is displayed, select **Create**.
+
+ :::image type="content" source="./media/publish-service-catalog-app/create-service-catalog-definition-validation.png" alt-text="Screenshot of portal that shows validation passed for the managed application definition.":::
+
+When the deployment is complete, you have a managed application definition in your resource group.
++ ## Make sure users can see your definition
You have access to the managed application definition, but you want to make sure
## Next steps
-You've published the managed application definition. Now, learn how to deploy an instance of that definition.
+You've published the managed application definition. The next step is to learn how to deploy an instance of that definition.
> [!div class="nextstepaction"] > [Quickstart: Deploy a service catalog managed application](deploy-service-catalog-quickstart.md)
azure-resource-manager Key Vault Parameter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/key-vault-parameter.md
description: Shows how to pass a secret from a key vault as a parameter during d
Last updated 06/18/2021 - # Use Azure Key Vault to pass secure parameter value during deployment
Instead of putting a secure value (like a password) directly in your template or parameter file, you can retrieve the value from an [Azure Key Vault](../../key-vault/general/overview.md) during a deployment. You retrieve the value by referencing the key vault and secret in your parameter file. The value is never exposed because you only reference its key vault ID. > [!IMPORTANT]
-> This article focuses on how to pass a sensitive value as a template parameter. When the secret is passed as a parameter, the key vault can exist in a different subscription than the resource group you're deploying to.
+> This article focuses on how to pass a sensitive value as a template parameter. When the secret is passed as a parameter, the key vault can exist in a different subscription than the resource group you're deploying to.
> > This article doesn't cover how to set a virtual machine property to a certificate's URL in a key vault. For a quickstart template of that scenario, see [Install a certificate from Azure Key Vault on a Virtual Machine](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/vm-winrm-keyvault-windows).
azure-resource-manager Test Toolkit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/test-toolkit.md
Title: ARM template test toolkit description: Describes how to run the Azure Resource Manager template (ARM template) test toolkit on your template. The toolkit lets you see if you have implemented recommended practices. Previously updated : 11/16/2022 Last updated : 03/21/2023
To learn more about the ARM template test toolkit, and for hands-on guidance, se
1. If you don't already have PowerShell, [install PowerShell on Windows](/powershell/scripting/install/installing-powershell-core-on-windows).
-1. [Download the latest .zip file](https://aka.ms/arm-ttk-latest) for the test toolkit and extract it.
+1. [Download the latest .zip file](https://github.com/Azure/arm-ttk/releases) for the test toolkit and extract it.
1. Start PowerShell.
To learn more about the ARM template test toolkit, and for hands-on guidance, se
1. If you don't already have PowerShell, [install PowerShell on Linux](/powershell/scripting/install/installing-powershell-core-on-linux).
-1. [Download the latest .zip file](https://aka.ms/arm-ttk-latest) for the test toolkit and extract it.
+1. [Download the latest .zip file](https://github.com/Azure/arm-ttk/releases) for the test toolkit and extract it.
1. Start PowerShell.
To learn more about the ARM template test toolkit, and for hands-on guidance, se
brew install coreutils ```
-1. [Download the latest .zip file](https://aka.ms/arm-ttk-latest) for the test toolkit and extract it.
+1. [Download the latest .zip file](https://github.com/Azure/arm-ttk/releases) for the test toolkit and extract it.
1. Start PowerShell.
azure-video-indexer Connect Classic Account To Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-classic-account-to-arm.md
Title: Connect a classic Azure Video Indexer account to ARM
description: This topic explains how to connect an existing classic paid Azure Video Indexer account to an ARM-based account Previously updated : 05/04/2022 Last updated : 03/20/2023
This article shows how to connect an existing classic paid Azure Video Indexer account to an Azure Resource Manager (ARM)-based (recommended) account. To create a new ARM-based account, see [create a new account](create-account-portal.md). To understand the Azure Video Indexer account types, review [account types](accounts-overview.md).
-In this article, we demonstrate options of connecting your **existing** Azure Video Indexer account to an [ARM][docs-arm-overview]-based account.
+In this article, we demonstrate options for connecting your **existing** Azure Video Indexer account to an [ARM][docs-arm-overview]-based account. You can also view the following video.
+
+> [!VIDEO https://www.microsoft.com/videoplayer/embed/RW10iby]
## Prerequisites
azure-video-indexer Storage Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/storage-behind-firewall.md
Title: Use Video Indexer with storage behind firewall description: This article gives an overview how to configure Azure Video Indexer to use storage behind firewall. Previously updated : 03/02/2023 Last updated : 03/21/2023
When you create a Video Indexer account, you must associate it with a Media Serv
If you want to use a firewall to secure your storage account and enable trusted storage, [Managed Identities](/azure/media-services/latest/concept-managed-identities) authentication that allows Video Indexer access through the firewall is the preferred option. It allows Video Indexer and Media Services to access the storage account that has been configured without needing public access for [trusted storage access.](../storage/common/storage-network-security.md?tabs=azure-portal#grant-access-to-trusted-azure-services)
+> [!IMPORTANT]
+> When you lock down your storage accounts and disable public access, be aware that the client device you use to download the video source file through the Video Indexer portal is the source IP address that the storage account sees and allows or denies, depending on your storage account's network configuration. For example, if you access the Video Indexer portal from your home network and download the video source file, a SAS URL to the storage account is created and your device initiates the request, so the storage account sees your home IP address as the source IP. If you didn't add an exception for that IP address, you can't access the SAS URL to the source video. Work with your network or storage administrator on a network strategy, for example, using your corporate network, a VPN, or Private Link.
+ Follow these steps to enable Managed Identity for Media Services and Storage and then lock your storage account. It's assumed that you already created a Video Indexer account and associated it with a Media Services and Storage account. ## Assign the Managed Identity and role
Follow these steps to enable Managed Identity for Media Services and Storage and
:::image type="content" source="./media/storage-behind-firewall/trusted-service-storage-lock-select-exceptions.png" alt-text="Screenshot of how to disable public access for your storage account and enable exception for trusted services from the Azure portal."::: 1. Under **Exceptions**, make sure that **Allow Azure services on the trusted services list to access this storage account** is selected. +
+## Upload from locked storage account
+
+When you upload a file to Video Indexer, you can provide a link to the video by using a SAS locator. If the storage account that hosts the video isn't publicly accessible, Video Indexer must use the Managed Identity and Trusted Service approach. Because the service can't determine whether a SAS URL points to a locked storage account (the same applies to the storage account connected to Media Services), you need to explicitly set the query parameter `useManagedIdentityToDownloadVideo` to `true` in the [upload-video API call](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video). You also need to assign the `Azure Storage : Storage Blob Data Owner` role on this storage account, as you did for the storage account connected to Media Services in the previous section.
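+
+As a rough illustration, the following Python sketch shows one way to pass the `useManagedIdentityToDownloadVideo` query parameter. It assumes the `requests` package; the location, account ID, access token, and SAS URL values are placeholders, and the other parameter names follow the Upload Video API reference.
+
+```python
+import requests
+
+# Placeholder values: replace with your account's location, ID, access token,
+# and the SAS URL of the source video in the locked storage account.
+location = "trial"
+account_id = "<account-id>"
+access_token = "<account-access-token>"
+video_sas_url = "<sas-url-to-source-video>"
+
+upload_url = f"https://api.videoindexer.ai/{location}/Accounts/{account_id}/Videos"
+params = {
+    "accessToken": access_token,
+    "name": "sample-video",
+    "videoUrl": video_sas_url,
+    # Tells Video Indexer to download the source file with its managed identity
+    # instead of anonymous access, so the locked storage account accepts the request.
+    "useManagedIdentityToDownloadVideo": "true",
+}
+
+response = requests.post(upload_url, params=params)
+response.raise_for_status()
+print(response.json())
+```
+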
+ ## Summary This concludes the tutorial. With these steps you've completed the following activities:
This concludes the tutorial. With these steps you've completed the following act
## Next steps
-[Disaster recovery](video-indexer-disaster-recovery.md)
+[Disaster recovery](video-indexer-disaster-recovery.md)
azure-video-indexer Upload Index Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/upload-index-videos.md
Title: Upload and index videos with Azure Video Indexer description: Learn two methods for uploading and indexing videos by using Azure Video Indexer. Previously updated : 11/15/2021 Last updated : 03/20/2023
After Azure Video Indexer is done analyzing, you get an email with a link to you
You can use the [Upload Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) API to upload and index your videos based on a URL. The code sample that follows includes the commented-out code that shows how to upload the byte array.
+You can also view the following video.
+
+> [!VIDEO https://www.microsoft.com/videoplayer/embed/RW10CsK]
+ > [!NOTE] > Before you proceed, make sure to review [API recommendations](video-indexer-use-apis.md#recommendations).
azure-vmware Configure Identity Source Vcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-identity-source-vcenter.md
In this article, you learn how to:
> [!div class="checklist"] >
-> * Export the certificate for LDAPS authentication
-> * Upload the LDAPS certificate to blob storage and generate a SAS URL
+> * (Optional) Export the certificate for LDAPS authentication
+> * (Optional) Upload the LDAPS certificate to blob storage and generate a SAS URL
> * Configure NSX-T DNS for resolution to your Active Directory Domain > * Add Active Directory over (Secure) LDAPS (LDAP over SSL) or (unsecure) LDAP > * Add existing AD group to cloudadmin group
In this article, you learn how to:
> * Remove AD group from the cloudadmin role > * Remove existing external identity sources
+>[!NOTE]
+>[Export the certificate for LDAPS authentication](#optional-export-the-certificate-for-ldaps-authentication) and [Upload the LDAPS certificate to blob storage and generate a SAS URL](#optional-upload-the-ldaps-certificate-to-blob-storage-and-generate-a-sas-url) are optional steps. If the **SSLCertificatesSasUrl** parameter isn't provided, the certificate(s) are now downloaded automatically from the domain controller(s) through the **PrimaryUrl** and/or **SecondaryUrl** parameters. You can still provide **SSLCertificatesSasUrl** and follow the optional steps to manually export and upload the certificate(s).
+ ## Prerequisites - Connectivity from your Active Directory network to your Azure VMware Solution private cloud must be operational.
In this article, you learn how to:
- You need to have a valid certificate. To create a certificate, follow the steps shown in [create a certificate for secure LDAP](../active-directory-domain-services/tutorial-configure-ldaps.md#create-a-certificate-for-secure-ldap). Make sure the certificate meets the requirements that are listed after the steps you used to create a certificate for secure LDAP. >[!NOTE] >Self-signed certificates are not recommended for production environments.
- - [Export the certificate for LDAPS authentication](#export-the-certificate-for-ldaps-authentication) and upload it to an Azure Storage account as blob storage. Then, you'll need to [grant access to Azure Storage resources using shared access signature (SAS)](../storage/common/storage-sas-overview.md).
+ - Optional: The certificate(s) will be downloaded from the domain controller(s) automatically through the parameter(s) **PrimaryUrl** and/or **SecondaryUrl** if the parameter **SSLCertificatesSasUrl** is not provided. If you prefer to manually export and upload the certificate(s), please [export the certificate for LDAPS authentication](#optional-export-the-certificate-for-ldaps-authentication) and upload it to an Azure Storage account as blob storage. Then, you'll need to [grant access to Azure Storage resources using shared access signature (SAS)](../storage/common/storage-sas-overview.md).
- Ensure Azure VMware Solution has DNS resolution configured to your on-premises AD. Enable DNS Forwarder from Azure portal. See [Configure DNS forwarder for Azure VMware Solution](configure-dns-azure-vmware-solution.md) for further information. >[!NOTE] >For more information about LDAPS and certificate issuance, consult your security or identity management team.
-## Export the certificate for LDAPS authentication
+## (Optional) Export the certificate for LDAPS authentication
First, verify that the certificate used for LDAPS is valid. If you don't already have a certificate, follow the steps to [create a certificate for secure LDAP](../active-directory-domain-services/tutorial-configure-ldaps.md#create-a-certificate-for-secure-ldap) before you continue.
Now proceed to export the certificate
>[!NOTE] >If more than one domain controller is LDAPS enabled, repeat the export procedure on the additional domain controller(s) to also export the corresponding certificate(s). Be aware that you can only reference two LDAPS servers in the `New-LDAPSIdentitySource` Run Command. If the certificate is a wildcard certificate, for example ***.avsdemo.net** you only need to export the certificate from one of the domain controllers.
-## Upload the LDAPS certificate to blob storage and generate a SAS URL
+## (Optional) Upload the LDAPS certificate to blob storage and generate a SAS URL
- Upload the certificate file (.cer format) you just exported to an Azure Storage account as blob storage. Then [grant access to Azure Storage resources using shared access signature (SAS)](../storage/common/storage-sas-overview.md).
In your Azure VMware Solution private cloud, you'll run the `New-LDAPSIdentitySo
| **Field** | **Value** | | | | | **GroupName** | The group in the external identity source that gives the cloudadmin access. For example, **avs-admins**. |
- | **CertificateSAS** | Path to SAS strings with the certificates for authentication to the AD source. If you're using multiple certificates, separate each SAS string with a comma. For example, **pathtocert1,pathtocert2**. |
+ | **SSLCertificatesSasUrl** | Path to SAS strings with the certificates for authentication to the AD source. If you're using multiple certificates, separate each SAS string with a comma. For example, **pathtocert1,pathtocert2**. |
| **Credential** | The domain username and password used for authentication with the AD source (not cloudadmin). The user must be in the **username@avslab.local** format. | | **BaseDNGroups** | Where to look for groups, for example, **CN=group1, DC=avsldap,DC=local**. Base DN is needed to use LDAP Authentication. | | **BaseDNUsers** | Where to look for valid users, for example, **CN=users,DC=avsldap,DC=local**. Base DN is needed to use LDAP Authentication. |
azure-vmware Deploy Arc For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-arc-for-azure-vmware-solution.md
Appendix 1 shows proxy URLs required by the Azure Arc-enabled private cloud. The
- [Google Container Registry](http://gcr.io/) - [Red Hat Quay.io](http://quay.io/)
+- [Docker](https://hub.docker.com/)
+- [Harbor](https://goharbor.io/)
+- [Container Registry](https://container-registry.com/)
azure-vmware Install Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/install-vmware-hcx.md
Continue to the next tutorial to configure the VMware HCX Connector. After you'
>[Configure VMware HCX in Azure VMware Solution](configure-vmware-hcx.md)
+>[Understanding HCX Network Underlay Requirements](https://communities.vmware.com/t5/HCX-Documents/HCX-Network-Underlay-Characterization-and-Performance-Outcomes-1/ta-p/2873360)
+ >[VMware blog series - cloud migration](https://blogs.vmware.com/vsphere/2019/10/cloud-migration-series-part-2.html) >[Uninstall VMware HCX in Azure VMware Solution](uninstall-vmware-hcx.md)
azure-web-pubsub Resource Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/resource-faq.md
Previously updated : 11/11/2021 Last updated : 03/21/2023 # Azure Web PubSub service FAQ
Azure Web PubSub service is more suitable for situations where:
## Where does my data reside?
-Azure Web PubSub service works as a data processor service and doesn't store any customer data. Azure Web PubSub service processes customer data within the region the customer deploys the service instance in. If you use Azure Web PubSub service together with other Azure services, like Azure Storage for diagnostics, see [this white paper](https://azure.microsoft.com/resources/achieving-compliant-data-residency-and-security-with-azure/) for guidance about how to keep data residency in Azure regions.
+Azure Web PubSub service doesn't store any customer data. If you use Azure Web PubSub service together with other Azure services, like Azure Storage for diagnostics, see [this white paper](https://azure.microsoft.com/resources/achieving-compliant-data-residency-and-security-with-azure/) for guidance about how to keep data residency in Azure regions.
backup Backup Azure Enhanced Soft Delete About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-enhanced-soft-delete-about.md
Title: Overview of enhanced soft delete for Azure Backup (preview)
description: This article gives an overview of enhanced soft delete for Azure Backup. Previously updated : 03/06/2023 Last updated : 03/21/2023
The key benefits of enhanced soft delete are:
## Supported regions
-Enhanced soft delete is currently available in the following regions: East US, West US, West US 2, West Central US, Japan East, Brazil South, Australia East, and North Europe.
+Enhanced soft delete is available in all Azure public regions.
## Supported scenarios
cognitive-services Use Case Alt Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/use-case-alt-text.md
Previously updated : 07/22/2022 Last updated : 03/17/2023
-# Overview: Generate alt text of images with Image Analysis
+# Overview: Generate image alt-text with Image Analysis
-Grow your customer base by making your products and services more accessible. Generate a description of an image in human-readable language, using complete sentences. Computer Vision's algorithms generate various descriptions based on the objects identified in the image. The descriptions are each evaluated, and a confidence score is generated. You then get the list of descriptions, ordered from highest confidence score to lowest.
+## What is alt text?
-## Image captioning example
+Alt text, or alternative text, is an HTML attribute added to the `<img>` tag that displays images on an application or web page. It looks like this in plain HTML code:
-The following screenshot is an example of automatically generated alt text for an image.
+`<img src="elephant.jpg" alt="An elephant in a grassland">`
+Alt text enables website owners to describe an image in plain text. These image descriptions improve accessibility by enabling screen readers such as Microsoft Narrator, JAWS, and NVDA to accurately communicate image content to their visually impaired and blind users.
-## Adding alt text to images
+Alt text is also vital for image search engine optimization (SEO). It helps search engines understand the visual content in your images. The search engine is then better able to include and rank your website in search results when users search for the content in your website.
-Alternative text (alt text) is descriptive text that conveys the meaning and context of a visual item in a digital setting, such as on an app or web page. When screen readers such as Microsoft Narrator, JAWS, and NVDA reach digital content with alt text, they read the alt text aloud, allowing people to better understand what is on the screen. Well written, descriptive alt text dramatically reduces ambiguity and improves the user experience. Alt text needs to convey the purpose and meaning of an image, which requires understanding and interpretation in addition to object detection.
+## Auto-generate alt text with Image Analysis
-## Key features
+Image Analysis offers image captioning models that generate one-sentence descriptions of image visual content. You can use these AI generated captions as alt text for your images.
-- Human-readable captions with confidence: Generate a description of an entire image in human-readable language, using complete sentences. Computer Vision's algorithms generate various descriptions based on the objects identified in the image. The descriptions are each evaluated and a confidence score generated. A list is then returned ordered from highest confidence score to lowest. -- Multiple captions available: You can specify the number of possible captions it will generate and choose the one that works best for your business.
-## Benefits for your business
+Auto-generated caption: "An elephant in a grassland."
-Accessibility is the design of products, devices, services, vehicles, or environments to be usable by people with disabilities. Accessibility has been an increasingly important topic in product and service development as it makes digital experiences available to more people in the world. Accessibility will boost your business in multiple ways:
+Microsoft's own products such as PowerPoint, Word, and the Edge browser use image captioning by Image Analysis to generate alt text.
-- **Improve user experience**: by adding alt text, you make the information in images available to users who are blind or have low vision, as well as users who can't load the images due to internet connection. -- **Make images more discoverable and searchable**: adding captions and tags to images will help search engine crawlers find your images and rank them higher in search results.-- **Meet legal compliance**: there may potentially be legal repercussions for business that aren't accessibility-compliant. Building products with higher accessibility helps the business avoid potential risks of legal action now and in the future.
-
+## Benefits for your website
-## Next Steps
-The following tutorial provides a complete solution to generate alternative text of images on web applications.
+- **Improve accessibility and user experience for blind and low-vision users**. Alt Text makes visual information in images available to screen readers used by blind and low-vision users.
+- **Meet legal compliance requirements**. Some websites may be legally required to remove all accessibility barriers. Using alt text for accessibility helps website owners minimize risk of legal action now and in the future.
+- **Make your website more discoverable and searchable**. Image alt text helps search engine crawlers find images on your website more easily and rank them higher in search results.
-> [!div class="nextstepaction"]
-> [Tutorial: Use Computer Vision to generate image metadata](./Tutorials/storage-lab-tutorial.md)
+## Frequently Asked Questions
+
+### What languages are image captions available in?
+
+Image captions are available in English, Chinese, Portuguese, Japanese, and Spanish in the Image Analysis 3.2 API. In the Image Analysis 4.0 API (preview), image captions are only available in English.
+
+### What confidence threshold should I use?
+
+To ensure accurate alt text for all images, you can choose to only accept captions above a certain confidence level. The right confidence level varies for each user depending on the type of images and usage scenario.
+In general, we advise a confidence threshold of `0.4` for the Image Analysis 3.2 API and of `0.0` for the Image Analysis 4.0 API (preview).
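+
+The following minimal sketch, which assumes the Image Analysis 3.2 client library for Python (`azure-cognitiveservices-vision-computervision`) and placeholder endpoint, key, and image URL values, shows one way to request a caption and apply a confidence threshold before using it as alt text:
+
+```python
+from azure.cognitiveservices.vision.computervision import ComputerVisionClient
+from msrest.authentication import CognitiveServicesCredentials
+
+# Placeholder values: replace with your Computer Vision resource endpoint and key.
+endpoint = "https://<your-resource-name>.cognitiveservices.azure.com/"
+key = "<your-key>"
+image_url = "https://example.com/elephant.jpg"
+
+client = ComputerVisionClient(endpoint, CognitiveServicesCredentials(key))
+
+CONFIDENCE_THRESHOLD = 0.4  # suggested threshold for the Image Analysis 3.2 API
+
+# Request a single caption for the image.
+analysis = client.describe_image(image_url, max_candidates=1, language="en")
+
+if analysis.captions and analysis.captions[0].confidence >= CONFIDENCE_THRESHOLD:
+    alt_text = analysis.captions[0].text
+else:
+    alt_text = "Image"  # fall back to a generic value or route to human review
+
+print(alt_text)
+```
+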
+### What can I do about embarrassing or erroneous captions?
+On rare occasions, image captions can contain embarrassing errors, such as labeling a male-identifying person as a "woman" or labeling an adult woman as a "girl". We encourage users to consider using the latest Image Analysis 4.0 API (preview) which eliminates some errors by supporting gender-neutral captions.
+
+Please report any embarrassing or offensive captions by going to the [Azure portal](https://ms.portal.azure.com/#home) and navigating to the **Feedback** button in the top right.
+
+## Next Steps
+Follow a quickstart to begin automatically generating alt text by using image captioning on Image Analysis.
+
+> [!div class="nextstepaction"]
+> [Image Analysis quickstart](./quickstarts-sdk/image-analysis-client-library-40.md)
cognitive-services Chatgpt Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/chatgpt-quickstart.md
Title: 'Quickstart - Using the ChatGPT API'
+ Title: 'Quickstart - Get started using ChatGPT (Preview) and GPT-4 (Preview) with Azure OpenAI Service'
-description: Walkthrough on how to get started with Azure OpenAI Service ChatGPT API.
+description: Walkthrough on how to get started with ChatGPT and GPT-4 on Azure OpenAI Service.
Previously updated : 02/07/2023 Last updated : 03/21/2023 zone_pivot_groups: openai-quickstart recommendations: false
-# Quickstart: Get started using ChatGPT with Azure OpenAI Service
+# Quickstart: Get started using ChatGPT (preview) and GPT-4 (preview) with Azure OpenAI Service
Use this article to get started using Azure OpenAI.
cognitive-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md
description: Learn about the different model capabilities that are available with Azure OpenAI. Previously updated : 03/09/2023 Last updated : 03/21/2023
Azure OpenAI provides access to many different models, grouped by family and cap
| Model family | Description | |--|--|
-| [GPT-3](#gpt-3-models) | A series of models that can understand and generate natural language. This includes the new [ChatGPT model](#chatgpt-gpt-35-turbo). |
+| [GPT-4](#gpt-4-models) | A set of models that improve on GPT-3.5 and can understand as well as generate natural language and code. **These models are currently in preview.**|
+| [GPT-3](#gpt-3-models) | A series of models that can understand and generate natural language. This includes the new [ChatGPT model (preview)](#chatgpt-gpt-35-turbo-preview). |
| [Codex](#codex-models) | A series of models that can understand and generate code, including translating natural language to code. | | [Embeddings](#embeddings-models) | A set of models that can understand and use embeddings. An embedding is a special format of data representation that can be easily utilized by machine learning models and algorithms. The embedding is an information dense representation of the semantic meaning of a piece of text. Currently, we offer three families of Embeddings models for different functionalities: similarity, text search, and code search. |
You can get a list of models that are available for both inference and fine-tuni
## Finding the right model
-We recommend starting with the most capable model in a model family to confirm whether the model capabilities meet your requirements. Then you can stay with that model or move to a model with lower capability and cost, optimizing around that model's capabilities.
+We recommend starting with the most capable model in a model family to confirm whether the model capabilities meet your requirements. Then you can stay with that model or move to a model with lower capability and cost, optimizing around that model's capabilities.
+
+## GPT-4 models (preview)
+
+ GPT-4 can solve difficult problems with greater accuracy than any of OpenAI's previous models. Like gpt-35-turbo, GPT-4 is optimized for chat but works well for traditional completions tasks.
+
+ These models are currently in preview. For access, existing Azure OpenAI customers can [apply by filling out this form](https://aka.ms/oai/get-gpt4).
+- `gpt-4`
+- `gpt-4-32k`
+
+The `gpt-4` model supports up to 8,192 tokens per request, and the `gpt-4-32k` model supports up to 32,768 tokens.
## GPT-3 models
Ada is usually the fastest model and can perform tasks like parsing text, addres
**Use for**: Parsing text, simple classification, address correction, keywords
-### ChatGPT (gpt-35-turbo)
+### ChatGPT (gpt-35-turbo) (preview)
The ChatGPT model (gpt-35-turbo) is a language model designed for conversational interfaces and the model behaves differently than previous GPT-3 models. Previous models were text-in and text-out, meaning they accepted a prompt string and returned a completion to append to the prompt. However, the ChatGPT model is conversation-in and message-out. The model expects a prompt string formatted in a specific chat-like transcript format, and returns a completion that represents a model-written message in the chat.
-The ChatGPT model uses the same completion API that you use for other models like text-davinci-002, but it requires a unique prompt format. It's important to use the new prompt format to get the best results. Without the right prompts, the model tends to be verbose and provides less useful responses. To learn more check out our [in-depth how-to](../how-to/chatgpt.md).
+To learn more about the ChatGPT model and how to interact with the Chat API check out our [in-depth how-to](../how-to/chatgpt.md).
## Codex models
When using our embeddings models, keep in mind their limitations and risks.
| davinci<sup>1</sup> | Yes | No | N/A | East US<sup>2</sup>, South Central US, West Europe<sup>2</sup> | 2,049 | Oct 2019| | text-davinci-001 | Yes | No | South Central US, West Europe | N/A | | | | text-davinci-002 | Yes | No | East US, South Central US, West Europe | N/A | 4,097 | Jun 2021 |
-| text-davinci-003 | Yes | No | East US | N/A | 4,097 | Jun 2021 |
+| text-davinci-003 | Yes | No | East US, West Europe | N/A | 4,097 | Jun 2021 |
| text-davinci-fine-tune-002<sup>1</sup> | Yes | No | N/A | East US, West Europe<sup>2</sup> | | |
-| gpt-35-turbo<sup>3</sup> (ChatGPT) | Yes | No | N/A | East US, South Central US | 4,096 | Sep 2021
+| gpt-35-turbo<sup>3</sup> (ChatGPT) (preview) | Yes | No | East US, South Central US | N/A | 4,096 | Sep 2021
<sup>1</sup> The model is available by request only. Currently we aren't accepting new requests to use the model. <br><sup>2</sup> East US and West Europe are currently unavailable for new customers to fine-tune due to high demand. Please use US South Central region for fine-tuning.
-<br><sup>3</sup> Currently, only version `"0301"` of this model is available. This version of the model will be deprecated on 8/1/2023 in favor of newer version of the gpt-35-model. See [ChatGPT model versioning](../how-to/chatgpt.md#model-versioning) for more details.
+<br><sup>3</sup> Currently, only version `0301` of this model is available. This version of the model will be deprecated on 8/1/2023 in favor of newer version of the gpt-35-model. See [ChatGPT model versioning](../how-to/chatgpt.md#model-versioning) for more details.
+
+### GPT-4 Models
+
+| Model ID | Supports Completions | Supports Embeddings | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
+| -- | -- | - | - | - | -- | - |
+| `gpt-4` <sup>1,</sup><sup>2</sup> (preview) | Yes | No | East US, South Central US | N/A | 8,192 | September 2021 |
+| `gpt-4-32k` <sup>1,</sup><sup>2</sup> (preview) | Yes | No | East US, South Central US | N/A | 32,768 | September 2021 |
+
+<sup>1</sup> The model is in preview and only available by request.<br>
+<sup>2</sup> Currently, only version `0314` of this model is available.
### Codex Models | Model ID | Supports Completions | Supports Embeddings | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
When using our embeddings models, keep in mind their limitations and risks.
### Embeddings Models | Model ID | Supports Completions | Supports Embeddings | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) | | | | | | | | |
-| text-embeddings-ada-002 | No | Yes | East US, South Central US, West Europe | N/A |2,046 | Sep 2021 |
+| text-embedding-ada-002 | No | Yes | East US, South Central US, West Europe | N/A |8,192 | Sep 2021 |
| text-similarity-ada-001 | No | Yes | East US, South Central US, West Europe | N/A | 2,046 | Aug 2020 | | text-similarity-babbage-001 | No | Yes | South Central US, West Europe | N/A | 2,046 | Aug 2020 | | text-similarity-curie-001 | No | Yes | East US, South Central US, West Europe | N/A | 2046 | Aug 2020 |
cognitive-services Prompt Engineering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/prompt-engineering.md
+
+ Title: Azure OpenAI Service | Introduction to Prompt engineering
+
+description: Learn how to use prompt engineering to optimize your work with Azure OpenAI Service.
++ Last updated : 03/21/2023++++
+recommendations: false
+keywords:
++
+# Introduction to prompt engineering
+
+GPT-3, GPT-3.5, and GPT-4 models from OpenAI are prompt-based. With prompt-based models, the user interacts with the model by entering a text prompt, to which the model responds with a text completion. This completion is the model's continuation of the input text.
+
+While these models are extremely powerful, their behavior is also very sensitive to the prompt. This makes prompt construction an important skill to develop.
+
+Prompt construction can be difficult. In practice, the prompt acts to configure the model weights to complete the desired task, but it's more of an art than a science, often requiring experience and intuition to craft a successful prompt. The goal of this article is to help get you started with this learning process. It attempts to capture general concepts and patterns that apply to all GPT models. However it's important to understand that each model behaves differently, so the learnings may not apply equally to all models.
+
+## Basics
+
+This section covers the basic concepts and elements of GPT prompts.
+
+Text prompts are how users interact with GPT models. As with all generative language models, GPT models attempt to produce the next series of words that are most likely to follow from the previous text. It's as if we're saying *"What is the first thing that comes to your mind when I say `<prompt>`?"*. The examples below demonstrate this behavior. Given the first words of famous content, the model is able to accurately continue the text.
+
+| Prompt |Completion |
+||--|
+| Four score and seven years ago our | fathers brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal. […]|
+| "Call me Ishmael." | "Some years ago—never mind how long precisely—having little o no money in my purse, and nothing particular to interest me on shore, I thought I would sail about a little and see the watery part of the world." […]|
+
+As you develop more complex prompts, it's helpful to keep this fundamental behavior in mind. Regardless of the prompt that is provided, the model is simply responding with what it determines is most likely (given its training data and training targets). If you ask a question in your prompt, the model isn't following a separate "Q&A" code path, but rather it appears to answer the question because an answer is the most likely sort of response for the given question as input.
+
+### Prompt components
+
+While the Completion API makes no formal distinction between different parts of the prompt, it can still be useful for learning and discussion to identify the underlying prompt components. With the [Chat Completion API](../how-to/chatgpt.md), there are distinct sections of the prompt that are sent to the API in the form of an array of dictionaries with associated roles: system, user, and assistant. This guidance focuses more generally on how to think about prompt construction rather than providing prescriptive guidance that is specific to one API over another.
+
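+As an illustration only, the following Python sketch (assuming the `openai` package configured for Azure OpenAI, a preview API version that supports chat completions, and a deployment named `gpt-35-turbo`) shows how the system and user roles map onto the messages array sent to the Chat Completion API:
+
+```python
+import os
+import openai
+
+openai.api_type = "azure"
+openai.api_base = "https://{your-resource-name}.openai.azure.com/"
+openai.api_version = "2023-03-15-preview"  # assumed preview API version that supports chat completions
+openai.api_key = os.getenv("OPENAI_API_KEY")
+
+response = openai.ChatCompletion.create(
+    engine="gpt-35-turbo",  # your deployment name
+    messages=[
+        # The system message carries the instructions component.
+        {"role": "system", "content": "You are an assistant that summarizes text in one sentence."},
+        # The user message carries the primary content.
+        {"role": "user", "content": "In computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. [...] Summarize the previous text."},
+    ],
+)
+
+print(response["choices"][0]["message"]["content"])
+```
+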
+It is also important to understand that while there could be other valid ways to dissect prompts, the goal of this breakdown is to provide a relatively simple way to think about prompt construction. With the Completion API, all the components are optional, but at least one must be present and most prompts include more than one component. There can be some grey area between components as well. The order presented below roughly corresponds to how commonly each component is used, from most to least.
+
+#### Instructions
+
+Instructions are likely the most commonly used prompt component. Instructions are simply that: instructions to the model on what to do. While simple in concept, they can be complex in practice. The following table illustrates this by showing both simple and more complex instructions, in two different forms.
+
+|Instruction Type | Prompt | Completion |
+|--||--|
+Simple instructions | Write an introduction for a weekly newsletter. | Welcome to our weekly newsletter. This week we have a special focus on productivity. We'll be sharing tips and tricks on how to be productive in all areas of your life. We hope you enjoy and please share with your friends! |
+| Complex instructions | Write an introduction for a weekly newsletter to my company named Contoso. It should mention the great all hands meeting we had last week and thank the team for all their hard work over the tough past few months. | Welcome to Contoso's weekly newsletter! We had a great all hands meeting last week and I want to thank the team for all their hard work over the tough past few months. We're looking forward to a bright future and I'm confident that with our team's dedication and commitment, we will achieve great things. Thank you for being a part of our team and I look forward to working with you all in the coming weeks and months. Sincerely, [Your name]. |
+| Complex Instructions | Write an introduction for a weekly newsletter, including the following: <br>- My company is named contoso <br>- An exuberant greeting <br>- Thanking the team for their hard work over a couple tough months <br>- Positive outlook for the coming quarter<br>- Signed by the SLT | Hello Contoso team! I wanted to take a moment to thank you for all your hard work over the past few months. It's been a tough couple of months, but you've all pulled together and done an amazing job. I'm really proud of you all. Looking ahead to the next quarter, I'm feeling really positive. I think we're going to achieve some great things. Thanks again, The Senior Leadership Team.|
+
+### Primary Content
+
+Primary content refers to some sort of text that is being processed or transformed by the model. Primary content is typically used in conjunction with instructions. A simple example would be language translation. In the example below, the English text would be considered the primary content while ΓÇ£Translate to French:ΓÇ¥ is an instruction.
+
+| Prompt | Completion|
+|--|--|
+| Can you please tell me how to get to the museum?<br>Translate to French: |`Pouvez-vous s'il vous plaît me dire comment aller au musée?` |
+
+Primary content can also be much longer. In the following example the primary content is the introduction section from the [Wikipedia entry on time complexity](https://en.wikipedia.org/wiki/Time_complexity), which is almost 300 words long. This content has been abbreviated in our table for ease of display.
+
+| Prompt | Completion|
+|--|--|
+| In computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by [...]<br>Summarize the previous text in a few sentences:| Time complexity is a measure of how long an algorithm takes to run. It is typically expressed as a function of the size of the input. The most common measure is worst-case time complexity, which is the maximum amount of time required for inputs of a given size.|
+| In computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by [...]<br>TL;DR (Too long didn't read) | The time complexity of an algorithm is the amount of time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. The time complexity is generally expressed as a function of the size of the input.|
+
+GPT models can also handle primary content that is structured. In the example below, a TSV about beers in Belgium (abbreviated for display in this article) is passed to the model as part of the prompt. It is capable of properly interpreting the content and answering questions about the data.
+
+| Prompt | Completion |
+|--||
+| `Belgian Beer Brands` <br>`Beer name Beer style color ABV Brewery`<br>`"Chimay Gold: ""Doree""" trappist pale ale gold 4.80% Chimay Brewery (official trappist)`<br>`"Chimay Blue: ""Grande Reserve""" trappist dark ale dark 9.00%`<br>[…]<br>Referencing the table above, how many beers are less than 6% ABV? | There are 10 beers in the table that are less than 6% ABV. |
+
+### Examples
+
+Successful prompts often rely on the practice of "one-shot" or "few-shot" learning. This refers to the inclusion of one or more examples of the desired behavior of the model, typically by including input and output pairs. This is not learning in the sense that the model is permanently changed, but rather that the examples better condition the model to respond as desired for only the current inference. The use of prompts with no examples is sometimes referred to as "zero-shot" learning. Please note that with the Chat Completion API few-shot learning examples are typically added to the messages array in the form of example user/assistant interactions after the initial system message.
+
+| Learning Type| Prompt| Completion|
+|- |-|--|
| Zero-shot | Headline: Coach confident injury won't derail Warriors<br>Topic:| The coach is confident that the injury won't derail the Warriors' season. The team is still focused on their goals and that they will continue to work hard to achieve them.|
| Few-shot | Headline: Twins' Correa to use opt-out, test free agency<br>Topic: Baseball<br>Headline: Qatar World Cup to have zones for sobering up<br>Topic: Soccer<br>Headline: Yates: Fantasy football intel for Week 6<br>Topic: Football<br>Headline: Coach confident injury won't derail Warriors<br>Topic: | Basketball |
+
+The example above illustrates the utility of few-shot learning. Without the examples, the model seems to be guessing at the desired behavior, while the examples cleanly show the model how to operate. This also demonstrates the power of the model: it can infer the category of label that is wanted, even without a "basketball" label in the examples.
+
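+Expressed with the Chat Completion API, the same headline-labeling pattern might look like the following sketch; it assumes the `openai` package is already configured for Azure OpenAI as in the earlier example and that `gpt-35-turbo` is your deployment name:
+
+```python
+import openai  # assumes api_type, api_base, api_version, and api_key are already set for Azure OpenAI
+
+messages = [
+    {"role": "system", "content": "You label news headlines with a single sport as the topic."},
+    # Few-shot examples: prior user/assistant turns that demonstrate the desired behavior.
+    {"role": "user", "content": "Headline: Twins' Correa to use opt-out, test free agency"},
+    {"role": "assistant", "content": "Baseball"},
+    {"role": "user", "content": "Headline: Qatar World Cup to have zones for sobering up"},
+    {"role": "assistant", "content": "Soccer"},
+    # The new input to classify.
+    {"role": "user", "content": "Headline: Coach confident injury won't derail Warriors"},
+]
+
+response = openai.ChatCompletion.create(engine="gpt-35-turbo", messages=messages)
+print(response["choices"][0]["message"]["content"])  # expected: Basketball
+```
+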
+### Cue
+
+Cues act as the "jumpstart" for the output of the model, helping to direct the model to the desired output. It is often a prefix that the model can build onto. In the example below, we show the cue directing the output to include key elements. Cues are often used along with instructions, but not always. In the examples below, the model is being used to summarize a long announcement email. A cue is used in one case to help focus the output of the model and to suggest a specific output format (bullet points) in the other.
+
+In the context of the Chat Completion API, instructions take the form of the system message, and few-shot examples are indicated by providing a series of example user/assistant interactions to help prime the model.
+
+| Number of Cues|Prompt | Completion|
+|-|-|--|
+| 0 | Performance reviews are a core component of our Performance & Development approach at Contoso. […] <br> Summarize the above email message: | The purpose of this email is to inform employees of recent changes to the performance review experience, which is part of Contoso's Performance & Development approach. The changes include an improved structure, upgraded in-tool user experience, and focused prompts. Additionally, there is now an optional Conversation Starters section for employees to flag topics they'd like to discuss with their managers. |
+| 1 | Performance reviews are a core component of our Performance & Development approach at Contoso. […] <br> Summarize the above email message:<br> The key takeaway of the message is | that Contoso is improving the performance review experience by making it more focused on core priorities, expectations and impact. Additionally, they are upgrading the in-tool user experience with tips, resources and guidance. Finally, they are adding guidance on how to activate company-wide core priorities. |
+|2 | Performance reviews are a core component of our Performance & Development approach at Contoso. […] <br> Summarize the above email message: <br>Key Points:<br>• | Contoso is introducing changes to the performance review experience, based on feedback from employees.<br>• The changes include a new focus on core priorities, expectations, and impact; an upgraded user experience; and new guidance on company-wide core priorities.<br>• Performance reviews are a time to reflect on past performance and learning as well as to plan for future priorities and growth opportunities.<br>• Employees and managers should write comments in advance of the performance review meeting.<br>• There should be at least two performance reviews per year.<br>• The time investment for each review should remain about the same. |
+
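+As a sketch only (assuming an `openai` client configured for Azure OpenAI and a `text-davinci-003` deployment), a cue with the Completion API is simply appended to the end of the prompt string:
+
+```python
+import openai  # assumes api_type, api_base, api_version, and api_key are already set for Azure OpenAI
+
+email_text = "Performance reviews are a core component of our Performance & Development approach at Contoso. [...]"
+
+# The instruction, followed by a cue ("Key Points:\n- ") that nudges the model toward a bulleted summary.
+prompt = email_text + "\n\nSummarize the above email message:\nKey Points:\n- "
+
+response = openai.Completion.create(
+    engine="text-davinci-003",  # your deployment name
+    prompt=prompt,
+    max_tokens=300,
+    temperature=0,
+)
+
+print(response["choices"][0]["text"])
+```
+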
+### Supporting content
+
+Supporting content is information that the model can use to influence the output in some way. It differs from primary content in that it is not the main target of the task, but it is typically used along with primary content. Common examples include contextual information such as the current date, the name of the user, user preferences, and so on. The example below uses supporting content to help organize a set of planned workshops for the user. Without the supporting content (important topics), the model simply lists the workshops (truncated for display); when informed of my important topics, the model is able to accurately group the meetings.
+
+| Supporting Content| Prompt | Completion|
+|--|-|--|
+| Without | Workshop list:<br>Title Description<br> Planning Workshop Kickoff Please join us to kick off our workshop! Hear from our leaders about cross group priorities and learnings for the year ahead. The invitation to this event has already been sent to everyone in […]<br>Summarize the workshops above, grouped by topic: | Workshop list: <br> Planning Workshop Kickoff <br> -Planning Workshop Wrap-up<br>- Core MeetUp<br>- Customer Driven Perspectives<br>-Impact and Key Focus Areas<br>-Best practices<br>-Accelerating app adoption […] |
+| With | Workshop list: <br>Title Description<br>Planning Workshop Kickoff Please join us to kick off our workshop! Hear from our leaders about cross group priorities and learnings for the year ahead. The invitation to this event has already been sent to everyone in […] <br>My Important Topics: prompt engineering, search, GPT models<br>Summarize the workshops above, grouped by my important topics:| 1. Prompt Engineering: <br>- New prompt engineering tips<br>- The art of prompt engineering<br>2. Search:<br>-Intro to vector search with embedding<br>3. GPT Models:<br>- Intro to GPT-4<br>- ChatGPT in-depth.|
+
+## Best practices
+
+- **Be Specific**. Leave as little to interpretation as possible. Restrict the operational space.
+- **Be Descriptive**. Use analogies.
+- **Double Down**. Sometimes you may need to repeat yourself to the model. Give instructions before and after your primary content, use an instruction and a cue, etc.
+- **Order Matters**. The order in which you present information to the model may impact the output. Whether you put instructions before your content (“summarize the following…”) or after (“summarize the above…”) can make a difference in output. Even the order of few-shot examples can matter. This is referred to as recency bias.
+- **Give the model an "out"**. It can sometimes be helpful to give the model an alternative path if it is unable to complete the assigned task. For example, when asking a question over a piece of text you might include something like "respond with 'not found' if the answer is not present". This can help the model avoid generating false responses.
+
+## Space efficiency
+
+While the input size increases with each new generation of GPT models, there will continue to be scenarios that provide more data than the model can handle. GPT models break words into "tokens". While common multi-syllable words are often a single token, less common words are broken into syllables. Tokens can sometimes be counter-intuitive, as shown by the example below which demonstrates token boundaries for different date formats. In this case, spelling out the entire month is more space efficient than a fully numeric date. The current range of token support goes from 2,000 tokens with earlier GPT-3 models to up to 32,768 tokens with the 32k version of the latest GPT-4 model.
++
+Given this limited space, it is important to use it as efficiently as possible.
+- Tables: As shown in the examples in the previous section, GPT models can understand tabular formatted data quite easily. It can be a space-efficient way to include data, rather than preceding every field with its name (such as with JSON).
+- White space: Consecutive whitespace characters are treated as separate tokens, which can be an easy way to waste space. Spaces preceding a word, on the other hand, are typically treated as part of the same token as the word. Carefully watch your usage of whitespace, and don't use punctuation when a space alone will do.
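+
+If you want to check how many tokens a given string consumes, a tokenizer can help. The following sketch assumes the open-source `tiktoken` package and the `cl100k_base` encoding used by the ChatGPT and GPT-4 models; exact counts can vary by model and tokenizer version:
+
+```python
+import tiktoken
+
+# cl100k_base is the encoding used by the gpt-35-turbo and gpt-4 models.
+encoding = tiktoken.get_encoding("cl100k_base")
+
+for text in ["May 1, 2023", "05/01/2023", "summarize   the   text"]:
+    tokens = encoding.encode(text)
+    print(f"{text!r} -> {len(tokens)} tokens: {tokens}")
+```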
+
+## Next steps
+
+[Learn more about Azure OpenAI](../overview.md)
cognitive-services Chatgpt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/chatgpt.md
Title: How to work with the Chat Markup Language (preview)-
-description: Learn how to work with Chat Markup Language (preview)
--
+ Title: How to work with the ChatGPT and GPT-4 models (preview)
+
+description: Learn about the options for how to use the ChatGPT and GPT-4 models (preview)
++ Previously updated : 03/09/2023 Last updated : 03/21/2023 keywords: ChatGPT
+zone_pivot_groups: openai-chat
-# Learn how to work with Chat Markup Language (preview)
+# Learn how to work with the ChatGPT and GPT-4 models (preview)
-The ChatGPT model (`gpt-35-turbo`) is a language model designed for conversational interfaces and the model behaves differently than previous GPT-3 models. Previous models were text-in and text-out, meaning they accepted a prompt string and returned a completion to append to the prompt. However, the ChatGPT model is conversation-in and message-out. The model expects a prompt string formatted in a specific chat-like transcript format, and returns a completion that represents a model-written message in the chat. While the prompt format was designed specifically for multi-turn conversations, you'll find it can also work well for non-chat scenarios too.
+The ChatGPT and GPT-4 models are language models that are optimized for conversational interfaces. The models behave differently than the older GPT-3 models. Previous models were text-in and text-out, meaning they accepted a prompt string and returned a completion to append to the prompt. However, the ChatGPT and GPT-4 models are conversation-in and message-out. The models expect input formatted in a specific chat-like transcript format, and return a completion that represents a model-written message in the chat. While this format was designed specifically for multi-turn conversations, you'll find it can also work well for non-chat scenarios too.
-The ChatGPT model can be used with the same [completion API](../reference.md#completions) that you use for other models like text-davinci-002, but it requires a unique prompt format known as Chat Markup Language (ChatML). It's important to use the new prompt format to get the best results. Without the right prompts, the model tends to be verbose and provides less useful responses.
+In Azure OpenAI, there are two different options for interacting with these types of models:
-## Working with the ChatGPT model
+- Chat Completion API.
+- Completion API with Chat Markup Language (ChatML).
-The following code snippet shows the most basic way to use the ChatGPT model. We also have a UI driven experience that you can learn about in the [ChatGPT Quickstart](../chatgpt-quickstart.md).
+The Chat Completion API is a new dedicated API for interacting with the ChatGPT and GPT-4 models. **Both sets of models are currently in preview**. This API is the preferred method for accessing these models. **It is also the only way to access the new GPT-4 models**.
-```python
-import os
-import openai
-openai.api_type = "azure"
-openai.api_base = "https://{your-resource-name}.openai.azure.com/"
-openai.api_version = "2022-12-01"
-openai.api_key = os.getenv("OPENAI_API_KEY")
+ChatML uses the same [completion API](../reference.md#completions) that you use for other models like text-davinci-002, but it requires a unique token-based prompt format known as Chat Markup Language (ChatML). This provides lower-level access than the dedicated Chat Completion API, but also requires additional input validation, only supports ChatGPT (gpt-35-turbo) models, and **the underlying format is more likely to change over time**.
-response = openai.Completion.create(
- engine="gpt-35-turbo",
- prompt="<|im_start|>system\nAssistant is a large language model trained by OpenAI.\n<|im_end|>\n<|im_start|>user\nWhat's the difference between garbanzo beans and chickpeas?\n<|im_end|>\n<|im_start|>assistant\n",
- temperature=0,
- max_tokens=500,
- top_p=0.5,
- stop=["<|im_end|>"])
+This article walks you through getting started with the new ChatGPT and GPT-4 models. It's important to use the techniques described here to get the best results. If you try to interact with the models the same way you did with the older model series, the models will often be verbose and provide less useful responses.
-print(response['choices'][0]['text'])
-```
-> [!NOTE]
-> The following parameters aren't available with the gpt-35-turbo model: `logprobs`, `best_of`, and `echo`. If you set any of these parameters, you'll get an error.
-The `<|im_end|>` token indicates the end of a message. We recommend including `<|im_end|>` token as a stop sequence to ensure that the model stops generating text when it reaches the end of the message. You can read more about the special tokens in the [Chat Markup Language (ChatML)](#chatml) section.
-Consider setting `max_tokens` to a slightly higher value than normal such as 300 or 500. This ensures that the model doesn't stop generating text before it reaches the end of the message.
-## Model versioning
-> [!NOTE]
-> `gpt-35-turbo` is equivalent to the `gpt-3.5-turbo` model from OpenAI.
-Unlike previous GPT-3 and GPT-3.5 models, the `gpt-35-turbo` model will continue to be updated. When creating a [deployment](./create-resource.md#deploy-a-model) of `gpt-35-turbo`, you'll also need to specify a model version.
-
-Currently, only version `"0301"` is available. This is equivalent to the `gpt-3.5-turbo-0301` model from OpenAI. We'll continue to make updated versions available in the future. You can find model deprecation times on our [models](../concepts/models.md) page.
-
-One thing that's important to note is that Chat Markup Language (ChatML) will continue to evolve with the new versions of the model. You may need to make updates to your prompts when you upgrade to a new version of the model.
-
-<a id="chatml"></a>
-
-## Working with Chat Markup Language (ChatML)
-
-> [!NOTE]
-> OpenAI continues to improve the `gpt-35-turbo` model and the Chat Markup Language used with the model will continue to evolve in the future. We'll keep this document updated with the latest information.
-
-OpenAI trained the gpt-35-turbo model on special tokens that delineate the different parts of the prompt. The prompt starts with a system message that is used to prime the model followed by a series of messages between the user and the assistant.
-
-The format of a basic ChatML prompt is as follows:
-
-```
-<|im_start|>system
-Provide some context and/or instructions to the model.
-<|im_end|>
-<|im_start|>user
-The userΓÇÖs message goes here
-<|im_end|>
-<|im_start|>assistant
-```
-
-### System message
-
-The system message is included at the beginning of the prompt between the `<|im_start|>system` and `<|im_end|>` tokens. This message provides the initial instructions to the model. You can provide various information in the system message including:
-
-* A brief description of the assistant
-* Personality traits of the assistant
-* Instructions or rules you would like the assistant to follow
-* Data or information needed for the model, such as relevant questions from an FAQ
-
-You can customize the system message for your use case or just include a basic system message. The system message is optional, but it's recommended to at least include a basic one to get the best results.
-
-### Messages
-
-After the system message, you can include a series of messages between the **user** and the **assistant**. Each message should begin with the `<|im_start|>` token followed by the role (`user` or `assistant`) and end with the `<|im_end|>` token.
-
-```
-<|im_start|>user
-What is thermodynamics?
-<|im_end|>
-```
-
-To trigger a response from the model, the prompt should end with the `<|im_start|>assistant` token, indicating that it's the assistant's turn to respond. You can also include messages between the user and the assistant in the prompt as a way to do few shot learning.
-
-### Prompt examples
-
-The following section shows examples of different styles of prompts that you could use with the ChatGPT model. These examples are just a starting point, and you can experiment with different prompts to customize the behavior for your own use cases.
-
-#### Basic example
-
-If you want the ChatGPT model to behave similarly to [chat.openai.com](https://chat.openai.com/), you can use a basic system message like "Assistant is a large language model trained by OpenAI."
-
-```
-<|im_start|>system
-Assistant is a large language model trained by OpenAI.
-<|im_end|>
-<|im_start|>user
-What's the difference between garbanzo beans and chickpeas?
-<|im_end|>
-<|im_start|>assistant
-```
-
-#### Example with instructions
-
-For some scenarios, you may want to give additional instructions to the model to define guardrails for what the model is able to do.
-
-```
-<|im_start|>system
-Assistant is an intelligent chatbot designed to help users answer their tax related questions.
-
-Instructions:
-- Only answer questions related to taxes.
-- If you're unsure of an answer, you can say "I don't know" or "I'm not sure" and recommend users go to the IRS website for more information.
-<|im_end|>
-<|im_start|>user
-When are my taxes due?
-<|im_end|>
-<|im_start|>assistant
-```
-
-#### Using data for grounding
-
-You can also include relevant data or information in the system message to give the model extra context for the conversation. If you only need to include a small amount of information, you can hard code it in the system message. If you have a large amount of data that the model should be aware of, you can use [embeddings](../tutorials/embeddings.md?tabs=command-line) or a product like [Azure Cognitive Search](https://azure.microsoft.com/services/search/) to retrieve the most relevant information at query time.
-
-```
-<|im_start|>system
-Assistant is an intelligent chatbot designed to help users answer technical questions about Azure OpenAI Service. Only answer questions using the context below, and if you're not sure of an answer, you can say "I don't know".
-
-Context:
-- Azure OpenAI Service provides REST API access to OpenAI's powerful language models including the GPT-3, Codex and Embeddings model series.
-- Azure OpenAI Service gives customers advanced language AI with OpenAI GPT-3, Codex, and DALL-E models with the security and enterprise promise of Azure. Azure OpenAI co-develops the APIs with OpenAI, ensuring compatibility and a smooth transition from one to the other.
-- At Microsoft, we're committed to the advancement of AI driven by principles that put people first. Microsoft has made significant investments to help guard against abuse and unintended harm, which includes requiring applicants to show well-defined use cases, incorporating Microsoft's principles for responsible AI use
-<|im_end|>
-<|im_start|>user
-What is Azure OpenAI Service?
-<|im_end|>
-<|im_start|>assistant
-```
-
-#### Few shot learning with ChatML
-
-You can also give few shot examples to the model. The approach for few shot learning has changed slightly because of the new prompt format. You can now include a series of messages between the user and the assistant in the prompt as few shot examples. These examples can be used to seed answers to common questions to prime the model or teach particular behaviors to the model.
-
-This is only one example of how you can use few shot learning with ChatGPT. You can experiment with different approaches to see what works best for your use case.
-
-```
-<|im_start|>system
-Assistant is an intelligent chatbot designed to help users answer their tax related questions.
-<|im_end|>
-<|im_start|>user
-When do I need to file my taxes by?
-<|im_end|>
-<|im_start|>assistant
-In 2023, you will need to file your taxes by April 18th. The date falls after the usual April 15th deadline because April 15th falls on a Saturday in 2023. For more details, see https://www.irs.gov/filing/individuals/when-to-file
-<|im_end|>
-<|im_start|>user
-How can I check the status of my tax refund?
-<|im_end|>
-<|im_start|>assistant
-You can check the status of your tax refund by visiting https://www.irs.gov/refunds
-<|im_end|>
-```
-
-#### Using Chat Markup Language for non-chat scenarios
-
-ChatML is designed to make multi-turn conversations easier to manage, but it also works well for non-chat scenarios.
-
-For example, for an entity extraction scenario, you might use the following prompt:
-
-```
-<|im_start|>system
-You are an assistant designed to extract entities from text. Users will paste in a string of text and you will respond with entities you've extracted from the text as a JSON object. Here's an example of your output format:
-{
- "name": "",
- "company": "",
- "phone_number": ""
-}
-<|im_end|>
-<|im_start|>user
-Hello. My name is Robert Smith. I'm calling from Contoso Insurance, Delaware. My colleague mentioned that you are interested in learning about our comprehensive benefits policy. Could you give me a call back at (555) 346-9322 when you get a chance so we can go over the benefits?
-<|im_end|>
-<|im_start|>assistant
-```
--
-## Preventing unsafe user inputs
-
-It's important to add mitigations into your application to ensure safe use of the Chat Markup Language.
-
-We recommend that you prevent end-users from being able to include special tokens in their input such as `<|im_start|>` and `<|im_end|>`. We also recommend that you include additional validation to ensure the prompts you're sending to the model are well formed and follow the Chat Markup Language format as described in this document.
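A minimal Python sketch of this kind of validation is shown below. The function names and the structural check are illustrative only, not part of any Azure OpenAI SDK; adapt them to your own application.

```python
# Minimal sketch: sanitize untrusted user input and sanity-check the assembled prompt.
SPECIAL_TOKENS = ("<|im_start|>", "<|im_end|>")

def sanitize_user_input(text: str) -> str:
    """Strip ChatML special tokens from untrusted user input."""
    for token in SPECIAL_TOKENS:
        text = text.replace(token, "")
    return text

def prompt_is_well_formed(prompt: str) -> bool:
    """Every <|im_start|> should be closed by <|im_end|>, except the trailing
    <|im_start|>assistant turn that triggers the model's response."""
    return prompt.count("<|im_start|>") == prompt.count("<|im_end|>") + 1
```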
-
-You can also provide instructions in the system message to guide the model on how to respond to certain types of user inputs. For example, you can instruct the model to only reply to messages about a certain subject. You can also reinforce this behavior with few shot examples.
--
-## Managing conversations with ChatGPT
-
-The token limit for `gpt-35-turbo` is 4096 tokens. This limit includes the token count from both the prompt and completion. The number of tokens in the prompt combined with the value of the `max_tokens` parameter must stay under 4096 or you'll receive an error.
-
-It's your responsibility to ensure the prompt and completion fall within the token limit. This means that for longer conversations, you need to keep track of the token count and only send the model a prompt that falls within the token limit.
-
-The following code sample shows a simple example of how you could keep track of the separate messages in the conversation.
-
-```python
-import os
-import openai
-openai.api_type = "azure"
-openai.api_base = "https://{your-resource-name}.openai.azure.com/"
-openai.api_version = "2022-12-01"
-openai.api_key = os.getenv('api_key')
-
-# defining a function to create the prompt from the system message and the conversation messages
-def create_prompt(system_message, messages):
- prompt = system_message
- for message in messages:
- prompt += f"\n<|im_start|>{message['sender']}\n{message['text']}\n<|im_end|>"
- prompt += "\n<|im_start|>assistant\n"
- return prompt
-
-# defining the user input and the system message
-user_input = "<your user input>"
-system_message = f"<|im_start|>system\n{'<your system message>'}\n<|im_end|>"
-
-# creating a list of messages to track the conversation
-messages = [{"sender": "user", "text": user_input}]
-
-response = openai.Completion.create(
- engine="gpt-35-turbo",
- prompt=create_prompt(system_message, messages),
- temperature=0.5,
- max_tokens=250,
- top_p=0.9,
- frequency_penalty=0,
- presence_penalty=0,
- stop=['<|im_end|>']
-)
-
-messages.append({"sender": "assistant", "text": response['choices'][0]['text']})
-print(response['choices'][0]['text'])
-```
-
-## Staying under the token limit
-
-The simplest approach to staying under the token limit is to truncate the oldest messages in the conversation when you reach the token limit.
-
-You can choose to always include as many tokens as possible while staying under the limit or you could always include a set number of previous messages assuming those messages stay within the limit. It's important to keep in mind that longer prompts take longer to generate a response and incur a higher cost than shorter prompts.
-
-You can estimate the number of tokens in a string by using the [tiktoken](https://github.com/openai/tiktoken) Python library as shown below.
-
-```python
-import tiktoken
-
-cl100k_base = tiktoken.get_encoding("cl100k_base")
-
-enc = tiktoken.Encoding(
- name="gpt-35-turbo",
- pat_str=cl100k_base._pat_str,
- mergeable_ranks=cl100k_base._mergeable_ranks,
- special_tokens={
- **cl100k_base._special_tokens,
- "<|im_start|>": 100264,
- "<|im_end|>": 100265
- }
-)
-
-tokens = enc.encode(
- "<|im_start|>user\nHello<|im_end|><|im_start|>assistant",
- allowed_special={"<|im_start|>", "<|im_end|>"}
-)
-
-assert len(tokens) == 7
-assert tokens == [100264, 882, 198, 9906, 100265, 100264, 78191]
-```
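Building on the encoding above, one simple strategy is to drop the oldest messages until the assembled prompt fits a token budget. The following sketch assumes the `enc` encoding defined in the previous example and the `create_prompt` helper and `messages` list from the earlier conversation-tracking example; the budget value and helper names are illustrative.

```python
MAX_PROMPT_TOKENS = 4096 - 500  # leave headroom for the completion (max_tokens)

def count_tokens(text):
    return len(enc.encode(text, allowed_special={"<|im_start|>", "<|im_end|>"}))

def trim_messages(system_message, messages):
    """Drop the oldest messages until the prompt fits within the token budget."""
    trimmed = list(messages)
    while trimmed and count_tokens(create_prompt(system_message, trimmed)) > MAX_PROMPT_TOKENS:
        trimmed.pop(0)  # discard the oldest message first
    return trimmed
```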
-
-## Next steps
-
-* [Learn more about Azure OpenAI](../overview.md).
-* Get started with the ChatGPT model with [the ChatGPT quickstart](../chatgpt-quickstart.md).
-* For more examples, check out the [Azure OpenAI Samples GitHub repository](https://github.com/Azure/openai-samples)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/overview.md
Previously updated : 03/01/2022 Last updated : 03/21/2023 recommendations: false keywords:
Azure OpenAI Service provides REST API access to OpenAI's powerful language mode
| Feature | Azure OpenAI | | | |
-| Models available | GPT-3 base series <br>**New ChatGPT (gpt-35-turbo)**<br> Codex series <br> Embeddings series <br> Learn more in our [Models](./concepts/models.md) page.|
+| Models available | **NEW GPT-4 series (preview)** <br> GPT-3 base series <br>**NEW ChatGPT (gpt-35-turbo) (preview)**<br> Codex series <br> Embeddings series <br> Learn more in our [Models](./concepts/models.md) page.|
| Fine-tuning | Ada <br> Babbage <br> Curie <br> Cushman* <br> Davinci* <br> \* Currently unavailable. \*\*East US and West Europe Fine-tuning is currently unavailable to new customers. Please use US South Central for US based training| | Price | [Available here](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) | | Virtual network support & private link support | Yes |
All solutions using Azure OpenAI are also required to go through a use case revi
## Comparing Azure OpenAI and OpenAI
-Azure OpenAI Service gives customers advanced language AI with OpenAI GPT-3, Codex, and DALL-E models with the security and enterprise promise of Azure. Azure OpenAI co-develops the APIs with OpenAI, ensuring compatibility and a smooth transition from one to the other.
+Azure OpenAI Service gives customers advanced language AI with OpenAI GPT-4, GPT-3, Codex, and DALL-E models with the security and enterprise promise of Azure. Azure OpenAI co-develops the APIs with OpenAI, ensuring compatibility and a smooth transition from one to the other.
With Azure OpenAI, customers get the security capabilities of Microsoft Azure while running the same models as OpenAI. Azure OpenAI offers private networking, regional availability, and responsible AI content filtering.
The number of examples typically range from 0 to 100 depending on how many can f
### Models
-The service provides users access to several different models. Each model provides a different capability and price point. The GPT-3 base models are known as Davinci, Curie, Babbage, and Ada in decreasing order of capability and increasing order of speed.
+The service provides users access to several different models. Each model provides a different capability and price point.
+
+GPT-4 models are the latest available models. These models are currently in preview. For access, existing Azure OpenAI customers can [apply by filling out this form](https://aka.ms/oai/get-gpt4).
+
+The GPT-3 base models are known as Davinci, Curie, Babbage, and Ada in decreasing order of capability and increasing order of speed.
The Codex series of models is a descendant of GPT-3 and has been trained on both natural language and code to power natural language to code use cases. Learn more about each model on our [models concept page](./concepts/models.md).
cognitive-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/quotas-limits.md
Previously updated : 03/01/2023 Last updated : 03/21/2023
The following sections provide you with a quick guide to the quotas and limits t
| Limit Name | Limit Value | |--|--| | OpenAI resources per region | 2 |
-| Requests per minute per model* | Davinci-models (002 and later): 120 <br> ChatGPT model: 300 <br> All other models: 300|
+| Requests per minute per model* | Davinci-models (002 and later): 120 <br> ChatGPT model (preview): 300 <br> GPT-4 models (preview): 12 <br> All other models: 300 |
| Tokens per minute per model* | Davinci-models (002 and later): 40,000 <br> ChatGPT model: 120,000 <br> All other models: 120,000 | | Max fine-tuned model deployments* | 2 | | Ability to deploy same model to multiple deployments | Not allowed |
cognitive-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/reference.md
Previously updated : 11/17/2022-- Last updated : 03/21/2023++ recommendations: false # Azure OpenAI Service REST API reference
-This article provides details on the REST API endpoints for Azure OpenAI, a service in the Azure Cognitive Services suite. The REST APIs are broken up into two categories:
-
-* **Management APIs**: The Azure Resource Manager (ARM) provides the management layer in Azure that allows you to create, update and delete resource in Azure. All services use a common structure for these operations. [Learn More](../../azure-resource-manager/management/overview.md)
-* **Service APIs**: Azure OpenAI provides you with a set of REST APIs for interacting with the resources & models you deploy via the Management APIs.
-
-## Management APIs
-
-Azure OpenAI is deployed as a part of the Azure Cognitive Services. All Cognitive Services rely on the same set of management APIs for creation, update and delete operations. The management APIs are also used for deploying models within an OpenAI resource.
-
-[**Management APIs reference documentation**](/rest/api/cognitiveservices/)
+This article provides details on the inference REST API endpoints for Azure OpenAI.
## Authentication Azure OpenAI provides two methods for authentication. You can use either API Keys or Azure Active Directory. -- **API Key authentication**: For this type of authentication, all API requests must include the API Key in the ```api-key``` HTTP header. The [Quickstart](./quickstart.md) provides a tutorial for how to make calls with this type of authentication
+- **API Key authentication**: For this type of authentication, all API requests must include the API Key in the ```api-key``` HTTP header. The [Quickstart](./quickstart.md) provides guidance for how to make calls with this type of authentication.
- **Azure Active Directory authentication**: You can authenticate an API call using an Azure Active Directory token. Authentication tokens are included in a request as the ```Authorization``` header. The token provided must be preceded by ```Bearer```, for example ```Bearer YOUR_AUTH_TOKEN```. You can read our how-to guide on [authenticating with Azure Active Directory](./how-to/managed-identity.md).
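As an illustration only (not the official client library surface), the following Python sketch shows how the two authentication headers differ when calling the completions endpoint directly with the `requests` package. The resource name, deployment name, key, and token are placeholders.

```python
import requests

url = ("https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/"
       "YOUR_DEPLOYMENT_NAME/completions?api-version=2022-12-01")
body = {"prompt": "Hello", "max_tokens": 5}

# API Key authentication: pass the key in the api-key header.
response = requests.post(url, headers={"api-key": "YOUR_API_KEY"}, json=body)

# Azure Active Directory authentication: pass a token in the Authorization header.
response = requests.post(url, headers={"Authorization": "Bearer YOUR_AUTH_TOKEN"}, json=body)
```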
The service APIs are versioned using the ```api-version``` query parameter. All
POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2022-12-01 ```
-We currently have the following versions available: ```2022-12-01```
- ## Completions+ With the Completions operation, the model will generate one or more predicted completions based on a provided prompt. The service can also return the probabilities of alternative tokens at each position. **Create a completion**
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
| Parameter | Type | Required? | Description | |--|--|--|--| | ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```deployment-id``` | string | Required | The name of your model deployment. You're required to first deploy a model before you can make calls |
+| ```deployment-id``` | string | Required | The deployment name you chose when you deployed the model. |
| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. | **Supported versions** -- `2022-12-01`
+- `2023-03-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
+- `2022-12-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2022-12-01/inference.json)
**Request body** | Parameter | Type | Required? | Default | Description | |--|--|--|--|--| | ```prompt``` | string or array | Optional | ```<\|endoftext\|>``` | The prompt(s) to generate completions for, encoded as a string, a list of strings, or a list of token lists. Note that ```<\|endoftext\|>``` is the document separator that the model sees during training, so if a prompt isn't specified the model will generate as if from the beginning of a new document. |
-| ```max_tokens``` | integer | Optional | 16 | The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens can't exceed the model's context length. Most models have a context length of 2048 tokens (except davinci-codex, which supports 4096). |
+| ```max_tokens``` | integer | Optional | 16 | The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens can't exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096). |
| ```temperature``` | number | Optional | 1 | What sampling temperature to use. Higher values means the model will take more risks. Try 0.9 for more creative applications, and 0 (`argmax sampling`) for ones with a well-defined answer. We generally recommend altering this or top_p but not both. | | ```top_p``` | number | Optional | 1 | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. |
+| ```logit_bias``` | map | Optional | null | Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this tokenizer tool (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. As an example, you can pass {"50256": -100} to prevent the <\|endoftext\|> token from being generated. |
+| ```user``` | string | Optional | | A unique identifier representing your end-user, which can help with monitoring and detecting abuse. |
| ```n``` | integer | Optional | 1 | How many completions to generate for each prompt. Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop. | | ```stream``` | boolean | Optional | False | Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.| | ```logprobs``` | integer | Optional | null | Include the log probabilities on the logprobs most likely tokens, as well the chosen tokens. For example, if logprobs is 10, the API will return a list of the 10 most likely tokens. the API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response. This parameter cannot be used with `gpt-35-turbo`. |
+| ```suffix``` | string | Optional | null | The suffix that comes after a completion of inserted text. |
| ```echo``` | boolean | Optional | False | Echo back the prompt in addition to the completion. This parameter cannot be used with `gpt-35-turbo`. | | ```stop``` | string or array | Optional | null | Up to four sequences where the API will stop generating further tokens. The returned text won't contain the stop sequence. | | ```presence_penalty``` | number | Optional | 0 | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. | | ```frequency_penalty``` | number | Optional | 0 | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. | | ```best_of``` | integer | Optional | 1 | Generates best_of completions server-side and returns the "best" (the one with the lowest log probability per token). Results can't be streamed. When used with n, best_of controls the number of candidate completions and n specifies how many to return; best_of must be greater than n. Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop. This parameter cannot be used with `gpt-35-turbo`. |
-| ```logit_bias``` | map | Optional | null | Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this tokenizer tool (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. As an example, you can pass {"50256": -100} to prevent the <\|endoftext\|> token from being generated. |
#### Example request
curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYM
} ```
+In the example response, `finish_reason` equals `stop`. If `finish_reason` equals `content_filter`, consult our [content filtering guide](./concepts/content-filter.md) to understand why this is occurring.
+ ## Embeddings
-Get a vector representation of a given input that can be easily consumed by machine learning models and other algorithms.
+Get a vector representation of a given input that can be easily consumed by machine learning models and other algorithms.
**Create an embedding**
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
**Supported versions** -- `2022-12-01`
+- `2023-03-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
+- `2022-12-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2022-12-01/inference.json)
**Request body**
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
#### Example request ```console
-curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2022-12-01\
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2022-12-01 \
-H "Content-Type: application/json" \ -H "api-key: YOUR_API_KEY" \ -d "{\"input\": \"The food was delicious and the waiter...\"}"
curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYM
} ```
+## Chat completions
+
+Create completions for chat messages with the ChatGPT (preview) and GPT-4 (preview) models. Chat completions are currently only available with `api-version=2023-03-15-preview`.
+
+**Create chat completions**
+
+```http
+POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/chat/completions?api-version={api-version}
+```
+
+**Path parameters**
+
+| Parameter | Type | Required? | Description |
+|--|--|--|--|
+| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
+| ```deployment-id``` | string | Required | The name of your model deployment. You're required to first deploy a model before you can make calls. |
+| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. |
+
+**Supported versions**
+
+- `2023-03-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
+
+#### Example request
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-03-15-preview \
+ -H "Content-Type: application/json" \
+ -H "api-key: YOUR_API_KEY" \
+ -d '{"messages":[{"role": "system", "content": "You are a helpful assistant."},{"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},{"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},{"role": "user", "content": "Do other Azure Cognitive Services support this too?"}]}'
+
+```
+
+#### Example response
+
+```console
+{"id":"chatcmpl-6v7mkQj980V1yBec6ETrKPRqFjNw9",
+"object":"chat.completion","created":1679072642,
+"model":"gpt-35-turbo",
+"usage":{"prompt_tokens":58,
+"completion_tokens":68,
+"total_tokens":126},
+"choices":[{"message":{"role":"assistant",
+"content":"Yes, other Azure Cognitive Services also support customer managed keys. Azure Cognitive Services offer multiple options for customers to manage keys, such as using Azure Key Vault, customer-managed keys in Azure Key Vault or customer-managed keys through Azure Storage service. This helps customers ensure that their data is secure and access to their services is controlled."},"finish_reason":"stop","index":0}]}
+```
+
+In the example response, `finish_reason` equals `stop`. If `finish_reason` equals `content_filter`, consult our [content filtering guide](./concepts/content-filter.md) to understand why this is occurring.
+
+Output formatting has been adjusted for ease of reading; the actual output is a single block of text without line breaks.
+
+| Parameter | Type | Required? | Default | Description |
+|--|--|--|--|--|
+| ```messages``` | array | Required | | The messages to generate chat completions for, in the chat format. |
+| ```temperature``` | number | Optional | 1 | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. |
+| ```n``` | integer | Optional | 1 | How many chat completion choices to generate for each input message. |
+| ```stream``` | boolean | Optional | false | If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a `data: [DONE]` message. |
+| ```stop``` | string or array | Optional | null | Up to 4 sequences where the API will stop generating further tokens.|
+| ```max_tokens``` | integer | Optional | inf | The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens).|
+| ```presence_penalty``` | number | Optional | 0 | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.|
+| ```frequency_penalty``` | number | Optional | 0 | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.|
+| ```logit_bias``` | object | Optional | null | Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.|
+| ```user``` | string | Optional | | A unique identifier representing your end-user, which can help Azure OpenAI to monitor and detect abuse.|
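For reference, a minimal Python sketch of the same call using the `openai` package (a version with `ChatCompletion` support, such as 0.27.0 or later, is assumed) is shown below. The resource name, deployment name, and environment variable are placeholders, not the only way to call the endpoint.

```python
import os
import openai

openai.api_type = "azure"
openai.api_base = "https://YOUR_RESOURCE_NAME.openai.azure.com/"
openai.api_version = "2023-03-15-preview"
openai.api_key = os.getenv("AZURE_OPENAI_KEY")

response = openai.ChatCompletion.create(
    engine="YOUR_DEPLOYMENT_NAME",  # the deployment name you chose for your ChatGPT or GPT-4 model
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},
    ],
)

choice = response["choices"][0]
if choice["finish_reason"] == "content_filter":
    # The response was filtered; see the content filtering guide.
    print("Response was filtered.")
else:
    print(choice["message"]["content"])
```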
+
+## Management APIs
+
+Azure OpenAI is deployed as a part of the Azure Cognitive Services. All Cognitive Services rely on the same set of management APIs for creation, update and delete operations. The management APIs are also used for deploying models within an OpenAI resource.
+
+[**Management APIs reference documentation**](/rest/api/cognitiveservices/)
+ ## Next steps
-Learn about [managing deployments, models, and finetuning with the REST API](/rest/api/cognitiveservices/azureopenaistable/deployments/create).
-Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md).
+Learn about [managing deployments, models, and fine-tuning with the REST API](/rest/api/cognitiveservices/azureopenaistable/deployments/create).
+Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md).
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/whats-new.md
Previously updated : 03/01/2023 Last updated : 03/21/2023 recommendations: false keywords:
keywords:
## March 2023 -- **ChatGPT (gpt-35-turbo) public preview**. To learn more checkout the new [quickstart](./quickstart.md), and [how-to articles](./how-to/chatgpt.md).
+- **GPT-4 series models are now available in preview on Azure OpenAI**. To request access, existing Azure OpenAI customers can [apply by filling out this form](https://aka.ms/oai/get-gpt4). These models are currently available in the East US and South Central US regions.
+
+- **New Chat Completion API for ChatGPT and GPT-4 models released in preview on 3/21**. To learn more, check out the [updated quickstarts](./quickstart.md) and [how-to article](./how-to/chatgpt.md).
+
+- **ChatGPT (gpt-35-turbo) preview**. To learn more, check out the [how-to article](./how-to/chatgpt.md).
- Increased training limits for fine-tuning: The max training job size (tokens in training file) x (# of epochs) is 2 billion tokens for all models. We have also increased the max training job duration from 120 to 720 hours.
- Adding additional use cases to your existing access. Previously, the process for adding new use cases required customers to reapply to the service. Now, we're releasing a new process that allows you to quickly add new use cases to your use of the service. This process follows the established Limited Access process within Azure Cognitive Services. [Existing customers can attest to any and all new use cases here](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUM003VEJPRjRSOTZBRVZBV1E5N1lWMk1XUyQlQCN0PWcu). Please note that this is required anytime you would like to use the service for a new use case you did not originally apply for.
communication-services Enable Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/enable-logging.md
They're all viable and flexible options that can adapt to your specific storage
By choosing to send your logs to a [Log Analytics workspace](../../../azure-monitor/logs/log-analytics-overview.md) destination, you enable more features within Azure Monitor generally and for your Communications Services. Log Analytics is a tool within Azure portal used to create, edit, and run [queries](../../../azure-monitor/logs/queries.md) with data in your Azure Monitor logs and metrics and [Workbooks](../../../azure-monitor/visualize/workbooks-overview.md), [alerts](../../../azure-monitor/alerts/alerts-log.md), [notification actions](../../../azure-monitor/alerts/action-groups.md), [REST API access](/rest/api/loganalytics/), and many others.
-For your Communications Services logs, we've provided a useful [default query pack](../../../azure-monitor/logs/query-packs.md#default-query-pack) to provide an initial set of insights to quickly analyze and understand your data. These query packs are described here: [Log Analytics for Communications Services](log-analytics.md). We've also created many insights and visualizations using workbooks, which are described in: [Workbooks for Communications Services logs](insights.md).
+For your Communications Services logs, we've provided a useful [default query pack](../../../azure-monitor/logs/query-packs.md#default-query-pack) to provide an initial set of insights to quickly analyze and understand your data. These query packs are described here: [Log Analytics for Communications Services](log-analytics.md).
communication-services Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/insights.md
- Title: Azure Communication Services Insights Preview-
-description: Descriptions of data visualizations available for Communications Services via Workbooks
---- Previously updated : 10/25/2021-----
-# Communications Services Insights Preview
-
-## Overview
-Within your Communications Resource, we have provided an **Insights Preview** feature that displays a number of data visualizations conveying insights from the Azure Monitor logs and metrics monitored for your Communications Services. The visualizations within Insights are made possible via [Azure Monitor Workbooks](../../../azure-monitor/visualize/workbooks-overview.md). In order to take advantage of Workbooks, follow the instructions outlined in [Enable Azure Monitor in Diagnostic Settings](enable-logging.md), and to enable Workbooks, you will need to send your logs to a [Log Analytics workspace](../../../azure-monitor/logs/log-analytics-overview.md) destination.
--
-## Accessing Azure Insights for Communication Services
-
-1. From the **Azure Portal** homepage, select your **Communication Service** resource:
-
- :::image type="content" source="media\workbooks\azure-portal-home-browser.png" alt-text="Azure Portal Home":::
-
-2. Once you are inside your resource, scroll down on the left nav bar to the **Monitor** category and click on the **Insights** tab:
-
- :::image type="content" source="media\workbooks\acs-insights-nav.png" alt-text="Insights navigation":::
-
-3. This should display the **Insights** dashboard for your Communication Service resource:
-
- :::image type="content" source="media\workbooks\acs-insights-tab.png" alt-text="Communication Services Insights tab":::
-
-## Insights dashboard navigation
-
-The Communication Service Insights dashboards give users an intuitive and clear way to navigate their resource's log data. The **Overview** section provides a view across all modalities, so the user can see the different ways in which their resource has been used in a specific time range:
--
-Users can control the time range and time granularity to display with the parameters displayed at the top:
--
-These parameters are global, meaning that they will update the data displayed across the entire dashboard.
-
-The **Overview** section contains an additional parameter to control the type of visualization that is displayed:
--
-This parameter is local, meaning it only affects the plots in this section.
-
-The rest of the tabs display log data that is related to a specific modality.
--
-Since **Voice and video** logs are the most complex in nature, this modality is broken down into four subsections:
--
-The **Summary** tab contains general information about Voice and video usage, including the types of media streams shared, the types of endpoints participating in a call (e.g. VoIP, Bot, Application, PSTN, or Server), the OS usage, and participant end reasons:
--
-The **Volume** tab under the **Voice and video** modality displays the number of calls and the number of participants in a specific period of time (**Time range** parameter), subdivided into time bins (**Time granularity** parameter):
--
-The **Volume** tab contains a **Grouping** parameter, which helps visualize the number of calls and participants segmented by either Call type (P2P vs. Group calls) and Interop Calls (pure Azure Communication Services vs. Teams Interop):
--
-The **Quality** tab under **Voice and video** allows users to inspect the quality distribution of calls, where quality is defined at three levels for this dashboard:
--- The proportion of poor-quality media streams (**Stream quality** plot), where a stream's quality is classified as Poor when it has at least one unhealthy telemetry value, where unhealthy ranges are defined as:
- - Jitter > 30 milliseconds
- - Packet loss rate > 10%
- - Round trip time > 500 milliseconds
--- The proportion of **Impacted calls**, where an impacted call is defined as a call that has at least one poor quality stream--- **Participant end reasons**, which keep track of the reason why a participant left a call. End reasons are [SIP codes](https://en.wikipedia.org/wiki/List_of_SIP_response_codes), which are numeric codes that describe the specific status of a signaling request. SIP codes can be grouped into six categories: *Success*, *Client Failure*, *Server Failure*, *Global Failure*, *Redirection*, and *Provisional*. The distribution of SIP code categories is shown in the pie chart on the left hand side, while a list of the specific SIP codes for participant end reasons is provided on the right hand side--
-Quality can also be filtered by the types of media streams (**Media Type** parameter) used in the call, e.g. to only get the impacted calls in terms of video stream quality:
--
-And can also be filtered by endpoint types (**Endpoint Type** parameter), e.g. getting the participant end reasons for PSTN participants. These filters allow for multiple selections:
--
-The **Details** tab offers a quick way to navigate through the **Voice and video** calls made in a time range by grouping calls by dates, and showing the details of every call made in terms of the participants in that call and the outgoing streams per participant, together with duration and telemetry values for these:
--
-The details of a call are initially hidden. A list of the participants is displayed after clicking on a call:
--
-And clicking on a participant displays a list of the outgoing streams for that participant, together with their duration (proportional to the full call duration) and telemetry values, where unhealthy values are displayed in red:
--
-The **Authentication** tab shows authentication logs, which are created through operations such as issuing an access token or creating an identity. The data displayed includes the types of operations performed and the results of those operations:
--
-The **Chat** tab displays the data for all chat-related operations and their result types:
--
-The **SMS** tab displays the operations and results for SMS usage through an Azure Communication Services resource (we currently don't have any data for this modality):
--
-The **Email** tab displays delivery status, email size, and email count:
-
-The **Recording** tab displays data relevant to total recordings, recording format, recording channel types and number of recording per call:
--
-The **Call Automation** tab displays data about calls placed or answered using Call Automation SDK, like active call count, operations executed and errors encountered by your resource over time. You can also examine a particular call by looking at the sequence of operations taken on that call using the SDK:
-
-## Editing dashboards
-
-The **Insights** dashboards provided with your **Communication Service** resource can be customized by clicking on the **Edit** button on the top navigation bar:
--
-Editing these dashboards does not modify the **Insights** tab, but rather creates a separate workbook which can be accessed on your resource's Workbooks tab:
--
-For an in-depth description of workbooks, please refer to the [Azure Monitor Workbooks](../../../azure-monitor/visualize/workbooks-overview.md) documentation.
communication-services Call Automation Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/insights/call-automation-insights.md
+
+ Title: Azure Communication Services Call Automation Insights Preview
+
+description: Descriptions of data visualizations available for Call Automation Communications Services via Workbooks
++++ Last updated : 03/08/2021+++++
+# Call Automation Insights
+
+In this document, we outline the available insights dashboard to monitor Call Automation logs and metrics.
+
+## Overview
+Within your Communications Resource, we've provided an **Insights Preview** feature that displays many data visualizations conveying insights from the Azure Monitor logs and metrics monitored for your Communications Services. The visualizations within Insights are made possible via [Azure Monitor Workbooks](../../../../azure-monitor/visualize/workbooks-overview.md). In order to take advantage of Workbooks, follow the instructions outlined in [Enable Azure Monitor in Diagnostic Settings](../enable-logging.md). To enable Workbooks, you need to send your logs to a [Log Analytics workspace](../../../../azure-monitor/logs/log-analytics-overview.md) destination.
++
+## Prerequisites
+
+- In order to take advantage of Workbooks, follow the instructions outlined in [Enable Azure Monitor in Diagnostic Settings](../enable-logging.md). You need to enable `Operational Call Automation Logs`.
+- To use Workbooks, you need to send your logs to a [Log Analytics workspace](../../../../azure-monitor/logs/log-analytics-overview.md) destination.
+
+## Accessing Azure Insights for Communication Services
+
+Inside your Azure Communication Services resource, scroll down on the left nav bar to the **Monitor** category and click on the **Insights** tab:
++
+## Call Automation Insights
+
+The **Call Automation** tab displays data about calls placed or answered using Call Automation SDK, like active call count, operations executed and errors encountered by your resource over time. You can also examine a particular call by looking at the sequence of operations taken on that call using the SDK:
+++
+## More information about workbooks
+
+For an in-depth description of workbooks, refer to the [Azure Monitor Workbooks](../../../../azure-monitor/visualize/workbooks-overview.md) documentation.
+
+## Editing dashboards
+
+The **Insights** dashboards provided with your **Communication Service** resource can be customized by clicking on the **Edit** button on the top navigation bar:
++
+Editing these dashboards doesn't modify the **Insights** tab, but rather creates a separate workbook that can be accessed on your resource's Workbooks tab:
++
+For an in-depth description of workbooks, refer to the [Azure Monitor Workbooks](../../../../azure-monitor/visualize/workbooks-overview.md) documentation.
communication-services Call Recording Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/insights/call-recording-insights.md
+
+ Title: Azure Communication Services Call Recording Insights Preview
+
+description: Descriptions of data visualizations available for Call Recording Communications Services via Workbooks
++++ Last updated : 03/08/2021+++++
+# Call Recording Insights
+
+In this document, we outline the available insights dashboard to monitor Call Recording logs and metrics.
+
+## Overview
+Within your Communications Resource, we've provided an **Insights Preview** feature that displays many data visualizations conveying insights from the Azure Monitor logs and metrics monitored for your Communications Services. The visualizations within Insights are made possible via [Azure Monitor Workbooks](../../../../azure-monitor/visualize/workbooks-overview.md). In order to take advantage of Workbooks, follow the instructions outlined in [Enable Azure Monitor in Diagnostic Settings](../enable-logging.md). To enable Workbooks, you need to send your logs to a [Log Analytics workspace](../../../../azure-monitor/logs/log-analytics-overview.md) destination.
++
+## Prerequisites
+
+- In order to take advantage of Workbooks, follow the instructions outlined in [Enable Azure Monitor in Diagnostic Settings](../enable-logging.md). You need to enable `Call Recording Summary Logs`.
+- To use Workbooks, you need to send your logs to a [Log Analytics workspace](../../../../azure-monitor/logs/log-analytics-overview.md) destination.
+
+## Accessing Azure Insights for Communication Services
+
+Inside your Azure Communication Services resource, scroll down on the left nav bar to the **Monitor** category and click on the **Insights** tab:
++
+## Call Recording Insights
+
+The **Recording** tab displays data relevant to total recordings, recording format, recording channel types and number of recording per call:
++
+## More information about workbooks
+
+For an in-depth description of workbooks, refer to the [Azure Monitor Workbooks](../../../../azure-monitor/visualize/workbooks-overview.md) documentation.
+
+## Editing dashboards
+
+The **Insights** dashboards provided with your **Communication Service** resource can be customized by clicking on the **Edit** button on the top navigation bar:
++
+Editing these dashboards doesn't modify the **Insights** tab, but rather creates a separate workbook that can be accessed on your resource's Workbooks tab:
++
+For an in-depth description of workbooks, refer to the [Azure Monitor Workbooks](../../../../azure-monitor/visualize/workbooks-overview.md) documentation.
communication-services Chat Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/insights/chat-insights.md
+
+ Title: Azure Communication Services Chat Insights Dashboard
+
+description: Descriptions of data visualizations available for Chat Communications Services via Workbooks
++++ Last updated : 03/08/2021+++++
+# Chat Insights
+
+In this document, we outline the available insights dashboard to monitor Chat logs and metrics.
+
+## Overview
+Within your Communications Resource, we've provided an **Insights Preview** feature that displays many data visualizations conveying insights from the Azure Monitor logs and metrics monitored for your Communications Services. The visualizations within Insights are made possible via [Azure Monitor Workbooks](../../../../azure-monitor/visualize/workbooks-overview.md). In order to take advantage of Workbooks, follow the instructions outlined in [Enable Azure Monitor in Diagnostic Settings](../enable-logging.md). To enable Workbooks, you need to send your logs to a [Log Analytics workspace](../../../../azure-monitor/logs/log-analytics-overview.md) destination.
++
+## Prerequisites
+
+- In order to take advantage of Workbooks, follow the instructions outlined in [Enable Azure Monitor in Diagnostic Settings](../enable-logging.md). You need to enable `Operational Chat Logs`, `Operational Authentication Logs`.
+- To use Workbooks, you need to send your logs to a [Log Analytics workspace](../../../../azure-monitor/logs/log-analytics-overview.md) destination.
+
+## Accessing Azure Insights for Communication Services
+
+Inside your Azure Communication Services resource, scroll down on the left nav bar to the **Monitor** category and click on the **Insights** tab:
++
+## Authentication insights
+
+The **Authentication** tab shows authentication logs, which are created through operations such as issuing an access token or creating an identity. The data displayed includes the types of operations performed and the results of those operations:
++
+## Chat insights
+
+The **Chat** tab displays the data for all chat-related operations and their result types:
++
+## More information about workbooks
+
+For an in-depth description of workbooks, refer to the [Azure Monitor Workbooks](../../../../azure-monitor/visualize/workbooks-overview.md) documentation.
+
+## Editing dashboards
+
+The **Insights** dashboards provided with your **Communication Service** resource can be customized by clicking on the **Edit** button on the top navigation bar:
++
+Editing these dashboards doesn't modify the **Insights** tab, but rather creates a separate workbook that can be accessed on your resource's Workbooks tab:
++
+For an in-depth description of workbooks, refer to the [Azure Monitor Workbooks](../../../../azure-monitor/visualize/workbooks-overview.md) documentation.
communication-services Email Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/insights/email-insights.md
+
+ Title: Azure Communication Services Email Insights Dashboard
+
+description: Descriptions of data visualizations available for Email Communications Services via Workbooks
++++ Last updated : 03/08/2021+++++
+# Email Insights
+
+In this document, we outline the available insights dashboard to monitor Email logs and metrics.
+
+## Overview
+Within your Communications Resource, we've provided an **Insights Preview** feature that displays many data visualizations conveying insights from the Azure Monitor logs and metrics monitored for your Communications Services. The visualizations within Insights are made possible via [Azure Monitor Workbooks](../../../../azure-monitor/visualize/workbooks-overview.md). In order to take advantage of Workbooks, follow the instructions outlined in [Enable Azure Monitor in Diagnostic Settings](../enable-logging.md). To enable Workbooks, you need to send your logs to a [Log Analytics workspace](../../../../azure-monitor/logs/log-analytics-overview.md) destination.
++
+## Prerequisites
+
+- In order to take advantage of Workbooks, follow the instructions outlined in [Enable Azure Monitor in Diagnostic Settings](../enable-logging.md). You need to enable `Email Service Send Mail Logs`, `Email Service Delivery Status Update Logs` , `Email Service User Engagement Logs.`
+- To use Workbooks, you need to send your logs to a [Log Analytics workspace](../../../../azure-monitor/logs/log-analytics-overview.md) destination.
+
+## Accessing Azure Insights for Communication Services
+
+Inside your Azure Communication Services resource, scroll down on the left nav bar to the **Monitor** category and click on the **Insights** tab:
++
+## Email Insights
+
+The **Email** tab displays delivery status, email size, and email count:
++
+## More information about workbooks
+
+For an in-depth description of workbooks, refer to the [Azure Monitor Workbooks](../../../../azure-monitor/visualize/workbooks-overview.md) documentation.
+
+## Editing dashboards
+
+The **Insights** dashboards provided with your **Communication Service** resource can be customized by clicking on the **Edit** button on the top navigation bar:
++
+Editing these dashboards doesn't modify the **Insights** tab, but rather creates a separate workbook that can be accessed on your resource's Workbooks tab:
++
+For an in-depth description of workbooks, refer to the [Azure Monitor Workbooks](../../../../azure-monitor/visualize/workbooks-overview.md) documentation.
communication-services Sms Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/insights/sms-insights.md
+
+ Title: Azure Communication Services SMS Insights Dashboard
+
+description: Descriptions of data visualizations available for SMS Communications Services via Workbooks
+++ Last updated : 03/16/2021+++++
+# SMS Insights Dashboard
+
+In this document, we outline the available insights dashboard to monitor SMS logs and metrics.
+
+## Overview
+
+The SMS Insights dashboard in your communication resource shows data visualizations based on the logs and metrics for your SMS usage. It's powered by Azure Monitor logs and metrics that are collected and stored. [Azure Monitor Workbooks](https://learn.microsoft.com/azure/azure-monitor/visualize/workbooks-overview) are used to create the data visualization. To enable Azure Monitor collection to populate the SMS Insights dashboard see [Enable Azure Monitor in Diagnostic Settings](https://learn.microsoft.com/azure/communication-services/concepts/analytics/enable-logging). Ensure that logs are sent to a [Log Analytics workspace](https://learn.microsoft.com/azure/azure-monitor/logs/log-analytics-overview).
++
+## Prerequisites
+
+- In order to take advantage of Workbooks, follow the instructions outlined in [Enable Azure Monitor in Diagnostic Settings](../enable-logging.md). You need to enable `SMS Operational Logs`
+- To use Workbooks, you need to send your logs to a [Log Analytics workspace](../../../../azure-monitor/logs/log-analytics-overview.md) destination.
+
+## Accessing Azure Insights for Communication Services
+
+Inside your Azure Communication Services resource, scroll down on the left nav bar to the **Monitor** category and click on the **Insights** tab:
++
+## Dashboard structure
+
+The `SMS Insights Dashboard` is made up of four sections:
++
+### Overview section
+The Overview section provides an overall view of sent-message performance along with an SMS failure breakdown. The user can filter the SMS performance data by a specific time range, number type, sender number, and destination. The data is presented in interactive graphs that the user can click on to further drill down into logs.
+
+This section helps answer general questions like:
+- How many SMS have I sent through my resource?
+- Are my messages being blocked or failing at a glance?
+- What is my message distribution by country?
+
+#### Top metrics
++
+#### SMS by country
+
+### Message delivery rates section
+The Message Delivery Rates section provides insights into SMS performance and delivery rate per day. The user can select a specific date in the graph to drill into logs.
+
+This section can help answer questions like:
+- Are there particular days where I'm seeing fewer deliveries?
+- Are there any geographies where delivery suffers most?
++
+### Opt-in & opt-out rates
+The Opt-in & opt-out rates section provides insights into end-user responses for opt-in, opt-out, and help requests.
+
+This section answers questions like:
+- What percentage of my users are opt-in vs opt-out?
++
+### Response patterns
+The Response patterns section provides insights into the total SMS usage in a day across messages delivered, received, failed, and blocked.
+
+This section helps you understand:
+- How is my solution performing over time?
++
+## Exporting logs
+The SMS logs can be exported by clicking on the Download button on the top right corner of the logs table:
++
+## Editing dashboards
+
+The **SMS insights** dashboards provided with your **Communication Service** resource can be customized by clicking on the **Edit** button on the top navigation bar:
++
+Editing these dashboards doesn't modify the **Insights** tab, but rather creates a separate workbook, which can be accessed on your resource's Workbooks tab:
++
+For an in-depth description of workbooks, refer to the [Azure Monitor Workbooks](../../../../azure-monitor/visualize/workbooks-overview.md) documentation.
communication-services Voice And Video Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/insights/voice-and-video-insights.md
+
+ Title: Azure Communication Services Voice and Video Insights Preview
+
+description: Descriptions of data visualizations available for Voice and Video Communications Services via Workbooks
+Last updated : 03/08/2021
+# Voice and video Insights
+
+This document outlines the insights dashboard available to monitor Voice and Video logs and metrics.
+
+## Overview
+Within your Communications Resource, we've provided an **Insights Preview** feature that displays many data visualizations conveying insights from the Azure Monitor logs and metrics monitored for your Communications Services. The visualizations within Insights are made possible via [Azure Monitor Workbooks](../../../../azure-monitor/visualize/workbooks-overview.md). In order to take advantage of Workbooks, follow the instructions outlined in [Enable Azure Monitor in Diagnostic Settings](../enable-logging.md). To enable Workbooks, you need to send your logs to a [Log Analytics workspace](../../../../azure-monitor/logs/log-analytics-overview.md) destination.
++
+## Prerequisites
+
+- To take advantage of Workbooks, follow the instructions outlined in [Enable Azure Monitor in Diagnostic Settings](../enable-logging.md). You need to enable `Operational Authentication Logs`, `Call Summary Logs`, and `Call Diagnostic Logs`.
+- To use Workbooks, you need to send your logs to a [Log Analytics workspace](../../../../azure-monitor/logs/log-analytics-overview.md) destination.
+
+## Accessing Azure Insights for Communication Services
+
+Inside your Azure Communication Services resource, scroll down on the left nav bar to the **Monitor** category and click on the **Insights** tab:
++
+## Authentication insights
+
+The **Authentication** tab shows authentication logs, which are created through operations such as issuing an access token or creating an identity. The data displayed includes the types of operations performed and the results of those operations:
++
+## Voice and Video Insights
+
+Since **Voice and video** logs are the most complex in nature, this modality is broken down into four subsections:
++
+The **Summary** tab contains general information about Voice and video usage, including the types of media streams shared, the types of endpoints participating in a call (e.g. VoIP, Bot, Application, PSTN, or Server), the OS usage, and participant end reasons:
++
+The **Volume** tab under the **Voice and video** modality displays the number of calls and the number of participants in a specific period of time (**Time range** parameter), subdivided into time bins (**Time granularity** parameter):
++
+The **Volume** tab contains a **Grouping** parameter, which helps visualize the number of calls and participants segmented by either Call type (P2P vs. group calls) or Interop calls (pure Azure Communication Services vs. Teams Interop):
++
+The **Quality** tab under **Voice and video** allows users to inspect the quality distribution of calls, where quality is defined at three levels for this dashboard:
+
+- The proportion of poor-quality media streams (**Stream quality** plot). A stream's quality is classified as Poor when it has at least one unhealthy telemetry value; unhealthy ranges are defined as follows (illustrated in the sketch after this list):
+ - Jitter > 30 milliseconds
+ - Packet loss rate > 10%
+ - Round trip time > 500 milliseconds
+
+- The proportion of **Impacted calls**, where an impacted call is defined as a call that has at least one poor quality stream
+
+- **Participant end reasons**, which keep track of the reason why a participant left a call. End reasons are [SIP codes](https://en.wikipedia.org/wiki/List_of_SIP_response_codes), which are numeric codes that describe the specific status of a signaling request. SIP codes can be grouped into six categories: *Success*, *Client Failure*, *Server Failure*, *Global Failure*, *Redirection*, and *Provisional*. The distribution of SIP code categories is shown in the pie chart on the left hand side, while a list of the specific SIP codes for participant end reasons is provided on the right hand side
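The following minimal JavaScript sketch only mirrors the thresholds listed above; the helper names and the telemetry object shape (`jitterMs`, `packetLossRate`, `roundTripTimeMs`) are hypothetical and used purely for illustration, not part of any SDK.

```javascript
// Hypothetical helper mirroring the unhealthy-range thresholds described above.
// The stream object shape is illustrative only.
function isPoorQualityStream(stream) {
  return (
    stream.jitterMs > 30 ||          // Jitter > 30 milliseconds
    stream.packetLossRate > 0.10 ||  // Packet loss rate > 10%
    stream.roundTripTimeMs > 500     // Round trip time > 500 milliseconds
  );
}

// A call is "impacted" when at least one of its streams is poor quality.
function isImpactedCall(callStreams) {
  return callStreams.some(isPoorQualityStream);
}
```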
++
+Quality can also be filtered by the types of media streams (**Media Type** parameter) used in the call, e.g. to only get the impacted calls in terms of video stream quality:
++
+And can also be filtered by endpoint types (**Endpoint Type** parameter), e.g. getting the participant end reasons for PSTN participants. These filters allow for multiple selections:
++
+The **Details** tab offers a quick way to navigate through the **Voice and video** calls made in a time range. It groups calls by date and shows the details of every call: the participants in that call, the outgoing streams per participant, and the duration and telemetry values for those streams:
++
+The details of a call are initially hidden. A list of the participants is displayed after clicking on a call:
++
+And clicking on a participant displays a list of the outgoing streams for that participant, together with their duration (proportional to the full call duration) and telemetry values, where unhealthy values are displayed in red:
++
+## More information about workbooks
+
+For an in-depth description of workbooks, refer to the [Azure Monitor Workbooks](../../../../azure-monitor/visualize/workbooks-overview.md) documentation.
+
+## Editing dashboards
+
+The **Insights** dashboards provided with your **Communication Service** resource can be customized by clicking on the **Edit** button on the top navigation bar:
++
+Editing these dashboards doesn't modify the **Insights** tab, but rather creates a separate workbook that can be accessed on your resource's Workbooks tab:
++
+For an in-depth description of workbooks, refer to the [Azure Monitor Workbooks](../../../../azure-monitor/visualize/workbooks-overview.md) documentation.
communication-services Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/log-analytics.md
Title: Azure Communication Services - Log Analytics Preview
-description: About using Log a Analytics for Call Summary and Call Diagnostic logs
+description: About using Log Analytics for Call Summary and Call Diagnostic logs
## Overview and access
-Before you can take advantage of [Log Analytics](../../../azure-monitor/logs/log-analytics-overview.md) for your Communications Services logs, you must first follow the steps outlined in [Enable logging in Diagnostic Settings](enable-logging.md). Once you have enabled your logs and a [Log Analytics Workspace](../../../azure-monitor/logs/workspace-design.md), you will have access to many helpful [default query packs](../../../azure-monitor/logs/query-packs.md#default-query-pack) that will help you quickly visualize and understand the data available in your logs, which are described below. Through Log Analytics, you also get access to more Communications Services Insights via Azure Monitor Workbooks (see: [Communications Services Insights](insights.md)), the ability to create our own queries and Workbooks, [Log Analytics APIs overview](../../../azure-monitor/logs/api/overview.md) to any query.
+Before you can take advantage of [Log Analytics](../../../azure-monitor/logs/log-analytics-overview.md) for your Communications Services logs, you must first follow the steps outlined in [Enable logging in Diagnostic Settings](enable-logging.md). Once you've enabled your logs and a [Log Analytics Workspace](../../../azure-monitor/logs/workspace-design.md), you have access to many helpful [default query packs](../../../azure-monitor/logs/query-packs.md#default-query-pack) that help you quickly visualize and understand the data available in your logs, which are described below. Through Log Analytics, you also get access to more Communications Services Insights via Azure Monitor Workbooks, the ability to create your own queries and Workbooks, and the [Log Analytics API](../../../azure-monitor/logs/api/overview.md) to run any query programmatically.
### Access You can access the queries by starting on your Communications Services resource page, and then clicking on "Logs" in the left navigation within the Monitor section: :::image type="content" source="media\log-analytics\access-log-analytics.png" alt-text="Log Analytics navigation":::
-From there, you are presented with a modal screen that contains all of the [default query packs](../../../azure-monitor/logs/query-packs.md#default-query-pack) available for your Communications Services, with list of Query Packs available to navigate to the left.
+From there, you're presented with a modal screen that contains all of the [default query packs](../../../azure-monitor/logs/query-packs.md#default-query-pack) available for your Communications Services, with a list of Query Packs you can navigate on the left.
:::image type="content" source="media\log-analytics\log-analytics-modal-resource.png" alt-text="log analytics queries modal" lightbox="media\log-analytics\log-analytics-modal-resource.png":::
-If you close the modal screen, you can still navigate to the various query packs, directly access data in the form of tables based on the schema of the logs and metrics you have enabled in your Diagnostic Setting. Here, you can create your own queries from the data using [KQL (Kusto)](/azure/data-explorer/kusto/query/). Learn more about using, editing, and creating queries by reading more about: [Log Analytics Queries](../../../azure-monitor/logs/queries.md)
+If you close the modal screen, you can still navigate to the various query packs and directly access data in the form of tables based on the schema of the logs and metrics you've enabled in your Diagnostic Setting. Here, you can create your own queries from the data using [KQL (Kusto)](/azure/data-explorer/kusto/query/). To learn more about using, editing, and creating queries, see [Log Analytics Queries](../../../azure-monitor/logs/queries.md)
:::image type="content" source="media\log-analytics\log-analytics-queries-resource.png" alt-text="Log Analytics queries in resource" lightbox="media\log-analytics\log-analytics-queries-resource.png":::
ACSCallSummary
| render columnchart title="Number of calls per hour in last day" ```
communication-services Network Diagnostic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/developer-tools/network-diagnostic.md
The **Network Diagnostics Tool** enables Azure Communication Services developers
As part of the diagnostics performed, the user is asked to enable permissions for the tool to access their devices. Next, the user is asked to record their voice, which is then played back using an echo bot to ensure that the microphone is working. Finally, the tool performs a video test. The test uses the camera to detect video and measure the quality of sent and received frames.
-If you are looking to build your own Network Diagnostic Tool or to perform deeper integration of this tool into your application, you can leverage [pre-call diagnostic APIs](../voice-video-calling/pre-call-diagnostics.md) for the calling SDK.
+If you're looking to build your own Network Diagnostic Tool or to perform deeper integration of this tool into your application, you can leverage [pre-call diagnostic APIs](../voice-video-calling/pre-call-diagnostics.md) for the calling SDK.
## Performed tests
If you are looking to build your own Network Diagnostic Tool or to perform deepe
## Privacy
-When a user runs a network diagnostic, the tool collects and store service and client telemetry data to verify your network conditions and ensure that they're compatible with Azure Communication Services. The telemetry collected doesn't contain personal identifiable information. The test utilizes both audio and video collected through your device for this verification. The audio and video used for the test aren't stored.
+When a user runs a network diagnostic, the tool collects and stores service and client telemetry data to verify your network conditions and ensure that they're compatible with Azure Communication Services.
+
+The telemetry collected doesn't contain personally identifiable information. The test utilizes both audio and video collected through your device for this verification. The audio and video used for the test aren't stored.
## Support
-The test provides a **unique identifier** for your test which you can provide our support team who can provide further help. For more information see [help and support options](../../support.md)
+The test provides a **unique identifier** for your test, which you can give to our support team for further help. For more information, see [help and support options](../../support.md).
## Next Steps
communication-services Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/capabilities.md
In this article, you will learn which capabilities are supported for Teams exter
| | Honor setting "Meeting reactions" | No API available | | DevOps | [Azure Metrics](../../metrics.md) | ✔️ | | | [Azure Monitor](../../logging-and-diagnostics.md) | ✔️ |
-| | [Azure Communication Services Insights](../../analytics/insights.md) | ✔️ |
+| | [Azure Communication Services Insights](../../analytics/insights/voice-and-video-insights.md) | ✔️ |
| | [Azure Communication Services Voice and video calling events](../../../../event-grid/communication-services-voice-video-events.md) | ❌ | | | [Teams Call Analytics](/MicrosoftTeams/use-call-analytics-to-troubleshoot-poor-call-quality) | ✔️ | | | [Teams real-time Analytics](/microsoftteams/use-real-time-telemetry-to-troubleshoot-poor-meeting-quality) | ❌ |
The following table shows supported server-side capabilities available in Azure
| [Manage ACS call recording](../../voice-video-calling/call-recording.md) | ❌ | | [Azure Metrics](../../metrics.md) | ✔️ | | [Azure Monitor](../../logging-and-diagnostics.md) | ✔️ |
-| [Azure Communication Services Insights](../../analytics/insights.md) | ✔️ |
+| [Azure Communication Services Insights](../../analytics/insights/voice-and-video-insights.md) | ✔️ |
| [Azure Communication Services Voice and video calling events](../../../../event-grid/communication-services-voice-video-events.md) | ❌ |
communication-services Teams User Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user-calling.md
The following list presents the set of features that are currently available in
| | Honor setting "SIP devices can be used for calls" | ✔️ | | DevOps | [Azure Metrics](../metrics.md) | ✔️ | | | [Azure Monitor](../logging-and-diagnostics.md) | ✔️ |
-| | [Azure Communication Services Insights](../analytics/insights.md) | ✔️ |
+| | [Azure Communication Services Insights](../analytics/insights/voice-and-video-insights.md) | ✔️ |
| | [Azure Communication Services Voice and video calling events](../../../event-grid/communication-services-voice-video-events.md) | ❌ | | | [Teams Call Analytics](/MicrosoftTeams/use-call-analytics-to-troubleshoot-poor-call-quality) | ✔️ | | | [Teams real-time Analytics](/microsoftteams/use-real-time-telemetry-to-troubleshoot-poor-meeting-quality) | ❌ |
communication-services Meeting Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/meeting-capabilities.md
The following list of capabilities is allowed when Teams user participates in Te
| | Honor setting "Meeting reactions" | No API available | | DevOps | [Azure Metrics](../../metrics.md) | ✔️ | | | [Azure Monitor](../../logging-and-diagnostics.md) | ✔️ |
-| | [Azure Communication Services Insights](../../analytics/insights.md) | ✔️ |
+| | [Azure Communication Services Insights](../../analytics/insights/voice-and-video-insights.md) | ✔️ |
| | [Azure Communication Services Voice and video calling events](../../../../event-grid/communication-services-voice-video-events.md) | ❌ | | | [Teams Call Analytics](/MicrosoftTeams/use-call-analytics-to-troubleshoot-poor-call-quality) | ✔️ | | | [Teams real-time Analytics](/microsoftteams/use-real-time-telemetry-to-troubleshoot-poor-meeting-quality) | ❌ |
communication-services Phone Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/phone-capabilities.md
The following list of capabilities is supported for scenarios where at least one
| Teams dial out plan policies | Start a phone call honoring dial plan policy | ❌ | | DevOps | [Azure Metrics](../../metrics.md) | ✔️ | | | [Azure Monitor](../../logging-and-diagnostics.md) | ✔️ |
-| | [Azure Communication Services Insights](../../analytics/insights.md) | ✔️ |
+| | [Azure Communication Services Insights](../../analytics/insights/voice-and-video-insights.md) | ✔️ |
| | [Azure Communication Services Voice and video calling events](../../../../event-grid/communication-services-voice-video-events.md) | ❌ | | | [Teams Call Analytics](/MicrosoftTeams/use-call-analytics-to-troubleshoot-poor-call-quality) | ✔️ | | | [Teams real-time Analytics](/microsoftteams/use-real-time-telemetry-to-troubleshoot-poor-meeting-quality) | ❌ |
communication-services Classification Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/classification-concepts.md
When you submit a job to Job Router, you can either specify the queue, priority, and worker selectors manually or you can specify a classification policy to drive these values.
-If you choose to use a classification policy, you will receive a [JobClassified Event][job_classified_event] or a [JobClassificationFailed Event][job_classify_failed_event] with the result. Once the job has been successfully classified, it will be automatically queued. If the classification process fails, you'll need to intervene to fix it.
+If you choose to use a classification policy, you receive a [JobClassified Event][job_classified_event] or a [JobClassificationFailed Event][job_classify_failed_event] with the result. Once the job has been successfully classified, it's automatically queued. If the classification process fails, you need to intervene to fix it.
The process of classifying a Job involves optionally setting the following properties:
The process of classifying a Job involves optionally setting the following prope
The priority of a Job can be resolved during classification using one of many rule engines.
-See the [Rule concepts](router-rule-concepts.md) page for more information.
+For more information, see the [Rule concepts](router-rule-concepts.md) page.
## Worker selectors
-Each job carries a collection of worker selectors, that are evaluated against the worker labels. These are conditions that need to be true of a worker to be a match.
-You can use the classification policy to attach these conditions to a job. You can do this by specifying one or more selector attachments.
+Each job carries a collection of worker selectors that's evaluated against the worker labels. These conditions must be true for a worker to be a match.
+You can use the classification policy to attach these conditions to a job by specifying one or more selector attachments.
-For more information see the section [below](#using-label-selector-attachments).
+For more information, see the section [Using label selector attachments](#using-label-selector-attachments).
## Queue selectors You can also specify a collection of label selector attachments to select the Queue based on its labels.
-For more information see the section [below](#using-label-selector-attachments).
+For more information, see the section [Using label selector attachments](#using-label-selector-attachments).
## Using label selector attachments
The following label selector attachments are available:
**Static label selector -** Always attaches the given `LabelSelector` to the Job.
-**Conditional label selector -** Evaluates a condition defined by a [rule](router-rule-concepts.md). If it resolves to `true`, then the specified collection of selectors will be attached to the Job.
+**Conditional label selector -** Evaluates a condition defined by a [rule](router-rule-concepts.md). If it resolves to `true`, then the specified collection of selectors is attached to the Job (see the illustrative sketch after this list).
**Passthrough label selector -** Attaches a selector to the Job with the specified key and operator but gets the value from the Job label of the same key.
-**Rule label selector -** Sources a collection of selectors from one of many rule engines. Read the [RouterRule concepts](router-rule-concepts.md) page for more information.
+**Rule label selector -** Sources a collection of selectors from one of many rule engines. For more information, read the [RouterRule concepts](router-rule-concepts.md) page.
**Weighted allocation label selector -** Enables you to specify a percentage-based weighting and a collection of selectors to attach based on the weighting allocation. For example, you may want 30% of the Jobs to go to "Vendor 1" and 70% of Jobs to go to "Vendor 2".
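As a rough illustration of the conditional attachment idea described above, the object below sketches the intent only; the property names are hypothetical and don't reflect the exact Job Router API schema.

```javascript
// Illustrative shape only; field names are hypothetical, not the Job Router schema.
// Intent: when a job is labeled Urgent, require workers with the "Escalations" skill.
const classificationPolicy = {
  workerSelectorAttachments: [
    {
      kind: "conditional",
      condition: { kind: "expression", expression: "job.Urgent = true" }, // rule evaluated per job
      workerSelectors: [
        { key: "Skill", labelOperator: "equal", value: "Escalations" }
      ]
    }
  ]
};
```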
The following label selector attachments are available:
Once a Job has been classified, it can be reclassified in the following ways:
-1. You can update the Job labels, which will cause the Job Router to evaluate the new labels with the previous Classification Policy.
-2. You can update the Classification Policy ID of a Job, which will cause Job Router to process the existing Job against the new policy.
+1. You can update the Job labels, which causes the Job Router to evaluate the new labels with the previous Classification Policy.
+2. You can update the Classification Policy ID of a Job, which causes Job Router to process the existing Job against the new policy.
3. An Exception Policy **trigger** can take the **action** of requesting a Job be reclassified. > [!NOTE]
communication-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/service-limits.md
Chat messages are stored for 90 days. Submit [a request to Azure Support](../../
## Voice and video calling
+### PSTN Call limitations
+
+| **Name** | Limit |
+|--|--|
+|Number of outbound concurrent calls | 2 |
+ ### Call maximum limitations | **Name** | Limit |
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
Title: Azure Communication Services Calling SDK overview
-description: Provides an overview of the Calling SDK.
+description: Provides an overview of the Calling SDK capabilities, limitations, and features for video and audio.
# Calling SDK overview
-The Calling SDK enables end-user devices to drive voice and video communication experiences. This page provides detailed descriptions of Calling features, including platform and browser support information. To get started right away, please check out [Calling quickstarts](../../quickstarts/voice-video-calling/getting-started-with-calling.md) or [Calling hero sample](../../samples/calling-hero-sample.md).
+The Calling SDK enables end-user devices to drive voice and video communication experiences. This page provides detailed descriptions of Calling features, including platform and browser support information. To get started right away, check out [Calling quickstarts](../../quickstarts/voice-video-calling/getting-started-with-calling.md) or [Calling hero sample](../../samples/calling-hero-sample.md).
Once you've started development, check out the [known issues page](../known-issues.md) to find bugs we're working on. Key features of the Calling SDK: -- **Addressing** - Azure Communication Services provides generic [identities](../identity-model.md) that are used to address communication endpoints. Clients use these identities to authenticate to the service and communicate with each other. These identities are used in Calling APIs that provides clients visibility into who is connected to a call (the roster).-- **Encryption** - The Calling SDK encrypts traffic and prevents tampering on the wire.
+- **Addressing** - Azure Communication Services provides generic [identities](../identity-model.md) that are used to address communication endpoints. Clients use these identities to authenticate to the service and communicate with each other. These identities are used in Calling APIs that provide clients visibility into who is connected to a call (the roster).
+- **Encryption** - The Calling SDK encrypts traffic and prevents tampering on the wire.
- **Device Management and Media** - The Calling SDK provides facilities for binding to audio and video devices, encodes content for efficient transmission over the communications dataplane, and renders content to output devices and views that you specify. APIs are also provided for screen and application sharing. - **PSTN** - The Calling SDK can initiate voice calls with the traditional publicly switched telephone network, [using phone numbers you acquire in the Azure portal](../../quickstarts/telephony/get-phone-number.md) or programmatically.-- **Teams Meetings** - The Calling SDK can [join Teams meetings](../../quickstarts/voice-video-calling/get-started-teams-interop.md) and interact with the Teams voice and video dataplane. -- **Notifications** - The Calling SDK provides APIs allowing clients to be notified of an incoming call. In situations where your app is not running in the foreground, patterns are available to [fire pop-up notifications](../notifications.md) ("toasts") to inform end-users of an incoming call.
+- **Teams Meetings** - The Calling SDK can [join Teams meetings](../../quickstarts/voice-video-calling/get-started-teams-interop.md) and interact with the Teams voice and video dataplane.
+- **Notifications** - The Calling SDK provides APIs allowing clients to be notified of an incoming call. In situations where your app isn't running in the foreground, patterns are available to [fire pop-up notifications](../notifications.md) ("toasts") to inform end-users of an incoming call.
-## Detailed capabilities
+## Detailed capabilities
The following list presents the set of features that are currently available in the Azure Communication Services Calling SDKs. - | Group of features | Capability | JS | Windows | Java (Android) | Objective-C (iOS) | | -- | - | | - | -- | -- | | Core Capabilities | Place a one-to-one call between two users | ✔️ | ✔️ | ✔️ | ✔️ |
The following list presents the set of features that are currently available in
| | Show state of a call<br/>*Early Media, Incoming, Connecting, Ringing, Connected, Hold, Disconnecting, Disconnected* | ✔️ | ✔️ | ✔️ | ✔️ | | | Show if a participant is muted | ✔️ | ✔️ | ✔️ | ✔️ | | | Show the reason why a participant left a call | ✔️ | ✔️ | ✔️ | ✔️ |
-| Screen sharing | Share the entire screen from within the application | ✔️ | ❌ | ❌ | ❌ |
-| | Share a specific application (from the list of running applications) | ✔️ | ❌ | ❌ | ❌ |
-| | Share a web browser tab from the list of open tabs | ✔️ | ❌ | ❌ | ❌ |
+| Screen sharing | Share the entire screen from within the application | ✔️ | ✔️<sup>1</sup> | ✔️<sup>1</sup> | ✔️<sup>1</sup> |
+| | Share a specific application (from the list of running applications) | ✔️ | ✔️<sup>1</sup> | ❌ | ❌ |
+| | Share a web browser tab from the list of open tabs | ✔️ | | | |
| | Share system audio during screen sharing | ❌ | ❌ | ❌ | ❌ | | | Participant can view remote screen share | ✔️ | ✔️ | ✔️ | ✔️ | | Roster | List participants | ✔️ | ✔️ | ✔️ | ✔️ |
The following list presents the set of features that are currently available in
| | Get camera list | ✔️ | ✔️ | ✔️ | ✔️ | | | Set camera | ✔️ | ✔️ | ✔️ | ✔️ | | | Get selected camera | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Get microphone list | ✔️ | ✔️ | ❌ | ❌ |
-| | Set microphone | ✔️ | ✔️ | ❌ | ❌ |
-| | Get selected microphone | ✔️ | ✔️ | ❌ | ❌ |
-| | Get speakers list | ✔️ | ✔️ | ❌ | ❌ |
-| | Set speaker | ✔️ | ✔️ | ❌ | ❌ |
-| | Get selected speaker | ✔️ | ✔️ | ❌ | ❌ |
+| | Get microphone list | ✔️ | ✔️ | ❌ <sup>2</sup> | ❌<sup>2</sup> |
+| | Set microphone | ✔️ | ✔️ | ❌ <sup>2</sup> | ❌ <sup>2</sup> |
+| | Get selected microphone | ✔️ | ✔️ | ❌ <sup>2</sup> | ❌ <sup>2</sup> |
+| | Get speakers list | ✔️ | ✔️ | ❌ <sup>2</sup> | ❌ <sup>2</sup> |
+| | Set speaker | ✔️ | ✔️ | ❌ <sup>2</sup> | ❌ <sup>2</sup> |
+| | Get selected speaker | ✔️ | ✔️ | ❌ <sup>2</sup> | ❌ <sup>2</sup> |
| Video Rendering | Render single video in many places (local camera or remote stream) | ✔️ | ✔️ | ✔️ | ✔️ | | | Set / update scaling mode | ✔️ | ✔️ | ✔️ | ✔️ | | | Render remote video stream | ✔️ | ✔️ | ✔️ | ✔️ |
+1. The Share Screen capability can be achieved using Raw Media (this feature is in Public Preview). To learn **how to add Raw Media**, visit [the quickstart guide](../../quickstarts/voice-video-calling/get-started-raw-media-access.md).
+2. The Calling SDK doesn't have an explicit API; you need to use the OS (Android and iOS) APIs to achieve it.
+
+## UI Library
+
+UI Library makes it easy for you to build modern communications user experiences using Azure Communication Services. It provides a library of production-ready UI components to drop into your application.
+
+With this set of prebuilt controls, you can create beautiful designs using [the fluent UI SDK](https://developer.microsoft.com/en-us/fluentui#/) components and audio/video communication experiences. If you want to learn more about the UI Library, visit [the overview site](../ui-library/ui-library-overview.md), where you'll find details about web and mobile platforms.
## Calling SDK streaming support
-The Communication Services Calling SDK supports the following streaming configurations:
+
+The Azure Communication Services Calling SDK supports the following streaming configurations:
| Limit | Web | Windows/Android/iOS | | - | | -- | | **Maximum # of outgoing local streams that can be sent simultaneously** | 1 video and 1 screen sharing | 1 video + 1 screen sharing | | **Maximum # of incoming remote streams that can be rendered simultaneously** | 4 videos + 1 screen sharing | 6 videos + 1 screen sharing |
-While the Calling SDK won't enforce these limits, your users may experience performance degradation if they're exceeded.
+While the Calling SDK doesn't enforce these limits, your users may experience performance degradation if they're exceeded.
## Calling SDK timeouts
The following timeouts apply to the Communication Services Calling SDKs:
| PSTN call establishment timeout | 115 | | Promote 1:1 call to a group call timeout | 115 |
-## Maximum call duration:
-The maximum call duration is 30 hours, participants that reach the maximum call duration lifetime of 30 hours will be disconnected from the call.
+## Maximum call duration
+**The maximum call duration is 30 hours.** Participants who reach the maximum call duration of 30 hours are disconnected from the call.
## JavaScript Calling SDK support by OS and browser
-The following table represents the set of supported browsers which are currently available. **We support the most recent three major versions of the browser (most recent three minor versions for Safari)** unless otherwise indicated.
+The following table represents the set of supported browsers that are currently available. **We support the most recent three major versions of the browser (most recent three minor versions for Safari)** unless otherwise indicated.
| Platform | Chrome | Safari | Edge | Firefox | Webview | | | | | | - | - |
The following table represents the set of supported browsers which are currently
| Windows | ✔️ | ❌ | ✔️ | ✔️ | ❌ | | Ubuntu/Linux | ✔️ | ❌ | ❌ | ❌ | ❌ |
-* Outgoing Screen Sharing is not supported on iOS or Android.
-* Firefox support is in public preview.
-* ACS only supports Android System WebView on Android, iOS WebView(WKWebView) in public preview. Other types of embedded browsers or WebView on other OS platforms are not officially supported, for example, GeckoView, Chromium Embedded Framework (CEF), Microsoft Edge WebView2.
-Running JavaScript Calling SDK on these platforms is not actively tested, it may or may not work.
-* [An iOS app on Safari can't enumerate/select mic and speaker devices](../known-issues.md#enumerating-devices-isnt-possible-in-safari-when-the-application-runs-on-ios-or-ipados) (for example, Bluetooth); this is a limitation of the OS, and there's always only one device, OS controls default device selection.
+- Outgoing Screen Sharing isn't supported on iOS or Android.
+- Firefox support is in public preview.
+- ACS only supports Android System WebView on Android, iOS WebView(WKWebView) in public preview. Other types of embedded browsers or WebView on other OS platforms aren't officially supported, for example, GeckoView, Chromium Embedded Framework (CEF), Microsoft Edge WebView2.
+Running the JavaScript Calling SDK on these platforms isn't actively tested; it may or may not work.
+- [An iOS app on Safari can't enumerate/select mic and speaker devices](../known-issues.md#enumerating-devices-isnt-possible-in-safari-when-the-application-runs-on-ios-or-ipados) (for example, Bluetooth); this issue is a limitation of the OS: there's always only one device, and the OS controls default device selection.
## Android Calling SDK support
-* Support for Android API Level 21 or Higher
+- Support for Android API Level 21 or Higher
-* Support for Java 7 or higher
+- Support for Java 7 or higher
-* Support for Android Studio 2.0
+- Support for Android Studio 2.0
## iOS Calling SDK support
-* Support for iOS 10.0+ at build time, and iOS 12.0+ at run time
+- Support for iOS 10.0+ at build time, and iOS 12.0+ at run time
-* Xcode 12.0+
+- Xcode 12.0+
## Calling client - browser security model
For local development, you can use `http://localhost`.
### Embed the Communication Services Calling SDK in an iframe
-A new [permissions policy (also called a feature policy)](https://www.w3.org/TR/permissions-policy-1/#iframe-allow-attribute) is being adopted by various browsers. This policy affects calling scenarios by controlling how applications can access a device's camera and microphone through a cross-origin iframe element.
+A new [permissions policy (also called a feature policy)](https://www.w3.org/TR/permissions-policy-1/#iframe-allow-attribute) is available in various browsers. This policy affects calling scenarios by controlling how applications can access a device's camera and microphone through a cross-origin iframe element.
If you want to use an iframe to host part of the app from a different domain, you must add the `allow` attribute with the correct value to your iframe.
For example, this iframe allows both camera and microphone access:
<iframe allow="camera *; microphone *"> ```
+## Known issues
+
+Once you've started development, check out the [known issues page](../known-issues.md) to find bugs we're working on.
+ ## Next steps > [!div class="nextstepaction"] > [Get started with calling](../../quickstarts/voice-video-calling/getting-started-with-calling.md) For more information, see the following articles:+ - Familiarize yourself with general [call flows](../call-flows.md) - Learn about [call types](../voice-video-calling/about-call-types.md) - [Plan your PSTN solution](../telephony/plan-solution.md)
communication-services Pre Call Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/pre-call-diagnostics.md
[!INCLUDE [Public Preview Disclaimer](../../includes/public-preview-include.md)]
-The Pre-Call API enables developers to programmatically validate a clientΓÇÖs readiness to join an Azure Communication Services Call. The Pre-Call APIs can be accessed through the Calling SDK. They provide multiple diagnostics including device, connection, and call quality. Pre-Call APIs are available only for Web (JavaScript). We will be enabling these capabilities across platforms in the future, please provide us feedback on what platforms you would like to see Pre-Call APIs on.
+The Pre-Call API enables developers to programmatically validate a client's readiness to join an Azure Communication Services Call. The Pre-Call APIs can be accessed through the Calling SDK. They provide multiple diagnostics including device, connection, and call quality. Pre-Call APIs are available only for Web (JavaScript). We'll be enabling these capabilities across platforms in the future; provide us with feedback on which platforms you would like to see Pre-Call APIs on.
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - [Node.js](https://nodejs.org/) active Long Term Support(LTS) versions are recommended. - An active Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).-- A User Access Token to instantiate the call client. Learn how to [create and manage user access tokens](../../quickstarts/identity/access-tokens.md). You can also use the Azure CLI and run the command below with your connection string to create a user and an access token. (Need to grab connection string from the resource through Azure portal.)
+- A User Access Token to instantiate the call client. Learn how to [create and manage user access tokens](../../quickstarts/identity/access-tokens.md). You can also use the Azure CLI and run the following command with your connection string to create a user and an access token. (You need to get the connection string from the resource through the Azure portal.)
```azurecli-interactive az communication identity token issue --scope voip --connection-string "yourConnectionString"
The Pre-Call API enables developers to programmatically validate a clientΓÇÖs re
## Accessing Pre-Call APIs >[!IMPORTANT]
->Pre-Call diagnostics are available starting on the version [1.9.1-beta.1](https://www.npmjs.com/package/@azure/communication-calling/v/1.9.1-beta.1) of the Calling SDK. Make sure to use that version when trying the instructions below.
+>Pre-Call diagnostics are available starting on the version [1.9.1-beta.1](https://www.npmjs.com/package/@azure/communication-calling/v/1.9.1-beta.1) of the Calling SDK. Make sure to use that version when trying the following instructions.
-To Access the Pre-Call API, you will need to initialize a `callClient` and provision an Azure Communication Services access token. There you can access the `PreCallDiagnostics` feature and the `startTest` method.
+To access the Pre-Call API, you need to initialize a `callClient` and provision an Azure Communication Services access token. There you can access the `PreCallDiagnostics` feature and the `startTest` method.
```javascript import { CallClient, Features} from "@azure/communication-calling";
export declare type PreCallDiagnosticsResult = {
```
-Individual result objects can be accessed as such using the `preCallDiagnosticsResult` constant above. Results for individual tests will be returned as they are completed with many of the test results being available immediately. In the case of the `inCallDiagnostics` test, the results might take up to 1 minute as the test validates quality of the video and audio.
+Individual result objects can be accessed using the `preCallDiagnosticsResult` constant. Results for individual tests are returned as they're completed; many of the test results are available immediately. If you use the `inCallDiagnostics` test, the results might take up to 1 minute as the test validates the quality of the video and audio.
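As a minimal sketch, assuming the result fields (such as `browserSupport` and `inCallDiagnostics`) are promises as suggested by the result type above, individual results can be awaited like this:

```javascript
// Minimal sketch: await individual pre-call diagnostic results as they complete.
// Assumes preCallDiagnosticsResult was returned by the PreCallDiagnostics startTest call above.
const browserSupport = await preCallDiagnosticsResult.browserSupport;
console.log(`Browser: ${browserSupport.browser}, OS: ${browserSupport.os}`); // Supported | NotSupported

// The in-call diagnostics test can take up to a minute while audio/video quality is validated.
const inCallDiagnostics = await preCallDiagnosticsResult.inCallDiagnostics;
console.log(`Connected: ${inCallDiagnostics.connected}, bandwidth: ${inCallDiagnostics.bandWidth}`);
```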
### Browser support Browser compatibility check. Checks for `Browser` and `OS` compatibility and provides a `Supported` or `NotSupported` value back.
Permission check. Checks whether video and audio devices are available from a pe
```
-In the case that the test fails and the permissions are false for audio and video, the user shouldn't continue into joining a call. Rather you will need to prompt the user to enable the permissions. To do this, it is best to provide specific instruction on how to access permission access based on the OS, version and browser they are on. For more information on permissions check out our [recommendations](https://techcommunity.microsoft.com/t5/azure-communication-services/checklist-for-advanced-calling-experiences-in-mobile-web/ba-p/3266312).
+If the test fails and the permissions are false for audio and video, the user shouldn't continue into joining a call. Instead, you need to prompt the user to enable the permissions. The best way to do this is to provide specific instructions on how to grant permission access based on the OS, version, and browser they're on. For more information on permissions, check out our [recommendations](https://techcommunity.microsoft.com/t5/azure-communication-services/checklist-for-advanced-calling-experiences-in-mobile-web/ba-p/3266312).
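For example, a minimal sketch of that guard, assuming the device access result exposes `audio` and `video` booleans as described for the permission check, and using a hypothetical UI helper:

```javascript
// Minimal sketch: block call join and surface guidance when permissions are missing.
const deviceAccess = await preCallDiagnosticsResult.deviceAccess;
if (!deviceAccess.audio || !deviceAccess.video) {
  // Don't proceed into the call; show OS/browser-specific instructions instead.
  showPermissionInstructions({ audio: deviceAccess.audio, video: deviceAccess.video }); // hypothetical UI helper
}
```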
### Device enumeration Device availability. Checks whether microphone, camera and speaker devices are detected in the system and ready to use. Provides an `Available` or `NotAvailable` value back.
Device availability. Checks whether microphone, camera and speaker devices are d
```
-In the case that devices are not available, the user shouldn't continue into joining a call. Rather the user should be prompted to check device connections to ensure any headsets, cameras or speakers are properly connected. For more information on device management check out our [documentation](../../how-tos/calling-sdk/manage-video.md?pivots=platform-web#device-management)
+If devices aren't available, the user shouldn't continue into joining a call. Instead, the user should be prompted to check device connections to ensure any headsets, cameras, or speakers are properly connected. For more information on device management, check out our [documentation](../../how-tos/calling-sdk/manage-video.md?pivots=platform-web#device-management)
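A similar sketch for the device check, assuming the enumeration result reports `microphone`, `camera`, and `speaker` as `Available` or `NotAvailable` as described above:

```javascript
// Minimal sketch: prompt the user to check hardware when any device isn't detected.
const deviceEnumeration = await preCallDiagnosticsResult.deviceEnumeration;
const missingDevices = ['microphone', 'camera', 'speaker']
  .filter((device) => deviceEnumeration[device] !== 'Available');
if (missingDevices.length > 0) {
  console.warn(`Check connections for: ${missingDevices.join(', ')}`); // e.g. headset, camera, speakers
}
```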
### InCall diagnostics Performs a quick call to check in-call metrics for audio and video and provides results back. Includes connectivity (`connected`, boolean), bandwidth quality (`bandWidth`, `'Bad' | 'Average' | 'Good'`) and call diagnostics for audio and video (`diagnostics`). Diagnostic are provided `jitter`, `packetLoss` and `rtt` and results are generated using a simple quality grade (`'Bad' | 'Average' | 'Good'`).
-InCall diagnostics leverages [media quality stats](./media-quality-sdk.md) to calculate quality scores and diagnose issues. During the pre-call diagnostic, the full set of media quality stats are available for consumption. These will include raw values across video and audio metrics that can be used programatically. The InCall diagnostic provides a convenience layer on top of media quality stats to consume the results without the need to process all the raw data. See section on media stats for instructions to access.
+InCall diagnostics uses [media quality stats](./media-quality-sdk.md) to calculate quality scores and diagnose issues. During the pre-call diagnostic, the full set of media quality stats are available for consumption. These stats include raw values across video and audio metrics that can be used programmatically. The InCall diagnostic provides a convenience layer on top of media quality stats to consume the results without the need to process all the raw data. See section on media stats for instructions to access.
```javascript
InCall diagnostics leverages [media quality stats](./media-quality-sdk.md) to ca
At this step, there are multiple failure points to watch out for. The values provided by the API are based on the threshold values required by the service. Those raw thresholds can be found in our [media quality stats documentation](./media-quality-sdk.md#best-practices). -- If connection fails, the user should be prompted to recheck their network connectivity. Connection failures can also be attributed to network conditions like DNS, Proxies or Firewalls. For more information on recommended network setting check out our [documentation](network-requirements.md).
+- If connection fails, the user should be prompted to recheck their network connectivity. Connection failures can also be attributed to network conditions like DNS, Proxies or Firewalls. For more information on recommended network settings, check out our [documentation](network-requirements.md).
- If bandwidth is `Bad`, the user should be prompted to try out a different network or verify the bandwidth availability on their current one. Ensure no other high bandwidth activities might be taking place. ### Media stats
-For granular stats on quality metrics like jitter, packet loss, rtt, etc. `callMediaStatistics` are provided as part of the `preCallDiagnosticsResult` feature. See the [full list and description of the available metrics](./media-quality-sdk.md) in the linked article. You can subscribe to the call media stats to get full collection of them. This is the raw metrics that are used to calculate InCall diagnostic results and which can be consumed granularly for further analysis.
+For granular stats on quality metrics like jitter, packet loss, and rtt, `callMediaStatistics` is provided as part of the `preCallDiagnosticsResult` feature. See the [full list and description of the available metrics](./media-quality-sdk.md) in the linked article. You can subscribe to the call media stats to get the full collection of them. These are the raw metrics used to calculate the InCall diagnostic results, and they can be consumed granularly for further analysis.
```javascript
communication-services Video Effects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/video-effects.md
Title: Azure Communication Services Calling video WebJS video effects
-description: In this document, you'll learn how to create video effects on an Azure Communication Services call.
+description: In this document, you learn how to create video effects on an Azure Communication Services call.
> [!NOTE] > Currently, creating video background effects is only supported on Chrome and Edge desktop browsers (Windows and Mac) and Safari desktop on Mac.
-The Azure Communication Calling SDK allows you to create video effects that other users on a call will be able to see. For example, for a user doing ACS calling using the WebJS SDK you can now enable that the user can turn on background blur. When background blur enabled a user can feel more comfortable in doing a video call that the output video will just show a user and all other content will be blurred.
+The Azure Communication Calling SDK allows you to create video effects that other users on a call can see. For example, for a user making an ACS call with the WebJS SDK, you can now enable background blur. When background blur is enabled, a user can feel more comfortable on a video call because the output video shows only the user, and all other content is blurred.
## Prerequisites ### Install the Azure Communication Services Calling SDK - An Azure account with an active subscription is required. See [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) on how to create an Azure account. - [Node.js](https://nodejs.org/) active Long Term Support(LTS) versions are recommended. - An active Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).-- A User Access Token to instantiate a call client. Learn how to [create and manage user access tokens](../../quickstarts/identity/access-tokens.md). You can also use the Azure CLI and run the command below with your connection string to create a user and an access token. (Need to grab connection string from the resource through Azure portal.)
+- A User Access Token to instantiate a call client. Learn how to [create and manage user access tokens](../../quickstarts/identity/access-tokens.md). You can also use the Azure CLI and run the command with your connection string to create a user and an access token. (Need to grab connection string from the resource through Azure portal.)
- Azure Communication Calling client library is properly set up and configured (https://www.npmjs.com/package/@azure/communication-calling).
-An example using the Azure CLI to
+An example using the Azure CLI to issue a token:
```azurecli-interactive az communication identity token issue --scope voip --connection-string "yourConnectionString" ```
Use the 'npm install' command to install the Azure Communication Calling Effects
```console npm install @azure/communication-calling-effects --save ```
-See [here](https://www.npmjs.com/package/@azure/communication-calling-effects) for more details on the calling commmunication effects npm package page.
+For more details on the calling communication effects, visit the [npm package page](https://www.npmjs.com/package/@azure/communication-calling-effects).
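The snippet below is a rough sketch of turning on background blur for a local video stream; the class and method names (`BackgroundBlurEffect`, `Features.VideoEffects`, `isSupported`, `startEffects`) are assumptions based on the effects package and should be verified against the npm package documentation linked above.

```javascript
// Rough sketch only; verify API names against the @azure/communication-calling-effects docs.
import { Features } from '@azure/communication-calling';
import { BackgroundBlurEffect } from '@azure/communication-calling-effects';

async function enableBackgroundBlur(localVideoStream) {
  const videoEffectsApi = localVideoStream.feature(Features.VideoEffects); // assumed feature accessor
  const blurEffect = new BackgroundBlurEffect();
  if (await blurEffect.isSupported()) {   // assumed capability check
    await videoEffectsApi.startEffects(blurEffect);
  }
}
```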
+## Supported video effects: Currently the video effects support the following abilities:
communication-services Push Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/push-notifications.md
Last updated 08/10/2021
-zone_pivot_groups: acs-plat-ios-android
+zone_pivot_groups: acs-plat-web-ios-android
#Customer intent: As a developer, I want to enable push notifications with the acs sdks so that I can create a calling application that provides push notifications to its users.
Here, we'll learn how to enable push notifications for Azure Communication Servi
- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/identity/access-tokens.md). - Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md) + ::: zone pivot="platform-android" [!INCLUDE [Enable push notifications Android](./includes/push-notifications/push-notifications-android.md)] ::: zone-end
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/overview.md
Azure Communication Services supports various communication formats:
- [SMS](concepts/sms/concepts.md) - [Email](concepts/email/email-overview.md)
-You can connect custom client apps, custom services, and the publicly switched telephony network (PSTN) to your communications experience. You can acquire [phone numbers](./concepts/telephony/plan-solution.md) directly through Azure Communication Services REST APIs, SDKs, or the Azure portal; and use these numbers for SMS or calling applications or you can simply integrate email capabilities to your applications using production-ready email SDKs. Azure Communication Services [direct routing](./concepts/telephony/plan-solution.md) allows you to use SIP and session border controllers to connect your own PSTN carriers and bring your own phone numbers.
+You can connect custom client apps, custom services, and the public switched telephone network (PSTN) to your communications experience. You can acquire [phone numbers](./concepts/telephony/plan-solution.md) directly through Azure Communication Services REST APIs, SDKs, or the Azure portal; and use these numbers for SMS or calling applications or you can integrate email capabilities to your applications using production-ready email SDKs. Azure Communication Services [direct routing](./concepts/telephony/plan-solution.md) allows you to use SIP and session border controllers to connect your own PSTN carriers and bring your own phone numbers.
-In addition to REST APIs, [Azure Communication Services client libraries](./concepts/sdk-options.md) are available for various platforms and languages, including Web browsers (JavaScript), iOS (Swift), Android (Java), Windows (.NET). A [UI library](https://aka.ms/acsstorybook) can accelerate development for Web, iOS, and Android apps. Azure Communication Services is identity agnostic and you control how end users are identified and authenticated.
+In addition to REST APIs, [Azure Communication Services client libraries](./concepts/sdk-options.md) are available for various platforms and languages, including Web browsers (JavaScript), iOS (Swift), Android (Java), Windows (.NET). A [UI library](./concepts/ui-library/ui-library-overview.md) can accelerate development for Web, iOS, and Android apps. Azure Communication Services is identity agnostic and you control how end users are identified and authenticated.
Scenarios for Azure Communication Services include: - **Business to Consumer (B2C).** Employees and services engage external customers using voice, video, and text chat in browser and native apps. An organization can send and receive SMS messages, or [operate an interactive voice response system (IVR)](https://github.com/microsoft/botframework-telephony/blob/main/EnableTelephony.md) using a phone number you acquire through Azure. [Integration with Microsoft Teams](./quickstarts/voice-video-calling/get-started-teams-interop.md) can be used to connect consumers to Teams meetings hosted by employees; ideal for remote healthcare, banking, and product support scenarios where employees might already be familiar with Teams. - **Consumer to Consumer (C2C).** Build engaging consumer-to-consumer interaction with voice, video, and rich text chat. Any type of user interface can be built on Azure Communication Services SDKs, or use complete application samples and an open-source UI toolkit to help you get started quickly.
-To learn more, check out our [Microsoft Mechanics video](https://www.youtube.com/watch?v=apBX7ASurgM) or the resources linked below.
+To learn more, check out our [Microsoft Mechanics video](https://www.youtube.com/watch?v=apBX7ASurgM) or the resources linked next.
## Common scenarios
To learn more, check out our [Microsoft Mechanics video](https://www.youtube.com
| | | |**[Create a Communication Services resource](./quickstarts/create-communication-resource.md)**|Begin using Azure Communication Services by using the Azure portal or Communication Services SDK to provision your first Communication Services resource. Once you have your Communication Services resource connection string, you can provision your first user access tokens.| |**[Get a phone number](./quickstarts/telephony/get-phone-number.md)**|Use Azure Communication Services to provision and release telephone numbers. These telephone numbers can be used to initiate or receive phone calls and build SMS solutions.|
-|**[Send an SMS from your app](./quickstarts/sms/send.md)**| Azure Communication Services SMS REST APIs and SDKs is used send and receive SMS messages from service applications.|
-|**[Send an Email from your app](./quickstarts/email/send-email.md)**| Azure Communication Services Email REST APIs and SDKs is used send an email messages from service applications.|
+|**[Send an SMS from your app](./quickstarts/sms/send.md)**| Azure Communication Services SMS REST APIs and SDKs are used to send and receive SMS messages from service applications.|
+|**[Send an Email from your app](./quickstarts/email/send-email.md)**| Azure Communication Services Email REST APIs and SDKs are used to send email messages from service applications.|
After creating a Communication Services resource you can start building client scenarios, such as voice and video calling or text chat:
After creating a Communication Services resource you can start building client s
| | | |**[Create your first user access token](./quickstarts/identity/access-tokens.md)**|User access tokens authenticate clients against your Azure Communication Services resource. These tokens are provisioned and reissued using Communication Services Identity APIs and SDKs.| |**[Get started with voice and video calling](./quickstarts/voice-video-calling/getting-started-with-calling.md)**| Azure Communication Services allows you to add voice and video calling to your browser or native apps using the Calling SDK. |
-|**[Add telephony calling to your app](./quickstarts/telephony/pstn-call.md)**|With Azure Communication Services you can add telephony calling capabilities to your application.|
+|**[Add telephony calling to your app](./quickstarts/telephony/pstn-call.md)**|With Azure Communication Services, you can add telephony calling capabilities to your application.|
|**[Join your calling app to a Teams meeting](./quickstarts/voice-video-calling/get-started-teams-interop.md)**|Azure Communication Services can be used to build custom meeting experiences that interact with Microsoft Teams. Users of your Communication Services solution(s) can interact with Teams participants over voice, video, chat, and screen sharing.| |**[Get started with chat](./quickstarts/chat/get-started.md)**|The Azure Communication Services Chat SDK is used to add rich real-time text chat into your applications.|
-|**[Connect a Microsoft Bot to a phone number](https://github.com/microsoft/botframework-telephony)**|Telephony channel is a channel in Microsoft Bot Framework that enables the bot to interact with users over the phone. It leverages the power of Microsoft Bot Framework combined with the Azure Communication Services and the Azure Speech Services. |
+|**[Connect a Microsoft Bot to a phone number](https://github.com/microsoft/botframework-telephony)**|Telephony channel is a channel in Microsoft Bot Framework that enables the bot to interact with users over the phone. It uses the power of Microsoft Bot Framework combined with the Azure Communication Services and the Azure Speech Services. |
| **[Add visual communication experiences](https://aka.ms/acsstorybook)** | The UI Library for Azure Communication Services enables you to easily add rich, visual communication experiences to your applications for both calling and chat. | ## Samples
The following samples demonstrate end-to-end usage of the Azure Communication Se
## Platforms and SDK libraries
-Learn more about the Azure Communication Services SDKs with the resources below. REST APIs are available for most functionality if you want to build your own clients or otherwise access the service over the Internet.
+Learn more about the Azure Communication Services SDKs with the resources listed next. REST APIs are available for most functionality if you want to build your own clients or otherwise access the service over the Internet.
| Resource | Description | | | |
Learn more about the Azure Communication Services SDKs with the resources below.
|**[Chat SDK overview](./concepts/chat/sdk-features.md)**|Review the Communication Services Chat SDK overview.| |**[SMS SDK overview](./concepts/sms/sdk-features.md)**|Review the Communication Services SMS SDK overview.| |**[Email SDK overview](./concepts/email/sdk-features.md)**|Review the Communication Services Email SDK overview.|
-|**[UI Library overview](https://aka.ms/acsstorybook)**| Review the UI Library for the Communication Services |
+|**[UI Library overview](./concepts/ui-library/ui-library-overview.md)**| Review the UI Library for Communication Services. |
## Design resources
Find comprehensive components, composites, and UX guidance in the [UI Library De
## Other Microsoft Communication Services
-There are two other Microsoft communication products you may consider using that are not directly interoperable with Communication Services at this time:
+There are two other Microsoft communication products you may consider using; these products aren't directly interoperable with Communication Services at this time:
+ - [Microsoft Graph Cloud Communication APIs](/graph/cloud-communications-concept-overview) allow organizations to build communication experiences tied to Azure Active Directory users with Microsoft 365 licenses. This workflow is ideal for applications tied to Azure Active Directory or where you want to extend productivity experiences in Microsoft Teams. There are also APIs to build applications and customization within the [Teams experience.](/microsoftteams/platform/?preserve-view=true&view=msteams-client-js-latest)
- [Azure PlayFab Party](/gaming/playfab/features/multiplayer/networking/) simplifies adding low-latency chat and data communication to games. While you can power gaming chat and networking systems with Communication Services, PlayFab is a tailored option and free on Xbox.
communication-services Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/chat/get-started.md
In this quickstart you learned how to:
You may also want to:
+ - Get started with the [UI Library](../../concepts/ui-library/ui-library-overview.md)
- Learn about [chat concepts](../../concepts/chat/concepts.md) - Familiarize yourself with [Chat SDK](../../concepts/chat/sdk-features.md) - Using [Chat SDK in your React Native](./react-native.md) application.
communication-services Add Custom Verified Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/add-custom-verified-domains.md
To provision a custom domain you need to
9. You can verify the ownership of the domain by clicking **Verify Domain**
- :::image type="content" source="./media/email-domains-custom-added.png" alt-text="Screenshot that shows that custom domain is successfully added for verification.":::.
+ :::image type="content" source="./media/email-domains-custom-added.png" alt-text="Screenshot that shows that the custom domain is successfully added for verification.":::.
10. If you would like to resume the verification later, you can click **Close** and resume the verification from **Provision Domains** by clicking **Configure** .
To provision a custom domain you need to
## Changing MailFrom and FROM display name for custom domains
-When Azure Managed Domain is provisioned to send mail, it has default Mail from address as donotreply@xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.azurecomm.net and the FROM display name would be the same. You'll able to configure and change the Mail from address and FROM display name to more user friendly value.
+When a custom domain is provisioned to send mail, it has a default MailFrom address of donotreply@notification.azurecommtest.net, and the FROM display name is the same. You can configure and change the MailFrom address and FROM display name to a more user-friendly value.
1. Go to the overview page of the Email Communications Service resource that you created earlier. 2. Click **Provision Domains** on the left navigation panel. You'll see a list of provisioned domains.
When Azure Managed Domain is provisioned to send mail, it has default Mail from
7. Click **Save**. You'll see the updated values in the overview page.
- :::image type="content" source="./media/email-domains-overview-updated.png" alt-text="Screenshot that shows that how to save the modified values of Mail From and display name." lightbox="media/email-domains-custom-overview-expanded.png":::
+ :::image type="content" source="./media/email-domains-overview-updated.png" alt-text="Screenshot that shows how to save the modified values of Mail From and display name." lightbox="media/email-domains-overview-updated-expanded.png":::
**Your email domain is now ready to send emails.**
communication-services Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/sms/send.md
In this quickstart, you learned how to send SMS messages by using Communication
> [Receive and reply to SMS](./receive-sms.md) > [!div class="nextstepaction"]
-> [Phone number types](../../concepts/telephony/plan-solution.md)
+> [Enable SMS analytics](../../concepts/analytics/insights/sms-insights.md)
> [!div class="nextstepaction"]
-> [Learn more about SMS](../../concepts/sms/concepts.md)
+> [Phone number types](../../concepts/telephony/plan-solution.md)
communication-services Get Started Live Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-live-stream.md
Title: Quickstart - Add live stream to your app
-description: In this quickstart, you'll learn how to add live stream calling capabilities to your app using Azure Communication Services.
+description: In this quickstart, you learn how to add live stream calling capabilities to your app using Azure Communication Services.
# Live stream quick start
-Live streaming will empower Contoso to engage thousands of online attendees by adding interactive live audio and video streaming functionality into their web and
-mobile applications that their audiences will love, no matter where they are. Interactive Live Streaming is the ability to broadcast media content to thousands of online
-attendees while enabling some attendees to share their live audio and video, interact via chat, and engage with metadata content such as reactions, polls, quizzes, ads, etc.
+Live streaming empowers Contoso to engage thousands of online attendees by adding interactive live audio and video streaming functionality into their web and mobile applications that their audiences love, no matter where they are.
+
+Interactive Live Streaming is the ability to broadcast media content to thousands of online attendees while enabling some attendees to share their live audio and video, interact via chat, and engage with metadata content such as reactions, polls, quizzes, ads, etc.
## Prerequisites [!INCLUDE [Public Preview](../../includes/private-preview-include-section.md)] -- [Rooms](../rooms/get-started-rooms.md) meeting will be needed for role-based streaming.-- The quick start examples here are available with the private preview version [1.11.0-alpha.20230124.1](https://www.npmjs.com/package/@azure/communication-calling/v/1.11.0-alpha.20230124.1) of the calling Web SDK. Make sure to use that or higher version when trying this quick start.
+- A [Rooms](../rooms/get-started-rooms.md) meeting is needed for role-based streaming.
+- The quick start examples here are available with the preview version [1.11.0-alpha.20230124.1](https://www.npmjs.com/package/@azure/communication-calling/v/1.11.0-alpha.20230124.1) of the calling Web SDK. Make sure to use that version or higher when trying this quick start.
## Live streaming with Rooms Room participants can be assigned one of the following roles: **Presenter**, **Attendee**, and **Consumer**. By default, a user is assigned a **Consumer** role if no other role is assigned.
-Participants with `Consumer` role will be receiving only the live stream. They won't be able to speak or share video or screen. Developers shouldn't show the unmute, share video, and screen option to end users/consumers. Live stream supports both open and closed Rooms. In Open Rooms the default role is `Consumer`.
+Participants with the `Consumer` role receive only the live stream. They can't speak or share video or screen. Developers shouldn't show the unmute, share video, and share screen options to end users/consumers. Live stream supports both open and closed Rooms. In Open Rooms, the default role is `Consumer`.
On the other hand, participants with other roles receive both the real-time and live streams. Developers can choose either stream to play. Check [participant roles and permissions](../../concepts/rooms/room-concept.md#predefined-participant-roles-and-permissions) to learn more about the roles' capabilities. ### Place a Rooms call (start live streaming)
-Live streaming will start when the Rooms call starts.
+Live streaming starts when the Rooms call starts.
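For orientation, a minimal sketch of joining a Rooms call with the preview Calling Web SDK is shown next; the token, room ID, and helper name are placeholders, and the `Features.LiveStream` usage simply mirrors the preview snippet in this quickstart rather than a finalized API.

```js
// Minimal sketch, assuming the preview Calling Web SDK (1.11.0-alpha.20230124.1 or later)
// and a valid user access token for the joining participant.
import { CallClient, Features } from "@azure/communication-calling";
import { AzureCommunicationTokenCredential } from "@azure/communication-common";

async function joinRoomLiveStream(userAccessToken, roomId) {
  const callClient = new CallClient();
  const tokenCredential = new AzureCommunicationTokenCredential(userAccessToken);
  const callAgent = await callClient.createCallAgent(tokenCredential);

  // Joining the Rooms call is what starts live streaming.
  const call = callAgent.join({ roomId });

  // Preview feature from this quickstart: observe live stream participant count changes.
  call.feature(Features.LiveStream).on("participantCountChanged", (e) => {
    console.log("Live stream participant count changed", e);
  });

  return call;
}
```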
```js const context = { roomId: '<RoomId>' }
call.feature(Features.LiveStream).on('participantCountChanged', e => {
For more information, see the following articles: - Check out our [calling hero sample](../../samples/calling-hero-sample.md)-- Get started with the [UI Library](https://aka.ms/acsstorybook)
+- Get started with the [UI Library](../../concepts/ui-library/ui-library-overview.md)
- Learn about [Calling SDK capabilities](./getting-started-with-calling.md?pivots=platform-web) - Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)
communication-services Get Started Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-interop.md
Title: Quickstart - Teams interop on Azure Communication Services
-description: In this quickstart, you'll learn how to join a Teams meeting with the Azure Communication Calling SDK.
+description: In this quickstart, you learn how to join a Teams meeting with the Azure Communication Calling SDK.
Last updated 06/30/2021
If you want to clean up and remove a Communication Services subscription, you ca
For more information, see the following articles: - Check out our [calling hero sample](../../samples/calling-hero-sample.md)-- Get started with the [UI Library](https://aka.ms/acsstorybook)
+- Get started with the [UI Library](../ui-library/get-started-composites.md)
- Learn about [Calling SDK capabilities](./getting-started-with-calling.md) - Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)
communication-services Get Started With Video Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-with-video-calling.md
Title: Quickstart - Add video calling to your app (JavaScript)
-description: In this quickstart, you'll learn how to add video calling capabilities to your app using Azure Communication Services.
+description: In this quickstart, you learn how to add video calling capabilities to your app using Azure Communication Services.
Last updated 06/30/2021
If you want to clean up and remove a Communication Services subscription, you ca
For more information, see the following articles: - Check out our [calling hero sample](../../samples/calling-hero-sample.md)-- Get started with the [UI Library](https://aka.ms/acsstorybook)
+- Get started with the [UI Library](../ui-library/get-started-composites.md)
- Learn about [Calling SDK capabilities](./getting-started-with-calling.md?pivots=platform-web) - Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)
communication-services Getting Started With Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/getting-started-with-calling.md
Title: Quickstart - Add voice calling to your app
-description: In this quickstart, you'll learn how to add calling capabilities to your app using Azure Communication Services.
+description: In this quickstart, you learn how to add calling capabilities to your app using Azure Communication Services.
Last updated 06/30/2021
If you want to clean up and remove a Communication Services subscription, you ca
For more information, see the following articles: - Check out our [calling hero sample](../../samples/calling-hero-sample.md)-- Get started with the [UI Library](https://aka.ms/acsstorybook)
+- Get started with the [UI Library](../../concepts/ui-library/ui-library-overview.md)
- Learn about [Calling SDK capabilities]() - Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/samples/overview.md
Azure Communication Services has many samples available, which you can use to te
| [Chat Hero Sample](./chat-hero-sample.md) | Provides a sample of creating a chat application. | [Web](https://github.com/Azure-Samples/communication-services-web-chat-hero) | | [Trusted Authentication Server Sample](./trusted-auth-sample.md) | Provides a sample implementation of a trusted authentication service used to generate user and access tokens for Azure Communication Services. The service by default maps generated identities to Azure Active Directory | [node.JS](https://github.com/Azure-Samples/communication-services-authentication-hero-nodejs), [C#](https://github.com/Azure-Samples/communication-services-authentication-hero-csharp) | [Web Calling Sample](./web-calling-sample.md) | A step by step walk-through of Azure Communication Services Calling features, including PSTN, within the Web. | [Web](https://github.com/Azure-Samples/communication-services-web-calling-tutorial/) |
+| [Web Calling Push Notifications Sample](./web-calling-push-notifications-sample.md) | A step by step walk-through of how to set up an architecture for web calling push notifications. | [Web](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/calling-web-push-notifications) |
| [Network Traversal Sample]( https://github.com/Azure-Samples/communication-services-network-traversal-hero) | Sample app demonstrating network traversal functionality | Node.js ## Quickstart samples
communication-services Web Calling Push Notifications Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/samples/web-calling-push-notifications-sample.md
++
+ Title: ACS Web Calling SDK - Web push notifications
+description: Quickstart tutorial for ACS Web Calling SDK push notifications
++ Last updated : 03/20/2023+++
+# ACS Web Calling SDK - Web push notifications quickstart
+
+Web push notifications for the ACS Web Calling SDK are in public preview and available as part of version 1.12.0-beta.2 and later.
+
+[Please visit our web push notifications quickstart tutorial](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/blob/main/calling-web-push-notifications/README.md)
communications-gateway Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/deploy.md
Before your onboarding team can finish onboarding you to the Operator Connect an
1. Wait for your onboarding team to provide you with a form to collect the additional information. 1. Complete the form and give it to your onboarding team.
-1. Wait for your onboarding team to confirm that the onboarding process is complete before testing your portal access.
+1. Wait for your onboarding team to confirm that the onboarding process is complete.
If you don't already have an onboarding team, contact azcog-enablement@microsoft.com, providing your Azure subscription ID and contact details.
-## 4. Test your portal access
+## 4. Test your Operator Connect portal access
-Navigate to the [Operator Connect homepage](https://operatorconnect.microsoft.com/) and check that you're able to sign in.
+> [!IMPORTANT]
+> Before testing your Operator Connect portal access, wait for your onboarding team to confirm that the onboarding process is complete.
-## 5. Register your deployment's domain name in Active Directory
+Go to the [Operator Connect homepage](https://operatorconnect.microsoft.com/) and check that you're able to sign in.
+
+## 5. Add the application ID for Azure Communications Gateway to Operator Connect
+
+You must enable the Azure Communications Gateway application within the Operator Connect or Teams Phone Mobile environment. Enabling the application allows Azure Communications Gateway to use the roles that you set up in [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md#10-set-up-application-roles-for-azure-communications-gateway).
+
+To enable the Azure Communications Gateway application, add the application ID of the service principal representing Azure Communications Gateway to your Operator Connect or Teams Phone Mobile environment:
+
+1. Optionally, check the application ID of the service principal to confirm that you're adding the right application.
+ 1. Search for `AzureCommunicationsGateway` with the search bar: it's under the **Azure Active Directory** subheading.
+ 1. On the overview page, check that the value of **Object ID** is `8502a0ec-c76d-412f-836c-398018e2312b`.
+1. Log into the [Operator Connect portal](https://operatorconnect.microsoft.com/operator/configuration).
+1. Add a new **Application Id**, pasting in the following value. This value is the application ID for Azure Communications Gateway.
+ ```
+ 8502a0ec-c76d-412f-836c-398018e2312b
+ ```
+
+## 6. Register your deployment's domain name in Active Directory
Microsoft Teams only sends traffic to domains that you've confirmed that you own. Your Azure Communications Gateway deployment automatically receives an autogenerated fully qualified domain name (FQDN). You need to add this domain name to your Active Directory tenant as a custom domain name, share the details with your onboarding team and then verify the domain name. This process confirms that you own the domain.
communications-gateway Prepare To Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-to-deploy.md
If the **Microsoft.VoiceServices** resource provider isn't already registered in
1. Search for the **Microsoft.VoiceServices** resource provider. 1. Check if the resource provider is already marked as registered. If it isn't, choose the resource provider and select **Register**.
-## 10. Set up application roles for Azure Communications Gateway in your Project Synergy application
+## 10. Set up application roles for Azure Communications Gateway
-Azure Communications Gateway contains services that need to access the Operator Connect API on your behalf. To enable this access, you must grant specific application roles to an AzureCommunicationsGateway service principal under the Project Synergy Enterprise Application. You created the Project Synergy application in [1. Add the Project Synergy application to your Azure tenancy](#1-add-the-project-synergy-application-to-your-azure-tenancy). Microsoft created the Azure Communications Gateway service principal for you when you followed [9. Register the Microsoft Voice Services resource provider](#9-register-the-microsoft-voice-services-resource-provider).
+Azure Communications Gateway contains services that need to access the Operator Connect API on your behalf. To enable this access, you must grant specific application roles to an AzureCommunicationsGateway service principal under the Project Synergy Enterprise Application. You created the Project Synergy application in [1. Add the Project Synergy application to your Azure tenancy](#1-add-the-project-synergy-application-to-your-azure-tenancy). We created the Azure Communications Gateway service principal for you when you followed [9. Register the Microsoft Voice Services resource provider](#9-register-the-microsoft-voice-services-resource-provider).
-You need to do the following steps in the tenant that contains your Project Synergy application.
+> [!IMPORTANT]
+> Granting permissions has two parts: configuring the service principal with the appropriate roles (this step) and adding the ID of the service principal to the Operator Connect or Teams Phone Mobile environment. You'll add the service principal to the Operator Connect or Teams Phone Mobile environment later, as part of [deploying Azure Communications Gateway](deploy.md).
+
+Do the following steps in the tenant that contains your Project Synergy application.
1. Sign in to the [Azure portal](https://ms.portal.azure.com/) as an Azure Active Directory Global Admin. 1. Select **Azure Active Directory**.
You need to do the following steps in the tenant that contains your Project Syne
} ```
-## 11. Add the application ID for Azure Communications Gateway to Operator Connect
-
-Before you can use the roles that you set up in [10. Set up application roles for Azure Communications Gateway in your Project Synergy application](#10-set-up-application-roles-for-azure-communications-gateway-in-your-project-synergy-application), you must enable the Azure Communications Gateway application within the Operator Connect or Teams Phone Mobile environment.
-
-To enable the Azure Communications Gateway application and the roles, add the application ID of the AzureCommunicationsGateway service principal to your Operator Connect or Teams Phone Mobile environment:
-
-1. Optionally, check the application ID of the service principal to confirm that you're adding the right application.
- 1. Search for `AzureCommunicationsGateway` with the search bar: it's under the **Azure Active Directory** subheading.
- 1. On the overview page, check that the value of **Object ID** is `8502a0ec-c76d-412f-836c-398018e2312b`.
-1. Log into the [Operator Connect Number Management Portal](https://operatorconnect.microsoft.com/operator/configuration).
-1. Add a new **Application Id**, pasting in the following value. This value is the application ID for Azure Communications Gateway.
- ```
- 8502a0ec-c76d-412f-836c-398018e2312b
- ```
- ## Next steps - [Create an Azure Communications Gateway resource](deploy.md)
container-apps Microservices Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr.md
$ServiceArgs = @{
DaprAppId = 'nodeapp' DaprAppPort = 3000 IdentityType = 'UserAssigned'
- IdentityUserAssignedIdentity = $IdentityId
+ IdentityUserAssignedIdentity = @{
+ $IdentityId = @{}
+ }
} New-AzContainerApp @ServiceArgs ```
container-apps Storage Mounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/storage-mounts.md
Ephemeral storage has the following characteristics:
| vCPUs | Ephemeral storage | |--|--|
- | Up to 0.25 | 1 GiB |
- | Up to 0.5 | 2 GiB |
- | Up to 1 | 4 GiB |
+ | 0.25 or lower | 1 GiB |
+ | 0.5 or lower | 2 GiB |
+ | 1 or lower | 4 GiB |
| Over 1 | 8 GiB | To configure ephemeral storage, first define an `EmptyDir` volume in the revision. Then define a volume mount in one or more containers in the revision.
To configure a volume mount for Azure Files storage in the Azure portal, add a f
1. Select **Create** to create the new revision.
cosmos-db Access Previews https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/access-previews.md
Azure Cosmos DB offers several preview features that you can request access to.
:::image type="content" source="./mongodb/media/indexing/unique-reindex-preview.png" alt-text="Screenshot of steps to enable preview feature" lightbox="./mongodb/media/indexing/unique-reindex-preview.png":::
+> [!TIP]
+> If your request is stuck in the **Pending** state for an abnormal amount of time, [create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
+ ## Next steps - Learn [how to choose an API](choose-api.md) in Azure Cosmos DB
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md
df = spark.read.format("cosmos.olap")\
df.select("id", "_id.objectId").show() ```+
+> [!NOTE]
+> This workaround was designed to work with Spark 2.4.
+ ###### Working with the MongoDB `_id` field in SQL ```SQL
cosmos-db Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/data-residency.md
In Azure Cosmos DB, you must explicitly configure the cross-region data replicat
**Continuous mode Backups**: These backups are resident by default as they are stored in either locally redundant or zone redundant storage. To learn more, see the [continuous backup](provision-account-continuous-backup.md) article.
-**Periodic mode Backups**: By default, periodic mode account backups will be stored in geo-redundant storage. For periodic backup modes, you can configure data redundancy at the account level. There are three redundancy options for the backup storage. They are local redundancy, zone redundancy, or geo redundancy. To learn more, see how to [configure backup redundancy](configure-periodic-backup-restore.md#modify-the-backup-interval-and-retention-period) using portal.
+**Periodic mode Backups**: By default, periodic mode account backups will be stored in geo-redundant storage. For periodic backup modes, you can configure data redundancy at the account level. There are three redundancy options for the backup storage. They are local redundancy, zone redundancy, or geo redundancy. For more information, see [periodic backup/restore](periodic-backup-restore-introduction.md).
## Use Azure Policy to enforce the residency requirements
Azure Policy is a service that you can use to create, assign, and manage policie
## Next steps
-* Configure and manage periodic backup using [Azure portal](configure-periodic-backup-restore.md)
+* Configure and manage periodic backup using [Azure portal](periodic-backup-restore-introduction.md)
* Provision continuous backup using [Azure portal](provision-account-continuous-backup.md#provision-portal), [PowerShell](provision-account-continuous-backup.md#provision-powershell), [CLI](provision-account-continuous-backup.md#provision-cli), or [Azure Resource Manager](provision-account-continuous-backup.md#provision-arm-template). * Restore continuous backup account using [Azure portal](restore-account-continuous-backup.md#restore-account-portal), [PowerShell](restore-account-continuous-backup.md#restore-account-powershell), [CLI](restore-account-continuous-backup.md#restore-account-cli), or [Azure Resource Manager](restore-account-continuous-backup.md#restore-arm-template). * [Migrate to an account from periodic backup to continuous backup](migrate-continuous-backup.md).
cosmos-db Emulator Command Line Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/emulator-command-line-parameters.md
To view the list of parameters, type `Microsoft.Azure.Cosmos.Emulator.exe /?` at
| `Shutdown` | Shuts down the emulator. | `Microsoft.Azure.Cosmos.Emulator.exe /Shutdown` | | `DataPath` | Specifies the path in which to store data files. The default value is `%LocalAppdata%\CosmosDBEmulator`. | `Microsoft.Azure.Cosmos.Emulator.exe /DataPath=E:\SomeDataFolder` | | `Port` | Specifies the port number to use for the emulator. The default value is `8081`. | `Microsoft.Azure.Cosmos.Emulator.exe /Port=65000` |
-| `ComputePort` | Specified the port number to use for the compute interop gateway service. The gateway's HTTP endpoint probe port is calculated as `ComputePort + 79`. Hence, `ComputePort` and `ComputePort + 79` must be open and available. The default value is `8900`. | `Microsoft.Azure.Cosmos.Emulator.exe /ComputePort=65100` |
+| `ComputePort` | Specifies the port number to use for the compute interop gateway service. The gateway's HTTP endpoint probe port is calculated as `ComputePort + 79`. Hence, `ComputePort` and `ComputePort + 79` must be open and available. The default value is `8900`. | `Microsoft.Azure.Cosmos.Emulator.exe /ComputePort=65100` |
| `EnableMongoDbEndpoint=3.2` | Enables API for MongoDB version 3.2. | `Microsoft.Azure.Cosmos.Emulator.exe /EnableMongoDbEndpoint=3.2` | | `EnableMongoDbEndpoint=3.6` | Enables API for MongoDB version 3.6. | `Microsoft.Azure.Cosmos.Emulator.exe /EnableMongoDbEndpoint=3.6` | | `EnableMongoDbEndpoint=4.0` | Enables API for MongoDB version 4.0. | `Microsoft.Azure.Cosmos.Emulator.exe /EnableMongoDbEndpoint=4.0` |
cosmos-db Quickstart Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-php.md
Optionally, you can also configure the following settings on the **Create Azure
- Other tabs: - **Networking**: Configure [access from a virtual network](../how-to-configure-vnet-service-endpoint.md).
- - **Backup Policy**: Configure either [periodic](../configure-periodic-backup-restore.md) or [continuous](../provision-account-continuous-backup.md) backup policy.
+ - **Backup Policy**: Configure either [periodic](../periodic-backup-restore-introduction.md) or [continuous](../provision-account-continuous-backup.md) backup policy.
- **Encryption**: Use either a service-managed key or a [customer-managed key](../how-to-setup-cmk.md#create-a-new-azure-cosmos-account). - **Tags**: Tags are name/value pairs that enable you to categorize resources and view consolidated billing by applying the same tag to multiple resources and resource groups.
cosmos-db High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/high-availability.md
When an Azure Cosmos DB account is deployed in a single region, generally no dat
To help you protect against complete data loss that might result from catastrophic disasters in a region, Azure Cosmos DB provides two backup modes: - [Continuous backups](./continuous-backup-restore-introduction.md) back up each region every 100 seconds. They enable you to restore your data to any point in time with 1-second granularity. In each region, the backup is dependent on the data committed in that region.-- [Periodic backups](./configure-periodic-backup-restore.md) fully back up all partitions from all containers under your account, with no synchronization across partitions. The minimum backup interval is 1 hour.
+- [Periodic backups](./periodic-backup-restore-introduction.md) fully back up all partitions from all containers under your account, with no synchronization across partitions. The minimum backup interval is 1 hour.
When an Azure Cosmos DB account is deployed in multiple regions, data durability depends on the consistency level that you configure on the account. The following table details, for all consistency levels, the RPO of an Azure Cosmos DB account that's deployed in at least two regions.
cosmos-db Local Emulator Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/local-emulator-release-notes.md
This article shows the Azure Cosmos DB Emulator released versions and it details
## Release notes
+### `2.14.12` (March 20, 2023)
+
+- This release fixes an issue impacting the Gremlin and Table endpoint API types. Prior to this fix, a client application would fail with a 500 status code when trying to connect to the public emulator's endpoint.
+ ### `2.14.11` (January 27, 2023) - This release updates the Azure Cosmos DB Emulator background services to match the latest online functionality of the Azure Cosmos DB.
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/introduction.md
Azure Cosmos DB for MongoDB makes it easy to use Azure Cosmos DB as if it were a
> [!VIDEO https://www.microsoft.com/videoplayer/embed/RWXr4T] > [!TIP]
-> Want to try the API for MongoDB with no commitment? Create an Azure Cosmos DB account using [Try Azure Cosmos DB](../try-free.md) for free.
+> Want to try the Azure Cosmos DB for MongoDB with no commitment? Create an Azure Cosmos DB account using [Try Azure Cosmos DB](../try-free.md) for free.
-## API for MongoDB benefits
+## Cosmos DB for MongoDB benefits
-The API for MongoDB has added benefits of being built on Azure Cosmos DB when compared to service offerings such as MongoDB Atlas:
+Cosmos DB for MongoDB has numerous benefits compared to other MongoDB service offerings such as MongoDB Atlas:
-- **Instantaneous scalability**: With the [Autoscale](../provision-throughput-autoscale.md) feature, your database can scale up/down with zero warmup period.
+- **Instantaneous scalability**: With the [Autoscale](../provision-throughput-autoscale.md) feature, your database scales instantaneously with zero warmup period. Other MongoDB offerings such as MongoDB Atlas can take hours to scale up and up to days to scale down.
- **Automatic and transparent sharding**: The API for MongoDB manages all of the infrastructure for you. This management includes sharding and optimizing the number of shards. Other MongoDB offerings such as MongoDB Atlas, require you to specify and manage sharding to horizontally scale. This automation gives you more time to focus on developing applications for your users. -- **Five 9's of availability**: [99.999% availability](../high-availability.md) is easily configurable to ensure your data is always there for you.
+- **Five 9's of availability**: [99.999% availability](../high-availability.md) is easily configurable to ensure your data is always there for you.
-- **Cost efficient, granular, unlimited scalability**: Sharded collections can scale to any size, unlike other MongoDB service offerings. APIs for MongoDB users are running databases with over 600 TB of storage today. Scaling is done in a cost-efficient manner unlike other MongoDB service offerings. The Azure Cosmos DB platform can scale in increments as small as 1/100th of a VM due to economies of scale and resource governance.
+- **Active-active database**: Unlike MongoDB Atlas, Cosmos DB for MongoDB supports active-active across multiple regions. Databases can span multiple regions, with no single point of failure for **writes and reads for the same data**. MongoDB Atlas global clusters only support active-passive deployments for writes for the same data.
-- **Serverless deployments**: Unlike MongoDB Atlas, the API for MongoDB is a cloud native database that offers a [serverless capacity mode](../serverless.md). With [Serverless](../serverless.md), you're only charged per operation, and don't pay for the database when you don't use it.
+- **Cost efficient, granular, unlimited scalability**: Sharded collections can scale to any size, unlike other MongoDB service offerings. Scaling is done in a cost-efficient manner unlike other MongoDB service offerings. The Azure Cosmos DB platform can scale in increments as small as 1/100th of a VM due to its architecture. This means that you can scale your database to the exact size you need, without paying for unused resources.
-- **Free Tier**: With Azure Cosmos DB free tier, you get the first 1000 RU/s and 25 GB of storage in your account for free forever, applied at the account level.
+- **Serverless deployments**: Cosmos DB for MongoDB offers a [serverless capacity mode](../serverless.md). With [Serverless](../serverless.md), you're only charged per operation, and don't pay for the database when you don't use it.
+
+- **Free Tier**: With Azure Cosmos DB free tier, you get the first 1000 RU/s and 25 GB of storage in your account for free forever, applied at the account level. Free tier accounts are [sandboxed](../limit-total-account-throughput.md).
+
+- **Free 7-day continuous backups**: Azure Cosmos DB for MongoDB offers free 7-day continuous backups for any amount of data, so you can restore your database to any point in time within the last 7 days.
- **Upgrades take seconds**: All API versions are contained within one codebase, making version changes as simple as [flipping a switch](upgrade-version.md), with zero downtime. -- **Real time analytics (HTAP) at any scale**: The API for MongoDB offers the ability to run complex analytical queries. Use cases for these queries include business intelligence that can run against your database data in real time with no effect on your database. This analysis is fast and inexpensive, due to the cloud native analytical columnar store being utilized, with no ETL pipelines. Learn more about the [Azure Synapse Link](../synapse-link.md).
+- **Role Based Access Control**: With Azure Cosmos DB for MongoDB, you can assign granular roles and permissions to users to control access to your data and audit user actions, all using native Azure tooling.
+
+- **Flexible single-field indexes**: Unlike single field indexes in MongoDB Atlas, [single field indexes in Cosmos DB for MongoDB](indexing.md) cover multi-field filter queries. There is no need to create compound indexes for each multi-field filter query. This increases developer productivity.
-## How the API for MongoDB works
+- **Real time analytics (HTAP) at any scale**: Cosmos DB for MongoDB offers the ability to run complex analytical queries. Use cases for these queries include business intelligence that can run against your database data in real time with no effect on your database. This analysis is fast and inexpensive, due to the cloud native analytical columnar store being utilized, with no ETL pipelines. Learn more about the [Azure Synapse Link](../synapse-link.md).
-The API for MongoDB implements the wire protocol for MongoDB. This implementation allows transparent compatibility with native MongoDB client SDKs, drivers, and tools. Azure Cosmos DB doesn't host the MongoDB database engine. Any MongoDB client driver compatible with the API version you're using should be able to connect, with no special configuration.
+- **In-depth monitoring capabilities**: Cosmos DB for MongoDB integrates natively with [Azure Monitor](../../azure-monitor/overview.md) to provide in-depth monitoring capabilities.
+
+## How Cosmos DB for MongoDB works
+
+Cosmos DB for MongoDB implements the wire protocol for MongoDB. This implementation allows transparent compatibility with MongoDB client SDKs, drivers, and tools. Azure Cosmos DB doesn't host the MongoDB database engine. Any MongoDB client driver compatible with the API version you're using should be able to connect, with no special configuration.
> [!IMPORTANT] > This article describes a feature of Azure Cosmos DB that provides wire protocol compatibility with MongoDB databases. Microsoft does not run MongoDB databases to provide this service. Azure Cosmos DB is not affiliated with MongoDB, Inc. ### MongoDB feature compatibility
-The API for MongoDB is compatible with the following MongoDB server versions:
+Cosmos DB for MongoDB is compatible with the following MongoDB server versions:
- [Version 5.0 (limited preview)](../access-previews.md) - [Version 4.2](feature-support-42.md)
The API for MongoDB is compatible with the following MongoDB server versions:
### Choosing a server version
-All the APIs for MongoDB versions run on the same codebase, making upgrades a simple task that can be completed in seconds with zero downtime. Azure Cosmos DB simply flips a few feature flags to go from one version to another. The feature flags also enable continued support for older API versions such as 3.2 and 3.6. You can choose the server version that works best for you.
+All versions run on the same codebase, making upgrades a simple task that can be completed in seconds with zero downtime. Azure Cosmos DB simply flips a few feature flags to go from one version to another. The feature flags also enable continued support for older API versions such as 3.2 and 3.6. You can choose the server version that works best for you.
+
+Not sure if your workload is ready? [Reach out to us](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR9aWEKTdeoxPpcB2ORTA2_1UQk44OEhBRjlIWjJMTUxLTzhJVVpPU0M4My4u) to leverage automated tooling to determine if you're ready to migrate to Cosmos DB for MongoDB.
## What you need to know to get started - You aren't billed for virtual machines in a cluster. [Pricing](../how-pricing-works.md) is based on throughput in request units (RUs) configured on a per database or per collection basis. The first 1000 RUs per second are free with [Free Tier](../free-tier.md). -- There are three ways to deploy the API for MongoDB:
+- There are three ways to deploy the Cosmos DB for MongoDB:
- [Provisioned throughput](../set-throughput.md): Set a RU/sec number and change it manually. This model best fits consistent workloads.
All the APIs for MongoDB versions run on the same codebase, making upgrades a si
- Sharded cluster performance is dependent on the shard key you choose when creating a collection. Choose a shard key carefully to ensure that your data is evenly distributed across shards.
+## Frequently asked questions
+
+1. Does Cosmos DB for MongoDB support my data residency requirements?
+
+ Yes, data residency is governed at the database account level which is associated with one or more regions. Customers typically create a database account for each residency requirement. For example, if you have a requirement to store data in the US and EU, you would create two database accounts, one in the US and one in the EU.
+
+2. Does Cosmos DB for MongoDB support documents larger than 2 MB?
+
+ Yes, documents as large as 16 MB are fully supported.
+
+3. Does Cosmos DB for MongoDB support multi-field sort?
+
+    Yes, multi-field sort is supported. A compound index is required for the fields in the sort to ensure the operation is efficient and scalable (see the index sketch after this FAQ list).
+
+4. Does Cosmos DB for MongoDB scale linearly?
+
+ In many cases, Cosmos DB's costs scale better than linear. For example, if you read a 1KB document, this equates to 1 Request Unit (RU). But if you read a 10KB document, this still equates to roughly 1RU. The [Cosmos DB capacity calculator](https://cosmos.azure.com/capacitycalculator/) can help you estimate your throughput needs.
+
+5. How can I encrypt data and manage access at the field level?
+
+ Cosmos DB for MongoDB supports Field Level Encryption.
+
+6. How do I pay for Request Units (RUs)?
+
+ Cosmos DB for MongoDB offers three capacity modes: provisioned throughput, autoscale, and serverless. **None require an upfront commitment**. Autoscale instantaneously scales to meet your needs, and serverless only charges for the throughput you use.
+
+7. Which features are supported in Cosmos DB for MongoDB?
+
+ Cosmos DB for MongoDB supports a rich set of MongoDB features backed by Cosmos DB's limitless scale architecture. These features include: Aggregation pipelines, Change streams, Indexes, Geospatial queries, and more. See the [feature support matrix](feature-support-42.md) for more details. Not sure if your workload is ready? [Reach out to us](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR9aWEKTdeoxPpcB2ORTA2_1UQk44OEhBRjlIWjJMTUxLTzhJVVpPU0M4My4u) to leverage automated tooling to determine if you're ready to migrate to Cosmos DB for MongoDB.
+
+8. Does Cosmos DB for MongoDB run on-premises?
+
+ Cosmos DB for MongoDB is a cloud-native multi-tenant service and is not available on-premises. Cosmos DB offers an [emulator for local development and testing](../local-emulator.md).
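Relating back to the multi-field sort question above, a minimal MongoDB shell sketch of the compound index it describes might look like the following; the collection and field names are illustrative only.

```js
// Minimal sketch for the mongo shell, connected to a Cosmos DB for MongoDB account.
// A compound index that covers both sort fields keeps the multi-field sort efficient.
db.orders.createIndex({ customerId: 1, orderDate: -1 });

// The multi-field sort can then be served by the compound index.
db.orders.find({ customerId: "contoso" }).sort({ customerId: 1, orderDate: -1 });
```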
++ ## Next steps - Follow the [Connect a MongoDB application to Azure Cosmos DB](connect-account.md) tutorial to learn how to get your account connection string information.
cosmos-db Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-java.md
In this quickstart, you create and manage an Azure Cosmos DB for API for MongoDB
## Prerequisites - An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription. You can also use the [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) with the connection string `.mongodb://localhost:C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==@localhost:10255/admin?ssl=true`.-- [Java Development Kit (JDK) version 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk).
+- [Java Development Kit (JDK) version 8](https://adoptium.net/temurin/releases/?version=8).
- [Maven](https://maven.apache.org/download.cgi). Or run `apt-get install maven` to install Maven. - [Git](https://git-scm.com/downloads).
Now go back to the Azure portal to get your connection string information and co
## Run the console app
-1. Run `mvn package` in a terminal to install required npm modules
+1. Run `mvn package` in a terminal to install required packages
2. Run `mvn exec:java -D exec.mainClass=GetStarted.Program` in a terminal to start your Java application.
cosmos-db Migrate Hbase To Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/migrate-hbase-to-cosmos-db.md
There are several ways to get a backup of HBase. For example, Snapshot, Export,
Azure Cosmos DB automatically backs up data at periodic intervals, which does not affect the performance or availability of database operations. Backups are stored in Azure storage and can be used to recover data if needed. There are two types of Azure Cosmos DB backups:
-* [Periodic backup](../configure-periodic-backup-restore.md)
+* [Periodic backup](../periodic-backup-restore-introduction.md)
* [Continuous backup](../continuous-backup-restore-introduction.md)
cosmos-db Troubleshoot Not Found https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-not-found.md
Wait for the indexing to catch up or change the indexing policy.
The database or container that the item exists in was deleted. #### Solution:
-1. [Restore](../configure-periodic-backup-restore.md#request-data-restore-from-a-backup) the parent resource, or re-create the resources.
+1. [Restore from a backup](../periodic-backup-restore-introduction.md) the parent resource, or re-create the resources.
1. Create a new resource to replace the deleted resource. ### 7. Container/Collection names are case-sensitive
cosmos-db Online Backup And Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/online-backup-and-restore.md
There are two backup modes:
* **Continuous backup mode** - This mode has two tiers. One tier includes 7-day retention and the second includes 30-day retention. Continuous backup allows you to restore to any point of time within either 7 or 30 days. You can choose the appropriate tier while creating an Azure Cosmos DB account. For more information about the tiers, see [introduction to continuous backup mode](continuous-backup-restore-introduction.md). To enable continuous backup, see the appropriate articles using [Azure portal](provision-account-continuous-backup.md#provision-portal), [PowerShell](provision-account-continuous-backup.md#provision-powershell), [CLI](provision-account-continuous-backup.md#provision-cli), or [Azure Resource Manager](provision-account-continuous-backup.md#provision-arm-template). You can also [migrate the accounts from periodic to continuous mode](migrate-continuous-backup.md).
-* **Periodic backup mode** - This mode is the default backup mode for all existing accounts. In this mode, backup is taken at a periodic interval and the data is restored by creating a request with the support team. In this mode, you configure a backup interval and retention for your account. The maximum retention period extends to a month. The minimum backup interval can be one hour. To learn more, see [periodic backup mode](configure-periodic-backup-restore.md).
+* **Periodic backup mode** - This mode is the default backup mode for all existing accounts. In this mode, backup is taken at a periodic interval and the data is restored by creating a request with the support team. In this mode, you configure a backup interval and retention for your account. The maximum retention period extends to a month. The minimum backup interval can be one hour. To learn more, see [periodic backup mode](periodic-backup-restore-introduction.md).
> [!NOTE] > If you configure a new account with continuous backup, you can do self-service restore via Azure portal, PowerShell, or CLI. If your account is configured in continuous mode, you can't switch it back to periodic mode.
Yes, but only for the regular transactional data. Within an analytical store, ba
Next you can learn about how to configure and manage periodic and continuous backup modes for your account:
-* [Configure and manage periodic backup](configure-periodic-backup-restore.md) policy.
+* [Configure and manage periodic backup](periodic-backup-restore-introduction.md) policy.
* What is [continuous backup](continuous-backup-restore-introduction.md) mode? * Enable continuous backup using [Azure portal](provision-account-continuous-backup.md#provision-portal), [PowerShell](provision-account-continuous-backup.md#provision-powershell), [CLI](provision-account-continuous-backup.md#provision-cli), or [Azure Resource Manager](provision-account-continuous-backup.md#provision-arm-template). * Restore continuous backup account using [Azure portal](restore-account-continuous-backup.md#restore-account-portal), [PowerShell](restore-account-continuous-backup.md#restore-account-powershell), [CLI](restore-account-continuous-backup.md#restore-account-cli), or [Azure Resource Manager](restore-account-continuous-backup.md#restore-arm-template).
cosmos-db Periodic Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/periodic-backup-restore-introduction.md
+
+ Title: Configure periodic backup
+
+description: Configure Azure Cosmos DB accounts with periodic backup and retention at a specified interval through the portal or a support ticket.
++++ Last updated : 03/16/2023++++
+# Configure Azure Cosmos DB account with periodic backup
++
+Azure Cosmos DB automatically takes backups of your data at regular intervals. The automatic backups are taken without affecting the performance or availability of the database operations. All the backups are stored separately in a storage service, and those backups are globally replicated for resiliency against regional disasters. With Azure Cosmos DB, not only your data, but also the backups of your data are highly redundant and resilient to regional disasters. The following steps show how Azure Cosmos DB performs data backup:
+
+- Azure Cosmos DB automatically takes a full backup of your database every 4 hours and at any point of time, only the latest two backups are stored by default. If the default intervals aren't sufficient for your workloads, you can change the backup interval and the retention period from the Azure portal. You can change the backup configuration during or after the Azure Cosmos DB account is created. If the container or database is deleted, Azure Cosmos DB retains the existing snapshots of a given provisioned throughput container or shared throughput database for 30 days. If throughput is provisioned at the database level, the backup and restore process happens across the entire database scope.
+
+- Azure Cosmos DB stores these backups in Azure Blob storage whereas the actual data resides locally within Azure Cosmos DB.
+
+- To guarantee low latency, the snapshot of your backup is stored in Azure Blob storage in the same region as the current write region (or **one** of the write regions, in case you have a multi-region write configuration). For resiliency against regional disaster, each snapshot of the backup data in Azure Blob storage is again replicated to another region through geo-redundant storage (GRS). The region to which the backup is replicated is based on your source region and the regional pair associated with the source region. To learn more, see the [list of geo-redundant pairs of Azure regions](../availability-zones/cross-region-replication-azure.md) article. You can't access this backup directly. Azure Cosmos DB team restores your backup when you request through a support request.
+
+ The following image shows an Azure Cosmos DB container with all three primary physical partitions in West US. The container is backed up in a remote Azure Blob storage account in West US and then replicated to East US:
+
+ :::image type="content" source="./media/configure-periodic-backup-restore/automatic-backup.png" alt-text="Diagram of periodic full backups taken of multiple Azure Cosmos DB entities in geo-redundant Azure Storage." lightbox="./media/configure-periodic-backup-restore/automatic-backup.png" border="false":::
+
+- The backups are taken without affecting the performance or availability of your application. Azure Cosmos DB performs data backup in the background without consuming any extra provisioned throughput (RUs) or affecting the performance and availability of your database.
+
+> [!NOTE]
+> For Azure Synapse Link enabled accounts, analytical store data isn't included in the backups and restores. When Synapse Link is enabled, Azure Cosmos DB will continue to automatically take backups of your data in the transactional store at a scheduled backup interval. Automatic backup and restore of your data in the analytical store is not supported at this time.
+
+## Backup storage redundancy
+
+By default, Azure Cosmos DB stores periodic mode backup data in geo-redundant [blob storage](../storage/common/storage-redundancy.md) that is replicated to a [paired region](../availability-zones/cross-region-replication-azure.md). You can update this default value using Azure PowerShell or CLI and define an Azure policy to enforce a specific storage redundancy option. To learn more, see [update backup storage redundancy](periodic-backup-update-storage-redundancy.md) article.
+
+Change the default geo-redundant backup storage to ensure that your backup data stays within the same region where your Azure Cosmos DB account is provisioned. You can configure the geo-redundant backup to use either locally redundant or zone-redundant storage. Storage redundancy mechanisms store multiple copies of your backups so that it's protected from planned and unplanned events. These events can include transient hardware failure, network or power outages, or massive natural disasters.
+
+You can configure storage redundancy for periodic backup mode at the time of account creation or update it for an existing account. You can use the following three data redundancy options in periodic backup mode:
+
+- **Geo-redundant backup storage:** This option copies your data asynchronously across the paired region.
+
+- **Zone-redundant backup storage:** This option copies your data synchronously across three Azure availability zones in the primary region. For more information, see [Zone-redundant storage.](../storage/common/storage-redundancy.md#redundancy-in-the-primary-region)
+
+- **Locally-redundant backup storage:** This option copies your data synchronously three times within a single physical location in the primary region. For more information, see [locally redundant storage.](../storage/common/storage-redundancy.md#redundancy-in-the-primary-region)
+
+> [!NOTE]
+> Zone-redundant storage is currently available only in [specific regions](../availability-zones/az-region.md). Depending on the region you select for a new account or the region of your existing account, the zone-redundant option might not be available.
+>
+> Updating backup storage redundancy will not have any impact on backup storage pricing.
+
+## Modify the backup interval and retention period
+
+Azure Cosmos DB automatically takes a full backup of your data every 4 hours and at any point in time, the latest two backups are stored. This configuration is the default option and it's offered without any extra cost. You can change the default backup interval and retention period during the Azure Cosmos DB account creation or after the account is created. The backup configuration is set at the Azure Cosmos DB account level and you need to configure it on each account. After you configure the backup options for an account, the configuration applies to all the containers within that account. You can modify these settings using the Azure portal as described later in this article, or via [PowerShell](periodic-backup-restore-introduction.md#modify-backup-options-using-azure-powershell) or the [Azure CLI](periodic-backup-restore-introduction.md#modify-backup-options-using-azure-cli).
+
+If you've accidentally deleted or corrupted your data, **before you create a support request to restore the data, make sure to increase the backup retention for your account to at least seven days. It's best to increase your retention within 8 hours of this event.** This way, the Azure Cosmos DB team has enough time to restore your account.
+
+### Modify backup options using Azure portal - Existing account
+
+Use the following steps to change the default backup options for an existing Azure Cosmos DB account:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to your Azure Cosmos DB account and open the **Backup & Restore** pane. Update the backup interval and the backup retention period as required.
+
+ - **Backup Interval** - It's the interval at which Azure Cosmos DB attempts to take a backup of your data. A backup takes a nonzero amount of time, and in some cases it could potentially fail due to downstream dependencies. Azure Cosmos DB tries its best to take a backup at the configured interval; however, it doesn't guarantee that the backup completes within that time interval. You can configure this value in hours or minutes. The backup interval can't be less than 1 hour or greater than 24 hours. When you change this interval, the new interval takes effect starting from the time when the last backup was taken.
+
+ - **Backup Retention** - It represents the period for which each backup is retained. You can configure it in hours or days. The minimum retention period can't be less than two times the backup interval (in hours), and it can't be greater than 720 hours.
+
+ - **Copies of data retained** - By default, two backup copies of your data are offered free of charge. There's an extra charge if you need more than two copies. See the Consumed Storage section in the [Pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) for the exact price of extra copies.
+
+ - **Backup storage redundancy** - Choose the required storage redundancy option. For available options, see the [Backup storage redundancy](#backup-storage-redundancy) section. By default, existing periodic backup mode accounts have geo-redundant storage if the region where the account is provisioned supports it. Otherwise, the account falls back to the highest redundancy option available. You can choose other storage, such as locally redundant, to ensure the backup isn't replicated to another region. Changes made to an existing account apply only to future backups. After the backup storage redundancy of an existing account is updated, it may take up to twice the backup interval time for the changes to take effect, and **you immediately lose the ability to restore from the older backups.**
+
+ > [!NOTE]
+ > You must have the [Azure Cosmos DB Operator](../role-based-access-control/built-in-roles.md#cosmos-db-operator) role assigned at the subscription level to configure backup storage redundancy. A sample role assignment command follows the screenshot below.
+
+ :::image type="content" source="./media/configure-periodic-backup-restore/configure-backup-options-existing-accounts.png" alt-text="Screenshot of configuration options including backup interval, retention, and storage redundancy for an existing Azure Cosmos DB account." border="true":::
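+
+ The following Azure CLI command is a minimal sketch of that role assignment at the subscription scope; the principal object ID and subscription ID are placeholders:
+
+ ```azurecli-interactive
+ az role assignment create \
+     --assignee <user-or-group-object-id> \
+     --role "Cosmos DB Operator" \
+     --scope "/subscriptions/<subscription-id>"
+ ```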
+
+### Modify backup options using Azure portal - New account
+
+When provisioning a new account, from the **Backup Policy** tab, select the **Periodic** backup policy. The periodic policy allows you to configure the backup interval, backup retention, and backup storage redundancy. For example, you can choose the **Locally redundant backup storage** or **Zone-redundant backup storage** option to prevent backup data replication outside your region.
++
+### Modify backup options using Azure PowerShell
+
+Use the following PowerShell cmdlet to update the periodic backup options:
+
+```azurepowershell-interactive
+Update-AzCosmosDBAccount -ResourceGroupName "resourceGroupName" `
+ -Name "accountName" `
+ -BackupIntervalInMinutes 480 `
+ -BackupRetentionIntervalInHours 16
+```
+
+### Modify backup options using Azure CLI
+
+Use the following CLI command to update the periodic backup options:
+
+```azurecli-interactive
+az cosmosdb update --resource-group "resourceGroupName" \
+ --name "accountName" \
+ --backup-interval 240 \
+ --backup-retention 8
+```
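+
+To verify the change, you can read the account back and inspect its backup policy. The following command is a sketch; it assumes the account's `backupPolicy` object is returned in the command output and uses a JMESPath `--query` to select it:
+
+```azurecli-interactive
+az cosmosdb show \
+    --resource-group "resourceGroupName" \
+    --name "accountName" \
+    --query "backupPolicy"
+```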
+
+### Modify backup options using Resource Manager template
+
+When deploying the Resource Manager template, change the periodic backup options within the `backupPolicy` object:
+
+```json
+ "backupPolicy": {
+ "type": "Periodic",
+ "periodicModeProperties": {
+ "backupIntervalInMinutes": 240,
+ "backupRetentionIntervalInHours": 8,
+ "backupStorageRedundancy": "Zone"
+ }
+}
+```
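+
+As a sketch, assuming the `backupPolicy` object is part of a complete `Microsoft.DocumentDB/databaseAccounts` template saved locally as `cosmos-account.json` (a hypothetical file name), you could deploy it with a command like the following:
+
+```azurecli-interactive
+az deployment group create \
+    --resource-group "resourceGroupName" \
+    --template-file "cosmos-account.json"
+```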
+
+## Request data restore from a backup
+
+If you accidentally delete your database or a container, you can [file a support ticket](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) or [call Azure support](https://azure.microsoft.com/support/options/) to restore the data from automatic online backups. Azure support is available only with selected plans, such as **Standard**, **Developer**, and higher tiers. Azure support isn't available with the **Basic** plan. To learn about different support plans, see the [Azure support plans](https://azure.microsoft.com/support/plans/) page.
+
+To restore a specific snapshot of the backup, Azure Cosmos DB requires that the data is available during the backup cycle for that snapshot.
+You should have the following details before requesting a restore:
+
+- Have your subscription ID ready.
+- Based on how your data was accidentally deleted or modified, you might need to provide additional information. Have this information ready ahead of time to minimize the back-and-forth, which can be detrimental in time-sensitive cases.
+- If the entire Azure Cosmos DB account is deleted, you need to provide the name of the deleted account. If you created another account with the same name as the deleted account, share that with the support team because it helps to determine the right account to choose. It's recommended to file a separate support ticket for each deleted account because it minimizes confusion about the state of the restore.
+- If one or more databases are deleted, you should provide the Azure Cosmos DB account, and the Azure Cosmos DB database names and specify if a new database with the same name exists.
+- If one or more containers are deleted, you should provide the Azure Cosmos DB account name, database names, and the container names. And specify if a container with the same name exists.
+- If you've accidentally deleted or corrupted your data, you should contact [Azure support](https://azure.microsoft.com/support/options/) within 8 hours so that the Azure Cosmos DB team can help you restore the data from the backups. **Before you create a support request to restore the data, make sure to [increase the backup retention](#modify-the-backup-interval-and-retention-period) for your account to at least seven days. It's best to increase your retention within 8 hours of this event.** This way the Azure Cosmos DB support team has enough time to restore your account.
+
+In addition to the Azure Cosmos DB account name, database names, and container names, you should specify the point in time to use for data restoration. It's important to be as precise as possible to help us determine the best available backups at that time. **It is also important to specify the time in UTC.**
+
+The following screenshot illustrates how to create a support request for a container (collection/graph/table) to restore data by using the Azure portal. Provide other details, such as the type of data, the purpose of the restore, and the time when the data was deleted, to help us prioritize the request.
++
+## Considerations for restoring the data from a backup
+
+You may accidentally delete or modify your data in one of the following scenarios:
+
+- Delete the entire Azure Cosmos DB account.
+
+- Delete one or more Azure Cosmos DB databases.
+
+- Delete one or more Azure Cosmos DB containers.
+
+- Delete or modify the Azure Cosmos DB items (for example, documents) within a container. This specific case is typically referred to as data corruption.
+
+- A shared offer database or containers within a shared offer database are deleted or corrupted.
+
+Azure Cosmos DB can restore data in all the above scenarios. A new Azure Cosmos DB account is created to hold the restored data when restoring from a backup. The name of the new account, if it's not specified, has the format `<Azure_Cosmos_account_original_name>-restored1`. The last digit is incremented when multiple restores are attempted. You can't restore data to a precreated Azure Cosmos DB account.
+
+When you accidentally delete an Azure Cosmos DB account, we can restore the data into a new account with the same name, if the account name isn't in use. So, we recommend that you don't re-create the account after deleting it. Re-creating the account not only prevents the restored data from using the same name, but also makes it harder to discover the right account to restore from.
+
+When you accidentally delete an Azure Cosmos DB database, we can restore the whole database or a subset of the containers within that database. It's also possible to select specific containers across databases and restore them to a new Azure Cosmos DB account.
+
+When you accidentally delete or modify one or more items within a container (the data corruption case), you need to specify the time to restore to. Time is important if there's data corruption. Because the container is live, the backup is still running, so if you wait beyond the retention period (the default is eight hours), the backups are overwritten. In order to prevent the backup from being overwritten, increase the backup retention for your account to at least seven days. It's best to increase your retention within 8 hours of the data corruption.
+
+If you've accidentally deleted or corrupted your data, you should contact [Azure support](https://azure.microsoft.com/support/options/) within 8 hours so that the Azure Cosmos DB team can help you restore the data from the backups. This way the Azure Cosmos DB support team has enough time to restore your account.
+
+> [!NOTE]
+> After you restore the data, not all the source capabilities or settings are carried over to the restored account. The following settings are not carried over to the new account:
+>
+> - VNET access control lists
+> - Stored procedures, triggers and user-defined functions
+> - Multi-region settings
+> - Managed identity settings
+>
+
+If you assign throughput at the database level, the backup and restore process happens at the entire database level, not at the individual container level. In such cases, you can't select a subset of containers to restore.
+
+## Required permissions to change retention or restore from the portal
+
+Principals who are assigned the [CosmosBackupOperator](../role-based-access-control/built-in-roles.md#cosmosbackupoperator), Owner, or Contributor role are allowed to request a restore or change the retention period.
+
+## Understanding costs of extra backups
+
+Two backups are provided free, and extra backups are charged according to the region-based pricing for backup storage described in [backup storage pricing](https://azure.microsoft.com/pricing/details/cosmos-db/). For example, consider a scenario where Backup Retention is configured to **240 hrs** (or 10 days) and Backup Interval is configured to **24** hrs. This configuration implies that there are 10 copies of the backup data, of which the first two are free and 8 are billable. If you have **1 TB** of data in the West US 2 region, the backup storage cost in a given month would be `0.12 * 1000 * 8` (the per-GB rate for the region, times 1,000 GB, times 8 billable copies).
+
+## Get the restore details from the restored account
+
+After the restore operation completes, you may want to know the source account details from which you restored or the restore time. You can get these details from the Azure portal, PowerShell, or CLI.
+
+### Use Azure portal
+
+Use the following steps to get the restore details from Azure portal:
+
+1. Sign into the [Azure portal](https://portal.azure.com/) and navigate to the restored account.
+
+1. Open the **Tags** page. This page should have the tags **restoredAtTimestamp** and **restoredSourceDatabaseAccountName**. These tags describe the timestamp and the source account name that were used for the periodic restore.
+
+### Use Azure CLI
+
+Run the following command to get the restore details. The `restoreSourceAccountName` and `restoreTimestamp` fields are within the `tags` field:
+
+```azurecli-interactive
+az cosmosdb show --name MyCosmosDBDatabaseAccount --resource-group MyResourceGroup
+```
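+
+To return only the tags, the following sketch uses a JMESPath query against the same command:
+
+```azurecli-interactive
+az cosmosdb show \
+    --name MyCosmosDBDatabaseAccount \
+    --resource-group MyResourceGroup \
+    --query "tags"
+```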
+
+### Use PowerShell
+
+Import the Az.CosmosDB module and run the following command to get the restore details. The `restoreSourceAccountName` and `restoreTimestamp` are within the `tags` field:
+
+```powershell-interactive
+Get-AzCosmosDBAccount -ResourceGroupName MyResourceGroup -Name MyCosmosDBDatabaseAccount
+```
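+
+To print only the tags, the following sketch assumes the returned account object exposes a `Tags` property:
+
+```powershell-interactive
+$account = Get-AzCosmosDBAccount -ResourceGroupName MyResourceGroup -Name MyCosmosDBDatabaseAccount
+$account.Tags
+```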
+
+## Options to manage your own backups
+
+With Azure Cosmos DB API for NoSQL accounts, you can also maintain your own backups by using one of the following approaches:
+
+- Use [Azure Data Factory](../data-factory/connector-azure-cosmos-db.md) to move data periodically to a storage solution of your choice.
+
+- Use Azure Cosmos DB [change feed](change-feed.md) to read data periodically for full backups or for incremental changes, and store it in your own storage.
+
+## Post-restore actions
+
+The primary goal of the data restore is to recover the data that you've accidentally deleted or modified. So, we recommend that you first inspect the content of the recovered data to ensure it contains what you are expecting. If everything looks good, you can migrate the data back to the primary account. Although it's possible to use the restored account as your new active account, it's not a recommended option if you have production workloads.
+
+After you restore the data, you get a notification about the name of the new account (it's typically in the format `<original-name>-restored1`) and the time when the account was restored to. The restored account has the same provisioned throughput and indexing policies, and it's in the same region as the original account. A user who is the subscription admin or a coadmin can see the restored account.
+
+### Migrate data to the original account
+
+The following are different ways to migrate data back to the original account:
+
+- Use the [Azure Data Factory](../data-factory/connector-azure-cosmos-db.md).
+- Use the [change feed](change-feed.md) in Azure Cosmos DB.
+- You can write your own custom code.
+
+It's advised that you delete the restored container or database immediately after migrating the data. If you don't delete the restored databases or containers, they incur cost for request units, storage, and egress.
+
+## Next steps
+
+- To make a restore request, contact Azure Support by [filing a ticket in the Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
+- [Create account with continuous backup](provision-account-continuous-backup.md).
+- [Restore continuous backup account](restore-account-continuous-backup.md).
cosmos-db Periodic Backup Update Storage Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/periodic-backup-update-storage-redundancy.md
+
+ Title: Update backup storage redundancy for Azure Cosmos DB periodic backup accounts
+description: Learn how to update the backup storage redundancy using Azure CLI and PowerShell. You can also configure an Azure policy on your accounts to enforce the required storage redundancy.
++++ Last updated : 12/03/2021++++
+# Update backup storage redundancy for Azure Cosmos DB periodic backup accounts
+
+By default, Azure Cosmos DB stores periodic mode backup data in geo-redundant [blob storage](../storage/common/storage-redundancy.md) that is replicated to a [paired region](../availability-zones/cross-region-replication-azure.md). You can override the default backup storage redundancy. This article explains how to update the backup storage redundancy using Azure CLI and PowerShell. It also shows how to configure an Azure policy on your accounts to enforce the required storage redundancy.
+
+## Update using Azure portal
+
+Use the following steps to update backup storage redundancy from the Azure portal:
+
+1. Sign into the [Azure portal](https://portal.azure.com/) and navigate to your Azure Cosmos DB account.
+
+1. Open the **Backup & Restore** pane, update the backup storage redundancy, and select **Submit**. It takes a few minutes for the operation to complete:
+
+ :::image type="content" source="./media/update-backup-storage-redundancy/update-backup-storage-redundancy-portal.png" alt-text="Update backup storage redundancy from the Azure portal":::
+
+## Update using CLI
+
+Use the following steps to update the backup storage redundancy on an existing account using Azure CLI:
+
+1. Install the latest version of Azure CLI, or a version equal to or later than 2.30.0. If you have the "cosmosdb-preview" extension installed, make sure to remove it.
+
+1. Use the following command to get the backup redundancy options available in the regions where your account exists:
+
+ ```azurecli-interactive
+ az cosmosdb locations show --location <region_Name>
+ ```
+
+ ```json
+ {
+ "id": "subscriptionId/<Subscription_ID>/providers/Microsoft.DocumentDB/locations/eastus/",
+ "name": "East US",
+ "properties": {
+ "backupStorageRedundancies": [
+ "Geo",
+ "Zone",
+ "Local"
+ ],
+ "isResidencyRestricted": false,
+ "supportsAvailabilityZone": true
+ },
+ "type": "Microsoft.DocumentDB/locations"
+ }
+ ```
+
+ The previous command shows a list of backup redundancies available in the specific region. Supported values are displayed in the `backupStorageRedundancies` property. For example, some regions such as "East US" support three redundancy options ("Geo", "Zone", and "Local"), whereas some regions like "UAE North" support only the "Geo" and "Local" redundancy options. Before updating, choose the backup storage redundancy option that is supported in all the regions where your account exists.
+
+1. Run the following command with the chosen backup redundancy option to update the backup redundancy on an existing account:
+
+ ```azurecli-interactive
+ az cosmosdb update -n <account_Name> -g <resource_Group> --backup-redundancy "Geo"
+ ```
+
+1. Run the following command to create a new account with the chosen backup redundancy option:
+
+ ```azurecli-interactive
+ az cosmosdb create -n <account_Name> -g <resource_Group> --backup-redundancy "Geo" --locations regionName=westus
+ ```
+
+## Update using PowerShell
+
+1. Install the latest version of Azure PowerShell, or a version equal to or later than 1.4.0:
+
+ ```powershell-interactive
+ Install-Module -Name Az.CosmosDB -RequiredVersion 1.4.0
+ ```
+
+1. Use the following command to get the backup redundancy options available in the regions where your account exists:
+
+ ```powershell-interactive
+ $location = Get-AzCosmosDBLocation -Location <region_Name>
+ $location.Properties.BackupStorageRedundancies
+ ```
+
+ The previous command shows a list of backup redundancies available in the specific region. Supported values are displayed in the `backupStorageRedundancies` property. For example, some regions such as "East US" support three redundancy options ("Geo", "Zone", and "Local"), whereas some regions like "UAE North" support only the "Geo" and "Local" redundancy options. Before updating, choose the backup storage redundancy option that is supported in all the regions where your account exists.
+
+1. Run the following command with the chosen backup redundancy option to update the backup redundancy on an existing account:
+
+ ```powershell-interactive
+ Update-AzCosmosDBAccount `
+ -Name <account_Name> `
+ -ResourceGroupName <resource_Group> `
+ -BackupStorageRedundancy "Geo"
+ ```
+
+1. Run the following command to create a new account with the chosen backup redundancy option:
+
+ ```powershell-interactive
+ New-AzCosmosDBAccount `
+ -Name <account_Name> `
+ -ResourceGroupName <resource_Group> `
+ -Location <region_Name> `
+ -BackupPolicyType Periodic `
+ -BackupStorageRedundancy "Geo"
+
+ ```
+
+## Add a policy for the backup storage redundancy
+
+Azure Policy helps you to enforce organizational standards and to assess compliance at scale. The following sample shows how to add an Azure policy for database accounts to have a backup redundancy of type "Zone".
+
+```json
+"parameters": {},
+ "policyRule": {
+ "if": {
+ "allOf": [
+ {
+ "field": "Microsoft.DocumentDB/databaseAccounts/backupPolicy.periodicModeProperties.backupStorageRedundancy",
+ "match": "Zone"
+ }
+ ]
+ },
+ "then": {
+ "effect": "audit"
+ }
+ }
+```
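+
+As a sketch, assuming the `policyRule` portion of the sample is saved to a local file named `cosmosdb-backup-redundancy-rule.json` (a hypothetical file name), you could create and assign the definition with Azure CLI commands like the following; the definition name and subscription ID are placeholders:
+
+```azurecli-interactive
+az policy definition create \
+    --name "audit-cosmosdb-backup-redundancy" \
+    --rules "cosmosdb-backup-redundancy-rule.json" \
+    --mode All
+
+az policy assignment create \
+    --name "audit-cosmosdb-backup-redundancy" \
+    --policy "audit-cosmosdb-backup-redundancy" \
+    --scope "/subscriptions/<subscription-id>"
+```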
+
+## Next steps
+
+* Provision an Azure Cosmos DB account with [periodic backup mode](periodic-backup-restore-introduction.md).
+* Provision an account with continuous backup using [Azure portal](provision-account-continuous-backup.md#provision-portal), [PowerShell](provision-account-continuous-backup.md#provision-powershell), [CLI](provision-account-continuous-backup.md#provision-cli), or [Azure Resource Manager](provision-account-continuous-backup.md#provision-arm-template).
+* Restore continuous backup account using [Azure portal](restore-account-continuous-backup.md#restore-account-portal), [PowerShell](restore-account-continuous-backup.md#restore-account-powershell), [CLI](restore-account-continuous-backup.md#restore-account-cli), or [Azure Resource Manager](restore-account-continuous-backup.md#restore-arm-template).
cost-management-billing Cancel Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/cancel-azure-subscription.md
tags: billing
Previously updated : 03/07/2023 Last updated : 03/20/2023
A billing account owner uses the following steps to cancel a subscription.
A subscription owner can navigate in the Azure portal to **Subscriptions** and then start at step 3.
-1. In the Azure portal, navigate to Cost Management + Billing.
-1. In the left menu under **Products + services**, select **All billing subscriptions**. If you have a support plan, it's shown in the list.
- :::image type="content" source="./media/cancel-azure-subscription/all-billing-subscriptions.png" alt-text="Screenshot showing all billing subscriptions." lightbox="./media/cancel-azure-subscription/all-billing-subscriptions.png" :::
+1. In the Azure portal, navigate to **Cost Management + Billing**.
+1. In the left menu, select either **Subscriptions** or **Azure subscriptions**, depending on which is available to you. If you have a support plan, it's shown in the list.
1. Select the subscription that you want to cancel. 1. At the top of page, select **Cancel**. 1. If you have any resources associated with the subscription, they're shown on the page. At the top of the page, select **Cancel subscription**.
cost-management-billing Exchange And Refund Azure Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/exchange-and-refund-azure-reservations.md
Previously updated : 02/10/2023 Last updated : 03/21/2023
However, you can't exchange dissimilar reservations. For example, you can't exch
You can also exchange a reservation to purchase another reservation of a similar type in a different region. For example, you can exchange a reservation that's in West US 2 region for one that's in West Europe region. > [!NOTE]
-> Exchanges will be unavailable for all compute reservations - Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations - purchased on or after **January 1, 2024**. Compute reservations purchased **prior to January 1, 2024** will reserve the right to **exchange one more time** after the policy change goes into effect. Microsoft launched Azure savings plan for compute and it's designed to help you save broadly on predictable compute usage. The savings plan provides more flexibility needed to accommodate changes such as virtual machine series and regions. With savings plan providing the flexibility automatically, we’re adjusting our reservations exchange policy. You can continue to exchange VM sizes (with instance size flexibility) but we'll no longer support exchanging instance series or regions for these compute reservations. For more information about the exchange policy change, see [Changes to the Azure reservation exchange policy](reservation-exchange-policy-changes.md).
+> Exchanges will be unavailable for all compute reservations - Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations - purchased on or after **January 1, 2024**. Compute reservations purchased **prior to January 1, 2024** will reserve the right to **exchange one more time** after the policy change goes into effect. Microsoft launched Azure savings plan for compute and it's designed to help you save broadly on predictable compute usage. The savings plan provides more flexibility needed to accommodate changes such as virtual machine series and regions. With savings plan providing the flexibility automatically, we’re adjusting our reservations exchange policy. You can continue to exchange VM sizes (with instance size flexibility) but we'll no longer support exchanging instance series or regions for Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations. For more information about the exchange policy change, see [Changes to the Azure reservation exchange policy](reservation-exchange-policy-changes.md).
>
-> For a limited time you may [trade-in](../savings-plan/reservation-trade-in.md) your Azure compute reservations for a savings plan. Or, you may continue to use and purchase reservations for those predictable, stable workloads where you know the specific configuration you'll need and want additional savings. Learn more about [Azure savings plan for compute](../savings-plan/index.yml) and how it works with reservations.
+> You may [trade-in](../savings-plan/reservation-trade-in.md) your Azure compute reservations for a savings plan. Or, you may continue to use and purchase reservations for those predictable, stable workloads where you know the specific configuration you'll need and want additional savings. Learn more about [Azure savings plan for compute and how it works with reservations](../savings-plan/index.yml).
When you exchange a reservation, you can change your term from one-year to three-year.
To refund a reservation, go to **Reservation Details** and select **Refund**.
You can return similar types of reservations in one action.
-When you exchange reservations, the new purchase currency amount must be greater than the refund amount. If your new purchase amount is less than the refund amount, you'll get an error. If you see the error, reduce the quantity that you want to return, or increase the amount to purchase.
+When you exchange reservations, the new purchase currency amount must be greater than the refund amount. If your new purchase amount is less than the refund amount, an error message appears. If you see the error, reduce the quantity that you want to return, or increase the amount to purchase.
1. Sign in to the Azure portal and navigate to **Reservations**. 1. In the list of reservations, select the box for each reservation that you want to exchange. 1. At the top of the page, select **Exchange**. 1. If needed, revise the quantity to return for each reservation.
-1. If you select the auto-fill return quantity, you can choose to **Refund all** to fill the list with the full quantity that you own for each reservation. Or, select **Optimize for utilization (7-day)** to fill the list with a quantity that optimizes for utilization based on the last seven days of usage. **Select Apply**.
+1. If you select the autofill return quantity, you can choose to **Refund all** to fill the list with the full quantity that you own for each reservation. Or, select **Optimize for utilization (7-day)** to fill the list with a quantity that optimizes for utilization based on the last seven days of usage. **Select Apply**.
1. At the bottom of the page, select **Next: Purchase**. 1. On the purchase tab, select the available products that you want to exchange for. You can select multiple products of different types. 1. In the Select the product you want to purchase pane, select the products you want and then select **Add to cart** and then select **Close**. 1. When done, select **Next: Review**. 1. Review your reservations to return and new reservations to purchase and then select **Confirm exchange**.
-## Exchange non-premium storage for premium storage
+## Exchange nonpremium storage for premium storage
You can exchange a reservation purchased for a VM size that doesn't support premium storage to a corresponding VM size that does. For example, an _F1_ for an _F1s_. To make the exchange, go to Reservation Details and select **Exchange**. The exchange doesn't reset the term of the reserved instance or create a new transaction.
-If you're exchanging for a different size, series, region or payment frequency, the term will be reset for the new reservation.
+If you're exchanging for a different size, series, region or payment frequency, the term is reset for the new reservation.
## How transactions are processed
Azure has the following policies for cancellations, exchanges, and refunds.
**Refund policies** - We're currently not charging an early termination fee, but in the future there might be a 12% early termination fee for cancellations.-- The total canceled commitment can't exceed 50,000 USD in a 12-month rolling window for a billing profile or single enrollment. For example, assume you have a three-year reservation (36 months). It costs 100 USD per month. It's refunded in the 12th month. The canceled commitment is 2,400 USD (for the remaining 24 months). After the refund, your new available limit for refund will be 47,600 USD (50,000-2,400). In 365 days from the refund, the 47,600 USD limit will be increased by 2,400 USD and your new pool will be 50,000 USD. Any other reservation cancellation for the billing profile or EA enrollment will deplete the same pool, and the same replenishment logic will apply.-- Azure won't process any refund that will exceed the 50,000 USD limit in a 12-month window for a billing profile or EA enrollment.
+- The total canceled commitment can't exceed 50,000 USD in a 12-month rolling window for a billing profile or single enrollment. For example, assume you have a three-year reservation (36 months). It costs 100 USD per month. It's refunded in the 12th month. The canceled commitment is 2,400 USD (for the remaining 24 months). After the refund, your new available limit for refund is 47,600 USD (50,000-2,400). In 365 days from the refund, the 47,600 USD limit increases by 2,400 USD. Your new pool is 50,000 USD. Any other reservation cancellation for the billing profile or EA enrollment depletes the same pool, and the same replenishment logic applies.
+- Azure doesn't process any refund that exceeds the 50,000 USD limit in a 12-month window for a billing profile or EA enrollment.
- Refunds that result from an exchange don't count against the refund limit. - Refunds are calculated based on the lowest price of either your purchase price or the current price of the reservation. - Only reservation order owners can process a refund. [Learn how to Add or change users who can manage a reservation](manage-reserved-vm-instance.md#who-can-manage-a-reservation-by-default).
cost-management-billing Reservation Trade In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/reservation-trade-in.md
Previously updated : 03/16/2023 Last updated : 03/21/2023
The following reservations aren't eligible to be traded in for savings plans:
- SUSE Linux plans > [!NOTE]
-> Exchanges will be unavailable for all compute reservations - Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations - purchased on or after **January 1, 2024**. Compute reservations purchased **prior to January 1, 2024** will reserve the right to **exchange one more time** after the policy change goes into effect. Azure savings plan for compute is designed to help you save broadly on predictable compute usage. The savings plan provides more flexibility needed to accommodate changes such as virtual machine series and regions. With savings plan providing the flexibility automatically, we’re adjusting our reservations exchange policy. You can continue to exchange VM sizes (with instance size flexibility) but we'll no longer support exchanging instance series or regions for Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations. For a limited time you may trade-in your Azure compute reservations for a savings plan. Or, you may continue to use and purchase reservations for those predictable, stable workloads where you know the specific configuration you’ll need and want additional savings. For more information, see [Self-service exchanges and refunds for Azure Reservations](../reservations/exchange-and-refund-azure-reservations.md).
+> Exchanges will be unavailable for all compute reservations - Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations - purchased on or after **January 1, 2024**. Compute reservations purchased **prior to January 1, 2024** will reserve the right to **exchange one more time** after the policy change goes into effect. Azure savings plan for compute is designed to help you save broadly on predictable compute usage. The savings plan provides more flexibility needed to accommodate changes such as virtual machine series and regions. With savings plan providing the flexibility automatically, we’re adjusting our reservations exchange policy. You can continue to exchange VM sizes (with instance size flexibility) but we'll no longer support exchanging instance series or regions for Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations.
+>
+> You may trade-in your Azure compute reservations for a savings plan. Or, you may continue to use and purchase reservations for those predictable, stable workloads where you know the specific configuration you'll need and want additional savings. For more information, see [Self-service exchanges and refunds for Azure Reservations](../reservations/exchange-and-refund-azure-reservations.md).
Although compute reservation exchanges will end on January 1, 2024, noncompute reservation exchanges are unchanged. You're able to continue to trade-in reservations for saving plans.
cost-management-billing Overview Azure Hybrid Benefit Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/overview-azure-hybrid-benefit-scope.md
description: Azure Hybrid Benefit is a licensing benefit that lets you bring you
keywords: Previously updated : 12/06/2022 Last updated : 03/21/2023
Azure Hybrid Benefit is a licensing benefit that helps you to significantly redu
You can centrally manage your Azure Hybrid Benefit for SQL Server across the scope of an entire Azure subscription or overall billing account.
-To use centrally manage licenses, you must have a specific role assigned to you, depending on your Azure agreement type:
+To use centrally managed licenses, you must have a specific role assigned to you, depending on your Azure agreement type:
- Enterprise Agreement - Enterprise Administrator
- If you're not an Enterprise admin, you must be assigned that role by your organization (with full access). For more information about how to become a member of the role, see [Add another enterprise administrator](../manage/ea-portal-administration.md#create-another-enterprise-administrator).
+ If you're not an Enterprise admin, your organization must assign you that role with full access. For more information about how to become a member of the role, see [Add another enterprise administrator](../manage/ea-portal-administration.md#create-another-enterprise-administrator).
- Microsoft Customer Agreement - Billing account owner - Billing account contributor - Billing profile owner - Billing profile contributor
- If you don't have one of the roles above, your organization must assign one to you. For more information about how to become a member of the roles, see [Manage billing roles](../manage/understand-mca-roles.md#manage-billing-roles-in-the-azure-portal).
+ If you don't have one of the roles, your organization must assign one to you. For more information about how to become a member of the roles, see [Manage billing roles](../manage/understand-mca-roles.md#manage-billing-roles-in-the-azure-portal).
At a high level, here's how it works:
At a high level, here's how it works:
1. Under **Cost Management + Billing** in the Azure portal, you (the billing administrator) choose the scope and the number of qualifying licenses that you want to assign to cover the resources in the scope. :::image type="content" source="./media/overview-azure-hybrid-benefit-scope/set-scope-assign-licenses.png" alt-text="Screenshot showing setting a scope and assigning licenses." lightbox="./media/overview-azure-hybrid-benefit-scope/set-scope-assign-licenses.png" :::
-In the previous example, detected usage for 108 normalized core licenses is needed to cover all eligible Azure SQL resources. Detected usage for individual resources was 56 normalized core licenses. For the example, we showed 60 standard core licenses plus 12 Enterprise core licenses (12 * 4 = 48). So 60 + 48 = 108. Normalized core license values are covered in more detail in the following [How licenses apply to Azure resources](#how-licenses-apply-to-azure-resources) section.
+In the previous example, detected usage for 108 normalized core licenses is needed to cover all eligible Azure SQL resources. Detected usage for individual resources was 56 normalized core licenses. For the example, we showed 60 standard core licenses plus 12 Enterprise core licenses (12 * 4 = 48). So 60 + 48 = 108. Normalized core license values are covered in more detail in the following section, [How licenses apply to Azure resources](#how-licenses-apply-to-azure-resources).
- Each hour as resources in the scope run, Azure automatically assigns the licenses to them and discounts the costs correctly. Different resources can be covered each hour. - Any usage above the number of assigned licenses is billed at normal, pay-as-you-go prices. - When you choose to manage the benefit by assigning licenses at a scope level, you can't manage individual resources in the scope any longer.
-The original resource-level way to enable Azure Hybrid Benefit is still available for SQL Server and is currently the only option for Windows Server. It involves a DevOps role selecting the benefit for each individual resource (like a SQL Database or Windows Server VM) when creating or managing it. Doing so results in the hourly cost of that resource being discounted. For more information, see [Azure Hybrid Benefit for Windows Server](/azure/azure-sql/azure-hybrid-benefit).
+The original resource-level way to enable Azure Hybrid Benefit is still available for SQL Server and is currently the only option for Windows Server. It involves a DevOps role selecting the benefit for each individual resource (like a SQL Database or Windows Server VM) when you create or manage it. Doing so results in the hourly cost of that resource being discounted. For more information, see [Azure Hybrid Benefit for Windows Server](/azure/azure-sql/azure-hybrid-benefit).
Enabling centralized management of Azure Hybrid Benefit of for SQL Server at a subscription or account scope level is currently in preview. It's available to enterprise customers and to customers that buy directly from Azure.com with a Microsoft Customer Agreement. We hope to extend the capability to Windows Server and more customers.
Centrally managing Azure Hybrid Benefit at a scope-level covers the following co
- SQL Elastic Pools - SQL Server on Azure VMs
-Resource-level Azure Hybrid Benefit management can cover all the above, too. It's currently the only option for covering the following resources:
+Resource-level Azure Hybrid Benefit management can cover all of those points, too. It's currently the only option for covering the following resources:
- Azure Dedicated Hosts - Azure Data Factory SQL Server Integration Services (SSIS) ## Centralized scope-level management advantages
-You get the following:
+You get the following benefits:
- **A simpler, more scalable approach with better control** - The billing administrator directly assigns available licenses to one or more Azure scopes. The original approach, at a large scale, involves coordinating Azure Hybrid Benefit usage across many resources and DevOps owners. - **An easy-to-use way to optimize costs** - An Administrator can monitor Azure Hybrid Benefit utilization and directly adjust licenses assigned to Azure. For example, an administrator might see an opportunity to save more money by assigning more licenses to Azure. Then they speak with their procurement department to confirm license availability. Finally, they can easily assign the licenses to Azure and start saving. - **A better method to manage costs during usage spikes** - You can easily scale up the same resource or add more resources during temporary spikes. You don't need to assign more SQL Server licenses (for example, closing periods or increased holiday shopping). For short-lived workload spikes, pay-as-you-go charges for the extra capacity might cost less than acquiring more licenses to use Azure Hybrid Benefit for the capacity. Managing the benefit at a scope, rather than at a resource-level, helps you to decide based on aggregate usage.-- **Clear separation of duties to sustain compliance** - In the resource-level Azure Hybrid Benefit model, resource owners might select Azure Hybrid Benefit when there are no licenses available. Or, they might *not* select the benefit when there *are* licenses available. Scope-level management of Azure Hybrid Benefit solves this situation. The billing admins that manage the benefit centrally are positioned to confirm with procurement and software asset management departments how many licenses are available to assign to Azure. The point is illustrated by the following diagram.
+- **Clear separation of duties to sustain compliance** - In the resource-level Azure Hybrid Benefit model, resource owners might select Azure Hybrid Benefit when there are no licenses available. Or, they might *not* select the benefit when there *are* licenses available. Scope-level management of Azure Hybrid Benefit solves this situation. The billing admins that manage the benefit centrally are positioned to confirm with procurement and software asset management departments how many licenses are available to assign to Azure. The following diagram illustrates the point.
:::image type="content" source="./media/overview-azure-hybrid-benefit-scope/duty-separation.svg" alt-text="Diagram showing the separation of duties." border="false" lightbox="./media/overview-azure-hybrid-benefit-scope/duty-separation.svg":::
Both SQL Server Enterprise (core) and SQL Server Standard (core) licenses with S
One rule to understand: One SQL Server Enterprise Edition license has the same coverage as _four_ SQL Server Standard Edition licenses, across all qualified Azure SQL resource types.
-To explain how it works further, the term _normalized core license_ or NCL is used. In alignment with the rule above, one SQL Server Standard core license produces one NCL. One SQL Server Enterprise core license produces four NCLs. For example, if you assign four SQL Server Enterprise core licenses and seven SQL Server Standard core licenses, your total coverage and Azure Hybrid Benefit discounting power is equal to 23 NCLs (4\*4+7\*1).
+To explain how it works further, the term _normalized core license_ or NCL is used. In alignment with the rule, one SQL Server Standard core license produces one NCL. One SQL Server Enterprise core license produces four NCLs. For example, if you assign four SQL Server Enterprise core licenses and seven SQL Server Standard core licenses, your total coverage and Azure Hybrid Benefit discounting power is equal to 23 NCLs (4\*4+7\*1).
-The following table summarizes how many NCLs you need to fully discount the SQL Server license cost for different resource types. Scope-level management of Azure Hybrid Benefit strictly applies the rules in the product terms, summarized below.
+The following table summarizes how many NCLs you need to fully discount the SQL Server license cost for different resource types. Scope-level management of Azure Hybrid Benefit strictly applies the rules in the product terms, summarized as follows.
| **Azure Data Service** | **Service tier** | **Required number of NCLs** | | | | |
The following table summarizes how many NCLs you need to fully discount the SQL
¹ *Azure Hybrid Benefit isn't available in the serverless compute tier of Azure SQL Database.*
-² *Subject to a minimum of four vCore licenses per Virtual Machine.*
+² *Subject to a minimum of four vCores per Virtual Machine, which translates to four NCL if Standard edition is used, and 16 NCL if Enterprise edition is used.*
## Ongoing scope-level management
-We recommend that you establish a proactive rhythm when centrally managing Azure Hybrid Benefit, similar to the following tasks and order:
+We recommend that you establish a proactive rhythm when centrally managing Azure Hybrid Benefit, similar to the following tasks and order.
- Engage within your organization to understand how many Azure SQL resources and vCores will be used during the next month, quarter, or year. - Work with your procurement and software asset management departments to determine if enough SQL core licenses with Software Assurance are available. The benefit allows licenses supporting migrating workloads to be used both on-premises and in Azure for up to 180 days. So, those licenses can be counted as available. - Assign available licenses to cover your current usage _and_ your expected usage growth during the upcoming period. - Monitor assigned license utilization. - If it approaches 100%, then consult others in your organization to understand expected usage. Confirm license availability then assign more licenses to the scope.
- - If usage is 100%, you might be using resources beyond the number of licenses assigned. Return to the [Create license assignment experience](create-sql-license-assignments.md) and review the usage that Azure shows. Then assign additional available licenses to the scope for more coverage.
+ - If usage is 100%, you might be using resources beyond the number of licenses assigned. Return to the [Create license assignment experience](create-sql-license-assignments.md) and review the usage that Azure shows. Then assign more available licenses to the scope for more coverage.
- Repeat the proactive process periodically. ## Next steps
cost-management-billing Understand Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/understand-usage.md
tags: billing
Previously updated : 03/13/2023 Last updated : 03/20/2023
The detailed usage and charges file contains daily rated usage based on negotiated rates, purchases (for example, reservations, Marketplace fees), and refunds for the specified period.
-Fees don't include credits, taxes, or other charges or discounts.
+Fees don't include credits, taxes, or other charges or discounts. You manually download the usage and charges file.
+
+The information in the usage and charges file is the same information that's [exported from Cost Management](../costs/tutorial-export-acm-data.md). It's also the same information that's retrieved from the Cost Details API. For more information about choosing a method to get cost details, see [Choose a cost details solution](../automate/usage-details-best-practices.md).
+ The following table covers which charges are included for each account type. Account type | Azure usage | Marketplace usage | Purchases | Refunds
BillingPeriodStartDate | All | The start date of the billing period.
BillingProfileId¹ | All | Unique identifier of the EA enrollment, PAYG subscription, MCA billing profile, or AWS consolidated account. BillingProfileName | All | Name of the EA enrollment, PAYG subscription, MCA billing profile, or AWS consolidated account. ChargeType | All | Indicates whether the charge represents usage (**Usage**), a purchase (**Purchase**), or a refund (**Refund**).
-ConsumedService | EA, PAYG | Name of the service the charge is associated with. For more information about choosing a method to get cost details, see [Choose a cost details solution](../automate/usage-details-best-practices.md).
+ConsumedService | All | Name of the service the charge is associated with.
CostCenter¹ | EA, MCA | The cost center defined for the subscription for tracking costs (only available in open billing periods for MCA accounts). Cost | EA, PAYG | See CostInBillingCurrency. CostInBillingCurrency | MCA | Cost of the charge in the billing currency before credits or taxes.
data-factory Connector Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-office-365.md
This Microsoft 365 (Office 365) connector is supported for the following capabil
ADF Microsoft 365 (Office 365) connector and Microsoft Graph Data Connect enables at scale ingestion of different types of datasets from Exchange Email enabled mailboxes, including address book contacts, calendar events, email messages, user information, mailbox settings, and so on. Refer [here](/graph/data-connect-datasets) to see the complete list of datasets available.
-For now, within a single copy activity and data flow, you can only **ingest data from Microsoft 365 (Office 365) into [Azure Blob Storage](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), and [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md) in JSON format** (type setOfObjects). If you want to load Microsoft 365 (Office 365) into other types of data stores or in other formats, you can chain the first copy activity or data flow with a subsequent activity to further load data into any of the [supported ADF destination stores](copy-activity-overview.md#supported-data-stores-and-formats) (refer to "supported as a sink" column in the "Supported data stores and formats" table).
+For now, within a single copy activity and data flow, you can only **ingest data from Microsoft 365 (Office 365) into [Azure Blob Storage](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), and [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md) in JSON format** (type setOfObjects). When copying to Azure Blob Storage, the output is a blob containing JSON text. If you want to load Microsoft 365 (Office 365) into other types of data stores or in other formats, you can chain the first copy activity or data flow with a subsequent activity to further load data into any of the [supported ADF destination stores](copy-activity-overview.md#supported-data-stores-and-formats) (refer to "supported as a sink" column in the "Supported data stores and formats" table).
>[!IMPORTANT] >- The Azure subscription containing the data factory or Synapse workspace and the sink data store must be under the same Azure Active Directory (Azure AD) tenant as Microsoft 365 (Office 365) tenant.
data-factory Connector Troubleshoot Hive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-hive.md
Previously updated : 01/25/2023 Last updated : 01/28/2023
This article provides suggestions to troubleshoot common problems with the Hive
- **Cause**: You use the Kerberos authentication that is not supported in Azure Data Factory. - **Recommendation**: Try the following steps. If they do not work, check the provided driver to resolve this issue.
- 1. The **kr5.ini** file is in the **C:\Program Files\MIT\Kerberos\bin** folder.
+ 1. Ensure that the **krb5.ini** file is in the **C:\Program Files\MIT\Kerberos\bin** folder.
 2. Add the `KRB5_CONFIG` and `KRB5CCNAME` system environment variables as well.
- 3. Edit the **krb.ini** file.
+ 3. Edit the **krb5.ini** file.
4. Shut down and restart the VM and the SHIR from the machine. ## Next steps
data-factory Control Flow Append Variable Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-append-variable-activity.md
To use an Append Variable activity in a pipeline, complete the following steps:
:::image type="content" source="media/control-flow-append-variable-activity/append-variable.png" alt-text="Shows the UI for an Append Variable activity.":::
+> [!NOTE]
+> The appended variable value does not appear in debug output unless you use a [Set Variable activity](control-flow-set-variable-activity.md) to explicitly set a new variable with its value.
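+
+For example, the following pipeline fragment is a sketch, with hypothetical activity and variable names, that appends a value to an array variable and then uses a Set Variable activity to copy it into a second array variable so the value appears in debug output:
+
+```json
+{
+    "name": "Append item",
+    "type": "AppendVariable",
+    "typeProperties": {
+        "variableName": "items",
+        "value": "@pipeline().parameters.newItem"
+    }
+},
+{
+    "name": "Show items",
+    "type": "SetVariable",
+    "dependsOn": [
+        { "activity": "Append item", "dependencyConditions": [ "Succeeded" ] }
+    ],
+    "typeProperties": {
+        "variableName": "itemsCopy",
+        "value": "@variables('items')"
+    }
+}
+```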
+ ## Type properties Property | Description | Required
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-portal.md
Previously updated : 08/31/2022 Last updated : 03/21/2023 # Customer intent: As an IT admin, I need to understand how to configure compute on an Azure Stack Edge Pro GPU device so that I can use it to transform data before I send it to Azure.
You can create and manage virtual machines (VMs) on an Azure Stack Edge Pro GPU device by using the Azure portal, templates, Azure PowerShell cmdlets, the Azure CLI, or Python scripts. This article describes how to create and manage a VM on your Azure Stack Edge Pro GPU device by using the Azure portal. > [!IMPORTANT]
-> You will need to enable multifactor authentication for the user who manages the VMs and images that are deployed on your device from the cloud. The cloud operations will fail if the user doesn't have multifactor authentication enabled. For steps to enable multifactor authentication, see [Manage authentication methods for Azure AD Multi-Factor Authentication](../active-directory/authentication/howto-mfa-userdevicesettings.md).
-
+> You will need to enable multifactor authentication for the user who manages the VMs and images that are deployed on your device from the cloud. The cloud operations will fail if the user doesn't have multifactor authentication enabled. For steps to enable multifactor authentication, see [Enable Azure AD Multi-Factor Authentication](/training/modules/secure-aad-users-with-mfa/4-exercise-mfa).
+ ## VM deployment workflow The high-level summary of the deployment workflow is as follows:
databox-online Azure Stack Edge Powershell Based Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-powershell-based-configuration.md
Previously updated : 03/10/2023 Last updated : 03/21/2023 # Use a config file to deploy an Azure Stack Edge device [!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-sku.md)]
-This article describes how to automate initial device configuration and activation of Azure Stack Edge devices using PowerShell. Use the steps in this article as alternatives to the local web user interface setup sequence.
+This article describes how to use PowerShell to automate initial device configuration and activation of Azure Stack Edge devices. Use the steps in this article as alternatives to the local web user interface setup sequence.
You can run as many rounds of device configuration as necessary. You can also use the Azure portal or the device local user interface to modify device configuration. ## Usage considerations -- You can apply individual configuration changes to a device using PowerShell cmdlets, or you can apply bulk configuration changes using a JSON file. You can apply changes with a JSON file at any point in the appliance lifecycle.
+- You can apply individual configuration changes to a device using PowerShell cmdlets, or you can apply bulk configuration changes using a JSON file.
+- You can apply changes with a JSON file at any point in the appliance lifecycle.
- To manage devices using the local web user interface, see [Connect to Azure Stack Edge Pro with GPU](azure-stack-edge-gpu-deploy-connect.md?pivots=single-node). - You can't change device authentication using this method. To change device authentication settings, see [Change device password](azure-stack-edge-gpu-manage-access-power-connectivity-mode.md#change-device-password). - Cluster formation is not supported using PowerShell cmdlets. For more information about Azure Stack Edge clusters, see [Install a two-node cluster](azure-stack-edge-gpu-deploy-install.md?pivots=two-node).
Use the following steps to import the PowerShell module and sign in to the devic
Set-Login "https://<IP address>" "<Password1>" "<NewPassword>" ```
-## Change password and fetch the device configuration
+## Fetch the device configuration
-Use the following steps to sign in to a device, change the password, and fetch the device configuration:
-
-1. Sign in to the device and change the device password.
-
- ```azurepowershell
- Set-Login "https://<IP address>" "<CurrentPassword>" "<NewPassword>"
- ```
-
-1. Fetch the device configuration.
+Use the following cmdlet to fetch the device configuration:
```azurepowershell Get-DeviceConfiguration | To-json
Use the following steps to create a device configuration package in PowerShell a
Run the following cmdlets in PowerShell:
-1. Sign in to the device.
-
- ```azurepowershell
- Set-Login "https://<IP address>" "<Password>"
- ```
- 1. Set the `time` object property. ```azurepowershell
Run the following cmdlets in PowerShell:
Once a config.json file has been created, as shown in the previous example, with the desired configuration, use the JSON file to change configuration settings on one or more devices. > [!NOTE]
-> Use a config.json file that meets the needs of your organization. [Sample JSON files are available here](https://github.com/Azure-Samples/azure-stack-edge-deploy-vms/tree/master/PowerShellBasedConfiguration/).
+> Use a config.json file that meets the needs of your organization. [Sample JSON files are available here](https://aka.ms/aseztp-ps).
### Configure a single-node device
This sequence of PowerShell cmdlets signs in to the device, applies device confi
Run the following cmdlets in PowerShell:
-1. Sign in to the device.
-
- ```azurepowershell
- Set-Login "https://<IP address>" "<Password>"
- ```
- 1. Before you run the device configuration operation, ensure that the JSON file uses the `nodeName` of the device to be changed. > [!NOTE]
Run the following cmdlets in PowerShell:
```azurepowershell Get-DeviceConfiguration | To-json ```+ ### Configure a two-node device
-This sequence of PowerShell cmdlets signs in to a two-node device, applies device configuration settings from a JSON file, verifies completion of the operation, and then fetches the new device configuration.
+This sequence of PowerShell cmdlets applies device configuration settings from a JSON file, verifies completion of the operation, and then fetches the new device configuration.
> [!NOTE] > Two-node configurations are only supported on Azure Stack Edge Pro GPU and Azure Stack Edge Pro 2 devices. Run the following cmdlets in PowerShell:
-1. Sign in to the device.
-
- ```azurepowershell
- Set-Login "https://<IP address>" "<Password>"
- ```
- 1. Before you run the device configuration operation, ensure that the JSON file uses the `nodeName` of the device to be changed. > [!NOTE]
Use the following steps to activate an Azure Stack Edge device. Note that activa
1. Retrieve the `ActivationKey` for your device. For detailed steps, see [Create a management resource, and Get the activation key](azure-stack-edge-gpu-deploy-prep.md#create-a-management-resource-for-each-device).
-1. Sign in to the device.
-
- ```azurepowershell
- Set-Login "https://<IP address>" "Password"
- ```
- 1. Set the `ActivationKey` property. ```azurepowershell
Use the following steps to activate an Azure Stack Edge device. Note that activa
Get-DeviceConfiguration | To-json ```
- Here's sample output showing device activation status:
+ Here's sample output showing that the device is activated:
```output PS C:\> Get-DeviceConfiguration | To-json
Use the following steps to activate an Azure Stack Edge device. Note that activa
## Quickly fetch or change device configuration settings
-Use the following steps to sign in to the device, fetch the status of the `WebProxy` properties, set the `WebProxy` property to "isEnabled = true" and set the `WebProxy` URI, and then fetch the status of the changed `WebProxy` properties. After running the package, verify the new device configuration.
-
-1. Sign in to the device.
-
- ```azurepowershell
- Set-Login "https://<IP address>" "Password"
- ```
+Use the following steps to fetch the status of the `WebProxy` properties, set the `WebProxy` property to "isEnabled = true" and set the `WebProxy` URI, and then fetch the status of the changed `WebProxy` properties. After running the package, verify the new device configuration.
1. Load the device configuration cmdlet.
Use the following steps to sign in to the device, fetch the status of the `WebPr
} ```
-## Enable proactive log consent
+## Enable proactive log collection
Proactive log collection gathers system health indicators on your Azure Stack Edge device to help you efficiently troubleshoot any device issues. Proactive log collection is enabled by default. For more information, see [Proactive log collection](azure-stack-edge-gpu-proactive-log-collection.md). Use the following steps to fetch the current setting and then enable or disable proactive logging for your device.
-1. Sign in to the device.
-
- ```azurepowershell
- Set-Login "https://<IP address>" "Password"
- ```
- 1. Fetch the device configuration. ```azurepowershell
Use the following steps to fetch the current setting and then enable or disable
## Run device diagnostics
-To diagnose and troubleshoot any device errors, you can run the diagnostics tests. For more information, see [Run diagnostics](azure-stack-edge-gpu-troubleshoot.md#run-diagnostics).
+To diagnose and troubleshoot device errors, run diagnostic tests. For more information, see [Run diagnostics](azure-stack-edge-gpu-troubleshoot.md#run-diagnostics).
-Use the following steps to sign in to the device and run device diagnostics to verify status after you apply a device configuration package.
-
-1. Sign in to the device.
-
- ```azurepowershell
- Set-Login "https://<IP address>" "Password"
- ```
+Use the following steps to verify device status after you apply a configuration package.
1. Run device diagnostics.
Use the following steps to sign in to the device and run device diagnostics to v
> [!NOTE] > Two-node configurations are only supported on Azure Stack Edge Pro GPU and Azure Stack Edge Pro 2 devices.
-For Azure consistent services and NFS, define a virtual IP that allows you to connect to a clustered device instead of a specific node. A virtual IP is an available IP in the cluster network. Any client connecting to the cluster network on the two-node device should be able to access this IP.
-
-You can set either an ACS or an NFS configuration. Additional options include static or DHCP network settings. For more information about setting virtual IPs, see [Configure virtual IPs](azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy.md#configure-virtual-ips).
+A virtual IP is an available IP in the cluster network. Set a virtual IP to connect to a clustered device instead of an individual node. Any client connecting to the cluster network on the two-node device must be able to access the virtual IP.
-Use the following steps to set the `DeviceVIP` configuration on a two-node Azure Stack Edge device.
+You can set either an Azure Consistent Services or a Network File System configuration. Additional options include static or DHCP network settings. For more information about setting virtual IPs, see [Configure virtual IPs](azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy.md#configure-virtual-ips).
-The example below shows a static ACS configuration, followed by an example that shows a configuration with DHCP enabled.
+### [Azure Consistent Services](#tab/azure-consistent-services)
-1. Sign in to the device.
-
- ```azurepowershell
- Set-Login "https://<IP address>" "Password"
- ```
-
-1. Fetch the device configuration.
-
- ```azurepowershell
- Get-DeviceConfiguration | To-json
- ```
+### Set a static Azure Consistent Services configuration
1. Fetch the `DeviceVIP` configuration.
The example below shows a static ACS configuration, followed by an example that
Get-DeviceVip | to-json ```
-1. Set the `DeviceVIP` property with a static ACS configuration.
+1. Set the `DeviceVIP` property with a static Azure Consistent Services configuration.
```azurepowershell
- $acsVip = New-Object PSObject -Property @{ Type = "ACS"; VipAddress = "192.168.181.10"; ClusterNetworkAddress = "192.168.0.0"; IsDhcpEnabled = $false }
+ $acsVip = New-Object PSObject -Property @{ Type = "ACS"; VipAddress = "10.57.51.32"; ClusterNetworkAddress = "10.57.48.0"; IsDhcpEnabled = $false }
``` 1. Update the device with the `DeviceVIP` property.
The example below shows a static ACS configuration, followed by an example that
Set-DeviceVip -vip $acsVip ```
- Here's sample output:
-
- ```output
- acsVIP nfsVIP
-
- @{type=ACS; name=Azure Consistent Services; address=192.168.181.10; network=; isDhcpEnabled=False} @{type=NFS; name=Network File Syst...
- }
- ```
- 1. Fetch the updated `DeviceVIP` configuration. ```azurepowershell
The example below shows a static ACS configuration, followed by an example that
Here's sample output: ```output
- {
- "acsVIP": {
+ {
+ "acsVIP": {
"type": "ACS", "name": "Azure Consistent Services",
- "address": "192.168.181.10",
+ "address": "10.57.51.32",
"network": {
- "name": "Cluster Network 1",
- "address": "192.168.0.0",
- "subnet": "255.255.0.0",
- "dhcpEnabled": true
+ "name": "Cluster Network 3",
+ "address": "10.57.48.0",
+ "subnet": "255.255.248.0",
+ "dhcpEnabled": false
}, "isDhcpEnabled": false
- },
- "nfsVIP": {
- "type": "NFS",
- "name": "Network File System",
- "address": null,
- "network": null,
- "isDhcpEnabled": false
- },
- "clusterNetworks": [
- {
- "name": "Cluster Network 1",
- "address": "192.168.0.0",
- "subnet": "255.255.0.0",
- "dhcpEnabled": false
- },
- {
- "name": "Cluster Network 4",
- "address": "10.126.72.0",
- "subnet": "255.255.248.0",
- "dhcpEnabled": false
- }
- ]
+ }
} PS C:\> ```
-Use the following steps to set the `DeviceVIP` configuration on a two-node Azure Stack Edge device.
+### Set a DHCP Azure Consistent Services configuration
-The example below shows a sequence of steps to enable DHCP.
-
-1. Sign in to the device.
+1. Fetch the `DeviceVIP` configuration.
```azurepowershell
- Set-Login "https://<IP address>" "Password"
+ Get-DeviceVip | to-json
```
-1. Fetch the device configuration.
+1. Set the `DeviceVIP` property to enable DHCP.
```azurepowershell
- Get-DeviceConfiguration | To-json
+ $acsVip = New-Object PSObject -Property @{ Type = "ACS"; VipAddress = $null; ClusterNetworkAddress = "10.57.48.0"; IsDhcpEnabled = $true }
```
-1. Fetch the `DeviceVIP` configuration.
+1. Update the device with the `DeviceVIP` property.
+
+ ```azurepowershell
+ Set-DeviceVip -vip $acsVip
+ ```
+
+1. Fetch the updated `DeviceVIP` configuration.
```azurepowershell Get-DeviceVip | to-json ```
- Here's sample output showing the initial state of **acsVIP setting "isDhcpEnabled": false** on the device:
+ Here's sample output:
```output { "acsVIP": { "type": "ACS", "name": "Azure Consistent Services",
- "address": "192.168.181.10",
+ "address": "10.57.53.225",
"network": {
- "name": "Cluster Network 1",
- "address": "192.168.0.0",
- "subnet": "255.255.0.0",
- "dhcpEnabled": false
+ "name": "Cluster Network 3",
+ "address": "10.57.48.0",
+ "subnet": "255.255.248.0",
+ "dhcpEnabled": true
},
- "isDhcpEnabled": false
- },
- "nfsVIP": {
- "type": "NFS",
- "name": "Network File System",
- "address": null,
- "network": null,
- "isDhcpEnabled": false
+ "isDhcpEnabled": true
},
- "clusterNetworks": [
- {
- "name": "Cluster Network 1",
- "address": "192.168.0.0",
- "subnet": "255.255.0.0",
- "dhcpEnabled": false
- },
- {
- "name": "Cluster Network 2",
- "address": "10.139.218.0",
- "subnet": "255.255.255.0",
- "dhcpEnabled": false
- },
- {
- "name": "Cluster Network 3",
- "address": "10.126.72.0",
- "subnet": "255.255.248.0",
- "dhcpEnabled": false
- }
- ]
}
+ PS C:\>
+ ```
+
+### [Network File System](#tab/network-file-system)
+
+### Set a static Network File System configuration
+
+1. Fetch the `DeviceVIP` configuration.
+ ```azurepowershell
+ Get-DeviceVip | to-json
``` 1. Set the `DeviceVIP` property with a static Network File System configuration. ```azurepowershell
- $acsVip = New-Object PSObject -Property @{ Type = "ACS"; VipAddress = "192.168.181.10"; ClusterNetworkAddress = "192.168.0.0"; IsDhcpEnabled = $true }
+ $nfsVip = New-Object PSObject -Property @{ Type = "NFS"; VipAddress = "10.57.53.215"; ClusterNetworkAddress = "10.57.48.0"; IsDhcpEnabled = $false }
``` 1. Update the device with the `DeviceVIP` property. ```azurepowershell
- Set-DeviceVip -vip $acsVip
+ Set-DeviceVip -vip $nfsVip
```
+1. Fetch the updated `DeviceVIP` configuration.
+
+ ```azurepowershell
+ Get-DeviceVip | to-json
+ ```
Here's sample output:
- ```output
- acsVIP nfsVIP
-
- @{type=ACS; name=Azure Consistent Services; address=192.168.181.10; network=; isDhcpEnabled=True} @{type=NFS; name=Network File System; address=192.168.3.63; network=; ...
+ ```Output
+ {
+ "nfsVIP": {
+ "type": "NFS",
+ "name": "Network File System",
+ "address": "10.57.53.215",
+ "network": {
+ "name": "Cluster Network 3",
+ "address": "10.57.48.0",
+ "subnet": "255.255.248.0",
+ "dhcpEnabled": false
+ },
+ "isDhcpEnabled": false
+ }
}
+ PS C:\>
```
+### Set a DHCP Network File System configuration
+
+1. Fetch the `DeviceVIP` configuration.
+ ```azurepowershell
+ Get-DeviceVip | to-json
+ ```
+
+1. Set the `DeviceVIP` property to enable DHCP.
+
+ ```azurepowershell
+ $nfsVip = New-Object PSObject -Property @{ Type = "NFS"; VipAddress = $null; ClusterNetworkAddress = "10.57.48.0"; IsDhcpEnabled = $true }
+ ```
+
+1. Update the device with the `DeviceVIP` property.
+
+ ```azurepowershell
+ Set-DeviceVip -vip $nfsVip
+ ```
+ 1. Fetch the updated `DeviceVIP` configuration. ```azurepowershell Get-DeviceVip | to-json ```
- Here's sample output showing the updated **acsVIP setting "isDhcpEnabled": true** on the device:
+ Here's sample output:
```output {
- "acsVIP": {
- "type": "ACS",
- "name": "Azure Consistent Services",
- "address": "192.168.181.10",
- "network": {
- "name": "Cluster Network 1",
- "address": "192.168.0.0",
- "subnet": "255.255.0.0",
- "dhcpEnabled": true
- },
- "isDhcpEnabled": true
- },
"nfsVIP": {
- "type": "NFS",
- "name": "Network File System",
- "address": "192.168.3.63",
- "network": {
- "name": "Cluster Network 1",
- "address": "192.168.0.0",
- "subnet": "255.255.0.0",
- "dhcpEnabled": false
- },
- "isDhcpEnabled": false
- },
- "clusterNetworks": [
- {
- "name": "Cluster Network 1",
- "address": "192.168.0.0",
- "subnet": "255.255.0.0",
- "dhcpEnabled": false
- },
- {
- "name": "Cluster Network 2",
- "address": "10.139.218.0",
- "subnet": "255.255.255.0",
- "dhcpEnabled": false
- },
- {
+ "type": "NFS",
+ "name": "Network File System",
+ "address": "10.57.53.228",
+ "network": {
"name": "Cluster Network 3",
- "address": "10.126.72.0",
+ "address": "10.57.48.0",
"subnet": "255.255.248.0",
- "dhcpEnabled": false
- }
- ]
+ "dhcpEnabled": true
+ },
+ "isDhcpEnabled": true
+ }
} PS C:\>
-
``` ++ ## Troubleshooting - [Run diagnostics or collect logs to troubleshoot Azure Stack Edge device issues](azure-stack-edge-gpu-troubleshoot.md).
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Microsoft Defender for Containers provides security alerts on the cluster level
| **Exposed Postgres service with risky configuration in Kubernetes detected (Preview)**<br>(K8S_ExposedPostgresBroadIPRange)|Kubernetes cluster configuration analysis detected exposure of a Postgres service by a load balancer with a risky configuration. Exposing the service to a wide range of IP addresses poses a security risk.|InitialAccess|Medium| | **Attempt to create a new Linux namespace from a container detected**<br>(K8S.NODE_NamespaceCreation) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container in Kubernetes cluster detected an attempt to create a new Linux namespace. While this behavior might be legitimate, it might indicate that an attacker tries to escape from the container to the node. Some CVE-2022-0185 exploitations use this technique. | PrivilegeEscalation | Medium | | **A history file has been cleared**<br>(K8S.NODE_HistoryFileCleared) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected that the command history log file has been cleared. Attackers may do this to cover their tracks. The operation was performed by the specified user account. | DefenseEvasion | Medium |
-| **Abnormal activity of managed identity associated with Kubernetes (Preview)**<br>(K8S_AbnormalMiAcitivty) | Analysis of Azure Resource Manager operations detected an abnormal behavior of a managed identity used by an AKS addon. The detected activity isn\'t consistent with the behavior of the associated addon. While this activity can be legitimate, such behavior might indicate that the identity was gained by an attacker, possibly from a compromised container in the Kubernetes cluster. | Lateral Movement | Medium |
+| **Abnormal activity of managed identity associated with Kubernetes (Preview)**<br>(K8S_AbnormalMiActivity) | Analysis of Azure Resource Manager operations detected an abnormal behavior of a managed identity used by an AKS addon. The detected activity isn\'t consistent with the behavior of the associated addon. While this activity can be legitimate, such behavior might indicate that the identity was gained by an attacker, possibly from a compromised container in the Kubernetes cluster. | Lateral Movement | Medium |
| **Abnormal Kubernetes service account operation detected**<br>(K8S_ServiceAccountRareOperation) | Kubernetes audit log analysis detected abnormal behavior by a service account in your Kubernetes cluster. The service account was used for an operation which isn't common for this service account. While this activity can be legitimate, such behavior might indicate that the service account is being used for malicious purposes. | Lateral Movement, Credential Access | Medium | | **An uncommon connection attempt detected**<br>(K8S.NODE_SuspectConnection) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected an uncommon connection attempt utilizing a socks protocol. This is very rare in normal operations, but a known technique for attackers attempting to bypass network-layer detections. | Execution, Exfiltration, Exploitation | Medium | | **Anomalous pod deployment (Preview)**<br>(K8S_AnomalousPodDeployment) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected pod deployment which is anomalous based on previous pod deployment activity. This activity is considered an anomaly when taking into account how the different features seen in the deployment operation are in relation to one another. The features monitored include the container image registry used, the account performing the deployment, day of the week, how often this account performs pod deployments, user agent used in the operation, whether this is a namespace to which pod deployments often occur, and other features. Top contributing reasons for raising this alert as anomalous activity are detailed under the alert's extended properties. | Execution | Medium |
defender-for-cloud Regulatory Compliance Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/regulatory-compliance-dashboard.md
For other policies, you can create an exemption directly in the policy itself, b
### What Microsoft Defender plans or licenses do I need to use the regulatory compliance dashboard?
-If you've got *any* of the Microsoft Defender plan enabled on *any* of your Azure resources, you can access Defender for Cloud's regulatory compliance dashboard and all of its data.
+If you've got *any* of the Microsoft Defender plans (except for Defender for Servers Plan 1) enabled on *any* of your Azure resources, you can access Defender for Cloud's regulatory compliance dashboard and all of its data.
## Next steps
The regulatory compliance dashboard can greatly simplify the compliance process,
To learn more, see these related pages: - [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md) - Learn how to select which standards appear in your regulatory compliance dashboard.-- [Managing security recommendations in Defender for Cloud](review-security-recommendations.md) - Learn how to use recommendations in Defender for Cloud to help protect your Azure resources.
+- [Managing security recommendations in Defender for Cloud](review-security-recommendations.md) - Learn how to use recommendations in Defender for Cloud to help protect your Azure resources.
dms Tutorial Sql Server Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online.md
# Tutorial: Migrate SQL Server to an Azure SQL Managed Instance online using DMS (classic) + > [!NOTE] > This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Managed Instance by using the [Azure SQL migration extension for Azure Data Studio](tutorial-sql-server-managed-instance-online-ads.md). >
dms Tutorial Sql Server To Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-azure-sql.md
# Tutorial: Migrate SQL Server to Azure SQL Database using DMS (classic) + > [!NOTE] > This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Database by using the [Azure SQL migration extension for Azure Data Studio](tutorial-sql-server-azure-sql-database-offline-ads.md). >
dms Tutorial Sql Server To Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-managed-instance.md
# Tutorial: Migrate SQL Server to an Azure SQL Managed Instance offline using DMS (classic) + > [!NOTE] > This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Managed Instance by using the [Azure SQL migration extension for Azure Data Studio](tutorial-sql-server-managed-instance-offline-ads.md). >
energy-data-services Tutorial Petrel Ddms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-petrel-ddms.md
+
+ Title: Tutorial - Work with Petrel data records by using Petrel DDMS APIs in Azure Data Manager for Energy Preview
+description: Learn how to work with Petrel data records in your Azure Data Manager for Energy Preview instance by using Petrel Domain Data Management Services (Petrel DDMS) APIs in Postman.
++++ Last updated : 2/07/2023+++
+# Tutorial: Work with Petrel data records by using Petrel DDMS APIs
+
+Use Petrel Domain Data Management Services (Petrel DDMS) APIs in Postman to work with Petrel data in your instance of Azure Data Manager for Energy Preview.
+
+In this tutorial, you'll learn how to:
+> [!div class="checklist"]
+>
+> - Set up Postman to use a Petrel DDMS collection.
+> - Set up Postman to use a Petrel DDMS environment.
+> - Send requests via Postman.
+> - Generate an authorization token.
+> - Use Petrel DDMS APIs to work with Petrel data records/projects.
++
+For more information about DDMS, see [DDMS concepts](concepts-ddms.md).
+
+## Prerequisites
+
+- An Azure subscription
+- An instance of [Azure Data Manager for Energy Preview](quickstart-create-microsoft-energy-data-services-instance.md) created in your Azure subscription.
+
+## Get your Azure Data Manager for Energy Services instance details
+
+The first step is to get the following information from your [Azure Data Manager for Energy Preview instance](quickstart-create-microsoft-energy-data-services-instance.md) in the [Azure portal](https://portal.azure.com/?microsoft_azure_marketplace_ItemHideKey=Microsoft_Azure_OpenEnergyPlatformHidden):
+
+| Parameter | Value | Example |
+| | |-- |
+| CLIENT_ID | Application (client) ID | 3dfxxxxxxxxxxxxxxxxxxxxxx |
+| CLIENT_SECRET | Client secrets | _fl****************** |
+| TENANT_ID | Directory (tenant) ID | 72fxxxxxxxxxxxx |
+| SCOPE | Application (client) ID | 3dfxxxxxxxxxxxxxxxxxxxxxxx |
+| HOSTNAME | URI | `<instance>.energy.azure.com` |
+| DATA_PARTITION_ID | Data Partition(s) | `<instance>-<data-partition-name>` |
+
+You'll use this information later in the tutorial.
+
+## Set up Postman
+
+Next, set up Postman:
+
+1. Download and install the [Postman](https://www.postman.com/downloads/) desktop app.
+
+1. Import the following file into Postman:
+
+ - [Petrel DDMS Postman collection](https://raw.githubusercontent.com/microsoft/meds-samples/main/postman/PetrelDSv2.postman_collection.json)
++
+1. Create a Postman environment using the values you obtained above. The environment should look something like this:
+
+ :::image type="content" source="media/tutorial-petrel-ddms/pdsv2-env-postman.png" alt-text="Screenshot that shows pdsv2 env." lightbox="media/tutorial-petrel-ddms/pdsv2-env-postman.png":::
++
+## Generate a token to use in APIs
+
+The Postman collection for Petrel DDMS contains requests you can use to interact with your Petrel Projects. It also contains a request to query current Petrel projects and records in your Azure Data Manager for Energy instance.
+
+1. In Postman, in the left menu, select **Collections**, and then select **Petrel DDMS**. Under **Setup**, select **Get Token**.
+
+1. In the environment dropdown in the upper-right corner, select the **Petrel DDMS Environment** that you created.
+
+1. To send the request, select **Send**.
+
+This request will generate an access token and assign it as the authorization method for future requests.
+
+You can also generate a bearer token by running the following cURL command in Postman or a terminal. Use the values from your Azure Data Manager for Energy instance.
+
+```bash
+ curl --location --request POST 'https://login.microsoftonline.com/{{TENANT_ID}}/oauth2/v2.0/token' \
+ --header 'Content-Type: application/x-www-form-urlencoded' \
+ --data-urlencode 'grant_type=client_credentials' \
+ --data-urlencode 'client_id={{CLIENT_ID}}' \
+ --data-urlencode 'client_secret={{CLIENT_SECRET}}' \
+ --data-urlencode 'scope={{SCOPE}}'
+```
+
+To use this cURL-generated token, update the `access_token` variable in your **Collection** variables with the `access_token` value from the response.
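If you work in a terminal rather than Postman, the following is a minimal sketch of capturing the token from the same cURL call; it assumes the `jq` tool is installed and that you substitute the same placeholder values.

```bash
# Request a token and extract the access_token field from the JSON response (assumes jq is installed).
ACCESS_TOKEN=$(curl --silent --request POST 'https://login.microsoftonline.com/{{TENANT_ID}}/oauth2/v2.0/token' \
  --header 'Content-Type: application/x-www-form-urlencoded' \
  --data-urlencode 'grant_type=client_credentials' \
  --data-urlencode 'client_id={{CLIENT_ID}}' \
  --data-urlencode 'client_secret={{CLIENT_SECRET}}' \
  --data-urlencode 'scope={{SCOPE}}' | jq -r '.access_token')

# The token is then sent on each API request in an Authorization header.
echo "Authorization: Bearer $ACCESS_TOKEN"
```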
+
+## Use Petrel DDMS APIs to work with Petrel Projects
+
+Successfully completing the Postman requests described in the following Petrel DDMS APIs allows you to interact with your saved Petrel projects. While the API does provide a means of uploading data, we recommend that you upload your projects via the DELFI Petrel Explorer. All of the following API calls assume you have a project uploaded to Petrel Explorer.
+
+### Create a legal tag
+
+Create a legal tag that's automatically added to your Petrel DDMS environment for data compliance.
+
+API: **Setup** > **Create Legal Tag for PDS**
+
+Method: POST
++
+For more information, see [Manage legal tags](how-to-manage-legal-tags.md).
+
+### Add User to Entitlement Groups
+
+To ensure that the user has the proper permissions to make the Petrel DDMS API calls, the user must be a part of the `users.datalake.admins@{data-partition-id}.dataservices.energy` entitlements group. This call adds the user to the proper groups.
+
+The **USER** in this case is the Client ID/OID contained in the token used for authentication. For example, if you generate a token using a client ID of `8cdxxxxxxxxxxxx`, you must add `8cdxxxxxxxxxxxx` to the **users.datalake.admins** group.
+
+API: **Setup** > **Add User to DATALAKE Admins**
+
+Method: POST
++
+### Get Project
+
+Given a Project ID, returns the corresponding Petrel Project record in your Azure Data Manager for Energy instance.
+
+API: **Project** > **Get Project**.
+
+Method: GET
++
+### Delete Project
+
+Given a Project ID, deletes the project and the associated Petrel Project record data in your Azure Data Manager for Energy instance.
+
+API: **Project** > **Delete Project**
+
+Method: DELETE
++
+### Get Project Version
+
+Given a `Project ID` and a `Version ID`, gets the Petrel Version record associated with that project/version ID in your Microsoft Energy Data Services instance.
+
+API: **Project** > **Project Version**
+
+Method: GET
++
+### Get a Project Download URL
+
+Given a Project ID, returns a SAS URL to download the data of the corresponding project from your Azure Data Manager for Energy instance.
+
+API: **Project** > **Download URL**
+
+Method: GET
++
+### Get a Project Upload URL
+
+Given a Project ID, returns two SAS URLs: one to upload data to, and one to download data from, the corresponding project in your Azure Data Manager for Energy instance.
+
+API: **Project** > **Upload URL**
+
+Method: POST
++
+#### Using the Upload URL
+Once you have your SAS upload URL, it should look something like this:
+
+`https://{storage-account}.z15.blob.storage.azure.net/projects/{data-partition-id}/{projectID}.staging/{versionID}?{SAS-token-info}`
+
+Next you can modify this URL to contain the filename of data you want to upload:
+
+`https://{storage-account}.z15.blob.storage.azure.net/projects/{data-partition-id}/{projectID}.staging/{versionID}/{FILENAME}?{SAS-token-info}`
+
+Making a PUT call to this URL uploads the contents of the request `body` to blob storage under the **FILENAME** provided.
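As a hedged illustration of that PUT call, the cURL sketch below uploads a local file to the modified upload URL. The URL and file name are placeholders; the `x-ms-blob-type` header is required by Azure Blob Storage when creating a block blob.

```bash
# Placeholder: the SAS upload URL returned by the Upload URL API, with your file name
# appended before the SAS query string.
UPLOAD_URL='https://{storage-account}.z15.blob.storage.azure.net/projects/{data-partition-id}/{projectID}.staging/{versionID}/mydata.dat?{SAS-token-info}'

# Upload the local file mydata.dat as a block blob.
curl --request PUT "$UPLOAD_URL" \
  --header 'x-ms-blob-type: BlockBlob' \
  --data-binary '@./mydata.dat'
```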
+
+### Update Project
+
+Given a Project ID, SAS upload URL, and a Petrel Project record, updates the Petrel Project record in your Microsoft Energy Data Services instance with the new values provided. You can also use this call to upload data to a given project, but that isn't required.
+
+API: **Project** > **Update Project**
+
+Method: PUT
++
+### Search Projects
+
+Allows the user to search Petrel Projects across many fields. The call returns all matching Project IDs. The API supports full-text search on string fields, range queries on date, numeric, or string fields, along with geo-spatial search.
+
+API: **Project** > **Search Projects**
+
+Method: POST
++
+## Next Steps
+> [!div class="nextstepaction"]
+> Visit the following DELFI links for further tutorials involving Petrel Explorer and Petrel Workflows:\
+> [Get started with Project Explorer (slb.com)](https://guru.delfi.slb.com/content/1015/help/1/en-US/299EC605-5CC6-4CD9-9B07-482B16426769)\
+> [Open a project saved in Project Explorer (slb.com)](https://guru.delfi.slb.com/content/1015/help/1/en-US/20AACC1C-E501-4436-9FC9-03427C21B12E)\
+> [Upload ZGY data from Petrel to the DELFI Data Ecosystem (slb.com)](https://guru.delfi.slb.com/content/1015/help/1/en-US/9F76FDB2-7817-491A-995F-A18D558A191C)\
+> [Import ZGY data from the DELFI Data Ecosystem to Petrel (slb.com)](https://guru.delfi.slb.com/content/1015/help/1/en-US/B0B8DA1A-74B6-4109-B80D-25FF4A75C57D)\
+> [Save a Petrel project in Project Explorer (slb.com)](https://guru.delfi.slb.com/content/1015/help/1/en-US/C86C74BE-6FF9-4962-AEBF-887897B95D4B)
event-hubs Event Hubs Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-samples.md
Title: Samples - Azure Event Hubs | Microsoft Docs description: This article provides a list of samples for Azure Event Hubs that are on GitHub. Previously updated : 07/05/2022 Last updated : 03/21/2023
You can find Event Hubs samples on [GitHub](https://github.com/Azure/azure-event
| Version | Samples location | | - | - | | Azure.Messaging.EventHubs version 5 (latest) | [Event Hubs samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs/samples)<br/>[Event Hubs Processor samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs.Processor/samples) |
+| Azure.ResourceManager.EventHubs | [Management samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/eventhub/Azure.ResourceManager.EventHubs/samples) |
| Microsoft.Azure.EventHubs version 4 (legacy) | [GitHub location](https://github.com/Azure/azure-event-hubs/tree/master/samples/DotNet/) | | Samples in the Azure Samples repository | [GitHub location](https://github.com/orgs/Azure-Samples/repositories?q=event-hubs&type=all&language=c%23) |
event-hubs Schema Registry Dotnet Send Receive Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/schema-registry-dotnet-send-receive-quickstart.md
Title: Validate schema when sending or receiving events
-description: This article provides a walkthrough to create a .NET Core application that sends/receives events to/from Azure Event Hubs with schema validation using Schema Registry.
+ Title: 'Quickstart: Validate schema when sending or receiving events'
+description: In this quickstart, you create a .NET Core application that sends/receives events to/from Azure Event Hubs with schema validation using Schema Registry.
Previously updated : 01/12/2022 Last updated : 03/20/2023 ms.devlang: csharp # Quickstart: Validate schema when sending and receiving events - AMQP and .NET
+In this quickstart, you learn how to send events to and receive events from an event hub with schema validation using the **Azure.Messaging.EventHubs** .NET library.
-**Azure Schema Registry** is a feature of Event Hubs, which provides a central repository for schemas for event-driven and messaging-centric applications. It provides the flexibility for your producer and consumer applications to **exchange data without having to manage and share the schema**. It also provides a simple governance framework for reusable schemas and defines relationship between schemas through a grouping construct (schema groups). For more information, see [Azure Schema Registry in Event Hubs](schema-registry-overview.md).
+> [!NOTE]
+> **Azure Schema Registry** is a feature of Event Hubs, which provides a central repository for schemas for event-driven and messaging-centric applications. It provides the flexibility for your producer and consumer applications to **exchange data without having to manage and share the schema**. It also provides a simple governance framework for reusable schemas and defines relationship between schemas through a grouping construct (schema groups). For more information, see [Azure Schema Registry in Event Hubs](schema-registry-overview.md).
-This quickstart shows how to send events to and receive events from an event hub with schema validation using the **Azure.Messaging.EventHubs** .NET library.
## Prerequisites If you're new to Azure Event Hubs, see [Event Hubs overview](event-hubs-about.md) before you do this quickstart. To complete this quickstart, you need the following prerequisites:-- Follow instructions from the quickstart: [Create an Event Hubs namespace and an event hub](event-hubs-create.md).-- Follow instructions from [Get the connection string](event-hubs-get-connection-string.md) to get a connection string to your Event Hubs namespace. Note down the following settings that you'll use in the current quickstart:
- - Connection string for the Event Hubs namespace
- - Name of the event hub
-- **Complete the [.NET quickstart](event-hubs-dotnet-standard-getstarted-send.md)** to become familiar with sending events to and receiving events from event hubs using .NET. If you have already done the .NET quickstart before, you can skip this step. -- **Follow instructions from [Create schemas using Schema Registry](create-schema-registry.md)** to create a schema group and a schema. When creating a schema, follow instructions from the [Create a schema](#create-a-schema) in the current quickstart article. -- **Microsoft Visual Studio 2019**. The Azure Event Hubs client library makes use of new features that were introduced in C# 8.0. You can still use the library with previous C# language versions, but the new syntax won't be available. To make use of the full syntax, we recommended that you compile with the [.NET Core SDK](https://dotnet.microsoft.com/download) 3.0 or higher and [language version](/dotnet/csharp/language-reference/configure-language-version#override-a-default) set to `latest`. If you're using Visual Studio, versions before Visual Studio 2019 aren't compatible with the tools needed to build C# 8.0 projects. Visual Studio 2019, including the free Community edition, can be downloaded [here](https://visualstudio.microsoft.com/vs/).
+- If you don't have an **Azure subscription**, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- **Microsoft Visual Studio 2022**.
+  The Azure Event Hubs client library makes use of new features that were introduced in C# 8.0. You can still use the library with previous C# language versions, but the new syntax isn't available. To make use of the full syntax, we recommend that you compile with the [.NET Core SDK](https://dotnet.microsoft.com/download) 3.0 or higher and [language version](/dotnet/csharp/language-reference/configure-language-version#override-a-default) set to `latest`. If you're using Visual Studio, versions before Visual Studio 2019 aren't compatible with the tools needed to build C# 8.0 projects. Visual Studio 2019, including the free Community edition, can be downloaded [here](https://visualstudio.microsoft.com/vs/).
+## Create an event hub
+Follow instructions from the quickstart [Create an Event Hubs namespace and an event hub](event-hubs-create.md) to create an Event Hubs namespace and an event hub. Then, follow instructions from [Get the connection string](event-hubs-get-connection-string.md) to get a connection string to your Event Hubs namespace. If you prefer the command line, an illustrative Azure CLI sketch follows the list below.
+
+Note down the following settings that you'll use in the current quickstart:
+- Connection string for the Event Hubs namespace
+- Name of the event hub
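If you prefer scripting this step, the following Azure CLI sketch creates a namespace and an event hub and then prints the namespace-level connection string. The resource group, namespace, and event hub names are placeholders, not values defined by this quickstart.

```bash
# Placeholder names; replace with your own values.
RG=my-resource-group
NS=my-eventhubs-namespace
EH=my-event-hub

# Create the Event Hubs namespace and the event hub.
az eventhubs namespace create --resource-group $RG --name $NS --location eastus
az eventhubs eventhub create --resource-group $RG --namespace-name $NS --name $EH

# Print the namespace connection string (RootManageSharedAccessKey is created by default).
az eventhubs namespace authorization-rule keys list \
  --resource-group $RG --namespace-name $NS \
  --name RootManageSharedAccessKey \
  --query primaryConnectionString --output tsv
```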
## Create a schema
+Follow instructions from [Create schemas using Schema Registry](create-schema-registry.md) to create a schema group and a schema.
+ 1. Create a schema group named **contoso-sg** using the Schema Registry portal. Use Avro as the serialization type and **None** for the compatibility mode. 1. In that schema group, create a new Avro schema with schema name: ``Microsoft.Azure.Data.SchemaRegistry.example.Order`` using the following schema content.
To complete this quickstart, you need the following prerequisites:
} ```
+## Add user to Schema Registry Reader role
+Add your user account to the **Schema Registry Reader** role at the namespace level. You can also use the **Schema Registry Contributor** role, but that's not necessary for this quickstart.
+
+1. On the **Event Hubs Namespace** page, select **Access control (IAM)** on the left menu.
+2. On the **Access control (IAM)** page, select **+ Add** -> **Add role assignment** on the menu.
+3. On the **Assignment type** page, select **Next**.
+4. On the **Roles** page, select **Schema Registry Reader (Preview)**, and then select **Next** at the bottom of the page.
+5. Use the **+ Select members** link to add your user account to the role, and then select **Next**.
+6. On the **Review + assign** page, select **Review + assign**.
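As an alternative to the portal steps above, here's a hedged Azure CLI sketch of the same role assignment. The resource group and namespace names are placeholders, and the role name shown in your portal may carry a "(Preview)" suffix.

```bash
# Object ID of the signed-in user (the account you'll run the sample apps with).
USER_OBJECT_ID=$(az ad signed-in-user show --query id --output tsv)

# Resource ID of the Event Hubs namespace; replace the placeholder names.
NAMESPACE_ID=$(az eventhubs namespace show \
  --resource-group my-resource-group --name my-eventhubs-namespace \
  --query id --output tsv)

# Assign the Schema Registry Reader role at the namespace scope.
az role assignment create \
  --assignee "$USER_OBJECT_ID" \
  --role "Schema Registry Reader" \
  --scope "$NAMESPACE_ID"
```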
+ ## Produce events to event hubs with schema validation
To complete this quickstart, you need the following prerequisites:
### Add the Event Hubs NuGet package 1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu.
-1. Run the following command to install the **Azure.Messaging.EventHubs** NuGet package:
+1. Run the following commands to install **Azure.Messaging.EventHubs** and other NuGet packages. Press **ENTER** to run the last command.
```cmd Install-Package Azure.Messaging.EventHubs Install-Package Azure.Identity
- Install-Package Microsoft.Azure.Data.SchemaRegistry.ApacheAvro -Version 1.0.0-beta.2
- Install-Package Azure.ResourceManager.Compute -Version 1.0.0-beta.1
-
+ Install-Package Microsoft.Azure.Data.SchemaRegistry.ApacheAvro
+ Install-Package Azure.ResourceManager.Compute
``` 1. Authenticate producer applications to connect to Azure via Visual Studio as shown [here](/dotnet/api/overview/azure/identity-readme#authenticating-via-visual-studio).
+1. Sign in to Azure using the user account that's a member of the `Schema Registry Reader` role at the namespace level. For information about schema registry roles, see [Azure Schema Registry in Event Hubs](schema-registry-overview.md#azure-role-based-access-control).
### Code generation using the Avro schema
-1. You can use the same schema content and create the Avro schema file ``Order.avsc`` file inside the OrderProducer project.
-1. Then you can use this schema file to generate code for .NET. You can use any external code generation tool such as [avrogen](https://www.nuget.org/packages/Apache.Avro.Tools/) for code generation. (For example you can run `` avrogen -s .\Order.avsc `` to generate code).
-1. Once you generate code, you should have the corresponding C# types available inside your project. For the above Avro schema, it generates the C# types in ``Microsoft.Azure.Data.SchemaRegistry.example`` namespace.
+1. Use the same content you used to create the schema to create a file named ``Order.avsc``. Save the file in the project or solution folder.
+1. Then use this schema file to generate code for .NET. You can use any external code generation tool, such as [avrogen](https://www.nuget.org/packages/Apache.Avro.Tools/). For example, you can run `` avrogen -s .\Order.avsc . `` to generate code; a sample command sequence is shown after this list.
+1. Once you generate code, you'll see the file named `Order.cs` in the `\Microsoft\Azure\Data\SchemaRegistry\example` folder. For the above Avro schema, it generates the C# types in the ``Microsoft.Azure.Data.SchemaRegistry.example`` namespace.
+1. Add the `Order.cs` file to the `OrderProducer` project.
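Here's a sketch of that code generation step; it assumes the `avrogen` tool from the Apache.Avro.Tools package is installed as a .NET global tool and is run from the folder that contains `Order.avsc`.

```bash
# Install the Avro code generation tool (assumed to be the package behind the avrogen link above).
dotnet tool install --global Apache.Avro.Tools

# Generate C# types from the schema into the current directory.
avrogen -s ./Order.avsc .
```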
### Write code to serialize and send events to the event hub
-1. Add the following `using` statements to the top of the **Program.cs** file:
+1. Add the following code to the `Program.cs` file. See the code comments for details. High-level steps in the code are:
+ 1. Create a producer client that you can use to send events to an event hub.
+ 1. Create a schema registry client that you can use to serialize and validate data in an `Order` object.
+ 1. Create a new `Order` object using the generated `Order` type.
+ 1. Use the schema registry client to serialize the `Order` object to `EventData`.
+ 1. Create a batch of events.
+ 1. Add the event data to the event batch.
+ 1. Use the producer client to send the batch of events to the event hub.
```csharp
- using System;
- using System.IO;
- using System.Threading;
using Azure.Data.SchemaRegistry; using Azure.Identity; using Microsoft.Azure.Data.SchemaRegistry.ApacheAvro; using Azure.Messaging.EventHubs; using Azure.Messaging.EventHubs.Producer;
- using System.Threading.Tasks;
- ```
-1. Also you can import the generated types related to ``Order`` schema as shown below.
- ```csharp
- using Microsoft.Azure.Data.SchemaRegistry.example;
- ```
-
-2. Add constants to the `Program` class for the Event Hubs connection string and the event hub name.
-
- ```csharp
- // connection string to the Event Hubs namespace
- private const string connectionString = "<EVENT HUBS NAMESPACE - CONNECTION STRING>";
-
- // name of the event hub
- private const string eventHubName = "<EVENT HUB NAME>";
-
- // Schema Registry endpoint
- private const string schemaRegistryEndpoint = "<EVENT HUBS NAMESPACE>.servicebus.windows.net>";
-
- // name of the consumer group
- private const string schemaGroup = "<SCHEMA GROUP>";
-
- ```
-
- > [!NOTE]
- > Replace placeholder values with the connection string to your namespace, the name of the event hub, and schema group.
-3. Add the following static property to the `Program` class. See the code comments.
-
- ```csharp
- // The Event Hubs client types are safe to cache and use as a singleton for the lifetime
- // of the application, which is best practice when events are being published or read regularly.
- static EventHubProducerClient producerClient;
- ```
-1. Replace the `Main` method with the following `async Main` method. See the code comments for details.
-
- ```csharp
- static async Task Main()
- {
- // Create a producer client that you can use to send events to an event hub
- producerClient = new EventHubProducerClient(connectionString, eventHubName);
- // Create a schema registry client that you can use to serialize and validate data.
- var schemaRegistryClient = new SchemaRegistryClient(endpoint: schemaRegistryEndpoint, credential: new DefaultAzureCredential());
+ using Microsoft.Azure.Data.SchemaRegistry.example;
- // Create a batch of events
- using EventDataBatch eventBatch = await producerClient.CreateBatchAsync();
-
- // Create a new order object using the generated type/class 'Order'.
- var sampleOrder = new Order { id = "12345", amount = 55.99, description = "This is a sample order." };
+ // connection string to the Event Hubs namespace
+ const string connectionString = "EVENTHUBSNAMESPACECONNECTIONSTRING";
- using var memoryStream = new MemoryStream();
- // Create an Avro object serializer using the Schema Registry client object.
- var producerSerializer = new SchemaRegistryAvroObjectSerializer(schemaRegistryClient, schemaGroup, new SchemaRegistryAvroObjectSerializerOptions { AutoRegisterSchemas = true });
+ // name of the event hub
+ const string eventHubName = "EVENTHUBNAME";
- // Serialize events data to the memory stream object.
- producerSerializer.Serialize(memoryStream, sampleOrder, typeof(Order), CancellationToken.None);
+ // Schema Registry endpoint
+ const string schemaRegistryEndpoint = "EVENTHUBSNAMESPACENAME.servicebus.windows.net";
- byte[] _memoryStreamBytes;
- _memoryStreamBytes = memoryStream.ToArray();
+ // name of the consumer group
+ const string schemaGroup = "SCHEMAGROUPNAME";
- // Create event data with serialized data and add it to an event batch.
- eventBatch.TryAdd(new EventData(_memoryStreamBytes));
+ // The Event Hubs client types are safe to cache and use as a singleton for the lifetime
+ // of the application, which is best practice when events are being published or read regularly.
+ EventHubProducerClient producerClient;
- // Send serilized event data to event hub.
- await producerClient.SendAsync(eventBatch);
- Console.WriteLine("A batch of 1 order has been published.");
- }
- ```
-5. Build the project, and ensure that there are no errors.
-6. Run the program and wait for the confirmation message.
+ // Create a producer client that you can use to send events to an event hub
+ producerClient = new EventHubProducerClient(connectionString, eventHubName);
+
+ // Create a schema registry client that you can use to serialize and validate data.
+ var schemaRegistryClient = new SchemaRegistryClient(schemaRegistryEndpoint, new DefaultAzureCredential());
+
+ // Create an Avro object serializer using the Schema Registry client object.
+ var serializer = new SchemaRegistryAvroSerializer(schemaRegistryClient, schemaGroup, new SchemaRegistryAvroSerializerOptions { AutoRegisterSchemas = true });
+
+ // Create a new order object using the generated type/class 'Order'.
+ var sampleOrder = new Order { id = "1234", amount = 45.29, description = "First sample order." };
+ EventData eventData = (EventData)await serializer.SerializeAsync(sampleOrder, messageType: typeof(EventData));
+
+ // Create a batch of events
+ using EventDataBatch eventBatch = await producerClient.CreateBatchAsync();
+
+ // Add the event data to the event batch.
+ eventBatch.TryAdd(eventData);
+
+ // Send the batch of events to the event hub.
+ await producerClient.SendAsync(eventBatch);
+ Console.WriteLine("A batch of 1 order has been published.");
+ ```
+1. Replace the following placeholder values with the real values.
+ - `EVENTHUBSNAMESPACECONNECTIONSTRING` - connection string for the Event Hubs namespace
+ - `EVENTHUBNAME` - name of the event hub
+ - `EVENTHUBSNAMESPACENAME` - name of the Event Hubs namespace
+ - `SCHEMAGROUPNAME` - name of the schema group
+
+ ```csharp
+ // connection string to the Event Hubs namespace
+ const string connectionString = "EVENTHUBSNAMESPACECONNECTIONSTRING";
+
+ // name of the event hub
+ const string eventHubName = "EVENTHUBNAME";
+
+ // Schema Registry endpoint
+ const string schemaRegistryEndpoint = "EVENTHUBSNAMESPACENAME.servicebus.windows.net";
+
+ // name of the consumer group
+ const string schemaGroup = "SCHEMAGROUPNAME";
+ ```
+1. Build the project, and ensure that there are no errors.
+1. Run the program and wait for the confirmation message.
```csharp A batch of 1 order has been published.
To complete this quickstart, you need the following prerequisites:
:::image type="content" source="./media/getstarted-dotnet-standard-send-v2/verify-messages-portal.png" alt-text="Image of the Azure portal page to verify that the event hub received the events." lightbox="./media/getstarted-dotnet-standard-send-v2/verify-messages-portal.png"::: -- ## Consume events from event hubs with schema validation
-This section shows how to write a .NET Core console application that receives events from an event hub and use schema registry to de-serialize event data.
+This section shows how to write a .NET Core console application that receives events from an event hub and use schema registry to deserialize event data.
+### Additional prerequisites
+- Create a storage account and a blob container to be used by the event processor as a checkpoint store; an illustrative CLI sketch follows.
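A hedged Azure CLI sketch for this prerequisite follows; the account and container names are placeholders, and the container is the one the consumer app references as its checkpoint store.

```bash
# Placeholder names; storage account names must be globally unique and lowercase.
RG=my-resource-group
STORAGE=mycheckpointstore123
CONTAINER=checkpoints

# Create the storage account.
az storage account create --resource-group $RG --name $STORAGE --location eastus --sku Standard_LRS

# Get the connection string (also used by the consumer app as AZURESTORAGECONNECTIONSTRING).
CONN=$(az storage account show-connection-string --resource-group $RG --name $STORAGE --output tsv)

# Create the blob container used as the checkpoint store.
az storage container create --name $CONTAINER --connection-string "$CONN"

echo "$CONN"
```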
### Create consumer application
This section shows how to write a .NET Core console application that receives ev
1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu. 1. In the **Package Manager Console** window, confirm that **OrderConsumer** is selected for the **Default project**. If not, use the drop-down list to select **OrderConsumer**.--
-1. Run the following command to install the required NuGet package:
-
+1. Run the following command to install the required NuGet packages. Press **ENTER** to run the last command.
+
```cmd Install-Package Azure.Messaging.EventHubs Install-Package Azure.Messaging.EventHubs.Processor Install-Package Azure.Identity
- Install-Package Microsoft.Azure.Data.SchemaRegistry.ApacheAvro -Version 1.0.0-beta.2
- Install-Package Azure.ResourceManager.Compute -Version 1.0.0-beta.1
-
+ Install-Package Microsoft.Azure.Data.SchemaRegistry.ApacheAvro
+ Install-Package Azure.ResourceManager.Compute
``` 1. Authenticate producer applications to connect to Azure via Visual Studio as shown [here](/dotnet/api/overview/azure/identity-readme#authenticating-via-visual-studio). ---
-### Code generation using the Avro schema
-1. You can use the same schema content and create the Avro schema file ``Order.avsc`` file inside the ``OrderProducer`` project.
-1. Then you can use this schema file to generate code for .NET. For this, you can use any external code generation tool such as [avrogen](https://www.nuget.org/packages/Apache.Avro.Tools/). (For example you can run `` avrogen -s .\Order.avsc `` to generate code).
-1. Once you generate code, you should have the corresponding C# types available inside your project. For the above Avro schema, it generates the C# types in ``Microsoft.Azure.Data.SchemaRegistry.example`` namespace.
+1. Sign in to Azure using the user account that's a member of the `Schema Registry Reader` role at the namespace level. For information about schema registry roles, see [Azure Schema Registry in Event Hubs](schema-registry-overview.md#azure-role-based-access-control).
+1. Add the `Order.cs` file you generated as part of creating the producer app to the **OrderConsumer** project.
+1. Right-click **OrderConsumer** project, and select **Set as Startup project**.
### Write code to receive events and deserialize them using Schema Registry
-1. Add the following `using` statements to the top of the **Program.cs** file:
+1. Add the following code to the `Program.cs` file. See the code comments for details. High-level steps in the code are:
+    1. Create a consumer client that you can use to receive events from an event hub.
+ 1. Create a blob container client for the blob container in the Azure blob storage.
+ 1. Create an event processor client and register event and error handlers.
+ 1. In the event handler, create a schema registry client that you can use to deserialize event data into an `Order` object.
+ 1. Deserialize the event data into an `Order` object using the serializer.
+ 1. Print the information about the received order.
```csharp
- using System;
- using System.IO;
- using System.Text;
- using System.Threading.Tasks;
- using System.Threading;
using Azure.Data.SchemaRegistry; using Azure.Identity; using Microsoft.Azure.Data.SchemaRegistry.ApacheAvro;
This section shows how to write a .NET Core console application that receives ev
using Azure.Messaging.EventHubs; using Azure.Messaging.EventHubs.Consumer; using Azure.Messaging.EventHubs.Processor;
- ```
-1. Also you can import the generated types related to ``Order`` schema as shown below.
- ```csharp
- using Microsoft.Azure.Data.SchemaRegistry.example;
- ```
-
-2. Add constants to the `Program` class for the Event Hubs connection string and the event hub name.
-
- ```csharp
- // connection string to the Event Hubs namespace
- private const string connectionString = "<EVENT HUBS NAMESPACE - CONNECTION STRING>";
-
- // name of the event hub
- private const string eventHubName = "<EVENT HUB NAME>";
-
- private const string blobStorageConnectionString = "<AZURE STORAGE CONNECTION STRING>";
-
- private const string blobContainerName = "<BLOB CONTAINER NAME>";
-
- // Schema Registry endpoint
- private const string schemaRegistryEndpoint = "<EVENT HUBS NAMESPACE>.servicebus.windows.net>";
-
- // name of the consumer group
- private const string schemaGroup = "<SCHEMA GROUP>";
-
- ```
-
-3. Add the following static properties to the `Program` class.
-
- ```csharp
- static BlobContainerClient storageClient;
-
- // The Event Hubs client types are safe to cache and use as a singleton for the lifetime
- // of the application, which is best practice when events are being published or read regularly.
- static EventProcessorClient processor;
- ```
-1. Replace the `Main` method with the following `async Main` method. See the code comments for details.
-
- ```csharp
- static async Task Main()
- {
- // Read from the default consumer group: $Default
- string consumerGroup = EventHubConsumerClient.DefaultConsumerGroupName;
-
- // Create a blob container client that the event processor will use
- storageClient = new BlobContainerClient(blobStorageConnectionString, blobContainerName);
-
- // Create an event processor client to process events in the event hub
- processor = new EventProcessorClient(storageClient, consumerGroup, ehubNamespaceConnectionString, eventHubName);
-
- // Register handlers for processing events and handling errors
- processor.ProcessEventAsync += ProcessEventHandler;
- processor.ProcessErrorAsync += ProcessErrorHandler;
-
- // Start the processing
- await processor.StartProcessingAsync();
-
- // Wait for 30 seconds for the events to be processed
- await Task.Delay(TimeSpan.FromSeconds(30));
-
- // Stop the processing
- await processor.StopProcessingAsync();
- }
- ```
-1. Now, add the following event handler method that includes event de-serialization logic with the schema registry
- ```csharp
- static async Task ProcessEventHandler(ProcessEventArgs eventArgs)
- {
- // Create a schema registry client that you can use to deserialize and validate data.
- var schemaRegistryClient = new SchemaRegistryClient(endpoint: schemaRegistryEndpoint, credential: new DefaultAzureCredential());
- // Retrieve event data and convert it to a byte array.
- byte[] _memoryStreamBytes = eventArgs.Data.Body.ToArray();
- using var consumerMemoryStream = new MemoryStream(_memoryStreamBytes);
+ using Microsoft.Azure.Data.SchemaRegistry.example;
+
+
+ // connection string to the Event Hubs namespace
+ const string connectionString = "EVENTHUBSNAMESPACECONNECTIONSTRING";
+
+ // name of the event hub
+ const string eventHubName = "EVENTHUBNAME";
+
+ // Schema Registry endpoint
+ const string schemaRegistryEndpoint = "EVENTHUBSNAMESPACENAME.servicebus.windows.net";
+
+ // name of the consumer group
+ const string schemaGroup = "SCHEMAGROUPNAME";
+
+ // connection string for the Azure Storage account
+ const string blobStorageConnectionString = "AZURESTORAGECONNECTIONSTRING";
+
+    // name of the blob container that will be used as a checkpoint store
+ const string blobContainerName = "BLOBCONTAINERNAME";
+
+ // Create a blob container client that the event processor will use
+ BlobContainerClient storageClient = new BlobContainerClient(blobStorageConnectionString, blobContainerName);
- var consumerSerializer = new SchemaRegistryAvroObjectSerializer(schemaRegistryClient, schemaGroup, new SchemaRegistryAvroObjectSerializerOptions { AutoRegisterSchemas = false });
- consumerMemoryStream.Position = 0;
+ // Create an event processor client to process events in the event hub
+ EventProcessorClient processor = new EventProcessorClient(storageClient, EventHubConsumerClient.DefaultConsumerGroupName, connectionString, eventHubName);
- // Deserialize event data and create order object using schema registry.
- Order sampleOrder = (Order)consumerSerializer.Deserialize(consumerMemoryStream, typeof(Order), CancellationToken.None);
- Console.WriteLine("Received - Order ID: " + sampleOrder.id);
+ // Register handlers for processing events and handling errors
+ processor.ProcessEventAsync += ProcessEventHandler;
+ processor.ProcessErrorAsync += ProcessErrorHandler;
- // Update checkpoint in the blob storage so that the app receives only new events the next time it's run
- await eventArgs.UpdateCheckpointAsync(eventArgs.CancellationToken);
+ // Start the processing
+ await processor.StartProcessingAsync();
+
+ // Wait for 30 seconds for the events to be processed
+ await Task.Delay(TimeSpan.FromSeconds(30));
+
+ // Stop the processing
+ await processor.StopProcessingAsync();
+
+ static async Task ProcessEventHandler(ProcessEventArgs eventArgs)
+ {
+    // Create a schema registry client that you can use to deserialize and validate data.
+ var schemaRegistryClient = new SchemaRegistryClient(schemaRegistryEndpoint, new DefaultAzureCredential());
+
+ // Create an Avro object serializer using the Schema Registry client object.
+ var serializer = new SchemaRegistryAvroSerializer(schemaRegistryClient, schemaGroup, new SchemaRegistryAvroSerializerOptions { AutoRegisterSchemas = true });
+
+    // Deserialize the data in the received event using the schema
+ Order sampleOrder = (Order)await serializer.DeserializeAsync(eventArgs.Data, typeof(Order));
+
+ // Print the received event
+ Console.WriteLine($"Received order with ID: {sampleOrder.id}, amount: {sampleOrder.amount}, description: {sampleOrder.description}");
+
+ await eventArgs.UpdateCheckpointAsync(eventArgs.CancellationToken);
    }
- ```
-
-
-1. Now, add the following error handler methods to the class.
-
- ```csharp
+
static Task ProcessErrorHandler(ProcessErrorEventArgs eventArgs)
- {
- // Write details about the error to the console window
- Console.WriteLine($"\tPartition '{ eventArgs.PartitionId}': an unhandled exception was encountered. This was not expected to happen.");
- Console.WriteLine(eventArgs.Exception.Message);
- return Task.CompletedTask;
- }
-
- ```
-1. Build the project, and ensure that there are no errors.
-
+ {
+ // Write details about the error to the console window
+ Console.WriteLine($"\tPartition '{eventArgs.PartitionId}': an unhandled exception was encountered. This was not expected to happen.");
+ Console.WriteLine(eventArgs.Exception.Message);
+ return Task.CompletedTask;
+ }
+ ```
+1. Replace the following placeholder values with the real values.
+    - `EVENTHUBSNAMESPACECONNECTIONSTRING` - connection string for the Event Hubs namespace
+ - `EVENTHUBNAME` - name of the event hub
+ - `EVENTHUBSNAMESPACENAME` - name of the Event Hubs namespace
+ - `SCHEMAGROUPNAME` - name of the schema group
+ - `AZURESTORAGECONNECTIONSTRING` - connection string for the Azure storage account
+ - `BLOBCONTAINERNAME` - Name of the blob container
+
+ ```csharp
+ // connection string to the Event Hubs namespace
+    const string connectionString = "EVENTHUBSNAMESPACECONNECTIONSTRING";
+
+ // name of the event hub
+ const string eventHubName = "EVENTHUBNAME";
+
+ // Schema Registry endpoint
+ const string schemaRegistryEndpoint = "EVENTHUBSNAMESPACENAME.servicebus.windows.net";
+
+ // name of the consumer group
+ const string schemaGroup = "SCHEMAGROUPNAME";
+ // Azure storage connection string
+ const string blobStorageConnectionString = "AZURESTORAGECONNECTIONSTRING";
+
+ // Azure blob container name
+ const string blobContainerName = "BLOBCONTAINERNAME";
+ ```
+1. Build the project, and ensure that there are no errors.
1. Run the receiver application.
1. You should see a message that the events have been received.

    ```bash
- Received - Order ID: 12345
+ Received order with ID: 1234, amount: 45.29, description: First sample order.
    ```

    These events are the three events you sent to the event hub earlier by running the sender program.
+## Samples
+See the [Readme](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/schemaregistry/Microsoft.Azure.Data.SchemaRegistry.ApacheAvro) article in our GitHub repository.
+
+## Clean up resources
+Delete the Event Hubs namespace or delete the resource group that contains the namespace.
## Next steps
hdinsight Log Analytics Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/log-analytics-migration.md
Previously updated : 11/17/2022 Last updated : 03/21/2023 # Log Analytics migration guide for Azure HDInsight clusters
Considering customer feedback, the Azure HDInsight team invested in integration
- Faster log delivery
- Resource-based table grouping and default queries

> [!NOTE]
> New Azure Monitor integration is in Public Preview across all regions where HDInsight is available.
This document outlines the changes to the Azure Monitor integration and provides
**Insights cluster portal integration**: The **Insights** pane is also new to the HDInsight Cluster portal. After enabling the new Azure Monitor integration, you can select the **Insights** pane and an out-of-box logs and metrics dashboard specific to the cluster's type will automatically populate for you. These dashboards have been revamped from our previous Azure solutions. They give you deep insights into your cluster's performance and health.
-**At-scale insights**: You can use the new **At-Scale Insights** workbook in the **Azure Monitor** portal to monitor your clusters' health and performance across different subscriptions.
- ## Customer scenarios The following sections describe how customers can use the new Azure Monitor integration in different scenarios. The [Activate a new Azure Monitor integration](#activate-a-new-azure-monitor-integration) section outlines how to activate and use the new Azure Monitor integration. The [Migrating from Azure Monitor Classic to the new Azure Monitor Integration](#migrate-to-the-new-azure-monitor-integration) section includes additional information for users that depend on the old Azure Monitor integration.
Insights are cluster-specific visualization dashboards made using [Azure Workboo
You can create your own Azure workbooks with custom graphs and visualizations. In your cluster's portal page, scroll down to the **Monitoring** section and select the **Workbooks** pane in the menu on the left. You can either start using a blank template or use one of the templates under the **HDInsight Clusters** section. There's a template for each cluster type. Templates are useful if you want to save specific customizations that the default HDInsight Insights don't provide. Feel free to send in requests for new features in the HDInsight Insights if you feel they're lacking something.
-#### At-scale workbooks for new Azure Monitor integrations
-
-Use our new at-scale workbook to get a multi-cluster monitoring experience for your clusters. Our at-scale workbook shows you which of your clusters have the monitoring pipeline enabled. The workbook also gives you a straightforward way to check the health of multiple clusters at once. To view this workbook:
-
-1. Go to the **Azure Monitor** page in from the Azure portal home page
-2. Once on the **Azure Monitor** page, select **Insights Hub** under the **Insights** section.
-3. Select **HDInsight Clusters** under the **Analytics** section.
-
- :::image type="content" source="./media/log-analytics-migration/at-scale-workbook.png" lightbox="./media/log-analytics-migration/at-scale-workbook.png" alt-text="Screenshot that shows the at-scale workbook." border="false":::
- #### Alerts You can add custom alerts to your clusters and workspaces in the Log query editor. Go to the Logs query editor by selecting the **Logs** pane from either your cluster or workspace portal. Run a query and then select **New Alert Rule** as shown in the following screenshot. For more information, read about [configuring alerts](../azure-monitor/alerts/alerts-log.md).
Refer to the [mapping table](#appendix-table-mapping) between the old table/sche
#### Out-of-box dashboards
-We also improved the out-of-box dashboards both at the cluster-level. There's a button on the top right of every graph that allows you to see the underlying query that produces the information. The graph is a great way to familiarize yourself with how the new tables can be queried effectively. You can access the out-of-box dashboards by following the instructions that you'll find in the [Insights](#insights) and [At-scale workbooks for new Azure Monitor integrations](#at-scale-workbooks-for-new-azure-monitor-integrations) sections.
-
-### Use an HDInsight at-scale monitoring dashboard
-
-If you're using the out-of-box monitoring dashboard for HDInsight clusters like HDInsight Spark Monitoring and HDInsight Interactive Monitoring, we're working to provide you the same capabilities on the Azure Monitor portal.
-
-You'll see that there's an HDInsight clusters option in Azure Monitor.
-
- :::image type="content" source="./media/log-analytics-migration/hdinsight-azure-monitor.png" lightbox="./media/log-analytics-migration/hdinsight-azure-monitor.png" alt-text="Screenshot that shows the HDInsight option in Azure Monitor." border="false":::
-
-The Azure Monitor portal's Insights Hub provides you the capability of monitoring multiple HDInsight clusters in one place. We organize the clusters based on the workload type, so you see types like Spark, HBase, and Hive. Instead of going to multiple dashboards, now you can monitor all your HDInsight clusters in this view.
-
-> [!NOTE]
-> For more information, see the [Insights](#insights) and [At-scale workbooks for new Azure Monitor integrations](#at-scale-workbooks-for-new-azure-monitor-integrations) sections in this article.
+We also improved the out-of-box dashboards at the cluster level. There's a button on the top right of every graph that allows you to see the underlying query that produces the information. The graph is a great way to familiarize yourself with how the new tables can be queried effectively.
## Enable both integrations to accelerate the migration
healthcare-apis How To Use Mapping Debugger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-mapping-debugger.md
Previously updated : 03/07/2023 Last updated : 03/15/2023 # How to use the MedTech service Mapping debugger
+> [!IMPORTANT]
+> This feature is currently in Public Preview. See [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+ > [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
healthcare-apis Understand Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/understand-service.md
Previously updated : 03/07/2023 Last updated : 03/21/2023
-# Understand the MedTech service device message data transformation
+# Understand the MedTech service device message data transformation
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
This article provides an overview of the device message data processing stages w
The MedTech service device message data processing follows these steps and in this order: * Ingest
-* Normalize - Device mappings applied.
+* Normalize - Device mapping applied.
* Group - (Optional)
-* Transform - FHIR destination mappings applied.
+* Transform - FHIR destination mapping applied.
* Persist

:::image type="content" source="media/understand-service/understand-device-message-flow.png" alt-text="Screenshot of a device message as it's processed by the MedTech service." lightbox="media/understand-service/understand-device-message-flow.png":::
The device message event hub uses the MedTech service's [system-assigned managed
> - A MedTech service and a storage writer application accessing the same device message event hub. ## Normalize
-Normalize is the next stage where device message data is processed using user-selected/user-created conforming and valid [device mappings](how-to-configure-device-mappings.md). This mapping process results in transforming device message data into a normalized schema.
+Normalize is the next stage where device message data is processed using the user-selected/user-created conforming and valid [device mapping](how-to-configure-device-mappings.md). This mapping process results in transforming device message data into a normalized schema.
The normalization process not only simplifies data processing at later stages, but also provides the capability to project one device message into multiple normalized messages. For instance, a device could send multiple vital signs for body temperature, pulse rate, blood pressure, and respiration rate in a single device message. This device message would create four separate FHIR Observation resources. Each resource would represent a different vital sign, with the device message projected into four different normalized messages.
Group is the next *optional* stage where the normalized messages available from
Device identity and measurement type grouping are optional and enabled by the use of the [SampledData](https://www.hl7.org/fhir/datatypes.html#SampledData) measurement type. The SampledData measurement type provides a concise way to represent a time-based series of measurements from a device message into FHIR Observation resources. When you use the SampledData measurement type, measurements can be grouped into a single FHIR Observation resource that represents a 1-hour period or a 24-hour period. ## Transform
-Transform is the next stage where normalized messages are processed using user-selected/user-created conforming and valid [FHIR destination mappings](how-to-configure-fhir-mappings.md). Normalized messages get transformed into FHIR Observation resources if a matching FHIR destination mapping has been authored.
+Transform is the next stage where normalized messages are processed using the user-selected/user-created conforming and valid [FHIR destination mapping](how-to-configure-fhir-mappings.md). Normalized messages get transformed into FHIR Observation resources if a matching FHIR destination mapping has been authored.
At this point, the [Device](https://www.hl7.org/fhir/device.html) resource, along with its associated [Patient](https://www.hl7.org/fhir/patient.html) resource, is also retrieved from the FHIR service using the device identifier present in the device message. These resources are added as a reference to the FHIR Observation resource being created.
In this article, you learned about the MedTech service device message processing
To learn how to configure the MedTech service device and FHIR destination mappings, see > [!div class="nextstepaction"]
-> [How to configure device mappings](how-to-configure-device-mappings.md)
+> [How to configure the device mapping](how-to-configure-device-mappings.md)
> [!div class="nextstepaction"]
-> [How to configure FHIR destination mappings](how-to-configure-fhir-mappings.md)
+> [How to configure the FHIR destination mapping](how-to-configure-fhir-mappings.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
iot-edge Module Edgeagent Edgehub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/module-edgeagent-edgehub.md
The module twin for the IoT Edge agent is called `$edgeAgent` and coordinates th
| Property | Description | Required | | -- | -- | -- |
-| schemaVersion | Either "1.0" or "1.1". Version 1.1 was introduced with IoT Edge version 1.0.10, and is recommended. | Yes |
-| runtime.type | Has to be "docker" | Yes |
-| runtime.settings.minDockerVersion | Set to the minimum Docker version required by this deployment manifest | Yes |
+| imagePullPolicy | When to pull the image, either *OnCreate* or *Never* (*Never* can be used if the image is already on the device). | Yes |
+| restartPolicy | When the module should be restarted. Possible values: *Never*: don't restart the module if it's not running. *Always*: always restart the module if it's not running. *On-Unhealthy*: restart the module if it's unhealthy (unhealthy is what Docker reports based on a health check, for example "Unhealthy - the container is not working correctly"). *On-Failed*: restart the module if it failed. | Yes |
+| runtime.type | Has to be *docker*. | Yes |
+| runtime.settings.minDockerVersion | Set to the minimum Docker version required by this deployment manifest. | Yes |
| runtime.settings.loggingOptions | A stringified JSON containing the logging options for the IoT Edge agent container. [Docker logging options](https://docs.docker.com/engine/admin/logging/overview/) | No |
-| runtime.settings.registryCredentials.{registryId}.username | The username of the container registry. For Azure Container Registry, the username is usually the registry name.<br><br>Registry credentials are necessary for any private module images. | No |
+| runtime.settings.registryCredentials.{registryId}.username | The username of the container registry. For Azure Container Registry, the username is usually the registry name. Registry credentials are necessary for any private module images. | No |
| runtime.settings.registryCredentials.{registryId}.password | The password for the container registry. | No |
-| runtime.settings.registryCredentials.{registryId}.address | The address of the container registry. For Azure Container Registry, the address is usually *{registry name}.azurecr.io*. | No |
-| systemModules.edgeAgent.type | Has to be "docker" | Yes |
-| systemModules.edgeAgent.startupOrder | An integer value for which spot a module has in the startup order. 0 is first and max integer (4294967295) is last. If a value isn't provided, the default is max integer. | No |
+| runtime.settings.registryCredentials.{registryId}.address | The address of the container registry. For Azure Container Registry, the address is usually *{registry name}.azurecr.io*. | No |
+| schemaVersion | Either *1.0* or *1.1*. Version 1.1 was introduced with IoT Edge version 1.0.10, and is recommended. | Yes |
+| status | Desired status of the module: *Running* or *Stopped*. | Yes |
+| systemModules.edgeAgent.type | Has to be *docker*. | Yes |
+| systemModules.edgeAgent.startupOrder | An integer value for the location a module has in the startup order. A *0* is first and *max integer* (4294967295) is last. If a value isn't provided, the default is *max integer*. | No |
| systemModules.edgeAgent.settings.image | The URI of the image of the IoT Edge agent. Currently, the IoT Edge agent isn't able to update itself. | Yes | | systemModules.edgeAgent.settings.createOptions | A stringified JSON containing the options for the creation of the IoT Edge agent container. [Docker create options](https://docs.docker.com/engine/api/v1.32/#operation/ContainerCreate) | No | | systemModules.edgeAgent.configuration.id | The ID of the deployment that deployed this module. | IoT Hub sets this property when the manifest is applied using a deployment. Not part of a deployment manifest. |
-| systemModules.edgeHub.type | Has to be "docker" | Yes |
-| systemModules.edgeHub.status | Has to be "running" | Yes |
-| systemModules.edgeHub.restartPolicy | Has to be "always" | Yes |
-| systemModules.edgeHub.startupOrder | An integer value for which spot a module has in the startup order. 0 is first and max integer (4294967295) is last. If a value isn't provided, the default is max integer. | No |
+| systemModules.edgeHub.type | Has to be *docker*. | Yes |
+| systemModules.edgeHub.status | Has to be *running*. | Yes |
+| systemModules.edgeHub.restartPolicy | Has to be *always*. | Yes |
+| systemModules.edgeHub.startupOrder | An integer value for the location a module has in the startup order. A *0* is first and *max integer* (4294967295) is last. If a value isn't provided, the default is *max integer*. | No |
| systemModules.edgeHub.settings.image | The URI of the image of the IoT Edge hub. | Yes | | systemModules.edgeHub.settings.createOptions | A stringified JSON containing the options for the creation of the IoT Edge hub container. [Docker create options](https://docs.docker.com/engine/api/v1.32/#operation/ContainerCreate) | No | | systemModules.edgeHub.configuration.id | The ID of the deployment that deployed this module. | IoT Hub sets this property when the manifest is applied using a deployment. Not part of a deployment manifest. | | modules.{moduleId}.version | A user-defined string representing the version of this module. | Yes |
-| modules.{moduleId}.type | Has to be "docker" | Yes |
-| modules.{moduleId}.status | {"running" \| "stopped"} | Yes |
-| modules.{moduleId}.restartPolicy | {"never" \| "always"} | Yes |
-| modules.{moduleId}.startupOrder | An integer value for which spot a module has in the startup order. 0 is first and max integer (4294967295) is last. If a value isn't provided, the default is max integer. | No |
-| modules.{moduleId}.imagePullPolicy | {"on-create" \| "never"} | No |
-| modules.{moduleId}.env | A list of environment variables to pass to the module. Takes the format `"<name>": {"value": "<value>"}` | No |
+| modules.{moduleId}.type | Has to be *docker*. | Yes |
+| modules.{moduleId}.status | {*running* \| *stopped*} | Yes |
+| modules.{moduleId}.restartPolicy | {*never* \| *always*} | Yes |
+| modules.{moduleId}.startupOrder | An integer value for the location a module has in the startup order. A *0* is first and *max integer* (4294967295) is last. If a value isn't provided, the default is *max integer*. | No |
+| modules.{moduleId}.imagePullPolicy | {*on-create* \| *never*} | No |
+| modules.{moduleId}.env | A list of environment variables to pass to the module. Takes the format `"<name>": {"value": "<value>"}`. | No |
| modules.{moduleId}.settings.image | The URI to the module image. | Yes | | modules.{moduleId}.settings.createOptions | A stringified JSON containing the options for the creation of the module container. [Docker create options](https://docs.docker.com/engine/api/v1.32/#operation/ContainerCreate) | No | | modules.{moduleId}.configuration.id | The ID of the deployment that deployed this module. | IoT Hub sets this property when the manifest is applied using a deployment. Not part of a deployment manifest. |
+| version | The current iteration that has version, commit, and build. | No |
## EdgeAgent reported properties
The following table does not include the information that is copied from the des
| Property | Description | | -- | -- |
+| lastDesiredStatus.code | This status code refers to the last desired properties seen by the IoT Edge agent. Allowed values: `200` Success, `400` Invalid configuration, `412` Invalid schema version, `417` Desired properties are empty, `500` Failed.|
+| lastDesiredStatus.description | Text description of the status. |
| lastDesiredVersion | This integer refers to the last version of the desired properties processed by the IoT Edge agent. |
-| lastDesiredStatus.code | This status code refers to the last desired properties seen by the IoT Edge agent. Allowed values: `200` Success, `400` Invalid configuration, `412` Invalid schema version, `417` the desired properties are empty, `500` Failed |
-| lastDesiredStatus.description | Text description of the status |
-| runtime.platform.OS | Reporting the OS running on the device |
-| runtime.platform.architecture | Reporting the architecture of the CPU on the device |
-| systemModules.edgeAgent.runtimeStatus | The reported status of IoT Edge agent: {"running" \| "unhealthy"} |
+| runtime.platform.OS | Reporting the OS running on the device. |
+| runtime.platform.architecture | Reporting the architecture of the CPU on the device. |
+| schemaVersion | Schema version of reported properties. |
+| systemModules.edgeAgent.runtimeStatus | The reported status of IoT Edge agent: {*running* \| *unhealthy*}. |
| systemModules.edgeAgent.statusDescription | Text description of the reported status of the IoT Edge agent. |
-| systemModules.edgeAgent.exitCode | The exit code reported by the IoT Edge agent container if the container exits |
-| systemModules.edgeAgent.lastStartTimeUtc | Time when IoT Edge agent was last started |
-| systemModules.edgeAgent.lastExitTimeUtc | Time when IoT Edge agent last exited |
-| systemModules.edgeHub.runtimeStatus | Status of IoT Edge hub: { "running" \| "stopped" \| "failed" \| "backoff" \| "unhealthy" } |
-| systemModules.edgeHub.statusDescription | Text description of the status of IoT Edge hub if unhealthy. |
-| systemModules.edgeHub.exitCode | The exit code reported by the IoT Edge hub container if the container exits |
-| systemModules.edgeHub.lastStartTimeUtc | Time when IoT Edge hub was last started |
-| systemModules.edgeHub.lastExitTimeUtc | Time when IoT Edge hub last exited |
-| systemModules.edgeHub.lastRestartTimeUtc | Time when IoT Edge hub was last restarted |
+| systemModules.edgeAgent.exitCode | The exit code reported by the IoT Edge agent container if the container exits. |
+| systemModules.edgeAgent.lastStartTimeUtc | Time when IoT Edge agent was last started. |
+| systemModules.edgeAgent.lastExitTimeUtc | Time when IoT Edge agent last exited. |
+| systemModules.edgeHub.runtimeStatus | Status of IoT Edge hub: { *running* \| *stopped* \| *failed* \| *backoff* \| *unhealthy* }. |
+| systemModules.edgeHub.statusDescription | Text description of the status of IoT Edge hub, if unhealthy. |
+| systemModules.edgeHub.exitCode | Exit code reported by the IoT Edge hub container, if the container exits. |
+| systemModules.edgeHub.lastStartTimeUtc | Time when IoT Edge hub was last started. |
+| systemModules.edgeHub.lastExitTimeUtc | Time when IoT Edge hub last exited. |
+| systemModules.edgeHub.lastRestartTimeUtc | Time when IoT Edge hub was last restarted. |
| systemModules.edgeHub.restartCount | Number of times this module was restarted as part of the restart policy. |
-| modules.{moduleId}.runtimeStatus | Status of the module: { "running" \| "stopped" \| "failed" \| "backoff" \| "unhealthy" } |
-| modules.{moduleId}.statusDescription | Text description of the status of the module if unhealthy. |
-| modules.{moduleId}.exitCode | The exit code reported by the module container if the container exits |
-| modules.{moduleId}.lastStartTimeUtc | Time when the module was last started |
-| modules.{moduleId}.lastExitTimeUtc | Time when the module last exited |
-| modules.{moduleId}.lastRestartTimeUtc | Time when the module was last restarted |
+| modules.{moduleId}.runtimeStatus | Status of the module: { *running* \| *stopped* \| *failed* \| *backoff* \| *unhealthy* }. |
+| modules.{moduleId}.statusDescription | Text description of the status of the module, if unhealthy. |
+| modules.{moduleId}.exitCode | The exit code reported by the module container, if the container exits. |
+| modules.{moduleId}.lastStartTimeUtc | Time when the module was last started. |
+| modules.{moduleId}.lastExitTimeUtc | Time when the module last exited. |
+| modules.{moduleId}.lastRestartTimeUtc | Time when the module was last restarted. |
| modules.{moduleId}.restartCount | Number of times this module was restarted as part of the restart policy. |
+| version | Version of the image. Example: "version": { "version": "1.2.7", "build": "50979330", "commit": "d3ec971caa0af0fc39d2c1f91aef21e95bd0c03c" }. |
## EdgeHub desired properties
The module twin for the IoT Edge hub is called `$edgeHub` and coordinates the co
| Property | Description | Required in the deployment manifest | | -- | -- | -- |
-| schemaVersion | Either "1.0" or "1.1". Version 1.1 was introduced with IoT Edge version 1.0.10, and is recommended. | Yes |
+| schemaVersion | Either 1.0 or 1.1. Version 1.1 was introduced with IoT Edge version 1.0.10, and is recommended. | Yes |
| routes.{routeName} | A string representing an IoT Edge hub route. For more information, see [Declare routes](module-composition.md#declare-routes). | The `routes` element can be present but empty. | | storeAndForwardConfiguration.timeToLiveSecs | The device time in seconds that IoT Edge hub keeps messages if disconnected from routing endpoints, whether IoT Hub or a local module. This time persists over any power offs or restarts. For more information, see [Offline capabilities](offline-capabilities.md#time-to-live). | Yes |
The module twin for the IoT Edge hub is called `$edgeHub` and coordinates the co
| lastDesiredVersion | This integer refers to the last version of the desired properties processed by the IoT Edge hub. | | lastDesiredStatus.code | The status code referring to last desired properties seen by the IoT Edge hub. Allowed values: `200` Success, `400` Invalid configuration, `500` Failed | | lastDesiredStatus.description | Text description of the status. |
-| clients.{device or moduleId}.status | The connectivity status of this device or module. Possible values {"connected" \| "disconnected"}. Only module identities can be in disconnected state. Downstream devices connecting to IoT Edge hub appear only when connected. |
+| clients | All clients connected to edgeHub with the status and last connected time. Example: "clients": { "device2/SimulatedTemperatureSensor": { "status": "Connected", "lastConnectedTimeUtc": "2022-11-17T21:49:16.4781564Z" } }. |
+| clients.{device or moduleId}.status | The connectivity status of this device or module. Possible values {*connected* \| *disconnected*}. Only module identities can be in disconnected state. Downstream devices connecting to IoT Edge hub appear only when connected. |
| clients.{device or moduleId}.lastConnectTime | Last time the device or module connected. | | clients.{device or moduleId}.lastDisconnectTime | Last time the device or module disconnected. |
+| schemaVersion | Schema version of reported properties. |
+| version | Version of the image. Example: "version": { "version": "1.2.7", "build": "50979330", "commit": "d3ec971caa0af0fc39d2c1f91aef21e95bd0c03c" }. |
## Next steps
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/quick-create-cli.md
tags: azure-resource-manager
Previously updated : 11/14/2022 Last updated : 03/21/2023 ms.devlang: azurecli
You need to provide following inputs to create a Managed HSM resource:
- Azure location. - A list of initial administrators.
-The following example creates an HSM named **ContosoMHSM**, in the resource group **ContosoResourceGroup**, residing in the **West US 3** location, with **the current signed in user** as the only administrator, with **7 days retention period** for soft-delete. Read more about [Managed HSM soft-delete](soft-delete-overview.md)
+The following example creates an HSM named **ContosoMHSM**, in the resource group **ContosoResourceGroup**, residing in the **East US 2** location, with **the current signed in user** as the only administrator, with **7 days retention period** for soft-delete. Read more about [Managed HSM soft-delete](soft-delete-overview.md)
```azurecli-interactive oid=$(az ad signed-in-user show --query id -o tsv)
-az keyvault create --hsm-name "ContosoMHSM" --resource-group "ContosoResourceGroup" --location "westus3" --administrators $oid --retention-days 7
+az keyvault create --hsm-name "ContosoMHSM" --resource-group "ContosoResourceGroup" --location "eastus2" --administrators $oid --retention-days 7
``` > [!NOTE]
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/quick-create-powershell.md
Title: Create and retrieve attributes of a managed key in Azure Key Vault ΓÇô Az
description: Quickstart showing how to set and retrieve a managed key from Azure Key Vault using Azure PowerShell Previously updated : 11/14/2022 Last updated : 03/21/2023
In this quickstart, you will create and activate an Azure Key Vault Managed HSM
If you do not have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-The service is available in limited regions ΓÇô To learn more about availability, please see [Azure Dedicated HSM purshase options](https://azure.microsoft.com/pricing/details/azure-dedicated-hsm).
+The service is available in limited regions ΓÇô To learn more about availability, please see [Azure Dedicated HSM purchase options](https://azure.microsoft.com/pricing/details/azure-dedicated-hsm).
[!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
Use the Azure PowerShell [New-AzKeyVaultManagedHsm](/powershell/module/az.keyvau
> Each Managed HSM must have a unique name. Replace \<your-unique-managed-hsm-name\> with the name of your Managed HSM in the following examples. - Resource group name: **myResourceGroup**.-- The location: **West US 3**.
+- The location: **East US 2**.
- Your principal ID: Pass the Azure Active Directory principal ID that you obtained in the last section to the "Administrator" parameter. ```azurepowershell-interactive
key-vault Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/quick-create-template.md
description: Quickstart showing how to create Azure an Azure Key Vault Managed H
Previously updated : 09/22/2022 Last updated : 03/21/2023
You can now deploy the ARM template:
- **Subscription**: Select an Azure subscription. - **Resource group**: Select **Create new**, enter "myResourceGroup" as the name, and then select **OK**.
- - **Location**: Select a location. For example, **West US 3**.
+ - **Location**: Select a location. For example, **East US 2**.
- **managedHSMName**: Enter a name for your Managed HSM. - **Tenant ID**: The template function automatically retrieves your tenant ID; don't change the default value. If there is no value, enter the Tenant ID that you retrieved above. - **initialAdminObjectIds**: Enter the Object ID that you retrieved above.
lab-services How To Attach Detach Shared Image Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-attach-detach-shared-image-gallery.md
This article shows how to attach or detach an Azure compute gallery to a lab pla
> [!IMPORTANT] > To show a virtual machine image in the list of images during lab creation, you need to replicate the compute gallery image to the same region as the lab plan. You need to manually [replicate images](../virtual-machines/shared-image-galleries.md) to other regions in the compute gallery.
-Saving images to a compute gallery and replicating those images incurs additional cost. This cost is separate from the Azure Lab Services usage cost. Learn more about [Azure Compute Gallery pricing](../virtual-machines/azure-compute-gallery.md#billing).
+Saving images to a compute gallery and replicating those images incurs extra cost. This cost is separate from the Azure Lab Services usage cost. Learn more about [Azure Compute Gallery pricing](../virtual-machines/azure-compute-gallery.md#billing).
## Prerequisites
Saving images to a compute gallery and replicating those images incurs additiona
| [Owner](/azure/role-based-access-control/built-in-roles#owner) | Azure compute gallery | If you attach an existing compute gallery. | | [Owner](/azure/role-based-access-control/built-in-roles#owner) | Resource group | If you create a new compute gallery. |
- Learn how to [assign an Azure role in Azure role-based access control (Azure RBAC)](/azure/role-based-access-control/role-assignments-steps#step-5-assign-role).
+- If your Azure account is a guest user in Azure Active Directory, it needs the [Directory Readers](/azure/active-directory/roles/permissions-reference#directory-readers) role to attach an existing compute gallery.
+
+Learn how to [assign an Azure role in Azure role-based access control (Azure RBAC)](/azure/role-based-access-control/role-assignments-steps#step-5-assign-role).
## Scenarios
If you already have an Azure compute gallery, you can also attach it to your lab
### Configure compute gallery permissions
-The Azure Lab Services service principal needs to have the Owner Azure RBAC role on the Azure compute gallery. There are two Azure Lab Services service principals:
+The Azure Lab Services service principal needs to have the [Owner](/azure/role-based-access-control/built-in-roles#owner) Azure RBAC role on the Azure compute gallery. There are two Azure Lab Services service principals:
| Name | Application ID | Description | | - | -- | - | | Azure Lab Services | c7bb12bf-0b39-4f7f-9171-f418ff39b76a | Service principal for Azure Lab Services lab plans (V2). | | Azure Lab Services | 1a14be2a-e903-4cec-99cf-b2e209259a0f | Service principal for Azure Lab Services lab accounts (V1). |
-To attach a compute gallery to a lab plan, assign the Owner role to the service principal with application ID `c7bb12bf-0b39-4f7f-9171-f418ff39b76a`.
+To attach a compute gallery to a lab plan, assign the [Owner](/azure/role-based-access-control/built-in-roles#owner) role to the service principal with application ID `c7bb12bf-0b39-4f7f-9171-f418ff39b76a`.
+
+If your Azure account is a guest user, it needs the [Directory Readers](/azure/active-directory/roles/permissions-reference#directory-readers) role to perform the role assignment. Learn about [role assignments for guest users](/azure/role-based-access-control/role-assignments-external-users#guest-user-cannot-browse-users-groups-or-service-principals-to-assign-roles).
-> [!NOTE]
-> When you add a role assignment in the Azure portal, the user interface shows the *object ID* of the service principal, which is different from the *application ID*. The object ID for a service principal can be different in each Azure subscription. You can find the service principal object ID in Azure Active Directory, based on its application ID. Learn more about [Service principal objects](/azure/active-directory/develop/app-objects-and-service-principals#service-principal-object).
+# [Azure CLI](#tab/azure-cli)
-Follow these steps to grant permissions to the Azure Lab Service service principal by using the Azure CLI:
+Follow these steps to grant permissions to the Azure Lab Services service principal by using the Azure CLI:
1. Open [Azure Cloud Shell](https://shell.azure.com). Alternately, select the **Cloud Shell** button on the menu bar at the upper right in the [Azure portal](https://portal.azure.com).
Follow these steps to grant permissions to the Azure Lab Service service princip
Replace the text placeholders *`<service-principal-object-id>`* and *`<gallery-id>`* with the outcomes of the previous commands.
+# [Azure portal](#tab/portal)
+
+When you add a role assignment in the Azure portal, the user interface shows the *object ID* of the service principal, which is different from the *application ID*. The object ID for a service principal is different in each Azure subscription. Learn more about [Service principal objects](/azure/active-directory/develop/app-objects-and-service-principals#service-principal-object).
+
+Follow these steps to grant permissions to the Azure Lab Services service principal by using the Azure portal:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. In the search box at the top, enter *Enterprise applications*, and select **Enterprise applications** from the services list.
+1. On the **All applications** page, remove the **Application type** filter, and enter *c7bb12bf-0b39-4f7f-9171-f418ff39b76a* in the **Application ID starts with** filter.
+
+ :::image type="content" source="./media/how-to-attach-detach-shared-image-gallery/lab-services-enterprise-applications.png" alt-text="Screenshot that shows the list of enterprise applications in the Azure portal, highlighting the application ID filter." lightbox="./media/how-to-attach-detach-shared-image-gallery/lab-services-enterprise-applications.png":::
+
+1. Note the **Object ID** value of the Azure Lab Services service principal.
+1. Go to your Azure compute gallery resource.
+1. Select **Access control (IAM)**, and then select **Add** > **Add role assignment**.
+1. On the **Role** page, select the **Owner** role from the list.
+1. On the **Members** page, select **Select members**.
+1. Enter *Azure Lab Services* in the search box, select both items, and then select **Select**.
+1. In the **Add role assignment** page, remove the item that doesn't match the object ID of the Azure Lab Services service principal.
+
+ :::image type="content" source="./media/how-to-attach-detach-shared-image-gallery/compute-gallery-add-role-assignment.png" alt-text="Screenshot that shows the add role assignment page for the compute gallery in the Azure portal." lightbox="./media/how-to-attach-detach-shared-image-gallery/compute-gallery-add-role-assignment.png":::
+
+1. On the **Review + Assign** page, select **Review + assign** to add the role assignment to the compute gallery.
+---

Learn more about how to [assign an Azure role in Azure role-based access control (Azure RBAC)](/azure/role-based-access-control/role-assignments-steps#step-5-assign-role).

### Attach the compute gallery
logic-apps Quickstart Create Deploy Azure Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-create-deploy-azure-resource-manager-template.md
To view the logic app workflow, you can use the Azure portal, run a script that
```azurecli-interactive echo "Enter your logic app name:" && read logicAppName &&
-az logic workflow show --name $logicAppName &&
+az logic workflow show --resource-group $resourceGroupName --name $logicAppName &&
echo "Press [ENTER] to continue ..." ```
For more information, see [Azure CLI: az logic workflow show](/cli/azure/logic/w
```azurepowershell-interactive $logicAppName = Read-Host -Prompt "Enter your logic app name"
-Get-AzLogicApp -Name $logicAppName
+Get-AzLogicApp -ResourceGroupName $resourceGroupName -Name $logicAppName
Write-Host "Press [ENTER] to continue..." ```
machine-learning How To Azure Container For Pytorch Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-azure-container-for-pytorch-environment.md
+
+ Title: How to create Azure Container for PyTorch Custom Curated environment
+
+description: Create custom curated Azure Container for PyTorch environments in Azure Machine Learning studio to run your machine learning models and reuse it in different scenarios.
+++++++ Last updated : 03/20/2023++
+# Create custom curated Azure Container for PyTorch (ACPT) environments in Azure Machine Learning studio
+
+If you want to extend a curated environment, for example to add Hugging Face (HF) transformers, datasets, or other external packages, Azure Machine Learning lets you create a new environment from a Docker context that uses an ACPT curated environment as the base image and installs the additional packages on top of it, as the following steps show.
+
+## Prerequisites
+
+Before following the steps in this article, make sure you have the following prerequisites:
+
+- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+
+- An Azure Machine Learning workspace. If you don't have one, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create one.
+
+## Navigate to environments
+
+In the [Azure Machine Learning studio](https://ml.azure.com/registries/environments), navigate to the "Environments" section by selecting the "Environments" option.
++
+## Navigate to curated environments
+
+Navigate to curated environments and search "acpt" to list all the available ACPT curated environments. Selecting the environment shows details of the environment.
+++
+## Get details of the curated environments
+
+To create a custom environment, you need the base Docker image repository, which can be found in the "Description" section as "Azure Container Registry". Copy the "Azure Container Registry" name; you use it later when you create the custom environment.
++
+## Navigate to custom environments
+
+Go back and select the "Custom environments" tab.
++
+## Create custom environments
+
+Select **+ Create**. In the "Create environment" window, enter a name and description for the environment, and select "Create a new docker context" in the "Select environment type" section.
++
+Paste the Docker image name that you copied previously. Configure your environment by declaring the base image, and add any environment variables and packages that you want to include.
++
+Review your environment settings, add any tags if needed, and select the **Create** button to create your custom environment.
+
+That's it! You've now created a custom environment in Azure Machine Learning studio and can use it to run your machine learning models.
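
If you prefer to script this instead of using the studio UI, a similar environment can be sketched with the Azure Machine Learning Python SDK v2. This is only a minimal illustration under stated assumptions: the `ml_client` placeholders, the local `acpt-context` folder, and the packages named in the Dockerfile comment are hypothetical, and the base image must be the ACPT image name you copied from the curated environment's details.

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Environment, BuildContext
from azure.identity import DefaultAzureCredential

# Connect to the workspace (placeholder identifiers).
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE_NAME>",
)

# The local folder "acpt-context" is assumed to contain a Dockerfile that starts
# FROM the ACPT image you copied earlier and installs the extra packages, for example:
#   FROM <ACPT Azure Container Registry image you copied earlier>
#   RUN pip install transformers datasets
custom_env = Environment(
    name="acpt-custom",
    description="ACPT curated image plus Hugging Face transformers and datasets",
    build=BuildContext(path="acpt-context"),
)

ml_client.environments.create_or_update(custom_env)
```

Building from a Docker context keeps the curated ACPT layers intact, so only the extra packages are added on top of the base image.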
+
+## Next steps
+
+- Learn more about environment objects:
+  - [What are Azure Machine Learning environments?](concept-environments.md).
+ - Learn more about [curated environments](concept-environments.md).
+- Learn more about [training models in Azure Machine Learning](concept-train-machine-learning-model.md).
+- [Azure Container for PyTorch (ACPT) reference](resource-azure-container-for-pytorch.md)
machine-learning How To Manage Models Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-models-mlflow.md
# Manage models registries in Azure Machine Learning with MLflow
-Azure Machine Learning supports MLflow for model management. This represents a convenient way to support the entire model lifecycle for users familiar with the MLFlow client. The following article describes the different capabilities and how it compares with other options.
+Azure Machine Learning supports MLflow for model management. This approach represents a convenient way to support the entire model lifecycle for users familiar with the MLflow client. The following article describes the different capabilities and how it compares with other options.
### Prerequisites [!INCLUDE [mlflow-prereqs](../../includes/machine-learning-mlflow-prereqs.md)]
-* Some operations may be executed directly using the MLflow fluent API (`mlflow.<method>`). However, others may require to create an MLflow client, which allows to communicate with Azure Machine Learning in the MLflow protocol. You can create an `MlflowClient` object as follows. This tutorial will use the object `client` to refer to such MLflow client.
+* Some operations may be executed directly using the MLflow fluent API (`mlflow.<method>`). However, others may require you to create an MLflow client, which allows you to communicate with Azure Machine Learning using the MLflow protocol. You can create an `MlflowClient` object as follows. This tutorial uses the object `client` to refer to this MLflow client.
```python
import mlflow
from mlflow.tracking import MlflowClient

client = MlflowClient()
```
## Registering new models in the registry
+The models registry offer a convenient and centralized way to manage models in a workspace. Each workspace has its own independent models registry. The following section explains multiple ways to register models in the registry using MLflow SDK.
+ ### Creating models from an existing run
-If you have an MLflow model logged inside of a run and you want to register it in a registry, you can do that by using the run ID and the path where the model was logged. See [Manage experiments and runs with MLflow](how-to-track-experiments-mlflow.md) to know how to query this information if you don't have it.
+If you have an MLflow model logged inside of a run and you want to register it in a registry, use the run ID and the path where the model was logged. See [Manage experiments and runs with MLflow](how-to-track-experiments-mlflow.md) to know how to query this information if you don't have it.
```python mlflow.register_model(f"runs:/{run_id}/{artifact_path}", model_name)
mlflow.register_model(f"file://{model_local_path}", "local-model-test")
## Querying model registries
+You can use the MLflow SDK to query and search for models registered in the registry. The following section explains multiple ways to achieve it.
+ ### Querying all the models in the registry You can query all the registered models in the registry using the MLflow client. The following sample prints all the model's names:
```python
for model in client.search_registered_models():
    print(f"{model.name}")
```
+Use `order_by` to order by a specific property like `name`, `version`, `creation_timestamp`, and `last_updated_timestamp`:
+
+```python
+client.search_registered_models(order_by=["name ASC"])
+```
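
You can also narrow the search with a filter expression. A minimal sketch, assuming the `client` object created earlier and a hypothetical model name:

```python
# Hypothetical model name used only for illustration.
model_name = "mlflow-sample-model"

# Filter registered models by name using the MLflow search filter syntax.
for model in client.search_registered_models(filter_string=f"name = '{model_name}'"):
    print(f"{model.name}")
```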
> [!NOTE]
> __MLflow 2.0 advisory:__ In older versions of MLflow (<2.0), use method `MlflowClient.list_registered_models()` instead.

### Getting specific versions of the model
-The command above will retrieve the model object which contains all the model versions. However, if you want to get the last registered model version of a given model, you can use `get_registered_model`:
+The `search_registered_models()` command retrieves the model object, which contains all the model versions. However, if you want to get the last registered model version of a given model, you can use `get_registered_model`:
```python
client.get_registered_model(model_name)
```
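
If you need one particular version rather than the whole registered model object, `get_model_version` takes the model name and a version. A minimal sketch, assuming the `client` created earlier and an illustrative version number:

```python
# Retrieve a single version of the model (the version number is illustrative).
model_version = client.get_model_version(model_name, version="1")
print(f"{model_version.name}, version {model_version.version}")
```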
You can load models directly from the registry to restore the model objects that were logged.
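
For example, the following minimal sketch loads a specific registered version with the `models:/` URI scheme; the version number is an illustrative value:

```python
import mlflow

# Load version 1 of the registered model as a generic pyfunc model.
model = mlflow.pyfunc.load_model(f"models:/{model_name}/1")
```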
MLflow supports model stages to manage a model's lifecycle. A model version can transition from one stage to another. Stages are assigned to a model version (instead of to the model), which means that a given model can have multiple versions in different stages.

> [!IMPORTANT]
-> Stages can only be accessed using the MLflow SDK. They don't show up in the [Azure Machine Learning Studio portal](https://ml.azure.com) and can't be retrieved using neither Azure Machine Learning SDK, Azure Machine Learning CLI, or Azure Machine Learning REST API. Creating deployment from a given model's stage is not supported by the moment.
+> Stages can only be accessed using the MLflow SDK. They don't show up in the [Azure ML Studio portal](https://ml.azure.com) and can't be retrieved using the Azure ML SDK, Azure ML CLI, or Azure ML REST API. Creating a deployment from a given model's stage isn't supported at the moment.
### Querying model stages
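
As a starting point, you can list the stages that a model version can be assigned to. A minimal sketch, assuming the `client` created earlier and an illustrative version number:

```python
# List the stages a given model version can be transitioned to.
stages = client.get_model_version_stages(model_name, version="1")
print(stages)  # typically ['None', 'Staging', 'Production', 'Archived']
```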
Transitioning a model's version to a particular stage can be done using the MLflow client:
```python
client.transition_model_version_stage(model_name, version=3, stage="Staging")
```
-By default, if there were an existing model version in that particular stage, it will remain there. Hence, it won't be replaced as multiple model's versions can be in the same stage at the same time. Alternatively, you can indicate `archive_existing_versions=True` to tell MLflow to move the existing model's version to the stage `Archived`.
+By default, if there's an existing model version in that particular stage, it remains there. It isn't replaced, as multiple model versions can be in the same stage at the same time. Alternatively, you can indicate `archive_existing_versions=True` to tell MLflow to move the existing model's version to the stage `Archived`.
```python client.transition_model_version_stage(
model = mlflow.pyfunc.load_model(f"models:/{model_name}/Staging")
## Editing and deleting models
-Editing registered models is supported in both Mlflow and Azure Machine Learning. However, there are some differences important to be noticed:
+Editing registered models is supported in both MLflow and Azure ML. However, there are some important differences to note:
> [!WARNING]
> Renaming models is not supported in Azure Machine Learning as model objects are immutable.
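
Descriptions and tags of a model version can still be edited with the MLflow client. A minimal sketch with illustrative values, using the `client` object created earlier:

```python
# Update the description of a model version (illustrative values).
client.update_model_version(model_name, version="1", description="Updated description")

# Add or update a tag on the same model version.
client.set_model_version_tag(model_name, version="1", key="team", value="data-science")
```

The `delete_model_version` call that follows removes a specific model version from the registry.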
```python
client.delete_model_version(model_name, version="2")
```
## Support matrix for managing models with MLflow
-The MLflow client exposes several methods to retrieve and manage models. The following table shows which of those methods are currently supported in MLflow when connected to Azure Machine Learning. It also compares it with other models management capabilities in Azure Machine Learning.
+The MLflow client exposes several methods to retrieve and manage models. The following table shows which of those methods are currently supported in MLflow when connected to Azure ML. It also compares it with other model management capabilities in Azure ML.
-| Feature | MLflow | Azure Machine Learning with MLflow | Azure Machine Learning CLIv2 | Azure Machine Learning Studio |
+| Feature | MLflow | Azure ML with MLflow | Azure ML CLIv2 | Azure ML Studio |
| :- | :-: | :-: | :-: | :-: | | Registering models in MLflow format | **&check;** | **&check;** | **&check;** | **&check;** | | Registering models not in MLflow format | | | **&check;** | **&check;** | | Registering models from runs outputs/artifacts | **&check;** | **&check;**<sup>1</sup> | **&check;**<sup>2</sup> | **&check;** | | Registering models from runs outputs/artifacts in a different tracking server/workspace | **&check;** | | **&check;**<sup>5</sup> | **&check;**<sup>5</sup> |
-| Listing registered models | **&check;** | **&check;** | **&check;** | **&check;** |
+| Search/list registered models | **&check;** | **&check;** | **&check;** | **&check;** |
| Retrieving details of registered model's versions | **&check;** | **&check;** | **&check;** | **&check;** | | Editing registered model's versions description | **&check;** | **&check;** | **&check;** | **&check;** | | Editing registered model's versions tags | **&check;** | **&check;** | **&check;** | **&check;** |
The MLflow client exposes several methods to retrieve and manage models. The fol
> [!NOTE] > - <sup>1</sup> Use URIs with format `runs:/<run-id>/<path>`. > - <sup>2</sup> Use URIs with format `azureml://jobs/<job-id>/outputs/artifacts/<path>`.
-> - <sup>3</sup> Registered models are immutable objects in Azure Machine Learning.
-> - <sup>4</sup> Use search box in Azure Machine Learning Studio. Partial match supported.
+> - <sup>3</sup> Registered models are immutable objects in Azure ML.
+> - <sup>4</sup> Use search box in Azure ML Studio. Partial match supported.
> - <sup>5</sup> Use [registries](how-to-manage-registries.md). ## Next steps
machine-learning How To Share Models Pipelines Across Workspaces With Registries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-share-models-pipelines-across-workspaces-with-registries.md
mlflow_model = Model(
version=str(1), # use str(int(time.time())) if you want a random model number description="MLflow model created from local path", )
-ml_client_registry.model.create_or_update(mlflow_model)
+ml_client_registry.models.create_or_update(mlflow_model)
```
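For context, a self-contained sketch of the corrected call when registering an MLflow model into a registry might look like the following. The registry name, model name, and local path are assumptions:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Model
from azure.ai.ml.constants import AssetTypes
from azure.identity import DefaultAzureCredential

# MLClient scoped to a registry instead of a workspace (hypothetical registry name)
ml_client_registry = MLClient(credential=DefaultAzureCredential(), registry_name="my-registry")

mlflow_model = Model(
    path="./model",             # hypothetical local folder containing the MLflow model
    type=AssetTypes.MLFLOW_MODEL,
    name="my-mlflow-model",     # hypothetical model name
    version=str(1),             # use str(int(time.time())) for a time-based version
    description="MLflow model created from local path",
)
ml_client_registry.models.create_or_update(mlflow_model)
```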
mlflow_model = Model(
version=version_timestamp, description="MLflow model created from job output", )
-ml_client_workspace.model.create_or_update(mlflow_model)
+ml_client_workspace.models.create_or_update(mlflow_model)
``` > [!TIP]
machine-learning How To Train Distributed Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-distributed-gpu.md
Azure Machine Learning will set the `MASTER_ADDR`, `MASTER_PORT`, `WORLD_SIZE`,
## DeepSpeed
-[DeepSpeed](https://www.deepspeed.ai/tutorials/azure/) is supported as a first-class citizen within Azure Machine Learning to run distributed jobs with near linear scalabibility in terms of 
+[DeepSpeed](https://www.deepspeed.ai/tutorials/azure/) is supported as a first-class citizen within Azure Machine Learning to run distributed jobs with near linear scalability in terms of 
* Increase in model size * Increase in number of GPUs `DeepSpeed` can be enabled using either Pytorch distribution or MPI for running distributed training. Azure Machine Learning supports the `DeepSpeed` launcher to launch distributed training as well as autotuning to get optimal `ds` configuration.
-You can use a [curated environment](resource-curated-environments.md#azure-container-for-pytorch-acpt-preview) for an out of the box environment with the latest state of art technologies including `DeepSpeed`, `ORT`, `MSSCCL`, and `Pytorch` for your DeepSpeed training jobs.
+You can use a [curated environment](resource-curated-environments.md#azure-container-for-pytorch-acpt) for an out of the box environment with the latest state of art technologies including `DeepSpeed`, `ORT`, `MSSCCL`, and `Pytorch` for your DeepSpeed training jobs.
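As an illustration, a distributed PyTorch job that launches DeepSpeed could be submitted with the Azure Machine Learning Python SDK v2 roughly as follows. This is a sketch only; the compute target, code folder, DeepSpeed config file, and environment reference are assumptions:

```python
from azure.ai.ml import MLClient, command
from azure.identity import DefaultAzureCredential

ml_client = MLClient.from_config(credential=DefaultAzureCredential())

job = command(
    code="./src",  # hypothetical folder containing train.py and ds_config.json
    command="python train.py --deepspeed ds_config.json",
    environment="acpt-pytorch-1.13-cuda11.7@latest",  # ACPT curated environment
    compute="gpu-cluster",                            # hypothetical GPU cluster
    instance_count=2,
    distribution={"type": "PyTorch", "process_count_per_instance": 4},
)
ml_client.jobs.create_or_update(job)
```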
### DeepSpeed example
machine-learning Reference Checkpoint Performance For Large Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-checkpoint-performance-for-large-models.md
Nebula can
* An Azure subscription and an Azure Machine Learning workspace. See [Create workspace resources](./quickstart-create-resources.md) for more information about workspace resource creation * An Azure Machine Learning compute target. See [Manage training & deploy computes](./how-to-create-attach-compute-studio.md) to learn more about compute target creation * A training script that uses **PyTorch**.
-* ACPT-curated (Azure Container for Pytorch) environment. See [Curated environments](resource-curated-environments.md#azure-container-for-pytorch-acpt-preview) to obtain the ACPT image. Learn how to use the curated environment [here](./how-to-use-environments.md)
+* ACPT-curated (Azure Container for Pytorch) environment. See [Curated environments](resource-curated-environments.md#azure-container-for-pytorch-acpt) to obtain the ACPT image. Learn how to [use the curated environment](./how-to-use-environments.md)
* An Azure Machine Learning script run configuration file. If you don't have one, you can follow [this resource](./how-to-set-up-training-targets.md) ## How to Use Nebula
Nebula use involves:
- [API calls to save and load checkpoints](#call-apis-to-save-and-load-checkpoints) ### Using ACPT environment
-[Azure Container for PyTorch (ACPT)](how-to-manage-environments-v2.md?tabs=cli#curated-environments), a curated environment for PyTorch model training, includes Nebula as a preinstalled, dependent Python package. See [Azure Container for PyTorch (ACPT)](resource-curated-environments.md#azure-container-for-pytorch-acpt-preview) to view the curated environment, and [Enabling Deep Learning with Azure Container for PyTorch in Azure Machine Learning](https://techcommunity.microsoft.com/t5/ai-machine-learning-blog/enabling-deep-learning-with-azure-container-for-pytorch-in-azure/ba-p/3650489) to learn more about the ACPT image.
+[Azure Container for PyTorch (ACPT)](how-to-manage-environments-v2.md?tabs=cli#curated-environments), a curated environment for PyTorch model training, includes Nebula as a preinstalled, dependent Python package. See [Azure Container for PyTorch (ACPT)](resource-curated-environments.md#azure-container-for-pytorch-acpt) to view the curated environment, and [Enabling Deep Learning with Azure Container for PyTorch in Azure Machine Learning](https://techcommunity.microsoft.com/t5/ai-machine-learning-blog/enabling-deep-learning-with-azure-container-for-pytorch-in-azure/ba-p/3650489) to learn more about the ACPT image.
### Initializing Nebula
machine-learning Resource Azure Container For Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/resource-azure-container-for-pytorch.md
+
+ Title: Azure Container for PyTorch
+
+description: Azure Container for PyTorch (ACPT), a curated environment that includes the best of Microsoft technologies for training with PyTorch on Azure.
+++++++ Last updated : 03/20/2023++
+# Azure Container for PyTorch (ACPT)
+
+Azure Container for PyTorch is a lightweight, standalone environment that includes the components needed to effectively run optimized training for large models on Azure Machine Learning. The Azure Machine Learning [curated environments](resource-curated-environments.md) are available in the user's workspace by default and are backed by cached Docker images that use the latest version of the Azure Machine Learning SDK. Using them reduces preparation costs and deployment time. ACPT can be used to quickly get started with various deep learning tasks with PyTorch on Azure.
+
+> [!NOTE]
+> Use the [Python SDK](how-to-use-environments.md), [CLI](/cli/azure/ml/environment#az-ml-environment-list), or Azure Machine Learning [studio](how-to-manage-environments-in-studio.md) to get the full list of environments and their dependencies. For more information, see the [environments article](how-to-use-environments.md#use-a-curated-environment).
+
+## Why should I use ACPT?
+
+* Use as is with preinstalled packages or build on top of the curated environment.
+* Optimized training framework to set up, develop, and accelerate PyTorch models on large workloads.
+* Up-to-date stack with the latest compatible versions of Ubuntu, Python, PyTorch, CUDA/ROCm, etc.
+* Ease of use: All components installed and validated against dozens of Microsoft workloads to reduce setup costs and accelerate time to value.
+* Latest Training Optimization Technologies: [ONNX Runtime](https://onnxruntime.ai/), [DeepSpeed](https://www.deepspeed.ai/), [MSCCL](https://github.com/microsoft/msccl), and others.
+* Integration with Azure Machine Learning: Track your PyTorch experiments on Azure Machine Learning studio or using the SDK.
+* The image is also available as a [Data Science Virtual Machine (DSVM)](https://azure.microsoft.com/products/virtual-machines/data-science-virtual-machines/). To learn more about Data Science Virtual Machines, see [the DSVM overview documentation](data-science-virtual-machine/overview.md).
+* Azure customer support reduces training and deployment latency.
+* Improves training and deployment success rate.
+* Avoid unnecessary image builds.
+* Only have required dependencies and access rights in the image/container.
+
+>[!IMPORTANT]
+> To view more information about curated environment packages and versions, visit the Environments tab in the Azure Machine Learning [studio](./how-to-manage-environments-in-studio.md).
+
+## Supported configurations for Azure Container for PyTorch (ACPT)
+
+**Description**: The Azure Curated Environment for PyTorch is our latest PyTorch curated environment. It's optimized for large, distributed deep learning workloads and comes prepackaged with the best of Microsoft technologies for accelerated training, for example, OnnxRuntime Training (ORT), DeepSpeed, MSCCL, etc.
+
+The following configurations are supported:
+
+| Environment Name | OS | GPU Version| Python Version | PyTorch Version | ORT-training Version | DeepSpeed Version | torch-ort Version |
+| | | | | | | | |
+|acpt-pytorch-2.0-cuda11.7|Ubuntu 20.04|cu117|3.8|2.0|1.14.1|0.8.2 |0.14.0|
+|acpt-pytorch-1.13-cuda11.7|Ubuntu 20.04|cu117|3.8|1.13.1|1.14.1|0.8.2|1.14.0|
+|acpt-pytorch-1.12-py39-cuda11.6|Ubuntu 20.04|cu116|3.9|1.12.1|1.14.1|0.8.2|1.14.0|
+|acpt-pytorch-1.12-cuda11.6|Ubuntu 20.04|cu116|3.8|1.12.1|1.14.1|0.8.2|1.14.0|
+|acpt-pytorch-1.11-cuda11.5|Ubuntu 20.04|cu115|3.8|1.11.0|1.11.1|0.7.3|1.11.0|
+|acpt-pytorch-1.11-cuda11.3|Ubuntu 20.04|cu113|3.8|1.11.0|1.14.1|0.8.2|1.14.0|
+
+Other packages, such as fairscale, horovod, msccl, protobuf, pyspark, pytest, pytorch-lightning, tensorboard, NebulaML, torchvision, and torchmetrics, are also included to support all training needs.
+
+To learn more, see [Create custom ACPT curated environments](how-to-azure-container-for-pytorch-environment.md).
+
+> [!NOTE]
+> Currently, due to underlying CUDA and cluster incompatibilities, on [NC series](../virtual-machines/nc-series.md) only acpt-pytorch-1.11-cuda11.3 with CUDA 11.3 and torch 1.11 can be used.
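+
+As an illustration, one of the environments listed above can be referenced by name when submitting a job with the Azure Machine Learning Python SDK v2. This is a minimal sketch; the code folder, compute target, and exact environment name in your workspace are assumptions:
+
+```python
+from azure.ai.ml import MLClient, command
+from azure.identity import DefaultAzureCredential
+
+ml_client = MLClient.from_config(credential=DefaultAzureCredential())
+
+job = command(
+    code="./src",                                    # hypothetical training code folder
+    command="python train.py",
+    environment="acpt-pytorch-2.0-cuda11.7@latest",  # name from the table above
+    compute="gpu-cluster",                           # hypothetical GPU compute target
+)
+ml_client.jobs.create_or_update(job)
+```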
+
+## Support
+
+Version updates for supported environments, including the base images they reference, are released every two weeks to address vulnerabilities no older than 30 days. Based on usage, some environments may be deprecated (hidden from the product but usable) to support more common machine learning scenarios.
machine-learning Resource Curated Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/resource-curated-environments.md
Last updated 10/21/2021
# Azure Machine Learning Curated Environments
-This article lists the curated environments with latest framework versions in Azure Machine Learning. Curated environments are provided by Azure Machine Learning and are available in your workspace by default. They are backed by cached Docker images that use the latest version of the Azure Machine Learning SDK, reducing the run preparation cost and allowing for faster deployment time. Use these environments to quickly get started with various machine learning frameworks.
+This article lists the curated environments with latest framework versions in Azure Machine Learning. Curated environments are provided by Azure Machine Learning and are available in your workspace by default. They're backed by cached Docker images that use the latest version of the Azure Machine Learning SDK, reducing the run preparation cost and allowing for faster deployment time. Use these environments to quickly get started with various machine learning frameworks.
> [!NOTE]
-> Use the [Python SDK](how-to-use-environments.md), [CLI](/cli/azure/ml/environment#az-ml-environment-list), or Azure Machine Learning [studio](how-to-manage-environments-in-studio.md) to get the full list of environments and their dependencies. For more information, see the [environments article](how-to-use-environments.md#use-a-curated-environment).
+> Use the [Python SDK](how-to-use-environments.md), [CLI](/cli/azure/ml/environment#az-ml-environment-list), or Azure Machine Learning [studio](how-to-manage-environments-in-studio.md) to get the full list of environments and their dependencies. For more information, see the [environments article](how-to-use-environments.md#use-a-curated-environment).
+ ## Why should I use curated environments? * Reduces training and deployment latency. * Improves training and deployment success rate. * Avoid unnecessary image builds.
-* Only have required dependencies and access right in the image/container. 
+* Only have required dependencies and access rights in the image/container.
->[!IMPORTANT]
-> To view more information about curated environment packages and versions, visit the Environments tab in the Azure Machine Learning [studio](./how-to-manage-environments-in-studio.md).
+>[!IMPORTANT]
+> To view more information about curated environment packages and versions, visit the Environments tab in the Azure Machine Learning [studio](./how-to-manage-environments-in-studio.md).
## Curated environments -
-> [!IMPORTANT]
-> Items marked (preview) in this article are currently in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-### Azure Container for PyTorch (ACPT) (preview)
+### Azure Container for PyTorch (ACPT)
**Name**: AzureML-ACPT-pytorch-1.12-py39-cuda11.6-gpu
-**Description**: The Azure Curated Environment for PyTorch is our latest PyTorch curated environment. It is optimized for large, distributed deep learning workloads and comes pre-packaged with the best of Microsoft technologies for accelerated training, e.g., OnnxRuntime Training (ORT), DeepSpeed, MSCCL, etc.
-
-The following configurations are supported:
+**Description**: The Azure Curated Environment for PyTorch is our latest PyTorch curated environment. It's optimized for large, distributed deep learning workloads and comes prepackaged with the best of Microsoft technologies for accelerated training, for example, OnnxRuntime Training (ORT), DeepSpeed, MSCCL, etc.
-| Environment Name | OS | GPU Version| Python Version | PyTorch Version | ORT-training Version | DeepSpeed Version | torch-ort Version |
-| | | | | | | | |
-| AzureML-ACPT-pytorch-1.12-py39-cuda11.6-gpu | Ubuntu 20.04 | cu116 | 3.9 | 1.12.1 | 1.13.1 | 0.7.3 | 1.13.1 |
-| AzureML-ACPT-pytorch-1.12-py38-cuda11.6-gpu | Ubuntu 20.04 | cu116 | 3.8 | 1.12.1 | 1.12.0 | 0.7.3 | 1.12.0 |
-| AzureML-ACPT-pytorch-1.11-py38-cuda11.5-gpu | Ubuntu 20.04 | cu115 | 3.8 | 1.11.0 | 1.11.1 | 0.7.3 | 1.11.0 |
-| AzureML-ACPT-pytorch-1.11-py38-cuda11.3-gpu | Ubuntu 20.04 | cu113 | 3.8 | 1.11.0 | 1.11.1 | 0.7.3 | 1.11.0 |
+To learn more, see [Azure Container for PyTorch (ACPT)](resource-azure-container-for-pytorch.md).
> [!NOTE]
-> Currently, due to underlying cuda and cluster incompatibilities, on [NC series](../virtual-machines/nc-series.md) only AzureML-ACPT-pytorch-1.11-py38-cuda11.3-gpu with cuda 11.3 can be used.
+> Currently, due to underlying CUDA and cluster incompatibilities, on [NC series](../virtual-machines/nc-series.md) only acpt-pytorch-1.11-cuda11.3 with CUDA 11.3 and torch 1.11 can be used.
### PyTorch
mariadb Quickstart Create Mariadb Server Database Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/quickstart-create-mariadb-server-database-bicep.md
Title: 'Quickstart: Create an Azure DB for MariaDB - Bicep'
+ Title: 'Quickstart: Create an Azure Database for MariaDB - Bicep'
description: In this Quickstart article, learn how to create an Azure Database for MariaDB server using Bicep.
migrate Migrate Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-services-overview.md
Azure Migrate integrates with several ISV offerings.
[Device42](https://docs.device42.com/) | Assess servers. [Lakeside](https://go.microsoft.com/fwlink/?linkid=2104908) | Assess VDI. [RackWare](https://go.microsoft.com/fwlink/?linkid=2102735) | Migrate servers.
-[Turbonomic](https://learn.turbonomic.com/azure-migrate-portal-free-trial) | Assess servers.
+[Turbonomic](https://go.microsoft.com/fwlink/?linkid=2094295) | Assess servers.
[UnifyCloud](https://www.cloudatlasinc.com/cloudrecon/) | Assess servers and databases. [Zerto](https://go.microsoft.com/fwlink/?linkid=2152102) | Migrate servers.
migrate Migrate Support Matrix Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-physical.md
ms. Previously updated : 03/03/2023 Last updated : 03/21/2023
To assess physical servers, you create a project, and add the Azure Migrate: Dis
**Physical server deployment:** The physical server can be standalone or deployed in a cluster. **Type of servers:** Bare metal servers, virtualized servers running on-premises or other clouds like AWS, GCP, Xen etc.
->[!Note]
+> [!Note]
> Currently, Azure Migrate does not support the discovery of para-virtualized servers. **Operating system:** All Windows and Linux operating systems can be assessed for migration.
For Linux servers, based on the features you want to perform, you can create a u
### Option 1 - You need a sudo user account on the servers that you want to discover. Use this account to pull configuration and performance metadata, perform software inventory (discovery of installed applications) and enable agentless dependency analysis using SSH connectivity.-- You need to enable sudo access for the commands listed [here](discovered-metadata.md#linux-server-metadata). In addition to these commands, the user account also needs to have permissions to execute ls and netstat commands to perform agentless dependency analysis.
+- You need to enable sudo access on /usr/bin/bash to execute the commands listed [here](discovered-metadata.md#linux-server-metadata). In addition to these commands, the user account also needs to have permissions to execute ls and netstat commands to perform agentless dependency analysis.
- Make sure that you have enabled **NOPASSWD** for the account to run the required commands without prompting for a password every time sudo command is invoked. - Azure Migrate supports the following Linux OS distributions for discovery using an account with sudo access:
mysql Quickstart Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-bicep.md
Last updated 02/16/2023
## Create server with public access
-Create a **main.bicep** file and a **crateFirewallRules.bicep** file with the following content to create a server using public access connectivity method and also create a database on the server. Update the **firewallRules** default value if needed.
+Create a **main.bicep** file and a **CreateFirewallRules.bicep** file with the following content to create a server using public access connectivity method and also create a database on the server. Update the **firewallRules** default value if needed.
**main.bicep**
resource database 'Microsoft.DBforMySQL/flexibleServers/databases@2021-12-01-pre
} ```
-**crateFirewallRules.bicep**
+**CreateFirewallRules.bicep**
```bicep param serverName string
mysql Whats Happening To Mysql Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/whats-happening-to-mysql-single-server.md
For more information on migrating from Single Server to Flexible Server using ot
To upgrade to Azure Database for MySQL Flexible Server, it's important to know when you're eligible to migrate your single server. Find the migration eligibility criteria in the below table.
-| Single Server configuration not supported for migration | How and when to migrate? |
+| Single Server configuration not supported in Flexible Server | How and when to migrate? |
||--|
-| Single servers with Private Link enabled | Private Link is on the road map for next year. You can also choose to migrate now and perform wNet injection via a point-in-time restore operation to move to private access network connectivity method. |
-| Single servers with Cross-Region Read Replicas enabled | Cross-Region Read Replicas for flexible server (for paired region) is in private preview, and you can start migrating your single server. Cross-Region Read Replicas for flexible server (for any cross-region) is on the road map for later this year, post, which you can migrate your single server. |
+| Single servers with Private Link enabled | Private Link is on the road map for this year. You can also choose to migrate now and perform VNet injection via a point-in-time restore operation to move to the private access network connectivity method. |
+| Single servers with Cross-Region Read Replicas enabled | Cross-Region Read Replicas for flexible server (for paired region) is in private preview, and you can start migrating your single server. Cross-Region Read Replicas for flexible server (for any cross-region) is on the road map for later this year, after which you can migrate your single server. |
+| Single servers with Query Store enabled | You are eligible to migrate and you can configure slow query logs on the target flexible server by following steps [here](https://learn.microsoft.com/azure/mysql/flexible-server/tutorial-query-performance-insights#configure-slow-query-logs-by-using-the-azure-portal). You can then view query insights by using [workbooks template](https://learn.microsoft.com/azure/mysql/flexible-server/tutorial-query-performance-insights#view-query-insights-by-using-workbooks). |
| Single server deployed in regions where flexible server isn't supported (Learn more about regions [here](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?regions=all&products=mysql)). | Azure Database Migration Service (classic) supports cross-region migration. Deploy your target flexible server in a suitable region and migrate using DMS (classic). | ## Frequently Asked Questions (FAQs)
private-5g-core Gather Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/gather-diagnostics.md
You must already have an AP5GC site deployed to collect diagnostics.
## Collect values for diagnostics package gathering
-1. [Create a storage account](../storage/common/storage-account-create.md) for diagnsotics with the following additional configuration:
+1. [Create a storage account](../storage/common/storage-account-create.md) for diagnostics with the following additional configuration:
1. In the **Advanced** tab, select **Enable storage account key access**. This will allow your support representative to download traces stored in this account using the URLs you share with them. 1. In the **Data protection** tab, under **Access control**, select **Enable version-level immutability support**. This will allow you to specify a time-based retention policy for the account in the next step. 1. If you would like the content of your storage account to be automatically deleted after a period of time, [configure a default time-based retention policy](../storage/blobs/immutable-policy-configure-version-scope.md#configure-a-default-time-based-retention-policy) for your storage account.
purview Concept Policies Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-policies-devops.md
Previously updated : 03/12/2023 Last updated : 03/20/2023 # What can I accomplish with Microsoft Purview DevOps policies?
Bob and Alice are involved with the DevOps process at their company. Given their
||| ## Mapping of popular DMVs and DMFs
-SQL dynamic metadata includes a list of more than 700 DMVs/DMFs. We list here as an illustration some of the most popular ones, mapped to their role definition in Microsoft Purview DevOps policies and linked to the URL, along with their description.
+SQL dynamic metadata includes a list of more than 700 DMVs/DMFs. As an illustration, we list some of the most popular ones here, mapped to their role definition in Microsoft Purview DevOps policies, along with a link to the document that describes each. A brief usage sketch follows the table.
-| **Accessible by DevOps role** | **Popular DMV / DMF** | **Description**|
+| **DevOps role** | **Category** | **Example DMV / DMF** |
|-|-|-| ||||
-| *SQL Performance Monitor* | [sys.dm_exec_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-requests-transact-sql)|Monitors the current activity and performance of the server|
-||[sys.dm_os_wait_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-os-wait-stats-transact-sql)|Identifies performance bottlenecks to enable system tuning|
-|| [sys.dm_exec_query_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-query-stats-transact-sql)|Identifies queries that are consuming a lot of resources or taking a long time to execute|
-|| [sys.dm_exec_sessions](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-sessions-transact-sql)|Shows information about all active user connections and internal tasks|
-|| [sys.dm_os_waiting_tasks](/sql/relational-databases/system-dynamic-management-views/sys-dm-os-waiting-tasks-transact-sql)|Helps identify and troubleshoot blocking issues within SQL Server|
-|| [sys.dm_exec_procedure_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-procedure-stats-transact-sql)|Returns how many times a procedure was executed, the total duration, reads, writes and more|
+| **SQL Performance Monitor** | Query system parameters to understand your system | [sys.configurations](/sql/relational-databases/system-catalog-views/sys-configurations-transact-sql) |
+| | | [sys.dm_os_sys_info](/sql/relational-databases/system-dynamic-management-views/sys-dm-os-sys-info-transact-sql) |
+| | Identify performance bottlenecks | [sys.dm_os_wait_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-os-wait-stats-transact-sql) |
+| | Analyze currently running queries | [sys.dm_exec_query_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-query-stats-transact-sql) |
+| | Analyze blocking issues | [sys.dm_tran_locks](/sql/relational-databases/system-dynamic-management-views/sys-dm-tran-locks-transact-sql) |
+| | | [sys.dm_exec_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-requests-transact-sql) |
+| | | [sys.dm_os_waiting_tasks](/sql/relational-databases/system-dynamic-management-views/sys-dm-os-waiting-tasks-transact-sql) |
+| | Analyze memory usage | [sys.dm_os_memory_clerks](/sql/relational-databases/system-dynamic-management-views/sys-dm-os-memory-clerks-transact-sql) |
+| | Analyze file-usage and performance| [sys.master_files](/sql/relational-databases/system-catalog-views/sys-master-files-transact-sql) |
+| | | [sys.dm_io_virtual_file_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-io-virtual-file-stats-transact-sql) |
+| | Analyze index-usage and fragmentation | [sys.indexes](/sql/relational-databases/system-catalog-views/sys-indexes-transact-sql) |
+| | | [sys.dm_db_index_usage_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-index-usage-stats-transact-sql) |
+| | | [sys.dm_db_index_physical_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-index-physical-stats-transact-sql) |
+| | Active user connections and internal tasks | [sys.dm_exec_sessions](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-sessions-transact-sql) |
+| | Procedure execution stats | [sys.dm_exec_procedure_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-procedure-stats-transact-sql) |
+| | Use the Query Store | [sys.query_store_plan](/sql/relational-databases/system-catalog-views/sys-query-store-plan-transact-sql) |
+| | | [sys.query_store_query](/sql/relational-databases/system-catalog-views/sys-query-store-query-transact-sql) |
+| | | [sys.query_store_query_text](/sql/relational-databases/system-catalog-views/sys-query-store-query-text-transact-sql) |
||||
-| *SQL Security Auditor* |[sys.dm_server_audit_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-server-audit-status-transact-sql)|Returns audit details such as the location of the target, size and status of the audit itself|
+| **SQL Security Auditor** | Returns audit details | [sys.dm_server_audit_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-server-audit-status-transact-sql) |
||||
-| Both *SQL Performance Monitor* and *SQL Security Auditor*|[sys.dm_audit_actions](/sql/relational-databases/system-dynamic-management-views/sys-dm-audit-actions-transact-sql)|Returns a row for every audit action that can be reported in the audit log and every audit action group that can be configured as part of SQL Server Audit|
-||[sys.dm_audit_class_type_map](/sql/relational-databases/system-dynamic-management-views/sys-dm-audit-class-type-map-transact-sql)|When events are fired, they record the object type, not the securable class. This DMV maps the class_type field in the audit log to the class_desc field in sys.dm_audit_actions|
+| Both **SQL Performance Monitor** and **SQL Security Auditor**| | [sys.dm_audit_actions](/sql/relational-databases/system-dynamic-management-views/sys-dm-audit-actions-transact-sql) |
+|||[sys.dm_audit_class_type_map](/sql/relational-databases/system-dynamic-management-views/sys-dm-audit-class-type-map-transact-sql) |
|||| For more on these DMVs/DMFs you can check these docs
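As a brief, hypothetical usage sketch (server, database, and driver names are assumptions), a user mapped to the *SQL Performance Monitor* role could query one of these DMVs from Python:

```python
import pyodbc

# Connect with Azure AD interactive authentication (requires a recent ODBC driver)
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;"   # hypothetical server
    "DATABASE=mydb;"                          # hypothetical database
    "Authentication=ActiveDirectoryInteractive;"
    "Encrypt=yes;"
)

cursor = conn.cursor()
# Monitor current activity via sys.dm_exec_requests
cursor.execute("SELECT session_id, status, command, wait_type FROM sys.dm_exec_requests;")
for row in cursor.fetchall():
    print(row.session_id, row.status, row.command, row.wait_type)
```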
purview How To Receive Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-receive-share.md
+ Last updated 02/16/2023 # Receive Azure Storage in-place share with Microsoft Purview Data Sharing (preview)
Microsoft Purview Data Sharing supports in-place data sharing from Azure Data La
``` The *RegistrationState* should be **Registered**. It could take 15 minutes to 1 hour for registration to complete. For more information, see the [register preview feature article](../azure-resource-manager/management/preview-features.md?tabs=azure-portal#register-preview-feature). + * A target storage account **created after** the registration step is completed. **The target storage account must be in the same Azure region as the source storage account.** If you don't know the Azure region of the source storage account, you can find out during the share attaching step later in the process. Target storage account can be in a different Azure region from your Microsoft Purview account. > [!IMPORTANT] > The target storage account must be in the same Azure region as the source storage account.-
- > [!NOTE]
- > The following are supported storage account configurations:
- >
- > - Azure regions: Canada Central, Canada East, UK South, UK West, Australia East, Japan East, Korea South, and South Africa North
- > - Performance: Standard
- > - Redundancy options: LRS, GRS, RA-GRS
* You need the **Contributor** or **Owner** or **Storage Blob Data Owner** or **Storage Blob Data Contributor** role on the target storage account. You can find more details on the [ADLS Gen2](register-scan-adls-gen2.md#data-sharing) or [Blob storage](register-scan-azure-blob-storage-source.md#data-sharing) data source pages. * If the target storage account is in a different Azure subscription than the one for Microsoft Purview account, the Microsoft.Purview resource provider needs to be registered in the Azure subscription where the Storage account is located. It's automatically registered at the time of share consumer attaching the share and if the user has permission to do the `/register/action` operation and therefore, Contributor or Owner roles to the subscription where the Storage account is located.
purview How To Share Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-share-data.md
Microsoft Purview Data Sharing supports in-place data sharing from Azure Data La
``` The *RegistrationState* should be **Registered**. It could take 15 minutes to 1 hour for registration to complete. For more information, see [Register preview feature](../azure-resource-manager/management/preview-features.md?tabs=azure-portal#register-preview-feature).
-* A source storage account **created after the registration step is completed**. Source storage account can be in a different Azure region from your Microsoft Purview account, but needs to follow these configurations:
-
- > [!NOTE]
- > The following are supported storage account configurations:
- >
- > - Azure regions: Canada Central, Canada East, UK South, UK West, Australia East, Japan East, Korea South, and South Africa North
- > - Performance: Standard
- > - Redundancy options: LRS, GRS, RA-GRS
+
+* A source storage account **created after the registration step is completed**. Source storage account can be in a different Azure region from your Microsoft Purview account, but needs to follow the available configurations.
* You need the **Owner** or **Storage Blob Data Owner** role on the source storage account to be able to share data. You can find more details on the [ADLS Gen2](register-scan-adls-gen2.md#data-sharing) or [Blob storage](register-scan-azure-blob-storage-source.md#data-sharing) data source page.
remote-rendering Sample Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/samples/sample-model.md
Model statistics:
|--|:--| | [Required server size](../reference/vm-sizes.md) | standard | | Number of triangles | 18.7 Million |
+| Number of vertices | 35.0 Million |
| Number of movable parts | 2073 | | Number of materials | 94 |
+| Number of textures | 393, using more than 1.6 GB of GPU memory |
## Third-party data ### Triangular meshes
-The Khronos Group maintains a set of glTF sample models for testing. ARR supports the glTF format both in text (*.gltf*) and in binary (*.glb*) form. We suggest using the PBR models for best visual results:
+The Khronos Group maintains a set of glTF sample models for testing. ARR supports the glTF format both in text (`.gltf`) and in binary (`.glb`) form. We suggest using the PBR models for best visual results:
* [glTF Sample Models](https://github.com/KhronosGroup/glTF-Sample-Models)
resource-mover Support Matrix Extension Resource Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/support-matrix-extension-resource-types.md
Last updated 03/02/2023 -+
resource-mover Support Matrix Move Region Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/support-matrix-move-region-azure-vm.md
Previously updated : 03/02/2023 Last updated : 03/21/2023 -+
resource-mover Support Matrix Move Region Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/support-matrix-move-region-sql.md
Previously updated : 03/02/2023 Last updated : 03/21/2023 ---+ # Support for moving Azure SQL resources between Azure regions
resource-mover Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/whats-new.md
+ Last updated 03/09/2023
role-based-access-control Elevate Access Global Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/elevate-access-global-admin.md
Previously updated : 10/19/2022 Last updated : 03/21/2023
When you call `elevateAccess`, you create a role assignment for yourself, so to
DELETE https://management.azure.com/providers/Microsoft.Authorization/roleAssignments/11111111-1111-1111-1111-111111111111?api-version=2022-04-01 ```
-## View elevate access logs
+## View elevate access log entries in the Directory Activity logs
-When access is elevated, an entry is added to the logs. As a Global Administrator in Azure AD, you might want to check when access was elevated and who did it. Elevate access log entries do not appear in the standard activity logs, but instead appear in the directory activity logs. This section describes different ways that you can view the elevate access logs.
+When access is elevated, an entry is added to the logs. As a Global Administrator in Azure AD, you might want to check when access was elevated and who did it. Elevate access log entries do not appear in the standard activity logs, but instead appear in the Directory Activity logs. This section describes different ways that you can view the elevate access log entries.
-### View elevate access logs using the Azure portal
-
-1. Follow the steps earlier in this article to elevate your access.
+### View elevate access log entries using the Azure portal
1. Sign in to the [Azure portal](https://portal.azure.com) as a Global Administrator.
When access is elevated, an entry is added to the logs. As a Global Administrato
![Screenshot showing directory activity logs in Monitor.](./media/elevate-access-global-admin/monitor-directory-activity.png)
-1. Follow the steps earlier in this article to remove elevated access.
-
-### View elevate access logs using Azure CLI
-
-1. Follow the steps earlier in this article to elevate your access.
+### View elevate access log entries using Azure CLI
1. Use the [az login](/cli/azure/reference-index#az-login) command to sign in as Global Administrator.
When access is elevated, an entry is added to the logs. As a Global Administrato
}, ```
-1. Follow the steps earlier in this article to remove elevated access.
+### Delegate access to a group to view elevate access log entries using Azure CLI
-### Delegate access to a group to view elevate access logs using Azure CLI
-
-If you want to be able to periodically get the elevate access logs, you can delegate access to a group and then use Azure CLI.
+If you want to be able to periodically get the elevate access log entries, you can delegate access to a group and then use Azure CLI.
1. Open **Azure Active Directory** > **Groups**. 1. Create a new security group and note the group object ID.
-1. Follow the steps earlier in this article to elevate your access.
- 1. Use the [az login](/cli/azure/reference-index#az-login) command to sign in as Global Administrator. 1. Use the [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) command to assign the [Reader](built-in-roles.md#reader) role to the group who can only read logs at the directory level, which are found at `Microsoft/Insights`.
If you want to be able to periodically get the elevate access logs, you can dele
1. Add a user who will read logs to the previously created group.
-1. Follow the steps earlier in this article to remove elevated access.
-
-A user in the group can now periodically run the [az rest](/cli/azure/reference-index#az-rest) command to view elevate access logs.
+A user in the group can now periodically run the [az rest](/cli/azure/reference-index#az-rest) command to view elevate access log entries.
```azurecli az rest --url "https://management.azure.com/providers/Microsoft.Insights/eventtypes/management/values?api-version=2015-04-01&$filter=eventTimestamp ge '2021-09-10T20:00:00Z'" > output.txt
search Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/whats-new.md
Previously updated : 11/15/2022 Last updated : 03/21/2023
Learn about the latest updates to Azure Cognitive Search functionality, docs, an
> [!NOTE] > Looking for preview features? Previews are announced here, but we also maintain a [preview features list](search-api-preview.md) so you can find them in one place.
+## March 2023
+
+| Item&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Type | Description |
+|--||--|
+| [**ChatGPT + Enterprise data with Azure OpenAI and Cognitive Search (GitHub)**](https://github.com/Azure-Samples/azure-search-openai-demo/blob/main/README.md) | Sample | Python code and a template for combining Cognitive Search with the large language models in OpenAI. For background, see this Tech Community blog post: [Revolutionize your Enterprise Data with ChatGPT](https://techcommunity.microsoft.com/t5/ai-applied-ai-blog/revolutionize-your-enterprise-data-with-chatgpt-next-gen-apps-w/ba-p/3762087). To summarize the key points: <ul><li>Use Cognitive Search to consolidate and index searchable content.</li> <li>Query the index for initial search results.</li> <li>Assemble prompts from those results and send to the gpt-35-turbo (preview) model in Azure OpenAI.</li> <li>Return a summary and provide citations and transparency in your customer-facing app so that users can evaluate the response.</li> </ul>|
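+
+To make the pattern concrete, the following is an illustrative (not production-ready) Python sketch of the retrieve-then-read flow the sample implements. The index name, document fields, keys, and deployment name are all assumptions:
+
+```python
+import openai
+from azure.core.credentials import AzureKeyCredential
+from azure.search.documents import SearchClient
+
+# Hypothetical resource names and keys
+search_client = SearchClient(
+    endpoint="https://<search-service>.search.windows.net",
+    index_name="enterprise-docs",
+    credential=AzureKeyCredential("<search-query-key>"),
+)
+openai.api_type = "azure"
+openai.api_base = "https://<openai-resource>.openai.azure.com"
+openai.api_version = "2023-03-15-preview"
+openai.api_key = "<openai-key>"
+
+question = "What is our parental leave policy?"
+
+# 1. Query the Cognitive Search index for initial results
+results = search_client.search(search_text=question, top=3)
+sources = "\n".join(f"{doc['title']}: {doc['content']}" for doc in results)  # hypothetical fields
+
+# 2. Assemble a prompt from those results and send it to a gpt-35-turbo deployment
+response = openai.ChatCompletion.create(
+    engine="gpt-35-turbo",  # hypothetical deployment name
+    messages=[
+        {"role": "system", "content": "Answer using only these sources and cite them:\n" + sources},
+        {"role": "user", "content": question},
+    ],
+)
+
+# 3. Return the answer, with citations, to the customer-facing app
+print(response["choices"][0]["message"]["content"])
+```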
+ ## November 2022 | Item&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Type | Description |
sentinel Domain Based Essential Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/domain-based-essential-solutions.md
+
+ Title: ASIM-based domain solutions - Essentials for Microsoft Sentinel
+description: Learn about the Microsoft essential solutions for Microsoft Sentinel that span across different ASIM schemas like networks, DNS, and web sessions.
++ Last updated : 03/08/2023+
+#Customer intent: As a security engineer, I want to learn how I can minimize the amount of solution content I have to deploy and manage by using Microsoft essential solutions for Microsoft Sentinel.
++
+# Advanced Security Information Model (ASIM) based domain solutions for Microsoft Sentinel (preview)
+
+Microsoft essential solutions are domain solutions published by Microsoft for Microsoft Sentinel. These solutions provide out-of-the-box content that can operate across multiple products for specific categories like networking. Some of these essential solutions use the Advanced Security Information Model (ASIM) normalization technique to normalize the data at query time or ingestion time.
+
+> [!IMPORTANT]
+> Microsoft essential solutions and the Network Session Essentials solution are currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+## Why use ASIM-based Microsoft essential solutions?
+
+When multiple solutions in a domain category share similar detection patterns, it makes sense to have the data captured under a normalized schema like ASIM. Essential solutions make use of this ASIM schema to detect threats at scale.
+
+In the content hub, there are multiple product solutions for different domain categories like "Security - Network". For example, Azure Firewall, Palo Alto Firewall, and Corelight have product solutions for the "Security - Network" domain category.
+
+- These solutions have differing data ingest components by design. But there's a certain pattern to the analytics, hunting, workbooks, and other content within the same domain category.
+- Most of the major network products have a common basic set of firewall alerts that includes malicious threats coming from unusual IP addresses. The analytic rule template is, in general, duplicated for each of the "Security - Network" category of product solutions. If you're running multiple network products, you need to check and configure multiple analytic rules individually, which is inefficient. You'd also get alerts for each rule configured and might end up with alert fatigue.
+- If you have duplicative hunting queries, you might have less performant hunting experiences with the run-all mode of hunting. These duplicative hunting queries also introduce inefficiencies for threat hunters to select and run similar queries.
+
+You might consider Microsoft essential solutions for the following reasons:
+
+- A normalized schema makes it easier for you to query incident details. You don't have to remember different vendor syntax for similar log attributes.
+- If you don't have to manage content for multiple solutions, use case deployment and incident handling is much easier.
+- A consolidated workbook view gives you better environment visibility and possible query time parsing with high performing ASIM parsers.
+
+## ASIM schemas supported
+
+The essential solutions currently span the following ASIM schemas that Microsoft Sentinel supports:
+
+- Audit event
+- Authentication event
+- DNS activity
+- File activity
+- Network session
+- Process event
+- Web session
+
+For more information, see [Advanced Security Information Model (ASIM) schemas](/azure/sentinel/normalization-about-schemas).
+
+## Ingestion time normalization
+
+The ingestion time normalization results can be ingested into the following normalized tables:
+
+- [ASimDnsActivityLogs](/azure/azure-monitor/reference/tables/asimdnsactivitylogs) for the DNS schema.
+- [ASimNetworkSessionLogs](/azure/azure-monitor/reference/tables/asimnetworksessionlogs) for the Network Session schema.
+
+For more information, see [Ingest time normalization](/azure/sentinel/normalization-ingest-time).
+
+## Content available with ASIM-based domain essential solutions
+
+The following table describes the type of content available with each essential solution. For some specific use cases, you might want to also use the content available with the Microsoft Sentinel product solution.
+
+|Content type |Description |
+|||
+|Analytical Rule | The analytical rules available in the ASIM-based essential solutions are generic and a good fit for any of the dependent Microsoft Sentinel product solutions for that domain. The Microsoft Sentinel product solution might have a source specific use case covered as part of the analytical rule. Enable Microsoft Sentinel product solution rules as needed for your environment. |
+|Hunting query | The hunting queries available in the ASIM-based essential solutions are generic and a good fit to hunt for threats from any of the dependent Microsoft Sentinel product solutions for that domain. The Microsoft Sentinel product solution might have a source specific hunting query available out-of-the-box. Use the hunting queries from the Microsoft Sentinel product solution as needed for your environment. |
+|Playbook | The ASIM-based essential solutions are expected to handle data with very high events per seconds. When you have content that's using that volume of data, you might experience some performance impact that can cause slow loading of workbooks or query results. To solve this problem, the summarization playbook summarizes the source logs and stores the information into a predefined table. Enable the summarization playbook to allow the essential solutions to query this table.<br><br> Because playbooks in Microsoft Sentinel are based on workflows built in Azure Logic Apps which create separate resources, additional charges might apply. For more information, see the [Azure Logic Apps pricing page](https://azure.microsoft.com/pricing/details/logic-apps/). Additional charges might also apply for storage of the summarized data. |
+|Watchlist | The ASIM-based essential solutions use a watchlist that includes multiple sets of conditions for analytic rule detection and hunting queries. The watchlist allows you to do the following tasks:<br><br>- Do focused monitoring with data filtration. <br>- Switch between hunting and detection for each list item. <br>- Keep **Threshold type** set to **Static** to leverage threshold-based alerting while anomaly-based alerts would learn from the last few days of data (maximum 14 days). <br>- Modify **Alert Name**, **Description**, **Tactic** and **Severity** by using this watchlist for individual list items.<br>- Disable detection by setting **Severity** as **Disabled**. |
+|Workbook | The workbook available with the ASIM-based essential solutions gives a consolidated view of different events and activity happening in the dependent domain. Because this workbook fetches results from a very high volume of data, there might be some performance lag. If you experience performance issues, use the summarization playbook.|
+
+These essential solutions, like other Microsoft Sentinel domain solutions, don't have a connector of their own. They depend on the source-specific connectors in Microsoft Sentinel product solutions to pull in the logs. To understand which products a domain solution supports, refer to the prerequisite list of product solutions that each ASIM domain essential solution lists. Install one or more of the product solutions. Configure the data connectors to meet the underlying product dependency needs and to enable better usage of this domain solution content.
++
+## Next steps
+
+- [Find ASIM-based domain essential solutions like the Network Session Essentials](sentinel-solutions-catalog.md)
+- [Using the Advanced Security Information Model (ASIM)](/azure/sentinel/normalization-about-parsers)
sentinel Extend Sentinel Across Workspaces Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/extend-sentinel-across-workspaces-tenants.md
You can then write a query across both workspaces by beginning with `unionSecuri
#### Cross-workspace analytics rules<a name="scheduled-alerts"></a> <!-- Bookmark added for backward compatibility with old heading -->
-You can now include cross-workspace queries in scheduled analytics rules. You can use cross-workspace analytics rules in a central SOC, and across tenants (using Azure Lighthouse), suitable for MSSPs. Note these limitations:
+You can now include cross-workspace queries in scheduled analytics rules. You can use cross-workspace analytics rules in a central SOC, and across tenants (using Azure Lighthouse), suitable for MSSPs. This use is subject to the following limitations:
-- You can include **up to 20 workspaces** in a single query. For good performance though, we recommend keeping it under 5.-- You must deploy Microsoft Sentinel **on every workspace** referenced in the query.
+- You can include **up to 20 workspaces** in a single query. However, for good performance, we recommend including no more than 5.
+ - Alerts generated by a cross-workspace analytics rule, and the incidents created from them, exist **only in the workspace where the rule was defined**. The alerts won't be displayed in any of the other workspaces referenced in the query.-
+- A cross-workspace analytics rule, like any analytics rule, will continue running even if the user who created the rule loses access to workspaces referenced in the rule's query.
+
Alerts and incidents created by cross-workspace analytics rules contain all the related entities, including those from all the referenced workspaces and the "home" workspace (where the rule was defined). This way, analysts get a full picture of alerts and incidents. > [!NOTE]
In this article, you learned how Microsoft Sentinel's capabilities can be extend
- Learn how to [work with multiple tenants](./multiple-tenants-service-providers.md) in Microsoft Sentinel, using Azure Lighthouse. - Learn how to [view and manage incidents in multiple workspaces](./multiple-workspace-view.md) seamlessly.++
sentinel Sentinel Solutions Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-solutions-catalog.md
Title: Microsoft Sentinel content hub catalog | Microsoft Docs
description: This article lists the solutions currently available in the content hub for Microsoft Sentinel and where to find the full list of solutions. Previously updated : 09/29/2022 Last updated : 03/08/2023 - # Microsoft Sentinel content hub catalog
When you deploy a solution, the security content included with the solution, suc
||||| |**[Apache Log4j Vulnerability Detection](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-apachelog4jvulnerability?tab=Overview)** | Analytics rules, hunting queries, workbooks, playbooks, watchlist | Application, Security - Threat Protection, Security - Vulnerability Management | Microsoft| |**[Cybersecurity Maturity Model Certification (CMMC)](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-cmmcv2?tab=Overview)** | [Analytics rules, workbook, playbook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/what-s-new-cybersecurity-maturity-model-certification-cmmc/ba-p/2111184) | Compliance | Microsoft|
+| **[Deception Honey Tokens](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinelhoneytokens.azuresentinelhoneytokens?tab=Overview)** | [Workbooks, analytics rules, playbooks](monitor-key-vault-honeytokens.md) | Security - Threat Protection |Microsoft Sentinel community |
|**[Dev-0537 Detection and Hunting](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-dev0537detectionandhunting?tab=Overview)**||Security - Threat Protection|Microsoft| | **[Microsoft Defender for IoT](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-unifiedmicrosoftsocforot?tab=Overview)** | [Analytics rules, playbooks, workbook](iot-advanced-threat-monitoring.md) | Internet of Things (IoT), Security - Threat Protection | Microsoft | |**[Maturity Model for Event Log Management M2131](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-maturitymodelforeventlogma?tab=Overview)** | [Analytics rules, hunting queries, playbooks, workbook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/modernize-log-management-with-the-maturity-model-for-event-log/ba-p/3072842) | Compliance | Microsoft| |**[Microsoft 365 Insider Risk Management](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-insiderriskmanagement?tab=Overview)** (IRM) |[Data connector](data-connectors/microsoft-365-insider-risk-management.md), [workbook, analytics rules, hunting queries, playbook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/announcing-the-microsoft-sentinel-microsoft-insider-risk/ba-p/2955786) |Security - Insider threat | Microsoft|
-| **[Deception Honey Tokens](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinelhoneytokens.azuresentinelhoneytokens?tab=Overview)** | [Workbooks, analytics rules, playbooks](monitor-key-vault-honeytokens.md) | Security - Threat Protection |Microsoft |
+|**[Network Session Essentials](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-networksession?tab=Overview)**|[Analytics rules, hunting queries, playbooks, workbook](domain-based-essential-solutions.md)|Security - Network | Microsoft|
|**[NIST SP 800-53](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-nistsp80053?tab=Overview)**|[Workbooks, analytic rules, playbooks](https://techcommunity.microsoft.com/t5/public-sector-blog/microsoft-sentinel-nist-sp-800-53-solution/ba-p/3401307)|Security - Threat Protection|Microsoft| |**[Security Threat Essentials](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-securitythreatessentialsol?tab=Overview)**|Analytic rules, Hunting queries|Security - Others|Microsoft| |**[Zero Trust (TIC 3.0)](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-zerotrust?tab=Overview)** |[Analytics rules, playbook, workbooks](/security/zero-trust/integrate/sentinel-solution) |Compliance, Identity, Security - Others |Microsoft |
spring-apps How To Custom Persistent Storage With Standard Consumption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-custom-persistent-storage-with-standard-consumption.md
+
+ Title: How to enable your own persistent storage in Azure Spring Apps with the Standard consumption plan
+description: Learn how to enable your own persistent storage in Azure Spring Apps.
++++ Last updated : 03/21/2023+++
+# How to enable your own persistent storage in Azure Spring Apps with the Standard consumption plan
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ✔️ Standard consumption (Preview) ❌ Basic/Standard ❌ Enterprise
+
+This article describes how to enable your own persistent storage in Azure Spring Apps.
+
+Your own storage and the built-in persistent storage in Azure Spring Apps differ in the following ways:
+
+- With built-in storage, artifacts generated by your application are uploaded to Azure Storage accounts. With your own storage, artifacts generated by your application are uploaded to a storage account that you control.
+
+- With built-in storage, Microsoft Azure controls the encryption-at-rest and lifetime management policies for those artifacts. With your own storage, you control the encryption-at-rest policy, the lifetime management policy, and network access.
+
+You can mount your own persistent storage not only to Azure Spring Apps but also to other service instances in the same environment, such as Azure Container Apps. This capability is possible because your Azure Spring Apps instance is deployed in an Azure Container Apps environment.
+
+## Prerequisites
+
+- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- [Azure CLI](/cli/azure/install-azure-cli) version 2.28.0 or higher.
+- An Azure Spring Apps Standard consumption plan service instance. For more information, see [Quickstart: Provision an Azure Spring Apps Standard consumption plan service instance](quickstart-provision-standard-consumption-service-instance.md).
+- A Spring app deployed to Azure Spring Apps. For more information, see [Quickstart: Build and deploy apps to Azure Spring Apps](quickstart-deploy-apps.md).
+
+## Set up the environment
+
+Use the following commands to create variables that hold the names of your resources and your region setting.
+
+```bash
+RESOURCE_GROUP="<resource-group-name>"
+LOCATION="eastus"
+AZURE_CONTAINER_APPS_ENVIRONMENT="<Azure-Container-Apps-environment-name>"
+AZURE_SPRING_APPS_INSTANCE="<Azure-Spring-Apps-instance-name>"
+APP_NAME="<Spring-app-name>"
+```
+
+## Set up a storage account
+
+Use the following steps to create a storage account and establish a file share to mount to the Spring app.
+
+1. Create an Azure Storage account by using the following commands. The `STORAGE_ACCOUNT_NAME` variable includes a randomly generated suffix to ensure uniqueness.
+
+ ```azurecli
+ STORAGE_ACCOUNT_NAME="myasastorageaccount$RANDOM"
+
+ az storage account create \
+ --resource-group $RESOURCE_GROUP \
+ --name $STORAGE_ACCOUNT_NAME \
+ --location "$LOCATION" \
+ --kind StorageV2 \
+ --sku Standard_LRS \
+ --query provisioningState \
+ --enable-large-file-share
+ ```
+
+    Because the command queries `provisioningState`, it returns `Succeeded` when the storage account is created successfully.
+
+1. Create the Azure Storage file share by using the following commands:
+
+ ```azurecli
+ FILE_SHARE_NAME="<file-share-name>"
+
+ az storage share-rm create \
+ --resource-group $RESOURCE_GROUP \
+ --storage-account $STORAGE_ACCOUNT_NAME \
+ --name $FILE_SHARE_NAME \
+ --quota 1024 \
+ --enabled-protocols SMB \
+ --output table
+ ```
+
+1. Get the Storage Account key by using the following command:
+
+ ```azurecli
+ STORAGE_ACCOUNT_KEY=$(az storage account keys list \
+ --account-name $STORAGE_ACCOUNT_NAME \
+ --query "[0].value" \
+ --output tsv)
+ ```
+
+ The storage account key is required to create the storage link in your Azure Container Apps environment.
+
+## Link the storage to the Azure Container Apps environment
+
+Create the storage link in the Azure Container Apps environment by using the following commands. The `az containerapp env storage set` command creates a link between the environment and the file share created with the `az storage share-rm` command.
+
+```azurecli
+STORAGE_MOUNT_NAME="<storage-mount-name>"
+
+az containerapp env storage set \
+ --resource-group $RESOURCE_GROUP \
+ --name $AZURE_CONTAINER_APPS_ENVIRONMENT \
+ --storage-name $STORAGE_MOUNT_NAME \
+ --azure-file-account-name $STORAGE_ACCOUNT_NAME \
+ --azure-file-account-key $STORAGE_ACCOUNT_KEY \
+ --azure-file-share-name $FILE_SHARE_NAME \
+ --access-mode ReadWrite \
+ --output table
+```
+
+Now that the storage account and environment are linked, you can use the storage mount in your Azure Spring Apps instance.
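+
+Optionally, you can confirm the link before you continue. The following sketch assumes that the `az containerapp env storage show` command is available in your version of the containerapp extension:
+
+```azurecli
+az containerapp env storage show \
+    --resource-group $RESOURCE_GROUP \
+    --name $AZURE_CONTAINER_APPS_ENVIRONMENT \
+    --storage-name $STORAGE_MOUNT_NAME \
+    --output table
+```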
+
+## Add storage to an app
+
+Add the persistent storage to your existing app by using the following command:
+
+```azurecli
+az spring app append-persistent-storage \
+ --resource-group $RESOURCE_GROUP \
+ --service $AZURE_SPRING_APPS_INSTANCE \
+ --name $APP_NAME \
+ --persistent-storage-type AzureFileVolume \
+ --mount-path /var/log/nginx \
+    --storage-name $STORAGE_MOUNT_NAME
+```
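+
+To confirm that the mount was added, you can inspect the app's configuration. This is a minimal sketch; look for the persistent storage details in the JSON that the command returns:
+
+```azurecli
+az spring app show \
+    --resource-group $RESOURCE_GROUP \
+    --service $AZURE_SPRING_APPS_INSTANCE \
+    --name $APP_NAME \
+    --output json
+```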
+
+## Clean up resources
+
+Be sure to delete the resources you created in this article when you no longer need them. To delete the resources, delete the resource group that contains them. You can delete the resource group by using the Azure portal. Alternatively, to delete the resource group by using the Azure CLI, run the following commands:
+
+```azurecli
+echo "Enter the Resource Group name:" &&
+read resourceGroupName &&
+az group delete --name $resourceGroupName &&
+echo "Press [ENTER] to continue ..."
+```
+
+## Next steps
+
+- [Customer responsibilities for Azure Spring Apps Standard consumption plan in a virtual network](./standard-consumption-customer-responsibilities.md)
spring-apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/overview.md
description: Learn the features and benefits of Azure Spring Apps to deploy and
Previously updated : 03/09/2021 Last updated : 03/21/2023 #Customer intent: As an Azure Cloud user, I want to deploy, run, and monitor Spring applications.
> [!NOTE]
> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ✔️ Standard consumption (Preview) ✔️ Basic/Standard ✔️ Enterprise
Azure Spring Apps makes it easy to deploy Spring Boot applications to Azure without any code changes. The service manages the infrastructure of Spring applications so developers can focus on their code. Azure Spring Apps provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more.
Azure Spring Apps supports both Java [Spring Boot](https://spring.io/projects/sp
As part of the Azure ecosystem, Azure Spring Apps allows easy binding to other Azure services including storage, databases, monitoring, and more.
-![Azure Spring Apps overview](media/spring-cloud-principles/azure-spring-cloud-overview.png)
* Azure Spring Apps is a fully managed service for Spring Boot apps that lets you focus on building and running apps without the hassle of managing infrastructure.
The following documents will help you migrate existing Spring Boot apps to Azure
* [Migrate Spring Boot applications to Azure Spring Apps](/azure/developer/java/migration/migrate-spring-boot-to-azure-spring-apps)
* [Migrate Spring Cloud applications to Azure Spring Apps](/azure/developer/java/migration/migrate-spring-cloud-to-azure-spring-apps?pivots=sc-standard-tier)
-The following quickstarts apply to Basic/Standard tier only. For Enterprise tier quickstarts, see the next section.
+The following quickstarts apply to the Basic/Standard plan only. For Enterprise plan quickstarts, see the Enterprise plan section later in this article.
* [Provision an Azure Spring Apps service instance](quickstart-provision-service-instance.md)
* [Set up the configuration server](quickstart-setup-config-server.md)
* [Build and deploy apps](quickstart-deploy-apps.md)
-## Enterprise Tier overview
+## Standard consumption plan
+
+The Standard consumption plan provides a flexible billing model in which you pay only for the compute time you use rather than provisioning resources up front. Start with as little as 0.25 vCPU and dynamically scale out based on HTTP traffic or events, powered by Kubernetes Event-Driven Autoscaling (KEDA). You can also scale your app instances to zero and stop all charges related to the app when there are no requests to process.
+
+The Standard consumption plan also simplifies the virtual network experience for running polyglot apps. When you deploy frontend apps as containers in Azure Container Apps and Spring apps in the Standard consumption plan within the same Azure Container Apps environment, all your apps share the same virtual network. There's no need to create separate subnets and network security groups for frontend apps, Spring apps, and the Spring service runtime.
++
+## Enterprise plan
Based on our learnings from customer engagements, we built Azure Spring Apps Enterprise tier with commercially supported Spring runtime components to help enterprise customers to ship faster and unlock Spring's full potential, including feature parity and region parity with Standard tier.
Azure Spring Apps, including Enterprise tier, runs on Azure in a fully managed e
| Monitor end-to-end using any tool and platform. | Application Insights, Azure Log Analytics, Splunk, Elastic, New Relic, Dynatrace, or AppDynamics |
| Connect Spring applications and interact with your cloud services. | Spring integration with Azure services for data, messaging, eventing, cache, storage, and directories |
| Securely load app secrets and certificates. | Azure Key Vault |
-| Use familiar development tools. | IntelliJ, Visual Studio Code, Eclipse, Spring Tool Suite, Maven, or Gradle |
+| Use familiar development tools. | IntelliJ, Visual Studio Code, Eclipse, Spring Tool Suite, Maven, or Gradle |
After you create your Enterprise tier service instance and deploy your applications, you can monitor with Application Insights or any other application performance management tools of your choice.
+### Get started with the Standard consumption plan
+
+The following quickstarts and articles will help you get started using the Standard consumption plan:
+
+* [Provision a service instance](quickstart-provision-standard-consumption-service-instance.md)
+* [Provision in an Azure Container Apps environment with a virtual network](quickstart-provision-standard-consumption-app-environment-with-virtual-network.md)
+* [Access apps in a virtual network](quickstart-access-standard-consumption-within-virtual-network.md)
+* [Deploy an event-driven app](quickstart-deploy-event-driven-app-standard-consumption.md)
+* [Set up autoscale](quickstart-apps-autoscale-standard-consumption.md)
+* [Map a custom domain to Azure Spring Apps](quickstart-standard-consumption-custom-domain.md)
+* [Analyze logs and metrics](quickstart-analyze-logs-and-metrics-standard-consumption.md)
+* [Enable your own persistent storage](how-to-custom-persistent-storage-with-standard-consumption.md)
+* [Customer responsibilities for Standard consumption plan in a virtual network](standard-consumption-customer-responsibilities.md)
+
### Get started with Enterprise tier
The following quickstarts will help you get started using the Enterprise tier:
spring-apps Quickstart Access Standard Consumption Within Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-access-standard-consumption-within-virtual-network.md
+
+ Title: Quickstart - Access applications using Azure Spring Apps Standard consumption plan in a virtual network
+description: Learn how to access applications in a virtual network that are using the Azure Spring Apps Standard consumption plan.
+ Last updated: 03/21/2023
+# Quickstart: Access applications using Azure Spring Apps Standard consumption plan in a virtual network
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ✔️ Standard consumption (Preview) ❌ Basic/Standard ❌ Enterprise
+
+This article describes how to access your application in a virtual network using Azure Spring Apps Standard consumption plan.
+
+When you create an Azure Container Apps environment in an existing virtual network, you can access all the apps inside the environment only within that virtual network. In addition, when you create an instance of Azure Spring Apps inside the Azure Container Apps environment, you can access the applications in the Azure Spring Apps instance only from the virtual network. For more information, see [Provide a virtual network to an internal Azure Container Apps environment](../container-apps/vnet-custom-internal.md?tabs=bash&pivots=azure-portal).
+
+## Create a private DNS zone
+
+Create a private DNS zone named after the default domain of the Azure Container Apps environment (`<UNIQUE_IDENTIFIER>.<REGION_NAME>.azurecontainerapps.io`) with an A record.
+
+Use the following command to get the default domain of the Azure Container Apps environment.
+
+```azurecli
+az containerapp env show \
+ --resource-group <resource-group-name> \
+ --name <Azure-Container-Apps-environment-name> \
+ --query 'properties.defaultDomain'
+```
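+
+If you're scripting these steps, you can capture the default domain in a variable and reuse it as the zone name in the next command. This is a sketch; the `DEFAULT_DOMAIN` variable name is illustrative:
+
+```azurecli
+DEFAULT_DOMAIN=$(az containerapp env show \
+    --resource-group <resource-group-name> \
+    --name <Azure-Container-Apps-environment-name> \
+    --query 'properties.defaultDomain' \
+    --output tsv)
+```
+
+You can then pass `$DEFAULT_DOMAIN` as the `--name` value when you create the private DNS zone.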
+
+Use the following command to create a Private DNS Zone for applications in the virtual network.
+
+```azurecli
+az network private-dns zone create \
+ --resource-group <resource-group-name> \
+ --name <private-dns-zone-name>
+```
+
+## Create an A record
+
+Create an A record named `*` (a wildcard record) in the private DNS zone that points to the static IP address of the Azure Container Apps environment.
+
+Use the following command to get the static IP address for an Azure Container Apps environment.
+
+```azurecli
+az containerapp env show \
+ --resource-group <resource-group-name> \
+ --name <Azure-Container-Apps-environment-name> \
+ --query 'properties.staticIp'
+```
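+
+As with the default domain, you can capture the static IP in a variable for use in the next command. This is a sketch; the `STATIC_IP` variable name is illustrative:
+
+```azurecli
+STATIC_IP=$(az containerapp env show \
+    --resource-group <resource-group-name> \
+    --name <Azure-Container-Apps-environment-name> \
+    --query 'properties.staticIp' \
+    --output tsv)
+```
+
+You can then pass `$STATIC_IP` as the `--ipv4-address` value when you create the A record.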
+
+Use the following command to create the A record:
+
+```azurecli
+az network private-dns record-set a add-record \
+ --resource-group <resource-group-name> \
+ --zone-name <private-dns-zone-name> \
+ --record-set-name '*' \
+ --ipv4-address <static-ip>
+```
+
+## Link the virtual network
+
+Use the following command to create a virtual network link that connects the virtual network to the private DNS zone.
+
+```azurecli
+az network private-dns link vnet create \
+ --resource-group <resource-group-name> \
+ --name <link-name> \
+ --zone-name <private-dns-zone-name> \
+ --virtual-network <virtual-network-name> \
+ --registration-enabled false
+```
+
+## Access the application
+
+Now you can access an application in an Azure Spring Apps instance within your virtual network, using the URL of the application.
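+
+For example, from a jump box or another client inside the virtual network, you can look up the app's URL and call it. This is a sketch that assumes an endpoint is assigned to the app; the `properties.url` path and the placeholder names are illustrative, so check the command output if yours differs:
+
+```azurecli
+APP_URL=$(az spring app show \
+    --resource-group <resource-group-name> \
+    --service <Azure-Spring-Apps-instance-name> \
+    --name <app-name> \
+    --query properties.url \
+    --output tsv)
+
+curl $APP_URL
+```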
+
+## Clean up resources
+
+Be sure to delete the resources you created in this article when you no longer need them. To delete the resources, delete the resource group that contains them. You can delete the resource group by using the Azure portal. Alternatively, to delete the resource group by using the Azure CLI, run the following commands:
+
+```azurecli
+echo "Enter the Resource Group name:" &&
+read resourceGroupName &&
+az group delete --name $resourceGroupName &&
+echo "Press [ENTER] to continue ..."
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Deploy an event-driven application to Azure Spring Apps with the Standard consumption plan](./quickstart-deploy-event-driven-app-standard-consumption.md)
spring-apps Quickstart Analyze Logs And Metrics Standard Consumption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-analyze-logs-and-metrics-standard-consumption.md
+
+ Title: Quickstart - Analyze logs and metrics in the Azure Spring Apps Standard consumption plan
+description: Learn how to analyze logs and metrics in the Azure Spring Apps Standard consumption plan.
+ Last updated: 03/21/2023
+# Quickstart: Analyze logs and metrics in the Azure Spring Apps Standard consumption plan
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ✔️ Standard consumption (Preview) ❌ Basic/Standard ❌ Enterprise
+
+This article shows you how to analyze logs and metrics in the Azure Spring Apps Standard consumption plan.
+
+## Prerequisites
+
+- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- An Azure Spring Apps Standard consumption plan service instance. For more information, see [Quickstart: Provision an Azure Spring Apps Standard consumption plan service instance](quickstart-provision-standard-consumption-service-instance.md).
+- A Spring app deployed to Azure Spring Apps. For more information, see [Quickstart: Build and deploy apps to Azure Spring Apps](quickstart-deploy-apps.md).
+
+## Analyze logs
+
+The following sections describe the Azure tools that you can use to collect and analyze logs for apps running in the consumption plan.
+
+### Configure logging options
+
+You can configure logging options in the Azure Container Apps environment where you deploy your Azure Spring Apps instance. You can choose between the following log destinations:
+
+- **Log Analytics** - Azure Monitor Log Analytics is the default storage and viewing option. Your logs are stored in a Log Analytics workspace where you can view and analyze them using Log Analytics queries.
+
+- **Azure Monitor** - Azure Monitor routes logs to one or more of the following destinations:
+
+ - A Log Analytics workspace for viewing and analysis.
+ - An Azure storage account to archive.
+ - An Azure event hub for data ingestion and analytic services.
+ - An Azure partner monitoring solution such as Datadog, Elastic, Logz.io, and others.
+
+- **None** - You can disable the storage of log data.
+
+You can enable logs in Azure Spring Apps in the following ways:
+
+- When you select **Log Analytics** as the logging option.
+- When you select **Azure Monitor** as the logging option, with the **Spring App console logs** category selected in the **Diagnostic** setting.
+
+For more information, see [Log storage and monitoring options in Azure Container Apps](../container-apps/log-options.md).
+
+### Query logs by using Log Analytics
+
+Log Analytics is a tool in the Azure portal that you can use to view and analyze log data. By using Log Analytics, you can write Kusto queries and then sort, filter, and visualize the results in charts to spot trends and identify issues. You can work interactively with the query results or use them with other features such as alerts, dashboards, and workbooks.
+
+The following sections describe various methods to view logs.
+
+#### Use logs
+
+Use the following steps to query log data.
+
+1. In the Azure portal, go to your Azure Spring Apps instance.
+1. Select **Logs** from the navigation pane.
+1. In the **New Query 1** settings, enter a query such as the following example:
+
+ ```sql
+ AppEnvSpringAppConsoleLogs_CL
+ | limit 50
+ ```
+
+1. Select **Run**.
+
+#### Use Log Analytics
+
+Use the following steps to perform analytics on log data.
+
+1. In the Azure portal, go to your Azure Spring Apps instance.
+1. Select **Log Analytics** in the navigation pane.
+1. Select the Log Analytics workspace where you chose to store the logs.
+1. To open the **Log Search** pane, select **Logs**.
+1. To view logs, in the **Tables** search box, enter a query such as the following example:
+
+ ```sql
+ AppEnvSpringAppConsoleLogs_CL
+ | limit 50
+ ```
+
+1. To view the search result, select **Run**.
+1. You can search the logs of a specific application, deployment, or instance by setting a filter condition, as shown in the following example:
+
+ ```sql
+ AppEnvSpringAppConsoleLogs_CL
+ | where ContainerAppName_s == "YourAppName" and RevisionName_s has "YourDeploymentName" and ContainerGroupName_s == "YourInstanceName"
+ | limit 50
+ ```
+
+ > [!NOTE]
+ > `==` is case sensitive, but `=~` isn't.
+
+To learn more about the query language used in Log Analytics, see [Kusto Query Language (KQL) overview](/azure/data-explorer/kusto/query/). To query all your Log Analytics logs from a centralized client, see [Query data in Azure Monitor using Azure Data Explorer](/azure/data-explorer/query-monitor-data).
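+
+As a further illustration, the following sketch uses the same custom log table to count recent log lines per app in five-minute bins, which can help you spot unusually noisy or silent apps:
+
+```sql
+AppEnvSpringAppConsoleLogs_CL
+| where TimeGenerated > ago(1h)
+| summarize LogCount = count() by ContainerAppName_s, bin(TimeGenerated, 5m)
+| order by TimeGenerated asc
+```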
+
+## Analyze metrics
+
+Azure Monitor collects metric data from your Azure Spring Apps instance at regular intervals to help you gain insights into the performance and health of your Spring apps.
+
+To visualize the data, select **Metrics** in the navigation pane in your Azure Spring Apps instance. You can also retrieve raw metric data through the [Azure CLI](/cli/azure/monitor/metrics) and Azure [PowerShell cmdlets](/powershell/module/az.monitor/get-azmetric).
+
+### Available metrics
+
+Azure Spring Apps provides the metrics described in the following table:
+
+| Title | Description | Metric ID | Unit |
+|--|--|-|--|
+| CPU usage nanocores | CPU usage in nanocores (1,000,000,000 nanocores = 1 core) | `UsageNanoCores` | nanocores |
+| Memory working set bytes | Working set memory used in bytes | `WorkingSetBytes` | bytes |
+| Network in bytes | Network received bytes | `RxBytes` | bytes |
+| Network out bytes | Network transmitted bytes | `TxBytes` | bytes |
+| Requests | Requests processed | `Requests` | n/a |
+| Restart count | Restart count of Spring App | `RestartCount` | n/a |
+
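+For example, you can retrieve raw values for one of these metrics by using the Azure CLI. This is a sketch; replace the placeholder with the full resource ID of your Azure Spring Apps instance, and adjust the interval and aggregation as needed:
+
+```azurecli
+az monitor metrics list \
+    --resource <Azure-Spring-Apps-resource-ID> \
+    --metric "UsageNanoCores" \
+    --interval PT1M \
+    --aggregation Average \
+    --output table
+```
+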
+### Use metrics explorer
+
+The Azure Monitor metrics explorer enables you to create charts from metric data to help you analyze your Azure Spring Apps resource and network usage over time. You can pin charts to a dashboard or save them in a shared workbook.
+
+1. Open the metrics explorer in the Azure portal by selecting **Metrics** in the navigation pane on the overview page of your Azure Spring Apps instance. To learn more about metrics explorer, see [Getting started with metrics explorer](../azure-monitor/essentials/metrics-getting-started.md).
+
+1. Create a chart by selecting a metric in the **Metric** dropdown menu. You can modify the chart by changing the aggregation, adding more metrics, changing time ranges and intervals, adding filters, and applying splitting.
+
+#### Add filters
+
+Optionally, you can create filters to limit the data shown based on application name and instance name. Use the following steps to create a filter:
+
+1. Select **Add filter**.
+1. Select **App** or **Instance** from the **Property** list.
+1. Select values from the **Value** list.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [How to enable your own persistent storage in Azure Spring Apps with the Standard consumption plan](./how-to-custom-persistent-storage-with-standard-consumption.md)
spring-apps Quickstart Apps Autoscale Standard Consumption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-apps-autoscale-standard-consumption.md
+
+ Title: Quickstart - Set up autoscale for applications in Azure Spring Apps Standard consumption plan
+description: Learn how to set up autoscale for applications in Azure Spring Apps Standard consumption plan.
+ Last updated: 03/21/2023
+# Quickstart: Set up autoscale for applications in Azure Spring Apps Standard consumption plan
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ✔️ Standard consumption (Preview) ❌ Basic/Standard ❌ Enterprise
+
+This article describes how to set up autoscale rules for your applications in Azure Spring Apps Standard consumption plan. The plan uses an Azure Container Apps environment to host your Spring applications, and provides the following management and support:
+
+- Manages automatic horizontal scaling through a set of declarative scaling rules.
+- Supports all the scaling rules that Azure Container Apps supports.
+
+For more information, see [Azure Container Apps documentation](../container-apps/index.yml).
+
+## Prerequisites
+
+- An Azure subscription. If you don't have an Azure subscription, see [Azure free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- An Azure Spring Apps Standard consumption plan service instance. For more information, see [Quickstart: Provision an Azure Spring Apps Standard consumption plan service instance](quickstart-provision-standard-consumption-service-instance.md).
+- A Spring app deployed to Azure Spring Apps. For more information, see [Quickstart: Build and deploy apps to Azure Spring Apps](quickstart-deploy-apps.md).
+
+## Scale definition
+
+Scaling is defined by the combination of limits and rules.
+
+- Limits are the minimum and maximum number of instances that your Spring app deployment allows.
+
+ | Scale limit | Default value | Min value | Max value |
+ |--||--|--|
+ | Minimum number of instances per deployment | 1 | 0 | 30 |
+ | Maximum number of instances per deployment | 10 | 1 | 30 |
+
+  By default, the minimum instance count of your Spring application is set to 1 to ensure that your deployment is always running. If you want to scale to zero, set the minimum instance count to zero.
+
+- Rules are the criteria that autoscaling uses to decide when to add or remove instances. The scale rules include HTTP, TCP, and custom rules, as described in the [Scale rules](../container-apps/scale-app.md#scale-rules) section of [Set scaling rules in Azure Container Apps](../container-apps/scale-app.md).
+
+ If you define more than one scale rule, the autoscaling begins when the first condition of any rule is met.
+
+- The *polling interval* and *cooldown period* are two time spans that occur during autoscaling.
+  - The polling interval defines how often autoscaling checks the real-time data defined by your rules. By default, the polling interval is 30 seconds.
+  - The cooldown period applies only when scaling to zero - for example, waiting five minutes after the last time autoscaling checked the message queue and found it empty.
+
+## Set up autoscale settings
+
+You can set up autoscale settings for your application by using the Azure portal or Azure CLI.
+
+### [Azure portal](#tab/azure-portal)
+
+Use the following steps to define autoscale settings and rules.
+
+1. Sign in to the Azure portal.
+1. Select **Azure Spring Apps** under **Azure services**.
+1. In the **Name** column, select the Azure Spring Apps instance that you want to autoscale.
+1. On the overview page for your Azure Spring Apps instance, select **Apps** in the navigation pane.
+1. Select the application that you want to autoscale.
+1. On the overview page for the selected app, select **Scale out** in the navigation pane.
+1. On the **Scale out (Preview)** page, select the deployment you want to autoscale.
+1. Set up the instance limits of your deployment.
+1. Select **Add** to add your scale rules. To define custom rules, see [KEDA scalers](https://keda.sh/docs/2.9/scalers/).
++
+### [Azure CLI](#tab/azure-cli)
+
+Use the following command to create an application in Azure Spring Apps with an autoscaling rule. The replica count is adjusted automatically according to the number of messages in an Azure Service Bus queue.
+
+```azurecli-interactive
+az spring app create \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --name <app-name> \
+ --secrets "connection-string-secret=<service-bus-connection-string>" \
+ --scale-rule-name azure-servicebus-queue-rule \
+ --scale-rule-type azure-servicebus \
+ --scale-rule-metadata "queueName=my-queue" \
+ "namespace=service-bus-namespace" \
+ "messageCount=5" \
+ --scale-rule-auth "connection=connection-string-secret" \
+ --min-replicas 0 \
+ --max-replicas 5
+```
+++
+## Clean up resources
+
+Be sure to delete the resources you created in this article when you no longer need them. To delete the resources, delete the resource group that contains them. You can delete the resource group by using the Azure portal. Alternatively, to delete the resource group by using the Azure CLI, run the following commands:
+
+```azurecli
+echo "Enter the Resource Group name:" &&
+read resourceGroupName &&
+az group delete --name $resourceGroupName &&
+echo "Press [ENTER] to continue ..."
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Map a custom domain to Azure Spring Apps with the Standard consumption plan](./quickstart-standard-consumption-custom-domain.md)
spring-apps Quickstart Deploy Event Driven App Standard Consumption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-event-driven-app-standard-consumption.md
+
+ Title: Quickstart - Deploy event-driven application to Azure Spring Apps with the Standard consumption plan
+description: Learn how to deploy an event-driven application to Azure Spring Apps with the Standard consumption plan.
+ Last updated: 03/21/2023
+# Quickstart: Deploy an event-driven application to Azure Spring Apps with the Standard consumption plan
+
+> [!NOTE]
+> The first 50 vCPU hours and 100 GB hours of memory are free each month. For more information, see [Price Reduction - Azure Spring Apps does more, costs less!](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/price-reduction-azure-spring-apps-does-more-costs-less/ba-p/3614058) on the [Apps on Azure Blog](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/bg-p/AppsonAzureBlog).
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ✔️ Standard consumption (Preview) ❌ Basic/Standard ❌ Enterprise
+
+This article explains how to deploy a Spring Boot event-driven application to Azure Spring Apps with the Standard consumption plan.
+
+The sample project is an event-driven application that subscribes to a [Service Bus queue](../service-bus-messaging/service-bus-queues-topics-subscriptions.md#queues) named `lower-case`, and then handles the message and sends another message to another queue named `upper-case`. To make the app simple, message processing just converts the message to uppercase. The following diagram depicts this process:
++
+## Prerequisites
+
+- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- [Azure CLI](/cli/azure/install-azure-cli). Version 2.45.0 or greater.
+- [Git](https://git-scm.com/downloads).
+- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
+
+## Clone and build the sample project
+
+Use the following steps to prepare the sample locally.
+
+1. The sample project is available on GitHub. Clone the sample project by using the following command:
+
+ ```bash
+ git clone https://github.com/Azure-Samples/ASA-Samples-Event-Driven-Application.git
+ ```
+
+1. Build the sample project by using the following commands:
+
+ ```bash
+ cd ASA-Samples-Event-Driven-Application
+ ./mvnw clean package -DskipTests
+ ```
+
+## Prepare the cloud environment
+
+The main resources you need to run this sample are an Azure Spring Apps instance and an Azure Service Bus instance. Use the following steps to create these resources.
+
+1. Use the following commands to create variables for the names of your resources and for other settings as needed. Resource names in Azure must be unique.
+
+ ```azurecli
+ RESOURCE_GROUP=<event-driven-app-resource-group-name>
+ LOCATION=<desired-region>
+ SERVICE_BUS_NAME_SPACE=<event-driven-app-service-bus-namespace>
+ AZURE_CONTAINER_APPS_ENVIRONMENT=<Azure-Container-Apps-environment-name>
+ AZURE_SPRING_APPS_INSTANCE=<Azure-Spring-Apps-instance-name>
+ APP_NAME=<event-driven-app-name>
+ ```
+
+1. Sign in to Azure by using the following command:
+
+ ```azurecli
+ az login
+ ```
+
+1. Set the default location by using the following command:
+
+ ```azurecli
+ az configure --defaults location=${LOCATION}
+ ```
+
+1. Set your default subscription. First, list all available subscriptions:
+
+ ```azurecli
+ az account list --output table
+ ```
+
+1. Determine the ID of the subscription you want to set and use it with the following command to set your default subscription.
+
+ ```azurecli
+ az account set --subscription <subscription-ID>
+ ```
+
+1. Create a resource group by using the following command:
+
+ ```azurecli
+ az group create --resource-group ${RESOURCE_GROUP}
+ ```
+
+1. Use the following command to set the newly created resource group as the default resource group.
+
+ ```azurecli
+ az configure --defaults group=${RESOURCE_GROUP}
+ ```
+
+## Create a Service Bus instance
+
+Create a Service Bus instance by using the following steps.
+
+1. Use the following command to create a Service Bus namespace.
+
+ ```azurecli
+ az servicebus namespace create --name ${SERVICE_BUS_NAME_SPACE}
+ ```
+
+1. Use the following commands to create two queues named `lower-case` and `upper-case`.
+
+ ```azurecli
+ az servicebus queue create \
+ --namespace-name ${SERVICE_BUS_NAME_SPACE} \
+ --name lower-case
+ az servicebus queue create \
+ --namespace-name ${SERVICE_BUS_NAME_SPACE} \
+ --name upper-case
+ ```
+
+## Create an Azure Container Apps environment
+
+The Azure Container Apps environment creates a secure boundary around a group of applications. Apps deployed to the same environment are deployed in the same virtual network and write logs to the same Log Analytics workspace.
+
+Use the following steps to create the environment:
+
+1. Install the Azure Container Apps extension for the CLI by using the following command:
+
+ ```azurecli
+ az extension add --name containerapp --upgrade
+ ```
+
+1. Register the `Microsoft.App` namespace by using the following command:
+
+ ```azurecli
+ az provider register --namespace Microsoft.App
+ ```
+
+1. If you haven't previously used the Azure Monitor Log Analytics workspace, register the `Microsoft.OperationalInsights` provider by using the following command:
+
+ ```azurecli
+ az provider register --namespace Microsoft.OperationalInsights
+ ```
+
+1. Create the environment by using the following command:
+
+ ```azurecli
+ az containerapp env create --name ${AZURE_CONTAINER_APPS_ENVIRONMENT}
+ ```
+
+## Create the Azure Spring Apps instance
+
+An Azure Spring Apps Standard consumption plan instance hosts the Spring event-driven app. Use the following steps to create the service instance and then create an app inside the instance.
+
+1. Install the Azure CLI extension designed for Azure Spring Apps Standard consumption by using the following command:
+
+ ```azurecli
+ az extension remove --name spring && \
+ az extension add --name spring
+ ```
+
+1. Register the `Microsoft.AppPlatform` provider for the Azure Spring Apps by using the following command:
+
+ ```azurecli
+ az provider register --namespace Microsoft.AppPlatform
+ ```
+
+1. Get the Azure Container Apps environment resource ID by using the following command:
+
+ ```azurecli
+ MANAGED_ENV_RESOURCE_ID=$(az containerapp env show \
+ --name ${AZURE_CONTAINER_APPS_ENVIRONMENT} \
+ --query id \
+ --output tsv)
+ ```
+
+1. Use the following command to create your Azure Spring Apps instance, specifying the resource ID of the Azure Container Apps environment you created.
+
+ ```azurecli
+ az spring create \
+ --name ${AZURE_SPRING_APPS_INSTANCE} \
+ --managed-environment ${MANAGED_ENV_RESOURCE_ID} \
+ --sku standardGen2
+ ```
+
+1. Create an app in the Azure Spring Apps instance by using the following command:
+
+ ```azurecli
+ az spring app create \
+ --service ${AZURE_SPRING_APPS_INSTANCE} \
+ --name ${APP_NAME} \
+ --cpu 1 \
+ --memory 2 \
+ --instance-count 2 \
+ --runtime-version Java_17 \
+ --assign-endpoint true
+ ```
+
+## Bind the Service Bus to Azure Spring Apps and deploy the app
+
+Now both the Service Bus and the app in Azure Spring Apps have been created, but the app can't connect to the Service Bus. Use the following steps to enable the app to connect to the Service Bus, and then deploy the app.
+
+1. Get the Service Bus's connection string by using the following command:
+
+ ```azurecli
+ SERVICE_BUS_CONNECTION_STRING=$(az servicebus namespace authorization-rule keys list \
+ --namespace-name ${SERVICE_BUS_NAME_SPACE} \
+ --name RootManageSharedAccessKey \
+ --query primaryConnectionString \
+ --output tsv)
+ ```
+
+1. Use the following command to provide the connection string to the app through an environment variable.
+
+ ```azurecli
+ az spring app update \
+ --service ${AZURE_SPRING_APPS_INSTANCE} \
+ --name ${APP_NAME} \
+ --env SERVICE_BUS_CONNECTION_STRING=${SERVICE_BUS_CONNECTION_STRING}
+ ```
+
+1. Now the cloud environment is ready. Deploy the app by using the following command.
+
+ ```azurecli
+ az spring app deploy \
+ --service ${AZURE_SPRING_APPS_INSTANCE} \
+ --name ${APP_NAME} \
+ --artifact-path target/simple-event-driven-app-0.0.1-SNAPSHOT.jar
+ ```
+
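+After the deployment completes, you can optionally stream the application logs to check that the app started and connected to the Service Bus. This is a sketch using the `az spring app logs` command:
+
+```azurecli
+az spring app logs \
+    --service ${AZURE_SPRING_APPS_INSTANCE} \
+    --name ${APP_NAME} \
+    --lines 100
+```
+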
+## Validate the event-driven app
+
+Use the following steps to confirm that the event-driven app works correctly. You can validate the app by sending a message to the `lower-case` queue, then confirming that there's a message in the `upper-case` queue.
+
+1. Send a message to `lower-case` queue with Service Bus Explorer. For more information, see the [Send a message to a queue or topic](../service-bus-messaging/explorer.md#send-a-message-to-a-queue-or-topic) section of [Use Service Bus Explorer to run data operations on Service Bus](../service-bus-messaging/explorer.md).
+
+1. Confirm that there's a new message sent to the `upper-case` queue. For more information, see the [Peek a message](../service-bus-messaging/explorer.md#peek-a-message) section of [Use Service Bus Explorer to run data operations on Service Bus](../service-bus-messaging/explorer.md).
+
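+If you prefer the CLI to Service Bus Explorer, you can also check the queue's message counts. This is a sketch; the `countDetails.activeMessageCount` property path is an assumption based on the Service Bus CLI output, so inspect the full output if the query returns nothing:
+
+```azurecli
+az servicebus queue show \
+    --resource-group ${RESOURCE_GROUP} \
+    --namespace-name ${SERVICE_BUS_NAME_SPACE} \
+    --name upper-case \
+    --query countDetails.activeMessageCount
+```
+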
+## Clean up resources
+
+Be sure to delete the resources you created in this article when you no longer need them. To delete the resources, delete the resource group that contains them. You can delete the resource group by using the Azure portal. Alternatively, to delete the resource group by using the Azure CLI, run the following commands:
+
+```azurecli
+echo "Enter the Resource Group name:" &&
+read resourceGroupName &&
+az group delete --name $resourceGroupName &&
+echo "Press [ENTER] to continue ..."
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Set up autoscale for applications in Azure Spring Apps Standard consumption plan](./quickstart-apps-autoscale-standard-consumption.md)
+
+For more information, see the following articles:
+
+- [Azure Spring Apps Samples](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples).
+- [Spring on Azure](/azure/developer/java/spring/)
+- [Spring Cloud Azure](/azure/developer/java/spring-framework/)
spring-apps Quickstart Provision Service Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-provision-service-instance.md
You can provision an instance of the Azure Spring Apps service using the Azure p
- [JDK 17](/azure/developer/java/fundamentals/java-jdk-install) - An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- Optionally, [Azure CLI version 2.44.0 or higher](/cli/azure/install-azure-cli). Install the Azure Spring Apps extension with the following command: `az extension add --name spring`-- Optionally, [the Azure Toolkit for IntelliJ](https://plugins.jetbrains.com/plugin/8053-azure-toolkit-for-intellij/).
+- (Optional) [Azure CLI version 2.44.0 or higher](/cli/azure/install-azure-cli). Install the Azure Spring Apps extension with the following command: `az extension add --name spring`
+- (Optional) [the Azure Toolkit for IntelliJ](https://plugins.jetbrains.com/plugin/8053-azure-toolkit-for-intellij/).
## Provision an instance of Azure Spring Apps
spring-apps Quickstart Provision Standard Consumption App Environment With Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-provision-standard-consumption-app-environment-with-virtual-network.md
+
+ Title: Quickstart - Create an Azure Spring Apps Standard consumption plan instance in an Azure Container Apps environment with a virtual network
+description: Learn how to create an Azure Spring Apps instance in an Azure Container Apps environment with a virtual network.
+ Last updated: 03/21/2023
+# Quickstart: Create an Azure Spring Apps Standard consumption plan instance in an Azure Container Apps environment with a virtual network
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ✔️ Standard consumption (Preview) ❌ Basic/Standard ❌ Enterprise
+
+This article describes how to create an Azure Spring Apps instance in an Azure Container Apps environment with a virtual network. An Azure Container Apps environment creates a secure boundary around a group of applications. Applications deployed to the same environment are deployed in the same virtual network and write logs to the same Log Analytics workspace.
+
+When you create an Azure Spring Apps instance in an Azure Container Apps environment, it shares the same virtual network with other services and resources in the same Azure Container Apps environment. When you deploy frontend apps as containers in Azure Container Apps, and you also deploy Spring apps in the Azure Spring Apps Standard consumption plan, the apps are all in the same Azure Container Apps environment.
+
+You can also deploy your Azure Container Apps environment to an existing virtual network created by your IT team. This scenario simplifies the virtual network experience for running polyglot apps.
+
+> [!NOTE]
+> You can use an existing virtual network that has a dedicated subnet with a CIDR range of `/23` or higher.
+
+## Prerequisites
+
+- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- (Optional) [Azure CLI](/cli/azure/install-azure-cli) version 2.28.0 or higher.
+
+## Create an Azure Spring Apps instance in an Azure Container Apps environment
+
+Use the following steps to create an Azure Spring Apps instance in an Azure Container Apps environment with a virtual network.
+
+### [Azure portal](#tab/Azure-portal)
+
+1. Open the [Azure portal](https://portal.azure.com/).
+
+1. In the search box, search for *Azure Spring Apps*, and then select **Azure Spring Apps** in the results.
+
+ :::image type="content" source="media/quickstart-provision-app-environment-with-virtual-network/azure-spring-apps-start.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps in search results, with Azure Spring Apps highlighted in the search bar and in the results." lightbox="media/quickstart-provision-app-environment-with-virtual-network/azure-spring-apps-start.png":::
+
+1. On the Azure Spring Apps page, select **Create**.
+
+ :::image type="content" source="media/quickstart-provision-app-environment-with-virtual-network/azure-spring-apps-create.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps page with the Create button highlighted." lightbox="media/quickstart-provision-app-environment-with-virtual-network/azure-spring-apps-create.png":::
+
+1. Fill out the **Basics** form on the Azure Spring Apps **Create** page using the following guidelines:
+
+ - **Project Details**:
+
+ - **Subscription**: Select the subscription you want to be billed for this resource.
+ - **Resource group**: Select an existing resource group or create a new one.
+
+ - **Service Details**:
+
+ - **Name**: Create the name for the Azure Spring Apps instance. The name must be between 4 and 32 characters long and can contain only lowercase letters, numbers, and hyphens. The first character of the service name must be a letter and the last character must be either a letter or a number.
+ - **Location**: Currently, only the following regions are supported: Australia East, Central US, East US, East US 2, West Europe, East Asia, North Europe, South Central US, UK South, West US 3.
+
+ - **Plan**: Select **Standard Consumption** for the **Pricing tier** option.
+
+ - **App Environment**:
+
+ - Select **Create new** to create a new Azure Container Apps environment or select an existing environment from the dropdown menu.
+
+ :::image type="content" source="media/quickstart-provision-app-environment-with-virtual-network/select-azure-container-apps-environment.png" alt-text="Screenshot of Azure portal showing the Create Container Apps environment page for an Azure Spring Apps instance with Create new highlighted for Azure Container Apps environment." lightbox="media/quickstart-provision-app-environment-with-virtual-network/select-azure-container-apps-environment.png":::
+
+1. Fill out the **Basics** form on the **Create Container Apps environment** page. Use the default value `asa-standard-consumption-app-env` for the **Environment name** and set **Zone redundancy** to **Enabled**.
+
+ :::image type="content" source="media/quickstart-provision-app-environment-with-virtual-network/create-azure-container-apps-environment.png" alt-text="Screenshot of Azure portal showing Create Container Apps environment page with the Basics tab selected." lightbox="media/quickstart-provision-app-environment-with-virtual-network/create-azure-container-apps-environment.png":::
+
+1. Select **Networking** and then specify the settings using the following guidelines:
+
+ - For **Use your own virtual network**, select **Yes**.
+ - Select the names for **Virtual network** and for **Infrastructure subnet** from the dropdown menus or use **Create new** as needed.
+ - Set **Virtual IP** to **External**. You can set the value to **Internal** if you prefer to use only internal IP addresses available in the virtual network instead of a public static IP.
+
+ :::image type="content" source="media/quickstart-provision-app-environment-with-virtual-network/create-azure-container-apps-environment-virtual-network.png" alt-text="Screenshot of Azure portal showing Create Container Apps environment page with the Networking tab selected." lightbox="media/quickstart-provision-app-environment-with-virtual-network/create-azure-container-apps-environment-virtual-network.png":::
+
+ >[!NOTE]
+ > The subnet associated with an Azure Container Apps environment requires a CIDR prefix of `/23` or higher.
+
+1. Select **Create**.
+
+1. On the Azure Spring Apps **Create** page, select **Review and Create** to finish creating the Azure Spring Apps instance.
+
+### [Azure CLI](#tab/Azure-CLI)
+
+1. Sign in to Azure by using the following command:
+
+ ```azurecli
+ az login
+ ```
+
+1. Install the Azure Container Apps extension for the Azure CLI by using the following command:
+
+ ```azurecli
+ az extension add --name containerapp --upgrade
+ ```
+
+1. Register the `Microsoft.App` namespace by using the following command:
+
+ ```azurecli
+ az provider register --namespace Microsoft.App
+ ```
+
+1. If you haven't previously used the Azure Monitor Log Analytics workspace, register the `Microsoft.OperationalInsights` provider by using the following command:
+
+ ```azurecli
+ az provider register --namespace Microsoft.OperationalInsights
+ ```
+
+1. Use the following commands to create variables to store various values. If you're using an existing virtual network, specify the name of that network and its resource group. Otherwise, specify the name of a new virtual network and resource group to create.
+
+ ```bash
+ RESOURCE_GROUP="<resource-group-name>"
+ LOCATION="eastus"
+ AZURE_CONTAINER_APPS_ENVIRONMENT="<Azure-Container-Apps-environment-name>"
+ VNET_NAME="<virtual-network-name>"
+ ```
+
+1. If you aren't using an existing virtual network, use the following commands to create an Azure virtual network and subnet to associate with the Azure Container Apps environment. The virtual network must have a subnet available for the environment deployment.
+
+ ```azurecli
+ az network vnet create \
+ --resource-group $RESOURCE_GROUP \
+ --name $VNET_NAME \
+ --location $LOCATION \
+ --address-prefix 10.0.0.0/16
+
+ az network vnet subnet create \
+ --resource-group $RESOURCE_GROUP \
+ --vnet-name $VNET_NAME \
+    --name infrastructure-subnet \
+ --address-prefixes 10.0.0.0/23
+ ```
+
+1. Use the following command to get the ID for the infrastructure subnet and store it in a variable.
+
+ ```azurecli
+ INFRASTRUCTURE_SUBNET=$(az network vnet subnet show \
+ --resource-group $RESOURCE_GROUP \
+ --vnet-name $VNET_NAME \
+ --name infrastructure-subnet \
+ --query "id" \
+ --output tsv \
+ | tr -d '[:space:]')
+ ```
+
+1. Use the following command to create the Azure Container Apps environment using the infrastructure subnet ID.
+
+ ```azurecli
+ az containerapp env create \
+ --name $AZURE_CONTAINER_APPS_ENVIRONMENT \
+ --resource-group $RESOURCE_GROUP \
+ --location $LOCATION \
+ --infrastructure-subnet-resource-id $INFRASTRUCTURE_SUBNET
+ ```
+
+ > [!NOTE]
+ > You can create an internal Azure Container Apps environment that doesn't use a public static IP, but instead uses only internal IP addresses available in the custom virtual network. For more information, see the [Create an environment](../container-apps/vnet-custom-internal.md?tabs=bash&pivots=azure-cli#create-an-environment) section of [Provide a virtual network to an internal Azure Container Apps environment](../container-apps/vnet-custom-internal.md?tabs=bash&pivots=azure-cli).
+
+ The following table describes the parameters used in the `containerapp env create` command.
+
+ | Parameter | Description |
+ |-|--|
+ | `name` | The name of the Azure Container Apps environment. |
+ | `resource-group` | The name of the resource group. |
+ | `location` | The Azure location where the environment is to deploy. |
+ | `infrastructure-subnet-resource-id` | The Resource ID of a subnet for infrastructure components and user application containers. |
+ | `internal-only` | (Optional) Sets the environment to use only internal IP addresses available in the custom virtual network instead of a public static IP. (Requires the infrastructure subnet resource ID.) |
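+
+    For example, a minimal sketch of creating an internal-only environment in the same infrastructure subnet follows; it assumes the `--internal-only` flag available in the containerapp extension:
+
+    ```azurecli
+    az containerapp env create \
+      --name $AZURE_CONTAINER_APPS_ENVIRONMENT \
+      --resource-group $RESOURCE_GROUP \
+      --location $LOCATION \
+      --infrastructure-subnet-resource-id $INFRASTRUCTURE_SUBNET \
+      --internal-only
+    ```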
+++
+## Clean up resources
+
+Be sure to delete the resources you created in this article when you no longer need them. To delete the resources, delete the resource group that contains them. You can delete the resource group by using the Azure portal. Alternatively, to delete the resource group by using the Azure CLI, run the following commands:
+
+```azurecli
+echo "Enter the Resource Group name:" &&
+read resourceGroupName &&
+az group delete --name $resourceGroupName &&
+echo "Press [ENTER] to continue ..."
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Access applications using Azure Spring Apps Standard consumption plan in a virtual network](./quickstart-access-standard-consumption-within-virtual-network.md)
spring-apps Quickstart Provision Standard Consumption Service Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-provision-standard-consumption-service-instance.md
+
+ Title: Quickstart - Provision an Azure Spring Apps Standard consumption plan service instance
+description: Learn how to create a Standard consumption plan in Azure Spring Apps for app deployment.
+ Last updated: 03/21/2023
+# Quickstart: Provision an Azure Spring Apps Standard consumption plan service instance
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ✔️ Standard consumption (Preview) ❌ Basic/Standard ❌ Enterprise
+
+This article describes how to create a Standard consumption plan in Azure Spring Apps for application deployment.
+
+## Prerequisites
+
+- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- (Optional) [Azure CLI](/cli/azure/install-azure-cli) version 2.28.0 or higher.
+
+## Provision a Standard consumption plan instance
+
+You can use either the Azure portal or the Azure CLI to create a Standard consumption plan.
+
+### [Azure portal](#tab/Azure-portal)
+
+Use the following steps to create an instance of Azure Spring Apps using the Azure portal.
+
+1. Open the [Azure portal](https://portal.azure.com/).
+
+1. In the search box, search for *Azure Spring Apps*, and then select **Azure Spring Apps** from the results.
+
+ :::image type="content" source="media/quickstart-provision-standard-consumption-service-instance/azure-spring-apps-start.png" alt-text="Screenshot of the Azure portal showing the Azure Spring Apps service highlighted in the search results." lightbox="media/quickstart-provision-standard-consumption-service-instance/azure-spring-apps-start.png":::
+
+1. On the Azure Spring Apps page, select **Create**.
+
+ :::image type="content" source="media/quickstart-provision-standard-consumption-service-instance/azure-spring-apps-create.png" alt-text="Screenshot of the Azure portal showing the Azure Spring Apps resource with the Create button highlighted." lightbox="media/quickstart-provision-standard-consumption-service-instance/azure-spring-apps-create.png":::
+
+1. Fill out the **Basics** form on the Azure Spring Apps **Create** page using the following guidelines:
+
+ - **Project Details**
+
+ - **Subscription**: Select the subscription you want to be billed for this resource.
+ - **Resource group**: Select an existing resource group or create a new one.
+
+ - **Service Details**
+
+ - **Name**: Create the name for the Azure Spring Apps service instance. The name must be between 4 and 32 characters long and can contain only lowercase letters, numbers, and hyphens. The first character of the service name must be a letter and the last character must be either a letter or a number.
+ - **Location**: Currently, only the following regions are supported: Australia East, Central US, East US, East US 2, West Europe, East Asia, North Europe, South Central US, UK South, West US 3.
+ - **Plan**: Select **Standard Consumption** for the **Pricing tier** option.
+
+ - **App Environment**
+
+ - Select **Create new** to create a new Azure Container Apps environment, or select an existing environment from the dropdown menu.
+
+ :::image type="content" source="media/quickstart-provision-standard-consumption-service-instance/select-azure-container-apps-environment.png" alt-text="Screenshot of the Azure portal showing the Azure Spring Apps Create page." lightbox="media/quickstart-provision-standard-consumption-service-instance/select-azure-container-apps-environment.png":::
+
+1. Fill out the **Basics** form on the **Create Container Apps environment** page. Use the default value `asa-standard-consumption-app-env` for the **Environment name** and set **Zone redundancy** to **Enabled**.
+
+ :::image type="content" source="media/quickstart-provision-standard-consumption-service-instance/create-azure-container-apps-environment.png" alt-text="Screenshot of Azure portal showing Create Container Apps Environment pane." lightbox="media/quickstart-provision-standard-consumption-service-instance/create-azure-container-apps-environment.png":::
+
+1. Select **Review and create**.
+
+1. On the Azure Spring Apps **Create** page, select **Review and Create** to finish creating the Azure Spring Apps instance.
+
+>[!NOTE]
+> Optionally, you can also create an Azure Container Apps environment with your own virtual network. For more information, see [Quickstart: Create an Azure Spring Apps instance in an Azure Container Apps environment with a virtual network](quickstart-provision-standard-consumption-app-environment-with-virtual-network.md).
+
+### [Azure CLI](#tab/Azure-CLI)
+
+The following sections show you how to create an instance of Azure Spring Apps using the Azure CLI.
+
+## Create an Azure Container Apps environment
+
+An Azure Container Apps environment creates a secure boundary around a group of applications. Apps deployed to the same environment are deployed in the same virtual network and write logs to the same Log Analytics workspace.
+
+You can create the Azure Container Apps environment in one of two ways:
+
+- Using your own virtual network. For more information, see [Quickstart: Create an Azure Spring Apps instance in an Azure Container Apps environment with a virtual network](quickstart-provision-standard-consumption-app-environment-with-virtual-network.md).
+
+- Using a system assigned virtual network, as described in the following procedure.
+
+1. Sign in to Azure by using the following command:
+
+ ```azurecli
+ az login
+ ```
+
+1. Install the Azure Container Apps extension for the Azure CLI by using the following command:
+
+ ```azurecli
+ az extension add --name containerapp --upgrade
+ ```
+
+1. Register the `Microsoft.App` namespace by using the following command:
+
+ ```azurecli
+ az provider register --namespace Microsoft.App
+ ```
+
+1. If you haven't previously used the Azure Monitor Log Analytics workspace, register the `Microsoft.OperationalInsights` provider by using the following command:
+
+ ```azurecli
+ az provider register --namespace Microsoft.OperationalInsights
+ ```
+
+1. Use the following commands to create variables to store name and location information. Be sure to replace the placeholder values with your own values.
+
+ ```bash
+ RESOURCE_GROUP="<resource-group-name>"
+ LOCATION="eastus"
+ AZURE_CONTAINER_APPS_ENVIRONMENT="<Azure-Container-Apps-environment-name>"
+ ```
+
+1. Create the Azure Container Apps environment by using the following command:
+
+ ```azurecli
+ az containerapp env create \
+ --resource-group $RESOURCE_GROUP \
+ --name $AZURE_CONTAINER_APPS_ENVIRONMENT \
+ --location $LOCATION
+ ```
+
+## Deploy an Azure Spring Apps instance
+
+Use the following steps to deploy the service instance:
+
+1. Install the latest Azure CLI extension for Azure Spring Apps by using the following command:
+
+ ```azurecli
+ az extension remove --name spring && \
+ az extension add --name spring
+ ```
+
+1. Register the `Microsoft.AppPlatform` provider for the Azure Spring Apps by using the following command:
+
+ ```azurecli
+ az provider register --namespace Microsoft.AppPlatform
+ ```
+
+1. Use the following commands to create variables to store name and location information. You can skip the first three variables if you set them in the previous section. Be sure to replace the placeholder values with your own values.
+
+ ```azurecli
+ RESOURCE_GROUP="<resource-group-name>"
+ LOCATION="eastus"
+ AZURE_CONTAINER_APPS_ENVIRONMENT="<Azure-Container-Apps-environment-name>"
+
+ AZURE_SPRING_APPS_INSTANCE="<Azure-Spring-Apps-instance-name>"
+ MANAGED_ENV_RESOURCE_ID=$(az containerapp env show \
+ --resource-group $RESOURCE_GROUP \
+ --name $AZURE_CONTAINER_APPS_ENVIRONMENT \
+ --query id \
+ --output tsv)
+ ```
+
+1. Use the following command to deploy a Standard consumption plan for an Azure Spring Apps instance on top of the container environment. Create your Azure Spring Apps instance by specifying the resource of the Azure Container Apps environment you created.
+
+ ```azurecli
+ az spring create \
+ --resource-group $RESOURCE_GROUP \
+ --name $AZURE_SPRING_APPS_INSTANCE \
+ --managed-environment $MANAGED_ENV_RESOURCE_ID \
+ --sku StandardGen2 \
+ --location $LOCATION
+ ```
+
+1. After the deployment, an infrastructure resource group is created in your subscription to host the underlying resources for the Azure Spring Apps Standard consumption plan instance. The resource group is named `{AZURE_CONTAINER_APPS_ENVIRONMENT}_SpringApps_{SPRING_APPS_SERVICE_ID}`, as shown by the following commands:
+
+ ```azurecli
+ SERVICE_ID=$(az spring show \
+ --resource-group $RESOURCE_GROUP \
+ --name $AZURE_SPRING_APPS_INSTANCE \
+ --query properties.serviceId \
+ --output tsv)
+ INFRA_RESOURCE_GROUP=${AZURE_CONTAINER_APPS_ENVIRONMENT}_SpringApps_${SERVICE_ID}
+ echo ${INFRA_RESOURCE_GROUP}
+ ```
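+
+    You can list the resources in that group to see what the service created. The following is a minimal sketch, assuming the variables defined above:
+
+    ```azurecli
+    # Sketch: list the underlying resources created for the Standard consumption plan instance.
+    az resource list \
+        --resource-group ${INFRA_RESOURCE_GROUP} \
+        --output table
+    ```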
+++
+## Clean up resources
+
+Be sure to delete the resources you created in this article when you no longer need them. To delete them, delete the resource group that contains them. You can delete the resource group by using the Azure portal. Alternatively, to delete the resource group by using the Azure CLI, use the following commands:
+
+```azurecli
+echo "Enter the Resource Group name:" &&
+read resourceGroupName &&
+az group delete --name $resourceGroupName &&
+echo "Press [ENTER] to continue ..."
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Create an Azure Spring Apps Standard consumption plan instance in an Azure Container Apps environment with a virtual network](./quickstart-provision-standard-consumption-app-environment-with-virtual-network.md)
spring-apps Quickstart Standard Consumption Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-standard-consumption-custom-domain.md
+
+ Title: Quickstart - Map a custom domain to Azure Spring Apps with the Standard consumption plan
+description: Learn how to map a web domain to apps in Azure Spring Apps.
++++ Last updated : 03/21/2023+++
+# Quickstart: Map a custom domain to Azure Spring Apps with the Standard consumption plan
+
+**This article applies to:** ✔️ Standard consumption (Preview) ❌ Basic/Standard ❌ Enterprise
+
+This article shows you how to map a custom website domain, such as `https://www.contoso.com`, to your app in Azure Spring Apps. This mapping is accomplished by using a `CNAME` record, which the Domain Name System (DNS) uses to map one domain name to another.
+
+The mapping secures the custom domain with a certificate and enforces Transport Layer Security (TLS), also known as the Secure Sockets Layer (SSL).
+
+## Prerequisites
+
+- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- [Azure CLI](/cli/azure/install-azure-cli)
+- An Azure Spring Apps Standard consumption plan service instance. For more information, see [Quickstart: Provision an Azure Spring Apps Standard consumption plan service instance](quickstart-provision-standard-consumption-service-instance.md).
+- A Spring app deployed to Azure Spring Apps. For more information, see [Quickstart: Build and deploy apps to Azure Spring Apps](quickstart-deploy-apps.md).
+- A domain name registered in the DNS registry as provided by a web hosting or domain provider.
+- A certificate resource created under an Azure Container Apps environment. For more information, see [Add certificate in Container App](../container-apps/custom-domains-certificates.md).
+
+## Map a custom domain
+
+To map the custom domain, you create the CNAME record and then use the Azure CLI to bind the domain to an app in Azure Spring Apps.
+
+First, use the following steps to create the `CNAME` record:
+
+1. Contact your DNS provider to request a `CNAME` record to map your domain to the Fully Qualified Domain Name (FQDN) of your Spring app.
+
+1. Add a `TXT` record named `asuid.{subdomain}` with the verification ID of your Azure Container Apps environment as its value. You can obtain this value by using the following command:
+
+ ```azurecli
+ az containerapp env show \
+ --resource-group <resource-group-name> \
+ --name <Azure-Container-Apps-environment-name> \
+ --query 'properties.customDomainConfiguration.customDomainVerificationId'
+ ```
+
+    After you add the `CNAME` and `TXT` records, the DNS records page resembles the following table.
+
+    | Name | Type | Value |
+    |------|------|-------|
+    | `{subdomain}` | `CNAME` | `testapp.agreeablewater-4c8480b3.eastus.azurecontainerapps.io` |
+    | `asuid.{subdomain}` | `TXT` | `6K861CL04CATKUCFF604024064D57PB52F5DF7B67BC3033BA9808BDA8998U270` |
+
+1. Next, bind the custom domain to your app by using the following command.
+
+ ```azurecli
+ az spring app custom-domain bind \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --app <app-name> \
+ --domain-name <your-custom-domain-name> \
+ --certificate <name-of-your-certificate-under-Azure-Container-Apps-environment>
+ ```
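+
+    To confirm the binding, you can list the custom domains bound to the app. The following is a minimal sketch using the same placeholder names:
+
+    ```azurecli
+    az spring app custom-domain list \
+        --resource-group <resource-group-name> \
+        --service <Azure-Spring-Apps-service-instance-name> \
+        --app <app-name> \
+        --output table
+    ```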
+
+## Clean up resources
+
+Be sure to delete the resources you created in this article when you no longer need them. To delete them, delete the resource group that contains them. You can delete the resource group by using the Azure portal. Alternatively, to delete the resource group by using the Azure CLI, use the following commands:
+
+```azurecli
+echo "Enter the Resource Group name:" &&
+read resourceGroupName &&
+az group delete --name $resourceGroupName &&
+echo "Press [ENTER] to continue ..."
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Analyze logs and metrics in the Azure Spring Apps Standard consumption plan](./quickstart-analyze-logs-and-metrics-standard-consumption.md)
spring-apps Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart.md
Title: "Quickstart - Deploy your first application to Azure Spring Apps"
+ Title: Quickstart - Deploy your first application to Azure Spring Apps
description: Describes how to deploy an application to Azure Spring Apps. + Previously updated : 08/22/2022- Last updated : 03/21/2023
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ✔️ Standard consumption (Preview) ✔️ Basic/Standard ✔️ Enterprise
-This quickstart explains how to deploy a small application to run on Azure Spring Apps.
+This article explains how to deploy a small application to run on Azure Spring Apps.
-The application code used in this tutorial is a simple app. When you've completed this example, the application will be accessible online, and you can manage it through the Azure portal.
+The application code used in this tutorial is a simple app. When you've completed this example, the application is accessible online, and you can manage it through the Azure portal.
This quickstart explains how to:
This quickstart explains how to:
> - Build and deploy an app with a public endpoint. > - Clean up the resources.
-At the end of this quickstart, you'll have a working spring app running on Azure Spring Apps.
+At the end of this quickstart, you have a working Spring app running on Azure Spring Apps.
## [Azure CLI](#tab/Azure-CLI) ## Prerequisites - An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- [Apache Maven](https://maven.apache.org/download.cgi)
- [Azure CLI](/cli/azure/install-azure-cli). Install the Azure Spring Apps extension with the following command: `az extension add --name spring` - If you're deploying Azure Spring Apps Enterprise tier for the first time in the target subscription, see the [Prerequisites](./how-to-enterprise-marketplace-offer.md#prerequisites) section of [View Azure Spring Apps Enterprise tier offering in Azure Marketplace](./how-to-enterprise-marketplace-offer.md). ## Provision an instance of Azure Spring Apps
-Use the following steps to provision a service instance.
+Use the following steps to create an Azure Spring Apps service instance.
-1. Select **Try It** and sign in to your Azure account in [Azure Cloud Shell](../cloud-shell/overview.md).
+1. Select **Open Cloudshell** and sign in to your Azure account in [Azure Cloud Shell](../cloud-shell/overview.md).
```azurecli-interactive az account show ```
-1. Azure Cloud Shell workspaces are temporary. On initial start, the shell prompts you to associate an [Azure Storage](../storage/common/storage-introduction.md) instance with your subscription to persist files across sessions.
+1. Azure Cloud Shell workspaces are temporary. When first started, the shell prompts you to associate an [Azure Storage](../storage/common/storage-introduction.md) instance with your subscription to persist files across sessions.
:::image type="content" source="media/quickstart/azure-storage-subscription.png" alt-text="Screenshot of Azure Storage subscription." lightbox="media/quickstart/azure-storage-subscription.png":::
Use the following steps to provision a service instance.
az account list --output table ```
-1. Use the following command to choose and link to your subscription.
+1. Use the following command to set your default subscription.
```azurecli-interactive
- az account set --subscription <subscription-id>
+ az account set --subscription <subscription-ID>
``` 1. Use the following command to create a resource group.
Use the following steps to provision a service instance.
```azurecli-interactive az spring create \ --resource-group <name-of-resource-group> \
- --name <service-instance-name>
+ --name <Azure-Spring-Apps-instance-name>
```
-1. Choose **Y** to install the Azure Spring Apps extension and run it.
+1. Select **Y** to install the Azure Spring Apps extension and run it.
## Create an app in your Azure Spring Apps instance
+An [*App*](concept-understand-app-and-deployment.md) is an abstraction of one business app. Apps run in an Azure Spring Apps service instance, as shown in the following diagram.
++ Use the following command to specify the app name on Azure Spring Apps as *hellospring*. ```azurecli-interactive az spring app create \ --resource-group <name-of-resource-group> \
- --service <service-instance-name> \
+ --service <Azure-Spring-Apps-instance-name> \
--name hellospring \ --assign-endpoint true ```
Use the following command to deploy the *.jar* file for the app (*target/spring-
```azurecli-interactive az spring app deploy \ --resource-group <name-of-resource-group> \
- --service <service-instance-name> \
+ --service <Azure-Spring-Apps-instance-name> \
--name hellospring \ --artifact-path target/spring-boot-complete-0.0.1-SNAPSHOT.jar ``` Deploying the application can take a few minutes.
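+
+Once the deployment finishes, you can confirm that the app responds at its public endpoint. The following is a minimal check, assuming the same placeholder names used above:
+
+```azurecli-interactive
+# Sketch: look up the app's public URL and call it; expect the greeting message.
+URL=$(az spring app show \
+    --resource-group <name-of-resource-group> \
+    --service <Azure-Spring-Apps-instance-name> \
+    --name hellospring \
+    --query properties.url \
+    --output tsv)
+curl ${URL}
+```
+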
+## [Standard consumption plan with Azure CLI](#tab/Consumption-Plan)
+
+## Prerequisites
+
+- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- [Apache Maven](https://maven.apache.org/download.cgi)
+- [Azure CLI](/cli/azure/install-azure-cli). Install the Azure CLI extension for Azure Spring Apps Standard consumption plan by using the following command.
+
+ ```azurecli
+ az extension remove --name spring && \
+ az extension add --name spring
+ ```
+
+- Use the following commands to install the Azure Container Apps extension for the Azure CLI and register these namespaces: `Microsoft.App`, `Microsoft.OperationalInsights`, and `Microsoft.AppPlatform`
+
+ ```azurecli
+ az extension add --name containerapp --upgrade
+ az provider register --namespace Microsoft.App
+ az provider register --namespace Microsoft.OperationalInsights
+ az provider register --namespace Microsoft.AppPlatform
+ ```
+
+## Provision an instance of Azure Spring Apps
+
+Use the following steps to create an Azure Spring Apps service instance.
+
+1. Select **Open Cloudshell** and sign in to your Azure account in [Azure Cloud Shell](../cloud-shell/overview.md).
+
+ ```azurecli-interactive
+ az account show
+ ```
+
+1. Azure Cloud Shell workspaces are temporary. When first started, the shell prompts you to associate an [Azure Storage](../storage/common/storage-introduction.md) instance with your subscription to persist files across sessions.
+
+ :::image type="content" source="media/quickstart/azure-storage-subscription.png" alt-text="Screenshot of Azure Storage subscription." lightbox="media/quickstart/azure-storage-subscription.png":::
+
+1. After you sign in successfully, use the following command to display a list of your subscriptions.
+
+ ```azurecli-interactive
+ az account list --output table
+ ```
+
+1. Use the following command to set your default subscription.
+
+ ```azurecli-interactive
+ az account set --subscription <subscription-ID>
+ ```
+
+1. Define variables for this quickstart with the names of your resources and desired settings.
+
+ ```azurecli-interactive
+ LOCATION="<region>"
+ RESOURCE_GROUP="<resource-group-name>"
+ MANAGED_ENVIRONMENT="<Azure-Container-Apps-environment-name>"
+ SERVICE_NAME="<Azure-Spring-Apps-instance-name>"
+ APP_NAME="<Spring-app-name>"
+ ```
+
+1. Use the following command to create a resource group.
+
+ ```azurecli-interactive
+ az group create \
+ --resource-group ${RESOURCE_GROUP} \
+ --location ${LOCATION}
+ ```
+
+1. An Azure Container Apps environment creates a secure boundary around a group of applications. Apps deployed to the same environment are deployed in the same virtual network and write logs to the same [Log Analytics workspace](../azure-monitor/logs/log-analytics-workspace-overview.md). To create the environment, run the following command:
+
+ ```azurecli-interactive
+ az containerapp env create \
+ --name ${MANAGED_ENVIRONMENT} \
+ --resource-group ${RESOURCE_GROUP} \
+ --location ${LOCATION}
+ ```
+
+1. Use the following command to create a variable to store the environment resource ID:
+
+ ```azurecli-interactive
+ MANAGED_ENV_RESOURCE_ID=$(az containerapp env show \
+ --name ${MANAGED_ENVIRONMENT} \
+ --resource-group ${RESOURCE_GROUP} \
+ --query id \
+ --output tsv)
+ ```
+
+1. Use the following command to create an Azure Spring Apps service instance. The Azure Spring Apps Standard consumption plan instance is built on top of the Azure Container Apps environment. Create your Azure Spring Apps instance by specifying the resource ID of the environment you created.
+
+ ```azurecli-interactive
+ az spring create \
+ --resource-group ${RESOURCE_GROUP} \
+ --name ${SERVICE_NAME} \
+ --managed-environment ${MANAGED_ENV_RESOURCE_ID} \
+ --sku standardGen2 \
+ --location ${LOCATION}
+ ```
+
+## Create an app in your Azure Spring Apps instance
+
+An [*App*](concept-understand-app-and-deployment.md) is an abstraction of one business app. Apps run in an Azure Spring Apps service instance, or simply service instance, as shown in the following diagram.
++
+Use the following command to specify the app name on Azure Spring Apps and to allocate required resources:
+
+```azurecli-interactive
+az spring app create \
+ --resource-group ${RESOURCE_GROUP} \
+ --service ${SERVICE_NAME} \
+ --name ${APP_NAME} \
+ --cpu 1 \
+ --memory 2Gi \
+ --instance-count 2 \
+ --assign-endpoint true
+```
+
+Azure Spring Apps creates an empty welcome application and provides its URL in the field named `properties.url`.
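+
+For example, you can read that URL back with the CLI. The following sketch assumes the variables defined earlier in this quickstart:
+
+```azurecli-interactive
+# Sketch: retrieve the welcome app's public URL from the properties.url field.
+az spring app show \
+    --resource-group ${RESOURCE_GROUP} \
+    --service ${SERVICE_NAME} \
+    --name ${APP_NAME} \
+    --query properties.url \
+    --output tsv
+```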
++
+## Clone and build the Spring Boot sample project
+
+Use the following steps to clone the Spring Boot sample project.
+
+1. Use the following command to clone the [Spring Boot sample project](https://github.com/spring-guides/gs-spring-boot.git) from GitHub.
+
+ ```azurecli-interactive
+ git clone -b boot-2.7 https://github.com/spring-guides/gs-spring-boot.git
+ ```
+
+1. Use the following command to move to the project folder.
+
+ ```azurecli-interactive
+ cd gs-spring-boot/complete
+ ```
+
+1. Use the following [Maven](https://maven.apache.org/what-is-maven.html) command to build the project.
+
+ ```azurecli-interactive
+ mvn clean package -DskipTests
+ ```
+
+## Deploy the local app to Azure Spring Apps
+
+Use the following command to deploy the *.jar* file for the app.
+
+```azurecli-interactive
+az spring app deploy \
+ --resource-group ${RESOURCE_GROUP} \
+ --service ${SERVICE_NAME} \
+ --name ${APP_NAME} \
+ --artifact-path target/spring-boot-complete-0.0.1-SNAPSHOT.jar \
+ --env testEnvKey=testEnvValue \
+ --runtime-version Java_11 \
+ --jvm-options '-Xms1024m -Xmx2048m'
+```
+
+Deploying the application can take a few minutes.
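+
+While you wait, you can stream the application's console logs to watch it start up. The following is a minimal sketch, assuming the same variables as above:
+
+```azurecli-interactive
+# Sketch: stream console logs from the app while the deployment starts.
+az spring app logs \
+    --resource-group ${RESOURCE_GROUP} \
+    --service ${SERVICE_NAME} \
+    --name ${APP_NAME} \
+    --follow
+```
+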
+ ## [IntelliJ](#tab/IntelliJ) ## Prerequisites
Use the following steps to create the project:
1. Use [Spring Initializr](https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.6.10&packaging=jar&jvmVersion=11&groupId=com.example&artifactId=hellospring&name=hellospring&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.hellospring&dependencies=web,cloud-eureka,actuator,cloud-config-client) to generate a sample project with recommended dependencies for Azure Spring Apps. The following URL provides default settings for you.
-```url
-https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.6.10&packaging=jar&jvmVersion=11&groupId=com.example&artifactId=hellospring&name=hellospring&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.hellospring&dependencies=web,cloud-eureka,actuator,cloud-config-client
-```
+ ```url
+ https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.6.10&packaging=jar&jvmVersion=11&groupId=com.example&artifactId=hellospring&name=hellospring&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.hellospring&dependencies=web,cloud-eureka,actuator,cloud-config-client
+ ```
-The following image shows the recommended Initializr settings for the *hellospring* sample project.
+ The following image shows the recommended Initializr settings for the *hellospring* sample project.
-This example uses Java version 11. To use a different Java version, change the Java version setting under **Project Metadata**.
+ This example uses Java version 11. To use a different Java version, change the Java version setting under **Project Metadata**.
+ :::image type="content" source="media/quickstart/initializr-page.png" alt-text="Screenshot of Spring Initializr page." lightbox="media/quickstart/initializr-page.png":::
1. When all dependencies are set, select **Generate**. 1. Download and unpack the package, and then create a web controller for your web application by adding the file *src/main/java/com/example/hellospring/HelloController.java* with the following contents:
- ```java
- package com.example.hellospring;
+ ```java
+ package com.example.hellospring;
- import org.springframework.web.bind.annotation.RestController;
- import org.springframework.web.bind.annotation.RequestMapping;
+ import org.springframework.web.bind.annotation.RestController;
+ import org.springframework.web.bind.annotation.RequestMapping;
- @RestController
- public class HelloController {
+ @RestController
+ public class HelloController {
- @RequestMapping("/")
- public String index() {
- return "Greetings from Azure Spring Apps!";
+ @RequestMapping("/")
+ public String index() {
+ return "Greetings from Azure Spring Apps!";
}
+ }
+ ```
- }
- ```
-
-## Provision an instance of Azure Spring Apps
+## Create an instance of Azure Spring Apps
Use the following steps to create an instance of Azure Spring Apps using the Azure portal.
Use the following steps to create an instance of Azure Spring Apps using the Azu
:::image type="content" source="media/quickstart/spring-apps-create.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps resource with Create button highlighted." lightbox="media/quickstart/spring-apps-create.png":::
-1. Fill out the form on the Azure Spring Apps **Create** page. Consider the following guidelines:
+1. Fill out the form on the Azure Spring Apps **Create** page. Consider the following guidelines:
- **Subscription**: Select the subscription you want to be billed for this resource. - **Resource group**: Creating new resource groups for new resources is a best practice.
- - **Service Name**: Specify the service instance name. You'll use this name later in this article where the *\<service-instance-name\>* placeholder appears. The name must be between 4 and 32 characters long and can contain only lowercase letters, numbers, and hyphens. The first character of the service name must be a letter and the last character must be either a letter or a number.
+ - **Service Name**: Specify the service instance name. You use this name later in this article where the *\<Azure-Spring-Apps-instance-name\>* placeholder appears. The name must be between 4 and 32 characters long and can contain only lowercase letters, numbers, and hyphens. The first character of the service name must be a letter and the last character must be either a letter or a number.
- **Region**: Select the region for your service instance. :::image type="content" source="media/quickstart/portal-start.png" alt-text="Screenshot of Azure portal showing Azure Spring Apps Create page." lightbox="media/quickstart/portal-start.png":::
To deploy a Spring Boot web app to Azure Spring Apps, follow the steps in [Java
-Once deployment has completed, you can access the app at `https://<service instance name>-hellospring.azuremicroservices.io/`.
+Once deployment has completed, you can access the app at `https://<service-instance-name>-hellospring.azuremicroservices.io/`.
## Clean up resources
echo "Press [ENTER] to continue ..."
## Next steps
-In this quickstart, you learned how to generate a basic Spring project, provision a service instance, build and deploy an app with a public endpoint, and clean up the resources.
+In this quickstart, you learned how to build and deploy a Spring app in an Azure Spring Apps service instance. You also learned how to deploy an app with a public endpoint, and how to clean up resources.
-You also have access to powerful logs, metrics, and distributed tracing capability from the Azure portal. For more information, see [Quickstart: Monitoring Azure Spring Apps apps with logs, metrics, and tracing](./quickstart-logs-metrics-tracing.md).
+You have access to powerful logs, metrics, and distributed tracing capability from the Azure portal. For more information, see [Quickstart: Monitoring Azure Spring Apps apps with logs, metrics, and tracing](./quickstart-logs-metrics-tracing.md).
To learn how to use more Azure Spring capabilities, advance to the quickstart series that deploys a sample application to Azure Spring Apps: > [!div class="nextstepaction"] > [Introduction to the sample app](./quickstart-sample-app-introduction.md)
+To learn how to create a Standard consumption plan in Azure Spring Apps for app deployment, advance to the Standard consumption quickstart series:
+
+> [!div class="nextstepaction"]
+> [Provision an Azure Spring Apps Standard consumption plan service instance](./quickstart-provision-standard-consumption-service-instance.md)
+ For a packaged app template with Azure Spring Apps infrastructure provisioned using Bicep, see [Spring Boot PetClinic Microservices Application Deployed to Azure Spring Apps](https://github.com/Azure-Samples/apptemplates-microservices-spring-app-on-AzureSpringApps). More samples are available on GitHub: [Azure Spring Apps Samples](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples).
spring-apps Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quotas.md
Title: Service plans and quotas for Azure Spring Apps
-description: Learn about service quotas and service plans for Azure Spring Apps
+ Title: Service plans and quotas for Azure Spring Apps
+description: Learn about service quotas and service plans for Azure Spring Apps.
Previously updated : 11/04/2019 Last updated : 03/21/2023
**This article applies to:** ✔️ Java ✔️ C#
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ✔️ Standard consumption (Preview) ✔️ Basic/Standard ✔️ Enterprise
-All Azure services set default limits and quotas for resources and features. Azure Spring Apps offers three pricing tiers: Basic, Standard, and Enterprise.
+All Azure services set default limits and quotas for resources and features. Azure Spring Apps offers four pricing plans: Basic, Standard, Enterprise, and Standard consumption.
-## Azure Spring Apps service tiers and limits
+## Azure Spring Apps service plans and limits
The following table defines limits for the pricing plans in Azure Spring Apps.
-| Resource | Scope | Basic | Standard | Enterprise |
-|--|--||--|--|
-| vCPU | per app instance | 1 | 4 | 8 |
-| Memory | per app instance | 2 GB | 8 GB | 32 GB |
-| Azure Spring Apps service instances | per region per subscription | 10 | 10 | 10 |
-| Total app instances | per Azure Spring Apps service instance | 25 | 500 | 500 |
-| Custom Domains | per Azure Spring Apps service instance | 0 | 25 | 25 |
-| Persistent volumes | per Azure Spring Apps service instance | 1 GB/app x 10 apps | 50 GB/app x 10 apps | 50 GB/app x 10 apps |
-| Inbound Public Endpoints | per Azure Spring Apps service instance | 10 <sup>1</sup> | 10 <sup>1</sup> | 10 <sup>1</sup> |
-| Outbound Public IPs | per Azure Spring Apps service instance | 1 <sup>2</sup> | 2 <sup>2</sup> <br> 1 if using VNet<sup>2</sup> | 2 <sup>2</sup> <br> 1 if using VNet<sup>2</sup> |
-| User-assigned managed identities | per app instance | 20 | 20 | 20 |
+| Resource | Scope | Basic | Standard | Enterprise | Standard consumption |
+|-|-|--|-|-|-|
+| vCPU | per app instance | 1 | 4 | 8 | 2 |
+| Memory | per app instance | 2 GB | 8 GB | 32 GB | 4 GB |
+| Azure Spring Apps service instances | per region per subscription | 10 | 10 | 10 | 10 |
+| Total app instances | per Azure Spring Apps service instance | 25 | 500 | 500 | 160 |
+| Custom Domains | per Azure Spring Apps service instance | 0 | 25 | 25 | 25 |
+| Persistent volumes | per Azure Spring Apps service instance | 1 GB/app x 10 apps | 50 GB/app x 10 apps | 50 GB/app x 10 apps | Not applicable |
+| Inbound Public Endpoints | per Azure Spring Apps service instance | 10 <sup>1</sup> | 10 <sup>1</sup> | 10 <sup>1</sup> | 10 <sup>1</sup> |
+| Outbound Public IPs | per Azure Spring Apps service instance | 1 <sup>2</sup> | 2 <sup>2</sup> <br> 1 if using VNet<sup>2</sup> | 2 <sup>2</sup> <br> 1 if using VNet<sup>2</sup> | 2 <sup>2</sup> <br> 1 if using VNet<sup>2</sup> |
+| User-assigned managed identities | per app instance | 20 | 20 | 20 | Not available during preview |
<sup>1</sup> You can increase this limit via support request to a maximum of 1 per app.
spring-apps Standard Consumption Customer Responsibilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/standard-consumption-customer-responsibilities.md
+
+ Title: Customer responsibilities for Azure Spring Apps Standard consumption plan in a virtual network
+description: Learn about the customer responsibilities for running an Azure Spring Apps Standard consumption plan service instance in a virtual network.
++++ Last updated : 03/21/2023+++
+# Customer responsibilities for Azure Spring Apps Standard consumption plan in a virtual network
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ✔️ Standard consumption (Preview) ❌ Basic/Standard ❌ Enterprise
+
+This article describes the customer responsibilities for running an Azure Spring Apps Standard consumption plan service instance in a virtual network.
+
+Use Network Security Groups (NSGs) to configure virtual networks to conform to the settings required by Kubernetes.
+
+To control all inbound and outbound traffic for the Azure Container Apps environment, you can use NSGs to lock down a network with more restrictive rules than the default NSG rules.
+
+## NSG allow rules
+
+The following tables describe how to configure a collection of NSG allow rules.
+
+> [!NOTE]
+> The subnet associated with an Azure Container Apps environment requires a CIDR prefix of `/23` or larger.
+
+### Outbound with ServiceTags
+
+| Protocol | Port | ServiceTag | Description |
+|-|--|-|-|
+| UDP | `1194` | `AzureCloud.<region>` | Required for internal Azure Kubernetes Service (AKS) secure connection between underlying nodes and the control plane. Replace `<region>` with the region where your container app is deployed. |
+| TCP | `9000` | `AzureCloud.<region>` | Required for internal AKS secure connection between underlying nodes and the control plane. Replace `<region>` with the region where your container app is deployed. |
+| TCP | `443` | `AzureMonitor` | Allows outbound calls to Azure Monitor. |
+| TCP | `443` | `Azure Container Registry` | Enables the Azure Container Registry as described in [Virtual network service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md). |
+| TCP | `443`, `445` | `Azure Files` | Enables Azure Storage as described in [Virtual network service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md). |
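+
+For illustration, the `AzureMonitor` rule from the preceding table might be expressed as an NSG rule like the following sketch. The resource group, NSG name, rule name, and priority are placeholder values, not prescribed settings:
+
+```azurecli
+# Sketch: allow outbound TCP 443 to the AzureMonitor service tag.
+az network nsg rule create \
+    --resource-group <resource-group-name> \
+    --nsg-name <nsg-name> \
+    --name allow-azure-monitor \
+    --priority 400 \
+    --direction Outbound \
+    --access Allow \
+    --protocol Tcp \
+    --source-address-prefixes '*' \
+    --source-port-ranges '*' \
+    --destination-address-prefixes AzureMonitor \
+    --destination-port-ranges 443
+```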
+
+### Outbound with wild card IP rules
+
+| Protocol | Port | IP | Description |
+|-|--|-|-|
+| TCP | `443` | \* | Set all outbound traffic on port `443` to allow all fully qualified domain name (FQDN) based outbound dependencies that don't have a static IP. |
+| UDP | `123` | \* | NTP server. |
+| TCP | `5671` | \* | Container Apps control plane. |
+| TCP | `5672` | \* | Container Apps control plane. |
+
+### Outbound with FQDN requirements/application rules
+
+| Protocol | Port | FQDN | Description |
+|-|-|-|--|
+| TCP | `443` | `mcr.microsoft.com` | Microsoft Container Registry (MCR). |
+| TCP | `443` | `*.cdn.mscr.io` | MCR storage backed by the Azure Content Delivery Network (CDN). |
+| TCP | `443` | `*.data.mcr.microsoft.com` | MCR storage backed by the Azure CDN. |
+
+### Outbound with FQDN for third-party application performance management (optional)
+
+| Protocol | Port | FQDN | Description |
+|-|-|-|-|
+| TCP | `443/80` | `collector*.newrelic.com` | The required networks of New Relic application performance monitoring (APM) agents from the US region. See APM Agents Networks. |
+| TCP | `443/80` | `collector*.eu01.nr-data.net` | The required networks of New Relic APM agents from the EU region. See APM Agents Networks. |
+| TCP | `443` | `*.live.dynatrace.com` | The required network of Dynatrace APM agents. |
+| TCP | `443` | `*.live.ruxit.com` | The required network of Dynatrace APM agents. |
+| TCP | `443/80` | `*.saas.appdynamics.com` | The required network of AppDynamics APM agents. See SaaS Domains and IP Ranges. |
+
+#### Considerations
+
+- If you're running HTTP servers, you might need to add ports `80` and `443`.
+- Adding deny rules for some ports and protocols with a priority lower than `65000` might cause service interruption and unexpected behavior.
+
+## Next steps
+
+- [Azure Spring Apps documentation](./index.yml)
storage Storage Blob Tags Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-javascript.md
Blob index tags categorize data in your storage account using key-value tag attr
To learn more about this feature along with known issues and limitations, see [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md). > [!NOTE]
-> The examples in this article assume that you've created a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) object by using the guidance in the [Get started with Azure Blob Storage and JavaScript](storage-blob-javascript-get-started.md) article. Blobs in Azure Storage are organized into containers. Before you can upload a blob, you must first create a container. To learn how to create a container, see [Create a container in Azure Storage with JavaScript](storage-blob-container-create.md).
+> The examples in this article assume that you've created a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) object by using the guidance in the [Get started with Azure Blob Storage and JavaScript](storage-blob-javascript-get-started.md) article. Blobs in Azure Storage are organized into containers. Before you can upload a blob, you must first create a container. To learn how to create a container, see [Create a container in Azure Storage with JavaScript](storage-blob-container-create-javascript.md).
## Set tags
storage Nfs Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/nfs-comparison.md
description: Compare NFS access for Azure Files, Azure Blob Storage, and Azure N
Previously updated : 03/08/2023 Last updated : 03/20/2023
For more general comparisons, see [this article](storage-introduction.md) to com
|Available protocols |NFSv3<br></br>REST<br></br>Data Lake Storage Gen2 |SMB<br><br>NFSv4.1<br></br> (No interoperability between either protocol) |NFSv3 and NFSv4.1<br></br>SMB<br></br>Dual protocol (SMB and NFSv3, SMB and NFSv4.1) | |Key features | Integrated with HPC cache for low latency workloads. <br> </br> Integrated management, including lifecycle, immutable blobs, data failover, and metadata index. | Zonally redundant for high availability. <br></br> Consistent single-digit millisecond latency. <br></br>Predictable performance and cost that scales with capacity. |Extremely low latency (as low as sub-ms).<br></br>Rich ONTAP management capabilities such as snapshots, backup, cross-region replication, and cross-zone replication.<br></br>Consistent hybrid cloud experience. | |Performance (Per volume) |Up to 20,000 IOPS, up to 15 GiB/s throughput. |Up to 100,000 IOPS, up to 10 GiB/s throughput. |Up to 460,000 IOPS, up to 4.5 GiB/s throughput per regular volume, up to 10 GiB/s throughput per large volume. |
-|Scale | Up to 2 PiB for a single volume. <br></br> Up to ~4.75 TiB max for a single file.<br></br>No minimum capacity requirements. |Up to 100 TiB for a single file share.<br></br>Up to 4 TiB for a single file.<br></br>100 GiB min capacity. |Up to 100 TiB for a single regular volume, up to 500 TiB for a large volume.<br></br>Up to 16 TiB for a single file.<br></br>Consistent hybrid cloud experience. |
+|Scale | Up to 5 PiB for a single volume. <br></br> Up to 190.7 TiB for a single blob.<br></br>No minimum capacity requirements. |Up to 100 TiB for a single file share.<br></br>Up to 4 TiB for a single file.<br></br>100 GiB min capacity. |Up to 100 TiB for a single regular volume, up to 500 TiB for a large volume.<br></br>Up to 16 TiB for a single file.<br></br>Consistent hybrid cloud experience. |
|Pricing |[Azure Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/) |[Azure Files pricing](https://azure.microsoft.com/pricing/details/storage/files/) |[Azure NetApp Files pricing](https://azure.microsoft.com/pricing/details/netapp/) | ## Next steps
storage Files Nfs Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-nfs-protocol.md
The status of items that appear in this table may change over time as support co
## Performance NFS Azure file shares are only offered on premium file shares, which store data on solid-state drives (SSD). The IOPS and throughput of NFS shares scale with the provisioned capacity. See the [provisioned model](understanding-billing.md#provisioned-model) section of the **Understanding billing** article to understand the formulas for IOPS, IO bursting, and throughput. The average IO latencies are low-single-digit-millisecond for small IO size, while average metadata latencies are high-single-digit-millisecond. Metadata heavy operations such as untar and workloads like WordPress may face additional latencies due to the high number of open and close operations.
+> [!NOTE]
+> You can use the `nconnect` Linux mount option to improve performance for NFS Azure file shares at scale. For more information, see [Improve NFS Azure file share performance with nconnect](nfs-nconnect-performance.md).
+ ## Workloads > [!IMPORTANT] > Before using NFS Azure file shares in production, see [Troubleshoot NFS Azure file shares](files-troubleshoot-linux-nfs.md) for a list of known issues.
storage Files Troubleshoot Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-troubleshoot-performance.md
One potential cause is a lack of SMB multi-channel support for standard file sha
- Obtaining a VM with a bigger core might help improve throughput. - Running the client application from multiple VMs will increase throughput. - Use REST APIs where possible.-- For NFS file shares, nconnect is available, in preview. Not recommended for production workloads.
+- For NFS Azure file shares, `nconnect` is available. See [Improve NFS Azure file share performance with nconnect](nfs-nconnect-performance.md).
<a id="slowperformance"></a>
storage Files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-whats-new.md
description: Learn about new features and enhancements in Azure Files and Azure
Previously updated : 11/08/2022 Last updated : 03/21/2023
# What's new in Azure Files Azure Files is updated regularly to offer new features and enhancements. This article provides detailed information about what's new in Azure Files and Azure File Sync.
+## What's new in 2023
+
+### 2023 quarter 1 (January, February, March)
+#### Nconnect for NFS Azure file shares
+
+Nconnect is a client-side Linux mount option that increases performance at scale by allowing you to use more TCP connections between the Linux client and the Azure Premium Files service for NFSv4.1. With nconnect, you can increase performance at scale using fewer client machines to reduce total cost of ownership. For more information, see [Improve NFS Azure file share performance with nconnect](nfs-nconnect-performance.md).
++ ## What's new in 2022 ### 2022 quarter 4 (October, November, December)
For more information, see:
- [The provisioned model for premium Azure file shares](understanding-billing.md#provisioned-model) - [Azure Files pricing](https://azure.microsoft.com/pricing/details/storage/files/)
-#### NFS 4.1 protocol support is generally available
-Premium Azure file shares now support either the SMB or the NFS 4.1 protocols. NFS 4.1 is available in all regions where Azure Files supports the premium tier, for both locally redundant storage and zone-redundant storage. Azure file shares created with the NFS 4.1 protocol enabled are fully POSIX-compliant, distributed file shares that support a wide variety of Linux and container-based workloads. Some example workloads include: highly available SAP application layer, enterprise messaging, user home directories, custom line-of-business applications, database backups, database replication, and Azure Pipelines.
+#### NFSv4.1 protocol support is generally available
+Premium Azure file shares now support either the SMB or the NFSv4.1 protocols. NFSv4.1 is available in all regions where Azure Files supports the premium tier, for both locally redundant storage and zone-redundant storage. Azure file shares created with the NFSv4.1 protocol enabled are fully POSIX-compliant, distributed file shares that support a wide variety of Linux and container-based workloads. Some example workloads include: highly available SAP application layer, enterprise messaging, user home directories, custom line-of-business applications, database backups, database replication, and Azure Pipelines.
For more information, see:
storage Nfs Nconnect Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/nfs-nconnect-performance.md
+
+ Title: Improve NFS Azure file share performance with nconnect
+description: Learn how using nconnect with Linux clients can improve the performance of NFS Azure file shares at scale.
+++ Last updated : 03/20/2023++++
+# Improve NFS Azure file share performance with `nconnect`
+
+`Nconnect` is a client-side Linux mount option that increases performance at scale by allowing you to use more TCP connections between the client and the Azure Premium Files service for NFSv4.1, while maintaining the resiliency of platform as a service (PaaS).
+
+## Applies to
+| File share type | SMB | NFS |
+|-|:-:|:-:|
+| Standard file shares (GPv2), LRS/ZRS | ![No, this article doesn't apply to standard SMB Azure file shares LRS/ZRS.](../media/icons/no-icon.png) | ![NFS shares are only available in premium Azure file shares.](../media/icons/no-icon.png) |
+| Standard file shares (GPv2), GRS/GZRS | ![No, this article doesn't apply to standard SMB Azure file shares GRS/GZRS.](../media/icons/no-icon.png) | ![NFS is only available in premium Azure file shares.](../media/icons/no-icon.png) |
+| Premium file shares (FileStorage), LRS/ZRS | ![No, this article doesn't apply to premium SMB Azure file shares.](../media/icons/no-icon.png) | ![Yes, this article applies to premium NFS Azure file shares.](../media/icons/yes-icon.png) |
+
+## Benefits of `nconnect`
+
+With `nconnect`, you can increase performance at scale using fewer client machines to reduce total cost of ownership (TCO). `Nconnect` increases performance by using multiple TCP channels on one or more NICs, using single or multiple clients. Without `nconnect`, you'd need roughly 20 client machines in order to achieve the bandwidth scale limits (10 GiB/s) offered by the largest premium Azure file share provisioning size. With `nconnect`, you can achieve those limits using only 6-7 clients. That's almost a 70% reduction in computing cost, while providing significant improvements to IOPS and throughput at scale (see the following table).
+
+| **Metric (operation)** | **I/O size** | **Performance improvement** |
+|---|---|---|
+| IOPS (write) | 64K, 1024K | 3x |
+| IOPS (read) | All I/O sizes | 2-4x |
+| Throughput (write) | 64K, 1024K | 3x |
+| Throughput (read) | All I/O sizes | 2-4x |
++
+## Prerequisites
+
+- The latest Linux distributions fully support `nconnect`. For older Linux distributions, ensure that the Linux kernel version is 5.3 or higher.
+- Per-mount configuration is only supported when a single file share is used per storage account over a private endpoint.
+
+## Performance impact of `nconnect`
+
+We achieved the following performance results when using the `nconnect` mount option with NFS Azure file shares on Linux clients at scale. For more information on how we achieved these results, see [performance test configuration](#performance-test-configuration).
+++
+## Recommendations
+
+Follow these recommendations to get the best results from `nconnect`.
+
+### Set `nconnect=4`
+While Azure Files supports setting `nconnect` up to the maximum setting of 16, we recommend configuring the mount options with the optimal setting of `nconnect=4`. Currently, there are no gains beyond four channels for the Azure Files implementation of `nconnect`. In fact, exceeding four channels to a single Azure file share from a single client might adversely affect performance due to TCP network saturation.
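+
+For illustration, a typical NFSv4.1 mount with this option might look like the following sketch. The storage account, share, and mount path are placeholders, and the other mount options follow the standard guidance for mounting NFS Azure file shares:
+
+```bash
+# Sketch: mount an NFS Azure file share with four TCP channels (nconnect=4).
+sudo mkdir -p /mount/<storage-account>/<file-share>
+sudo mount -t nfs -o vers=4,minorversion=1,sec=sys,nconnect=4 \
+    <storage-account>.file.core.windows.net:/<storage-account>/<file-share> \
+    /mount/<storage-account>/<file-share>
+```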
+
+### Size virtual machines carefully
+Depending on your workload requirements, it's important to correctly size the client machines to avoid being restricted by their [expected network bandwidth](../../virtual-network/virtual-machine-network-throughput.md#expected-network-throughput). You don't need multiple NICs in order to achieve the expected network throughput. While it's common to use [general purpose VMs](../../virtual-machines/sizes-general.md) with Azure Files, various VM types are available depending on your workload needs and region availability. For more information, see [Azure VM Selector](https://azure.microsoft.com/pricing/vm-selector/).
+
+### Keep queue depth less than or equal to 64
+Queue depth is the number of pending I/O requests that a storage resource can service. We don't recommend exceeding the optimal queue depth of 64. If you do, you won't see any more performance gains. For more information, see [Queue depth](understand-performance.md#queue-depth).
+
+### `Nconnect` per-mount configuration
+If a workload requires mounting multiple shares from one or more storage accounts with different `nconnect` settings from a single client, we can't guarantee that those settings will persist when mounting over the public endpoint. Per-mount configuration is only supported when a single Azure file share is used per storage account over the private endpoint, as described in Scenario 1.
+
+#### Scenario 1: (supported) `nconnect` per-mount configuration over private endpoint with multiple storage accounts
+
+- StorageAccount.file.core.windows.net = 10.10.10.10
+- StorageAccount2.file.core.windows.net = 10.10.10.11
+ - `Mount StorageAccount.file.core.windows.net:/FileShare1 nconnect=4`
+ - `Mount StorageAccount2.file.core.windows.net:/FileShare1`
+
+#### Scenario 2: (not supported) `nconnect` per-mount configuration over public endpoint
+
+- StorageAccount.file.core.windows.net = 52.239.238.8
+- StorageAccount2.file.core.windows.net = 52.239.238.7
+ - `Mount StorageAccount.file.core.windows.net:/FileShare1 nconnect=4`
+ - `Mount StorageAccount.file.core.windows.net:/FileShare2`
+ - `Mount StorageAccount2.file.core.windows.net:/FileShare1`
+
+> [!NOTE]
+> Even if the storage account resolves to a different IP address, we can't guarantee that address will persist because public endpoints aren't static addresses.
+
+#### Scenario 3: (not supported) `nconnect` per-mount configuration over private endpoint with multiple shares on single storage account
+
+- StorageAccount.file.core.windows.net = 10.10.10.10
+ - `Mount StorageAccount.file.core.windows.net:/FileShare1 nconnect=4`
+ - `Mount StorageAccount.file.core.windows.net:/FileShare2`
+ - `Mount StorageAccount.file.core.windows.net:/FileShare3`
+
+## Performance test configuration
+
+We used the following resources and benchmarking tools to achieve and measure the results outlined in this article.
+
+- **Single client:** Azure Virtual Machine ([DSv4-Series](../../virtual-machines/dv4-dsv4-series.md#dsv4-series)) with single NIC
+- **OS:** Linux (Ubuntu 20.04)
+- **NFS storage:** Azure Files premium file share (provisioned 30 TiB, set `nconnect=4`)
+
+| **Size** | **vCPU** | **Memory** | **Temp storage (SSD)** | **Max data disks** | **Max NICs** | **Expected network bandwidth** |
+|---|---|---|---|---|---|---|
+| Standard_D16_v4 | 16 | 64 GiB | Remote storage only | 32 | 8 | 12,500 Mbps |
+
+### Benchmarking tools and tests
+
+We used Flexible I/O Tester (FIO), a free, open-source disk I/O tool used both for benchmarking and for stress/hardware verification. To install FIO, follow the Binary Packages section in the [FIO README file](https://github.com/axboe/fio#readme) for the platform of your choice.
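+
+For example, on the Ubuntu client used in these tests, installing FIO from the distribution packages might look like the following; adjust the package manager for your distribution:
+
+```bash
+# Sketch: install FIO from the Ubuntu/Debian package repository.
+sudo apt-get update && sudo apt-get install -y fio
+```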
+
+While these tests focus on random I/O access patterns, you get similar results when using sequential I/O.
+
+#### High IOPS: 100% reads
+
+**4k I/O size - random read - 64 queue depth**
+
+```bash
+fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=4k --iodepth=64 --filesize=4G --rw=randread --group_reporting --ramp_time=300 --name=randread-4k
+```
+
+**8k I/O size - random read - 64 queue depth**
+
+```bash
+fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=8k --iodepth=64 --filesize=4G --rw=randread --group_reporting --ramp_time=300 --name=randread-8k
+```
+
+#### High throughput: 100% reads
+
+**64k I/O size - random read - 64 queue depth**
+
+```bash
+fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=64k --iodepth=64 --filesize=4G --rw=randread --group_reporting --ramp_time=300 --name=randread-64k
+```
+
+**1024k I/O size - 100% random read - 64 queue depth**
+
+```bash
+fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=1024k --iodepth=64 --filesize=4G --rw=randread --group_reporting --ramp_time=300 --name=randread-1024k
+```
+
+#### High IOPS: 100% writes
+
+**4k I/O size - 100% random write - 64 queue depth**
+
+```bash
+fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=4k --iodepth=64 --filesize=4G --rw=randwrite --group_reporting --ramp_time=300 --name=randwrite-4k
+```
+
+**8k I/O size - 100% random write - 64 queue depth**
+
+```bash
+fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=8k --iodepth=64 --filesize=4G --rw=randwrite --group_reporting --ramp_time=300 --name=randwrite-8k
+```
+
+#### High throughput: 100% writes
+
+**64k I/O size - 100% random write - 64 queue depth**
+
+```bash
+fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=64k --iodepth=64 --filesize=4G --rw=randwrite --group_reporting --ramp_time=300 --name=randwrite-64k
+```
+
+**1024k I/O size - 100% random write - 64 queue depth**
+
+```bash
+fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=1024k --iodepth=64 --filesize=4G --rw=randwrite --group_reporting --ramp_time=300 --name=randwrite-1024k
+```
+
+## Performance considerations
+
+When using the `nconnect` mount option, you should closely evaluate workloads that have the following characteristics:
+
+- Latency sensitive write workloads that are single threaded and/or use a low queue depth (less than 16)
+- Latency sensitive read workloads that are single threaded and/or use a low queue depth in combination with smaller I/O sizes
+
+Not all workloads require high-scale IOPS or throughput performance. For smaller scale workloads, `nconnect` might not make sense. Use the following table to decide whether `nconnect` will be advantageous for your workload. Scenarios highlighted in green are recommended, while those highlighted in red are not. Those highlighted in yellow are neutral.
++
+## See also
+- For mounting instructions, see [Mount NFS file Share to Linux](storage-files-how-to-mount-nfs-shares.md).
+- For a comprehensive list of mount options, see [Linux NFS man page](https://linux.die.net/man/5/nfs).
+- For information on latency, IOPS, throughput, and other performance concepts, see [Understand Azure Files performance](understand-performance.md).
storage Storage Files How To Mount Nfs Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-how-to-mount-nfs-shares.md
Azure file shares can be mounted in Linux distributions using either the Server
## Mount an NFS share using the Azure portal
+> [!NOTE]
+> You can use the `nconnect` Linux mount option to improve performance for NFS Azure file shares at scale. For more information, see [Improve NFS Azure file share performance with nconnect](nfs-nconnect-performance.md).
+ 1. Once the file share is created, select the share and select **Connect from Linux**. 1. Enter the mount path you'd like to use, then copy the script. 1. Connect to your client and use the provided mounting script.
synapse-analytics Quickstart Integrate Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/quickstart-integrate-azure-machine-learning.md
This section will guide you on how to create an Azure Machine Learning linked se
### Create an Azure ML linked service
-1. In the Synapse workspace where you want to create the new Azure Machine Learning linked service, go to **Management** > **Linked service**, and create a new linked service with type "Azure Machine Learning".
+1. In the Synapse workspace where you want to create the new Azure Machine Learning linked service, go to **Manage** > **Linked services**, and create a new linked service with type "Azure Machine Learning".
![Create linked service](media/quickstart-integrate-azure-machine-learning/quickstart-integrate-azure-machine-learning-create-linked-service-00a.png)
This step will create a new Service Principal. If you want to use an existing Se
### Create an Azure ML linked service
-1. In the Synapse workspace where you want to create the new Azure Machine Learning linked service, go to **Management** -> **Linked service**, create a new linked service with type "Azure Machine Learning".
+1. In the Synapse workspace where you want to create the new Azure Machine Learning linked service, go to **Manage** -> **Linked services**, create a new linked service with type "Azure Machine Learning".
![Create linked service](media/quickstart-integrate-azure-machine-learning/quickstart-integrate-azure-machine-learning-create-linked-service-00a.png)
synapse-analytics Sql Data Warehouse Tables Distribute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-distribute.md
Title: Distributed tables design guidance description: Recommendations for designing hash-distributed and round-robin distributed tables using dedicated SQL pool.- Previously updated : 09/27/2022 Last updated : 03/20/2023 --++ # Guidance for designing distributed tables using dedicated SQL pool in Azure Synapse Analytics
WITH
``` > [!NOTE]
-> Multi-column distribution is currently in preview for Azure Synapse Analytics. For more information on joining the preview, see multi-column distribution with [CREATE MATERIALIZED VIEW](/sql/t-sql/statements/create-materialized-view-as-select-transact-sql), [CREATE TABLE](/sql/t-sql/statements/create-table-azure-sql-data-warehouse), or [CREATE TABLE AS SELECT](/sql/t-sql/statements/create-materialized-view-as-select-transact-sql).
+> Multi-column distribution in Azure Synapse Analytics can be enabled by changing the database's compatibility level to `50` with this command.
+> `ALTER DATABASE SCOPED CONFIGURATION SET DW_COMPATIBILITY_LEVEL = 50;`
+> For more information on setting the database compatibility level, see [ALTER DATABASE SCOPED CONFIGURATION](/sql/t-sql/statements/alter-database-scoped-configuration-transact-sql). For more information on multi-column distributions, see [CREATE MATERIALIZED VIEW](/sql/t-sql/statements/create-materialized-view-as-select-transact-sql), [CREATE TABLE](/sql/t-sql/statements/create-table-azure-sql-data-warehouse), or [CREATE TABLE AS SELECT](/sql/t-sql/statements/create-materialized-view-as-select-transact-sql).
Data stored in the distribution column(s) can be updated. Updates to data in distribution column(s) could result in data shuffle operation.
virtual-desktop Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/insights.md
Title: How to monitor with Azure Virtual Desktop Insights - Azure
description: How to use Azure Virtual Desktop Insights. Previously updated : 12/08/2022 Last updated : 03/21/2023
You can open Azure Virtual Desktop Insights with one of the following methods:
- Search for and select **Azure Monitor** from the Azure portal. Select **Insights Hub** under **Insights**, then select **Azure Virtual Desktop**. Once you have the page open, enter the **Subscription**, **Resource group**, **Host pool**, and **Time range** of the environment you want to monitor.
->[!NOTE]
->Azure Virtual Desktop Insights currently only supports monitoring one subscription, resource group, and host pool at a time. However, the public preview for Azure Virtual Desktop Insights at Scale (preview) lets you select multiple subscriptions, resource groups, and host pools at a time. You can use this preview feature by selecting **Insights (Preview)**. If you can't find the environment you want to monitor, see [our troubleshooting documentation](troubleshoot-insights.md).
- ## Log Analytics settings To start using Azure Virtual Desktop Insights, you'll need at least one Log Analytics workspace. Use a designated Log Analytics workspace for your Azure Virtual Desktop session hosts to ensure that performance counters and events are only collected from session hosts in your Azure Virtual Desktop deployment. If you already have a workspace set up, skip ahead to [Set up using the configuration workbook](#set-up-using-the-configuration-workbook). To set one up, see [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md).