Updates from: 09/27/2022 01:11:34
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Partner Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-gallery.md
Previously updated : 04/21/2022 Last updated : 09/14/2022
Microsoft partners with the following ISVs for role-based access control.
| ISV partner | Description and integration walkthroughs |
|:-|:--|
+| ![Screenshot of a grit IAM logo.](./medi) provides authentication, authorization, profile and role management, and delegated B2B SaaS application administration. It also enables role-based access control (RBAC) for end-users of Azure AD B2C.|
| ![Screenshot of a n8identity logo](./medi) is an Identity-as-a-Service governance platform that provides a solution to address customer account migration and Customer Service Request (CSR) administration running on Microsoft Azure. |
| ![Screenshot of a Saviynt logo](./medi) cloud-native platform promotes better security, compliance, and governance through intelligent analytics and cross-application integration for streamlining IT modernization. |
| ![Screenshot of a WhoIAM Rampart logo](./medi) provides a fully integrated helpdesk and invitation-gated user registration experience. It allows support specialists to efficiently perform tasks like resetting passwords and multi-factor authentication without using Azure. It also enables apps and role-based access control (RBAC) for end users of Azure AD B2C. |
active-directory-b2c Partner Grit Iam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-grit-iam.md
+
+ Title: Configure the Grit IAM B2B2C solution with Azure Active Directory B2C
+
+description: Learn how to integrate Azure AD B2C authentication with the Grit IAM B2B2C solution
++++++ Last updated : 9/15/2022+++++
+# Tutorial: Configure the Grit IAM B2B2C solution with Azure Active Directory B2C
+
+In this tutorial, you learn how to integrate Azure Active Directory B2C (Azure AD B2C) authentication with a [Grit IAM B2B2C](https://www.gritiam.com/b2b2c) solution. You can use the solution to provide secure, reliable, self-serviceable, and user-friendly identity and access management to your customers. Shared profile data such as first name, last name, home address, and email used in web and mobile applications are stored in a centralized manner with consideration to compliance and regulatory needs.
++
+Use Grit's B2B2C solution for:
+
+- Authentication, authorization, profile and role management, and delegated B2B SaaS application administration.
+- Role-based access control for Azure AD B2C applications.
+
+## Prerequisites
+
+To get started, ensure the following prerequisites are met:
+
+- A Grit IAM account. You can go to [Grit IAM B2B2C solution](https://www.gritiam.com/b2b2c) to get a demo.
+- An Azure AD subscription. If you don't have one, you can create a [free Azure account](https://azure.microsoft.com/free/).
+- An Azure AD B2C tenant linked to the Azure subscription. You can learn more at [Tutorial: Create an Azure Active Directory B2C tenant](tutorial-create-tenant.md).
+- Configure your application in the Azure portal.
+
+## Scenario description
+
+Contoso does business with end customers and large enterprises, like Fabrikam_big1 and Fabrikam_big2. There are also small enterprise customers, like Fabrikam_small1 and Fabrikam_small2, and direct business is done with end customers, like Smith1 and Smith2.
+
+*Contoso* has web and mobile applications and develops new applications. The applications rely on shared user profile data such as first name, last name, address, and email. Contoso wants to centralize the profile data so that applications aren't collecting and storing it, and to store the profile information in accordance with compliance and regulatory requirements.
+
+![Screenshot that shows the architecture diagram of how the components are connected to each other.](./media/partner-grit-iam/grit-b2b2c-architecture.png)
+
+This integration is composed of the following components:
+
+- **Azure AD B2C Identity Experience Framework (IEF)**: An engine that executes user journeys, which can include validating credentials, performing MFA, and checking user access. It's aided by the Azure AD database and the API layer, and it's configured using XML.
+
+- **Grit API layer**: This layer exposes user profile data and metadata about organizations and applications. The data is stored in Azure AD and Cosmos DB.
+
+- **Grit Onboarding portal**: Used by admins to onboard applications and organizations.
+
+- **Grit Admin portal**: Used by the *Contoso* admin and by admins of *fabrikam_big1* and *fabrikam_small1*. Delegated admins can manage users and their access. Super admins of the organizations manage all users.
++
+- **Grit Visual IEF editor**: A low-code/no-code editor, provided by Grit, that customizes the user journey. It produces the XML used by IEF. *Contoso* developers use it to customize user journeys.
++
+- **Applications**: Developed by *Contoso* or third parties. Applications use OpenID Connect or SAML to connect to the customer identity and access management (CIAM) system. The tokens they receive contain user profile information, and applications can make API calls, with the token as the authentication mechanism, to perform create, read, update, and delete (CRUD) operations on user profile data.
++
+> [!NOTE]
+> Components developed by Grit, except the visual IEF editor, will be deployed in the Contoso Azure environment.
+
+## Configure Grit B2B2C with Azure AD B2C
+
+Use the guidance provided in the following sections to get started with configuration.
+
+### Step 1 - Setup infrastructure
+
+To get started with setup:
+
+- Contact [Grit support](mailto:info@gritsoftwaresystems.com) to obtain access.
+- For evaluation, the Grit support team will deploy the infrastructure in the Grit Azure subscription and they'll give you admin rights.
+- After you purchase the solution, Grit engineers will install the production version in your Azure subscription.
+- The infrastructure integrates with your virtual network (VNet) setup and supports APIM (third-party API management) and your firewall.
+- Grit implementation engineers can provide custom recommendations based on your infrastructure.
+
+### Step 2 - Create admins in the Admin Portal
+
+Use the Grit Admin portal to give administrators access to the portal, where they can perform the following tasks:
+
+- Add other admins, such as super, organization, and application admins, in the hierarchy, depending on their permission level.
+
+- View, accept, or reject user requests for application registration.
+
+- Search users.
+
+To learn how to assign admin roles, see the [tutorial](https://app.archbee.com/doc/j1VX2J3B3xJ-zMqnmlDA5/9IW3PgI2yn1cCpPGm1vVN).
+
+### Step 3 - Onboard organizations
+
+Use the Onboarding portal to onboard one or more of your customers and their identity provider (IdP) that supports OpenID Connect (OIDC) or SAML. Onboard customers without an IdP for local account authentication. For B2C applications, enable social authentication.
+
+In the Grit Onboarding portal, create a super admin for the tenant. The Onboarding portal defines the claims per application and per organization. Thereafter, the portal creates an endpoint URL for the sign-in and sign-up user flow.
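+
+The exact endpoint URL is generated by the Onboarding portal. As an illustration only, a sign-up and sign-in authorization request against an Azure AD B2C policy endpoint typically has the following shape; the tenant name, policy name, and client ID here are placeholders rather than values produced by the Grit solution:
+
+```
+https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/<policy-name>/oauth2/v2.0/authorize?
+  client_id=<application-id>
+  &response_type=code
+  &redirect_uri=https%3A%2F%2Fjwt.ms
+  &scope=openid%20offline_access
+  &state=12345
+  &nonce=678910
+```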
+
+To learn how to onboard an organization, check this [tutorial](https://app.archbee.com/doc/G_YZFq_VwvgMlmX-_efmX/8m90WVb2M6Yi0gCe7yor2).
+
+### Step 4 - Integrate applications using OIDC or SAML
+
+After you onboard the customer, the Grit Onboarding portal provides URLs to onboard the applications.
+
+Learn [how your customers can sign up, sign in, and manage their profiles](add-sign-up-and-sign-in-policy.md?pivots=b2c-custom-policy).
+
+## Test the scenarios
+
+Test the authentication [scenarios](#scenario-description) in your applications. Use the Grit Admin portal to change roles and user properties. Provide delegated access to the Admin portal by inviting users.
+
+## Next steps
+
+- [Azure AD B2C custom policy overview](custom-policy-overview.md)
+
+- [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](custom-policy-get-started.md?tabs=applications)
+
+- [SAAS Platform - Organization Application Onboarding Portal](https://app.archbee.com/doc/G_YZFq_VwvgMlmX-_efmX/8m90WVb2M6Yi0gCe7yor2)
+
+- [SAAS Platform - Admin Portal](https://app.archbee.com/doc/j1VX2J3B3xJ-zMqnmlDA5/9IW3PgI2yn1cCpPGm1vVN)
active-directory Concept Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-registration-mfa-sspr-combined.md
Previously updated : 06/17/2022 Last updated : 09/23/2022
# Combined security information registration for Azure Active Directory overview
-Before combined registration, users registered authentication methods for Azure AD Multi-Factor Authentication and self-service password reset (SSPR) separately. People were confused that similar methods were used for Multi-Factor Authentication and SSPR but they had to register for both features. Now, with combined registration, users can register once and get the benefits of both Multi-Factor Authentication and SSPR. We recommend this video on [How to enable and configure SSPR in Azure AD](https://www.youtube.com/watch?v=rA8TvhNcCvQ)
+Before combined registration, users registered authentication methods for Azure AD Multi-Factor Authentication and self-service password reset (SSPR) separately. People were confused that similar methods were used for multifactor authentication and SSPR but they had to register for both features. Now, with combined registration, users can register once and get the benefits of both multifactor authentication and SSPR. We recommend this video on [How to enable and configure SSPR in Azure AD](https://www.youtube.com/watch?v=rA8TvhNcCvQ)
> [!NOTE] 
> Starting on August 15th 2020, all new Azure AD tenants will be automatically enabled for combined registration.
If you want to force a specific language, you can add `?lng=<language>` to the e
## Methods available in combined registration
-Combined registration supports the following authentication methods and actions:
+Combined registration supports the authentication methods and actions in the following table.
| Method | Register | Change | Delete |
| --- | --- | --- | --- |
Combined registration supports the following authentication methods and actions:
| FIDO2 security keys<br />*Managed mode only from the [Security info](https://mysignins.microsoft.com/security-info) page*| Yes | Yes | Yes |

> [!NOTE]
-> App passwords are available only to users who have been enforced for Multi-Factor Authentication. App passwords are not available to users who are enabled for Multi-Factor Authentication via a Conditional Access policy.
+> App passwords are available only to users who have been enforced for Azure AD Multi-Factor Authentication. App passwords are not available to users who are enabled for Azure AD Multi-Factor Authentication by a Conditional Access policy.
-Users can set one of the following options as the default Multi-Factor Authentication method:
+Users can set one of the following options as the default multifactor authentication method.
- Microsoft Authenticator – push notification or passwordless
- Authenticator app or hardware token – code
There are two modes of combined registration: interrupt and manage.
- **Interrupt mode** is a wizard-like experience, presented to users when they register or refresh their security info at sign-in. 
- **Manage mode** is part of the user profile and allows users to manage their security info.
-For both modes, users who have previously registered a method that can be used for Multi-Factor Authentication need to perform Multi-Factor Authentication before they can access their security info. Users must confirm their information before continuing to use their previously registered methods.
+For both modes, users who have previously registered a method that can be used for Azure AD Multi-Factor Authentication need to perform multifactor authentication before they can access their security info. Users must confirm their information before continuing to use their previously registered methods.
### Interrupt mode
-Combined registration adheres to both Multi-Factor Authentication and SSPR policies, if both are enabled for your tenant. These policies control whether a user is interrupted for registration during sign-in and which methods are available for registration. If only an SSPR policy is enabled, then users will be able to skip the registration interruption and complete it at a later time.
+Combined registration adheres to both multifactor authentication and SSPR policies, if both are enabled for your tenant. These policies control whether a user is interrupted for registration during sign-in and which methods are available for registration. If only an SSPR policy is enabled, then users will be able to skip the registration interruption and complete it at a later time.
The following are sample scenarios where users might be prompted to register or refresh their security info: 

-- *Multi-Factor Authentication registration enforced through Identity Protection:* Users are asked to register during sign-in. They register Multi-Factor Authentication methods and SSPR methods (if the user is enabled for SSPR).
-- *Multi-Factor Authentication registration enforced through per-user Multi-Factor Authentication:* Users are asked to register during sign-in. They register Multi-Factor Authentication methods and SSPR methods (if the user is enabled for SSPR).
-- *Multi-Factor Authentication registration enforced through Conditional Access or other policies:* Users are asked to register when they use a resource that requires Multi-Factor Authentication. They register Multi-Factor Authentication methods and SSPR methods (if the user is enabled for SSPR).
+- *Multifactor Authentication registration enforced through Identity Protection:* Users are asked to register during sign-in. They register multifactor authentication methods and SSPR methods (if the user is enabled for SSPR).
+- *Multifactor Authentication registration enforced through per-user multifactor authentication:* Users are asked to register during sign-in. They register multifactor authentication methods and SSPR methods (if the user is enabled for SSPR).
+- *Multifactor Authentication registration enforced through Conditional Access or other policies:* Users are asked to register when they use a resource that requires multifactor authentication. They register multifactor authentication methods and SSPR methods (if the user is enabled for SSPR).
- *SSPR registration enforced:* Users are asked to register during sign-in. They register only SSPR methods. 
- *SSPR refresh enforced:* Users are required to review their security info at an interval set by the admin. Users are shown their info and can confirm the current info or make changes if needed.
-When registration is enforced, users are shown the minimum number of methods needed to be compliant with both Multi-Factor Authentication and SSPR policies, from most to least secure. Users going through combined registration where both MFA and SSPR registration is enforced and the SSPR policy requires two methods will first be required to register an MFA method as the first method and can select another MFA or SSPR specific method as the second registered method (e.g. email, security questions etc.)
+When registration is enforced, users are shown the minimum number of methods needed to be compliant with both multifactor authentication and SSPR policies, from most to least secure. If both MFA and SSPR registration are enforced and the SSPR policy requires two methods, users are first required to register an MFA method and can then select another MFA or SSPR-specific method as the second registered method (for example, email or security questions).
Consider the following example scenario:
The following flowchart describes which methods are shown to a user when interru
![Combined security info flowchart](media/concept-registration-mfa-sspr-combined/combined-security-info-flow-chart.png)
-If you have both Multi-Factor Authentication and SSPR enabled, we recommend that you enforce Multi-Factor Authentication registration.
+If you have both multifactor authentication and SSPR enabled, we recommend that you enforce multifactor authentication registration.
If the SSPR policy requires users to review their security info at regular intervals, users are interrupted during sign-in and shown all their registered methods. They can confirm the current info if it's up to date, or they can make changes if they need to. Users must perform multi-factor authentication when accessing this page.
Users can access manage mode by going to [https://aka.ms/mysecurityinfo](https:/
An admin has enforced registration.
-A user has not set up all required security info and goes to the Azure portal. After the user enters the user name and password, the user is prompted to set up security info. The user then follows the steps shown in the wizard to set up the required security info. If your settings allow it, the user can choose to set up methods other than those shown by default. After users complete the wizard, they review the methods they set up and their default method for Multi-Factor Authentication. To complete the setup process, the user confirms the info and continues to the Azure portal.
+A user has not set up all required security info and goes to the Azure portal. After the user enters the user name and password, the user is prompted to set up security info. The user then follows the steps shown in the wizard to set up the required security info. If your settings allow it, the user can choose to set up methods other than those shown by default. After users complete the wizard, they review the methods they set up and their default method for multifactor authentication. To complete the setup process, the user confirms the info and continues to the Azure portal.
### Set up security info from My Account
An admin has not enforced registration.
A user who hasn't yet set up all required security info goes to [https://myaccount.microsoft.com](https://myaccount.microsoft.com). The user selects **Security info** in the left pane. From there, the user chooses to add a method, selects any of the methods available, and follows the steps to set up that method. When finished, the user sees the method that was set up on the Security info page.
+### Set up other methods after partial registration
+
+If a user has partially satisfied MFA or SSPR registration because of existing authentication method registrations performed by the user or an admin, the user is only asked to register the additional information allowed by the Authentication methods policy. If more than one other authentication method is available to choose and register, the registration experience shows an option titled **I want to set up another method** that lets the user set up their desired authentication method.
++ 

### Delete security info from My Account 

A user who has previously set up at least one method navigates to [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo). The user chooses to delete one of the previously registered methods. When finished, the user no longer sees that method on the Security info page. 

### Change the default method from My Account
-A user who has previously set up at least one method that can be used for Multi-Factor Authentication navigates to [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo). The user changes the current default method to a different default method. When finished, the user sees the new default method on the Security info page.
+A user who has previously set up at least one method that can be used for multifactor authentication navigates to [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo). The user changes the current default method to a different default method. When finished, the user sees the new default method on the Security info page.
### Switch directory
active-directory Block Legacy Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/block-legacy-authentication.md
Title: Block legacy authentication - Azure Active Directory
-description: Learn how to improve your security posture by blocking legacy authentication using Azure AD Conditional Access.
+description: Block legacy authentication using Azure AD Conditional Access.
Previously updated : 08/22/2022 Last updated : 09/26/2022
To give your users easy access to your cloud apps, Azure Active Directory (Azure AD) supports a broad variety of authentication protocols including legacy authentication. However, legacy authentication doesn't support things like multifactor authentication (MFA). MFA is a common requirement to improve security posture in organizations. 

> [!NOTE]
-> Effective October 1, 2022, we will begin to permanently disable Basic Authentication for Exchange Online in all Microsoft 365 tenants regardless of usage, except for SMTP Authentication. Read more [here](/exchange/clients-and-mobile-in-exchange-online/deprecation-of-basic-authentication-exchange-online)
+> Effective October 1, 2022, we will begin to permanently disable Basic Authentication for Exchange Online in all Microsoft 365 tenants regardless of usage, except for SMTP Authentication. For more information, see the article [Deprecation of Basic authentication in Exchange Online](/exchange/clients-and-mobile-in-exchange-online/deprecation-of-basic-authentication-exchange-online)
Alex Weinert, Director of Identity Security at Microsoft, in his March 12, 2020 blog post [New tools to block legacy authentication in your organization](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/new-tools-to-block-legacy-authentication-in-your-organization/ba-p/1225302#) emphasizes why organizations should block legacy authentication and what other tools Microsoft provides to accomplish this task:
There are two ways to use Conditional Access policies to block legacy authentica
### Directly blocking legacy authentication
-The easiest way to block legacy authentication across your entire organization is by configuring a Conditional Access policy that applies specifically to legacy authentication clients and blocks access. When assigning users and applications to the policy, make sure to exclude users and service accounts that still need to sign in using legacy authentication. When choosing the cloud apps in which to apply this policy, select All cloud apps, targeted apps such as Office 365 (recommended) or at a minimum, Office 365 Exchange Online. Configure the client apps condition by selecting **Exchange ActiveSync clients** and **Other clients**. To block access for these client apps, configure the access controls to Block access.
-
-![Client apps condition configured to block legacy auth](./media/block-legacy-authentication/client-apps-condition-configured-yes.png)
+The easiest way to block legacy authentication across your entire organization is by configuring a Conditional Access policy that applies specifically to legacy authentication clients and blocks access. When assigning users and applications to the policy, make sure to exclude users and service accounts that still need to sign in using legacy authentication. When choosing the cloud apps in which to apply this policy, select All cloud apps, targeted apps such as Office 365 (recommended) or at a minimum, Office 365 Exchange Online. Organizations can use the policy available in [Conditional Access templates](concept-conditional-access-policy-common.md) or the common policy [Conditional Access: Block legacy authentication](howto-conditional-access-policy-block-legacy.md) as a reference.
### Indirectly blocking legacy authentication 

If your organization isn't ready to block legacy authentication across the entire organization, you should ensure that sign-ins using legacy authentication aren't bypassing policies that require grant controls such as requiring multifactor authentication or compliant/hybrid Azure AD joined devices. During authentication, legacy authentication clients don't support sending MFA, device compliance, or join state information to Azure AD. Therefore, apply policies with grant controls to all client applications so that legacy authentication based sign-ins that can't satisfy the grant controls are blocked. With the general availability of the client apps condition in August 2020, newly created Conditional Access policies apply to all client apps by default.
-![Client apps condition default configuration](./media/block-legacy-authentication/client-apps-condition-configured-no.png)
-
## What you should know 

It can take up to 24 hours for the Conditional Access policy to go into effect.
active-directory Concept Conditional Access Grant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-grant.md
Previously updated : 09/06/2022 Last updated : 09/26/2022
Within a Conditional Access policy, an administrator can use access controls to grant or block access to resources. 

## Block access
active-directory Configure User Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-user-consent.md
To configure user consent settings through the Azure portal:
# [PowerShell](#tab/azure-powershell)
-To choose which app consent policy governs user consent for applications, you can use the latest [Azure AD PowerShell](/powershell/module/azuread/?view=azureadps-2.0&preserve-view=true) module.
+To choose which app consent policy governs user consent for applications, you can use the [Microsoft Graph PowerShell](/powershell/microsoftgraph/get-started?view=graph-powershell-1.0&preserve-view=true) module. The cmdlets used here are included in the [Microsoft.Graph.Identity.SignIns](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.SignIns) module.
-> [!NOTE]
-> The instructions below use the generally available Azure AD PowerShell module ([AzureAD](https://www.powershellgallery.com/packages/AzureAD)). The parameter names are different in the preview version of this module ([AzureADPreview](https://www.powershellgallery.com/packages/AzureADPreview)). If you have both modules installed, ensure you're using the cmdlet from the correct module by first running:
->
-> ```powershell
-> Remove-Module AzureADPreview -ErrorAction SilentlyContinue
-> Import-Module AzureAD
-> ```
+#### Connect to Microsoft Graph PowerShell
+
+Connect to Microsoft Graph PowerShell using the least-privilege permission needed. For reading the current user consent settings, use *Policy.Read.All*. For reading and changing the user consent settings, use *Policy.ReadWrite.Authorization*.
+
+```powershell
+Connect-MgGraph -Scopes "Policy.ReadWrite.Authorization"
+```
#### Disable user consent 

To disable user consent, set the consent policies that govern user consent to empty: 

```powershell
-Set-AzureADMSAuthorizationPolicy -DefaultUserRolePermissions @{
+Update-MgPolicyAuthorizationPolicy -DefaultUserRolePermissions @{
"PermissionGrantPoliciesAssigned" = @() } ```
Set-AzureADMSAuthorizationPolicy -DefaultUserRolePermissions @{
To allow user consent, choose which app consent policy should govern users' authorization to grant consent to apps: 

```powershell
-Set-AzureADMSAuthorizationPolicy -DefaultUserRolePermissions @{
+Update-MgPolicyAuthorizationPolicy -DefaultUserRolePermissions @{
"PermissionGrantPoliciesAssigned" = @("managePermissionGrantsForSelf.{consent-policy-id}") } ```
Replace `{consent-policy-id}` with the ID of the policy you want to apply. You c
For example, to enable user consent subject to the built-in policy `microsoft-user-default-low`, run the following commands: 

```powershell
-Set-AzureADMSAuthorizationPolicy -DefaultUserRolePermissions @{
+Update-MgPolicyAuthorizationPolicy -DefaultUserRolePermissions @{
"PermissionGrantPoliciesAssigned" = @("managePermissionGrantsForSelf.microsoft-user-default-low") } ```
active-directory Manage App Consent Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-app-consent-policies.md
# Manage app consent policies
-With Azure AD PowerShell, you can view and manage app consent policies.
+With [Microsoft Graph PowerShell](/powershell/microsoftgraph/get-started?view=graph-powershell-1.0&preserve-view=true), you can view and manage app consent policies.
-An app consent policy consists of one or more "includes" condition sets and zero or more "excludes" condition sets. For an event to be considered in an app consent policy, it must match *at least* one "includes" condition set, and must not match *any* "excludes" condition set.
+An app consent policy consists of one or more "include" condition sets and zero or more "exclude" condition sets. For an event to be considered in an app consent policy, it must match *at least* one "include" condition set, and must not match *any* "exclude" condition set.
Each condition set consists of several conditions. For an event to match a condition set, *all* conditions in the condition set must be met.
App consent policies where the ID begins with "microsoft-" are built-in policies
- A custom directory role with the necessary [permissions to manage app consent policies](../roles/custom-consent-permissions.md#managing-app-consent-policies) 
- The Microsoft Graph app role (application permission) Policy.ReadWrite.PermissionGrant (when connecting as an app or a service)
-1. Connect to [Azure AD PowerShell](/powershell/module/azuread/).
+1. Connect to [Microsoft Graph PowerShell](/powershell/microsoftgraph/get-started?view=graph-powershell-1.0&preserve-view=true).
```powershell
- Connect-AzureAD
+ Connect-MgGraph -Scopes "Policy.ReadWrite.PermissionGrant"
```

## List existing app consent policies
It's a good idea to start by getting familiar with the existing app consent poli
1. List all app consent policies: 

```powershell
- Get-AzureADMSPermissionGrantPolicy | ft Id, DisplayName, Description
+ Get-MgPolicyPermissionGrantPolicy | ft Id, DisplayName, Description
```
-1. View the "includes" condition sets of a policy:
+1. View the "include" condition sets of a policy:
```powershell
- Get-AzureADMSPermissionGrantConditionSet -PolicyId "microsoft-application-admin" `
- -ConditionSetType "includes"
+ Get-MgPolicyPermissionGrantPolicyInclude -PermissionGrantPolicyId "microsoft-application-admin" | fl
```
-1. View the "excludes" condition sets:
+1. View the "exclude" condition sets:
```powershell
- Get-AzureADMSPermissionGrantConditionSet -PolicyId "microsoft-application-admin" `
- -ConditionSetType "excludes"
+ Get-MgPolicyPermissionGrantPolicyExclude -PermissionGrantPolicyId "microsoft-application-admin" | fl
```

## Create a custom app consent policy
Follow these steps to create a custom app consent policy:
1. Create a new empty app consent policy. 

```powershell
- New-AzureADMSPermissionGrantPolicy `
+ New-MgPolicyPermissionGrantPolicy `
    -Id "my-custom-policy" `
    -DisplayName "My first custom consent policy" `
    -Description "This is a sample custom app consent policy." 
```
-1. Add "includes" condition sets.
+1. Add "include" condition sets.
```powershell
# Include delegated permissions classified "low", for apps from verified publishers
- New-AzureADMSPermissionGrantConditionSet `
- -PolicyId "my-custom-policy" `
- -ConditionSetType "includes" `
+ New-MgPolicyPermissionGrantPolicyInclude `
+ -PermissionGrantPolicyId "my-custom-policy" `
-PermissionType "delegated" ` -PermissionClassification "low" `
- -ClientApplicationsFromVerifiedPublisherOnly $true
+ -ClientApplicationsFromVerifiedPublisherOnly
```

Repeat this step to add additional "include" condition sets.
-1. Optionally, add "excludes" condition sets.
+1. Optionally, add "exclude" condition sets.
```powershell
# Retrieve the service principal for the Azure Management API
- $azureApi = Get-AzureADServicePrincipal -Filter "servicePrincipalNames/any(n:n eq 'https://management.azure.com/')"
+ $azureApi = Get-MgServicePrincipal -Filter "servicePrincipalNames/any(n:n eq 'https://management.azure.com/')"
# Exclude delegated permissions for the Azure Management API
- New-AzureADMSPermissionGrantConditionSet `
- -PolicyId "my-custom-policy" `
- -ConditionSetType "excludes" `
+ New-MgPolicyPermissionGrantPolicyExclude `
+ -PermissionGrantPolicyId "my-custom-policy" `
    -PermissionType "delegated" `
    -ResourceApplication $azureApi.AppId 
```
Once the app consent policy has been created, you can [allow user consent](confi
1. The following shows how you can delete a custom app consent policy. **This action cannot be undone.** 

```powershell
- Remove-AzureADMSPermissionGrantPolicy -Id "my-custom-policy"
+ Remove-MgPolicyPermissionGrantPolicy -PermissionGrantPolicyId "my-custom-policy"
```

> [!WARNING]
The following table provides the list of supported conditions for app consent po
| ClientApplicationIds | A list of **AppId** values for the client applications to match with, or a list with the single value "all" to match any client application. Default is the single value "all". | 
| ClientApplicationTenantIds | A list of Azure Active Directory tenant IDs in which the client application is registered, or a list with the single value "all" to match with client apps registered in any tenant. Default is the single value "all". | 
| ClientApplicationPublisherIds | A list of Microsoft Partner Network (MPN) IDs for [verified publishers](../develop/publisher-verification-overview.md) of the client application, or a list with the single value "all" to match with client apps from any publisher. Default is the single value "all". |
-| ClientApplicationsFromVerifiedPublisherOnly | Set to `$true` to only match on client applications with a [verified publishers](../develop/publisher-verification-overview.md). Set to `$false` to match on any client app, even if it does not have a verified publisher. Default is `$false`. |
+| ClientApplicationsFromVerifiedPublisherOnly | Set this switch to only match on client applications with a [verified publisher](../develop/publisher-verification-overview.md). Disable this switch (`-ClientApplicationsFromVerifiedPublisherOnly:$false`) to match on any client app, even if it doesn't have a verified publisher. Default is `$false`. |
## Next steps
active-directory Groups Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-concept.md
Previously updated : 03/01/2022 Last updated : 09/26/2022
Role-assignable groups are designed to help prevent potential breaches by having
- Only Global Administrators and Privileged Role Administrators can create a role-assignable group. 
- The membership type for role-assignable groups must be Assigned and can't be an Azure AD dynamic group. Automated population of dynamic groups could lead to an unwanted account being added to the group and thus assigned to the role. 
- By default, only Global Administrators and Privileged Role Administrators can manage the membership of a role-assignable group, but you can delegate the management of role-assignable groups by adding group owners.
-- RoleManagement.ReadWrite.Directory Microsoft Graph permission is required to be able to manage the membership of such groups; Group.ReadWrite.All won't work.
+- For Microsoft Graph, the *RoleManagement.ReadWrite.Directory* permission is required to be able to manage the membership of role-assignable groups. The *Group.ReadWrite.All* permission won't work.
- To prevent elevation of privilege, only a Privileged Authentication Administrator or a Global Administrator can change the credentials or reset MFA or modify sensitive attributes for members and owners of a role-assignable group. 
- Group nesting is not supported. A group can't be added as a member of a role-assignable group.
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
Previously updated : 08/03/2022 Last updated : 09/26/2022
Users in this role can enable, disable, and delete devices in Azure AD and read
> | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, including privileged properties | > | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy | > | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices |
+> | microsoft.directory/deletedItems.devices/delete | Permanently delete devices, which can no longer be restored |
+> | microsoft.directory/deletedItems.devices/restore | Restore soft deleted devices to original state |
> | microsoft.directory/devices/delete | Delete devices from Azure AD | > | microsoft.directory/devices/disable | Disable devices in Azure AD | > | microsoft.directory/devices/enable | Enable devices in Azure AD |
Users in this role can read and update basic information of users, groups, and s
> | Actions | Description | > | | | > | microsoft.directory/applications/extensionProperties/update | Update extension properties on applications |
+> | microsoft.directory/contacts/create | Create contacts |
> | microsoft.directory/groups/assignLicense | Assign product licenses to groups for group-based licensing | > | microsoft.directory/groups/create | Create Security groups and Microsoft 365 groups, excluding role-assignable groups | > | microsoft.directory/groups/reprocessLicenseAssignment | Reprocess license assignments for group-based licensing |
This administrator manages federation between Azure AD organizations and externa
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
+> | microsoft.directory/domains/federation/update | Update federation property of domains |
> | microsoft.directory/identityProviders/allProperties/allTasks | Read and configure identity providers in Azure Active Directory B2C | ## Global Administrator
Users with this role have access to all administrative features in Azure Active
> | microsoft.directory/servicePrincipalCreationPolicies/delete | Delete service principal creation policies | > | microsoft.directory/servicePrincipalCreationPolicies/standard/read | Read standard properties of service principal creation policies | > | microsoft.directory/servicePrincipalCreationPolicies/basic/update | Update basic properties of service principal creation policies |
+> | microsoft.directory/tenantManagement/tenants/create | Create new tenants in Azure Active Directory |
> | microsoft.directory/verifiableCredentials/configuration/contracts/cards/allProperties/read | Read a verifiable credential card | > | microsoft.directory/verifiableCredentials/configuration/contracts/cards/revoke | Revoke a verifiable credential card | > | microsoft.directory/verifiableCredentials/configuration/contracts/create | Create a verifiable credential contract |
Users with this role have access to all administrative features in Azure Active
> | microsoft.directory/verifiableCredentials/configuration/delete | Delete configuration required to create and manage verifiable credentials and delete all of its verifiable credentials | > | microsoft.directory/verifiableCredentials/configuration/allProperties/read | Read configuration required to create and manage verifiable credentials | > | microsoft.directory/verifiableCredentials/configuration/allProperties/update | Update configuration required to create and manage verifiable credentials |
-> | microsoft.directory/lifecycleManagement/workflows/allProperties/allTasks | Manage all aspects of lifecycle management workflows and tasks in Azure AD |
+> | microsoft.directory/lifecycleWorkflows/workflows/allProperties/allTasks | Manage all aspects of lifecycle workflows and tasks in Azure AD |
> | microsoft.azure.advancedThreatProtection/allEntities/allTasks | Manage all aspects of Azure Advanced Threat Protection | > | microsoft.azure.informationProtection/allEntities/allTasks | Manage all aspects of Azure Information Protection | > | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health |
Users with this role have access to all administrative features in Azure Active
> | microsoft.office365.userCommunication/allEntities/allTasks | Read and update what's new messages visibility | > | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center | > | microsoft.office365.yammer/allEntities/allProperties/allTasks | Manage all aspects of Yammer |
+> | microsoft.permissionsManagement/allEntities/allProperties/allTasks | Manage all aspects of Entra Permissions Management |
> | microsoft.powerApps/allEntities/allTasks | Manage all aspects of Power Apps | > | microsoft.powerApps.powerBI/allEntities/allTasks | Manage all aspects of Power BI | > | microsoft.teams/allEntities/allProperties/allTasks | Manage all resources in Teams |
Users in this role can read settings and administrative information across Micro
> | microsoft.directory/verifiableCredentials/configuration/contracts/cards/allProperties/read | Read a verifiable credential card | > | microsoft.directory/verifiableCredentials/configuration/contracts/allProperties/read | Read a verifiable credential contract | > | microsoft.directory/verifiableCredentials/configuration/allProperties/read | Read configuration required to create and manage verifiable credentials |
-> | microsoft.directory/lifecycleManagement/workflows/allProperties/read | Read all properties of lifecycle management workflows and tasks in Azure AD |
+> | microsoft.directory/lifecycleWorkflows/workflows/allProperties/read | Read all properties of lifecycle workflows and tasks in Azure AD |
> | microsoft.cloudPC/allEntities/allProperties/read | Read all aspects of Windows 365 | > | microsoft.commerce.billing/allEntities/allProperties/read | Read all resources of Office 365 billing | > | microsoft.edge/allEntities/allProperties/read | Read all aspects of Microsoft Edge |
Users in this role can read settings and administrative information across Micro
> | microsoft.office365.usageReports/allEntities/allProperties/read | Read Office 365 usage reports | > | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center | > | microsoft.office365.yammer/allEntities/allProperties/read | Read all aspects of Yammer |
+> | microsoft.permissionsManagement/allEntities/allProperties/read | Read all aspects of Entra Permissions Management |
> | microsoft.teams/allEntities/allProperties/read | Read all properties of Microsoft Teams | > | microsoft.virtualVisits/allEntities/allProperties/read | Read all aspects of Virtual Visits | > | microsoft.windows.updatesDeployments/allEntities/allProperties/read | Read all aspects of Windows Update Service |
This role can create and manage all security groups. However, Intune Administrat
> | microsoft.directory/contacts/create | Create contacts | > | microsoft.directory/contacts/delete | Delete contacts | > | microsoft.directory/contacts/basic/update | Update basic properties on contacts |
+> | microsoft.directory/deletedItems.devices/delete | Permanently delete devices, which can no longer be restored |
+> | microsoft.directory/deletedItems.devices/restore | Restore soft deleted devices to original state |
> | microsoft.directory/devices/create | Create devices (enroll in Azure AD) | > | microsoft.directory/devices/delete | Delete devices from Azure AD | > | microsoft.directory/devices/disable | Disable devices in Azure AD |
Assign the Lifecycle Workflows Administrator role to users who need to do the fo
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/lifecycleManagement/workflows/allProperties/allTasks | Manage all aspects of lifecycle management workflows and tasks in Azure AD |
+> | microsoft.directory/lifecycleWorkflows/workflows/allProperties/allTasks | Manage all aspects of lifecycle workflows and tasks in Azure AD |
## Message Center Privacy Reader
Azure Advanced Threat Protection | Monitor and respond to suspicious security ac
> | microsoft.directory/crossTenantAccessPolicy/partners/b2bDirectConnect/update | Update Azure AD B2B direct connect settings of cross-tenant access policy for partners | > | microsoft.directory/crossTenantAccessPolicy/partners/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of cross-tenant access policy for partners | > | microsoft.directory/crossTenantAccessPolicy/partners/tenantRestrictions/update | Update tenant restrictions of cross-tenant access policy for partners |
+> | microsoft.directory/domains/federation/update | Update federation property of domains |
> | microsoft.directory/entitlementManagement/allProperties/read | Read all properties in Azure AD entitlement management | > | microsoft.directory/identityProtection/allProperties/read | Read all resources in Azure AD Identity Protection | > | microsoft.directory/identityProtection/allProperties/update | Update all resources in Azure AD Identity Protection |
Assign the Windows 365 Administrator role to users who need to do the following
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
+> | microsoft.directory/deletedItems.devices/delete | Permanently delete devices, which can no longer be restored |
+> | microsoft.directory/deletedItems.devices/restore | Restore soft deleted devices to original state |
> | microsoft.directory/devices/create | Create devices (enroll in Azure AD) | > | microsoft.directory/devices/delete | Delete devices from Azure AD | > | microsoft.directory/devices/disable | Disable devices in Azure AD |
aks Aks Planned Maintenance Weekly Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-planned-maintenance-weekly-releases.md
+
+ Title: Use Planned Maintenance for your Azure Kubernetes Service (AKS) cluster weekly releases (preview)
+
+description: Learn how to use Planned Maintenance in Azure Kubernetes Service (AKS) for cluster weekly releases
++ Last updated : 09/16/2021+++++
+# Use Planned Maintenance to schedule maintenance windows for your Azure Kubernetes Service (AKS) cluster exclusively for weekly releases (preview)
+
+ Planned Maintenance allows you to schedule weekly maintenance windows so that the weekly [releases] are applied in a controlled way. Maintenance windows are configured using the Azure CLI, allowing you to select from a set of pre-created public configurations.
+
+## Before you begin
+
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
++
+### Limitations
+
+When using Planned Maintenance, the following restrictions apply:
+
+- AKS reserves the right to break these windows for unplanned/reactive maintenance operations that are urgent or critical.
+- Currently, maintenance operations are considered *best-effort only* and aren't guaranteed to occur within a specified window.
+- Updates cannot be blocked for more than seven days.
+++
+## Available pre-created public maintenance configurations for you to pick
+
+There are two general kinds of pre-created public maintenance configurations:
+
+- For Weekday (Monday, Tuesday, Wednesday, Thursday), from 10 PM to 6 AM the next morning.
+- For Weekend (Friday, Saturday, Sunday), from 10 PM to 6 AM the next morning.
+
+For a list of pre-created public maintenance configurations on the weekday schedule, see below. For weekend schedules, replace `weekday` with `weekend`.
+
+|Configuration name| Time zone|
+|--|--|
+|aks-mrp-cfg-weekday_utc12|UTC+12|
+|...|...|
+|aks-mrp-cfg-weekday_utc1|UTC+1|
+|aks-mrp-cfg-weekday_utc|UTC+0|
+|aks-mrp-cfg-weekday_utc-1|UTC-1|
+|...|...|
+|aks-mrp-cfg-weekday_utc-12|UTC-12|
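+
+To enumerate the full set of pre-created public configurations and their time zones yourself, one option is the Azure CLI; this is a hedged sketch that assumes the `maintenance` CLI extension is installed and relies on the `name` and `timeZone` properties shown in the example output later in this section:
+
+```azurecli-interactive
+# List the AKS public maintenance configurations and show their time zones
+az maintenance public-configuration list \
+  --query "[?contains(name, 'aks-mrp-cfg')].{name:name, timeZone:timeZone}" \
+  --output table
+```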
+
+## Assign a public maintenance configuration to an AKS Cluster
+
+Find the public maintenance configuration ID by name:
+```azurecli-interactive
+az maintenance public-configuration show --resource-name "aks-mrp-cfg-weekday_utc8"
+```
+This command may prompt you to install the `maintenance` Azure CLI extension. Once the extension is installed, run the command again.
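+
+If you prefer to add the extension up front instead of waiting for the prompt, a minimal sketch:
+
+```azurecli-interactive
+# Install the maintenance extension for the Azure CLI
+az extension add --name maintenance
+```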
+
+The output should look like the following example. Be sure to take note of the `id` field:
+```json
+{
+  "duration": "08:00",
+  "expirationDateTime": null,
+  "extensionProperties": {
+    "maintenanceSubScope": "AKS"
+  },
+  "id": "/subscriptions/0159df5c-b605-45a9-9876-36e17d5286e0/providers/Microsoft.Maintenance/publicMaintenanceConfigurations/aks-mrp-cfg-weekday_utc8",
+  "installPatches": null,
+  "location": "westus2",
+  "maintenanceScope": "Resource",
+  "name": "aks-mrp-cfg-weekday_utc8",
+  "namespace": "Microsoft.Maintenance",
+  "recurEvery": "Week Monday,Tuesday,Wednesday,Thursday",
+  "startDateTime": "2022-08-01 22:00",
+  "systemData": null,
+  "tags": {},
+  "timeZone": "China Standard Time",
+  "type": "Microsoft.Maintenance/publicMaintenanceConfigurations",
+  "visibility": "Public"
+}
+```
+
+Next, assign the public maintenance configuration to your AKS cluster using the ID:
+```azurecli-interactive
+az maintenance assignment create --maintenance-configuration-id "/subscriptions/0159df5c-b605-45a9-9876-36e17d5286e0/providers/Microsoft.Maintenance/publicMaintenanceConfigurations/aks-mrp-cfg-weekday_utc8" --name assignmentName --provider-name "Microsoft.ContainerService" --resource-group myResourceGroup --resource-name myAKSCluster --resource-type "managedClusters"
+```
+## List all maintenance windows in an existing cluster
+```azurecli-interactive
+az maintenance assignment list --provider-name "Microsoft.ContainerService" --resource-group myResourceGroup --resource-name myAKSCluster --resource-type "managedClusters"
+```
+
+## Delete a public maintenance configuration of an AKS cluster
+```azurecli-interactive
+az maintenance assignment delete --name assignmentName --provider-name "Microsoft.ContainerService" --resource-group myResourceGroup --resource-name myAKSCluster --resource-type "managedClusters"
+```
+
+<!-- LINKS - Internal -->
+[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
+[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
+[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
+[aks-support-policies]: support-policies.md
+[aks-faq]: faq.md
+[az-extension-add]: /cli/azure/extension#az_extension_add
+[az-extension-update]: /cli/azure/extension#az_extension_update
+[az-feature-list]: /cli/azure/feature#az_feature_list
+[az-feature-register]: /cli/azure/feature#az_feature_register
+[az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli
+[az-provider-register]: /cli/azure/provider#az_provider_register
+[aks-upgrade]: upgrade-cluster.md
+[releases]:release-tracker.md
aks Dapr Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-troubleshooting.md
+
+ Title: Troubleshoot Dapr extension installation errors
+description: Troubleshoot errors you may encounter while installing the Dapr extension for AKS or Arc for Kubernetes
+++++ Last updated : 09/15/2022+++
+# Troubleshoot Dapr extension installation errors
+
+This article details some common error messages you may encounter while installing the Dapr extension for Azure Kubernetes Service (AKS) or Arc for Kubernetes.
+
+## Installation failure without an error message
+
+If the extension fails to create or update without an error message, you can inspect where the creation of the extension failed by running the `az k8s-extension list` command. For example, if a wrong key is used in the configuration-settings, such as `global.ha=false` instead of `global.ha.enabled=false`:
+
+```azurecli-interactive
+az k8s-extension list --cluster-type managedClusters --cluster-name myCluster --resource-group myResourceGroup
+```
+
+The following JSON is returned, and the error message is captured in the `message` property.
+
+```json
+"statuses": [
+ {
+ "code": "InstallationFailed",
+ "displayStatus": null,
+ "level": null,
+ "message": "Error: {failed to install chart from path [] for release [dapr-1]: err [template: dapr/charts/dapr_sidecar_injector/templates/dapr_sidecar_injector_poddisruptionbudget.yaml:1:17: executing \"dapr/charts/dapr_sidecar_injector/templates/dapr_sidecar_injector_poddisruptionbudget.yaml\" at <.Values.global.ha.enabled>: can't evaluate field enabled in type interface {}]} occurred while doing the operation : {Installing the extension} on the config",
+ "time": null
+ }
+],
+```
+
+Another example:
+
+```azurecli
+az k8s-extension list --cluster-type managedClusters --cluster-name myCluster --resource-group myResourceGroup
+```
+
+```json
+"statuses": [
+ {
+ "code": "InstallationFailed",
+ "displayStatus": null,
+ "level": null,
+ "message": "The extension operation failed with the following error: unable to add the configuration with configId {extension:microsoft-dapr} due to error: {error while adding the CRD configuration: error {failed to get the immutable configMap from the elevated namespace with err: configmaps 'extension-immutable-values' not found }}. (Code: ExtensionOperationFailed)",
+ "time": null
+ }
+ ]
+```
+
+For these cases, possible remediation actions are to:
+
+- [Restart your AKS or Arc for Kubernetes cluster](./start-stop-cluster.md).
+- Make sure you've [registered the `KubernetesConfiguration` service provider](./dapr.md#register-the-kubernetesconfiguration-service-provider).
+- Force delete and [reinstall the Dapr extension](./dapr.md).
+
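+For example, a hedged Azure CLI sketch of the provider registration and a forced reinstall; the cluster name, resource group, and extension instance name below are placeholders:
+
+```azurecli-interactive
+# Register the KubernetesConfiguration resource provider used by cluster extensions
+az provider register --namespace Microsoft.KubernetesConfiguration
+
+# Force delete the failed extension instance, then reinstall the Dapr extension
+az k8s-extension delete --cluster-type managedClusters --cluster-name myCluster \
+  --resource-group myResourceGroup --name dapr --force --yes
+az k8s-extension create --cluster-type managedClusters --cluster-name myCluster \
+  --resource-group myResourceGroup --name dapr --extension-type Microsoft.Dapr
+```
+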
+See below for examples of error messages you may encounter during Dapr extension install or update.
+
+## Error: Dapr version doesn't exist
+
+You're installing the Dapr extension and [targeting a specific version](./dapr.md#targeting-a-specific-dapr-version), but run into an error message saying the Dapr version doesn't exist.
+
+```
+(ExtensionOperationFailed) The extension operation failed with the following error: Failed to resolve the extension version from the given values.
+Code: ExtensionOperationFailed
+Message: The extension operation failed with the following error: Failed to resolve the extension version from the given values.
+```
+
+Try installing again, making sure to use a [supported version of Dapr](./dapr.md#dapr-versions).
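+
+A hedged sketch of pinning a supported version on install with the Azure CLI; the version value is a placeholder, and the cluster, resource group, and extension instance names are examples:
+
+```azurecli-interactive
+# Reinstall the Dapr extension, pinning a supported version
+az k8s-extension create --cluster-type managedClusters --cluster-name myCluster \
+  --resource-group myResourceGroup --name dapr --extension-type Microsoft.Dapr \
+  --auto-upgrade-minor-version false \
+  --version <supported-dapr-version>
+```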
+
+## Error: Dapr version exists, but not in the mentioned region
+
+Some versions of Dapr aren't available in all regions. If you receive an error message like the following, try installing in an [available region](./dapr.md#cloudsregions) where your Dapr version is supported.
+
+```
+(ExtensionTypeRegistrationGetFailed) Extension type microsoft.dapr is not registered in region <regionname>.
+Code: ExtensionTypeRegistrationGetFailed
+Message: Extension type microsoft.dapr is not registered in region <regionname>
+```
+
+## Error: `dapr-system` already exists
+
+You're installing the Dapr extension for AKS or Arc for Kubernetes, but receive an error message indicating that Dapr already exists. This error message may look like:
+
+```
+(ExtensionOperationFailed) The extension operation failed with the following error: Error: {failed to install chart from path [] for release [dapr-ext]: err [rendered manifests contain a resource that already exists. Unable to continue with install: ServiceAccount "dapr-operator" in namespace "dapr-system" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "dapr-ext": current value is "dapr"]} occurred while doing the operation : {Installing the extension} on the config
+```
+
+You need to uninstall Dapr OSS before installing the Dapr extension. For more information, read [Migrate from Dapr OSS](./dapr-migration.md).
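+
+If Dapr OSS was installed with Helm, removing it might look like the following sketch; the release name `dapr` and namespace `dapr-system` are the common defaults and may differ in your cluster:
+
+```bash
+# Confirm how Dapr OSS was installed
+helm list --all-namespaces
+
+# Uninstall the Dapr OSS release before installing the Dapr extension
+helm uninstall dapr --namespace dapr-system
+```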
+
+## Next steps
+
+If you're still running into issues, explore the [AKS troubleshooting guide](./troubleshooting.md) and the [Dapr OSS troubleshooting guide](https://docs.dapr.io/operations/troubleshooting/common_issues/).
aks Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr.md
The Dapr extension for AKS and Arc for Kubernetes requires outbound URLs on `htt
## Troubleshooting extension errors
-If the extension fails to create or update, you can inspect where the creation of the extension failed by running the `az k8s-extension list` command. For example, if a wrong key is used in the configuration-settings, such as `global.ha=false` instead of `global.ha.enabled=false`:
-
-```azure-cli-interactive
-az k8s-extension list --cluster-type managedClusters --cluster-name myAKSCluster --resource-group myResourceGroup
-```
-
-The below JSON is returned, and the error message is captured in the `message` property.
-
-```json
-"statuses": [
- {
- "code": "InstallationFailed",
- "displayStatus": null,
- "level": null,
- "message": "Error: {failed to install chart from path [] for release [dapr-1]: err [template: dapr/charts/dapr_sidecar_injector/templates/dapr_sidecar_injector_poddisruptionbudget.yaml:1:17: executing \"dapr/charts/dapr_sidecar_injector/templates/dapr_sidecar_injector_poddisruptionbudget.yaml\" at <.Values.global.ha.enabled>: can't evaluate field enabled in type interface {}]} occurred while doing the operation : {Installing the extension} on the config",
- "time": null
- }
-],
-```
+If the extension fails to create or update, try suggestions and solutions in the [Dapr extension troubleshooting guide](./dapr-troubleshooting.md).
### Troubleshooting Dapr
aks Image Cleaner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/image-cleaner.md
Previously updated : 09/16/2022 Last updated : 09/26/2022 # Use ImageCleaner to clean up stale images on your Azure Kubernetes Service cluster (preview) It's common to use pipelines to build and deploy images on Azure Kubernetes Service (AKS) clusters. While great for image creation, this process often doesn't account for the stale images left behind and can lead to image bloat on cluster nodes. These images can present security issues as they may contain vulnerabilities. By cleaning these unreferenced images, you can remove an area of risk in your clusters. When done manually, this process can be time intensive, which ImageCleaner can mitigate via automatic image identification and removal.
-ImageCleaner is a feature inherited from Eraser. For more information on Eraser, see [Eraser plugin](https://github.com/Azure/eraser)
+> [!NOTE]
+> ImageCleaner is a feature based on [Eraser](https://github.com/Azure/eraser).
+> On an AKS cluster, the feature name and property name are `ImageCleaner`, while the names of the relevant ImageCleaner pods contain `Eraser`.
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
az aks update -g MyResourceGroup -n MyManagedCluster \
--image-cleaner-interval-hours 48 ```
+After the feature is enabled, the `eraser-controller-manager-xxx` pod and `collector-aks-xxx` pod will be deployed.
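+To confirm that the pods were deployed, you can list them with `kubectl`; this is only a quick check, and the exact pod names and namespace can vary:
+
+```bash
+# List ImageCleaner (Eraser) pods across all namespaces; names and namespace may vary.
+kubectl get pods --all-namespaces | grep -E 'eraser|collector'
+```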
Based on your configuration, ImageCleaner will generate an `ImageList` containing non-running and vulnerable images at the desired interval. ImageCleaner will automatically remove these images from cluster nodes. ## Manually remove images
And apply it to the cluster:
kubectl apply -f image-list.yml ```
-A job will trigger which causes ImageCleaner to remove the desired images from all nodes.
+A job named `eraser-aks-xxx` will be triggered, which causes ImageCleaner to remove the desired images from all nodes.
## Disable ImageCleaner
api-management Api Management Get Started Publish Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-get-started-publish-versions.md
Enter the values from the following table. Then select **Create** to create your
|||| |**Name** | *demo-conference-api-v1* | Unique name in your API Management instance.<br/><br/>Because a version is in fact a new API based off an API's [revision](api-management-get-started-revise-api.md), this setting is the new API's name. | |**Versioning scheme** | **Path** | The way callers specify the API version. |
-|**Version identifer** | *v1* | Scheme-specific indicator of the version. For **Path**, the suffix for the API URL path. <br/><br/> If **Header** or **Query string** is selected, enter an additional value: the name of the header or query string parameter.<br/><br/> A usage example is displayed. |
+|**Version identifier** | *v1* | Scheme-specific indicator of the version. For **Path**, the suffix for the API URL path. <br/><br/> If **Header** or **Query string** is selected, enter an additional value: the name of the header or query string parameter.<br/><br/> A usage example is displayed. |
|**Products** | **Unlimited** | Optionally, one or more products that the API version is associated with. To publish the API, you must associate it with a product. You can also [add the version to a product](#add-the-version-to-a-product) later. | After creating the version, it now appears underneath **Demo Conference API** in the API List. You now see two APIs: **Original**, and **v1**.
api-management Powershell Create Service Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/powershell-create-service-instance.md
Title: Quickstart - Create Azure API Management instance using PowerShell | Microsoft Docs
-description: Create a new Azure API Management instance by using Azure PowerShell.
+ Title: Quickstart - Create API Management instance - PowerShell
+description: Use this quickstart to create a new Azure API Management instance by using Azure PowerShell cmdlets.
Previously updated : 03/30/2022 Last updated : 09/21/2022 # Quickstart: Create a new Azure API Management service instance by using PowerShell
-Azure API Management (APIM) helps organizations publish APIs to external, partner, and internal developers to unlock the potential of their data and services. API Management provides the core competencies to ensure a successful API program through developer engagement, business insights, analytics, security, and protection. APIM lets you create and manage modern API gateways for existing backend services hosted anywhere. For more information, see the [Overview](api-management-key-concepts.md).
+In this quickstart, you create a new API Management instance by using Azure PowerShell cmdlets.
-This quickstart describes the steps for creating a new API Management instance by using Azure PowerShell cmdlets.
+Azure API Management helps organizations publish APIs to external, partner, and internal developers to unlock the potential of their data and services. API Management provides the core competencies to ensure a successful API program through developer engagement, business insights, analytics, security, and protection. API Management lets you create and manage modern API gateways for existing backend services hosted anywhere.
+## Prerequisites
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Azure Cloud Shell or Azure PowerShell
+
+ [!INCLUDE [cloud-shell-try-it-no-header](../../includes/cloud-shell-try-it-no-header.md)]
+
+  If you choose to install and use PowerShell locally, this quickstart requires the Azure PowerShell module version 1.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
-If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create resource group
The following command creates a resource group named *myResourceGroup* in the We
New-AzResourceGroup -Name myResourceGroup -Location WestUS ```
-## Create an API Management service
+## Create an API Management instance
Now that you have a resource group, you can create an API Management service instance. Create one by using [New-AzApiManagement](/powershell/module/az.apimanagement/new-azapimanagement) and provide a service name and publisher details. The service name must be unique within Azure. In the following example, *myapim* is used for the service name. Update the name to a unique value. Also, update the organization name of the API publisher and the admin email address to receive notifications.
-By default, the command creates the instance in the Developer tier, an economical option to evaluate Azure API Management. This tier isn't for production use. For more information about scaling the API Management tiers, see [upgrade and scale](upgrade-and-scale.md).
+By default, the command creates the instance in the Developer tier, an economical option to evaluate Azure API Management. This tier isn't for production use. For more information about the API Management tiers, see [Feature-based comparison of the Azure API Management tiers](api-management-features.md).
> [!NOTE] > This is a long-running action. It can take between 30 and 40 minutes to create and activate an API Management service in this tier.
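For example, a minimal call might look like the following; the service name, organization, and email are placeholder values to replace with your own:

```azurepowershell
New-AzApiManagement -ResourceGroupName "myResourceGroup" -Location "West US" `
  -Name "myapim" -Organization "Contoso" -AdminEmail "admin@contoso.com"
```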
Name : myapim
Location : West US Sku : Developer Capacity : 1
-CreatedTimeUtc : 9/9/2020 9:07:43 PM
+CreatedTimeUtc : 9/9/2022 9:07:43 PM
ProvisioningState : Succeeded RuntimeUrl : https://myapim.azure-api.net RuntimeRegionalUrl : https://myapi-westus-01.regional.azure-api.net
AdditionalRegions : {}
SslSetting : Microsoft.Azure.Commands.ApiManagement.Models.PsApiManagementSslSetting Identity : EnableClientCertificate :
+EnableClientCertificate :
+Zone :
+DisableGateway : False
+MinimalControlPlaneApiVersion :
+PublicIpAddressId :
+PlatformVersion : stv2
+PublicNetworkAccess : Enabled
+PrivateEndpointConnections :
ResourceGroupName : myResourceGroup ```
app-service Nat Gateway Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/networking/nat-gateway-integration.md
az network vnet subnet update --resource-group [myResourceGroup] --vnet-name [my
The same NAT gateway can be used across multiple subnets in the same Virtual Network allowing a NAT gateway to be used across multiple apps and App Service plans.
-NAT gateway supports both public IP addresses and public IP prefixes. A NAT gateway can support up to 16 IP addresses across individual IP addresses and prefixes. Each IP address allocates 64,512 ports (SNAT ports) allowing up to 1M available ports. Learn more in the [Scaling section](../../virtual-network/nat-gateway/nat-gateway-resource.md#scale-nat-gateway) of NAT gateway.
+NAT gateway supports both public IP addresses and public IP prefixes. A NAT gateway can support up to 16 IP addresses across individual IP addresses and prefixes. Each IP address allocates 64,512 ports (SNAT ports) allowing up to 1M available ports. Learn more in the [Scaling section](../../virtual-network/nat-gateway/nat-gateway-resource.md#scalability) of NAT gateway.
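+
+As a rough sketch (resource names are placeholders), a NAT gateway with a public IP prefix can be created and attached to the integration subnet like this:
+
+```azurecli
+# Create a /31 public IP prefix (two addresses) and a NAT gateway that uses it.
+az network public-ip prefix create --resource-group myResourceGroup --name myPublicIPPrefix --length 31
+
+az network nat gateway create --resource-group myResourceGroup --name myNATgateway \
+  --public-ip-prefixes myPublicIPPrefix
+
+# Associate the NAT gateway with the virtual network integration subnet.
+az network vnet subnet update --resource-group myResourceGroup --vnet-name myVNet \
+  --name myIntegrationSubnet --nat-gateway myNATgateway
+```
+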
## Next steps
app-service Overview Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-diagnostics.md
The tiles or the Troubleshoot link show the available diagnostics for the catego
## Diagnostic report
-After you choose to investigate the issue further by clicking on a topic, you can view more details about the topic often supplemented with graphs and markdowns. Diagnostic report can be a powerful tool for pinpointing the problem with your app. The following is the Overview for Availability and Performance:
+After you choose to investigate the issue further by clicking on a topic, you can view more details about the topic, often supplemented with graphs and markdowns. A diagnostic report can be a powerful tool for pinpointing the problem with your app. The following is the Web App Down diagnostic report from the Availability and Performance category:
![App Service Diagnose and solve problems Availability and Performance category homepage with Web App Down diagnostic selected, which displays an availability chart, Organic SLA percentage and Observations and Solutions for problems that were detected.](./media/app-service-diagnostics/full-diagnostic-report-5.png)
app-service Quickstart Wordpress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-wordpress.md
When no longer needed, you can delete the resource group, App service, and all r
The [Application Settings](reference-app-settings.md#wordpress) for WordPress admin credentials are only for deployment purposes. Modifying these values has no effect on the WordPress installation. To change the WordPress admin password, see [resetting your password](https://wordpress.org/support/article/resetting-your-password/#to-change-your-password). The [Application Settings for WordPress admin credentials](reference-app-settings.md#wordpress) begin with the **`WORDPRESS_ADMIN_`** prefix. For more information on updating the WordPress admin password, see [Changing WordPress Admin Credentials](https://github.com/Azure/wordpress-linux-appservice/blob/main/WordPress/changing_wordpress_admin_credentials.md).
+## Migrate to App Service on Linux
+
+There are a couple of approaches to migrating your WordPress app to App Service on Linux. You can use a WordPress plugin, or migrate manually using FTP and a MySQL client. Additional documentation, including [Migrating to App Service](https://github.com/Azure/wordpress-linux-appservice/blob/main/WordPress/wordpress_migration_linux_appservices.md), can be found at [WordPress - App Service on Linux](https://github.com/Azure/wordpress-linux-appservice/tree/main/WordPress).
+ ## Next steps Congratulations, you've successfully completed this quickstart!
app-service Tutorial Java Quarkus Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-quarkus-postgresql-app.md
# Tutorial: Build a Quarkus web app with Azure App Service on Linux and PostgreSQL
-This tutorial walks you through the process of building, configuring, deploying, and scaling Java web apps on Azure.
+This tutorial walks you through the process of building, configuring, deploying, and scaling Java web apps on Azure.
When you are finished, you will have a [Quarkus](https://quarkus.io) application storing data in [PostgreSQL](../postgresql/index.yml) database running on [Azure App Service on Linux](overview.md).
-![Screenshot of Quarkus application storing data in PostgreSQL.](./media/tutorial-java-quarkus-postgresql/quarkus-crud-running-locally.png)
In this tutorial, you learn how to:
In this tutorial, you learn how to:
## Prerequisites
-* [Azure CLI](/cli/azure/overview), installed on your own computer.
+* [Azure CLI](/cli/azure/overview), installed on your own computer.
* [Git](https://git-scm.com/) * [Java JDK](/azure/developer/java/fundamentals/java-support-on-azure) * [Maven](https://maven.apache.org)
In this tutorial, you learn how to:
This tutorial uses a sample Fruits list app with a web UI that calls a Quarkus REST API backed by [Azure Database for PostgreSQL](../postgresql/index.yml). The code for the app is available [on GitHub](https://github.com/quarkusio/quarkus-quickstarts/tree/main/hibernate-orm-panache-quickstart). To learn more about writing Java apps using Quarkus and PostgreSQL, see the [Quarkus Hibernate ORM with Panache Guide](https://quarkus.io/guides/hibernate-orm-panache) and the [Quarkus Datasource Guide](https://quarkus.io/guides/datasource). - Run the following commands in your terminal to clone the sample repo and set up the sample app environment. ```bash
cd quarkus-quickstarts/hibernate-orm-panache-quickstart
```azurecli az login az account set -s <your-subscription-id>
- ```
+ ```
2. Create an Azure Resource Group, noting the resource group name (referred to with `$RESOURCE_GROUP` later on)
cd quarkus-quickstarts/hibernate-orm-panache-quickstart
--name <a-resource-group-name> \ --location <a-resource-group-region> ```+ 3. Create an App Service Plan. The App Service Plan is the compute container, it determines your cores, memory, price, and scale. ```azurecli
cd quarkus-quickstarts/hibernate-orm-panache-quickstart
--sku B2 \ --is-linux ```+ 4. Create an app service within the App Service Plan. ```azurecli
cd quarkus-quickstarts/hibernate-orm-panache-quickstart
--runtime "JAVA|11-java11" \ --plan "quarkus-tutorial-app-service-plan" ```+ > [!IMPORTANT] > The `WEBAPP_NAME` must be **unique across all Azure**. A good pattern is to use a combination of your company name or initials of your name along with a good webapp name, for example `johndoe-quarkus-app`.
Follow these steps to create an Azure PostgreSQL database in your subscription.
--start-ip-address 0.0.0.0 \ --end-ip-address 0.0.0.0 ```+ 3. Create a database named `fruits` within the Postgres service with this command: ```azurecli
Use Maven to run the sample.
mvn quarkus:dev ```
+> [!IMPORTANT]
+> Be sure you have the H2 JDBC driver installed. You can add it using the following Maven command: `./mvnw quarkus:add-extension -Dextensions="jdbc-h2"`.
+ This will build the app, run its unit tests, and then start the application in developer live coding. You should see: ```output
INFO [io.quarkus] (Quarkus Main Thread) Installed features: [agroal, cdi, hiber
You can access Quarkus app locally by typing the `w` character into the console, or using this link once the app is started: `http://localhost:8080/`.
-![Screenshot of Quarkus application storing data in PostgreSQL.](./media/tutorial-java-quarkus-postgresql/quarkus-crud-running-locally.png)
If you see exceptions in the output, double-check that the configuration values for `%dev` are correct.
az webapp config appsettings set \
'PORT=8080' \ 'WEBSITES_PORT=8080' ```+ > [!NOTE] > The use of single quotes (`'`) to surround the settings is required if your password has special characters.
az webapp browse \
You should see the app running with the remote URL in the address bar:
-![Screenshot of Quarkus application storing data in PostgreSQL running remotely.](./media/tutorial-java-quarkus-postgresql/quarkus-crud-running-remotely.png)
If you see errors, use the following section to access the log file from the running app:
az appservice plan update --number-of-workers 2 \
## Clean up resources If you don't need these resources for another tutorial (see [Next steps](#next-steps)), you can delete them by running the following command in the Cloud Shell or on your local terminal:+ ```azurecli az group delete --name $RESOURCE_GROUP --yes ```
az group delete --name $RESOURCE_GROUP --yes
## Next steps [Azure for Java Developers](/java/azure/)
-[Quarkus](https://quarkus.io),
+[Quarkus](https://quarkus.io),
[Getting Started with Quarkus](https://quarkus.io/get-started/), and [App Service Linux](overview.md).
app-service Tutorial Java Tomcat Connect Managed Identity Postgresql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-tomcat-connect-managed-identity-postgresql-database.md
+
+ Title: 'Tutorial: Access data with managed identity in Java'
+description: Secure Azure Database for PostgreSQL connectivity with managed identity from a sample Java Tomcat app, and apply it to other Azure services.
+ms.devlang: java
+ Last updated : 09/26/2022++++
+# Tutorial: Connect to a PostgreSQL Database from Java Tomcat App Service without secrets using a managed identity
+
+[Azure App Service](overview.md) provides a highly scalable, self-patching web hosting service in Azure. It also provides a [managed identity](overview-managed-identity.md) for your app, which is a turn-key solution for securing access to [Azure Database for PostgreSQL](/azure/postgresql/) and other Azure services. Managed identities in App Service make your app more secure by eliminating secrets from your app, such as credentials in the environment variables. In this tutorial, you will learn how to:
+
+> [!div class="checklist"]
+> * Create a PostgreSQL database.
+> * Deploy the sample app to Azure App Service on Tomcat using WAR packaging.
+> * Configure a Java Tomcat web application to use Azure AD authentication with PostgreSQL Database.
+> * Connect to PostgreSQL Database with Managed Identity using Service Connector.
++
+## Prerequisites
+
+* [Git](https://git-scm.com/)
+* [Java JDK](/azure/developer/java/fundamentals/java-support-on-azure)
+* [Maven](https://maven.apache.org)
+* [Azure CLI](/cli/azure/overview). This tutorial requires that you are running the latest [edge build of Azure CLI](https://github.com/Azure/azure-cli/blob/dev/doc/try_new_features_before_release.md). [Download and install the edge builds](https://github.com/Azure/azure-cli#edge-builds) for your platform.
+
+## Clone the sample app and prepare the repo
+
+Run the following commands in your terminal to clone the sample repo and set up the sample app environment.
+
+```bash
+git clone https://github.com/Azure-Samples/Passwordless-Connections-for-Java-Apps
+cd Passwordless-Connections-for-Java-Apps/Tomcat/checklist/
+```
+
+## Create an Azure Postgres DB
+
+Follow these steps to create an Azure Database for PostgreSQL Single Server in your subscription. The sample app will connect to this database and store its data when running, persisting the application state no matter where you run the application.
+
+1. Sign in to the Azure CLI, and optionally set your subscription if you have more than one connected to your login credentials.
+
+ ```azurecli-interactive
+ az login
+ az account set --subscription <subscription-ID>
+ ```
+
+1. Create an Azure Resource Group, noting the resource group name.
+
+ ```azurecli-interactive
+ RESOURCE_GROUP=<resource-group-name>
+ LOCATION=eastus
+
+ az group create --name $RESOURCE_GROUP --location $LOCATION
+ ```
+
+1. Create an Azure Postgres Database server. The server is created with an administrator account, but it won't be used as we'll use the Azure Active Directory (Azure AD) admin account to perform administrative tasks.
+
+ ```azurecli-interactive
+ POSTGRESQL_ADMIN_USER=azureuser
+ # PostgreSQL admin access rights won't be used as Azure AD authentication is leveraged to administer the database.
+ POSTGRESQL_ADMIN_PASSWORD=<admin-password>
+ POSTGRESQL_HOST=<postgresql-host-name>
+
+ # Create a PostgreSQL server.
+ az postgres server create \
+ --resource-group $RESOURCE_GROUP \
+ --name $POSTGRESQL_HOST \
+ --location $LOCATION \
+ --admin-user $POSTGRESQL_ADMIN_USER \
+ --admin-password $POSTGRESQL_ADMIN_PASSWORD \
+ --public-network-access 0.0.0.0 \
+ --sku-name B_Gen5_1
+ ```
+
+1. Create a database for the application.
+
+ ```azurecli-interactive
+ DATABASE_NAME=checklist
+
+ az postgres db create \
+ --resource-group $RESOURCE_GROUP \
+ --server-name $POSTGRESQL_HOST \
+ --name $DATABASE_NAME
+ ```
+
+## Deploy the application to App Service
+
+Follow these steps to build a WAR file and deploy it to Azure App Service on Tomcat using WAR packaging.
+
+The changes you made in *application.properties* also apply to the managed identity, so the only thing to do is to remove the existing application settings in App Service.
+
+1. The sample app contains a *pom-war.xml* file that can generate the WAR file. Run the following command to build the app.
+
+ ```bash
+ mvn clean package -f pom-war.xml
+ ```
+
+1. Create an Azure App Service resource on Linux using Tomcat 9.0.
+
+ ```azurecli-interactive
+ # Create an App Service plan
+ az appservice plan create \
+ --resource-group $RESOURCE_GROUP \
+ --name $APPSERVICE_PLAN \
+ --location $LOCATION \
+ --sku B1 \
+ --is-linux
+
+ # Create an App Service resource.
+ az webapp create \
+ --resource-group $RESOURCE_GROUP \
+ --name $APPSERVICE_NAME \
+ --plan $APPSERVICE_PLAN \
+ --runtime "TOMCAT:9.0-jre8"
+ ```
+
+1. Deploy the WAR package to App Service.
+
+ ```azurecli-interactive
+ az webapp deploy \
+ --resource-group $RESOURCE_GROUP \
+ --name $APPSERVICE_NAME \
+ --src-path target/app.war \
+ --type war
+ ```
+
+## Connect Postgres Database with identity connectivity
+
+Next, connect your app to an Azure Database for PostgreSQL Single Server with a system-assigned managed identity by using Service Connector. To do this, run the [az webapp connection create](/cli/azure/webapp/connection/create#az-webapp-connection-create-postgres) command.
+
+```azurecli-interactive
+az webapp connection create postgres \
+ --resource-group $RESOURCE_GROUP \
+ --name $APPSERVICE_NAME \
+ --target-resource-group $RESOURCE_GROUP \
+ --server $POSTGRESQL_HOST \
+ --database $DATABASE_NAME \
+ --system-assigned-identity
+```
+
+This command creates a connection between your web app and your PostgreSQL server, and manages authentication through a system-assigned managed identity.
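+
+To verify the result, you can list the connection and the app settings that Service Connector added. The exact setting names (for example, an `AZURE_POSTGRESQL_*` value) are an assumption here and may differ:
+
+```azurecli-interactive
+# List Service Connector connections for the web app.
+az webapp connection list \
+    --resource-group $RESOURCE_GROUP \
+    --name $APPSERVICE_NAME \
+    --output table
+
+# Inspect the app settings that the connection created (setting names may differ).
+az webapp config appsettings list \
+    --resource-group $RESOURCE_GROUP \
+    --name $APPSERVICE_NAME \
+    --output table
+```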
+
+## View sample web app
+
+Run the following command to open the deployed web app in your browser.
+
+```azurecli-interactive
+az webapp browse \
+ --resource-group $RESOURCE_GROUP \
+    --name $APPSERVICE_NAME
+```
++
+## Next steps
+
+Learn more about running Java apps on App Service on Linux in the developer guide.
+
+> [!div class="nextstepaction"]
+> [Java in App Service Linux dev guide](configure-language-java.md?pivots=platform-linux)
attestation Claim Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/claim-sets.md
Claims to be used by policy authors to define authorization rules in an SGX atte
- **x-ms-sgx-mrsigner**: A string value, which identifies the author of SGX enclave.
  MRSIGNER is the hash of the enclave author’s public key which is used to sign the enclave binary. By validating MRSIGNER via an attestation policy, customers can verify if trusted binaries are running inside an enclave. When the policy claim does not match the enclave author’s MRSIGNER, it implies that the enclave binary is not signed by a trusted source and the attestation fails.
  MRSIGNER is the hash of the enclave author’s public key which is associated with the private key used to sign the enclave binary. By validating MRSIGNER via an attestation policy, customers can verify if trusted binaries are running inside an enclave. When the policy claim does not match the enclave author’s MRSIGNER, it implies that the enclave binary is not signed by a trusted source and the attestation fails.
When an enclave author prefers to rotate MRSIGNER for security reasons, Azure Attestation policy must be updated to support the new and old MRSIGNER values before the binaries are updated. Otherwise authorization checks will fail resulting in attestation failures.
azure-arc Automated Integration Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/automated-integration-testing.md
At a high-level, the launcher performs the following sequence of steps:
3. Perform CRD metadata scan to discover existing Arc and Arc Data Services Custom Resources 4. Clean up any existing Custom Resources in Kubernetes, and subsequent resources in Azure. If any mismatch between the credentials in `.test.env` compared to resources existing in the cluster, quit. 5. Generate a unique set of environment variables based on timestamp for Arc Cluster name, Data Controller and Custom Location/Namespace. Prints out the environment variables, obfuscating sensitive values (e.g. Service Principal Password etc.)
-6. a. For Direct Mode - Onboard the Cluster to Azure Arc, then deploys the Controller via the [unified experience](/create-data-controller-direct-cli?tabs=linux#deployunified-experience)
+6. a. For Direct Mode - Onboard the Cluster to Azure Arc, then deploys the Controller via the [unified experience](create-data-controller-direct-cli.md?tabs=linux#deployunified-experience)
b. For Indirect Mode: deploy the Data Controller 7. Once Data Controller is `Ready`, generate a set of Azure CLI ([`az arcdata dc debug`](/cli/azure/arcdata/dc/debug?view=azure-cli-latest&preserve-view=true)) logs and store locally, labeled as `setup-complete` - as a baseline. 8. Use the `TESTS_DIRECT/INDIRECT` environment variable from `.test.env` to launch a set of parallelized Sonobuoy test runs based on a space-separated array (`TESTS_(IN)DIRECT`). These runs execute in a new `sonobuoy` namespace, using `arc-sb-plugin` pod that contains the Pytest validation tests.
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
In version 2.x and later versions of the Functions runtime, configures app behav
In version 2.x and later versions of the Functions runtime, application settings can override [host.json](functions-host-json.md) settings in the current environment. These overrides are expressed as application settings named `AzureFunctionsJobHost__path__to__setting`. For more information, see [Override host.json values](functions-host-json.md#override-hostjson-values).
-## AzureFunctionsWebHost__hostid
+## AzureFunctionsWebHost__hostId
Sets the host ID for a given function app, which should be a unique ID. This setting overrides the automatically generated host ID value for your app. Use this setting only when you need to prevent host ID collisions between function apps that share the same storage account.
A host ID must be between 1 and 32 characters, contain only lowercase letters, n
|Key|Sample value| |||
-|AzureFunctionsWebHost__hostid|`myuniquefunctionappname123456789`|
+|AzureFunctionsWebHost__hostId|`myuniquefunctionappname123456789`|
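+
+For example, a sketch of setting this value with the Azure CLI; the function app and resource group names are placeholders:
+
+```azurecli
+az functionapp config appsettings set \
+  --name myFunctionApp --resource-group myResourceGroup \
+  --settings AzureFunctionsWebHost__hostId=myuniquefunctionappname123456789
+```
+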
For more information, see [Host ID considerations](storage-considerations.md#host-id-considerations).
azure-functions Functions Develop Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-local.md
When you develop your functions locally, any local settings required by your app
+ To learn more about local development of compiled C# functions (both in-process and isolated process) using Visual Studio, see [Develop Azure Functions using Visual Studio](functions-develop-vs.md). + To learn more about local development of functions using VS Code on a Mac, Linux, or Windows computer, see the Visual Studio Code getting started article for your preferred language: + [C# (in-process)](create-first-function-vs-code-csharp.md)
- + [C# )isolated process)](create-first-function-vs-code-csharp.md?tabs=isolated-process)
+ + [C# (isolated process)](create-first-function-vs-code-csharp.md?tabs=isolated-process)
+ [Java](create-first-function-vs-code-java.md) + [JavaScript](create-first-function-vs-code-node.md) + [PowerShell](create-first-function-vs-code-powershell.md)
azure-functions Functions Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-versions.md
Title: Azure Functions runtime versions overview
description: Azure Functions supports multiple versions of the runtime. Learn the differences between them and how to choose the one that's right for you. Previously updated : 07/06/2022 Last updated : 09/23/2022 zone_pivot_groups: programming-languages-set-functions
If you don't see your programming language, go select it from the [top of the pa
#### Runtime -- Azure Functions Proxies are no longer supported in 4.x. You're recommended to use [Azure API Management](../api-management/import-function-app-as-api.md).
+- Azure Functions proxies is a legacy feature for versions 1.x through 3.x of the Azure Functions runtime. Support for Functions proxies is being returned in version 4.x so that you can successfully upgrade your function apps to the latest runtime version. As soon as possible, you should instead switch to integrating your function apps with Azure API Management. API Management lets you take advantage of a more complete set of features for defining, securing, managing, and monetizing your Functions-based APIs. For more information, see [API Management integration](functions-proxies.md#api-management-integration). For information about the pending return of proxies in version 4.x, [Monitor the App Service announcements page](https://github.com/Azure/app-service-announcements/issues).
- Logging to Azure Storage using *AzureWebJobsDashboard* is no longer supported in 4.x. You should instead use [Application Insights](./functions-monitoring.md). ([#1923](https://github.com/Azure/Azure-Functions/issues/1923))
azure-functions Storage Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/storage-considerations.md
You can use the following strategies to avoid host ID collisions:
### Override the host ID
-You can explicitly set a specific host ID for your function app in the application settings by using the `AzureFunctionsWebHost__hostid` setting. For more information, see [AzureFunctionsWebHost__hostid](functions-app-settings.md#azurefunctionswebhost__hostid).
+You can explicitly set a specific host ID for your function app in the application settings by using the `AzureFunctionsWebHost__hostId` setting. For more information, see [AzureFunctionsWebHost__hostId](functions-app-settings.md#azurefunctionswebhost__hostid).
When the collision occurs between slots, you may need to mark this setting as a slot setting. To learn how to create app settings, see [Work with application settings](functions-how-to-use-azure-function-app-settings.md#settings).
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
do
$randomContent = New-Guid $logRecord = "$(Get-Date -format s)Z Record number $count with random content $randomContent" $logRecord | Out-File "$logFolder\\$logFileName" -Encoding utf8 -Append
- Sleep $sleepSeconds
+ Start-Sleep $sleepSeconds
} while ($true)
azure-monitor Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/console.md
telemetryClient.TrackTrace("Hello World!");
``` > [!NOTE]
-> Telemetry is not sent instantly. Telemetry items are batched and sent by the ApplicationInsights SDK. In Console apps, which exit right after calling `Track()` methods, telemetry may not be sent unless `Flush()` and `Sleep`/`Delay` is done before the app exits as shown in [full example](#full-example) later in this article. `Sleep` is not required if you are using `InMemoryChannel`. There is an active issue regarding the need for `Sleep` which is tracked here: [ApplicationInsights-dotnet/issues/407](https://github.com/microsoft/ApplicationInsights-dotnet/issues/407)
+> - Telemetry isn't sent instantly; items are batched and sent by the ApplicationInsights SDK. Console apps exit after calling `Track()` methods.
+> - Telemetry may not be sent unless `Flush()` and `Sleep`/`Delay` are called before the app exits, as shown in the [full example](#full-example) later in this article. `Sleep` isn't required if you're using `InMemoryChannel`.
* Install latest version of [Microsoft.ApplicationInsights.DependencyCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.DependencyCollector) package - it automatically tracks HTTP, SQL, or some other external dependency calls. You may initialize and configure Application Insights from the code or using `ApplicationInsights.config` file. Make sure initialization happens as early as possible. > [!NOTE]
-> Instructions referring to **ApplicationInsights.config** are only applicable to apps that are targeting the .NET Framework, and do not apply to .NET Core applications.
+> - **ApplicationInsights.config** is not supported by .NET Core applications.
### Using config file
azure-monitor Container Insights Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-onboard.md
Container insights supports the following environments:
- [Azure Arc-enabled Kubernetes cluster](../../azure-arc/kubernetes/overview.md) - [Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview) or on-premises - [AKS engine](https://github.com/Azure/aks-engine)
   - [Red Hat OpenShift](https://docs.openshift.com/container-platform/4.3/welcome/index.html) version 4.x
+  - [Red Hat OpenShift](https://docs.openshift.com/container-platform/latest/welcome/index.html) version 4.x
## Supported Kubernetes versions The versions of Kubernetes and support policy are the same as those [supported in Azure Kubernetes Service (AKS)](../../aks/supported-kubernetes-versions.md).
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 09/22/2022 Last updated : 09/26/2022 # Guidelines for Azure NetApp Files network planning
If the subnet has a combination of volumes with the Standard and Basic network f
Configuring user-defined routes (UDRs) on the source VM subnets with address prefix of delegated subnet and next hop as NVA isn't supported for volumes with the Basic network features. Such a setting will result in connectivity issues.
+> [!NOTE]
+> To access an Azure NetApp Files volume from an on-premises network via a VNet gateway (ExpressRoute or VPN) and firewall, configure the route table assigned to the VNet gateway to include the `/32` IPv4 address of the Azure NetApp Files volume listed and point to the firewall as the next hop. Using an aggregate address space that includes the Azure NetApp Files volume IP address will not forward the Azure NetApp Files traffic to the firewall.
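+
+A rough Azure CLI sketch of such a route follows; the route table name, volume IP address, and firewall IP address are placeholders:
+
+```azurecli
+# Add a /32 route for the Azure NetApp Files volume IP that uses the firewall as the next hop.
+az network route-table route create \
+  --resource-group myResourceGroup \
+  --route-table-name myGatewayRouteTable \
+  --name anf-volume-route \
+  --address-prefix 10.0.1.4/32 \
+  --next-hop-type VirtualAppliance \
+  --next-hop-ip-address 10.0.2.4
+```
+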
+ ## Azure native environments The following diagram illustrates an Azure-native environment:
azure-netapp-files Cross Region Replication Create Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-create-peering.md
To authorize the replication, you need to obtain the resource ID of the replicat
* [Manage disaster recovery](cross-region-replication-manage-disaster-recovery.md) * [Delete volume replications or volumes](cross-region-replication-delete.md) * [Troubleshoot cross-region-replication](troubleshoot-cross-region-replication.md)-
+* [Manage Azure NetApp Files volume replication with the CLI](/cli/azure/netappfiles/volume/replication)
azure-resource-manager Bicep Functions Array https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-array.md
description: Describes the functions to use in a Bicep file for working with arr
Previously updated : 04/12/2022- Last updated : 09/26/2022 + # Array functions for Bicep This article describes the Bicep functions for working with arrays. The lambda functions for working with arrays can be found [here](./bicep-functions-lambda.md).
The output from the preceding example with the default values is:
| objectEmpty | Bool | True | | stringEmpty | Bool | True |
+### Quickstart examples
+
+The following example is extracted from a quickstart template, [SQL Server VM with performance optimized storage settings
+](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.attestation/attestation-provider-create/main.bicep):
+
+```bicep
+@description('Array containing DNS Servers')
+param dnsServers array = []
+
+...
+
+resource vnet 'Microsoft.Network/virtualNetworks@2021-02-01' = {
+ name: vnetName
+ location: location
+ properties: {
+ addressSpace: {
+ addressPrefixes: vnetAddressSpace
+ }
+ dhcpOptions: empty(dnsServers) ? null : {
+ dnsServers: dnsServers
+ }
+ ...
+ }
+}
+```
+
+In the [conditional expression](./operators-logical.md#conditional-expression--), the empty function is used to check whether the **dnsServers** array is an empty array.
+ ## first `first(arg1)`
The output from the preceding example with the default values is:
`flatten(arrayToFlatten)`
-Takes an array of arrays, and returns an array of sub-array elements, in the original order. Sub-arrays are only flattened once, not recursively.
+Takes an array of arrays, and returns an array of subarray elements, in the original order. Subarrays are only flattened once, not recursively.
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
| Parameter | Required | Type | Description | |: |: |: |: |
-| arrayToFlattern |Yes |array |The array of sub-arrays to flatten.|
+| arrayToFlatten |Yes |array |The array of subarrays to flatten.|
### Return value
The output from the preceding example with the default values is:
| stringLength | Int | 13 | | objectLength | Int | 4 |
+### Quickstart examples
+
+The following example is extracted from a quickstart template, [Deploy API Management in external VNet with public IP
+](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.apimanagement/api-management-create-with-external-vnet-publicip):
+
+```bicep
+@description('Numbers for availability zones, for example, 1,2,3.')
+param availabilityZones array = [
+ '1'
+ '2'
+]
+
+resource exampleApim 'Microsoft.ApiManagement/service@2021-08-01' = {
+ name: apiManagementName
+ location: location
+ sku: {
+ name: sku
+ capacity: skuCount
+ }
+ zones: ((length(availabilityZones) == 0) ? null : availabilityZones)
+ ...
+}
+```
+
+In the [conditional expression](./operators-logical.md#conditional-expression--), the `length` function checks the length of the **availabilityZones** array.
+
+More examples can be found in these quickstart Bicep files:
+- [Backup Resource Manager VMs using Recovery Services vault
+](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.recoveryservices/recovery-services-backup-vms/)
+- [Deploy API Management into Availability Zones](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.apimanagement/api-management-simple-zones)
+- [Create a Firewall and FirewallPolicy with Rules and Ipgroups](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/azurefirewall-create-with-firewallpolicy-apprule-netrule-ipgroups)
+- [Create a sandbox setup of Azure Firewall with Zones](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/azurefirewall-with-zones-sandbox)
+ ## max `max(arg1)`
The output from the preceding example with the default values is:
| - | - | -- | | rangeOutput | Array | [5, 6, 7] |
+### Quickstart examples
+
+The following example is extracted from a quickstart template, [Two VMs in VNET - Internal Load Balancer and LB rules
+](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/2-vms-internal-load-balancer):
+
+```bicep
+...
+var numberOfInstances = 2
+
+resource networkInterface 'Microsoft.Network/networkInterfaces@2021-05-01' = [for i in range(0, numberOfInstances): {
+ name: '${networkInterfaceName}${i}'
+ location: location
+ properties: {
+ ...
+ }
+}]
+
+resource vm 'Microsoft.Compute/virtualMachines@2021-11-01' = [for i in range(0, numberOfInstances): {
+ name: '${vmNamePrefix}${i}'
+ location: location
+ properties: {
+ ...
+ }
+}]
+```
+
+The Bicep file creates two networkInterface and two virtualMachine resources.
+
+More examples can be found in these quickstart Bicep files:
+
+- [Multi VM Template with Managed Disk](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-copy-managed-disks)
+- [Create a VM with multiple empty StandardSSD_LRS Data Disks](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-with-standardssd-disk)
+- [Create a Firewall and FirewallPolicy with Rules and Ipgroups](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/azurefirewall-create-with-firewallpolicy-apprule-netrule-ipgroups)
+- [Create an Azure Firewall with IpGroups](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/azurefirewall-create-with-ipgroups-and-linux-jumpbox)
+- [Create a sandbox setup of Azure Firewall with Zones](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/azurefirewall-with-zones-sandbox)
+- [Create an Azure Firewall with multiple IP public addresses](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/fw-docs-qs)
+- [Create a standard load-balancer](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/load-balancer-standard-create)
+- [Azure Traffic Manager VM example](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/traffic-manager-vm)
+- [Create A Security Automation for specific Alerts](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.security/securitycenter-create-automation-for-alertnamecontains)
+- [SQL Server VM with performance optimized storage settings](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.sqlvirtualmachine/sql-vm-new-storage)
+- [Create a storage account with multiple Blob containers](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.storage/storage-multi-blob-container)
+- [Create a storage account with multiple file shares](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.storage/storage-multi-file-share)
+ ## skip `skip(originalValue, numberToSkip)`
azure-resource-manager Msbuild Bicep File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/msbuild-bicep-file.md
Title: Use MSBuild to convert Bicep to JSON description: Use MSBuild to convert a Bicep file to Azure Resource Manager template (ARM template) JSON. Previously updated : 07/14/2022 Last updated : 09/26/2022
For [Microsoft.Build.NoTargets](/dotnet/core/project-sdk/overview#project-files)
The following example converts Bicep to JSON inside a classic project file that's not SDK-based. Only use the classic example if the previous examples don't work for you. Replace `__LATEST_VERSION__` with the latest version of the Bicep NuGet packages.
-In this example, the `ProjectGuid`, `RootNamespace` and `AssemblyName` properties contain placeholder values. When you create a project file, a unique GUID is created and the name values match your project's name.
+In this example, the `ProjectGuid`, `RootNamespace` and `AssemblyName` properties contain placeholder values. When you create a project file, a unique GUID is created, and the name values match your project's name.
```xml <?xml version="1.0" encoding="utf-8"?>
param location string = resourceGroup().location
var storageAccountName = 'storage${uniqueString(resourceGroup().id)}'
-resource storageAccount 'Microsoft.Storage/storageAccounts@2021-09-01' = {
+resource storageAccount 'Microsoft.Storage/storageAccounts@2022-05-01' = {
name: storageAccountName location: location sku: {
Run MSBuild to convert the Bicep file to JSON.
"resources": [ { "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2021-09-01",
+ "apiVersion": "2022-05-01",
"name": "[variables('storageAccountName')]", "location": "[parameters('location')]", "sku": {
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
To learn more about limits on a more granular level, such as document size, quer
[!INCLUDE [azure-cognitive-services-limits](../../../includes/azure-cognitive-services-limits.md)]
+## Azure Container Apps limits
+
+For Azure Container Apps limits, see [Quotas in Azure Container Apps](../../container-apps/quotas.md).
+ ## Azure Cosmos DB limits For Azure Cosmos DB limits, see [Limits in Azure Cosmos DB](../../cosmos-db/concepts-limits.md).
azure-video-indexer Clapperboard Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/clapperboard-metadata.md
The clapperboard insight is used to detect clapper board instances and informati
When the movie is being edited, the slate is removed from the scene but a metadata with what's on the clapper board is important. Azure Video Indexer extracts the data from clapperboards, preserves and presents the metadata as described in this article.
-This insight is most useful to customers involved in the movie post-production process.
- ## View the insight ### View post-production insights
azure-vmware Concepts Design Public Internet Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-design-public-internet-access.md
Last updated 5/12/2022
-# Internet connectivity design considerations (Preview)
+# Internet connectivity design considerations
There are three primary patterns for creating outbound access to the Internet from Azure VMware Solution and to enable inbound Internet access to resources on your Azure VMware Solution private cloud.
backup Backup Vault Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-vault-overview.md
Title: Overview of Backup vaults description: An overview of Backup vaults. Previously updated : 02/14/2022 Last updated : 09/26/2022
In the **Backup Instances** tile, you get a summarized view of all backup instan
![Backup jobs](./media/backup-vault-overview/backup-jobs.png)
-## Move a Backup vault across Azure subscriptions/resource groups (Public Preview)
+## Move a Backup vault across Azure subscriptions/resource groups
This section explains how to move a Backup vault (configured for Azure Backup) across Azure subscriptions and resource groups using the Azure portal.
Troubleshoot the following common issues you might encounter during Backup vault
**Cause**: Resource move for Backup vault is currently not supported in the selected Azure region.
-**Recommendation**: Ensure that you've selected one of the supported regions to move Backup vaults. See [Supported regions](#supported-regions).
+**Recommendation**: Ensure that you've selected one of the supported regions to move Backup vaults. See [Supported regions](#supported-regions).
+
+#### UserErrorCrossTenantMSIMoveNotSupported
+
+**Cause**: This error occurs if the subscription with which the resource is associated has moved to a different tenant, but the managed identity is still associated with the old tenant.
+
+**Recommendation**: Remove the managed identity from the existing tenant, move the resource, and then add the identity again in the new tenant.
## Next steps
batch Managed Identity Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/managed-identity-pools.md
This topic explains how to enable user-assigned managed identities on Batch pool
First, [create your user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) in the same tenant as your Batch account. You can create the identity using the Azure portal, the Azure Command-Line Interface (Azure CLI), PowerShell, Azure Resource Manager, or the Azure REST API. This managed identity does not need to be in the same resource group or even in the same subscription.
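
For example, a minimal sketch with the Azure CLI; the resource group and identity names are placeholders:

```azurecli
# Create a user-assigned managed identity and capture its resource ID for use when creating the pool.
az identity create --resource-group myResourceGroup --name myBatchPoolIdentity
az identity show --resource-group myResourceGroup --name myBatchPoolIdentity --query id --output tsv
```
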
+> [!IMPORTANT]
+> Identities must be configured as user-assigned managed identities. The system-assigned managed identity can be used for retrieving [customer-managed keys from Azure Key Vault](batch-customer-managed-key.md), but system-assigned identities aren't supported in Batch pools.
+ ## Create a Batch pool with user-assigned managed identities After you've created one or more user-assigned managed identities, you can create a Batch pool with that identity or those identities. You can:
confidential-ledger Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-portal.md
Sign in to the Azure portal at https://portal.azure.com.
1. From the Azure portal menu, or from the Home page, select **Create a resource**.
-1. In the Search box, enter "confidential ledger".
-
-1. From the results list, choose **confidential ledger**.
-
-1. On the confidential ledger section, choose **Create**.
+1. In the Search box, enter "Confidential Ledger", select **Confidential Ledger** from the results, and then choose **Create**.
1. On the Create confidential ledger section, provide the following information:
- - **Name**: Provide your confidential ledger a unique name.
- - **Subscription**: Choose a subscription.
+ - **Name**: Provide a unique name.
+ - **Subscription**: Choose the desired subscription.
   - **Resource Group**: Select **Create new** and enter a resource group name. - **Location**: In the pull-down menu, choose a location. - Leave the other options at their defaults.
Sign in to the Azure portal at https://portal.azure.com.
1. You must now add an Azure AD-based or certificate-based user to your confidential ledger with a role of "Administrator." In this quickstart, we'll add an Azure AD-based user. Select **+ Add AAD-Based User**.
-1. You must add an Azure AD-based or Certificate-based user. Search the right-hand pane for your email address. Select your row, and then choose **Select** at the bottom of the pane.
+1. You must add an Azure AD-based or Certificate-based user. Search the right-hand pane for your email address. Select your row, and then choose **Select** at the bottom of the pane. Your user profile may already be in the Azure AD-based user section, in which case you cannot add yourself again.
1. In the **Ledger Role** drop-down field, select **Administrator**.
connectors Connectors Create Api Servicebus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-servicebus.md
The Service Bus connector has different versions, based on [logic app workflow t
* If your logic app resource uses a managed identity for authenticating access to your Service Bus namespace and messaging entity, make sure that you've assigned role permissions at the corresponding levels. For example, to access a queue, the managed identity requires a role that has the necessary permissions for that queue.
- If you're using the Service Bus *managed* connector, each managed identity that accesses a *different* messaging entity should have a separate API connection to that entity. If you use different Service Bus actions to send and receive messages, and those actions require different permissions, make sure to use different API connections.
+ Each managed identity that accesses a *different* messaging entity should have a separate connection to that entity. If you use different Service Bus actions to send and receive messages, and those actions require different permissions, make sure to use different connections.
For more information about managed identities, review [Authenticate access to Azure resources with managed identities in Azure Logic Apps](../logic-apps/create-managed-service-identity.md).
container-apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/overview.md
With Azure Container Apps, you can:
- [**Monitor your apps**](monitor.md) using Azure Log Analytics.
+- Take advantage of [**generous quotas**](quotas.md), which can be overridden to increase limits on a per-account basis.
+ <sup>1</sup> Applications that [scale on CPU or memory load](scale-app.md) can't scale to zero. ## Introductory video
container-apps Tutorial Java Quarkus Connect Managed Identity Postgresql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-java-quarkus-connect-managed-identity-postgresql-database.md
+
+ Title: 'Tutorial: Access data with managed identity in Java using Service Connector'
+description: Secure Azure Database for PostgreSQL connectivity with managed identity from a sample Java Quarkus app, and deploy it to Azure Container Apps.
+ms.devlang: java
++++ Last updated : 09/26/2022++
+# Tutorial: Connect to PostgreSQL Database from a Java Quarkus Container App without secrets using a managed identity
+
+[Azure Container Apps](overview.md) provides a [managed identity](managed-identity.md) for your app, which is a turn-key solution for securing access to [Azure Database for PostgreSQL](/azure/postgresql/) and other Azure services. Managed identities in Container Apps make your app more secure by eliminating secrets from your app, such as credentials in the environment variables.
+
+This tutorial walks you through the process of building, configuring, deploying, and scaling Java container apps on Azure. At the end of this tutorial, you'll have a [Quarkus](https://quarkus.io) application storing data in a [PostgreSQL](../postgresql/index.yml) database with a managed identity running on [Container Apps](overview.md).
+
+What you will learn:
+
+> [!div class="checklist"]
+> * Configure a Quarkus app to authenticate using Azure Active Directory (Azure AD) with a PostgreSQL Database.
+> * Create an Azure container registry and push a Java app image to it.
+> * Create a Container App in Azure.
+> * Create a PostgreSQL database in Azure.
+> * Connect to a PostgreSQL Database with managed identity using Service Connector.
++
+## 1. Prerequisites
+
+* [Azure CLI](/cli/azure/overview). This tutorial requires that you run the latest [edge build of Azure CLI](https://github.com/Azure/azure-cli/blob/dev/doc/try_new_features_before_release.md). [Download and install the edge builds](https://github.com/Azure/azure-cli#edge-builds) for your platform.
+* [Git](https://git-scm.com/)
+* [Java JDK](/azure/developer/java/fundamentals/java-support-on-azure)
+* [Maven](https://maven.apache.org)
+* [Docker](https://docs.docker.com/get-docker/)
+* [GraalVM](https://www.graalvm.org/downloads/)
+
+## 2. Create a container registry
+
+Create a resource group with the [az group create](/cli/azure/group#az-group-create) command. An Azure resource group is a logical container into which Azure resources are deployed and managed.
+
+The following example creates a resource group named `myResourceGroup` in the East US Azure region.
+
+```azurecli
+az group create --name myResourceGroup --location eastus
+```
+
+Create an Azure container registry instance using the [az acr create](/cli/azure/acr#az-acr-create) command. The registry name must be unique within Azure, and contain 5-50 alphanumeric characters. In the following example, `myContainerRegistry007` is used. Update this to a unique value.
+
+```azurecli
+az acr create \
+ --resource-group myResourceGroup \
+ --name myContainerRegistry007 \
+ --sku Basic
+```
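+
+If you want to confirm the login server name that the image tag and push steps below expect, one way is to query it with the Azure CLI (a sketch using the example registry name above):
+
+```azurecli
+az acr show --name myContainerRegistry007 --query loginServer --output table
+```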
+
+## 3. Clone the sample app and prepare the container image
+
+This tutorial uses a sample Fruits list app with a web UI that calls a Quarkus REST API backed by [Azure Database for PostgreSQL](../postgresql/index.yml). The code for the app is available [on GitHub](https://github.com/quarkusio/quarkus-quickstarts/tree/main/hibernate-orm-panache-quickstart). To learn more about writing Java apps using Quarkus and PostgreSQL, see the [Quarkus Hibernate ORM with Panache Guide](https://quarkus.io/guides/hibernate-orm-panache) and the [Quarkus Datasource Guide](https://quarkus.io/guides/datasource).
+
+Run the following commands in your terminal to clone the sample repo and set up the sample app environment.
+
+```bash
+git clone https://github.com/quarkusio/quarkus-quickstarts
+cd quarkus-quickstarts/hibernate-orm-panache-quickstart
+```
+
+### Modify your project
+
+1. Add the required dependencies to your project's BOM file.
+
+ ```xml
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity-providers-jdbc-postgresql</artifactId>
+ <version>1.0.0-beta.1</version>
+ </dependency>
+ ```
+
+1. Configure the Quarkus app properties.
+
+    The Quarkus configuration is located in the *src/main/resources/application.properties* file. Open this file in your editor, and observe several default properties. The properties prefixed with `%prod` are only used when the application is built and deployed, for example when deployed to Azure Container Apps. When the application runs locally, `%prod` properties are ignored. Similarly, `%dev` properties are used in Quarkus' Live Coding / Dev mode, and `%test` properties are used during continuous testing.
+
+ Delete the existing content in *application.properties* and replace with the following to configure the database for dev, test, and production modes:
+
+ ```properties
+ quarkus.package.type=uber-jar
+
+ quarkus.hibernate-orm.database.generation=drop-and-create
+ quarkus.datasource.db-kind=postgresql
+ quarkus.datasource.jdbc.max-size=8
+ quarkus.datasource.jdbc.min-size=2
+ quarkus.hibernate-orm.log.sql=true
+ quarkus.hibernate-orm.sql-load-script=import.sql
+ quarkus.datasource.jdbc.acquisition-timeout = 10
+
+ %dev.quarkus.datasource.username=${AZURE_CLIENT_NAME}@${DBHOST}
+ %dev.quarkus.datasource.jdbc.url=jdbc:postgresql://${DBHOST}.postgres.database.azure.com:5432/${DBNAME}?\
+ authenticationPluginClassName=com.azure.identity.providers.postgresql.AzureIdentityPostgresqlAuthenticationPlugin\
+ &sslmode=require\
+ &azure.clientId=${AZURE_CLIENT_ID}\
+ &azure.clientSecret=${AZURE_CLIENT_SECRET}\
+ &azure.tenantId=${AZURE_TENANT_ID}
+
+ %prod.quarkus.datasource.username=${AZURE_MI_NAME}@${DBHOST}
+ %prod.quarkus.datasource.jdbc.url=jdbc:postgresql://${DBHOST}.postgres.database.azure.com:5432/${DBNAME}?\
+ authenticationPluginClassName=com.azure.identity.providers.postgresql.AzureIdentityPostgresqlAuthenticationPlugin\
+ &sslmode=require
+
+ %dev.quarkus.class-loading.parent-first-artifacts=com.azure:azure-core::jar,\
+ com.azure:azure-core-http-netty::jar,\
+ io.projectreactor.netty:reactor-netty-core::jar,\
+ io.projectreactor.netty:reactor-netty-http::jar,\
+ io.netty:netty-resolver-dns::jar,\
+ io.netty:netty-codec::jar,\
+ io.netty:netty-codec-http::jar,\
+ io.netty:netty-codec-http2::jar,\
+ io.netty:netty-handler::jar,\
+ io.netty:netty-resolver::jar,\
+ io.netty:netty-common::jar,\
+ io.netty:netty-transport::jar,\
+ io.netty:netty-buffer::jar,\
+ com.azure:azure-identity::jar,\
+ com.azure:azure-identity-providers-core::jar,\
+ com.azure:azure-identity-providers-jdbc-postgresql::jar,\
+ com.fasterxml.jackson.core:jackson-core::jar,\
+ com.fasterxml.jackson.core:jackson-annotations::jar,\
+ com.fasterxml.jackson.core:jackson-databind::jar,\
+ com.fasterxml.jackson.dataformat:jackson-dataformat-xml::jar,\
+ com.fasterxml.jackson.datatype:jackson-datatype-jsr310::jar,\
+ org.reactivestreams:reactive-streams::jar,\
+ io.projectreactor:reactor-core::jar,\
+ com.microsoft.azure:msal4j::jar,\
+ com.microsoft.azure:msal4j-persistence-extension::jar,\
+ org.codehaus.woodstox:stax2-api::jar,\
+ com.fasterxml.woodstox:woodstox-core::jar,\
+ com.nimbusds:oauth2-oidc-sdk::jar,\
+ com.nimbusds:content-type::jar,\
+ com.nimbusds:nimbus-jose-jwt::jar,\
+ net.minidev:json-smart::jar,\
+ net.minidev:accessors-smart::jar,\
+ io.netty:netty-transport-native-unix-common::jar
+ ```
+
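+    When you run the app locally in Quarkus dev mode, the `%dev` properties above read several environment variables. Here's a minimal sketch of exporting them in bash (all values are placeholders; use the identity and server details from your own environment):
+
+    ```bash
+    export DBHOST=<postgres-server-name>            # matches ${DBHOST} in the JDBC URL above
+    export DBNAME=<database-name>                   # matches ${DBNAME}; this tutorial later creates a database named fruits
+    export AZURE_CLIENT_NAME=<client-display-name>  # matches ${AZURE_CLIENT_NAME} in the %dev username
+    export AZURE_CLIENT_ID=<client-id>
+    export AZURE_CLIENT_SECRET=<client-secret>
+    export AZURE_TENANT_ID=<tenant-id>
+    ```
+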
+### Build and push a Docker image to the container registry
+
+1. Build the container image.
+
+ Run the following command to build the Quarkus app image. You must tag it with the fully qualified name of your registry login server. The login server name is in the format *\<registry-name\>.azurecr.io* (must be all lowercase), for example, *myContainerRegistry007.azurecr.io*. Replace the name with your own registry name.
+
+ ```bash
+ mvnw quarkus:add-extension -Dextensions="container-image-jib"
+ mvnw clean package -Pnative -Dquarkus.native.container-build=true -Dquarkus.container-image.build=true -Dquarkus.container-image.registry=myContainerRegistry007 -Dquarkus.container-image.name=quarkus-postgres-passwordless-app -Dquarkus.container-image.tag=v1
+ ```
+
+1. Log in to the registry.
+
+ Before pushing container images, you must log in to the registry. To do so, use the [az acr login][az-acr-login] command. Specify only the registry resource name when signing in with the Azure CLI. Don't use the fully qualified login server name.
+
+ ```azurecli
+ az acr login --name <registry-name>
+ ```
+
+ The command returns a `Login Succeeded` message once completed.
+
+1. Push the image to the registry.
+
+ Use [docker push][docker-push] to push the image to the registry instance. Replace `myContainerRegistry007` with the login server name of your registry instance. This example creates the `quarkus-postgres-passwordless-app` repository, containing the `quarkus-postgres-passwordless-app:v1` image.
+
+ ```bash
+ docker push myContainerRegistry007/quarkus-postgres-passwordless-app:v1
+ ```
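+
+    Optionally, you can verify that the push succeeded by listing the tags in the new repository (a sketch; replace the registry name as before):
+
+    ```azurecli
+    az acr repository show-tags --name <registry-name> --repository quarkus-postgres-passwordless-app --output table
+    ```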
+
+## 4. Create a Container App on Azure
+
+1. Create a Container Apps instance by running the following command. Make sure you replace the value of the environment variables with the actual name and location you want to use.
+
+ ```azurecli
+ RESOURCE_GROUP="myResourceGroup"
+ LOCATION="eastus"
+ CONTAINERAPPS_ENVIRONMENT="my-environment"
+
+ az containerapp env create \
+ --resource-group $RESOURCE_GROUP \
+ --name $CONTAINERAPPS_ENVIRONMENT \
+ --location $LOCATION
+ ```
+
+1. Create a container app with your app image by running the following command. Replace the placeholders with your values. To find the container registry admin account details, see [Authenticate with an Azure container registry](../container-registry/container-registry-authentication.md).
+
+ ```azurecli
+ CONTAINER_IMAGE_NAME=quarkus-postgres-passwordless-app:v1
+ REGISTRY_SERVER=myContainerRegistry007
+ REGISTRY_USERNAME=<REGISTRY_USERNAME>
+ REGISTRY_PASSWORD=<REGISTRY_PASSWORD>
+
+ az containerapp create \
+ --resource-group $RESOURCE_GROUP \
+ --name my-container-app \
+ --image $CONTAINER_IMAGE_NAME \
+ --environment $CONTAINERAPPS_ENVIRONMENT \
+ --registry-server $REGISTRY_SERVER \
+ --registry-username $REGISTRY_USERNAME \
+ --registry-password $REGISTRY_PASSWORD
+ ```
+
+## 5. Create and connect a PostgreSQL database with identity connectivity
+
+Next, create a PostgreSQL Database Single Server and configure your container app to connect to a PostgreSQL Database with a system-assigned managed identity. The Quarkus app will connect to this database and store its data when running, persisting the application state no matter where you run the application.
+
+1. Create the database service.
+
+ ```azurecli
+ DB_SERVER_NAME='msdocs-quarkus-postgres-webapp-db'
+ ADMIN_USERNAME='demoadmin'
+ ADMIN_PASSWORD='<admin-password>'
+
+ az postgres server create \
+ --resource-group $RESOURCE_GROUP \
+ --name $DB_SERVER_NAME \
+ --location $LOCATION \
+ --admin-user $ADMIN_USERNAME \
+ --admin-password $ADMIN_PASSWORD \
+ --sku-name GP_Gen5_2
+ ```
+
+ The following parameters are used in the above Azure CLI command:
+
+ * *resource-group* &rarr; Use the same resource group name in which you created the web app, for example `msdocs-quarkus-postgres-webapp-rg`.
+ * *name* &rarr; The PostgreSQL database server name. This name must be **unique across all Azure** (the server endpoint becomes `https://<name>.postgres.database.azure.com`). Allowed characters are `A`-`Z`, `0`-`9`, and `-`. A good pattern is to use a combination of your company name and server identifier. (`msdocs-quarkus-postgres-webapp-db`)
+ * *location* &rarr; Use the same location used for the web app.
+ * *admin-user* &rarr; Username for the administrator account. It can't be `azure_superuser`, `admin`, `administrator`, `root`, `guest`, or `public`. For example, `demoadmin` is okay.
+ * *admin-password* &rarr; Password of the administrator user. It must contain 8 to 128 characters from three of the following categories: English uppercase letters, English lowercase letters, numbers, and non-alphanumeric characters.
+
+ > [!IMPORTANT]
+ > When creating usernames or passwords **do not** use the `$` character. Later in this tutorial, you will create environment variables with these values where the `$` character has special meaning within the Linux container used to run Java apps.
+
+ * *public-access* &rarr; `None` which sets the server in public access mode with no firewall rules. Rules will be created in a later step.
+ * *sku-name* &rarr; The name of the pricing tier and compute configuration, for example `GP_Gen5_2`. For more information, see [Azure Database for PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/server/).
+
+1. Create a database named `fruits` within the PostgreSQL service with this command:
+
+ ```azurecli
+ az postgres db create \
+ --resource-group $RESOURCE_GROUP \
+ --server-name $DB_SERVER_NAME \
+ --name fruits
+ ```
+
+1. Connect the database to the container app with a system-assigned managed identity, using the connection command.
+
+ ```azurecli
+ az containerapp connection create postgres \
+ --resource-group $RESOURCE_GROUP \
+ --name my-container-app \
+ --target-resource-group $RESOURCE_GROUP \
+ --server $DB_SERVER_NAME \
+ --database fruits \
+ --managed-identity
+ ```
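+
+    To check that the connection was created, you can list the app's Service Connector connections (a sketch; the command group follows the connection command used above, and output options may vary by CLI version):
+
+    ```azurecli
+    az containerapp connection list \
+        --resource-group $RESOURCE_GROUP \
+        --name my-container-app \
+        --output table
+    ```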
+
+## 6. Review your changes
+
+You can find the application URL (FQDN) by using the following command:
+
+```azurecli
+az containerapp list --resource-group $RESOURCE_GROUP
+```
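+
+Alternatively, assuming ingress is enabled on the app, the following sketch returns just the FQDN:
+
+```azurecli
+az containerapp show \
+    --resource-group $RESOURCE_GROUP \
+    --name my-container-app \
+    --query properties.configuration.ingress.fqdn \
+    --output tsv
+```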
+
+When the new webpage shows your list of fruits, your app is connecting to the database using the managed identity. You should now be able to edit the fruit list as before.
++
+## Next steps
+
+Learn more about running Java apps on Azure in the developer guide.
+
+> [!div class="nextstepaction"]
+> [Azure for Java Developers](/java/azure/)
cosmos-db Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/concepts-limits.md
Depending on which API you use, an Azure Cosmos container can represent either a
| Maximum length of database or container name | 255 | | Maximum number of stored procedures per container | 100 <sup>1</sup> | | Maximum number of UDFs per container | 50 <sup>1</sup> |
-| Maximum number of paths in indexing policy| 100 <sup>1</sup> |
| Maximum number of unique keys per container|10 <sup>1</sup> | | Maximum number of paths per unique key constraint|16 <sup>1</sup> | | Maximum TTL value |2147483647 |
Cosmos DB supports querying items using [SQL](./sql-query-getting-started.md). T
| Resource | Limit | | | |
-| Maximum length of SQL query| 256 KB |
+| Maximum length of SQL query| 512 KB |
| Maximum JOINs per query| 10 <sup>1</sup> | | Maximum UDFs per query| 10 <sup>1</sup> | | Maximum points per polygon| 4096 |
-| Maximum included paths per container| 500 |
-| Maximum excluded paths per container| 500 |
+| Maximum explicitly included paths per container| 1500 <sup>1</sup> |
+| Maximum explicitly excluded paths per container| 1500 <sup>1</sup> |
| Maximum properties in a composite index| 8 | <sup>1</sup> You can increase any of these SQL query limits by creating an [Azure Support request](create-support-request-quota-increase.md).
cosmos-db How To Setup Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-customer-managed-keys.md
+
+ Title: Configure customer-managed keys for your Azure Cosmos DB account
+description: Learn how to configure customer-managed keys for your Azure Cosmos DB account with Azure Key Vault
+++ Last updated : 07/20/2022++
+ms.devlang: azurecli
++
+# Configure customer-managed keys for your Azure Cosmos account with Azure Key Vault
+
+Data stored in your Azure Cosmos account is automatically and seamlessly encrypted with keys managed by Microsoft (**service-managed keys**). Optionally, you can choose to add a second layer of encryption with keys you manage (**customer-managed keys** or CMK).
++
+You must store customer-managed keys in [Azure Key Vault](../key-vault/general/overview.md) and provide a key for each Azure Cosmos account that is enabled with customer-managed keys. This key is used to encrypt all the data stored in that account.
+
+> [!NOTE]
+> Currently, customer-managed keys are available only for new Azure Cosmos accounts. You should configure them during account creation.
+
+## <a id="register-resource-provider"></a> Register the Azure Cosmos DB resource provider for your Azure subscription
+
+1. Sign in to the [Azure portal](https://portal.azure.com/), go to your Azure subscription, and select **Resource providers** under the **Settings** tab:
+
+ :::image type="content" source="./media/how-to-setup-cmk/portal-rp.png" alt-text="Resource providers entry from the left menu":::
+
+1. Search for the **Microsoft.DocumentDB** resource provider. Verify if the resource provider is already marked as registered. If not, choose the resource provider and select **Register**:
+
+ :::image type="content" source="./media/how-to-setup-cmk/portal-rp-register.png" alt-text="Registering the Microsoft.DocumentDB resource provider":::
+
+## Configure your Azure Key Vault instance
+
+> [!IMPORTANT]
+> Your Azure Key Vault instance must be accessible through public network access or allow trusted Microsoft services to bypass its firewall. An instance that is exclusively accessible through [private endpoints](../key-vault/general/private-link-service.md) cannot be used to host your customer-managed keys.
+
+Using customer-managed keys with Azure Cosmos DB requires you to set two properties on the Azure Key Vault instance that you plan to use to host your encryption keys: **Soft Delete** and **Purge Protection**.
+
+If you create a new Azure Key Vault instance, enable these properties during creation:
++
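+For example, a new vault with purge protection enabled can be created from the Azure CLI with a command along these lines (names are placeholders; soft delete is enabled by default on newly created vaults):
+
+```azurecli
+az keyvault create \
+    --name <key-vault-name> \
+    --resource-group <resource-group-name> \
+    --location <location> \
+    --enable-purge-protection true
+```
+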
+If you're using an existing Azure Key Vault instance, you can verify that these properties are enabled by looking at the **Properties** section on the Azure portal. If any of these properties isn't enabled, see the "Enabling soft-delete" and "Enabling Purge Protection" sections in one of the following articles:
+
+- [How to use soft-delete with PowerShell](../key-vault/general/key-vault-recovery.md)
+- [How to use soft-delete with Azure CLI](../key-vault/general/key-vault-recovery.md)
+
+## <a id="add-access-policy"></a> Add an access policy to your Azure Key Vault instance
+
+1. From the Azure portal, go to the Azure Key Vault instance that you plan to use to host your encryption keys. Select **Access Policies** from the left menu:
+
+ :::image type="content" source="./media/how-to-setup-cmk/portal-akv-ap.png" alt-text="Access policies from the left menu":::
+
+1. Select **+ Add Access Policy**.
+
+1. Under the **Key permissions** drop-down menu, select **Get**, **Unwrap Key**, and **Wrap Key** permissions:
+
+ :::image type="content" source="./media/how-to-setup-cmk/portal-akv-add-ap-perm2.png" alt-text="Selecting the right permissions":::
+
+1. Under **Select principal**, select **None selected**.
+
+1. Search for **Azure Cosmos DB** principal and select it (to make it easier to find, you can also search by application ID: `a232010e-820c-4083-83bb-3ace5fc29d0b` for any Azure region except Azure Government regions where the application ID is `57506a73-e302-42a9-b869-6f12d9ec29e9`). If the **Azure Cosmos DB** principal isn't in the list, you might need to re-register the **Microsoft.DocumentDB** resource provider as described in the [Register the resource provider](#register-resource-provider) section of this article.
+
+ > [!NOTE]
+ > This registers the Azure Cosmos DB first-party-identity in your Azure Key Vault access policy. To replace this first-party identity by your Azure Cosmos DB account managed identity, see [Using a managed identity in the Azure Key Vault access policy](#using-managed-identity).
+
+1. Choose **Select** at the bottom.
+
+ :::image type="content" source="./media/how-to-setup-cmk/portal-akv-add-ap.png" alt-text="Select the Azure Cosmos DB principal":::
+
+1. Select **Add** to add the new access policy.
+
+1. Select **Save** on the Key Vault instance to save all changes.
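+
+If you prefer the Azure CLI, an equivalent access policy can be granted with a command similar to the following sketch (it uses the Azure Cosmos DB application ID for public Azure regions noted above; use the Azure Government application ID where applicable):
+
+```azurecli
+az keyvault set-policy \
+    --name <key-vault-name> \
+    --spn a232010e-820c-4083-83bb-3ace5fc29d0b \
+    --key-permissions get unwrapKey wrapKey
+```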
+
+## Generate a key in Azure Key Vault
+
+1. From the Azure portal, go to the Azure Key Vault instance that you plan to use to host your encryption keys. Then, select **Keys** from the left menu:
+
+ :::image type="content" source="./media/how-to-setup-cmk/portal-akv-keys.png" alt-text="Keys entry from the left menu":::
+
+1. Select **Generate/Import**, provide a name for the new key, and select an RSA key size. A minimum of 3072 is recommended for best security. Then select **Create**:
+
+ :::image type="content" source="./media/how-to-setup-cmk/portal-akv-gen.png" alt-text="Create a new key":::
+
+1. After the key is created, select the newly created key and then its current version.
+
+1. Copy the key's **Key Identifier**, except the part after the last forward slash:
+
+ :::image type="content" source="./media/how-to-setup-cmk/portal-akv-keyid.png" alt-text="Copying the key's key identifier":::
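+
+The same kind of key can also be created from the Azure CLI, as in this sketch (names are placeholders):
+
+```azurecli
+az keyvault key create \
+    --vault-name <key-vault-name> \
+    --name <key-name> \
+    --kty RSA \
+    --size 3072
+```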
+
+## Create a new Azure Cosmos account
+
+### Using the Azure portal
+
+When you create a new Azure Cosmos DB account from the Azure portal, choose **Customer-managed key** in the **Encryption** step. In the **Key URI** field, paste the URI/key identifier of the Azure Key Vault key that you copied from the previous step:
++
+### <a id="using-powershell"></a> Using Azure PowerShell
+
+When you create a new Azure Cosmos DB account with PowerShell:
+
+- Pass the URI of the Azure Key Vault key copied earlier under the **keyVaultKeyUri** property in **PropertyObject**.
+
+- Use **2019-12-12** or later as the API version.
+
+> [!IMPORTANT]
+> You must set the `locations` property explicitly for the account to be successfully created with customer-managed keys.
+
+```powershell
+$resourceGroupName = "myResourceGroup"
+$accountLocation = "West US 2"
+$accountName = "mycosmosaccount"
+
+$failoverLocations = @(
+ @{ "locationName"="West US 2"; "failoverPriority"=0 }
+)
+
+$CosmosDBProperties = @{
+ "databaseAccountOfferType"="Standard";
+ "locations"=$failoverLocations;
+ "keyVaultKeyUri" = "https://<my-vault>.vault.azure.net/keys/<my-key>";
+}
+
+New-AzResource -ResourceType "Microsoft.DocumentDb/databaseAccounts" `
+ -ApiVersion "2019-12-12" -ResourceGroupName $resourceGroupName `
+ -Location $accountLocation -Name $accountName -PropertyObject $CosmosDBProperties
+```
+
+After the account has been created, you can verify that customer-managed keys have been enabled by fetching the URI of the Azure Key Vault key:
+
+```powershell
+Get-AzResource -ResourceGroupName $resourceGroupName -Name $accountName `
+ -ResourceType "Microsoft.DocumentDb/databaseAccounts" `
+ | Select-Object -ExpandProperty Properties `
+ | Select-Object -ExpandProperty keyVaultKeyUri
+```
+
+### Using an Azure Resource Manager template
+
+When you create a new Azure Cosmos account through an Azure Resource Manager template:
+
+- Pass the URI of the Azure Key Vault key that you copied earlier under the **keyVaultKeyUri** property in the **properties** object.
+
+- Use **2019-12-12** or later as the API version.
+
+> [!IMPORTANT]
+> You must set the `locations` property explicitly for the account to be successfully created with customer-managed keys.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "accountName": {
+ "type": "string"
+ },
+ "location": {
+ "type": "string"
+ },
+ "keyVaultKeyUri": {
+ "type": "string"
+ }
+ },
+ "resources":
+ [
+ {
+ "type": "Microsoft.DocumentDB/databaseAccounts",
+ "name": "[parameters('accountName')]",
+ "apiVersion": "2019-12-12",
+ "kind": "GlobalDocumentDB",
+ "location": "[parameters('location')]",
+ "properties": {
+ "locations": [
+ {
+ "locationName": "[parameters('location')]",
+ "failoverPriority": 0,
+ "isZoneRedundant": false
+ }
+ ],
+ "databaseAccountOfferType": "Standard",
+ "keyVaultKeyUri": "[parameters('keyVaultKeyUri')]"
+ }
+ }
+ ]
+}
+```
+
+Deploy the template with the following PowerShell script:
+
+```powershell
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+$accountLocation = "West US 2"
+$keyVaultKeyUri = "https://<my-vault>.vault.azure.net/keys/<my-key>"
+
+New-AzResourceGroupDeployment `
+ -ResourceGroupName $resourceGroupName `
+ -TemplateFile "deploy.json" `
+ -accountName $accountName `
+ -location $accountLocation `
+ -keyVaultKeyUri $keyVaultKeyUri
+```
+
+### <a id="using-azure-cli"></a> Using the Azure CLI
+
+When you create a new Azure Cosmos account through the Azure CLI, pass the URI of the Azure Key Vault key that you copied earlier under the `--key-uri` parameter.
+
+```azurecli-interactive
+resourceGroupName='myResourceGroup'
+accountName='mycosmosaccount'
+keyVaultKeyUri='https://<my-vault>.vault.azure.net/keys/<my-key>'
+
+az cosmosdb create \
+ -n $accountName \
+ -g $resourceGroupName \
+ --locations regionName='West US 2' failoverPriority=0 isZoneRedundant=False \
+ --key-uri $keyVaultKeyUri
+```
+
+After the account has been created, you can verify that customer-managed keys have been enabled by fetching the URI of the Azure Key Vault key:
+
+```azurecli-interactive
+az cosmosdb show \
+ -n $accountName \
+ -g $resourceGroupName \
+ --query keyVaultKeyUri
+```
+
+## <a id="using-managed-identity"></a> Using a managed identity in the Azure Key Vault access policy
+
+This access policy ensures that your encryption keys can be accessed by your Azure Cosmos DB account. The access policy is implemented by granting access to a specific Azure Active Directory (AD) identity. Two types of identities are supported:
+
+- Azure Cosmos DB's first-party identity can be used to grant access to the Azure Cosmos DB service.
+- Your Azure Cosmos DB account's [managed identity](how-to-setup-managed-identity.md) can be used to grant access to your account specifically.
+
+### To use a system-assigned managed identity
+
+Because a system-assigned managed identity can only be retrieved after the creation of your account, you still need to initially create your account using the first-party identity, as described [above](#add-access-policy). Then:
+
+1. If the system-assigned managed identity wasn't configured during account creation, [enable a system-assigned managed identity](./how-to-setup-managed-identity.md#add-a-system-assigned-identity) on your account and copy the `principalId` that got assigned.
+
+1. Add a new access policy to your Azure Key Vault account as described [above](#add-access-policy), but using the `principalId` you copied at the previous step instead of Azure Cosmos DB's first-party identity.
+
+1. Update your Azure Cosmos DB account to specify that you want to use the system-assigned managed identity when accessing your encryption keys in Azure Key Vault. You have two options:
+
+ - Specify the property in your account's Azure Resource Manager template:
+
+ ```json
+ {
+ "type": "Microsoft.DocumentDB/databaseAccounts",
+ "properties": {
+ "defaultIdentity": "SystemAssignedIdentity",
+ // ...
+ },
+ // ...
+ }
+ ```
+
+ - Update your account with the Azure CLI:
+
+ ```azurecli
+ resourceGroupName='myResourceGroup'
+ accountName='mycosmosaccount'
+
+ az cosmosdb update --resource-group $resourceGroupName --name $accountName --default-identity "SystemAssignedIdentity"
+ ```
+
+1. Optionally, you can then remove the Azure Cosmos DB first-party identity from your Azure Key Vault access policy.
+
+### To use a user-assigned managed identity
+
+1. When creating the new access policy in your Azure Key Vault account as described [above](#add-access-policy), use the `Object ID` of the managed identity you wish to use instead of Azure Cosmos DB's first-party identity.
+
+1. When creating your Azure Cosmos DB account, you must enable the user-assigned managed identity and specify that you want to use this identity when accessing your encryption keys in Azure Key Vault. Options include:
+
+ - Using an Azure Resource Manager template:
+
+ ```json
+ {
+ "type": "Microsoft.DocumentDB/databaseAccounts",
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "<identity-resource-id>": {}
+ }
+ },
+ // ...
+ "properties": {
+ "defaultIdentity": "UserAssignedIdentity=<identity-resource-id>",
+ "keyVaultKeyUri": "<key-vault-key-uri>"
+ // ...
+ }
+ }
+ ```
+
+ - Using the Azure CLI:
+
+ ```azurecli
+ resourceGroupName='myResourceGroup'
+ accountName='mycosmosaccount'
+ keyVaultKeyUri='https://<my-vault>.vault.azure.net/keys/<my-key>'
+
+ az cosmosdb create \
+ -n $accountName \
+ -g $resourceGroupName \
+ --key-uri $keyVaultKeyUri \
+ --assign-identity <identity-resource-id> \
+ --default-identity "UserAssignedIdentity=<identity-resource-id>"
+ ```
+
+## Use CMK with continuous backup
+
+You can create a continuous backup account by using the Azure CLI or an Azure Resource Manager template.
+
+Currently, only user-assigned managed identity is supported for creating continuous backup accounts.
+
+### To create a continuous backup account by using the Azure CLI
+
+```azurecli
+resourceGroupName='myResourceGroup'
+accountName='mycosmosaccount'
+keyVaultKeyUri='https://<my-vault>.vault.azure.net/keys/<my-key>'
+
+az cosmosdb create \
+ -n $accountName \
+ -g $resourceGroupName \
+ --key-uri $keyVaultKeyUri \
+ --locations regionName=<Location> \
+ --assign-identity <identity-resource-id> \
+ --default-identity "UserAssignedIdentity=<identity-resource-id>" \
+ --backup-policy-type Continuous
+```
+
+### To create a continuous backup account by using an Azure Resource Manager template
+
+When you create a new Azure Cosmos account through an Azure Resource Manager template:
+
+- Pass the URI of the Azure Key Vault key that you copied earlier under the **keyVaultKeyUri** property in the **properties** object.
+- Use **2021-11-15** or later as the API version.
+
+> [!IMPORTANT]
+> You must set the `locations` property explicitly for the account to be successfully created with customer-managed keys as shown in the preceding example.
+
+```json
+ {
+ "type": "Microsoft.DocumentDB/databaseAccounts",
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "<identity-resource-id>": {}
+ }
+ },
+ // ...
+ "properties": {
+ "backupPolicy": { "type": "Continuous" },
+ "defaultIdentity": "UserAssignedIdentity=<identity-resource-id>",
+ "keyVaultKeyUri": "<key-vault-key-uri>"
+ // ...
+ }
+}
+```
+
+## Customer-managed keys and double encryption
+
+The data you store in your Azure Cosmos DB account when using customer-managed keys ends up being encrypted twice:
+
+- Once through the default encryption performed with Microsoft-managed keys.
+- Once through the extra encryption performed with customer-managed keys.
+
+Double encryption only applies to the main Azure Cosmos DB transactional storage. Some features involve internal replication of your data to a second tier of storage where double encryption isn't provided, even with customer-managed keys. These features include:
+
+- [Azure Synapse Link](./synapse-link.md)
+- [Continuous backups with point-in-time restore](./continuous-backup-restore-introduction.md)
+
+## Key rotation
+
+Rotating the customer-managed key used by your Azure Cosmos account can be done in two ways.
+
+- Create a new version of the key currently used from Azure Key Vault:
+
+ :::image type="content" source="./media/how-to-setup-cmk/portal-akv-rot.png" alt-text="Screenshot of the New Version option in the Versions page of the Azure portal.":::
+
+- Swap the key currently used with a different one by updating the key URI on your account. From the Azure portal, go to your Azure Cosmos account and select **Data Encryption** from the left menu:
+
+ :::image type="content" source="./media/how-to-setup-cmk/portal-data-encryption.png" alt-text="Screenshot of the Data Encryption menu option in the Azure portal.":::
+
+ Then, replace the **Key URI** with the new key you want to use and select **Save**:
+
+ :::image type="content" source="./media/how-to-setup-cmk/portal-key-swap.png" alt-text="Screenshot of the Save option in the Key page of the Azure portal.":::
+
+ Here's how to achieve the same result in PowerShell:
+
+ ```powershell
+ $resourceGroupName = "myResourceGroup"
+ $accountName = "mycosmosaccount"
+ $newKeyUri = "https://<my-vault>.vault.azure.net/keys/<my-new-key>"
+
+ $account = Get-AzResource -ResourceGroupName $resourceGroupName -Name $accountName `
+ -ResourceType "Microsoft.DocumentDb/databaseAccounts"
+
+ $account.Properties.keyVaultKeyUri = $newKeyUri
+
+ $account | Set-AzResource -Force
+ ```
+
+The previous key or key version can be disabled after the [Azure Key Vault audit logs](../key-vault/general/logging.md) no longer show activity from Azure Cosmos DB on that key or key version. Azure Cosmos DB should stop using the previous key or key version within 24 hours of the key rotation.
+
+## Error handling
+
+If there are any errors with customer-managed keys in Azure Cosmos DB, Azure Cosmos DB returns the error details along with an HTTP substatus code in the response. You can use the HTTP substatus code to debug the root cause of the issue. See the [HTTP Status Codes for Azure Cosmos DB](/rest/api/cosmos-db/http-status-codes-for-cosmosdb) article to get the list of supported HTTP substatus codes.
+
+## Frequently asked questions
+
+### Are there more charges to enable customer-managed keys?
+
+No, there's no charge to enable this feature.
+
+### How do customer-managed keys influence capacity planning?
+
+When you use customer-managed keys, your database operations consume more [Request Units](./request-units.md) to cover the extra processing required to encrypt and decrypt your data. The extra RU consumption may lead to slightly higher utilization of your provisioned capacity. Use the table below for guidance:
+
+| Operation type | Request Unit increase |
+|||
+| Point-reads (fetching items by their ID) | + 5% per operation |
+| Any write operation | + 6% per operation <br/> Approximately + 0.06 RU per indexed property |
+| Queries, reading change feed, or conflict feed | + 15% per operation |
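+
+For example, with these multipliers, a write operation that would otherwise cost 10 RU and touches 20 indexed properties would consume roughly 10 × 1.06 + 20 × 0.06 ≈ 11.8 RU with customer-managed keys enabled. This is an illustrative estimate based on the percentages above; actual charges depend on your data and workload.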
+
+### What data gets encrypted with the customer-managed keys?
+
+All the data stored in your Azure Cosmos account is encrypted with the customer-managed keys, except for the following metadata:
+
+- The names of your Azure Cosmos DB [accounts, databases, and containers](./account-databases-containers-items.md#elements-in-an-azure-cosmos-db-account)
+
+- The names of your [stored procedures](./stored-procedures-triggers-udfs.md)
+
+- The property paths declared in your [indexing policies](./index-policy.md)
+
+- The values of your containers' [partition keys](./partitioning-overview.md)
+
+### Are customer-managed keys supported for existing Azure Cosmos accounts?
+
+This feature is currently available only for new accounts.
+
+### Is it possible to use customer-managed keys with the Azure Cosmos DB [analytical store](analytical-store-introduction.md)?
+
+Yes, Azure Synapse Link only supports configuring customer-managed keys using your Azure Cosmos DB account's managed identity. You must [use your Azure Cosmos DB account's managed identity](#using-managed-identity) in your Azure Key Vault access policy before [enabling Azure Synapse Link](configure-synapse-link.md#enable-synapse-link) on your account. For a how-to guide on how to enable managed identity and use it in an access policy, see [access Azure Key Vault from Azure Cosmos DB using a managed identity](access-key-vault-managed-identity.md).
+
+### Is there a plan to support finer granularity than account-level keys?
+
+Not currently, but container-level keys are being considered.
+
+### How can I tell if customer-managed keys are enabled on my Azure Cosmos account?
+
+From the Azure portal, go to your Azure Cosmos account and watch for the **Data Encryption** entry in the left menu; if this entry exists, customer-managed keys are enabled on your account:
++
+You can also programmatically fetch the details of your Azure Cosmos account and look for the presence of the `keyVaultKeyUri` property. See above for ways to do that [in PowerShell](#using-powershell) and [using the Azure CLI](#using-azure-cli).
+
+### How do customer-managed keys affect periodic backups?
+
+Azure Cosmos DB takes [regular and automatic backups](./online-backup-and-restore.md) of the data stored in your account. This operation backs up the encrypted data.
+
+The following conditions are necessary to successfully restore a periodic backup:
+- The encryption key that you used at the time of the backup is required and must be available in Azure Key Vault. This condition requires that no revocation was made and the version of the key that was used at the time of the backup is still enabled.
+- If you [used a system-assigned managed identity in the access policy](#to-use-a-system-assigned-managed-identity), temporarily [grant access to the Azure Cosmos DB first-party identity](#add-access-policy) before restoring your data. This requirement exists because a system-assigned managed identity is specific to an account and can't be reused in the target account. Once the data is fully restored to the target account, you can set your desired identity configuration and remove the first-party identity from the Key Vault access policy.
+
+### How do customer-managed keys affect continuous backups?
+
+Azure Cosmos DB gives you the option to configure [continuous backups](./continuous-backup-restore-introduction.md) on your account. With continuous backups, you can restore your data to any point in time within the past 30 days. To use continuous backups on an account where customer-managed keys are enabled, you must [use a user-assigned managed identity](#to-use-a-user-assigned-managed-identity) in the Key Vault access policy. Azure Cosmos DB first-party identities or system-assigned managed identities aren't currently supported on accounts using continuous backups.
+
+The following conditions are necessary to successfully perform a point-in-time restore:
+- The encryption key that you used at the time of the backup is required and must be available in Azure Key Vault. This requirement means that no revocation was made and the version of the key that was used at the time of the backup is still enabled.
+- You must ensure that the user-assigned managed identity originally used on the source account is still declared in the Key Vault access policy.
+
+> [!IMPORTANT]
+> If you revoke the encryption key before deleting your account, your account's backup may miss the data written up to 1 hour before the revocation was made.
+
+### How do I revoke an encryption key?
+
+Key revocation is done by disabling the latest version of the key:
++
+Alternatively, to revoke all keys from an Azure Key Vault instance, you can delete the access policy granted to the Azure Cosmos DB principal:
++
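+For reference, both revocation approaches can also be performed from the Azure CLI, roughly as follows (a sketch; names and the key version are placeholders, and the application ID is the one for public Azure regions noted earlier):
+
+```azurecli
+# Disable the latest version of the key
+az keyvault key set-attributes \
+    --vault-name <key-vault-name> \
+    --name <key-name> \
+    --version <key-version> \
+    --enabled false
+
+# Or remove the access policy granted to the Azure Cosmos DB principal
+az keyvault delete-policy \
+    --name <key-vault-name> \
+    --spn a232010e-820c-4083-83bb-3ace5fc29d0b
+```
+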
+### What operations are available after a customer-managed key is revoked?
+
+The only operation possible when the encryption key has been revoked is account deletion.
+
+## Next steps
+
+- Learn more about [data encryption in Azure Cosmos DB](./database-encryption-at-rest.md).
+- Get an overview of [secure access to data in Cosmos DB](secure-access-to-data.md).
cosmos-db Monitor Server Side Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-server-side-latency.md
Previously updated : 09/16/2021 Last updated : 09/23/2022 # How to monitor the server-side latency for operations in an Azure Cosmos DB container or account+ [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
-Azure Monitor for Azure Cosmos DB provides a metrics view to monitor your account and create dashboards. The Azure Cosmos DB metrics are collected by default, this feature does not require you to enable or configure anything explicitly. The server-side latency metric direct and server-side latency gateway metrics are used to view the server-side latency of an operation in two different connection modes. Use server-side latency gateway metric if your request operation is in gateway connectivity mode. Use server-side latency direct metric if your request operation is in direct connectivity mode. Azure Cosmos DB provides SLA of less than 10 ms for point read/write operations with direct connectivity. For point read and write operations, the SLAs are calculated as detailed in the [SLA document](https://azure.microsoft.com/support/legal/sl) article.
+Azure Monitor for Azure Cosmos DB provides a metrics view to monitor your account and create dashboards. The Azure Cosmos DB metrics are collected by default, this feature doesn't require you to enable or configure anything explicitly. The server-side latency metric direct and server-side latency gateway metrics are used to view the server-side latency of an operation in two different connection modes. Use server-side latency gateway metric if your request operation is in gateway connectivity mode. Use server-side latency direct metric if your request operation is in direct connectivity mode. Azure Cosmos DB provides SLA of less than 10 ms for point read/write operations with direct connectivity. For point read and point write operations, the SLAs are calculated as detailed in the [SLA document](https://azure.microsoft.com/support/legal/sl) article.
The following table indicates which API supports server-side latency metrics (Direct versus Gateway):
The following table indicates which API supports server-side latency metrics (Di
You can monitor server-side latency metrics if you see unusually high latency for point operation such as:
-* A GET or a SET operation with partition key and ID
+* A GET or a SET operation with partition key and ID
* A read or write operation or * A query
-You can look up the diagnostic log to see the size of the data returned. If you see a sustained high latency for query operations, you should look up the diagnostic log for higher [throughput or RU/s](cosmosdb-monitor-logs-basic-queries.md) used. Server side latency shows the amount of time spent on the backend infrastructure before the data was returned to the client. It is important to look at this metric to rule out any backend latency issues.
+You can look up the diagnostic log to see the size of the data returned. If you see a sustained high latency for query operations, you should look up the diagnostic log for higher [throughput or RU/s](cosmosdb-monitor-logs-basic-queries.md) used. Server side latency shows the amount of time spent on the backend infrastructure before the data was returned to the client. It's important to look at this metric to rule out any backend latency issues.
## View the server-side latency metrics 1. Sign in to the [Azure portal](https://portal.azure.com/).
-
+ 1. Select **Monitor** from the left-hand navigation bar and select **Metrics**. :::image type="content" source="./media/monitor-server-side-latency/monitor-metrics-blade.png" alt-text="Metrics pane in Azure Monitor" border="true"::: 1. From the **Metrics** pane > **Select a resource** > choose the required **subscription**, and **resource group**. For the **Resource type**, select **Azure Cosmos DB accounts**, choose one of your existing Azure Cosmos accounts, and select **Apply**.
-
+ :::image type="content" source="./media/monitor-account-key-updates/select-account-scope.png" alt-text="Select the account scope to view metrics" border="true":::
-1. Next select the **Server Side Latency Gateway** metric from the list of available metrics, if your operation is in gateway connectivity mode. Select the **Server Side Latency Direct** metric, if your operation is in direct connectivity mode. To learn in detail about all the available metrics in this list, see the [Metrics by category](monitor-cosmos-db-reference.md) article. In this example, let's select **Server Side Latency Gateway** and **Avg** as the aggregation value. In addition to these details, you can also select the **Time range** and **Time granularity** of the metrics. At max, you can view metrics for the past 30 days. After you apply the filter, a chart is displayed based on your filter. You can see the server-side latency in gateway connectivity mode per 5 minute for the selected period.
+1. Next select the **Server Side Latency Gateway** metric from the list of available metrics, if your operation is in gateway connectivity mode. Select the **Server Side Latency Direct** metric, if your operation is in direct connectivity mode. To learn in detail about all the available metrics in this list, see the [Metrics by category](monitor-cosmos-db-reference.md) article. In this example, let's select **Server Side Latency Gateway** and **Avg** as the aggregation value. In addition to these details, you can also select the **Time range** and **Time granularity** of the metrics. At max, you can view metrics for the past 30 days. After you apply the filter, a chart is displayed based on your filter. You can see the server-side latency in gateway connectivity mode per 5 minutes for the selected period.
:::image type="content" source="./media/monitor-server-side-latency/server-side-latency-gateway-metric.png" alt-text="Choose the Server-Side Latency Gateway metric from the Azure portal" border="true" lightbox="./media/monitor-server-side-latency/server-side-latency-gateway-metric.png"::: ## Filters for server-side latency
-You can also filter metrics and get the charts displayed by a specific **CollectionName**, **DatabaseName**, **OperationType**, **Region**, and **PublicAPIType**.
+You can also filter metrics and get the charts displayed by a specific **CollectionName**, **DatabaseName**, **OperationType**, **Region**, and **PublicAPIType**.
-To filter the metrics, select **Add filter** and choose the required property such as **PublicAPIType** and select the value **Sql**. Select **Apply splitting** for **OperationType**. The graph then displays the server-side latency for different operations in gateway connection mode during the selected period. The operations executed via Stored procedure are not logged so they are not available under the OperationType metric.
+To filter the metrics, select **Add filter** and choose the required property such as **PublicAPIType** and select the value **Sql**. Select **Apply splitting** for **OperationType**. The graph then displays the server-side latency for different operations in gateway connection mode during the selected period. The operations executed via Stored procedure aren't logged so they aren't available under the OperationType metric.
The **Server Side Latency Gateway** metrics for each operation are displayed as shown in the following image:
The **Server Side Latency Gateway** metrics for each operation are displayed as
You can also group the metrics by using the **Apply splitting** option.
+> [!NOTE]
+> Requests coming into Azure Cosmos DB don't always target a container. For example, you could create a database inside a globally distributed account, and the request is still recorded for the server-side latency metric. The request is recorded because it takes time to create a database resource, even though no container is targeted. If you see that the value of `CollectionName` is `<empty>`, the target isn't a container, but another resource in Azure Cosmos DB.
+>
+> As a workaround, you can proactively filter your metrics to a specific container (CollectionName) to exclude requests that aren't specific to the container that's the subject of your query.
+ ## Next steps * Monitor Azure Cosmos DB data by using [diagnostic settings](cosmosdb-monitor-resource-logs.md) in Azure.
data-factory Connector Sap Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-change-data-capture.md
This SAP CDC connector is supported for the following capabilities:
| Supported capabilities|IR | || --|
-|[Mapping data flow](concepts-data-flow-overview.md) (source/-)|&#9313;|
+|[Mapping data flow](concepts-data-flow-overview.md) (source/-)|&#9312;, &#9313;|
<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
The SAP CDC connector supports basic authentication or Secure Network Communicat
To use this SAP CDC connector, you need to: -- Set up a self-hosted integration runtime (version 3.17 or later). For more information, see [Create and configure a self-hosted integration runtime](create-self-hosted-integration-runtime.md).
+- Set up a self-hosted integration runtime. The most recent version can be found in [Microsoft Download Center](https://www.microsoft.com/en-us/download/details.aspx?id=39717). For more information, see [Create and configure a self-hosted integration runtime](create-self-hosted-integration-runtime.md).
- Download the 64-bit [SAP Connector for Microsoft .NET 3.0](https://support.sap.com/en/product/connectors/msnet.html) from SAP's website, and install it on the self-hosted integration runtime machine. During installation, make sure you select the **Install Assemblies to GAC** option in the **Optional setup steps** window.
Follow the steps described in [Prepare the SAP CDC linked service](sap-change-da
## Dataset properties
-To prepare an SAP CDC dataset, follow [Prepare the SAP CDC source dataset](sap-change-data-capture-prepare-linked-service-source-dataset.md#set-up-the-source-dataset)
+To prepare an SAP CDC dataset, follow [Prepare the SAP CDC source dataset](sap-change-data-capture-prepare-linked-service-source-dataset.md#set-up-the-source-dataset).
## Transform data with the SAP CDC connector
To create a mapping data flow using the SAP CDC connector as a source, complete
:::image type="content" source="media/sap-change-data-capture-solution/sap-change-data-capture-mapping-data-flow-key-columns.png" alt-text="Screenshot of the key columns selection in source options of mapping data flow source.":::
-1. For details on the tabs **Projection**, **Optimize** and **Inspect**, please follow [mapping data flow](concepts-data-flow-overview.md).
+1. For the tabs **Projection**, **Optimize** and **Inspect**, please follow [mapping data flow](concepts-data-flow-overview.md).
+
+1. If **Run mode** is set to **Full on every run**, the **Optimize** tab offers additional selection and partitioning options. Each partition condition (the screenshot below shows an example with two conditions) triggers a separate extraction process in the connected SAP system. Up to three of these extraction processes are executed in parallel.
+
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-change-data-capture-mapping-data-flow-optimize-partition.png" alt-text="Screenshot of the partitioning options in optimize of mapping data flow source.":::
+
data-factory Continuous Integration Delivery Manual Promotion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-manual-promotion.md
Previously updated : 09/24/2021 Last updated : 09/20/2022
Use the steps below to promote a Resource Manager template to each environment f
:::image type="content" source="media/continuous-integration-delivery/custom-deployment-build-your-own-template.png" alt-text="Build your own template":::
-1. Select **Load file**, and then select the generated Resource Manager template. This is the **arm_template.json** file located in the .zip file exported in step 1.
+1. Select **Load file**, and then select the generated Resource Manager template. This is the **ARMTemplateForFactory.json** file located in the .zip file exported in step 1.
:::image type="content" source="media/continuous-integration-delivery/custom-deployment-edit-template.png" alt-text="Edit template":::
data-factory Industry Sap Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/industry-sap-connectors.md
The following table shows the SAP connectors and in which activity scenarios the
| :-- | :-- | :-- | :-- | | |[SAP Business Warehouse Open Hub](connector-sap-business-warehouse-open-hub.md) | ✓/− | | ✓ | SAP Business Warehouse version 7.01 or higher. SAP BW/4HANA isn't supported by this connector. | |[SAP Business Warehouse via MDX](connector-sap-business-warehouse.md)| ✓/− | | ✓ | SAP Business Warehouse version 7.x. |
+| [SAP CDC (Preview)](connector-sap-change-data-capture.md) | | ✓/− | | Can connect to all SAP releases supporting SAP Operational Data Provisioning (ODP). This includes most SAP ECC and SAP BW releases, as well as SAP S/4HANA, SAP BW/4HANA, and SAP Landscape Transformation Replication Server (SLT). For details, see [Overview and architecture of the SAP CDC capabilities (preview)](sap-change-data-capture-introduction-architecture.md). |
| [SAP Cloud for Customer (C4C)](connector-sap-cloud-for-customer.md) | ✓/✓ | | ✓ | SAP Cloud for Customer including the SAP Cloud for Sales, SAP Cloud for Service, and SAP Cloud for Social Engagement solutions. | | [SAP ECC](connector-sap-ecc.md) | ✓/− | | ✓ | SAP ECC on SAP NetWeaver version 7.0 and later. | | [SAP HANA](connector-sap-hana.md) | ✓/✓ | | ✓ | Any version of SAP HANA database |
data-factory Sap Change Data Capture Prepare Linked Service Source Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-prepare-linked-service-source-dataset.md
To set up an SAP CDC (preview) linked service:
1. In Azure Data Factory Studio, go to the Author hub of your data factory. In **Factory Resources**, under **Datasets** > **Dataset Actions**, select **New dataset**.
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-new-pipeline.png" alt-text="Screenshot that shows creating a new pipeline in the Data Factory Studio Author hub.":::
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-new-dataset.png" alt-text="Screenshot that shows creating a new pipeline in the Data Factory Studio Author hub.":::
1. In **New dataset**, search for **SAP**. Select **SAP CDC (Preview)**, and then select **Continue**.
data-factory Sap Change Data Capture Prerequisites Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-prerequisites-configuration.md
ODP offers various data extraction contexts or *source object types*. Although m
- [2232584 - To release SAP extractors for ODP API](https://launchpad.support.sap.com/#/notes/2232584) for a list of all SAP-delivered DataSources (more than 7,400) that have been released
-### Set up the SAP replication server
+### Set up the SAP Landscape Transformation Replication Server
SAP Landscape Transformation Replication Server (SLT) is a database trigger-enabled CDC solution that can replicate SAP application tables and simple views in near real time. SLT replicates from SAP source systems to various targets, including the operational delta queue (ODQ). You can use SLT as a proxy in data extraction ODP. You can install SLT on an SAP source system as an SAP Data Migration Server (DMIS) add-on or use it on a standalone replication server. To use SLT as a proxy, complete the following steps:
data-factory Sap Change Data Capture Shir Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-shir-preparation.md
On the computer running your self-hosted integration runtime, edit *C:\Windows\S
```ini # SAP ECC
-52.149.66.239 sapids01
+xxx.xxx.xxx.xxx sapecc01
# SAP BW
-20.190.60.250 sapbwx01
+yyy.yyy.yyy.yyy sapbw01
# SAP SLT
-20.56.211.31 sapnwx01
+zzz.zzz.zzz.zzz sapnw01
``` ## Next steps
digital-twins Tutorial End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/tutorial-end-to-end.md
description: Follow this tutorial to learn how to build out an end-to-end Azure Digital Twins solution that's driven by device data. Previously updated : 06/21/2022 Last updated : 09/26/2022
To publish the function app to Azure, you'll need to create a storage account, t
This command publishes the project to the *digital-twins-samples-main\AdtSampleApp\SampleFunctionsApp\bin\Release\netcoreapp3.1\publish* directory.
- 1. Create a zip of the published files that are located in the *digital-twins-samples-main\AdtSampleApp\SampleFunctionsApp\bin\Release\netcoreapp3.1\publish* directory. Name the zipped folder *publish.zip*.
+ 1. Using your preferred method, create a zip of the published files that are located in the *digital-twins-samples-main\AdtSampleApp\SampleFunctionsApp\bin\Release\netcoreapp3.1\publish* directory. Name the zipped folder *publish.zip*.
>[!TIP] >If you're using PowerShell, you can create the zip by copying the full path to that *\publish* directory and pasting it into the following command:
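> A minimal sketch, assuming PowerShell's built-in `Compress-Archive` cmdlet (the path is a placeholder for the full *\publish* directory path):
>
> ```powershell
> Compress-Archive -Path <full-publish-directory-path>\* -DestinationPath .\publish.zip
> ```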
There are two settings that need to be set for the function app to access your A
The first setting gives the function app the **Azure Digital Twins Data Owner** role in the Azure Digital Twins instance. This role is required for any user or function that wants to perform many data plane activities on the instance. You can read more about security and role assignments in [Security for Azure Digital Twins solutions](concepts-security.md).
-1. Use the following command to see the details of the system-managed identity for the function. Take note of the **principalId** field in the output.
+1. Use the following command to create a system-managed identity for the function. The output will display details of the identity that's been created. Take note of the **principalId** field in the output to use in the next step.
```azurecli-interactive
- az functionapp identity show --resource-group <your-resource-group> --name <your-function-app-name>
+ az functionapp identity assign --resource-group <your-resource-group> --name <your-function-app-name>
```
- >[!NOTE]
- > If the result is empty instead of showing details of an identity, create a new system-managed identity for the function using this command:
- >
- >```azurecli-interactive
- >az functionapp identity assign --resource-group <your-resource-group> --name <your-function-app-name>
- >```
- >
- > The output will then display details of the identity, including the **principalId** value required for the next step.
- 1. Use the **principalId** value in the following command to assign the function app's identity to the **Azure Digital Twins Data Owner** role for your Azure Digital Twins instance. ```azurecli-interactive
deviceConnectionString = <your-device-connection-string>
Save the file.
-Now, to see the results of the data simulation that you've set up, navigate to *digital-twins-samples-main\DeviceSimulator\DeviceSimulator* in a local console window.
+Now, to see the results of the data simulation that you've set up, open a new local console window and navigate to *digital-twins-samples-main\DeviceSimulator\DeviceSimulator*.
>[!NOTE] > You should now have two open console windows: one that's open to the *DeviceSimulator\DeviceSimulator* folder, and one from earlier that's still open to the *AdtSampleApp\SampleClientApp* folder.
event-hubs Event Hubs Dedicated Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-dedicated-overview.md
The Event Hubs dedicated offering is billed at a fixed monthly price, with a **m
For more information about quotas and limits, see [Event Hubs quotas and limits](event-hubs-quotas.md)
+## High availability with Azure Availability Zones
+Event Hubs dedicated clusters offer [availability zones](../availability-zones/az-overview.md#availability-zones) support, which lets you run event streaming workloads in physically separate locations within each Azure region so that they're tolerant to local failures.
+
+> [!IMPORTANT]
+> Event Hubs dedicated clusters require at least 8 capacity units (CUs) to enable availability zones. Clusters with self-serve scaling don't support availability zones yet. Availability zone support is only available in [Azure regions with availability zones](https://learn.microsoft.com/azure/availability-zones/az-overview#azure-regions-with-availability-zones).
++ ## How to onboard Event Hubs dedicated tier is generally available (GA). The self-serve experience to create an Event Hubs cluster through the [Azure portal](event-hubs-dedicated-cluster-create-portal.md) is currently in Preview. You can also request that the cluster be created for you by contacting the [Event Hubs team](mailto:askeventhubs@microsoft.com).
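+
+If you prefer the command line, here's a minimal sketch of creating a dedicated cluster (assuming the `az eventhubs cluster create` command and its `--capacity` parameter are available in your Azure CLI version; all names are placeholders):
+
+```azurecli
+# Create a dedicated Event Hubs cluster with 8 capacity units, the minimum for availability zone support.
+az eventhubs cluster create --resource-group <your-resource-group> --name <your-cluster> --location <region> --capacity 8
+```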
event-hubs Event Hubs Premium Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-premium-overview.md
In addition to these storage-related features and all capabilities and protocol
> [!NOTE] > Event Hubs Premium supports TLS 1.2 or greater.
+You can purchase 1, 2, 4, 8 and 16 processing units for each namespace. As the premium tier is a capacity-based offering, the achievable throughput isn't set by a throttle as it is in the standard tier, but depends on the work you ask Event Hubs to do, similar to the dedicated tier. The effective ingest and stream throughput per PU will depend on various factors, including:
+
+* Number of producers and consumers
+* Payload size
+* Partition count
+* Egress request rate
+* Usage of Event Hubs Capture, Schema Registry, and other advanced features
+
+For more information, see [comparison between Event Hubs SKUs](event-hubs-quotas.md).
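+
+A minimal Azure CLI sketch of creating a premium namespace with a chosen number of PUs (assuming the `az eventhubs namespace create` command in a current Azure CLI; the resource group, namespace, and region names are placeholders):
+
+```azurecli
+# Create a premium Event Hubs namespace with 2 processing units (placeholder names).
+az eventhubs namespace create --resource-group <your-resource-group> --name <your-namespace> --location <region> --sku Premium --capacity 2
+```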
+ ## Why premium? The premium tier offers three compelling benefits for customers who require better isolation in a multitenant environment with low latency and high throughput data ingestion needs.
It implements a *cluster in cluster* model in its multitenant clusters to provid
### Cost savings and scalability As the premium tier is a multitenant offering, it can dynamically scale more flexibly and very quickly. Capacity is allocated in processing units (PUs) that allocate isolated pods of CPU/memory inside the cluster. The number of those pods can be scaled up/down per namespace. Therefore, the premium tier is a low-cost option for messaging scenarios with the overall throughput range that is less than 120 MB/s but higher than what you can achieve with the standard SKU.
-## Premium vs. dedicated tiers
-In comparison to the dedicated offering, the premium tier provides the following benefits:
--- Isolation inside a very large multi-tenant environment that can shift resources quickly-- Scale far more elastically and quicker-- PUs can be dynamically adjusted-
-Therefore, the premium tier is often a more cost effective option for event streaming workloads up to 160 MB/sec (per namespace), especially with changing loads throughout the day or week, when compared to the dedicated tier.
-
-> [!NOTE]
-> For the extra robustness gained by **availability-zone** support, the minimal deployment scale for the dedicated tier is **8 capacity units (CU)**, but you'll have availability zone support in the premium tier from the first PU in all availability zone regions.
-
-You can purchase 1, 2, 4, 8 and 16 processing units for each namespace. As the premium tier is a capacity-based offering, the achievable throughput isn't set by a throttle as it is in the standard tier, but depends on the work you ask Event Hubs to do, similar to the dedicated tier. The effective ingest and stream throughput per PU will depend on various factors, including:
-
-* Number of producers and consumers
-* Payload size
-* Partition count
-* Egress request rate
-* Usage of Event Hubs Capture, Schema Registry, and other advanced features
-
-For more information, see [comparison between Event Hubs SKUs](event-hubs-quotas.md).
- ## Encryption of events Azure Event Hubs provides encryption of data at rest with Azure Storage Service Encryption (Azure SSE). The Event Hubs service uses Azure Storage to store the data. All the data that's stored with Azure Storage is encrypted using Microsoft-managed keys. If you use your own key (also referred to as Bring Your Own Key (BYOK) or customer-managed key), the data is still encrypted using the Microsoft-managed key, but in addition the Microsoft-managed key will be encrypted using the customer-managed key. This feature enables you to create, rotate, disable, and revoke access to customer-managed keys that are used for encrypting Microsoft-managed keys. Enabling the BYOK feature is a one time setup process on your namespace. For more information, see [Configure customer-managed keys for encrypting Azure Event Hubs data at rest](configure-customer-managed-key.md).
Azure Event Hubs provides encryption of data at rest with Azure Storage Service
The premium tier offers all the features of the standard plan, but with better performance, isolation and more generous quotas. For more quotas and limits, see [Event Hubs quotas and limits](event-hubs-quotas.md)
+## High availability with Azure Availability Zones
+Event Hubs premium offers [availability zones](../availability-zones/az-overview.md#availability-zones) support at no extra cost. With availability zones, you can run event streaming workloads in physically separate locations within each Azure region so that they're tolerant to local failures.
+
+> [!IMPORTANT]
+> Availability zone support is only available in [Azure regions with availability zones](https://learn.microsoft.com/azure/availability-zones/az-overview#azure-regions-with-availability-zones).
++
+## Premium vs. dedicated tiers
+In comparison to the dedicated offering, the premium tier provides the following benefits:
+
+- Isolation inside a large multi-tenant environment that can shift resources quickly
+- Scale far more elastically and quickly
+- PUs can be dynamically adjusted
+
+Therefore, the premium tier is often a more cost effective option for event streaming workloads up to 160 MB/sec (per namespace), especially with changing loads throughout the day or week, when compared to the dedicated tier.
+
+> [!NOTE]
+> For the extra robustness gained by **availability-zone** support, the minimal deployment scale for the dedicated tier is **8 capacity units (CU)**, but you'll have availability zone support in the premium tier from the first PU in all availability zone regions.
+ ## Pricing The Premium offering is billed by [Processing Units (PUs)](event-hubs-scalability.md#processing-units) which correspond to a share of isolated resources (CPU, Memory, and Storage) in the underlying infrastructure.
event-hubs Event Hubs Quickstart Kafka Enabled Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-quickstart-kafka-enabled-event-hubs.md
Title: 'Quickstart: Data streaming with Azure Event Hubs using the Kafka protocol' description: 'Quickstart: This article provides information on how to stream into Azure Event Hubs using the Kafka protocol and APIs.' Previously updated : 05/10/2021 Last updated : 09/26/2022 # Quickstart: Data streaming with Event Hubs using the Kafka protocol
-This quickstart shows how to stream into Event Hubs without changing your protocol clients or running your own clusters. You learn how to use your producers and consumers to talk to Event Hubs with just a configuration change
-in your applications.
+
+This quickstart shows how to stream into Event Hubs without changing your protocol clients or running your own clusters. You learn how to use your producers and consumers to talk to Event Hubs with just a configuration change in your applications.
> [!NOTE] > This sample is available on [GitHub](https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/quickstart/java)
in your applications.
To complete this quickstart, make sure you have the following prerequisites: * Read through the [Event Hubs for Apache Kafka](event-hubs-for-kafka-ecosystem-overview.md) article.
-* An Azure subscription. If you do not have one, create a [free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
+* An Azure subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
* [Java Development Kit (JDK) 1.7+](/azure/developer/java/fundamentals/java-support-on-azure). * [Download](https://maven.apache.org/download.cgi) and [install](https://maven.apache.org/install.html) a Maven binary archive. * [Git](https://www.git-scm.com/)-
+* To run this quickstart using managed identity, you need to run it on an Azure virtual machine.
## Create an Event Hubs namespace
-When you create an Event Hubs namespace, the Kafka endpoint for the namespace is automatically enabled. You can stream events from your applications that use the Kafka protocol into event hubs. Follow step-by-step instructions in the [Create an event hub using Azure portal](event-hubs-create.md) to create an Event Hubs namespace. If you are using a dedicated cluster, see [Create a namespace and event hub in a dedicated cluster](event-hubs-dedicated-cluster-create-portal.md#create-a-namespace-and-event-hub-within-a-cluster).
+
+When you create an Event Hubs namespace, the Kafka endpoint for the namespace is automatically enabled. You can stream events from your applications that use the Kafka protocol into event hubs. Follow step-by-step instructions in the [Create an event hub using Azure portal](event-hubs-create.md) to create an Event Hubs namespace. If you're using a dedicated cluster, see [Create a namespace and event hub in a dedicated cluster](event-hubs-dedicated-cluster-create-portal.md#create-a-namespace-and-event-hub-within-a-cluster).
> [!NOTE] > Event Hubs for Kafka isn't supported in the **basic** tier. ## Send and receive messages with Kafka in Event Hubs
+### [Passwordless (Recommended)](#tab/passwordless)
+
+1. Managed identities for Azure resources provide Azure services with an automatically managed identity in Azure Active Directory. You can use this identity to authenticate to any service that supports Azure AD authentication, without having credentials in your code.
+
+ Azure Event Hubs supports using Azure Active Directory (Azure AD) to authorize requests to Event Hubs resources. With Azure AD, you can use Azure role-based access control (Azure RBAC) to grant permissions to a security principal, which may be a user, or an application service principal.
+
+ To use Managed Identity, you can create or configure a virtual machine using a system-assigned managed identity. For more information about configuring managed identity on a VM, see [Configure managed identities for Azure resources on a VM using the Azure portal](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#system-assigned-managed-identity).
+
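+    A minimal Azure CLI sketch of the same setup (assuming an existing VM; the resource group and VM names are placeholders):
+
+    ```azurecli
+    # Enable a system-assigned managed identity on an existing virtual machine (placeholder names).
+    az vm identity assign --resource-group <your-resource-group> --name <your-vm-name>
+    ```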
+1. In the virtual machine that you configure managed identity, clone the [Azure Event Hubs for Kafka repository](https://github.com/Azure/azure-event-hubs-for-kafka).
+
+1. Navigate to *azure-event-hubs-for-kafka/quickstart/java/producer*.
+
+1. Update the configuration details for the producer in *src/main/resources/producer.config* as follows:
+
    After you configure the virtual machine with a managed identity, you need to grant that identity access to the Event Hubs namespace. Follow these steps in the Azure portal (an equivalent Azure CLI command is sketched after the list).
+
+ * In the Azure portal, navigate to your Event Hubs namespace. Go to **Access Control (IAM)** in the left navigation.
+
    * Select **Add**, and then select **Add role assignment**.
+
    * In the **Role** tab, select **Azure Event Hubs Data Owner**, and then select **Next**.
+
+ * In the **Members** tab, select the **Managed Identity** radio button for the type to assign access to.
+
+ * Select the **Select members** link. In the **Managed Identity** dropdown, select **Virtual Machine**, then select your virtual machine's managed identity.
+
+ * Select **Review + Assign**.
+
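+    Alternatively, a minimal Azure CLI sketch that performs the same role assignment (assuming the VM already has a system-assigned managed identity; all names are placeholders):
+
+    ```azurecli
+    # Look up the principal ID of the VM's system-assigned managed identity (placeholder names).
+    principalId=$(az vm show --resource-group <your-resource-group> --name <your-vm-name> --query identity.principalId --output tsv)
+
+    # Look up the resource ID of the Event Hubs namespace.
+    namespaceId=$(az eventhubs namespace show --resource-group <your-resource-group> --name <your-namespace> --query id --output tsv)
+
+    # Grant the identity the Azure Event Hubs Data Owner role on the namespace.
+    az role assignment create --assignee $principalId --role "Azure Event Hubs Data Owner" --scope $namespaceId
+    ```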
+1. After you configure managed identity, you can update *src/main/resources/producer.config* as shown below.
+
+ ```xml
+ bootstrap.servers=NAMESPACENAME.servicebus.windows.net:9093
+ security.protocol=SASL_SSL
+ sasl.mechanism=OAUTHBEARER
+ sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;
+ sasl.login.callback.handler.class=CustomAuthenticateCallbackHandler;
+ ```
+
+ You can find the source code for the sample handler class CustomAuthenticateCallbackHandler on GitHub [here](https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/tutorials/oauth/java/appsecret/producer/src/main/java).
+
+1. Run the producer code and stream events into Event Hubs:
+
+ ```shell
+ mvn clean package
+ mvn exec:java -Dexec.mainClass="TestProducer"
+ ```
+
+1. Navigate to *azure-event-hubs-for-kafka/quickstart/java/consumer*.
+
+1. Update the configuration details for the consumer in *src/main/resources/consumer.config* as follows:
+
+1. Make sure you've configured managed identity as described in the earlier steps, and use the following consumer configuration.
+
+ ```xml
+ bootstrap.servers=NAMESPACENAME.servicebus.windows.net:9093
+ security.protocol=SASL_SSL
+ sasl.mechanism=OAUTHBEARER
+ sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;
+ sasl.login.callback.handler.class=CustomAuthenticateCallbackHandler;
+ ```
+
+ You can find the source code for the sample handler class CustomAuthenticateCallbackHandler on GitHub [here](https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/tutorials/oauth/java/appsecret/consumer/src/main/java).
+
+ You can find all the OAuth samples for Event Hubs for Kafka [here](https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/tutorials/oauth).
+
+1. Run the consumer code and process events from event hub using your Kafka clients:
+
    ```shell
+ mvn clean package
+ mvn exec:java -Dexec.mainClass="TestConsumer"
+ ```
+
    If your Event Hubs Kafka cluster has events, you'll now start receiving them from the consumer.
+
+### [Connection string](#tab/connection-string)
+ 1. Clone the [Azure Event Hubs for Kafka repository](https://github.com/Azure/azure-event-hubs-for-kafka).
-2. Navigate to `azure-event-hubs-for-kafka/quickstart/java/producer`.
-
-3. Update the configuration details for the producer in `src/main/resources/producer.config` as follows:
-
- **TLS/SSL:**
-
- ```xml
- bootstrap.servers=NAMESPACENAME.servicebus.windows.net:9093
- security.protocol=SASL_SSL
- sasl.mechanism=PLAIN
- sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="{YOUR.EVENTHUBS.CONNECTION.STRING}";
- ```
-
- > [!IMPORTANT]
- > Replace `{YOUR.EVENTHUBS.CONNECTION.STRING}` with the connection string for your Event Hubs namespace. For instructions on getting the connection string, see [Get an Event Hubs connection string](event-hubs-get-connection-string.md). Here's an example configuration: `sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=XXXXXXXXXXXXXXXX";`
-
- **OAuth:**
-
- ```xml
- bootstrap.servers=NAMESPACENAME.servicebus.windows.net:9093
- security.protocol=SASL_SSL
- sasl.mechanism=OAUTHBEARER
- sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;
- sasl.login.callback.handler.class=CustomAuthenticateCallbackHandler;
- ```
-
- You can find the source code for the sample handler class CustomAuthenticateCallbackHandler on GitHub [here](https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/tutorials/oauth/java/appsecret/producer/src/main/java).
-4. Run the producer code and stream events into Event Hubs:
-
- ```shell
- mvn clean package
- mvn exec:java -Dexec.mainClass="TestProducer"
- ```
-
-5. Navigate to `azure-event-hubs-for-kafka/quickstart/java/consumer`.
-
-6. Update the configuration details for the consumer in `src/main/resources/consumer.config` as follows:
-
- **TLS/SSL:**
-
- ```xml
- bootstrap.servers=NAMESPACENAME.servicebus.windows.net:9093
- security.protocol=SASL_SSL
- sasl.mechanism=PLAIN
- sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="{YOUR.EVENTHUBS.CONNECTION.STRING}";
- ```
-
- > [!IMPORTANT]
- > Replace `{YOUR.EVENTHUBS.CONNECTION.STRING}` with the connection string for your Event Hubs namespace. For instructions on getting the connection string, see [Get an Event Hubs connection string](event-hubs-get-connection-string.md). Here's an example configuration: `sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=XXXXXXXXXXXXXXXX";`
-
- **OAuth:**
-
- ```xml
- bootstrap.servers=NAMESPACENAME.servicebus.windows.net:9093
- security.protocol=SASL_SSL
- sasl.mechanism=OAUTHBEARER
- sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;
- sasl.login.callback.handler.class=CustomAuthenticateCallbackHandler;
- ```
-
- You can find the source code for the sample handler class CustomAuthenticateCallbackHandler on GitHub [here](https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/tutorials/oauth/java/appsecret/consumer/src/main/java).
-
- You can find all the OAuth samples for Event Hubs for Kafka [here](https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/tutorials/oauth).
-7. Run the consumer code and process events from event hub using your Kafka clients:
-
- ```java
- mvn clean package
- mvn exec:java -Dexec.mainClass="TestConsumer"
- ```
-
-If your Event Hubs Kafka cluster has events, you now start receiving them from the consumer.
+1. Navigate to *azure-event-hubs-for-kafka/quickstart/java/producer*.
+
+1. Update the configuration details for the producer in *src/main/resources/producer.config* as follows:
+
+ ```xml
+ bootstrap.servers=NAMESPACENAME.servicebus.windows.net:9093
+ security.protocol=SASL_SSL
+ sasl.mechanism=PLAIN
+ sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="{YOUR.EVENTHUBS.CONNECTION.STRING}";
+ ```
+
+ > [!IMPORTANT]
+ > Replace `{YOUR.EVENTHUBS.CONNECTION.STRING}` with the connection string for your Event Hubs namespace. For instructions on getting the connection string, see [Get an Event Hubs connection string](event-hubs-get-connection-string.md). Here's an example configuration: `sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=XXXXXXXXXXXXXXXX";`
+
+1. Run the producer code and stream events into Event Hubs:
+
+ ```shell
+ mvn clean package
+ mvn exec:java -Dexec.mainClass="TestProducer"
+ ```
+
+1. Navigate to *azure-event-hubs-for-kafka/quickstart/java/consumer*.
+
+1. Update the configuration details for the consumer in *src/main/resources/consumer.config* as follows:
+
+ ```xml
+ bootstrap.servers=NAMESPACENAME.servicebus.windows.net:9093
+ security.protocol=SASL_SSL
+ sasl.mechanism=PLAIN
+ sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="{YOUR.EVENTHUBS.CONNECTION.STRING}";
+ ```
+
+ > [!IMPORTANT]
+ > Replace `{YOUR.EVENTHUBS.CONNECTION.STRING}` with the connection string for your Event Hubs namespace. For instructions on getting the connection string, see [Get an Event Hubs connection string](event-hubs-get-connection-string.md). Here's an example configuration: `sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=XXXXXXXXXXXXXXXX";`
+
+1. Run the consumer code and process events from event hub using your Kafka clients:
+
    ```shell
+ mvn clean package
+ mvn exec:java -Dexec.mainClass="TestConsumer"
+ ```
+
+If your Event Hubs Kafka cluster has events, you will now start receiving them from the consumer.
++ ## Next steps In this article, you learned how to stream into Event Hubs without changing your protocol clients or running your own clusters. To learn more, see [Apache Kafka developer guide for Azure Event Hubs](apache-kafka-developer-guide.md).
governance Policy Applicability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-applicability.md
# What is applicability in Azure Policy?
-When a policy definition is assigned to a scope, Azure Policy scans every resource in that scope to determine what should be considered for compliance evaluation. A resource will only be assessed for compliance if it is considered **applicable** to the given policy assignment.
+When a policy definition is assigned to a scope, Azure Policy determines which resources in that scope should be considered for compliance evaluation. A resource will only be assessed for compliance if it is considered **applicable** to the given policy assignment.
Applicability is determined by several factors: - **Conditions** in the `if` block of the [policy rule](../concepts/definition-structure.md#policy-rule).
governance Determine Non Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/determine-non-compliance.md
its corresponding explanation:
| NonModifiablePolicyAlias | NonModifiableAliasConflict: The alias '{alias}' is not modifiable in requests using API version '{apiVersion}'. This error happens when a request using an API version where the alias does not support the 'modify' effect or only supports the 'modify' effect with a different token type. | | AppendPoliciesNotApplicable | AppendPoliciesUnableToAppend: The aliases: '{ aliases }' are not modifiable in requests using API version: '{ apiVersion }'. This can happen in requests using API versions for which the aliases do not support the 'modify' effect, or support the 'modify' effect with a different token type. | | ConflictingAppendPolicies | ConflictingAppendPolicies: Found conflicting policy assignments that modify the '{notApplicableFields}' field. Policy identifiers: '{policy}'. Please contact the subscription administrator to update the policy assignments. |
-| |
| AppendPoliciesFieldsExist | AppendPoliciesFieldsExistWithDifferentValues: Policy assignments attempted to append fields which already exist in the request with different values. Fields: '{existingFields}'. Policy identifiers: '{policy}'. Please contact the subscription administrator to update the policies. |
-| |
| AppendPoliciesUndefinedFields | AppendPoliciesUndefinedFields: Found policy definition that refers to an undefined field property for API version '{apiVersion}'. Fields: '{nonExistingFields}'. Policy identifiers: '{policy}'. Please contact the subscription administrator to update the policies. |
-| |
| MissingRegistrationForType | MissingRegistrationForResourceType: The subscription is not registered for the resource type '{ResourceType}'. Please check that the resource type exists and that the resource type is registered. |
-| |
| AmbiguousPolicyEvaluationPaths | The request content has one or more ambiguous paths: '{0}' required by policies: '{1}'. | | InvalidResourceNameWildcardPosition | The policy assignment '{0}' associated with the policy definition '{1}' could not be evaluated. The resource name '{2}' within an ifNotExists condition contains the wildcard '?' character in an invalid position. Wildcards can only be located at the end of the name in a segment by themselves (ex. TopLevelResourceName/?). Please either fix the policy or remove the policy assignment to unblock. | | TooManyResourceNameSegments | The policy assignment '{0}' associated with the policy definition '{1}' could not be evaluated. The resource name '{2}' within an ifNotExists condition contains too many name segments. The number of name segments must be equal to or less than the number of type segments (excluding the resource provider namespace). Please either fix the policy definition or remove the policy assignment to unblock. |
query this information outside of the Azure portal, see [Get resource changes](.
- Understand how to [programmatically create policies](programmatically-create.md). - Learn how to [get compliance data](get-compliance-data.md). - Learn how to [remediate non-compliant resources](remediate-resources.md).-- Review what a management group is with [Organize your resources with Azure management groups](../../management-groups/overview.md).
+- Review what a management group is with [Organize your resources with Azure management groups](../../management-groups/overview.md).
hdinsight Hdinsight Hadoop Oms Log Analytics Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-oms-log-analytics-tutorial.md
If you don't have an Azure subscription, [create a free account](https://azure.m
* Interactive Query * Kafka * Spark
- * Storm
For the instructions on how to create an HDInsight cluster, see [Get started with Azure HDInsight](hadoop/apache-hadoop-linux-tutorial-get-started.md).
Available HDInsight workbooks:
- HDInsight Kafka Workbook - HDInsight HBase Workbook - HDInsight Hive/LLAP Workbook-- HDInsight Storm Workbook Screenshot of Spark Workbook :::image type="content" source="./media/hdinsight-hadoop-oms-log-analytics-tutorial/hdinsight-spark-workbook.png" alt-text="Spark workbook screenshot":::
You can see the detail cluster list in each section.
In the **Overview** tab under **Monitored Clusters**, you can see cluster type, critical Alerts, and resource utilizations. :::image type="content" source="./media/hdinsight-hadoop-oms-log-analytics-tutorial/hdinsight-cluster-alerts.png" alt-text="Cluster monitor alerts screenshot":::
-Also you can see the clusters in each workload type, including Spark, HBase, Hive, Kafka, and Storm.
+You can also see the clusters in each workload type, including Spark, HBase, Hive, and Kafka.
The high-level metrics of each workload type will be presented, including how many active node managers, how many running applications, etc.
HDInsight support cluster auditing with Azure Monitor logs, by importing the fol
* Interactive Query * Kafka * Spark
- * Storm
For the instructions on how to create an HDInsight cluster, see [Get started with Azure HDInsight](hadoop/apache-hadoop-linux-tutorial-get-started.md).
Available HDInsight solutions:
* HDInsight Interactive Query Monitoring * HDInsight Kafka Monitoring * HDInsight Spark Monitoring
-* HDInsight Storm Monitoring
For management solution instructions, see [Management solutions in Azure](../azure-monitor/insights/solutions.md#install-a-monitoring-solution). To experiment, install a HDInsight Hadoop Monitoring solution. When it's done, you see an **HDInsightHadoop** tile listed under **Summary**. Select the **HDInsightHadoop** tile. The HDInsightHadoop solution looks like:
hdinsight Hdinsight Overview Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-overview-versioning.md
Title: Versioning introduction - Azure HDInsight
description: Learn how versioning works in Azure HDInsight. Previously updated : 02/08/2021 Last updated : 09/26/2022 # How versioning works in HDInsight
-HDInsight service has two main components: a Resource provider and Apache Hadoop components that are deployed on a cluster.
+HDInsight service has two main components: a Resource provider and open-source software (OSS) components that are deployed on a cluster.
## HDInsight Resource provider
healthcare-apis Access Healthcare Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/access-healthcare-apis.md
Title: Access Azure Health Data Services description: This article describes the different ways to access Azure Health Data Services in your applications using tools and programming languages. -+
healthcare-apis Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/authentication-authorization.md
Title: Azure Health Data Services Authentication and Authorization description: This article provides an overview of the authentication and authorization of Azure Health Data Services. -+ Last updated 06/06/2022-+ # Authentication and Authorization for Azure Health Data Services
healthcare-apis Autoscale Azure Api Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/autoscale-azure-api-fhir.md
Title: Autoscale for Azure API for FHIR description: This article describes the autoscale feature for Azure API for FHIR.-+ Last updated 06/02/2022-+ # Autoscale for Azure API for FHIR
healthcare-apis Azure Active Directory Identity Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/azure-active-directory-identity-configuration.md
Title: Azure Active Directory identity configuration for Azure API for FHIR description: Learn the principles of identity, authentication, and authorization for Azure FHIR servers. -+ Last updated 06/02/2022-+ # Azure Active Directory identity configuration for Azure API for FHIR
healthcare-apis Azure Api Fhir Access Token Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/azure-api-fhir-access-token-validation.md
Title: Azure API for FHIR access token validation description: Walks through token validation and gives tips on how to troubleshoot access issues --+ Last updated 06/02/2022-+ # Azure API for FHIR access token validation
healthcare-apis Azure Api Fhir Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/azure-api-fhir-resource-manager-template.md
Title: 'Quickstart: Deploy Azure API for FHIR using an ARM template' description: In this quickstart, learn how to deploy Azure API for Fast Healthcare Interoperability Resources (FHIR®), by using an Azure Resource Manager template (ARM template).-+ -+ Last updated 06/03/2022
healthcare-apis Azure Api For Fhir Additional Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/azure-api-for-fhir-additional-settings.md
description: Overview of the additional settings you can set for Azure API for F
---++ Last updated 06/02/2022
healthcare-apis Carin Implementation Guide Blue Button Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/carin-implementation-guide-blue-button-tutorial.md
---++ Last updated 06/02/2022
healthcare-apis Centers For Medicare Tutorial Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/centers-for-medicare-tutorial-introduction.md
---++ Last updated 06/02/2022
healthcare-apis Configure Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-azure-rbac.md
Title: Configure Azure role-based access control (Azure RBAC) for Azure API for FHIR description: This article describes how to configure Azure RBAC for the Azure API for FHIR data plane-+ Last updated 06/02/2022--+ # Configure Azure RBAC for FHIR
healthcare-apis Configure Cross Origin Resource Sharing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-cross-origin-resource-sharing.md
Title: Configure cross-origin resource sharing in Azure API for FHIR description: This article describes how to configure cross-origin resource sharing in Azure API for FHIR.--++ Last updated 06/03/2022
healthcare-apis Configure Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-database.md
Title: Configure database settings in Azure API for FHIR description: This article describes how to configure Database settings in Azure API for FHIR-+ Last updated 06/03/2022-+ # Configure database settings
healthcare-apis Configure Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-export-data.md
Title: Configure export settings in Azure API for FHIR description: This article describes how to configure export settings in Azure API for FHIR-+ Last updated 06/03/2022-+ # Configure export settings in Azure API for FHIR and set up a storage account
healthcare-apis Configure Local Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-local-rbac.md
Title: Configure local role-based access control (local RBAC) for Azure API for FHIR description: This article describes how to configure the Azure API for FHIR to use a secondary Azure AD tenant for data plane-+ Last updated 06/03/2022-+ ms.devlang: azurecli
healthcare-apis Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-private-link.md
Title: Private link for Azure API for FHIR description: This article describes how to set up a private endpoint for Azure API for FHIR services -+ Last updated 06/03/2022-+ # Configure private link
healthcare-apis Convert Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/convert-data.md
Title: Data conversion for Azure API for FHIR description: Use the $convert-data endpoint and customize-converter templates to convert data in Azure API for FHIR. -+ Last updated 06/03/2022-+ # Converting your data to FHIR for Azure API for FHIR
healthcare-apis Copy To Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/copy-to-synapse.md
Title: Copy data in Azure API for FHIR to Azure Synapse Analytics description: This article describes copying FHIR data into Synapse in Azure API for FHIR-+ Last updated 06/03/2022-+ # Copy data from Azure API for FHIR to Azure Synapse Analytics
healthcare-apis Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/customer-managed-key.md
Title: Configure customer-managed keys for Azure API for FHIR description: Bring your own key feature supported in Azure API for FHIR through Cosmos DB -+ Last updated 06/03/2022-+ ms.devlang: azurecli
healthcare-apis Davinci Drug Formulary Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/davinci-drug-formulary-tutorial.md
---++ Last updated 06/03/2022
healthcare-apis Davinci Pdex Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/davinci-pdex-tutorial.md
--++ Last updated 06/03/2022
healthcare-apis Davinci Plan Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/davinci-plan-net.md
---++ Last updated 06/03/2022
healthcare-apis De Identified Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/de-identified-export.md
Title: Exporting de-identified data for Azure API for FHIR description: This article describes how to set up and use de-identified export for Azure API for FHIR-+ Last updated 08/24/2022-+ # Exporting de-identified data for Azure API for FHIR
healthcare-apis Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/disaster-recovery.md
Title: Disaster recovery for Azure API for FHIR description: In this article, you'll learn how to enable disaster recovery features for Azure API for FHIR.-+ Last updated 06/03/2022-+ # Disaster recovery for Azure API for FHIR
healthcare-apis Enable Diagnostic Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/enable-diagnostic-logging.md
---++ Last updated 06/03/2022
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/export-data.md
Title: Executing the export by invoking $export command on Azure API for FHIR description: This article describes how to export FHIR data using $export for Azure API for FHIR-+ Last updated 06/03/2022-+ # How to export FHIR data in Azure API for FHIR
healthcare-apis Fhir App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-app-registration.md
---++ Last updated 06/03/2022
healthcare-apis Fhir Features Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-features-supported.md
Title: Supported FHIR features in Azure - Azure API for FHIR description: This article explains which features of the FHIR specification that are implemented in Azure API for FHIR -+ Last updated 06/03/2022-+ # Features
healthcare-apis Fhir Github Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-github-projects.md
Title: Related GitHub Projects for Azure API for FHIR description: List all Open Source (GitHub) repositories for Azure API for FHIR. -+ Last updated 06/03/2022-+ # Related GitHub Projects
healthcare-apis Fhir Paas Cli Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-paas-cli-quickstart.md
Title: 'Quickstart: Deploy Azure API for FHIR using Azure CLI' description: In this quickstart, you'll learn how to deploy Azure API for FHIR in Azure using the Azure CLI. -+ Last updated 06/03/2022-+
healthcare-apis Fhir Paas Portal Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-paas-portal-quickstart.md
Title: 'Quickstart: Deploy Azure API for FHIR using Azure portal' description: In this quickstart, you'll learn how to deploy Azure API for FHIR and configure settings using the Azure portal. -+ Last updated 06/03/2022-+
healthcare-apis Fhir Paas Powershell Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-paas-powershell-quickstart.md
Title: 'Quickstart: Deploy Azure API for FHIR using PowerShell' description: In this quickstart, you'll learn how to deploy Azure API for FHIR using PowerShell. -+ Last updated 06/03/2022-+
healthcare-apis Fhir Rest Api Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-rest-api-capabilities.md
Title: FHIR REST API capabilities for Azure API for FHIR description: This article describes the RESTful interactions and capabilities for Azure API for FHIR.-+ Last updated 06/03/2022-+ # FHIR REST API capabilities for Azure API for FHIR
healthcare-apis Find Identity Object Ids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/find-identity-object-ids.md
Title: Find identity object IDs for authentication - Azure API for FHIR description: This article explains how to locate the identity object IDs needed to configure authentication for Azure API for FHIR -+ Last updated 06/03/2022-+ # Find identity object IDs for authentication configuration for Azure API for FHIR
healthcare-apis Get Healthcare Apis Access Token Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/get-healthcare-apis-access-token-cli.md
Title: Get access token using Azure CLI - Azure API for FHIR description: This article explains how to obtain an access token for Azure API for FHIR using the Azure CLI. -+ Last updated 06/03/2022-+ # Get access token for Azure API for FHIR using Azure CLI
healthcare-apis Get Started With Azure Api Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/get-started-with-azure-api-fhir.md
Title: Get started with Azure API for FHIR description: This document describes how to get started with Azure API for FHIR.-+ Last updated 06/03/2022-+ # Get started with Azure API for FHIR
healthcare-apis How To Do Custom Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/how-to-do-custom-search.md
Title: How to do custom search in Azure API for FHIR description: This article describes how you can define your own custom search parameters in Azure API for FHIR to be used in the database. -+ Last updated 06/03/2022-+ # Defining custom search parameters for Azure API for FHIR
healthcare-apis How To Run A Reindex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/how-to-run-a-reindex.md
Title: How to run a reindex job in Azure API for FHIR description: This article describes how to run a reindex job to index any search or sort parameters that haven't yet been indexed in your database. -+ Last updated 06/03/2022-+ # Running a reindex job in Azure API for FHIR
healthcare-apis Move Fhir Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/move-fhir-service.md
Title: Move Azure API for FHIR instance to a different subscription or resource group description: This article describes how to move Azure an API for FHIR instance -+ Last updated 06/03/2022-+ # Move Azure API for FHIR to a different subscription or resource group
healthcare-apis Overview Of Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/overview-of-search.md
Title: Overview of search in Azure API for FHIR description: This article describes an overview of FHIR search that is implemented in Azure API for FHIR-+ Last updated 06/03/2022-+ # Overview of search in Azure API for FHIR
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/overview.md
Title: What is Azure API for FHIR? - Azure API for FHIR description: Azure API for FHIR enables rapid exchange of data through FHIR APIs. Ingest, manage, and persist Protected Health Information PHI with a managed cloud service. -+ Last updated 06/03/2022-+ # What is Azure API for FHIR?
healthcare-apis Patient Everything https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/patient-everything.md
Title: Use patient-everything in Azure API for FHIR description: This article explains how to use the Patient-everything operation in the Azure API for FHIR. -+ Last updated 06/03/2022-+ # Patient-everything in FHIR
healthcare-apis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md
Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Last updated 09/12/2022--++
healthcare-apis Purge History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/purge-history.md
Title: Purge history operation for Azure API for FHIR description: This article describes the $purge-history operation for Azure API for FHIR.-+ Last updated 06/03/2022-+ # Purge history operation for Azure API for FHIR
healthcare-apis Register Confidential Azure Ad Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/register-confidential-azure-ad-client-app.md
Title: Register a confidential client app in Azure AD - Azure API for FHIR description: Register a confidential client application in Azure Active Directory that authenticates on a user's behalf and requests access to resource applications.-+ Last updated 06/03/2022-+ # Register a confidential client application in Azure Active Directory for Azure API for FHIR
healthcare-apis Register Public Azure Ad Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/register-public-azure-ad-client-app.md
Title: Register a public client app in Azure AD - Azure API for FHIR description: This article explains how to register a public client application in Azure Active Directory, in preparation for deploying FHIR API in Azure.-+ Last updated 06/03/2022-+ # Register a public client application in Azure Active Directory for Azure API for FHIR
healthcare-apis Register Resource Azure Ad Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/register-resource-azure-ad-client-app.md
Title: Register a resource app in Azure AD - Azure API for FHIR description: Register a resource (or API) app in Azure Active Directory, so that client applications can request access to the resource when authenticating. -+ Last updated 06/03/2022-+
healthcare-apis Register Service Azure Ad Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/register-service-azure-ad-client-app.md
Title: Register a service app in Azure AD - Azure API for FHIR description: Learn how to register a service client application in Azure Active Directory. -+ Last updated 06/03/2022-+ # Register a service client application in Azure Active Directory for Azure API for FHIR
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/release-notes.md
Title: Azure API for FHIR monthly releases description: This article provides details about the Azure API for FHIR monthly features and enhancements. -+ Last updated 06/16/2022 -+ # Release notes: Azure API for FHIR
healthcare-apis Search Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/search-samples.md
Title: Search examples for Azure API for FHIR description: How to search using different search parameters, modifiers, and other FHIR search tools-+ Last updated 06/03/2022-+ # FHIR search examples for Azure API for FHIR
healthcare-apis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API for FHIR
description: Lists Azure Policy Regulatory Compliance controls available for Azure API for FHIR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Last updated 09/12/2022 --++
healthcare-apis Store Profiles In Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/store-profiles-in-fhir.md
Title: Store profiles in Azure API for FHIR description: This article describes how to store profiles in Azure API for FHIR.-+ Last updated 06/03/2022-+ # Store profiles in Azure API for FHIR
healthcare-apis Tutorial Member Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/tutorial-member-match.md
---++ Last updated 06/03/2022
healthcare-apis Tutorial Web App Fhir Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/tutorial-web-app-fhir-server.md
---++ Last updated 06/03/2022
healthcare-apis Tutorial Web App Public App Reg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/tutorial-web-app-public-app-reg.md
---++ Last updated 06/03/2022
healthcare-apis Tutorial Web App Test Postman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/tutorial-web-app-test-postman.md
---++ Last updated 06/03/2022
healthcare-apis Tutorial Web App Write Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/tutorial-web-app-write-web-app.md
---++ Last updated 06/03/2022
healthcare-apis Use Custom Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/use-custom-headers.md
---++ Last updated 06/03/2022
healthcare-apis Use Smart On Fhir Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/use-smart-on-fhir-proxy.md
--++ Last updated 06/03/2022
healthcare-apis Validation Against Profiles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/validation-against-profiles.md
Title: Validate FHIR resources against profiles in Azure API for FHIR description: This article describes how to validate FHIR resources against profiles in Azure API for FHIR.-+ Last updated 06/03/2022-+ # Validate FHIR resources against profiles in Azure API for FHIR
healthcare-apis Configure Azure Rbac Using Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/configure-azure-rbac-using-scripts.md
Title: Grant permissions to users and client applications using CLI and REST API - Azure Health Data Services description: This article describes how to grant permissions to users and client applications using CLI and REST API. -+ Last updated 06/06/2022
healthcare-apis Configure Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/configure-azure-rbac.md
Title: Configure Azure RBAC role for FHIR service - Azure Health Data Services description: This article describes how to configure Azure RBAC role for FHIR.-+ Last updated 06/06/2022
healthcare-apis Deploy Healthcare Apis Using Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/deploy-healthcare-apis-using-bicep.md
Title: How to create Azure Health Data Services, workspaces, FHIR and DICOM service, and MedTech service using Azure Bicep description: This document describes how to deploy Azure Health Data Services using Azure Bicep.-+
healthcare-apis Api Versioning Dicom Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/api-versioning-dicom-service.md
Title: API versioning for DICOM service - Azure Health Data Services description: This guide gives an overview of the API version policies for the DICOM service. -+ Last updated 06/11/2022-+ # API versioning for DICOM service
healthcare-apis Configure Cross Origin Resource Sharing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/configure-cross-origin-resource-sharing.md
Title: Configure cross-origin resource sharing in DICOM service in Azure Health Data Services description: This article describes how to configure cross-origin resource sharing in DICOM service in Azure Health Data Services--++ Last updated 06/14/2022
healthcare-apis Deploy Dicom Services In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/deploy-dicom-services-in-azure.md
Title: Deploy DICOM service using the Azure portal - Azure Health Data Services description: This article describes how to deploy DICOM service in the Azure portal.-+ Last updated 05/03/2022-+
healthcare-apis Dicom Cast Access Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-cast-access-request.md
Title: DICOM access request reference guide - Azure Health Data Services description: This reference guide provides information about to create an Azure support ticket to request DICOMcast access.-+ Last updated 06/03/2022-+ # DICOMcast access request
healthcare-apis Dicom Cast Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-cast-overview.md
Title: DICOMcast overview - Azure Health Data Services description: In this article, you'll learn the concepts of DICOMcast.-+ Last updated 06/03/2022-+ # DICOMcast overview
healthcare-apis Dicom Change Feed Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-change-feed-overview.md
Title: Overview of DICOM Change Feed - Azure Health Data Services description: In this article, you'll learn the concepts of DICOM Change Feed.-+ Last updated 03/01/2022-+ # Change Feed Overview
healthcare-apis Dicom Configure Azure Rbac Old https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-configure-azure-rbac-old.md
Title: Configure Azure RBAC for the DICOM service - Azure Health Data Services description: This article describes how to configure Azure RBAC for the DICOM service-+ Last updated 03/02/2022-+ # Configure Azure RBAC for the DICOM service
healthcare-apis Dicom Extended Query Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-extended-query-tags-overview.md
Title: DICOM extended query tags overview - Azure Health Data Services description: In this article, you'll learn the concepts of Extended Query Tags.-+ Last updated 03/21/2022-+ # Extended query tags
healthcare-apis Dicom Get Access Token Azure Cli Old https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-get-access-token-azure-cli-old.md
Title: Get access token using Azure CLI - Azure Health Data Services for DICOM service description: This article explains how to obtain an access token for the DICOM service using the Azure CLI.-+ Last updated 03/02/2022-+ # Get access token for the DICOM service using Azure CLI
healthcare-apis Dicom Register Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-register-application.md
Title: Register a client application for the DICOM service in Azure Active Directory description: How to register a client application for the DICOM service in Azure Active Directory.-+ Last updated 09/02/2022-+ # Register a client application for the DICOM service in Azure Active Directory
healthcare-apis Dicom Services Conformance Statement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-conformance-statement.md
Title: DICOM Conformance Statement for Azure Health Data Services description: This document provides details about the DICOM Conformance Statement for Azure Health Data Services. -+ Last updated 06/10/2022-+ # DICOM Conformance Statement
healthcare-apis Dicom Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-overview.md
Title: Overview of the DICOM service - Azure Health Data Services description: In this article, you'll learn concepts of DICOM and the DICOM service.-+ Last updated 07/11/2022-+ # Overview of the DICOM service
healthcare-apis Dicomweb Standard Apis C Sharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicomweb-standard-apis-c-sharp.md
Title: Using DICOMweb&trade;Standard APIs with C# - Azure Health Data Services description: In this tutorial, you'll learn how to use DICOMweb Standard APIs with C#. -+ Last updated 05/26/2022-+ # Using DICOMweb&trade; Standard APIs with C#
healthcare-apis Dicomweb Standard Apis Curl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicomweb-standard-apis-curl.md
Title: Using DICOMweb&trade;Standard APIs with cURL - Azure Health Data Services description: In this tutorial, you'll learn how to use DICOMweb Standard APIs with cURL. -+ Last updated 02/15/2022-+ # Using DICOMWeb&trade; Standard APIs with cURL
healthcare-apis Dicomweb Standard Apis Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicomweb-standard-apis-python.md
Title: Using DICOMweb Standard APIs with Python - Azure Health Data Services description: This tutorial describes how to use DICOMweb Standard APIs with Python. -+ Last updated 02/15/2022-+ # Using DICOMWeb&trade; Standard APIs with Python
healthcare-apis Dicomweb Standard Apis With Dicom Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicomweb-standard-apis-with-dicom-services.md
Title: Using DICOMweb - Standard APIs with Azure Health Data Services DICOM service description: This tutorial describes how to use DICOMweb Standard APIs with the DICOM service. -+ Last updated 03/22/2022-+ # Using DICOMweb&trade;Standard APIs with DICOM services
healthcare-apis Enable Diagnostic Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/enable-diagnostic-logging.md
Title: Enable diagnostic logging in the DICOM service - Azure Health Data Services description: This article explains how to enable diagnostic logging in the DICOM service.-+ Last updated 03/02/2022-+ # Enable Diagnostic Logging in the DICOM service
healthcare-apis Get Started With Dicom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/get-started-with-dicom.md
Title: Get started with the DICOM service - Azure Health Data Services description: This document describes how to get started with the DICOM service in Azure Health Data Services.-+ Last updated 06/03/2022-+
healthcare-apis Pull Dicom Changes From Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/pull-dicom-changes-from-change-feed.md
Title: Pull DICOM changes using the Change Feed description: This how-to guide explains how to pull DICOM changes using DICOM Change Feed for Azure Health Data Services.-+ Last updated 02/15/2022-+ # Pull DICOM changes using the Change Feed
healthcare-apis References For Dicom Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/references-for-dicom-service.md
Title: References for DICOM service - Azure Health Data Services description: This reference provides related resources for the DICOM service.-+ Last updated 06/03/2022-+ # DICOM service open-source projects
healthcare-apis Azure Active Directory Identity Configuration Old https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/azure-active-directory-identity-configuration-old.md
Title: Azure Active Directory identity configuration for Azure Health Data Services for FHIR service description: Learn the principles of identity, authentication, and authorization for FHIR service -+ Last updated 06/03/2022-+ # Azure Active Directory identity configuration for FHIR service
healthcare-apis Carin Implementation Guide Blue Button Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/carin-implementation-guide-blue-button-tutorial.md
---++ Last updated 06/06/2022
healthcare-apis Centers For Medicare Tutorial Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/centers-for-medicare-tutorial-introduction.md
---++ Last updated 06/06/2022
healthcare-apis Configure Cross Origin Resource Sharing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-cross-origin-resource-sharing.md
Title: Configure cross-origin resource sharing in FHIR service description: This article describes how to configure cross-origin resource sharing in FHIR service--++ Last updated 06/06/2022
healthcare-apis Configure Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-export-data.md
Title: Configure export settings in FHIR service - Azure Health Data Services description: This article describes how to configure export settings in the FHIR service-+ Last updated 08/12/2022-+ # Configure export settings and set up a storage account
healthcare-apis Configure Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-import-data.md
Title: Configure import settings in the FHIR service - Azure Health Data Services description: This article describes how to configure import settings in the FHIR service.-+ Last updated 06/06/2022-+ # Configure bulk-import settings (Preview)
healthcare-apis Convert Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/convert-data.md
Title: FHIR data conversion for Azure Health Data Services description: Use the $convert-data endpoint and custom converter templates to convert data to FHIR in Azure Health Data Services. -+ Last updated 08/15/2022-+
healthcare-apis Copy To Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/copy-to-synapse.md
Title: Copy data from FHIR service in Azure Health Data Services to Azure Synapse Analytics description: This article describes copying FHIR data into Synapse-+ Last updated 06/06/2022-+ # Copy data from FHIR service to Azure Synapse Analytics
healthcare-apis Davinci Drug Formulary Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/davinci-drug-formulary-tutorial.md
---++ Last updated 06/06/2022
healthcare-apis Davinci Pdex Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/davinci-pdex-tutorial.md
--++ Last updated 06/06/2022
healthcare-apis Davinci Plan Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/davinci-plan-net.md
---++ Last updated 06/06/2022
healthcare-apis De Identified Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/de-identified-export.md
Title: Using the FHIR service to export de-identified data description: This article describes how to set up and use de-identified export-+ Last updated 08/30/2022-+ # Exporting de-identified data
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/export-data.md
Title: Executing the export by invoking $export command on FHIR service description: This article describes how to export FHIR data using $export-+ Last updated 08/03/2022-+ # How to export FHIR data
The FHIR service supports `$export` at the following levels:
* [Patient](https://hl7.org/Fhir/uv/bulkdata/export/index.html#endpointall-patients): `GET {{fhirurl}}/Patient/$export` * [Group of patients*](https://hl7.org/Fhir/uv/bulkdata/export/index.html#endpointgroup-of-patients) – *The FHIR service exports all referenced resources but doesn't export the characteristics of the group resource itself: `GET {{fhirurl}}/Group/[ID]/$export`
-When data is exported, a separate file is created for each resource type. No individual file will exceed one million resource records. The result is that you may get multiple files for a resource type, which will be enumerated (for example, `Patient-1.ndjson`, `Patient-2.ndjson`). Every file will not necessarily have one million resource records listed.
+With export, data is exported in multiple files, each containing resources of only one type. No individual file will exceed 100,000 resource records. The result is that you may get multiple files for a resource type, which will be enumerated (for example, `Patient-1.ndjson`, `Patient-2.ndjson`).
> [!Note] > `Patient/$export` and `Group/[ID]/$export` may export duplicate resources if a resource is in multiple groups or in a compartment of more than one resource.
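As a rough illustration of the pattern above, the following cURL sketch starts a Patient-level export. It assumes you already have a configured export storage account, a valid Azure AD access token, and a FHIR service URL; the placeholder values are illustrative, not taken from this article.

```bash
# Placeholder values - substitute your own FHIR service URL and Azure AD access token.
FHIR_URL="https://<workspace>-<fhir-service>.fhir.azurehealthcareapis.com"
TOKEN="<access-token>"

# Start a Patient-level export. Bulk export is asynchronous: the Prefer header asks the
# service to queue the job, and the 202 response carries a Content-Location header with
# the status URL to poll until the list of generated .ndjson files is returned.
curl -i "$FHIR_URL/Patient/\$export" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Accept: application/fhir+json" \
  -H "Prefer: respond-async"
```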
healthcare-apis Fhir Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-faq.md
Title: FAQs about FHIR service in Azure Health Data Services description: Get answers to frequently asked questions about FHIR service, such as the storage location of data behind FHIR APIs and version support. -+ Last updated 06/06/2022-+
healthcare-apis Fhir Features Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-features-supported.md
Title: Supported FHIR features in FHIR service description: This article explains which features of the FHIR specification that are implemented in Azure Health Data Services -+ Last updated 06/06/2022-+ # Supported FHIR Features
healthcare-apis Fhir Portal Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-portal-quickstart.md
Title: Deploy a FHIR service within Azure Health Data Services description: This article teaches users how to deploy a FHIR service in the Azure portal.-+ Last updated 06/06/2022-+
healthcare-apis Fhir Rest Api Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-rest-api-capabilities.md
Title: FHIR REST API capabilities for Azure Health Data Services FHIR service description: This article describes the RESTful interactions and capabilities for Azure Health Data Services FHIR service.-+ Last updated 06/06/2022-+ # FHIR REST API capabilities for Azure Health Data Services FHIR service
healthcare-apis Fhir Service Access Token Validation Old https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-service-access-token-validation-old.md
Title: FHIR service access token validation description: Access token validation procedure and troubleshooting guide for FHIR service -+ Last updated 06/06/2022-+ # FHIR service access token validation
healthcare-apis Fhir Service Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-service-autoscale.md
Title: Autoscale feature for Azure Health Data Services FHIR service description: This article describes the Autoscale feature for Azure Health Data Services FHIR service.-+ Last updated 06/06/2022-+ # FHIR service autoscale
healthcare-apis Fhir Service Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-service-bicep.md
Title: Deploy Azure Health Data Services FHIR service using Bicep description: Learn how to deploy FHIR service by using Bicep-+ -+ Last updated 05/27/2022
healthcare-apis Fhir Service Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-service-diagnostic-logs.md
Title: View and enable diagnostic settings in FHIR service - Azure Health Data Services description: This article describes how to enable diagnostic settings in FHIR service and review some sample queries for audit logs. -+ Last updated 06/06/2022-+ # View and enable diagnostic settings in the FHIR service
healthcare-apis Fhir Service Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-service-resource-manager-template.md
Title: Deploy Azure Health Data Services FHIR service using ARM template description: Learn how to deploy FHIR service by using an Azure Resource Manager template (ARM template)-+ -+ Last updated 06/06/2022
healthcare-apis Fhir Versioning Policy And History Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-versioning-policy-and-history-management.md
Title: Versioning policy and history management for Azure Health Data Services FHIR service description: This article describes the concepts of versioning policy and history management for Azure Health Data Services FHIR service.-+ Last updated 06/06/2022-+ # Versioning policy and history management
healthcare-apis Get Started With Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/get-started-with-fhir.md
Title: Get started with FHIR service - Azure Health Data Services description: This document describes how to get started with FHIR service in Azure Health Data Services.-+ Last updated 06/06/2022-+
healthcare-apis How To Do Custom Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/how-to-do-custom-search.md
Title: How to do custom search in FHIR service description: This article describes how you can define your own custom search parameters to be used in the database. -+ Last updated 08/22/2022-+ # Defining custom search parameters
healthcare-apis How To Run A Reindex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/how-to-run-a-reindex.md
Title: How to run a reindex job in FHIR service - Azure Health Data Services description: How to run a reindex job to index any search or sort parameters that haven't yet been indexed in your database-+ Last updated 08/22/2022-+ # Running a reindex job
healthcare-apis Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/import-data.md
Title: Executing the import by invoking $import operation on FHIR service in Azure Health Data Services description: This article describes how to import FHIR data using $import.-+ Last updated 06/06/2022-+ # Bulk-import FHIR data
healthcare-apis Overview Of Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/overview-of-search.md
Title: Overview of FHIR search in Azure Health Data Services description: This article describes an overview of FHIR search that is implemented in Azure Health Data Services-+ Last updated 08/18/2022-+ # Overview of FHIR search
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/overview.md
Title: What is the FHIR service in Azure Health Data Services? description: The FHIR service enables rapid exchange of health data through FHIR APIs. Ingest, manage, and persist Protected Health Information (PHI) with a managed cloud service. -+ Last updated 09/20/2022-+ # What is the FHIR service in Azure Health Data Services?
healthcare-apis Patient Everything https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/patient-everything.md
Title: Patient-everything - Azure Health Data Services description: This article explains how to use the Patient-everything operation. -+ Last updated 06/06/2022-+ # Using Patient-everything in FHIR service
healthcare-apis Purge History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/purge-history.md
Title: Purge history operation for Azure Health Data Services FHIR service description: This article describes the $purge-history operation for the FHIR service.-+ Last updated 06/06/2022-+ # Purge history operation
healthcare-apis Search Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/search-samples.md
Title: Search examples for FHIR service description: How to search using different search parameters, modifiers, and other search tools for FHIR-+ Last updated 08/22/2022-+ # FHIR search examples
healthcare-apis Store Profiles In Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/store-profiles-in-fhir.md
Title: Store profiles in FHIR service in Azure Health Data Services description: This article describes how to store profiles in the FHIR service-+ Last updated 06/06/2022-+ # Store profiles in FHIR service
healthcare-apis Tutorial Member Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/tutorial-member-match.md
---++ Last updated 06/06/2022
healthcare-apis Use Postman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/use-postman.md
Title: Access the Azure Health Data Services FHIR service using Postman description: This article describes how to access Azure Health Data Services FHIR service with Postman. -+ Last updated 06/06/2022-+ # Access using Postman
healthcare-apis Using Curl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/using-curl.md
Title: Access Azure Health Data Services with cURL description: This article explains how to access Azure Health Data Services with cURL -+ Last updated 06/06/2022-+ # Access the Azure Health Data Services with cURL
healthcare-apis Using Rest Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/using-rest-client.md
Title: Access Azure Health Data Services using REST Client description: This article explains how to access the Healthcare APIs using the REST Client extension in VS Code -+ Last updated 06/06/2022-+ # Accessing Azure Health Data Services using the REST Client Extension in Visual Studio Code
healthcare-apis Validation Against Profiles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/validation-against-profiles.md
Title: Validate FHIR resources against profiles in Azure Health Data Services description: This article describes how to validate FHIR resources against profiles in the FHIR service.-+ Last updated 06/06/2022-+ # Validate FHIR resources against profiles in Azure Health Data Services
healthcare-apis Get Access Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/get-access-token.md
Title: Get access token using Azure CLI or Azure PowerShell description: This article explains how to obtain an access token for Azure Health Data Services using the Azure CLI or Azure PowerShell. -+
healthcare-apis Get Started With Health Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/get-started-with-health-data-services.md
Title: Get started with Azure Health Data Services description: This document describes how to get started with Azure Health Data Services.-+ Last updated 06/06/2022-+ # Get started with Azure Health Data Services
healthcare-apis Github Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/github-projects.md
Title: Related GitHub Projects for Azure Health Data Services description: List all Open Source (GitHub) repositories -+ Last updated 06/06/2022-+ # GitHub Projects
healthcare-apis Healthcare Apis Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-configure-private-link.md
Title: Private Link for Azure Health Data Services description: This article describes how to set up a private endpoint for Azure Health Data Services -+
healthcare-apis Healthcare Apis Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-faqs.md
Title: FAQs about Azure Health Data Services description: This document provides answers to the frequently asked questions about Azure Health Data Services. -+ Last updated 06/15/2022-+ # Frequently asked questions about Azure Health Data Services
healthcare-apis Healthcare Apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-overview.md
Title: What is Azure Health Data Services? description: This article is an overview of Azure Health Data Services. -+ Last updated 06/03/2022-+ # What is Azure Health Data Services?
healthcare-apis Healthcare Apis Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-quickstart.md
Title: Deploy workspace in the Azure portal - Azure Health Data Services description: This document teaches users how to deploy a workspace in the Azure portal.-+ Last updated 06/06/2022-+
healthcare-apis How To Use Iot Central Json Content Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-iot-central-json-content-mappings.md
Title: IotCentralJsonPathContentTemplate mappings in MedTech service Device mappings - Azure Health Data Services
-description: This article describes how IotCentralJsonPathContent mappings with MedTech service Device mappings templates.
+ Title: IotCentralJsonPathContentTemplate mappings in MedTech service device mappings - Azure Health Data Services
+description: This article describes how to use IotCentralJsonPathContentTemplate mappings with the MedTech service device mappings.
Previously updated : 02/16/2022 Last updated : 09/16/2022 # How to use IotCentralJsonPathContentTemplate mappings > [!TIP]
-> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting the MedTech service Device and FHIR destination mappings. Export mappings for uploading to MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
+> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting the MedTech service device and FHIR destination mappings. Export mappings for uploading to MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
-This article describes how to use IoTCentralJsonPathContentTemplate mappings with the MedTech service Device mappings.
+This article describes how to use IoTCentralJsonPathContentTemplate mappings with the MedTech service device mappings.
## IotCentralJsonPathContentTemplate
If you're using Azure IoT Central's Data Export feature and custom properties in
## Next steps
-In this article, you learned how to use Device mappings. To learn how to use FHIR destination mappings, see
+In this article, you learned how to use IotCentralJsonPathContentTemplate with your MedTech service device mappings. To learn how to use FHIR destination mappings, see
>[!div class="nextstepaction"] >[How to use FHIR destination mappings](how-to-use-fhir-mappings.md)
-(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/logging.md
Title: Logging for Azure Health Data Services description: This article explains how logging works and how to enable logging for the Azure Health Data Services -+ Last updated 06/06/2022-+ # Logging for Azure Health Data Services
healthcare-apis Register Application Cli Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/register-application-cli-rest.md
Title: Register a client application in Azure AD using CLI and REST API - Azure Health Data Services description: This article describes how to register a client application Azure AD using CLI and REST API. -+ Last updated 05/03/2022
healthcare-apis Register Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/register-application.md
Title: Register a client application in Azure Active Directory for the Azure Health Data Services description: How to register a client application in the Azure AD and how to add a secret and API permissions to the Azure Health Data Services-+ Last updated 09/02/2022-+ # Register a client application in Azure Active Directory
healthcare-apis Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/workspace-overview.md
Title: What is the workspace? - Azure Health Data Services description: This article describes an overview of the Azure Health Data Services workspace.-+ Last updated 06/06/2022-+ # What is Azure Health Data Services workspace?
iot-central Concepts Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-architecture.md
In IoT Central, you can configure and manage security in the following areas:
- Device access to your application. - Programmatic access to your application. - Authentication to other services from your application.
+- Audit logs track activity in your application.
To learn more, see the [IoT Central security guide](overview-iot-central-security.md).
iot-central Concepts Iiot Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-iiot-architecture.md
Secure your IIoT solution by using the following IoT Central features:
- Ensure safe, secure data exports with Azure Active Directory managed identities.
+- Use audit logs to track activity in your IoT Central application.
+ ## Patterns :::image type="content" source="media/concepts-iiot-architecture/automation-pyramid.svg" alt-text="Diagram that shows the five levels of the automation pyramid." border="false":::
iot-central Howto Authorize Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-authorize-rest-api.md
Title: Authorize REST API in Azure IoT Central
description: How to authenticate and authorize IoT Central REST API calls Previously updated : 06/22/2022 Last updated : 07/25/2022
To get a bearer token for a service principal, see [Service principal authentica
To get an API token, you can use the IoT Central UI or a REST API call. Administrators associated with the root organization and users assigned to the correct role can create API tokens.
+> [!TIP]
+> Create and delete operations on API tokens are recorded in the [audit log](howto-use-audit-logs.md).
+ In the IoT Central UI: 1. Navigate to **Permissions > API tokens**.
iot-central Howto Manage Iot Central From Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-iot-central-from-portal.md
You can configure role assignments in the Azure portal or use the Azure CLI:
You can use the set of metrics provided by IoT Central to assess the health of devices connected to your IoT Central application and the health of your running data exports.
+> [!NOTE]
+> IoT Central applications have an internal [audit log](howto-use-audit-logs.md) to track activity within the application.
+ Metrics are enabled by default for your IoT Central application and you access them from the [Azure portal](https://portal.azure.com/). The [Azure Monitor data platform exposes these metrics](../../azure-monitor/essentials/data-platform-metrics.md) and provides several ways for you to interact with them. For example, you can use charts in the Azure portal, a REST API, or queries in PowerShell or the Azure CLI. Access to metrics in the Azure portal is managed by [Azure role based access control](../../role-based-access-control/overview.md). Use the Azure portal to add users to the IoT Central application/resource group/subscription to grant them access. You must add a user in the portal even if they're already added to the IoT Central application. Use [Azure built-in roles](../../role-based-access-control/built-in-roles.md) for finer grained access control.
iot-central Howto Manage Users Roles With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-users-roles-with-rest-api.md
The IoT Central REST API lets you develop client applications that integrate wit
Every IoT Central REST API call requires an authorization header. To learn more, see [How to authenticate and authorize IoT Central REST API calls](howto-authorize-rest-api.md).
+> [!NOTE]
+> Operations on users and roles are recorded in the IoT Central [audit log](howto-use-audit-logs.md).
+ For the reference documentation for the IoT Central REST API, see [Azure IoT Central REST API reference](/rest/api/iotcentral/). [!INCLUDE [iot-central-postman-collection](../../../includes/iot-central-postman-collection.md)]
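As a hedged sketch of what an authorized call looks like, the example below lists an application's users with cURL; the subdomain, API version, and token values are illustrative placeholders rather than values taken from this article.

```bash
# Illustrative placeholders - replace with your application subdomain, a supported
# API version, and either an Azure AD bearer token or an IoT Central API token.
APP_SUBDOMAIN="my-iot-central-app"
API_VERSION="2022-07-31"
TOKEN="<bearer-token>"

# Every IoT Central REST API call needs the Authorization header. This call lists the
# application's users; create, update, and delete operations on users and roles are
# the kinds of changes recorded in the audit log.
curl -s "https://$APP_SUBDOMAIN.azureiotcentral.com/api/users?api-version=$API_VERSION" \
  -H "Authorization: Bearer $TOKEN"
```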
iot-central Howto Manage Users Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-users-roles.md
Title: Manage users and roles in Azure IoT Central application | Microsoft Docs
description: As an administrator, how to manage users and roles in your Azure IoT Central application Previously updated : 06/22/2022 Last updated : 08/01/2022
To learn how to manage users and roles by using the IoT Central REST API, see [H
## Add users
-Every user must have a user account before they can sign in and access an application. IoT Central currently supports Microsoft user accounts, Azure Active Directory accounts, and Azure Active Directory service principals. IoT Central doesn't currently support Azure Active Directory groups. To learn more, see [Microsoft account help](https://support.microsoft.com/products/microsoft-account?category=manage-account) and [Quickstart: Add new users to Azure Active Directory](../../active-directory/fundamentals/add-users-azure-active-directory.md).
+Every user must have a user account before they can sign in and access an application. IoT Central supports Microsoft user accounts, Azure Active Directory accounts, Azure Active Directory groups, and Azure Active Directory service principals. To learn more, see [Microsoft account help](https://support.microsoft.com/products/microsoft-account?category=manage-account) and [Quickstart: Add new users to Azure Active Directory](../../active-directory/fundamentals/add-users-azure-active-directory.md).
1. To add a user to an IoT Central application, go to the **Users** page in the **Permissions** section. :::image type="content" source="media/howto-manage-users-roles/manage-users-pnp.png" alt-text="Screenshot of manage users page in IoT Central.":::
-1. To add a user on the **Users** page, choose **+ Assign user**. To add a service principal on the **Users** page, choose **+ Assign service principal**. Start typing the name of the service principal to auto-populate the form.
+1. To add a user on the **Users** page, choose **+ Assign user**. To add a service principal on the **Users** page, choose **+ Assign service principal**. To add an Azure Active Directory group on the **Users** page, choose **+ Assign group**. Start typing the name of the Active Directory group or service principal to auto-populate the form.
> [!NOTE]
- > A service principal must belong to the same Azure Active Directory tenant as the Azure subscription associated with the IoT Central application.
+ > Service principals and Active Directory groups must belong to the same Azure Active Directory tenant as the Azure subscription associated with the IoT Central application.
1. If your application uses [organizations](howto-create-organizations.md), choose an organization to assign to the user from the **Organization** drop-down menu.
Every user must have a user account before they can sign in and access an applic
> [!NOTE] > If a user is deleted from Azure Active Directory and then added back, they won't be able to sign into the IoT Central application. To re-enable access, the application's administrator should delete and re-add the user in the application as well.
+The following limitations apply to Azure Active Directory groups and service principals:
+
+- Total number of Azure Active Directory groups for each IoT Central application can't be more than 20.
+- Total number of unique Azure Active Directory groups from the same Azure Active Directory tenant can't be more than 200 across all IoT Central applications.
+- Service principals that are part of an Azure Active Directory group aren't automatically granted access to the application. The service principals must be added explicitly.
+ ### Edit the roles and organizations that are assigned to users Roles and organizations can't be changed after they're assigned. To change the role or organization that's assigned to a user, delete the user, and then add the user again with a different role or organization.
When you define a custom role, you choose the set of permissions that a user is
| Manage | None | | Full Control | Manage |
+**Audit log permissions**
+
+| Name | Dependencies |
+| - | -- |
+| View | None |
+| Full Control | View |
+
+> [!CAUTION]
+> Any user granted permission to view the audit log can see all log entries even if they don't have permission to view or modify the entities listed in the log. Therefore, any user who can view the log can view the identity of and changes made to any modified entity.
+ #### Managing users and roles **Custom roles permissions**
iot-central Howto Use Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-audit-logs.md
+
+ Title: Use Azure IoT Central audit logs | Microsoft Docs
+description: Learn how to use audit logs in IoT Central to track changes made in an IoT Central application
++ Last updated : 07/25/2022++++
+# Administrator
++
+# Use audit logs to track activity in your IoT Central application
+
+This article describes how to use audit logs to track who made what changes at what time in your IoT Central applications. You can:
+
+- Sort the audit log.
+- Filter the audit log.
+- Customize the audit log.
+- Manage access to the audit log.
+
+The audit log records information about who made a change, information about the modified entity, the action that made the change, and when the change was made. The log tracks changes made through the UI, programmatically with the REST API, and through the CLI.
+
+The log records changes to the following IoT Central entities:
+
+- [Users](howto-manage-users-roles.md#add-users)
+- [Roles](howto-manage-users-roles.md#manage-roles)
+- [API tokens](howto-authorize-rest-api.md#token-types)
+- [Application template export](howto-create-iot-central-application.md#create-and-use-a-custom-application-template)
+- [File upload configuration](howto-configure-file-uploads.md#configure-device-file-uploads)
+- [Application customization](howto-customize-ui.md)
+- [Device enrollment groups](concepts-device-authentication.md)
+- [Device templates](howto-set-up-template.md)
+- [Device lifecycle events](howto-export-to-blob-storage.md#device-lifecycle-changes-format)
+
+The log records changes made by the following types of user:
+
+- IoT Central user - the log shows the user's email.
+- API token - the log shows the token name.
+- Azure Active Directory user - the log shows the user email or ID.
+- Service principal - the log shows the service principal name.
+
+The log stores data for 30 days, after which it's no longer available.
+
+The following screenshot shows the audit log view with the location of the sorting and filtering controls highlighted:
++
+## Customize the log
+
+Select **Column options** to customize the audit log view. You can add and remove columns, reorder the columns, and change the column widths:
++
+## Sort the log
+
+You can sort the log into ascending or descending timestamp order. To sort, select **Timestamp**:
++
+## Filter the log
+
+To focus on a specific time, filter the log by time range. Select **Edit time range** and specify the range you're interested in:
++
+To focus on specific entries, filter by entity type or action. Select **Filter** and use the multi-select drop-downs to specify your filter conditions:
++
+## Manage access
+
+The built-in **App Administrator** role has access to the audit logs by default. An administrator can grant access to other roles by assigning them either **Full control** or **View** audit log permissions. To learn more, see [Manage users and roles in your IoT Central application](howto-manage-users-roles.md).
+
+> [!IMPORTANT]
+> Any user granted permission to view the audit log can see all log entries even if they don't have permission to view or modify the entities listed in the log. Therefore, any user who can view the log can view the identity of and changes made to any modified entity.
+
+## Next steps
+
+Now that you've learned how to manage users and roles in your IoT Central application, the suggested next step is to learn how to [Manage IoT Central organizations](howto-create-organizations.md).
iot-central Overview Iot Central Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-admin.md
In IoT Central, you can configure and manage security in the following areas:
- Device access to your application. - Programmatic access to your application. - Authentication to other services from your application.
+- Use audit logs to track activity in your IoT Central application.
To learn more, see the [IoT Central security guide](overview-iot-central-security.md).
An administrator can:
To learn more, see [Create and use a custom application template](howto-create-iot-central-application.md#create-and-use-a-custom-application-template).
-## Integrate with DevOps pipelines
+## Integrate with Azure Pipelines
-Continuous integration and continuous delivery (CI/CD) refers to the process of developing and delivering software in short, frequent cycles using automation pipelines. You can use Azure DevOps pipelines to automate the build, test, and deployment of IoT Central application configurations.
+Continuous integration and continuous delivery (CI/CD) refers to the process of developing and delivering software in short, frequent cycles using automation pipelines. You can use Azure Pipelines to automate the build, test, and deployment of IoT Central application configurations.
Just as IoT Central is a part of your larger IoT solution, make IoT Central a part of your CI/CD pipeline.
-To learn more, see [Integrate IoT Central into your Azure DevOps CI/CD pipeline](howto-integrate-with-devops.md).
+To learn more, see [Integrate IoT Central into your Azure CI/CD pipeline](howto-integrate-with-devops.md).
## Monitor application health
iot-central Overview Iot Central Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-security.md
Title: Azure IoT Central application security guide
description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. This guide describes how to secure your IoT Central application. IoT Central security includes users, devices, API access, and authentication to other services for data export. Previously updated : 04/12/2022 Last updated : 07/25/2022
In IoT Central, you can configure and manage security in the following areas:
- Device access to your application. - Programmatic access to your application. - Authentication to other services from your application.
+- Use a secure virtual network.
+- Audit logs track activity in the application.
## Manage user access
To learn more, see:
Data export in IoT Central lets you continuously stream device data to destinations such as Azure Blob Storage, Azure Event Hubs, Azure Service Bus Messaging. You may choose to lock down these destinations by using an Azure Virtual Network (VNet) and private endpoints. To enable IoT Central to connect to a destination on a secure VNet, configure a firewall exception. To learn more, see [Export data to a secure destination on an Azure Virtual Network](howto-connect-secure-vnet.md).
+## Audit logs
+
+Audit logs let administrators track activity within your IoT Central application. Administrators can see who made what changes at what times. To learn more, see [Use audit logs to track activity in your IoT Central application](howto-use-audit-logs.md).
+
## Next steps Now that you've learned about security in your Azure IoT Central application, the suggested next step is to learn about [Manage users and roles](howto-manage-users-roles.md) in Azure IoT Central.
iot-central Overview Iot Central Tour https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-tour.md
You launch your IoT Central application by navigating to the URL you chose durin
Once you're inside your IoT application, use the left pane to access various features. You can expand or collapse the left pane by selecting the three-lined icon on top of the pane: > [!NOTE]
-> The items you see in the left pane depend on your user role. Learn more about [managing users and roles](howto-manage-users-roles.md).
+> The items you see in the left pane depend on your user role. Learn more about [managing users and roles](howto-manage-users-roles.md).
+
+<!-- TODO: Needs a new screenshot and entry. -->
:::row::: :::column span="":::
- :::image type="content" source="media/overview-iot-central-tour/navigation-bar.png" alt-text="left pane":::
+
+ :::image type="content" source="media/overview-iot-central-tour/navigation-bar.png" alt-text="left pane":::
:::column-end::: :::column span="2":::
-
- **Devices** lets you manage all your devices.
- **Device groups** lets you view and create collections of devices specified by a query. Device groups are used through the application to perform bulk operations.
+ **Devices** lets you manage all your devices.
+
+ **Device groups** lets you view and create collections of devices specified by a query. Device groups are used through the application to perform bulk operations.
- **Device templates** lets you create and manage the characteristics of devices that connect to your application.
+ **Device templates** lets you create and manage the characteristics of devices that connect to your application.
- **Data explorer** exposes rich capabilities to analyze historical trends and correlate various telemetries from your devices.
+ **Data explorer** exposes rich capabilities to analyze historical trends and correlate various telemetries from your devices.
- **Dashboards** displays all application and personal dashboards.
+ **Dashboards** displays all application and personal dashboards.
- **Jobs** lets you manage your devices at scale by running bulk operations.
+ **Jobs** lets you manage your devices at scale by running bulk operations.
- **Rules** lets you create and edit rules to monitor your devices. Rules are evaluated based on device data and trigger customizable actions.
+ **Rules** lets you create and edit rules to monitor your devices. Rules are evaluated based on device data and trigger customizable actions.
- **Data export** lets you configure a continuous export to external services such as storage and queues.
+ **Data export** lets you configure a continuous export to external services such as storage and queues.
- **Permissions** lets you manage an organization's users, devices and data.
+ **Audit logs** lets you view changes made to entities in your application.
- **Application** lets you manage your application's settings, billing, users, and roles.
+ **Permissions** lets you manage an organization's users, devices and data.
+
+ **Application** lets you manage your application's settings, billing, users, and roles.
- **Customization** lets you customize your application appearance.
+ **Customization** lets you customize your application appearance.
+
+ **IoT Central Home** lets you jump back to the IoT Central app manager.
- **IoT Central Home** lets you jump back to the IoT Central app manager.
-
- :::column-end:::
+ :::column-end:::
:::row-end::: ### Search, help, theme, and support
iot-central Overview Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central.md
Build IoT solutions such as:
## Administer your application
-IoT Central applications are fully hosted by Microsoft, which reduces the administration overhead of managing your applications. Administrators manage access to your application with [user roles and permissions](howto-administer.md).
+IoT Central applications are fully hosted by Microsoft, which reduces the administration overhead of managing your applications. Administrators manage access to your application with [user roles and permissions](howto-administer.md) and track activity by using [audit logs](howto-use-audit-logs.md).
## Pricing
iot-dps Quick Setup Auto Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-setup-auto-provision.md
Title: Quickstart - Set up IoT Hub Device Provisioning Service in the Microsoft Azure portal
+ Title: Quickstart - Set up Device Provisioning Service in portal
description: Quickstart - Set up the Azure IoT Hub Device Provisioning Service (DPS) in the Microsoft Azure portal
# Quickstart: Set up the IoT Hub Device Provisioning Service with the Azure portal
-The IoT Hub Device Provisioning Service enables zero-touch, just-in-time device provisioning to any IoT hub. The Device Provisioning Service enables customers to provision millions of IoT devices in a secure and scalable manner, without requiring human intervention. Azure IoT Hub Device Provisioning Service supports IoT devices with TPM, symmetric key, and X.509 certificate authentications. For more information, please refer to [IoT Hub Device Provisioning Service overview](./about-iot-dps.md)
-
-In this quickstart, you'll learn how to set up the IoT Hub Device Provisioning Service in the Azure portal.
+In this quickstart, you will learn how to set up the IoT Hub Device Provisioning Service in the Azure portal. The IoT Hub Device Provisioning Service enables zero-touch, just-in-time device provisioning to any IoT hub. The Device Provisioning Service enables customers to provision millions of IoT devices in a secure and scalable manner, without requiring human intervention. Azure IoT Hub Device Provisioning Service supports IoT devices with TPM, symmetric key, and X.509 certificate authentications. For more information, please refer to [IoT Hub Device Provisioning Service overview](about-iot-dps.md).
To provision your devices, you will:
key-vault Tutorial Import Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/tutorial-import-certificate.md
$Password = ConvertTo-SecureString -String "123" -AsPlainText -Force
Import-AzKeyVaultCertificate -VaultName "<your-key-vault-name>" -Name "ExampleCertificate" -FilePath "C:\path\to\ExampleCertificate.pem" -Password $Password ```
-After importing the certificate, you can view the certificate using the Azure PowerShell [Import-AzKeyVaultCertificate](/powershell/module/az.keyvault/import-azkeyvaultcertificate) cmdlet
+After importing the certificate, you can view the certificate using the Azure PowerShell [Get-AzKeyVaultCertificate](/powershell/module/az.keyvault/get-azkeyvaultcertificate) cmdlet
```azurepowershell Get-AzKeyVaultCertificate -VaultName "<your-key-vault-name>" -Name "ExampleCertificate"
In this tutorial, you created a Key Vault and imported a certificate in it. To l
- Read more about [Managing certificate creation in Azure Key Vault](./create-certificate-scenarios.md) - See examples of [Importing Certificates Using REST APIs](/rest/api/keyvault/certificates/import-certificate/import-certificate)-- Review the [Key Vault security overview](../general/security-features.md)
+- Review the [Key Vault security overview](../general/security-features.md)
key-vault Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/network-security.md
# Configure Azure Key Vault firewalls and virtual networks
-This document will cover the different configurations for the Key Vault firewall in detail. To follow the step-by-step instructions on how to configure these settings, follow guide [here](how-to-azure-key-vault-network-security.md)
+This document will cover the different configurations for an Azure Key Vault firewall in detail. To follow the step-by-step instructions on how to configure these settings, see [Configure Azure Key Vault networking settings](how-to-azure-key-vault-network-security.md).
For more information, see [Virtual network service endpoints for Azure Key Vault](overview-vnet-service-endpoints.md). ## Firewall Settings
-This section will cover the different ways that the Azure Key Vault firewall can be configured.
+This section will cover the different ways that an Azure Key Vault firewall can be configured.
### Key Vault Firewall Disabled (Default)
-By default, when you create a new key vault, the Azure Key Vault firewall is disabled. All applications and Azure services can access the key vault and send requests to the key vault. Note, this configuration does not mean that any user will be able to perform operations on your key vault. The key vault still restricts to secrets, keys, and certificates stored in key vault by requiring Azure Active Directory authentication and access policy permissions. To understand key vault authentication in more detail see the key vault authentication fundamentals document [here](./authentication.md). For more information, see [Access Azure Key Vault behind a firewall](./access-behind-firewall.md).
+By default, when you create a new key vault, the Azure Key Vault firewall is disabled. All applications and Azure services can access the key vault and send requests to the key vault. Note, this configuration does not mean that any user will be able to perform operations on your key vault. The key vault still restricts access to secrets, keys, and certificates stored in key vault by requiring Azure Active Directory authentication and access policy permissions. To understand key vault authentication in more detail see [Authentication in Azure Key Vault](authentication.md). For more information, see [Access Azure Key Vault behind a firewall](access-behind-firewall.md).
### Key Vault Firewall Enabled (Trusted Services Only)
-When you enable the Key Vault Firewall, you will be given an option to 'Allow Trusted Microsoft Services to bypass this firewall.' The trusted services list does not cover every single Azure service. For example, Azure DevOps is not on the trusted services list. **This does not imply that services that do not appear on the trusted services list not trusted or insecure.** The trusted services list encompasses services where Microsoft controls all of the code that runs on the service. Since users can write custom code in Azure services such as Azure DevOps, Microsoft does not provide the option to create a blanket approval for the service. Furthermore, just because a service appears on the trusted service list, doesn't mean it is allowed for all scenarios.
+When you enable the Key Vault Firewall, you will be given an option to 'Allow Trusted Microsoft Services to bypass this firewall.' The trusted services list does not cover every single Azure service. For example, Azure DevOps is not on the trusted services list. **This does not imply that services that do not appear on the trusted services list are not trusted or insecure.** The trusted services list encompasses services where Microsoft controls all of the code that runs on the service. Since users can write custom code in Azure services such as Azure DevOps, Microsoft does not provide the option to create a blanket approval for the service. Furthermore, just because a service appears on the trusted services list doesn't mean it is allowed for all scenarios.
-To determine if a service you are trying to use is on the trusted service list, please see the following document [here](./overview-vnet-service-endpoints.md#trusted-services).
+To determine if a service you are trying to use is on the trusted services list, see [Virtual network service endpoints for Azure Key Vault](overview-vnet-service-endpoints.md#trusted-services).
For a how-to guide, follow the instructions for [Portal, Azure CLI and PowerShell](how-to-azure-key-vault-network-security.md). ### Key Vault Firewall Enabled (IPv4 Addresses and Ranges - Static IPs)
-If you would like to authorize a particular service to access key vault through the Key Vault Firewall, you can add it's IP Address to the key vault firewall allow list. This configuration is best for services that use static IP addresses or well-known ranges. There is a limit of 1000 CIDR ranges for this case.
+If you would like to authorize a particular service to access key vault through the Key Vault Firewall, you can add its IP Address to the key vault firewall allowlist. This configuration is best for services that use static IP addresses or well-known ranges. There is a limit of 1000 CIDR ranges for this case.
To allow an IP Address or range of an Azure resource, such as a Web App or Logic App, perform the following steps.
-1. Log in to the Azure portal
-1. Select the resource (specific instance of the service)
-1. Click on the 'Properties' blade under 'Settings'
+1. Log in to the Azure portal.
+1. Select the resource (specific instance of the service).
+1. Click on the 'Properties' blade under 'Settings'.
1. Look for the "IP Address" field.
-1. Copy this value or range and enter it into the key vault firewall allow list.
+1. Copy this value or range and enter it into the key vault firewall allowlist.
To allow an entire Azure service through the Key Vault firewall, use the list of publicly documented data center IP addresses for Azure [here](https://www.microsoft.com/download/details.aspx?id=56519). Find the IP addresses associated with the service you would like in the region you want and add those IP addresses to the key vault firewall using the steps above.
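The allowlist steps above map to a short Azure CLI sequence; the resource names below are placeholders, and the commands are a sketch of the configuration rather than an excerpt from the article.

```azurecli-interactive
# Placeholder names - substitute your own resource group, key vault, and address or CIDR range.
az keyvault network-rule add \
  --resource-group "ContosoResourceGroup" \
  --name "ContosoVault" \
  --ip-address "203.0.113.10"

# Deny traffic that doesn't match a network rule so only allowlisted addresses
# (and any trusted-service bypass you enable) can reach the vault.
az keyvault update \
  --resource-group "ContosoResourceGroup" \
  --name "ContosoVault" \
  --default-action Deny
```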
key-vault Security Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/security-features.md
tags: azure-resource-manager
Previously updated : 04/15/2021 Last updated : 09/25/2022 #Customer intent: As a key vault administrator, I want to learn the options available to secure my vaults
When you create a key vault in a resource group, you manage access by using Azur
There are several predefined roles. If a predefined role doesn't fit your needs, you can define your own role. For more information, see [Azure RBAC: Built-in roles](../../role-based-access-control/built-in-roles.md). > [!IMPORTANT]
-> When using the Access Policy permission model, if a user has `Contributor` permissions to a key vault management plane, the user can grant themselves access to the data plane by setting a Key Vault access policy. You should tightly control who has `Contributor` role access to your key vaults with the Access Policy permission model to ensure that only authorized persons can access and manage your key vaults, keys, secrets, and certificates. It is recommended to use the new **Role Based Access Control (RBAC) permission model** to avoid this issue. With the RBAC permission model, permission management is limited to 'Owner' and 'User Access Administrator' roles, which allows separation of duties between roles for security operations and general administriative operations.
+> When using the Access Policy permission model, if a user has `Contributor` permissions to a key vault management plane, the user can grant themselves access to the data plane by setting a Key Vault access policy. You should tightly control who has `Contributor` role access to your key vaults with the Access Policy permission model to ensure that only authorized persons can access and manage your key vaults, keys, secrets, and certificates. It is recommended to use the new **Role Based Access Control (RBAC) permission model** to avoid this issue. With the RBAC permission model, permission management is limited to 'Owner' and 'User Access Administrator' roles, which allows separation of duties between roles for security operations and general administrative operations.
### Controlling access to Key Vault data
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/quick-create-cli.md
You need to provide following inputs to create a Managed HSM resource:
The following example creates an HSM named **ContosoMHSM**, in the resource group **ContosoResourceGroup**, residing in the **West US 3** location, with **the current signed in user** as the only administrator, with **7 days retention period** for soft-delete. Read more about [Managed HSM soft-delete](soft-delete-overview.md) ```azurecli-interactive
-oid=$(az ad signed-in-user show --query objectId -o tsv)
+oid=$(az ad signed-in-user show --query id -o tsv)
az keyvault create --hsm-name "ContosoMHSM" --resource-group "ContosoResourceGroup" --location "westus3" --administrators $oid --retention-days 7 ```
key-vault Multiline Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/multiline-secrets.md
# Store a multi-line secret in Azure Key Vault
-The [Azure CLI quickstart](quick-create-cli.md) and [Azure PowerShell quickstart](quick-create-powershell.md) demonstrate how to store a single-line secret. You can also use Key Vault to store a multi-line secret, such as a JSON file or RSA private key.
+The [Azure CLI quickstart](quick-create-cli.md) or [Azure PowerShell quickstart](quick-create-powershell.md) demonstrates how to store a single-line secret. You can also use Key Vault to store a multi-line secret, such as a JSON file or RSA private key.
Multi-line secrets cannot be passed to the Azure CLI [az keyvault secret set](/cli/azure/keyvault/secret#az-keyvault-secret-set) command or the Azure PowerShell [Set-AzKeyVaultSecret](/powershell/module/az.keyvault/set-azkeyvaultsecret) cmdlet through the command line. Instead, you must first store the multi-line secret as a text file.
multi-line
secret ```
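+
+For example, assuming the example secret above is saved as `secretfile.txt`, one minimal way to create the file from a bash shell (such as Azure Cloud Shell) is:
+
+```azurecli-interactive
+printf 'This is\nmy multi-line\nsecret\n' > secretfile.txt
+```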
+## Set the secret using Azure CLI
+You can then pass this file to the Azure CLI [az keyvault secret set](/cli/azure/keyvault/secret#az-keyvault-secret-set) command using the `--file` parameter.
+
+```azurecli-interactive
+az keyvault secret set --vault-name "<your-unique-keyvault-name>" --name "MultilineSecret" --file "secretfile.txt"
+```
+You can then view the stored secret using the Azure CLI [az keyvault secret show](/cli/azure/keyvault/secret#az-keyvault-secret-show) command.
+
+```azurecli-interactive
+az keyvault secret show --name "MultilineSecret" --vault-name "<your-unique-keyvault-name>" --query "value"
+```
+
+The secret will be returned with newlines embedded:
+
+```bash
+"This is\nmy multi-line\nsecret"
+```
+
+## Set the secret using Azure PowerShell
With Azure PowerShell, you must first read in the file using the [Get-Content](/powershell/module/microsoft.powershell.management/get-content) cmdlet, then convert it to a secure string using [ConvertTo-SecureString](/powershell/module/microsoft.powershell.security/convertto-securestring).
Lastly, you store the secret using the [Set-AzKeyVaultSecret](/powershell/module
$secret = Set-AzKeyVaultSecret -VaultName "<your-unique-keyvault-name>" -Name "MultilineSecret" -SecretValue $SecureSecret ```
-In either case, you can then view the stored secret using the Azure CLI [az keyvault secret show](/cli/azure/keyvault/secret#az-keyvault-secret-show) command or the Azure PowerShell [Get-AzKeyVaultSecret](/powershell/module/az.keyvault/get-azkeyvaultsecret) cmdlet.
+You can then view the stored secret using the Azure CLI [az keyvault secret show](/cli/azure/keyvault/secret#az-keyvault-secret-show) command or the Azure PowerShell [Get-AzKeyVaultSecret](/powershell/module/az.keyvault/get-azkeyvaultsecret) cmdlet.
```azurecli-interactive az keyvault secret show --name "MultilineSecret" --vault-name "<your-unique-keyvault-name>" --query "value"
logic-apps Export From Ise To Standard Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/export-from-ise-to-standard-logic-app.md
+
+ Title: Export workflows from ISE to Standard
+description: Export logic app workflows from an integration service environment (ISE) to a Standard logic app using Visual Studio Code.
+
+ms.suite: integration
++ Last updated : 09/14/2022
+#Customer intent: As a developer, I want to export one or more ISE workflows to a Standard workflow.
++
+# Export ISE workflows to a Standard logic app (Preview)
+
+> [!NOTE]
+>
+> This capability is in preview and is subject to the
+> [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Standard logic app workflows, which run in single-tenant Azure Logic Apps, offer many new and improved capabilities. For example, you get compute isolation, virtual network integration, and private endpoints along with App Service Environment hosting, local development and debugging using Visual Studio Code, low latency with stateless workflows, and more.
+
+If you want the benefits from Standard workflows, but your workflows run in an integration service environment (ISE), you can now replace your ISE with single-tenant Azure Logic Apps. This switch makes sense for most scenarios that require some ISE capabilities, such as isolation and network integration, and can help lower operating costs.
+
+You can now export logic app workflows from an ISE to a Standard logic app. Using Visual Studio Code and the latest Azure Logic Apps (Standard) extension, you export your logic apps as stateful workflows to a Standard logic app project. You can then locally update, test, and debug your workflows to get them ready for redeployment. When you're ready, you can deploy either directly from Visual Studio Code or through your own DevOps process.
+
+> [!NOTE]
+>
+> The export capability doesn't migrate your workflows. Instead, this tool replicates artifacts,
+> such as workflow definitions, connections, integration account artifacts, and others. Your source
+> logic app resources, workflows, trigger history, run history, and other data stay intact.
+>
+> You control the export process and your migration journey. You can test and validate your
+> exported workflows to your satisfaction with the destination environment. You choose when
+> to disable or delete your source logic apps.
+
+This article provides information about the export process and shows how to export your logic app workflows from an ISE to a local Standard logic app project in Visual Studio Code.
+
+## Known issues and limitations
+
+- To run the export tool, you must be on the same network as your ISE. So, if your ISE is internal, you have to run the export tool from a Visual Studio Code instance that can access your ISE through the internal network. Otherwise, you can't download the exported package or files.
+
+- The following logic apps and scenarios are currently ineligible for export:
+
+ - Consumption workflows in multi-tenant Azure Logic Apps
+ - Logic apps that use custom connectors
+ - Logic apps that use the Azure API Management connector
+ - Logic apps that use the Azure Functions connector
+
+- The export tool doesn't export any infrastructure information, such as virtual network dependencies or integration account settings.
+
+- The export tool can export logic app workflows with triggers that have concurrency settings. However, single-tenant Azure Logic Apps ignores these settings.
+
+- For now, connectors with the **ISE** label deploy as their *managed* versions, which appear in the designer under the **Azure** tab. The export tool will have the capability to export **ISE** connectors as built-in, service provider connectors when the latter gain parity with their ISE versions. The export tool automatically makes the conversion when an **ISE** connector is available to export as a built-in, service provider connector.
+
+- Currently, connection credentials aren't cloned from source logic app workflows. Before your logic app workflows can run, you'll have to reauthenticate these connections after export.
+
+## Exportable operation types
+
+| Operation | JSON type |
+|--|--|
+| Trigger | **Built-in**: `Http`, `HttpWebhook`, `Recurrence`, `manual` (Request) <br><br>**Managed**: `ApiConnection`, `ApiConnectionNotification`, `ApiConnectionWebhook` |
+| Action | **Built-in**: `AppendToArrayVariable`, `AppendToStringVariable`, `Compose`, `DecrementVariable`, `Foreach`, `Http`, `HttpWebhook`, `If`, `IncrementVariable`, `InitializeVariable`, `JavaScriptCode`, `Join`, `ParseJson`, `Response`, `Scope`, `Select`, `SetVariable`, `Switch`, `Table`, `Terminate`, `Until`, `Wait` <br><br>**Managed**: `ApiConnection`, `ApiConnectionWebhook` |
+
+## Prerequisites
+
+- An existing ISE with the logic app workflows that you want to export.
+
+- To include and deploy managed connections in your workflows, you'll need an existing Azure resource group for deploying these connections. This option is recommended only for non-production environments.
+
+- Review and meet the requirements for [how to set up Visual Studio Code with the Azure Logic Apps (Standard) extension](create-single-tenant-workflows-visual-studio-code.md#prerequisites).
+
+## Group logic apps for export
+
+With the Azure Logic Apps (Standard) extension, you can combine multiple ISE-hosted logic app workflows into a single Standard logic app project. In single-tenant Azure Logic Apps, one Standard logic app resource can have multiple workflows. With this approach, you can pre-validate your workflows so that you don't miss any dependencies when you select logic apps for export.
+
+Consider the following recommendations when you select logic apps for export:
+
+- Group logic apps where workflows share the same resources, such as integration account artifacts, maps, and schemas, or use resources through a chain of processes.
+
+- For the organization and number of workflows per logic app, review [Best practices and recommendations](create-single-tenant-workflows-azure-portal.md#best-practices-and-recommendations).
+
+## Export ISE workflows to a local project
+
+### Select logic apps for export
+
+1. In Visual Studio Code, sign in to Azure, if you haven't already.
+
+1. In the left navigation bar, select **Azure** to open the **Azure** window (Shift + Alt + A), and expand the **Logic Apps (Standard)** extension view.
+
+ ![Screenshot showing Visual Studio Code with 'Azure' view selected.](media/export-from-ise-to-standard-logic-app/select-azure-view.png)
+
+1. On the extension toolbar, select **Export Logic App...**.
+
+ ![Screenshot showing Visual Studio Code and **Logic Apps (Standard)** extension toolbar with 'Export Logic App' selected.](media/export-from-ise-to-standard-logic-app/select-export-logic-app.png)
+
+1. After the **Export** tab opens, select your Azure subscription and ISE instance, and then select **Next**.
+
+ ![Screenshot showing 'Export' tab and 'Select logic app instance' section with Azure subscription and ISE instance selected.](media/export-from-ise-to-standard-logic-app/select-subscription-ise.png)
+
+1. Select the logic apps to export. Each selected logic app appears on the **Selected logic apps** list to the side. When you're done, select **Next**.
+
+ ![Screenshot showing 'Select logic apps to export' section with logic apps selected for export.](media/export-from-ise-to-standard-logic-app/select-logic-apps.png)
+
+ > [!TIP]
+ >
+ > You can also search for logic apps and filter on resource group.
+
+ The export tool starts to validate whether your selected logic apps are eligible for export.
+
+### Review export validation results
+
+1. After export validation completes, review the results by expanding the entry for each logic app.
+
+ - Logic apps that have errors are ineligible for export. You must remove these logic apps from the export list until you fix them at the source. To remove a logic app from the list, select **Back**.
+
+ For example, **SourceLogicApp2** has an error and can't be exported until fixed:
+
+ ![Screenshot showing 'Review export status' section and validation status for logic app workflow with error.](media/export-from-ise-to-standard-logic-app/select-back-button-remove-app.png)
+
+ - Logic apps that pass validation with or without warnings are still eligible for export. To continue, select **Export** if all apps validate successfully, or select **Export with warnings** if apps have warnings.
+
+ For example, **SourceLogicApp3** has a warning, but you can still continue to export:
+
+ ![Screenshot showing 'Review export status' section and validation status for logic app workflow with warning.](media/export-from-ise-to-standard-logic-app/select-export-with-warnings.png)
+
+ The following table provides more information about each validation icon and status:
+
+ | Validation icon | Validation status |
+ |--|-|
+ | ![Success icon](media/export-from-ise-to-standard-logic-app/success-icon.png) | Item passed validation, so export can continue without problems to resolve. |
+ | ![Failed icon](media/export-from-ise-to-standard-logic-app/failed-icon.png) | Item failed validation, so export can't continue. <br><br>The validation entry for the failed item automatically appears expanded and provides information about the validation failure. |
+ | ![Warning icon](media/export-from-ise-to-standard-logic-app/warning-icon.png) | Item passed validation with a warning, but export can continue with required post-export resolution. <br><br>The validation entry for the item with a warning automatically appears expanded and provides information about the warning and required post-export remediation. |
+
+1. After the **Finish export** section appears, for **Export location**, browse and select a local folder for your new Standard logic app project.
+
+ ![Screenshot showing 'Finish export' section and 'Export location' property with selected local export project folder.](media/export-from-ise-to-standard-logic-app/select-local-folder.png)
+
+1. If your workflow has *managed* connections that you want to deploy, which is only recommended for non-production environments, select **Deploy managed connections**, which shows existing resource groups in your Azure subscription. Select the resource group where you want to deploy the managed connections.
+
+ ![Screenshot showing 'Finish export' section with selected local export folder, 'Deploy managed connections' selected, and target resource group selected.](media/export-from-ise-to-standard-logic-app/select-deploy-managed-connections-resource-group.png)
+
+1. Under **After export steps**, review any required post-export steps, for example:
+
+ ![Screenshot showing **After export steps** section and required post-export steps, if any.](media/export-from-ise-to-standard-logic-app/review-post-export-steps.png)
+
+1. Based on your scenario, select **Export and finish** or **Export with warnings and finish**.
+
+ The export tool downloads your project to your selected folder location, expands the project in Visual Studio Code, and deploys any managed connections, if you selected that option.
+
+    ![Screenshot showing the 'Export status' section with export progress.](media/export-from-ise-to-standard-logic-app/export-status.png)
+
+1. After this process completes, Visual Studio Code opens a new workspace. You can now safely close the export window.
+
+1. From your Standard logic app project, open and review the README.md file for the required post-export steps.
+
+ ![Screenshot showing a new Standard logic app project with README.md file opened.](media/export-from-ise-to-standard-logic-app/open-readme.png)
+
+## Post-export steps
+
+### Remediation steps
+
+Some exported logic app workflows require post-export remediation steps to run on the Standard platform.
+
+1. From your Standard logic app project, open the README.md file, and review the remediation steps for your exported workflows. The export tool generates the README.md file, which contains all the required post-export steps.
+
+1. Before you make any changes to your source logic app workflow, make sure to test your new Standard logic app resource and workflows.
+
+### Integration account actions and settings
+
+If you export actions that depend on an integration account, you have to manually set up your Standard logic app with a reference link to the integration account that contains the required artifacts. For more information, review [Link integration account to a Standard logic app](logic-apps-enterprise-integration-create-integration-account.md#link-account).
+
+## Project folder structure
+
+After the export process finishes, your Standard logic app project contains new folders and files alongside most others in a [typical Standard logic app project](create-single-tenant-workflows-visual-studio-code.md).
+
+The following table describes these new folders and files added by the export process:
+
+| Folder | File | Description |
+|--|--|--|
+| .development\\deployment | LogicAppStandardConnections.parameters.json | Azure Resource Manager template parameters file for deploying managed connectors |
+| | LogicAppStandardConnections.template.json | Azure Resource Manager template definition for deploying managed connectors |
+| | LogicAppStandardInfrastructure.parameters.json | Azure Resource Manager template parameters file for deploying Standard logic app resource |
+| | LogicAppStandardInfrastructure.template.json | Azure Resource Manager template definition for deploying Standard logic app resource |
+| .logs\\export | exportReport.json | Export report summary raw file, which includes all the steps required for post-export remediation |
+| | exportValidation.json | Validation report raw file, which includes the validation results for each exported logic app. |
+| | README.md | Markdown file with export results summary, including the created logic apps and all the required next steps. |
+
+## Next steps
+
+- [Run, test, and debug locally](create-single-tenant-workflows-visual-studio-code.md#run-test-and-debug-locally)
machine-learning How To Auto Train Nlp Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-nlp-models.md
Task |AutoML job syntax| Description
-|-|-
Multi-class text classification | CLI v2: `text_classification` <br> SDK v2 (preview): `text_classification()`| There are multiple possible classes and each sample can be classified as exactly one class. The task is to predict the correct class for each sample. <br> <br> For example, classifying a movie script as "Comedy" or "Romantic".
Multi-label text classification | CLI v2: `text_classification_multilabel` <br> SDK v2 (preview): `text_classification_multilabel()`| There are multiple possible classes and each sample can be assigned any number of classes. The task is to predict all the classes for each sample. <br> <br> For example, classifying a movie script as "Comedy", "Romantic", or "Comedy and Romantic".
-Named Entity Recognition (NER)| CLI v2:`text_ner` <br> SDK v2 (preview): `text_ner()`| There are multiple possible tags for tokens in sequences. The task is to predict the tags for all the tokens for each sequence. <br> <br> For example, extracting domain-specific entities from unstructured text, such as contracts or financial documents
+Named Entity Recognition (NER)| CLI v2:`text_ner` <br> SDK v2 (preview): `text_ner()`| There are multiple possible tags for tokens in sequences. The task is to predict the tags for all the tokens for each sequence. <br> <br> For example, extracting domain-specific entities from unstructured text, such as contracts or financial documents.
+
+## Thresholding
+
+Thresholding is a multi-label feature that lets you pick the probability threshold above which a predicted probability results in a positive label. Lower values allow more labels, which is better when you care more about recall, but this option could lead to more false positives. Higher values allow fewer labels, which is better when you care more about precision, but this option could lead to more false negatives.
## Preparing data
Automated ML's NLP capability is triggered through task specific `automl` type j
However, there are key differences: * You can ignore `primary_metric`, as it is only for reporting purposes. Currently, automated ML only trains one model per run for NLP and there is no model selection. * The `label_column_name` parameter is only required for multi-class and multi-label text classification tasks.
-* If the majority of the samples in your dataset contain more than 128 words, it's considered long range. By default, automated ML considers all samples long range text. To disable this feature, include the `enable_long_range_text=False` parameter in your `AutoMLConfig`.
- * If you enable long range text, then a GPU with higher memory is required such as, [NCv3](../virtual-machines/ncv3-series.md) series or [ND](../virtual-machines/nd-series.md) series.
- * The `enable_long_range_text` parameter is only available for multi-class classification tasks.
+* If more than 10% of the samples in your dataset contain more than 128 tokens, your dataset is considered long range.
+    * To use the long range text feature, use an NC6 or better GPU SKU, such as the [NCv3](../virtual-machines/ncv3-series.md) series or [ND](../virtual-machines/nd-series.md) series.
# [Azure CLI](#tab/cli)
max_concurrent_iterations = number_of_vms
enable_distributed_dnn_training = True ```
+In AutoML NLP, only hold-out validation is supported, and it requires a validation dataset.
+ ## Submit the AutoML job
machine-learning How To Deploy With Triton https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-with-triton.md
ms.devlang: azurecli
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
+> [!IMPORTANT]
+> SDK v2 is currently in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
Learn how to use [NVIDIA Triton Inference Server](https://aka.ms/nvidia-triton-docs) in Azure Machine Learning with [Managed online endpoints](concept-endpoints.md#managed-online-endpoints). Triton is multi-framework, open-source software that is optimized for inference. It supports popular machine learning frameworks like TensorFlow, ONNX Runtime, PyTorch, NVIDIA TensorRT, and more. It can be used for your CPU or GPU workloads.
-In this article, you will learn how to deploy Triton and a model to a managed online endpoint. Information is provided on using both the CLI (command line) and Azure Machine Learning studio.
+In this article, you will learn how to deploy Triton and a model to a managed online endpoint. Information is provided on using the CLI (command line), Python SDK v2, and Azure Machine Learning studio.
> [!NOTE] > * [NVIDIA Triton Inference Server](https://aka.ms/nvidia-triton-docs) is an open-source third-party software that is integrated in Azure Machine Learning.
In this article, you will learn how to deploy Triton and a model to a managed on
## Prerequisites
+# [Azure CLI](#tab/azure-cli)
+ [!INCLUDE [basic prereqs](../../includes/machine-learning-cli-prereqs.md)]
-* A working Python 3.8 (or higher) environment.
+* A working Python 3.8 (or higher) environment.
+
+* You must have additional Python packages installed for scoring and may install them with the code below. They include:
+ * Numpy - An array and numerical computing library
+ * [Triton Inference Server Client](https://github.com/triton-inference-server/client) - Facilitates requests to the Triton Inference Server
+ * Pillow - A library for image operations
+ * Gevent - A networking library used when connecting to the Triton Server
+
+```azurecli
+pip install numpy
+pip install tritonclient[http]
+pip install pillow
+pip install gevent
+```
* Access to NCv3-series VMs for your Azure subscription. > [!IMPORTANT] > You may need to request a quota increase for your subscription before you can use this series of VMs. For more information, see [NCv3-series](../virtual-machines/ncv3-series.md). - NVIDIA Triton Inference Server requires a specific model repository structure, where there is a directory for each model and subdirectories for the model version. The contents of each model version subdirectory is determined by the type of the model and the requirements of the backend that supports the model. To see all the model repository structure [https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_repository.md#model-files](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_repository.md#model-files) The information in this document is based on using a model stored in ONNX format, so the directory structure of the model repository is `<model-repository>/<model-name>/1/model.onnx`. Specifically, this model performs image identification.
-## Deploy using CLI (v2)
+
+# [Python](#tab/python)
+++
+* A working Python 3.8 (or higher) environment.
+
+* You must have additional Python packages installed for scoring and may install them with the code below. They include:
+ * Numpy - An array and numerical computing library
+ * [Triton Inference Server Client](https://github.com/triton-inference-server/client) - Facilitates requests to the Triton Inference Server
+ * Pillow - A library for image operations
+ * Gevent - A networking library used when connecting to the Triton Server
+
+ ```azurecli
+ pip install numpy
+ pip install tritonclient[http]
+ pip install pillow
+ pip install gevent
+ ```
+
+* Access to NCv3-series VMs for your Azure subscription.
+
+ > [!IMPORTANT]
+ > You may need to request a quota increase for your subscription before you can use this series of VMs. For more information, see [NCv3-series](../virtual-machines/ncv3-series.md).
+
+The information in this article is based on the [Deploy a model to online endpoints using Triton](https://github.com/Azure/azureml-examples/blob/main/sdk/endpoints/online/triton/single-model/online-endpoints-triton.ipynb) notebook contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste files, clone the repo and then change directories to the `sdk/endpoints/online/triton/single-model` directory in the repo:
+
+```azurecli
+git clone https://github.com/Azure/azureml-examples --depth 1
+cd azureml-examples
+cd sdk/endpoints/online/triton/single-model
+```
+
+# [Studio](#tab/azure-studio)
+
+* An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+
+* An Azure Machine Learning workspace. If you don't have one, use the steps in [Manage Azure Machine Learning workspaces in the portal or with the Python SDK](how-to-manage-workspace.md) to create one.
+
+
+
+## Define the deployment configuration
+
+# [Azure CLI](#tab/azure-cli)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-This section shows how you can deploy Triton to managed online endpoint using the Azure CLI with the Machine Learning extension (v2).
+This section shows how you can deploy to a managed online endpoint using the Azure CLI with the Machine Learning extension (v2).
> [!IMPORTANT] > For Triton no-code-deployment, **[testing via local endpoints](how-to-deploy-managed-online-endpoints.md#deploy-and-debug-locally-by-using-local-endpoints)** is currently not supported.
This section shows how you can deploy Triton to managed online endpoint using th
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-triton-managed-online-endpoint.sh" ID="set_endpoint_name":::
-1. Install Python requirements using the following commands:
-
- ```azurecli
- pip install numpy
- pip install tritonclient[http]
- pip install pillow
- pip install gevent
- ```
- 1. Create a YAML configuration file for your endpoint. The following example configures the name and authentication mode of the endpoint. The one used in the following commands is located at `/cli/endpoints/online/triton/single-model/create-managed-endpoint.yml` in the azureml-examples repo you cloned earlier: __create-managed-endpoint.yaml__ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/triton/single-model/create-managed-endpoint.yaml":::
-1. To create a new endpoint using the YAML configuration, use the following command:
-
- :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-triton-managed-online-endpoint.sh" ID="create_endpoint":::
-
-1. Create a YAML configuration file for the deployment. The following example configures a deployment named __blue__ to the endpoint created in the previous step. The one used in the following commands is located at `/cli/endpoints/online/triton/single-model/create-managed-deployment.yml` in the azureml-examples repo you cloned earlier:
+1. Create a YAML configuration file for the deployment. The following example configures a deployment named __blue__ to the endpoint defined in the previous step. The one used in the following commands is located at `/cli/endpoints/online/triton/single-model/create-managed-deployment.yml` in the azureml-examples repo you cloned earlier:
> [!IMPORTANT] > For Triton no-code-deployment (NCD) to work, setting **`type`** to **`triton_model`** is required, `type: triton_model`. For more information, see [CLI (v2) model YAML schema](reference-yaml-model.md).
This section shows how you can deploy Triton to managed online endpoint using th
:::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/triton/single-model/create-managed-deployment.yaml":::
-1. To create the deployment using the YAML configuration, use the following command:
-
- :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-triton-managed-online-endpoint.sh" ID="create_deployment":::
-
-### Invoke your endpoint
+# [Python](#tab/python)
-Once your deployment completes, use the following command to make a scoring request to the deployed endpoint.
-> [!TIP]
-> The file `/cli/endpoints/online/triton/single-model/triton_densenet_scoring.py` in the azureml-examples repo is used for scoring. The image passed to the endpoint needs pre-processing to meet the size, type, and format requirements, and post-processing to show the predicted label. The `triton_densenet_scoring.py` uses the `tritonclient.http` library to communicate with the Triton inference server.
+This section shows how you can define a Triton deployment to deploy to a managed online endpoint using the Azure Machine Learning Python SDK (v2).
-1. To get the endpoint scoring uri, use the following command:
-
- :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-triton-managed-online-endpoint.sh" ID="get_scoring_uri":::
+> [!IMPORTANT]
+> For Triton no-code-deployment, **[testing via local endpoints](how-to-deploy-managed-online-endpoints.md#deploy-and-debug-locally-by-using-local-endpoints)** is currently not supported.
-1. To get an authentication token, use the following command:
- :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-triton-managed-online-endpoint.sh" ID="get_token":::
+1. To connect to a workspace, we need identifier parameters: a subscription ID, resource group, and workspace name.
-1. To score data with the endpoint, use the following command. It submits the image of a peacock (https://aka.ms/peacock-pic) to the endpoint:
+ ```python
+ subscription_id = "<SUBSCRIPTION_ID>"
+ resource_group = "<RESOURCE_GROUP>"
+ workspace_name = "<AML_WORKSPACE_NAME>"
+ ```
- :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-triton-managed-online-endpoint.sh" ID="check_scoring_of_model":::
+1. Use the following command to set the name of the endpoint that will be created. In this example, a random name is created for the endpoint:
- The response from the script is similar to the following text:
+ ```python
+ import random
- ```
- Is server ready - True
- Is model ready - True
- /azureml-examples/cli/endpoints/online/triton/single-model/densenet_labels.txt
- 84 : PEACOCK
+ endpoint_name = f"endpoint-{random.randint(0, 10000)}"
```
-### Delete your endpoint and model
+1. We use these details above in the `MLClient` from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. Check the [configuration notebook](https://github.com/Azure/azureml-examples/tree/main/sdk/jobs/configuration.ipynb) for more details on how to configure credentials and connect to a workspace.
-Once you're done with the endpoint, use the following command to delete it:
+ ```python
+ from azure.ai.ml import MLClient
+ from azure.identity import DefaultAzureCredential
+ ml_client = MLClient(
+ DefaultAzureCredential(),
+ subscription_id,
+ resource_group,
+ workspace_name,
+ )
+ ```
-Use the following command to delete your model:
+1. Create a `ManagedOnlineEndpoint` object to configure the endpoint. The following example configures the name and authentication mode of the endpoint.
-```azurecli
-az ml model delete --name $MODEL_NAME --version $MODEL_VERSION
-```
+ ```python
+ from azure.ai.ml.entities import ManagedOnlineEndpoint
-## Deploy using Azure Machine Learning studio
+ endpoint = ManagedOnlineEndpoint(name=endpoint_name, auth_mode="key")
+ ```
-This section shows how you can deploy Triton to managed online endpoint using [Azure Machine Learning studio](https://ml.azure.com).
+1. Create a `ManagedOnlineDeployment` object to configure the deployment. The following example configures a deployment named __blue__ to the endpoint defined in the previous step and defines a local model inline.
+
+ ```python
+ from azure.ai.ml.entities import ManagedOnlineDeployment, Model
+
+ model_name = "densenet-onnx-model"
+ model_version = 1
+
+ deployment = ManagedOnlineDeployment(
+ name="blue",
+ endpoint_name=endpoint_name,
+ model=Model(
+ name=model_name,
+ version=model_version,
+ path="./models",
+ type="triton_model"
+ ),
+ instance_type="Standard_NC6s_v3",
+ instance_count=1,
+ )
+ ```
+
+# [Studio](#tab/azure-studio)
+
+This section shows how you can define a Triton deployment on a managed online endpoint using [Azure Machine Learning studio](https://ml.azure.com).
1. Register your model in Triton format using the following YAML and CLI command. The YAML uses a densenet-onnx model from [https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/triton/single-model](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/triton/single-model)
This section shows how you can deploy Triton to managed online endpoint using [A
:::image type="content" source="media/how-to-deploy-with-triton/triton-model-format.png" lightbox="media/how-to-deploy-with-triton/triton-model-format.png" alt-text="Screenshot showing Triton model format on Models page."::: - 1. From [studio](https://ml.azure.com), select your workspace and then use either the __endpoints__ or __models__ page to create the endpoint deployment: # [Endpoints page](#tab/endpoint)
This section shows how you can deploy Triton to managed online endpoint using [A
:::image type="content" source="media/how-to-deploy-with-triton/ncd-triton.png" lightbox="media/how-to-deploy-with-triton/ncd-triton.png" alt-text="Screenshot showing no code and environment needed for Triton models":::
- 1. Complete the wizard to deploy the model to the endpoint.
-
- :::image type="content" source="media/how-to-deploy-with-triton/review-screen-triton.png" lightbox="media/how-to-deploy-with-triton/review-screen-triton.png" alt-text="Screenshot showing NCD review screen":::
- # [Models page](#tab/models) 1. Select the Triton model, and then select __Deploy__. When prompted, select __Deploy to real-time endpoint__. :::image type="content" source="media/how-to-deploy-with-triton/deploy-from-models-page.png" lightbox="media/how-to-deploy-with-triton/deploy-from-models-page.png" alt-text="Screenshot showing how to deploy model from Models UI.":::
- 1. Complete the wizard to deploy the model to the endpoint.
- +++
+## Deploy to Azure
+
+# [Azure CLI](#tab/azure-cli)
++
+1. To create a new endpoint using the YAML configuration, use the following command:
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-triton-managed-online-endpoint.sh" ID="create_endpoint":::
++
+1. To create the deployment using the YAML configuration, use the following command:
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-triton-managed-online-endpoint.sh" ID="create_deployment":::
++
+# [Python](#tab/python)
++
+1. To create a new endpoint using the `ManagedOnlineEndpoint` object, use the following command:
+
+ ```python
+ endpoint = ml_client.online_endpoints.begin_create_or_update(endpoint)
+ ```
+
+1. To create the deployment using the `ManagedOnlineDeployment` object, use the following command:
+
+ ```python
+ ml_client.online_deployments.begin_create_or_update(deployment)
+ ```
+
+1. Once the deployment completes, its traffic value will be set to `0%`. Update the traffic to 100%.
+
+ ```python
+ endpoint.traffic = {"blue": 100}
+ ml_client.online_endpoints.begin_create_or_update(endpoint)
+ ```
++
+# [Studio](#tab/azure-studio)
+1. Complete the wizard to deploy to the endpoint.
+
+ :::image type="content" source="media/how-to-deploy-with-triton/review-screen-triton.png" lightbox="media/how-to-deploy-with-triton/review-screen-triton.png" alt-text="Screenshot showing NCD review screen":::
+
+1. Once the deployment completes, its traffic value will be set to `0%`. Update the traffic to 100% from the Endpoint page by clicking `Update Traffic` on the second menu row.
+++
+## Test the endpoint
+
+# [Azure CLI](#tab/azure-cli)
++
+Once your deployment completes, use the following command to make a scoring request to the deployed endpoint.
+
+> [!TIP]
+> The file `/cli/endpoints/online/triton/single-model/triton_densenet_scoring.py` in the azureml-examples repo is used for scoring. The image passed to the endpoint needs pre-processing to meet the size, type, and format requirements, and post-processing to show the predicted label. The `triton_densenet_scoring.py` uses the `tritonclient.http` library to communicate with the Triton inference server.
+
+1. To get the endpoint scoring uri, use the following command:
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-triton-managed-online-endpoint.sh" ID="get_scoring_uri":::
+
+1. To get an authentication key, use the following command:
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-triton-managed-online-endpoint.sh" ID="get_token":::
+
+1. To score data with the endpoint, use the following command. It submits the image of a peacock (https://aka.ms/peacock-pic) to the endpoint:
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-triton-managed-online-endpoint.sh" ID="check_scoring_of_model":::
+
+ The response from the script is similar to the following text:
+
+ ```
+ Is server ready - True
+ Is model ready - True
+ /azureml-examples/cli/endpoints/online/triton/single-model/densenet_labels.txt
+ 84 : PEACOCK
+ ```
+
+# [Python](#tab/python)
++
+1. To get the endpoint scoring uri, use the following command:
+
+ ```python
+ endpoint = ml_client.online_endpoints.get(endpoint_name)
+ scoring_uri = endpoint.scoring_uri
+ ```
+
+1. To get an authentication key, use the following command:
+
+    ```python
+    keys = ml_client.online_endpoints.list_keys(endpoint_name)
+    auth_key = keys.primary_key
+    ```
+
+1. The following scoring code uses the [Triton Inference Server Client](https://github.com/triton-inference-server/client) to submit the image of a peacock to the endpoint. This script is available in the companion notebook to this example - [Deploy a model to online endpoints using Triton](https://github.com/Azure/azureml-examples/blob/main/sdk/endpoints/online/triton/single-model/online-endpoints-triton.ipynb).
+
+ ```python
+ # Test the blue deployment with some sample data
+ import requests
+ import gevent.ssl
+ import numpy as np
+ import tritonclient.http as tritonhttpclient
+ from pathlib import Path
+ import prepost
+
+ img_uri = "http://aka.ms/peacock-pic"
+
+ # We remove the scheme from the url
+ url = scoring_uri[8:]
+
+ # Initialize client handler
+ triton_client = tritonhttpclient.InferenceServerClient(
+ url=url,
+ ssl=True,
+ ssl_context_factory=gevent.ssl._create_default_https_context,
+ )
+
+ # Create headers
+ headers = {}
+ headers["Authorization"] = f"Bearer {auth_key}"
+
+ # Check status of triton server
+ health_ctx = triton_client.is_server_ready(headers=headers)
+ print("Is server ready - {}".format(health_ctx))
+
+ # Check status of model
+ model_name = "model_1"
+ status_ctx = triton_client.is_model_ready(model_name, "1", headers)
+ print("Is model ready - {}".format(status_ctx))
+
+ if Path(img_uri).exists():
+ img_content = open(img_uri, "rb").read()
+ else:
+ agent = f"Python Requests/{requests.__version__} (https://github.com/Azure/azureml-examples)"
+ img_content = requests.get(img_uri, headers={"User-Agent": agent}).content
+
+ img_data = prepost.preprocess(img_content)
+
+ # Populate inputs and outputs
+ input = tritonhttpclient.InferInput("data_0", img_data.shape, "FP32")
+ input.set_data_from_numpy(img_data)
+ inputs = [input]
+ output = tritonhttpclient.InferRequestedOutput("fc6_1")
+ outputs = [output]
+
+ result = triton_client.infer(model_name, inputs, outputs=outputs, headers=headers)
+ max_label = np.argmax(result.as_numpy("fc6_1"))
+ label_name = prepost.postprocess(max_label)
+ print(label_name)
+ ```
+
+1. The response from the script is similar to the following text:
+
+ ```
+ Is server ready - True
+ Is model ready - True
+ /azureml-examples/sdk/endpoints/online/triton/single-model/densenet_labels.txt
+ 84 : PEACOCK
+ ```
+
+# [Studio](#tab/azure-studio)
+
+Azure Machine Learning Studio provides the ability to test endpoints with JSON. However, serialized JSON is not currently included for this example.
+
+To test an endpoint using Azure Machine Learning Studio, click `Test` from the Endpoint page.
+
+
+
+### Delete the endpoint and model
+# [Azure CLI](#tab/azure-cli)
++
+1. Once you're done with the endpoint, use the following command to delete it:
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-triton-managed-online-endpoint.sh" ID="delete_endpoint":::
+
+1. Use the following command to archive your model:
+
+ ```azurecli
+ az ml model archive --name $MODEL_NAME --version $MODEL_VERSION
+ ```
+
+# [Python](#tab/python)
++
+1. Delete the endpoint. Deleting the endpoint also deletes any child deployments; however, it will not archive associated Environments or Models.
+
+ ```python
+ ml_client.online_endpoints.begin_delete(name=endpoint_name)
+ ```
+
+1. Archive the model with the following code.
+
+ ```python
+ ml_client.models.archive(name=model_name, version=model_version)
+ ```
+
+# [Studio](#tab/azure-studio)
+
+1. From the endpoint's page, click `Delete` in the second row below the endpoint's name.
+
+1. From the model's page, click `Delete` in the first row below the model's name.
++ ## Next steps
machine-learning Reference Managed Online Endpoints Vm Sku List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-managed-online-endpoints-vm-sku-list.md
Last updated 06/02/2022
This table shows the VM SKUs that are supported for Azure Machine Learning managed online endpoints.
-* The `instance_type` attribute used for deployment must be specified in the form "Standard_F4s_v2". The table below lists instance names, for example, F2s v2. These names should be put in the specified form (`Standard_{name}`) for Azure CLI or Azure Resource Manager templates (ARM templates) requests to create and update deployments.
+* The full SKU names listed in the table can be used in Azure CLI or Azure Resource Manager template (ARM template) requests to create and update deployments.
-* For more information on configuration details such as CPU and RAM, see [Azure Machine Learning Pricing](https://azure.microsoft.com/pricing/details/machine-learning/).
+* For more information on configuration details such as CPU and RAM, see [Azure Machine Learning Pricing](https://azure.microsoft.com/pricing/details/machine-learning/) and [VM sizes](../virtual-machines/sizes.md).
> [!IMPORTANT] > If you use a Windows-based image for your deployment, we recommend using a VM SKU that provides a minimum of 4 cores. | Size | General Purpose | Compute Optimized | Memory Optimized | GPU | | | | | | |
-| V.Small | DS1 v2 <br/> DS2 v2 | F2s v2 | E2s v3 | NC4as_T4_v3 |
-| Small | DS3 v2 | F4s v2 | E4s v3 | NC6s v2 <br/> NC6s v3 <br/> NC8as_T4_v3 |
-| Medium | DS4 v2 | F8s v2 | E8s v3 | NC12s v2 <br/> NC12s v3 <br/> NC16as_T4_v3 |
-| Large | DS5 v2 | F16s v2 | E16s v3 | NC24s v2 <br/> NC24s v3 <br/> NC64as_T4_v3 |
-| X-Large| - | F32s v2 <br/> F48s v2 <br/> F64s v2 <br/> F72s v2 | E32s v3 <br/> E48s v3 <br/> E64s v3 | - |
+| V.Small | Standard_DS1_v2 <br/> Standard_DS2_v2 | Standard_F2s_v2 | Standard_E2s_v3 | Standard_NC4as_T4_v3 |
+| Small | Standard_DS3_v2 | Standard_F4s_v2 | Standard_E4s_v3 | Standard_NC6s_v2 <br/> Standard_NC6s_v3 <br/> Standard_NC8as_T4_v3 |
+| Medium | Standard_DS4_v2 | Standard_F8s_v2 | Standard_E8s_v3 | Standard_NC12s_v2 <br/> Standard_NC12s_v3 <br/> Standard_NC16as_T4_v3 |
+| Large | Standard_DS5_v2 | Standard_F16s_v2 | Standard_E16s_v3 | Standard_NC24s_v2 <br/> Standard_NC24s_v3 <br/> Standard_NC64as_T4_v3 |
+| X-Large| - | Standard_F32s_v2 <br/> Standard_F48s_v2 <br/> Standard_F64s_v2 <br/> Standard_F72s_v2 | Standard_E32s_v3 <br/> Standard_E48s_v3 <br/> Standard_E64s_v3 | - |
migrate Migrate Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix.md
Create a key vault for VMware agentless migration | To migrate VMware VMs with a
You can create a project in many geographies in the public cloud. - Although you can only create projects in these geographies, you can assess or migrate servers for other target locations.-- The project geography is only used to store the discovered metadata.-- When you create a project, you select a geography. The project and related resources are created in one of the regions in the geography. The region is allocated by the Azure Migrate service.
+- The project geography is only used to store the discovered metadata.
+- When you create a project, you select a geography. The project and related resources are created in one of the regions in the geography. The region is allocated by the Azure Migrate service. Azure Migrate does not move or store customer data outside of the region allocated.
**Geography** | **Metadata storage location** |
There are two versions of the Azure Migrate service:
## Next steps - [Assess VMware VMs](./tutorial-assess-vmware-azure-vm.md) for migration.-- [Assess Hyper-V VMs](tutorial-assess-hyper-v.md) for migration.
+- [Assess Hyper-V VMs](tutorial-assess-hyper-v.md) for migration.
mysql How To Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-upgrade.md
+
+ Title: Azure Database for MySQL - flexible server - major version upgrade
+description: Learn how to upgrade major version for an Azure Database for MySQL - Flexible server.
+++++ Last updated : 9/26/2022++
+# Major version upgrade in Azure Database for MySQL - Flexible Server (Preview)
++
+>[!Note]
+> This article contains references to the term slave, a term that Microsoft no longer uses. When the term is removed from the software, we will remove it from this article.
+
+This article describes how you can upgrade your MySQL major version in-place in Azure Database for MySQL Flexible Server.
+This feature enables you to perform in-place upgrades of your MySQL 5.7 servers to MySQL 8.0 with the click of a button, without any data movement or application connection string changes.
+
+>[!Important]
+> - Major version upgrade for Azure database for MySQL Flexible Server is available in public preview.
+> - Major version upgrade is currently not available for Burstable SKU 5.7 servers.
+> - Duration of downtime will vary based on the size of your database instance and the number of tables on the database.
+> - Upgrading the major MySQL version is irreversible. Your deployment might fail if validation identifies that the server is configured with any features that are [removed](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals) or [deprecated](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-deprecations). You can make the necessary configuration changes on the server and try the upgrade again.
+
+## Prerequisites
+
+- Read replicas with MySQL version 5.7 should be upgraded before the primary server for replication to be compatible between different MySQL versions. Read more about [Replication Compatibility between MySQL versions](https://dev.mysql.com/doc/mysql-replication-excerpt/8.0/en/replication-compatibility.html).
+- Before you upgrade your production servers, we strongly recommend that you test your application compatibility and verify your database compatibility with the features [removed](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals)/[deprecated](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-deprecations) in the new MySQL version.
+- Trigger an [on-demand backup](./how-to-trigger-on-demand-backup.md) before you perform the major version upgrade on your production server. The full on-demand backup can be used to [roll back to version 5.7](./how-to-restore-server-portal.md) if needed.
++
+## Perform Planned Major version upgrade from MySQL 5.7 to MySQL 8.0 using Azure portal
+
+1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MySQL 5.7 server.
+ >[!Important]
+    > We recommend performing the upgrade first on a restored copy of the server rather than upgrading production directly. See [how to perform point-in-time restore](./how-to-restore-server-portal.md).
+
+2. From the overview page, click the Upgrade button in the toolbar.
+
+ >[!Important]
+    > Before upgrading, review the list of [features removed](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals) in MySQL 8.0.
+    > Verify deprecated [sql_mode](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_sql_mode) values and remove/deselect them from your current Flexible Server 5.7 using the Server Parameters blade in the Azure portal to avoid deployment failure.
+    > [sql_mode](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_sql_mode) values NO_AUTO_CREATE_USER, NO_FIELD_OPTIONS, NO_KEY_OPTIONS, and NO_TABLE_OPTIONS are no longer supported in MySQL 8.0.
+
+ :::image type="content" source="./media/how-to-upgrade/1-how-to-upgrade.png" alt-text="Screenshot showing Azure Database for MySQL Upgrade.":::
+
+3. In the Upgrade sidebar, verify the major version to upgrade to, that is, 8.0.
+
+ :::image type="content" source="./media/how-to-upgrade/2-how-to-upgrade.png" alt-text="Screenshot showing Upgrade.":::
+
+4. For the primary server, select the confirmation checkbox to confirm that all your replica servers have been upgraded before the primary server. Once you confirm that all your replicas are upgraded, the Upgrade button is enabled. For read replicas and standalone servers, the Upgrade button is enabled by default.
+
+ :::image type="content" source="./media/how-to-upgrade/3-how-to-upgrade.png" alt-text="Screenshot showing confirmation.":::
+
+5. Once the Upgrade button is enabled, select it to proceed with the deployment.
+
+ :::image type="content" source="./media/how-to-upgrade/4-how-to-upgrade.png" alt-text="Screenshot showing upgrade.":::
++
+## Perform Planned Major version upgrade from MySQL 5.7 to MySQL 8.0 using Azure CLI
+
+Follow these steps to perform a major version upgrade for your Azure Database for MySQL 5.7 server using the Azure CLI.
+
+1. Install [Azure CLI](/cli/azure/install-azure-cli) for Windows or use [Azure CLI](../../cloud-shell/overview.md) in Azure Cloud Shell to run the upgrade commands.
+
+    This upgrade requires version 2.40.0 or later of the Azure CLI. If you are using Azure Cloud Shell, the latest version is already installed. Run `az version` to find the version and dependent libraries that are installed. To upgrade to the latest version, run `az upgrade`.
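+
+    For example, a minimal sketch (the output varies by installation):
+
+    ```azurecli
+    # Check the installed Azure CLI version and dependent libraries
+    az version
+
+    # Upgrade the Azure CLI to the latest version
+    az upgrade
+    ```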
++
+2. After you sign in, run the [az mysql flexible-server upgrade](/cli/azure/mysql/flexible-server#az-mysql-flexible-server-upgrade) command.
+
+ ```azurecli
+    az mysql flexible-server upgrade --resource-group testgroup --name testsvr --subscription MySubscription --version 8
+ ```
+
+3. At the confirmation prompt, type "y" to confirm or "n" to stop the upgrade process, and then press Enter.
++
+## Perform major version upgrade from MySQL 5.7 to MySQL 8.0 on read replica using Azure portal
+
+1. In the Azure portal, select your existing Azure Database for MySQL 5.7 read replica server.
+
+2. From the Overview page, click the Upgrade button in the toolbar.
+>[!Important]
+> Before upgrading, review the list of [features removed](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals) in MySQL 8.0.
+>Verify deprecated [sql_mode](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_sql_mode) values and remove/deselect them from your current Flexible Server 5.7 using the Server Parameters blade in the Azure portal to avoid deployment failure.
+
+3. In the Upgrade section, select the Upgrade button to upgrade the Azure Database for MySQL 5.7 read replica server to 8.0.
+
+4. A notification confirms that the upgrade is successful.
+
+5. From the Overview page, confirm that your Azure Database for MySQL read replica server version is 8.0.
+
+6. Now go to your primary server and perform major version upgrade on it.
++
+## Perform minimal downtime major version upgrade from MySQL 5.7 to MySQL 8.0 using read replicas
+
+1. In the Azure portal, select your existing Azure Database for MySQL 5.7 server.
+2. Create a [read replica](./how-to-read-replicas-portal.md) from your primary server.
+3. Upgrade your read replica to [version 8.0](#perform-planned-major-version-upgrade-from-mysql-57-to-mysql-80-using-azure-cli).
+4. Once you confirm that the replica server is running on version 8.0, stop your application from connecting to your primary server.
+5. Check the replication status to make sure the replica has caught up with the primary, so that all the data is in sync and no new operations are performed on the primary.
+Confirm the replication status with the `show slave status` command on the replica server.
+    ```sql
+ SHOW SLAVE STATUS\G
+ ```
+    If Slave_IO_Running and Slave_SQL_Running are both "Yes" and Seconds_Behind_Master is "0", replication is working well. Seconds_Behind_Master indicates how far behind the replica is. If the value isn't "0", the replica is still processing updates. Once you confirm that Seconds_Behind_Master is "0", it's safe to stop replication.
+
+6. Promote your read replica to primary by stopping replication.
+7. Set the server parameter `read_only` to 0 (that is, OFF) to start writing on the promoted primary. A hedged Azure CLI sketch for steps 6 and 7 follows the note below.
+
+ Point your application to the new primary (former replica) which is running server 8.0. Each server has a unique connection string. Update your application to point to the (former) replica instead of the source.
+
+>[!Note]
+> This scenario will have downtime during steps 4, 5 and 6 only.
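+
+The following Azure CLI sketch shows one possible way to script steps 6 and 7, assuming a flexible server read replica named `replica-server` in resource group `testgroup` (the names are placeholders; verify the commands against the current `az mysql flexible-server` reference before use):
+
+```azurecli
+# Step 6: promote the read replica by stopping replication
+az mysql flexible-server replica stop-replication --resource-group testgroup --name replica-server
+
+# Step 7: turn off read_only so the promoted server accepts writes
+az mysql flexible-server parameter set --resource-group testgroup --server-name replica-server --name read_only --value OFF
+```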
++
+## Frequently asked questions
+- Will this cause downtime of the server and if so, how long?
+
+ To have minimal downtime during upgrades, follow the steps mentioned under - [Perform minimal downtime major version upgrade from MySQL 5.7 to MySQL 8.0 using read replicas](#perform-minimal-downtime-major-version-upgrade-from-mysql-57-to-mysql-80-using-read-replicas).
+    The server will be unavailable during the upgrade process, so we recommend that you perform this operation during your planned maintenance window. The estimated downtime depends on the database size, the provisioned storage size (provisioned IOPS), and the number of tables in the database. The upgrade time is directly proportional to the number of tables on the server. To estimate the downtime for your server environment, we recommend that you first perform the upgrade on a restored copy of the server.
++
+- When will this upgrade feature be GA?
+
+    GA of this feature is planned by December 2022. However, the feature is production ready and fully supported by Azure, so you can run it with confidence in your environment. As a recommended best practice, we strongly suggest that you first run and test it on a restored copy of the server, so you can estimate the downtime during the upgrade and perform application compatibility testing before you run it on production.
+
+- What happens to my backups after upgrade?
+
+    All backups (automated/on-demand) taken before the major version upgrade, when used for restoration, will always restore to a server with the older version (5.7).
+    All backups (automated/on-demand) taken after the major version upgrade will restore to a server with the upgraded version (8.0). It's highly recommended to take an on-demand backup before you perform the major version upgrade for an easy rollback.
++
+ ## Next steps
+ - Learn more on [how to configure scheduled maintenance](./how-to-maintenance-portal.md) for your Azure Database for MySQL flexible server.
+ - Learn about what's new in [MySQL version 8.0](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html).
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
This article summarizes new releases and features in Azure Database for MySQL -
> This article contains references to the term slave, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. ## September 2022+
+- **Major version upgrade in Azure Database for MySQL - Flexible Server (Preview)**
+ You can now upgrade the MySQL major version of an Azure Database for MySQL flexible server in-place from MySQL 5.7 to MySQL 8.0 with the click of a button, without any data movement or application connection string changes. [Learn more](./how-to-upgrade.md)
++ - **Read replica for HA-enabled Azure Database for MySQL - Flexible Server (General Availability)** The read replica feature allows you to replicate data from an Azure Database for MySQL flexible server to a read-only server. You can replicate the source server to up to 10 replicas. This functionality is now extended to support HA-enabled servers within the same region. [Learn more](concepts-read-replicas.md)
mysql Azure Pipelines Mysql Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/azure-pipelines-mysql-deploy.md
Last updated 09/14/2022
Get started with Azure Database for MySQL by deploying a database update with Azure Pipelines. Azure Pipelines lets you build, test, and deploy with continuous integration (CI) and continuous delivery (CD) using [Azure DevOps](/azure/devops/).
-You'll use the [Azure Database for MySQL Deployment task](/azure/devops/pipelines/tasks/deploy/azure-mysql-deployment.md). The Azure Database for MySQL Deployment task only works with Azure Database for MySQL Single Server.
+You'll use the [Azure Database for MySQL Deployment task](/azure/devops/pipelines/tasks/deploy/azure-mysql-deployment). The Azure Database for MySQL Deployment task only works with Azure Database for MySQL Single Server.
## Prerequisites
mysql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-java.md
ms.devlang: java
Previously updated : 06/20/2022 Last updated : 08/15/2022 # Quickstart: Use Java and JDBC with Azure Database for MySQL [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
-This topic demonstrates creating a sample application that uses Java and [JDBC](https://en.wikipedia.org/wiki/Java_Database_Connectivity) to store and retrieve information in [Azure Database for MySQL](./index.yml).
+This article demonstrates creating a sample application that uses Java and [JDBC](https://en.wikipedia.org/wiki/Java_Database_Connectivity) to store and retrieve information in [Azure Database for MySQL](./index.yml).
JDBC is the standard Java API to connect to traditional relational databases.
+In this article, we'll include two authentication methods: Azure Active Directory (Azure AD) authentication and MySQL authentication. The **Passwordless** tab shows the Azure AD authentication and the **Password** tab shows the MySQL authentication.
+
+Azure AD authentication is a mechanism for connecting to Azure Database for MySQL using identities defined in Azure AD. With Azure AD authentication, you can manage database user identities and other Microsoft services in a central location, which simplifies permission management.
+
+MySQL authentication uses accounts stored in MySQL. If you choose to use passwords as credentials for the accounts, these credentials will be stored in the `user` table. Because these passwords are stored in MySQL, you'll need to manage the rotation of the passwords by yourself.
+ ## Prerequisites - An Azure account. If you don't have one, [get a free trial](https://azure.microsoft.com/free/). - [Azure Cloud Shell](../../cloud-shell/quickstart.md) or [Azure CLI](/cli/azure/install-azure-cli). We recommend Azure Cloud Shell so you'll be logged in automatically and have access to all the tools you'll need. - A supported [Java Development Kit](/azure/developer/java/fundamentals/java-support-on-azure), version 8 (included in Azure Cloud Shell). - The [Apache Maven](https://maven.apache.org/) build tool.
+- MySQL command line client. You can connect to your server using the [mysql.exe](https://dev.mysql.com/downloads/) command-line tool with Azure Cloud Shell. Alternatively, you can use the `mysql` command line in your local environment.
## Prepare the working environment
-We are going to use environment variables to limit typing mistakes, and to make it easier for you to customize the following configuration for your specific needs.
+First, set up some environment variables. In [Azure Cloud Shell](https://shell.azure.com/), run the following commands:
-Set up those environment variables by using the following commands:
+### [Passwordless (Recommended)](#tab/passwordless)
```bash
-AZ_RESOURCE_GROUP=database-workshop
-AZ_DATABASE_NAME=<YOUR_DATABASE_NAME>
-AZ_LOCATION=<YOUR_AZURE_REGION>
-AZ_MYSQL_USERNAME=demo
-AZ_MYSQL_PASSWORD=<YOUR_MYSQL_PASSWORD>
-AZ_LOCAL_IP_ADDRESS=<YOUR_LOCAL_IP_ADDRESS>
+export AZ_RESOURCE_GROUP=database-workshop
+export AZ_DATABASE_NAME=<YOUR_DATABASE_NAME>
+export AZ_LOCATION=<YOUR_AZURE_REGION>
+export AZ_MYSQL_AD_NON_ADMIN_USERNAME=demo-non-admin
+export AZ_LOCAL_IP_ADDRESS=<YOUR_LOCAL_IP_ADDRESS>
+export CURRENT_USERNAME=$(az ad signed-in-user show --query userPrincipalName -o tsv)
+export CURRENT_USER_OBJECTID=$(az ad signed-in-user show --query id -o tsv)
+```
+
+Replace the placeholders with the following values, which are used throughout this article:
+
+- `<YOUR_DATABASE_NAME>`: The name of your MySQL server. It should be unique across Azure.
+- `<YOUR_AZURE_REGION>`: The Azure region you'll use. You can use `eastus` by default, but we recommend that you configure a region closer to where you live. You can see the full list of available regions by entering `az account list-locations`.
+- `<YOUR_LOCAL_IP_ADDRESS>`: The IP address of your local computer, from which you'll run your Java application. One convenient way to find it is to open [whatismyip.akamai.com](http://whatismyip.akamai.com/); a one-line sketch for setting it automatically follows this list.
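+
+As an optional convenience, and assuming `curl` is available in your shell (it's included in Azure Cloud Shell) and that [whatismyip.akamai.com](http://whatismyip.akamai.com/) returns your public IP address as plain text, you can populate the variable directly:
+
+```bash
+# Convenience sketch: requires curl; assumes the service returns the public IP as plain text.
+export AZ_LOCAL_IP_ADDRESS=$(curl -s http://whatismyip.akamai.com)
+echo $AZ_LOCAL_IP_ADDRESS
+```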
+
+### [Password](#tab/password)
+
+```bash
+export AZ_RESOURCE_GROUP=database-workshop
+export AZ_DATABASE_NAME=<YOUR_DATABASE_NAME>
+export AZ_LOCATION=<YOUR_AZURE_REGION>
+export AZ_MYSQL_ADMIN_USERNAME=demo
+export AZ_MYSQL_ADMIN_PASSWORD=<YOUR_MYSQL_ADMIN_PASSWORD>
+export AZ_MYSQL_NON_ADMIN_USERNAME=demo-non-admin
+export AZ_MYSQL_NON_ADMIN_PASSWORD=<YOUR_MYSQL_NON_ADMIN_PASSWORD>
+export AZ_LOCAL_IP_ADDRESS=<YOUR_LOCAL_IP_ADDRESS>
``` Replace the placeholders with the following values, which are used throughout this article: - `<YOUR_DATABASE_NAME>`: The name of your MySQL server. It should be unique across Azure. - `<YOUR_AZURE_REGION>`: The Azure region you'll use. You can use `eastus` by default, but we recommend that you configure a region closer to where you live. You can have the full list of available regions by entering `az account list-locations`.-- `<YOUR_MYSQL_PASSWORD>`: The password of your MySQL database server. That password should have a minimum of eight characters. The characters should be from three of the following categories: English uppercase letters, English lowercase letters, numbers (0-9), and non-alphanumeric characters (!, $, #, %, and so on).-- `<YOUR_LOCAL_IP_ADDRESS>`: The IP address of your local computer, from which you'll run your Java application. One convenient way to find it is to point your browser to [whatismyip.akamai.com](http://whatismyip.akamai.com/).
+- `<YOUR_MYSQL_ADMIN_PASSWORD>` and `<YOUR_MYSQL_NON_ADMIN_PASSWORD>`: The password of your MySQL database server. That password should have a minimum of eight characters. The characters should be from three of the following categories: English uppercase letters, English lowercase letters, numbers (0-9), and non-alphanumeric characters (!, $, #, %, and so on).
+- `<YOUR_LOCAL_IP_ADDRESS>`: The IP address of your local computer, from which you'll run your Java application. One convenient way to find it is to open [whatismyip.akamai.com](http://whatismyip.akamai.com/).
++
-Next, create a resource group:
+Next, create a resource group by using the following command:
```azurecli az group create \ --name $AZ_RESOURCE_GROUP \ --location $AZ_LOCATION \
- | jq
+ --output tsv
```
-> [!NOTE]
-> We use the `jq` utility, which is installed by default on [Azure Cloud Shell](https://shell.azure.com/) to display JSON data and make it more readable.
-> If you don't like that utility, you can safely remove the `| jq` part of all the commands we'll use.
- ## Create an Azure Database for MySQL instance
-The first thing we'll create is a managed MySQL server.
+### Create a MySQL server and set up admin user
+
+The first thing you'll create is a managed MySQL server.
> [!NOTE]
-> You can read more detailed information about creating MySQL servers in [Create an Azure Database for MySQL server by using the Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md).
+> You can read more detailed information about creating MySQL servers in [Quickstart: Create an Azure Database for MySQL server by using the Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md).
+
+#### [Passwordless connection (Recommended)](#tab/passwordless)
+
+If you're using Azure CLI, run the following command to make sure it has sufficient permission:
+
+```bash
+az login --scope https://graph.microsoft.com/.default
+```
+
+Then, run the following command to create the server:
+
+```azurecli
+az mysql server create \
+ --resource-group $AZ_RESOURCE_GROUP \
+ --name $AZ_DATABASE_NAME \
+ --location $AZ_LOCATION \
+ --sku-name B_Gen5_1 \
+ --storage-size 5120 \
+ --output tsv
+```
+
+Next, run the following command to set the Azure AD admin user:
+
+```azurecli
+az mysql server ad-admin create \
+ --resource-group $AZ_RESOURCE_GROUP \
+ --server-name $AZ_DATABASE_NAME \
+ --display-name $CURRENT_USERNAME \
+ --object-id $CURRENT_USER_OBJECTID
+```
-In [Azure Cloud Shell](https://shell.azure.com/), run the following script:
+> [!IMPORTANT]
+> When setting the administrator, a new user is added to the Azure Database for MySQL server with full administrator permissions. You can only create one Azure AD admin per MySQL server. Selection of another user will overwrite the existing Azure AD admin configured for the server.
+
+This command creates a small MySQL server and sets the Active Directory admin to the signed-in user.
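+
+Optionally, you can confirm the Azure AD admin assignment. This is an illustrative check that assumes the `az mysql server ad-admin list` command is available in your Azure CLI version:
+
+```bash
+# Assumption: az mysql server ad-admin list is available in your Azure CLI version.
+az mysql server ad-admin list \
+    --resource-group $AZ_RESOURCE_GROUP \
+    --server-name $AZ_DATABASE_NAME \
+    --output table
+```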
+
+#### [Password](#tab/password)
```azurecli az mysql server create \
az mysql server create \
--location $AZ_LOCATION \ --sku-name B_Gen5_1 \ --storage-size 5120 \
- --admin-user $AZ_MYSQL_USERNAME \
- --admin-password $AZ_MYSQL_PASSWORD \
- | jq
+ --admin-user $AZ_MYSQL_ADMIN_USERNAME \
+ --admin-password $AZ_MYSQL_ADMIN_PASSWORD \
+ --output tsv
``` This command creates a small MySQL server. ++ ### Configure a firewall rule for your MySQL server
-Azure Database for MySQL instances are secured by default. They have a firewall that doesn't allow any incoming connection. To be able to use your database, you need to add a firewall rule that will allow the local IP address to access the database server.
+Azure Database for MySQL instances are secured by default. These instances have a firewall that doesn't allow any incoming connections. To be able to use your database, you need to add a firewall rule that allows your local IP address to access the database server.
-Because you configured our local IP address at the beginning of this article, you can open the server's firewall by running:
+Because you configured your local IP address at the beginning of this article, you can open the server's firewall by running the following command:
```azurecli az mysql server firewall-rule create \
az mysql server firewall-rule create \
--server $AZ_DATABASE_NAME \ --start-ip-address $AZ_LOCAL_IP_ADDRESS \ --end-ip-address $AZ_LOCAL_IP_ADDRESS \
- | jq
+ --output tsv
+```
+
+If you're connecting to your MySQL server from Windows Subsystem for Linux (WSL) on a Windows computer, you'll need to add the WSL host IP address to your firewall.
+
+Obtain the IP address of your host machine by running the following command in WSL:
+
+```bash
+cat /etc/resolv.conf
+```
+
+Copy the IP address following the term `nameserver`, then use the following command to set an environment variable for the WSL IP address (or use the one-line sketch after this code block):
+
+```bash
+AZ_WSL_IP_ADDRESS=<the-copied-IP-address>
+```
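+
+As a minimal sketch, assuming standard `grep` and `awk` are available in WSL and that */etc/resolv.conf* contains a single `nameserver` entry pointing at the Windows host, you can extract the address in one step instead of copying it manually:
+
+```bash
+# Pull the nameserver address from /etc/resolv.conf (assumes a single nameserver entry).
+AZ_WSL_IP_ADDRESS=$(grep nameserver /etc/resolv.conf | awk '{print $2}')
+```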
+
+Then, use the following command to open the server's firewall to your WSL-based app:
+
+```azurecli
+az mysql server firewall-rule create \
+ --resource-group $AZ_RESOURCE_GROUP \
+ --name $AZ_DATABASE_NAME-database-allow-local-ip-wsl \
+ --server $AZ_DATABASE_NAME \
+ --start-ip-address $AZ_WSL_IP_ADDRESS \
+ --end-ip-address $AZ_WSL_IP_ADDRESS \
+ --output tsv
``` ### Configure a MySQL database
-The MySQL server that you created earlier is empty. It doesn't have any database that you can use with the Java application. Create a new database called `demo`:
+The MySQL server that you created earlier is empty. Use the following command to create a new database called `demo`:
```azurecli az mysql db create \ --resource-group $AZ_RESOURCE_GROUP \ --name demo \ --server-name $AZ_DATABASE_NAME \
- | jq
+ --output tsv
+```
+
+### Create a MySQL non-admin user and grant permission
+
+Next, create a non-admin user and grant all permissions on the `demo` database to it.
+
+> [!NOTE]
+> You can read more detailed information about creating MySQL users in [Create users in Azure Database for MySQL](/azure/mysql/single-server/how-to-create-users).
+
+#### [Passwordless connection (Recommended)](#tab/passwordless)
+
+Create a SQL script called *create_ad_user.sql* for creating a non-admin user. Add the following contents and save it locally:
+
+```bash
+export AZ_MYSQL_AD_NON_ADMIN_USERID=$CURRENT_USER_OBJECTID
+
+cat << EOF > create_ad_user.sql
+SET aad_auth_validate_oids_in_tenant = OFF;
+
+CREATE AADUSER '$AZ_MYSQL_AD_NON_ADMIN_USERNAME' IDENTIFIED BY '$AZ_MYSQL_AD_NON_ADMIN_USERID';
+
+GRANT ALL PRIVILEGES ON demo.* TO '$AZ_MYSQL_AD_NON_ADMIN_USERNAME'@'%';
+
+FLUSH privileges;
+
+EOF
+```
+
+Then, use the following command to run the SQL script to create the Azure AD non-admin user:
+
+```bash
+mysql -h $AZ_DATABASE_NAME.mysql.database.azure.com --user $CURRENT_USERNAME@$AZ_DATABASE_NAME --enable-cleartext-plugin --password=`az account get-access-token --resource-type oss-rdbms --output tsv --query accessToken` < create_ad_user.sql
+```
+
+Now use the following command to remove the temporary SQL script file:
+
+```bash
+rm create_ad_user.sql
```
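+
+Optionally, you can check that the new Azure AD user can sign in by requesting a token and running a trivial query. This is an illustrative sketch that reuses the connection pattern of the command above and assumes the user was created with your own object ID, as in the script earlier in this section:
+
+```bash
+# Sign in as the Azure AD non-admin user by using an access token as the password.
+mysql -h $AZ_DATABASE_NAME.mysql.database.azure.com \
+    --user $AZ_MYSQL_AD_NON_ADMIN_USERNAME@$AZ_DATABASE_NAME \
+    --enable-cleartext-plugin \
+    --password=$(az account get-access-token --resource-type oss-rdbms --output tsv --query accessToken) \
+    -e "SELECT CURRENT_USER();"
+```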
+#### [Password](#tab/password)
+
+Create a SQL script called *create_user.sql* for creating a non-admin user. Add the following contents and save it locally:
+
+```bash
+cat << EOF > create_user.sql
+
+CREATE USER '$AZ_MYSQL_NON_ADMIN_USERNAME'@'%' IDENTIFIED BY '$AZ_MYSQL_NON_ADMIN_PASSWORD';
+
+GRANT ALL PRIVILEGES ON demo.* TO '$AZ_MYSQL_NON_ADMIN_USERNAME'@'%';
+
+FLUSH PRIVILEGES;
+
+EOF
+```
+
+Then, use the following command to run the SQL script to create the non-admin user:
+
+```bash
+mysql -h $AZ_DATABASE_NAME.mysql.database.azure.com --user $AZ_MYSQL_ADMIN_USERNAME@$AZ_DATABASE_NAME --enable-cleartext-plugin --password=$AZ_MYSQL_ADMIN_PASSWORD < create_user.sql
+```
+
+Now use the following command to remove the temporary SQL script file:
+
+```bash
+rm create_user.sql
+```
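+
+Optionally, you can check that the new non-admin account works by connecting with its password and listing the databases it can see. This is an illustrative sketch that reuses the connection pattern of the command above:
+
+```bash
+# Sign in as the MySQL non-admin user and list visible databases.
+mysql -h $AZ_DATABASE_NAME.mysql.database.azure.com \
+    --user $AZ_MYSQL_NON_ADMIN_USERNAME@$AZ_DATABASE_NAME \
+    --password=$AZ_MYSQL_NON_ADMIN_PASSWORD \
+    -e "SHOW DATABASES;"
+```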
+++ ### Create a new Java project
-Using your favorite IDE, create a new Java project, and add a `pom.xml` file in its root directory:
+Using your favorite IDE, create a new Java project using Java 8 or above. Create a *pom.xml* file in its root directory and add the following contents:
+
+#### [Passwordless connection (Recommended)](#tab/passwordless)
+
+```xml
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
+ <modelVersion>4.0.0</modelVersion>
+ <groupId>com.example</groupId>
+ <artifactId>demo</artifactId>
+ <version>0.0.1-SNAPSHOT</version>
+ <name>demo</name>
+
+ <properties>
+ <java.version>1.8</java.version>
+ <maven.compiler.source>1.8</maven.compiler.source>
+ <maven.compiler.target>1.8</maven.compiler.target>
+ </properties>
+
+ <dependencies>
+ <dependency>
+ <groupId>mysql</groupId>
+ <artifactId>mysql-connector-java</artifactId>
+ <version>8.0.30</version>
+ </dependency>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity-providers-jdbc-mysql</artifactId>
+ <version>1.0.0-beta.1</version>
+ </dependency>
+ </dependencies>
+</project>
+```
+
+#### [Password](#tab/password)
```xml <?xml version="1.0" encoding="UTF-8"?>
Using your favorite IDE, create a new Java project, and add a `pom.xml` file in
<dependency> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId>
- <version>8.0.20</version>
+ <version>8.0.30</version>
</dependency> </dependencies> </project> ```
-This file is an [Apache Maven](https://maven.apache.org/) that configures our project to use:
+ -- Java 8-- A recent MySQL driver for Java
+This file is an [Apache Maven](https://maven.apache.org/) file that configures your project to use Java 8 and a recent MySQL driver for Java.
### Prepare a configuration file to connect to Azure Database for MySQL
-Create a *src/main/resources/application.properties* file, and add:
+Run the following script in the project root directory to create a *src/main/resources/application.properties* file and add configuration details:
-```properties
-url=jdbc:mysql://$AZ_DATABASE_NAME.mysql.database.azure.com:3306/demo?serverTimezone=UTC
-user=demo@$AZ_DATABASE_NAME
-password=$AZ_MYSQL_PASSWORD
+#### [Passwordless connection (Recommended)](#tab/passwordless)
+
+```bash
+mkdir -p src/main/resources && touch src/main/resources/application.properties
+
+cat << EOF > src/main/resources/application.properties
+url=jdbc:mysql://${AZ_DATABASE_NAME}.mysql.database.azure.com:3306/demo?sslMode=REQUIRED&serverTimezone=UTC&defaultAuthenticationPlugin=com.azure.identity.providers.mysql.AzureIdentityMysqlAuthenticationPlugin&authenticationPlugins=com.azure.identity.providers.mysql.AzureIdentityMysqlAuthenticationPlugin
+user=${AZ_MYSQL_AD_NON_ADMIN_USERNAME}@${AZ_DATABASE_NAME}
+EOF
``` -- Replace the two `$AZ_DATABASE_NAME` variables with the value that you configured at the beginning of this article.-- Replace the `$AZ_MYSQL_PASSWORD` variable with the value that you configured at the beginning of this article.
+#### [Password](#tab/password)
+
+```bash
+mkdir -p src/main/resources && touch src/main/resources/application.properties
+
+cat << EOF > src/main/resources/application.properties
+url=jdbc:mysql://${AZ_DATABASE_NAME}.mysql.database.azure.com:3306/demo?useSSL=true&sslMode=REQUIRED&serverTimezone=UTC
+user=${AZ_MYSQL_NON_ADMIN_USERNAME}@${AZ_DATABASE_NAME}
+password=${AZ_MYSQL_NON_ADMIN_PASSWORD}
+EOF
+```
++ > [!NOTE]
-> We append `?serverTimezone=UTC` to the configuration property `url`, to tell the JDBC driver to use the UTC date format (or Coordinated Universal Time) when connecting to the database. Otherwise, our Java server would not use the same date format as the database, which would result in an error.
+> The configuration property `url` has `?serverTimezone=UTC` appended to tell the JDBC driver to use the UTC date format (or Coordinated Universal Time) when connecting to the database. Otherwise, your Java server would not use the same date format as the database, which would result in an error.
### Create an SQL file to generate the database schema
-We will use a *src/main/resources/`schema.sql`* file in order to create a database schema. Create that file, with the following content:
+Next, you'll use a *src/main/resources/schema.sql* file to create a database schema. Create that file, then add the following contents:
+
+```bash
+touch src/main/resources/schema.sql
-```sql
+cat << EOF > src/main/resources/schema.sql
DROP TABLE IF EXISTS todo; CREATE TABLE todo (id SERIAL PRIMARY KEY, description VARCHAR(255), details VARCHAR(4096), done BOOLEAN);
+EOF
``` ## Code the application
CREATE TABLE todo (id SERIAL PRIMARY KEY, description VARCHAR(255), details VARC
Next, add the Java code that will use JDBC to store and retrieve data from your MySQL server.
-Create a *src/main/java/DemoApplication.java* file, that contains:
+Create a *src/main/java/DemoApplication.java* file and add the following contents:
```java package com.example.demo;
public class DemoApplication {
statement.execute(scanner.nextLine()); }
- /*
- Todo todo = new Todo(1L, "configuration", "congratulations, you have set up JDBC correctly!", true);
+ /* Prepare to store and retrieve data from the MySQL server.
+ Todo todo = new Todo(1L, "configuration", "congratulations, you have set up JDBC correctly!", true);
insertData(todo, connection); todo = readData(connection); todo.setDetails("congratulations, you have updated data!"); updateData(todo, connection); deleteData(todo, connection);
- */
+ */
log.info("Closing database connection"); connection.close();
public class DemoApplication {
} ```
-This Java code will use the *application.properties* and the *schema.sql* files that we created earlier, in order to connect to the MySQL server and create a schema that will store our data.
+This Java code will use the *application.properties* and the *schema.sql* files that you created earlier. After connecting to the MySQL server, you can create a schema to store your data.
-In this file, you can see that we commented methods to insert, read, update and delete data: we will code those methods in the rest of this article, and you will be able to uncomment them one after each other.
+In this file, you can see that the methods to insert, read, update, and delete data are commented out. You'll implement those methods in the rest of this article, and you'll be able to uncomment them one after the other.
> [!NOTE] > The database credentials are stored in the *user* and *password* properties of the *application.properties* file. Those credentials are used when executing `DriverManager.getConnection(properties.getProperty("url"), properties);`, as the properties file is passed as an argument. > [!NOTE]
-> The `AbandonedConnectionCleanupThread.uncheckedShutdown();` line at the end is a MySQL driver specific command to destroy an internal thread when shutting down the application.
-> It can be safely ignored.
+> The `AbandonedConnectionCleanupThread.uncheckedShutdown();` line at the end is a MySQL driver command to destroy an internal thread when shutting down the application. You can safely ignore this line.
You can now execute this main class with your favorite tool: - Using your IDE, you should be able to right-click on the *DemoApplication* class and execute it.-- Using Maven, you can run the application by executing: `mvn exec:java -Dexec.mainClass="com.example.demo.DemoApplication"`.
+- Using Maven, you can run the application with the following command: `mvn exec:java -Dexec.mainClass="com.example.demo.DemoApplication"`.
-The application should connect to the Azure Database for MySQL, create a database schema, and then close the connection, as you should see in the console logs:
+The application should connect to the Azure Database for MySQL, create a database schema, and then close the connection. You should see output similar to the following example in the console logs:
-```
-[INFO ] Loading application properties
-[INFO ] Connecting to the database
-[INFO ] Database connection test: demo
-[INFO ] Create database schema
-[INFO ] Closing database connection
+```output
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: demo
+[INFO ] Create database schema
+[INFO ] Closing database connection
``` ### Create a domain class
insertData(todo, connection);
Executing the main class should now produce the following output:
-```
-[INFO ] Loading application properties
-[INFO ] Connecting to the database
-[INFO ] Database connection test: demo
-[INFO ] Create database schema
-[INFO ] Insert data
+```output
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: demo
+[INFO ] Create database schema
+[INFO ] Insert data
[INFO ] Closing database connection ``` ### Reading data from Azure Database for MySQL
-Let's read the data previously inserted, to validate that our code works correctly.
+Next, read the data previously inserted to validate that your code works correctly.
In the *src/main/java/DemoApplication.java* file, after the `insertData` method, add the following method to read data from the database:
todo = readData(connection);
Executing the main class should now produce the following output:
-```
-[INFO ] Loading application properties
-[INFO ] Connecting to the database
-[INFO ] Database connection test: demo
-[INFO ] Create database schema
-[INFO ] Insert data
-[INFO ] Read data
-[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have set up JDBC correctly!', done=true}
-[INFO ] Closing database connection
+```output
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: demo
+[INFO ] Create database schema
+[INFO ] Insert data
+[INFO ] Read data
+[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have set up JDBC correctly!', done=true}
+[INFO ] Closing database connection
``` ### Updating data in Azure Database for MySQL
-Let's update the data we previously inserted.
+Next, update the data you previously inserted.
Still in the *src/main/java/DemoApplication.java* file, after the `readData` method, add the following method to update data inside the database:
updateData(todo, connection);
Executing the main class should now produce the following output:
-```
-[INFO ] Loading application properties
-[INFO ] Connecting to the database
-[INFO ] Database connection test: demo
-[INFO ] Create database schema
-[INFO ] Insert data
-[INFO ] Read data
-[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have set up JDBC correctly!', done=true}
-[INFO ] Update data
-[INFO ] Read data
-[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have updated data!', done=true}
-[INFO ] Closing database connection
+```output
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: demo
+[INFO ] Create database schema
+[INFO ] Insert data
+[INFO ] Read data
+[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have set up JDBC correctly!', done=true}
+[INFO ] Update data
+[INFO ] Read data
+[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have updated data!', done=true}
+[INFO ] Closing database connection
``` ### Deleting data in Azure Database for MySQL
-Finally, let's delete the data we previously inserted.
+Finally, delete the data you previously inserted.
Still in the *src/main/java/DemoApplication.java* file, after the `updateData` method, add the following method to delete data inside the database:
deleteData(todo, connection);
Executing the main class should now produce the following output:
-```
-[INFO ] Loading application properties
-[INFO ] Connecting to the database
-[INFO ] Database connection test: demo
-[INFO ] Create database schema
-[INFO ] Insert data
-[INFO ] Read data
-[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have set up JDBC correctly!', done=true}
-[INFO ] Update data
-[INFO ] Read data
-[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have updated data!', done=true}
-[INFO ] Delete data
-[INFO ] Read data
-[INFO ] There is no data in the database!
-[INFO ] Closing database connection
+```output
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: demo
+[INFO ] Create database schema
+[INFO ] Insert data
+[INFO ] Read data
+[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have set up JDBC correctly!', done=true}
+[INFO ] Update data
+[INFO ] Read data
+[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have updated data!', done=true}
+[INFO ] Delete data
+[INFO ] Read data
+[INFO ] There is no data in the database!
+[INFO ] Closing database connection
``` ## Clean up resources
network-watcher Network Watcher Intrusion Detection Open Source Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-intrusion-detection-open-source-tools.md
Packet captures are a key component for implementing network intrusion detection systems (IDS) and performing Network Security Monitoring (NSM). There are several open source IDS tools that process packet captures and look for signatures of possible network intrusions and malicious activity. Using the packet captures provided by Network Watcher, you can analyze your network for any harmful intrusions or vulnerabilities.
-One such open source tool is Suricata, an IDS engine that uses rulesets to monitor network traffic and triggers alerts whenever suspicious events occur. Suricata offers a multi-threaded engine, meaning it can perform network traffic analysis with increased speed and efficiency. For more details about Suricata and its capabilities, visit their website at https://suricata-ids.org/.
+One such open source tool is Suricata, an IDS engine that uses rulesets to monitor network traffic and triggers alerts whenever suspicious events occur. Suricata offers a multi-threaded engine, meaning it can perform network traffic analysis with increased speed and efficiency. For more details about Suricata and its capabilities, visit their website at https://suricata.io/.
## Scenario
notification-hubs Notification Hubs Python Push Notification Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-python-push-notification-tutorial.md
class NotificationHub:
for part in parts: if part.startswith('Endpoint'):
- self.Endpoint = 'https' + part[11:]
+ self.Endpoint = 'https' + part[11:].lower()
if part.startswith('SharedAccessKeyName'): self.SasKeyName = part[20:] if part.startswith('SharedAccessKey'):
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-extensions.md
The following extensions are available in Azure Database for PostgreSQL - Flexib
> |[pg_freespacemap](https://www.postgresql.org/docs/13/pgfreespacemap.html) | 1.2 | examine the free space map (FSM)| > |[pg_partman](https://github.com/pgpartman/pg_partman) | 4.6.1 | Extension to manage partitioned tables by time or ID | > |[pg_prewarm](https://www.postgresql.org/docs/13/pgprewarm.html) | 1.2 | prewarm relation data|
+> |[pg_repack](https://reorg.github.io/pg_repack/) | 1.4.7 | lets you remove bloat from tables and indexes|
> |[pg_stat_statements](https://www.postgresql.org/docs/13/pgstatstatements.html) | 1.8 | track execution statistics of all SQL statements executed| > |[pg_trgm](https://www.postgresql.org/docs/13/pgtrgm.html) | 1.5 | text similarity measurement and index searching based on trigrams| > |[pg_visibility](https://www.postgresql.org/docs/13/pgvisibility.html) | 1.2 | examine the visibility map (VM) and page-level visibility info|
The following extensions are available in Azure Database for PostgreSQL - Flexib
> |[pg_freespacemap](https://www.postgresql.org/docs/13/pgfreespacemap.html) | 1.2 | examine the free space map (FSM)| > |[pg_partman](https://github.com/pgpartman/pg_partman) | 4.5.0 | Extension to manage partitioned tables by time or ID | > |[pg_prewarm](https://www.postgresql.org/docs/13/pgprewarm.html) | 1.2 | prewarm relation data|
+> |[pg_repack](https://reorg.github.io/pg_repack/) | 1.4.7 | lets you remove bloat from tables and indexes|
> |[pg_stat_statements](https://www.postgresql.org/docs/13/pgstatstatements.html) | 1.8 | track execution statistics of all SQL statements executed| > |[pg_trgm](https://www.postgresql.org/docs/13/pgtrgm.html) | 1.5 | text similarity measurement and index searching based on trigrams| > |[pg_visibility](https://www.postgresql.org/docs/13/pgvisibility.html) | 1.2 | examine the visibility map (VM) and page-level visibility info|
The following extensions are available in Azure Database for PostgreSQL - Flexib
> |[pg_freespacemap](https://www.postgresql.org/docs/12/pgfreespacemap.html) | 1.2 | examine the free space map (FSM)| > |[pg_partman](https://github.com/pgpartman/pg_partman) | 4.5.0 | Extension to manage partitioned tables by time or ID | > |[pg_prewarm](https://www.postgresql.org/docs/12/pgprewarm.html) | 1.2 | prewarm relation data|
+> |[pg_repack](https://reorg.github.io/pg_repack/) | 1.4.7 | lets you remove bloat from tables and indexes|
> |[pg_stat_statements](https://www.postgresql.org/docs/12/pgstatstatements.html) | 1.7 | track execution statistics of all SQL statements executed| > |[pg_trgm](https://www.postgresql.org/docs/12/pgtrgm.html) | 1.4 | text similarity measurement and index searching based on trigrams| > |[pg_visibility](https://www.postgresql.org/docs/12/pgvisibility.html) | 1.2 | examine the visibility map (VM) and page-level visibility info|
The following extensions are available in Azure Database for PostgreSQL - Flexib
> |[pg_freespacemap](https://www.postgresql.org/docs/11/pgfreespacemap.html) | 1.2 | examine the free space map (FSM)| > |[pg_partman](https://github.com/pgpartman/pg_partman) | 4.5.0 | Extension to manage partitioned tables by time or ID | > |[pg_prewarm](https://www.postgresql.org/docs/11/pgprewarm.html) | 1.2 | prewarm relation data|
+> |[pg_repack](https://reorg.github.io/pg_repack/) | 1.4.7 | lets you remove bloat from tables and indexes|
> |[pg_stat_statements](https://www.postgresql.org/docs/11/pgstatstatements.html) | 1.6 | track execution statistics of all SQL statements executed| > |[pg_trgm](https://www.postgresql.org/docs/11/pgtrgm.html) | 1.4 | text similarity measurement and index searching based on trigrams| > |[pg_visibility](https://www.postgresql.org/docs/11/pgvisibility.html) | 1.2 | examine the visibility map (VM) and page-level visibility info|
postgresql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-java.md
ms.devlang: java Previously updated : 06/24/2022 Last updated : 09/27/2022 # Quickstart: Use Java and JDBC with Azure Database for PostgreSQL [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
-This topic demonstrates creating a sample application that uses Java and [JDBC](https://en.wikipedia.org/wiki/Java_Database_Connectivity) to store and retrieve information in [Azure Database for PostgreSQL](./index.yml).
+This article demonstrates how to create a sample application that uses Java and [JDBC](https://en.wikipedia.org/wiki/Java_Database_Connectivity) to store and retrieve information in [Azure Database for PostgreSQL](./index.yml).
JDBC is the standard Java API to connect to traditional relational databases.
+In this article, we'll include two authentication methods: Azure Active Directory (Azure AD) authentication and PostgreSQL authentication. The **Passwordless** tab shows the Azure AD authentication and the **Password** tab shows the PostgreSQL authentication.
+
+Azure AD authentication is a mechanism for connecting to Azure Database for PostgreSQL using identities defined in Azure AD. With Azure AD authentication, you can manage database user identities and other Microsoft services in a central location, which simplifies permission management.
+
+PostgreSQL authentication uses accounts stored in PostgreSQL. If you choose to use passwords as credentials for the accounts, these credentials are stored in the server's `pg_authid` system catalog. Because these passwords are stored in PostgreSQL, you'll need to manage the rotation of the passwords yourself.
+ ## Prerequisites - An Azure account. If you don't have one, [get a free trial](https://azure.microsoft.com/free/).-- [Azure Cloud Shell](../../cloud-shell/quickstart.md) or [Azure CLI](/cli/azure/install-azure-cli). We recommend Azure Cloud Shell so you'll be logged in automatically and have access to all the tools you'll need.
+- [Azure Cloud Shell](../../cloud-shell/quickstart.md) or [Azure CLI](/cli/azure/install-azure-cli) version 2.37.0 or above. We recommend Azure Cloud Shell so you'll be logged in automatically and have access to all the tools you'll need.
- A supported [Java Development Kit](/azure/developer/java/fundamentals/java-support-on-azure), version 8 (included in Azure Cloud Shell). - The [Apache Maven](https://maven.apache.org/) build tool. ## Prepare the working environment
-We are going to use environment variables to limit typing mistakes, and to make it easier for you to customize the following configuration for your specific needs.
+First, set up some environment variables. In [Azure Cloud Shell](https://shell.azure.com/), run the following commands:
+
+### [Passwordless (Recommended)](#tab/passwordless)
+
+```bash
+export AZ_RESOURCE_GROUP=database-workshop
+export AZ_DATABASE_NAME=<YOUR_DATABASE_NAME>
+export AZ_LOCATION=<YOUR_AZURE_REGION>
+export AZ_POSTGRESQL_AD_NON_ADMIN_USERNAME=<YOUR_POSTGRESQL_AD_NON_ADMIN_USERNAME>
+export AZ_LOCAL_IP_ADDRESS=<YOUR_LOCAL_IP_ADDRESS>
+export CURRENT_USERNAME=$(az ad signed-in-user show --query userPrincipalName -o tsv)
+export CURRENT_USER_OBJECTID=$(az ad signed-in-user show --query id -o tsv)
+```
+
+Replace the placeholders with the following values, which are used throughout this article:
+
+- `<YOUR_DATABASE_NAME>`: The name of your PostgreSQL server, which should be unique across Azure.
+- `<YOUR_AZURE_REGION>`: The Azure region you'll use. You can use `eastus` by default, but we recommend that you configure a region closer to where you live. You can see the full list of available regions by entering `az account list-locations`.
+- `<YOUR_POSTGRESQL_AD_NON_ADMIN_USERNAME>`: The username of your PostgreSQL database server. Make sure the username is a valid user in your Azure AD tenant.
+- `<YOUR_LOCAL_IP_ADDRESS>`: The IP address of your local computer, from which you'll run your Spring Boot application. One convenient way to find it is to open [whatismyip.akamai.com](http://whatismyip.akamai.com/).
-Set up those environment variables by using the following commands:
+> [!IMPORTANT]
+> When setting `<YOUR_POSTGRESQL_AD_NON_ADMIN_USERNAME>`, the username must already exist in your Azure AD tenant, or you won't be able to create an Azure AD user in your database.
+
+### [Password](#tab/password)
```bash
-AZ_RESOURCE_GROUP=database-workshop
-AZ_DATABASE_NAME=<YOUR_DATABASE_NAME>
-AZ_LOCATION=<YOUR_AZURE_REGION>
-AZ_POSTGRESQL_USERNAME=demo
-AZ_POSTGRESQL_PASSWORD=<YOUR_POSTGRESQL_PASSWORD>
-AZ_LOCAL_IP_ADDRESS=<YOUR_LOCAL_IP_ADDRESS>
+export AZ_RESOURCE_GROUP=database-workshop
+export AZ_DATABASE_NAME=<YOUR_DATABASE_NAME>
+export AZ_LOCATION=<YOUR_AZURE_REGION>
+export AZ_POSTGRESQL_ADMIN_USERNAME=demo
+export AZ_POSTGRESQL_ADMIN_PASSWORD=<YOUR_POSTGRESQL_ADMIN_PASSWORD>
+export AZ_POSTGRESQL_NON_ADMIN_USERNAME=demo_non_admin
+export AZ_POSTGRESQL_NON_ADMIN_PASSWORD=<YOUR_POSTGRESQL_NON_ADMIN_PASSWORD>
+export AZ_LOCAL_IP_ADDRESS=<YOUR_LOCAL_IP_ADDRESS>
``` Replace the placeholders with the following values, which are used throughout this article: - `<YOUR_DATABASE_NAME>`: The name of your PostgreSQL server. It should be unique across Azure. - `<YOUR_AZURE_REGION>`: The Azure region you'll use. You can use `eastus` by default, but we recommend that you configure a region closer to where you live. You can have the full list of available regions by entering `az account list-locations`.-- `<YOUR_POSTGRESQL_PASSWORD>`: The password of your PostgreSQL database server. That password should have a minimum of eight characters. The characters should be from three of the following categories: English uppercase letters, English lowercase letters, numbers (0-9), and non-alphanumeric characters (!, $, #, %, and so on).-- `<YOUR_LOCAL_IP_ADDRESS>`: The IP address of your local computer, from which you'll run your Java application. One convenient way to find it is to point your browser to [whatismyip.akamai.com](http://whatismyip.akamai.com/).
+- `<YOUR_POSTGRESQL_ADMIN_PASSWORD>` and `<YOUR_POSTGRESQL_NON_ADMIN_PASSWORD>`: The password of your PostgreSQL database server. That password should have a minimum of eight characters. The characters should be from three of the following categories: English uppercase letters, English lowercase letters, numbers (0-9), and non-alphanumeric characters (!, $, #, %, and so on).
+- `<YOUR_LOCAL_IP_ADDRESS>`: The IP address of your local computer, from which you'll run your Java application. One convenient way to find it is to open [whatismyip.akamai.com](http://whatismyip.akamai.com/).
++ Next, create a resource group by using the following command:
Next, create a resource group by using the following command:
az group create \ --name $AZ_RESOURCE_GROUP \ --location $AZ_LOCATION \
- | jq
+ --output tsv
```
-> [!NOTE]
-> We use the `jq` utility to display JSON data and make it more readable. This utility is installed by default on [Azure Cloud Shell](https://shell.azure.com/). If you don't like that utility, you can safely remove the `| jq` part of all the commands we'll use.
- ## Create an Azure Database for PostgreSQL instance
-The first thing we'll create is a managed PostgreSQL server.
+### Create a PostgreSQL server and set up admin user
+
+The first thing you'll create is a managed PostgreSQL server with an admin user.
> [!NOTE] > You can read more detailed information about creating PostgreSQL servers in [Create an Azure Database for PostgreSQL server by using the Azure portal](./quickstart-create-server-database-portal.md).
-In [Azure Cloud Shell](https://shell.azure.com/), run the following command:
+#### [Passwordless (Recommended)](#tab/passwordless)
+
+If you're using Azure CLI, run the following command to make sure it has sufficient permission:
+
+```bash
+az login --scope https://graph.microsoft.com/.default
+```
+
+Then, run the following command to create the server:
+
+```azurecli
+az postgres server create \
+ --resource-group $AZ_RESOURCE_GROUP \
+ --name $AZ_DATABASE_NAME \
+ --location $AZ_LOCATION \
+ --sku-name B_Gen5_1 \
+ --storage-size 5120 \
+ --output tsv
+```
+
+Now run the following command to set the Azure AD admin user:
+
+```azurecli
+az postgres server ad-admin create \
+ --resource-group $AZ_RESOURCE_GROUP \
+ --server-name $AZ_DATABASE_NAME \
+ --display-name $CURRENT_USERNAME \
+ --object-id $CURRENT_USER_OBJECTID
+```
+
+> [!IMPORTANT]
+> When setting the administrator, a new user is added to the Azure Database for PostgreSQL server with full administrator permissions. Only one Azure AD admin can be created per PostgreSQL server and selection of another one will overwrite the existing Azure AD admin configured for the server.
+
+This command creates a small PostgreSQL server and sets the Active Directory admin to the signed-in user.
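+
+Optionally, you can confirm the Azure AD admin assignment. This is an illustrative check that assumes the `az postgres server ad-admin list` command is available in your Azure CLI version:
+
+```bash
+# Assumption: az postgres server ad-admin list is available in your Azure CLI version.
+az postgres server ad-admin list \
+    --resource-group $AZ_RESOURCE_GROUP \
+    --server-name $AZ_DATABASE_NAME \
+    --output table
+```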
+
+#### [Password](#tab/password)
```azurecli az postgres server create \
az postgres server create \
--location $AZ_LOCATION \ --sku-name B_Gen5_1 \ --storage-size 5120 \
- --admin-user $AZ_POSTGRESQL_USERNAME \
- --admin-password $AZ_POSTGRESQL_PASSWORD \
- | jq
+ --admin-user $AZ_POSTGRESQL_ADMIN_USERNAME \
+ --admin-password $AZ_POSTGRESQL_ADMIN_PASSWORD \
+ --output tsv
``` This command creates a small PostgreSQL server. ++ ### Configure a firewall rule for your PostgreSQL server Azure Database for PostgreSQL instances are secured by default. They have a firewall that doesn't allow any incoming connection. To be able to use your database, you need to add a firewall rule that will allow the local IP address to access the database server.
az postgres server firewall-rule create \
--server $AZ_DATABASE_NAME \ --start-ip-address $AZ_LOCAL_IP_ADDRESS \ --end-ip-address $AZ_LOCAL_IP_ADDRESS \
- | jq
+ --output tsv
+```
+
+If you're connecting to your PostgreSQL server from Windows Subsystem for Linux (WSL) on a Windows computer, you'll need to add the WSL host IP address to your firewall.
+
+Obtain the IP address of your host machine by running the following command in WSL:
+
+```bash
+cat /etc/resolv.conf
+```
+
+Copy the IP address following the term `nameserver`, then use the following command to set an environment variable for the WSL IP Address:
+
+```bash
+AZ_WSL_IP_ADDRESS=<the-copied-IP-address>
+```
+
+Then, use the following command to open the server's firewall to your WSL-based app:
+
+```azurecli
+az postgres server firewall-rule create \
+ --resource-group $AZ_RESOURCE_GROUP \
+ --name $AZ_DATABASE_NAME-database-allow-local-ip-wsl \
+ --server $AZ_DATABASE_NAME \
+ --start-ip-address $AZ_WSL_IP_ADDRESS \
+ --end-ip-address $AZ_WSL_IP_ADDRESS \
+ --output tsv
``` ### Configure a PostgreSQL database
-The PostgreSQL server that you created earlier is empty. It doesn't have any database that you can use with the Java application. Create a new database called `demo` by using the following command:
+The PostgreSQL server that you created earlier is empty. Use the following command to create a new database called `demo`:
```azurecli az postgres db create \ --resource-group $AZ_RESOURCE_GROUP \ --name demo \ --server-name $AZ_DATABASE_NAME \
- | jq
+ --output tsv
```
+### Create a PostgreSQL non-admin user and grant permission
+
+Next, create a non-admin user and grant all permissions on the `demo` database to it.
+
+> [!NOTE]
+> You can read more detailed information about creating PostgreSQL users in [Create users in Azure Database for PostgreSQL](/azure/PostgreSQL/single-server/how-to-create-users).
+
+#### [Passwordless (Recommended)](#tab/passwordless)
+
+Create a SQL script called *create_ad_user.sql* for creating a non-admin user. Add the following contents and save it locally:
+
+```bash
+cat << EOF > create_ad_user.sql
+SET aad_validate_oids_in_tenant = off;
+CREATE ROLE "$AZ_POSTGRESQL_AD_NON_ADMIN_USERNAME" WITH LOGIN IN ROLE azure_ad_user;
+GRANT ALL PRIVILEGES ON DATABASE demo TO "$AZ_POSTGRESQL_AD_NON_ADMIN_USERNAME";
+EOF
+```
+
+Then, use the following command to run the SQL script to create the Azure AD non-admin user:
+
+```bash
+psql "host=$AZ_DATABASE_NAME.postgres.database.azure.com user=$CURRENT_USERNAME@$AZ_DATABASE_NAME dbname=demo port=5432 password=`az account get-access-token --resource-type oss-rdbms --output tsv --query accessToken` sslmode=require" < create_ad_user.sql
+```
+
+Now use the following command to remove the temporary SQL script file:
+
+```bash
+rm create_ad_user.sql
+```
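+
+Optionally, if the non-admin username corresponds to the same identity you're signed in to the Azure CLI with (an assumption that may not hold in your setup), you can verify the new role by connecting with an access token and running a trivial query. This illustrative sketch reuses the connection pattern of the command above:
+
+```bash
+# Sign in as the Azure AD non-admin role by using an access token as the password.
+psql "host=$AZ_DATABASE_NAME.postgres.database.azure.com user=$AZ_POSTGRESQL_AD_NON_ADMIN_USERNAME@$AZ_DATABASE_NAME dbname=demo port=5432 password=$(az account get-access-token --resource-type oss-rdbms --output tsv --query accessToken) sslmode=require" -c "SELECT current_user;"
+```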
+
+#### [Password](#tab/password)
+
+Create a SQL script called *create_user.sql* for creating a non-admin user. Add the following contents and save it locally:
+
+```bash
+cat << EOF > create_user.sql
+CREATE ROLE "$AZ_POSTGRESQL_NON_ADMIN_USERNAME" WITH LOGIN PASSWORD '$AZ_POSTGRESQL_NON_ADMIN_PASSWORD';
+GRANT ALL PRIVILEGES ON DATABASE demo TO "$AZ_POSTGRESQL_NON_ADMIN_USERNAME";
+EOF
+```
+
+Then, use the following command to run the SQL script to create the non-admin user:
+
+```bash
+psql "host=$AZ_DATABASE_NAME.postgres.database.azure.com user=$AZ_POSTGRESQL_ADMIN_USERNAME@$AZ_DATABASE_NAME dbname=demo port=5432 password=$AZ_POSTGRESQL_ADMIN_PASSWORD sslmode=require" < create_user.sql
+```
+
+Now use the following command to remove the temporary SQL script file:
+
+```bash
+rm create_user.sql
+```
+++ ### Create a new Java project
-Using your favorite IDE, create a new Java project, and add a `pom.xml` file in its root directory:
+Using your favorite IDE, create a new Java project using Java 8 or above, and add a *pom.xml* file in its root directory with the following contents:
+
+#### [Passwordless (Recommended)](#tab/passwordless)
```xml <?xml version="1.0" encoding="UTF-8"?>
Using your favorite IDE, create a new Java project, and add a `pom.xml` file in
</properties> <dependencies>
- <dependency>
- <groupId>org.postgresql</groupId>
- <artifactId>postgresql</artifactId>
- <version>42.2.12</version>
- </dependency>
+ <dependency>
+ <groupId>org.postgresql</groupId>
+ <artifactId>postgresql</artifactId>
+ <version>42.3.6</version>
+ </dependency>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity-providers-jdbc-postgresql</artifactId>
+ <version>1.0.0-beta.1</version>
+ </dependency>
</dependencies> </project> ```
-This file is an [Apache Maven](https://maven.apache.org/) that configures our project to use:
+#### [Password](#tab/password)
-- Java 8-- A recent PostgreSQL driver for Java
+```xml
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
+ <modelVersion>4.0.0</modelVersion>
+ <groupId>com.example</groupId>
+ <artifactId>demo</artifactId>
+ <version>0.0.1-SNAPSHOT</version>
+ <name>demo</name>
+
+ <properties>
+ <java.version>1.8</java.version>
+ <maven.compiler.source>1.8</maven.compiler.source>
+ <maven.compiler.target>1.8</maven.compiler.target>
+ </properties>
+
+ <dependencies>
+ <dependency>
+ <groupId>org.postgresql</groupId>
+ <artifactId>postgresql</artifactId>
+ <version>42.3.6</version>
+ </dependency>
+ </dependencies>
+</project>
+```
+++
+This file is an [Apache Maven](https://maven.apache.org/) file that configures your project to use Java 8 and a recent PostgreSQL driver for Java.
### Prepare a configuration file to connect to Azure Database for PostgreSQL
-Create a *src/main/resources/application.properties* file, and add:
+Create a *src/main/resources/application.properties* file, then add the following contents:
-```properties
-url=jdbc:postgresql://$AZ_DATABASE_NAME.postgres.database.azure.com:5432/demo?ssl=true&sslmode=require
-user=demo@$AZ_DATABASE_NAME
-password=$AZ_POSTGRESQL_PASSWORD
+#### [Passwordless (Recommended)](#tab/passwordless)
+
+```bash
+cat << EOF > src/main/resources/application.properties
+url=jdbc:postgresql://${AZ_DATABASE_NAME}.postgres.database.azure.com:5432/demo?sslmode=require&authenticationPluginClassName=com.azure.identity.providers.postgresql.AzureIdentityPostgresqlAuthenticationPlugin
+user=${AZ_POSTGRESQL_AD_NON_ADMIN_USERNAME}@${AZ_DATABASE_NAME}
+EOF
``` -- Replace the two `$AZ_DATABASE_NAME` variables with the value that you configured at the beginning of this article.-- Replace the `$AZ_POSTGRESQL_PASSWORD` variable with the value that you configured at the beginning of this article.
+#### [Password](#tab/password)
+
+```bash
+cat << EOF > src/main/resources/application.properties
+url=jdbc:postgresql://${AZ_DATABASE_NAME}.postgres.database.azure.com:5432/demo?sslmode=require
+user=${AZ_POSTGRESQL_NON_ADMIN_USERNAME}@${AZ_DATABASE_NAME}
+password=${AZ_POSTGRESQL_NON_ADMIN_PASSWORD}
+EOF
+```
++ > [!NOTE]
-> We append `?ssl=true&sslmode=require` to the configuration property `url`, to tell the JDBC driver to use TLS ([Transport Layer Security](https://en.wikipedia.org/wiki/Transport_Layer_Security)) when connecting to the database. It is mandatory to use TLS with Azure Database for PostgreSQL, and it is a good security practice.
+> The configuration property `url` has `?sslmode=require` appended to tell the JDBC driver to use TLS ([Transport Layer Security](https://en.wikipedia.org/wiki/Transport_Layer_Security)) when connecting to the database. Using TLS is mandatory with Azure Database for PostgreSQL, and it's a good security practice.
### Create an SQL file to generate the database schema
-We will use a *src/main/resources/`schema.sql`* file in order to create a database schema. Create that file, with the following content:
+You'll use a *src/main/resources/schema.sql* file to create a database schema. Create that file, then add the following contents:
-```sql
+```bash
+cat << EOF > src/main/resources/schema.sql
DROP TABLE IF EXISTS todo; CREATE TABLE todo (id SERIAL PRIMARY KEY, description VARCHAR(255), details VARCHAR(4096), done BOOLEAN);
+EOF
``` ## Code the application
CREATE TABLE todo (id SERIAL PRIMARY KEY, description VARCHAR(255), details VARC
Next, add the Java code that will use JDBC to store and retrieve data from your PostgreSQL server.
-Create a *src/main/java/DemoApplication.java* file, that contains:
+Create a *src/main/java/DemoApplication.java* file, then add the following contents:
```java package com.example.demo;
public class DemoApplication {
statement.execute(scanner.nextLine()); }
- /*
- Todo todo = new Todo(1L, "configuration", "congratulations, you have set up JDBC correctly!", true);
+ /* Prepare for data processing in the PostgreSQL server.
+ Todo todo = new Todo(1L, "configuration", "congratulations, you have set up JDBC correctly!", true);
insertData(todo, connection); todo = readData(connection); todo.setDetails("congratulations, you have updated data!"); updateData(todo, connection); deleteData(todo, connection);
- */
+ */
log.info("Closing database connection"); connection.close();
public class DemoApplication {
} ```
-This Java code will use the *application.properties* and the *schema.sql* files that we created earlier, in order to connect to the PostgreSQL server and create a schema that will store our data.
+This Java code will use the *application.properties* and the *schema.sql* files that you created earlier in order to connect to the PostgreSQL server and create a schema that will store your data.
-In this file, you can see that we commented methods to insert, read, update and delete data: we will code those methods in the rest of this article, and you will be able to uncomment them one after each other.
+In this file, you can see that we commented methods to insert, read, update and delete data. You'll code those methods in the rest of this article, and you'll be able to uncomment them one after another.
> [!NOTE]
-> The database credentials are stored in the *user* and *password* properties of the *application.properties* file. Those credentials are used when executing `DriverManager.getConnection(properties.getProperty("url"), properties);`, as the properties file is passed as an argument.
+> The database credentials are stored in the `user` and `password` properties of the *application.properties* file. Those credentials are used when executing `DriverManager.getConnection(properties.getProperty("url"), properties);`, as the properties file is passed as an argument.
You can now execute this main class with your favorite tool: - Using your IDE, you should be able to right-click on the *DemoApplication* class and execute it.-- Using Maven, you can run the application by executing: `mvn exec:java -Dexec.mainClass="com.example.demo.DemoApplication"`.
+- Using Maven, you can run the application by using the following command: `mvn exec:java -Dexec.mainClass="com.example.demo.DemoApplication"`.
The application should connect to the Azure Database for PostgreSQL, create a database schema, and then close the connection, as you should see in the console logs:
-```
+```output
[INFO ] Loading application properties [INFO ] Connecting to the database [INFO ] Database connection test: demo
insertData(todo, connection);
Executing the main class should now produce the following output:
-```
+```output
[INFO ] Loading application properties [INFO ] Connecting to the database [INFO ] Database connection test: demo
Executing the main class should now produce the following output:
### Reading data from Azure Database for PostgreSQL
-Let's read the data previously inserted, to validate that our code works correctly.
+To validate that your code works correctly, read the data that you previously inserted.
In the *src/main/java/DemoApplication.java* file, after the `insertData` method, add the following method to read data from the database:
todo = readData(connection);
Executing the main class should now produce the following output:
-```
+```output
[INFO ] Loading application properties [INFO ] Connecting to the database [INFO ] Database connection test: demo
Executing the main class should now produce the following output:
### Updating data in Azure Database for PostgreSQL
-Let's update the data we previously inserted.
+Next, update the data you previously inserted.
Still in the *src/main/java/DemoApplication.java* file, after the `readData` method, add the following method to update data inside the database:
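As a sketch of such an update method (same assumptions as the read example, plus `getDetails()` and `getId()` accessors on `Todo`):

```java
private static void updateData(Todo todo, Connection connection) throws SQLException {
    log.info("Update data");
    try (PreparedStatement updateStatement = connection.prepareStatement(
            "UPDATE todo SET details = ? WHERE id = ?;")) {
        // Bind the new details and the row id before executing the update.
        updateStatement.setString(1, todo.getDetails());
        updateStatement.setLong(2, todo.getId());
        updateStatement.executeUpdate();
    }
}
```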
updateData(todo, connection);
Executing the main class should now produce the following output:
-```
+```output
[INFO ] Loading application properties [INFO ] Connecting to the database [INFO ] Database connection test: demo
Executing the main class should now produce the following output:
### Deleting data in Azure Database for PostgreSQL
-Finally, let's delete the data we previously inserted.
+Finally, delete the data you previously inserted.
Still in the *src/main/java/DemoApplication.java* file, after the `updateData` method, add the following method to delete data inside the database:
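As a sketch of such a delete method (same assumptions as above):

```java
private static void deleteData(Todo todo, Connection connection) throws SQLException {
    log.info("Delete data");
    try (PreparedStatement deleteStatement = connection.prepareStatement(
            "DELETE FROM todo WHERE id = ?;")) {
        // Delete the row that matches the Todo's id.
        deleteStatement.setLong(1, todo.getId());
        deleteStatement.executeUpdate();
    }
}
```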
deleteData(todo, connection);
Executing the main class should now produce the following output:
-```
+```output
[INFO ] Loading application properties [INFO ] Connecting to the database [INFO ] Database connection test: demo
purview Register Scan Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-salesforce.md
When setting up a scan, you can choose to scan an entire Salesforce organization,
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - An active [Microsoft Purview account](create-catalog-portal.md). - You need Data Source Administrator and Data Reader permissions to register a source and manage it in the Microsoft Purview governance portal. For more information about permissions, see [Access control in Microsoft Purview](catalog-permissions.md).
+- A Salesforce connected app, which will be used to access your Salesforce information.
+ - If you need to create a connected app, you can follow the [Salesforce documentation](https://help.salesforce.com/s/articleView?id=sf.connected_app_create_basics.htm&type=5).
+ - You'll need to [enable OAuth for your Salesforce application](https://help.salesforce.com/s/articleView?id=sf.connected_app_create_api_integration.htm&type=5).
> [!NOTE] > **If your data store is not publicly accessible** (if your data store limits access from an on-premises network, a private network, or specific IPs, etc.), **you will need to configure a self-hosted integration runtime to connect to it**.
To create and run a new scan, follow these steps:
1. **Credential**: Select the credential to connect to your data source. Make sure to: * Select **Consumer key** while creating a credential.
- * Provide the username of the user that the connected app is imitating in the User name input field.
+ * Provide the username of the user that the [connected app](#prerequisites) is imitating in the User name input field.
* Store the password of the user that the connected app is imitating in an Azure Key Vault secret. * If your self-hosted integration runtime machine's IP is within the [trusted IP ranges for your organization](https://help.salesforce.com/s/articleView?id=sf.security_networkaccess.htm&type=5) set on Salesforce, provide just the password of the user.
- * Otherwise, concatenate the password and security token as the value of the secret. The security token is an automatically generated key that must be added to the end of the password when logging in to Salesforce from an untrusted network. Learn more about how to [get or reset a security token](https://help.salesforce.com/apex/HTViewHelpDoc?id=user_security_token.htm).
+ * Otherwise, **concatenate the password and security token as the value of the secret**. The security token is an automatically generated key that must be added to the end of the password when logging in to Salesforce from an untrusted network. Learn more about how to [get or reset a security token](https://help.salesforce.com/apex/HTViewHelpDoc?id=user_security_token.htm).
* Provide the consumer key from the connected app definition. You can find it on the connected app's Manage Connected Apps page or from the connected app's definition. * Store the consumer secret from the connected app definition in an Azure Key Vault secret. You can find it along with the consumer key.
role-based-access-control Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/troubleshooting.md
na Previously updated : 09/13/2022 Last updated : 09/26/2022
After you move a resource, you must re-create the role assignment. Eventually, t
### Symptom - Role assignment changes are not being detected
-You recently added or updated a role assignment, but the changes are not being detected.
+You recently added or updated a role assignment, but the changes are not being detected. You might see the message `Status: 401 (Unauthorized)`.
-**Cause**
+**Cause 1**
Azure Resource Manager sometimes caches configurations and data to improve performance. When you assign roles or remove role assignments, it can take up to 30 minutes for changes to take effect.
-**Solution**
+**Solution 1**
If you are using the Azure portal, Azure PowerShell, or Azure CLI, you can force a refresh of your role assignment changes by signing out and signing in. If you are making role assignment changes with REST API calls, you can force a refresh by refreshing your access token. If you add or remove a role assignment at management group scope and the role has `DataActions`, the access on the data plane might not be updated for several hours. This applies only to management group scope and the data plane.
+**Cause 2**
+
+You added managed identities to a group and assigned a role to that group. The back-end services for managed identities maintain a cache per resource URI for around 24 hours.
+
+**Solution 2**
+
+It can take several hours for changes to a managed identity's group or role membership to take effect. For more information, see [Limitation of using managed identities for authorization](../active-directory/managed-identities-azure-resources/managed-identity-best-practice-recommendations.md#limitation-of-using-managed-identities-for-authorization).
+ ## Custom roles ### Symptom - Unable to update a custom role
search Search Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-portal.md
Previously updated : 08/24/2022 Last updated : 09/22/2022 # Quickstart: Create an Azure Cognitive Search index in the Azure portal
-Create your first search index using the **Import data** wizard and a built-in sample data source consisting of fictitious hotel data. The wizard guides you through the creation of a search index (hotels-sample-index) so that you can write interesting queries within minutes.
+In this quickstart, you will create your first search index using the **Import data** wizard and a built-in sample data source consisting of fictitious hotel data. The wizard guides you through the creation of a search index (hotels-sample-index) so that you can write interesting queries within minutes.
-Although you won't use the options in this quickstart, the wizard includes a page for AI enrichment so that you can extract text and structure from image files and unstructured text. For a similar walkthrough that includes AI enrichment, see + [Quickstart: Create a skillset](cognitive-search-quickstart-blob.md).
+Although you won't use the options in this quickstart, the wizard includes a page for AI enrichment so that you can extract text and structure from image files and unstructured text. For a similar walkthrough that includes AI enrichment, see [Quickstart: Create a skillset](cognitive-search-quickstart-blob.md).
## Prerequisites
Check the service overview page to find out how many indexes, indexers, and data
:::image type="content" source="media/search-get-started-portal/tiles-indexers-datasources.png" alt-text="Lists of indexes, indexers, and datasources":::
-## <a name="create-index"></a> Create an index and load data
+## Create an index and load data
Search queries iterate over an [*index*](search-what-is-an-index.md) that contains searchable data, metadata, and additional constructs that optimize certain search behaviors.
sentinel Sentinel Solutions Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-solutions-deploy.md
Centrally manage content items for an installed solution deployed by the content
:::image type="content" source="media/sentinel-solutions-deploy/manage-solution-parser.png" alt-text="Screenshot of parser content type in a solution.":::
-1. **Playbook** - Not yet supported in this view. In Microsoft Sentinel, go to **Playbook** to find and use the solution's playbook.
+1. **Playbook** - Select **Open Playbook** to go to the **Playbook templates (Preview)** menu.
+
+ :::image type="content" source="media/sentinel-solutions-deploy/manage-solution-playbook.png" alt-text="Screenshot of solution content for Log4j Vulnerability Detection with a Playbook selected and the Open Playbook button available.":::
+
+ After the pre-populated search finds the template, select it to make the **Create Playbook** button available and start playbook creation. Alternatively, select the **Active playbooks** tab, where the view is filtered to the playbook name. Selecting the playbook name link takes you to the Automation blade, where you can view or edit the playbook in use.
+
+ :::image type="content" source="media/sentinel-solutions-deploy/manage-solution-playbook-active.png" alt-text="Screenshot of solution content for Log4j Vulnerability Detection with a Playbook selected and the Create Playbook button available. The Active Playbooks menu tab is highlighted.":::
## Find the support model for your solution
service-connector How To Integrate Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-mysql.md
Previously updated : 08/11/2022 Last updated : 09/26/2022
This page shows the supported authentication types and client types of Azure Dat
## Supported authentication types and client types
-Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-||-|--|--|-|
-| .NET (MySqlConnector) | | | ![yes icon](./media/green-check.png) | |
-| Go (go-sql-driver for mysql) | | | ![yes icon](./media/green-check.png) | |
-| Java (JDBC) | | | ![yes icon](./media/green-check.png) | |
-| Java - Spring Boot (JDBC) | | | ![yes icon](./media/green-check.png) | |
-| Node.js (mysql) | | | ![yes icon](./media/green-check.png) | |
-| Python (mysql-connector-python) | | | ![yes icon](./media/green-check.png) | |
-| Python-Django | | | ![yes icon](./media/green-check.png) | |
-| PHP (mysqli) | | | ![yes icon](./media/green-check.png) | |
-| Ruby (mysql2) | | | ![yes icon](./media/green-check.png) | |
-| None | | | ![yes icon](./media/green-check.png) | |
+Supported authentication and clients for App Service, Container Apps, and Azure Spring Apps:
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+||--|--|--|-|
+| .NET (MySqlConnector) | | | ![yes icon](./media/green-check.png) | |
+| Go (go-sql-driver for mysql) | | | ![yes icon](./media/green-check.png) | |
+| Java (JDBC) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | |
+| Java - Spring Boot (JDBC) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | |
+| Node.js (mysql) | | | ![yes icon](./media/green-check.png) | |
+| Python (mysql-connector-python) | | | ![yes icon](./media/green-check.png) | |
+| Python-Django | | | ![yes icon](./media/green-check.png) | |
+| PHP (mysqli) | | | ![yes icon](./media/green-check.png) | |
+| Ruby (mysql2) | | | ![yes icon](./media/green-check.png) | |
+| None | | | ![yes icon](./media/green-check.png) | |
## Default environment variable names or application properties
Use the connection details below to connect compute services to Azure Database f
|--||-| | AZURE_MYSQL_CONNECTIONSTRING | JDBC MySQL connection string | `jdbc:mysql://<MySQL-DB-name>.mysql.database.azure.com:3306/<MySQL-DB-name>?sslmode=required&user=<MySQL-DB-username>&password=<Uri.EscapeDataString(<MySQL-DB-password>)` |
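For example, a minimal Java sketch (the class name is illustrative, and MySQL Connector/J is assumed to be on the classpath) of how an app might consume this environment variable at runtime:

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class MySqlConnectionSketch {
    public static void main(String[] args) throws Exception {
        // Service Connector injects AZURE_MYSQL_CONNECTIONSTRING into the
        // app's environment; the user and password are embedded in the URL.
        String connectionString = System.getenv("AZURE_MYSQL_CONNECTIONSTRING");
        try (Connection connection = DriverManager.getConnection(connectionString)) {
            System.out.println("Connected to MySQL: " + connection.isValid(5));
        }
    }
}
```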
+### Java (JDBC) system-assigned managed identity
+
+| Default environment variable name | Description | Example value |
+|--|||
+| AZURE_MYSQL_CONNECTIONSTRING | JDBC MySQL connection string | `jdbc:mysql://<MySQL-DB-name>.mysql.database.azure.com:3306/<MySQL-DB-name>?sslmode=required&user=<MySQL-DB-username>` |
+ ### Java - Spring Boot (JDBC) secret / connection string | Application properties | Description | Example value |
Use the connection details below to connect compute services to Azure Database f
| spring.datasource.username | Database username | `<MySQL-DB-username>@<MySQL-DB-name>` | | spring.datasource.password | Database password | `MySQL-DB-password` |
+### Java - Spring Boot (JDBC) system-assigned managed identity
+
+| Application properties | Description | Example value |
+|--|-|--|
+| spring.datasource.url | Spring Boot JDBC database URL | `jdbc:mysql://<MySQL-DB-name>.mysql.database.azure.com:3306/<MySQL-DB-name>?sslmode=required` |
+| spring.datasource.username | Database username | `Connection-Name` |
+| spring.datasource.password | Database password | `MySQL-DB-password` |
+ ### Node.js (mysql) secret / connection string | Default environment variable name | Description | Example value |
service-connector How To Integrate Postgres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-postgres.md
Previously updated : 08/11/2022 Last updated : 09/26/2022
This page shows the supported authentication types and client types of Azure Dat
## Supported authentication types and client types
-Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
--
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-||-|--|--|-|
-| .NET (ADO.NET) | | | ![yes icon](./media/green-check.png) | |
-| Go (pg) | | | ![yes icon](./media/green-check.png) | |
-| Java (JDBC) | | | ![yes icon](./media/green-check.png) | |
-| Java - Spring Boot (JDBC) | | | ![yes icon](./media/green-check.png) | |
-| Node.js (pg) | | | ![yes icon](./media/green-check.png) | |
-| Python (psycopg2) | | | ![yes icon](./media/green-check.png) | |
-| Python-Django | | | ![yes icon](./media/green-check.png) | |
-| PHP (native) | | | ![yes icon](./media/green-check.png) | |
-| Ruby (ruby-pg) | | | ![yes icon](./media/green-check.png) | |
-| None | | | ![yes icon](./media/green-check.png) | |
+Supported authentication and clients for App Service, Container Apps, and Azure Spring Apps:
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+||--|--|--|-|
+| .NET (ADO.NET) | | | ![yes icon](./media/green-check.png) | |
+| Go (pg) | | | ![yes icon](./media/green-check.png) | |
+| Java (JDBC) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | |
+| Java - Spring Boot (JDBC) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | |
+| Node.js (pg) | | | ![yes icon](./media/green-check.png) | |
+| Python (psycopg2) | | | ![yes icon](./media/green-check.png) | |
+| Python-Django | | | ![yes icon](./media/green-check.png) | |
+| PHP (native) | | | ![yes icon](./media/green-check.png) | |
+| Ruby (ruby-pg) | | | ![yes icon](./media/green-check.png) | |
+| None | | | ![yes icon](./media/green-check.png) | |
## Default environment variable names or application properties
Use the connection details below to connect compute services to PostgreSQL. For
### Java (JDBC) secret / connection string
-| Default environment variable name | Description | Example value |
-|--|--||
+| Default environment variable name | Description | Example value |
+|--|--|-|
| AZURE_POSTGRESQL_CONNECTIONSTRING | JDBC PostgreSQL connection string | `jdbc:postgresql://<PostgreSQL-server-name>.postgres.database.azure.com:5432/<database-name>?sslmode=require&user=<username>%40<PostgreSQL-server-name>&password=<password>` |
+### Java (JDBC) system-assigned managed identity
+
+| Default environment variable name | Description | Example value |
+|--|--|--|
+| AZURE_POSTGRESQL_CONNECTIONSTRING | JDBC PostgreSQL connection string | `jdbc:postgresql://<PostgreSQL-server-name>.postgres.database.azure.com:5432/<database-name>?sslmode=require&user=<connection-name>` |
+ ### Java - Spring Boot (JDBC) secret / connection string | Application properties | Description | Example value |
Use the connection details below to connect compute services to PostgreSQL. For
| spring.datasource.username | Database username | `<username>@<PostgreSQL-server-name>` | | spring.datasource.password | Database password | `<password>` |
+### Java - Spring Boot (JDBC) system-assigned managed identity
+
+| Application properties | Description | Example value |
+|--|-||
+| spring.datasource.url | Database URL | `jdbc:postgresql://<PostgreSQL-server-name>.postgres.database.azure.com:5432/<database-name>?sslmode=require` |
+| spring.datasource.username | Database username | `Connection-Name` |
+| spring.datasource.password | Database password | `<password>` |
+ ### Node.js (pg) secret / connection string | Default environment variable name | Description | Example value |
service-connector How To Integrate Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-sql-database.md
Previously updated : 08/11/2022 Last updated : 09/26/2022 # Integrate Azure SQL Database with Service Connector
This page shows all the supported compute services, clients, and authentication
## Supported authentication types and clients
-Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
-|--|:--:|::|::|:--:|
-| .NET | | | ![yes icon](./media/green-check.png) | |
-| Go | | | ![yes icon](./media/green-check.png) | |
-| Java | | | ![yes icon](./media/green-check.png) | |
-| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
-| PHP | | | ![yes icon](./media/green-check.png) | |
-| Node.js | | | ![yes icon](./media/green-check.png) | |
-| Python | | | ![yes icon](./media/green-check.png) | |
-| Python - Django | | | ![yes icon](./media/green-check.png) | |
-| Ruby | | | ![yes icon](./media/green-check.png) | |
-| None | | | ![yes icon](./media/green-check.png) | |
+Supported authentication and clients for App Service, Container Apps, and Azure Spring Apps:
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
+|--|::|::|::|:--:|
+| .NET | | | ![yes icon](./media/green-check.png) | |
+| Go | | | ![yes icon](./media/green-check.png) | |
+| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | |
+| Java - Spring Boot | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | |
+| PHP | | | ![yes icon](./media/green-check.png) | |
+| Node.js | | | ![yes icon](./media/green-check.png) | |
+| Python | | | ![yes icon](./media/green-check.png) | |
+| Python - Django | | | ![yes icon](./media/green-check.png) | |
+| Ruby | | | ![yes icon](./media/green-check.png) | |
+| None | | | ![yes icon](./media/green-check.png) | |
## Default environment variable names or application properties Use the environment variable names and application properties listed below to connect compute services to Azure SQL Database using a secret and a connection string.
-### Azure Container Apps
+### Azure Container Apps and Azure App Service
Use the connection details below to connect Azure App Service and Azure Container Apps instances with .NET, Go, Java, Java - Spring Boot, PHP, Node.js, Python, Python - Django, and Ruby. For each example below, replace the placeholder text `<sql-server>`, `<sql-database>`, `<sql-username>`, and `<sql-password>` with your own server name, database name, user ID, and password.
Use the connection details below to connect Azure App Service and Azure Containe
> | | | | > | AZURE_SQL_CONNECTIONSTRING | Azure SQL Database connection string | `jdbc:sqlserver://<sql-server>.database.windows.net:1433;databaseName=<sql-database>;user=<sql-username>;password=<sql-password>;` |
+#### Java Database Connectivity (JDBC) system-assigned managed identity
+
+> [!div class="mx-tdBreakAll"]
+> | Default environment variable name | Description | Sample value |
+> |--|--|--|
+> | AZURE_SQL_CONNECTIONSTRING | Azure SQL Database connection string | `jdbc:sqlserver://<sql-server>.database.windows.net:1433;databaseName=<sql-database>;authentication=ActiveDirectoryMSI;` |
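As a minimal Java sketch of consuming this variable (the class name is illustrative; it assumes the `mssql-jdbc` driver and its Azure AD authentication dependencies are on the classpath):

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class SqlManagedIdentitySketch {
    public static void main(String[] args) throws Exception {
        // With authentication=ActiveDirectoryMSI in the connection string, the
        // driver requests a token from the managed identity endpoint, so the
        // connection string carries no password.
        String connectionString = System.getenv("AZURE_SQL_CONNECTIONSTRING");
        try (Connection connection = DriverManager.getConnection(connectionString)) {
            System.out.println("Connected: " + connection.isValid(5));
        }
    }
}
```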
+ #### Java Spring Boot (spring-boot-starter-jdbc) > [!div class="mx-tdBreakAll"]
Use the connection details below to connect Azure App Service and Azure Containe
> |--|-|-| > | spring.datasource.url | Azure SQL Database datasource URL | `jdbc:sqlserver://<sql-server>.database.windows.net:1433;databaseName=<sql-db>;` | > | spring.datasource.username | Azure SQL Database datasource username | `<sql-user>` |
-> | spring.datasource.password | Azure SQL Database datasource password | `<sql-password>` |
+> | spring.datasource.password | Azure SQL Database datasource password | `<sql-password>` |
+
+#### Java Spring Boot (spring-boot-starter-jdbc) system-assigned managed identity
+
+> [!div class="mx-tdBreakAll"]
+> | Default environment variable name | Description | Sample value |
+> |--|-|--|
+> | spring.datasource.url | Azure SQL Database datasource URL | `jdbc:sqlserver://<sql-server>.database.windows.net:1433;databaseName=<sql-db>;authentication=ActiveDirectoryMSI;` |
#### Go (go-mssqldb) > [!div class="mx-tdBreakAll"]
-> | Default environment variable name | Description | Sample value |
-> | | | |
+> | Default environment variable name | Description | Sample value |
+> |--|--||
> | AZURE_SQL_CONNECTIONSTRING | Azure SQL Database connection string | `server=<sql-server>.database.windows.net;port=1433;database=<sql-database>;user id=<sql-username>;password=<sql-password>;` | #### Node.js
Use the connection details below to connect Azure App Service and Azure Containe
> |--|--|-| > | AZURE_SQL_SERVER | Azure SQL Database server | `<sql-server>.database.windows.net` | > | AZURE_SQL_PORT | Azure SQL Database port | `1433` |
-> | AZURE_SQL_DATABASE | Azure SQL Database database | `<sql-database>` |
-> | AZURE_SQL_USERNAME | Azure SQL Database username | `<sql-username>` |
-> | AZURE_SQL_PASSWORD | Azure SQL Database password | `<sql-password>` |
+> | AZURE_SQL_DATABASE | Azure SQL Database database | `<sql-database>` |
+> | AZURE_SQL_USERNAME | Azure SQL Database username | `<sql-username>` |
+> | AZURE_SQL_PASSWORD | Azure SQL Database password | `<sql-password>` |
#### PHP
Use the connection details below to connect Azure App Service and Azure Containe
> | Default environment variable name | Description | Sample value | > |--|--|-| > | AZURE_SQL_SERVERNAME | Azure SQL Database servername | `<sql-server>.database.windows.net` |
-> | AZURE_SQL_DATABASE | Azure SQL Database database | `<sql-database>` |
-> | AZURE_SQL_UID | Azure SQL Database unique identifier (UID) | `<sql-username>` |
-> | AZURE_SQL_PASSWORD | Azure SQL Database password | `<sql-password>` |
+> | AZURE_SQL_DATABASE | Azure SQL Database database | `<sql-database>` |
+> | AZURE_SQL_UID | Azure SQL Database unique identifier (UID) | `<sql-username>` |
+> | AZURE_SQL_PASSWORD | Azure SQL Database password | `<sql-password>` |
#### Python (pyodbc)
Use the connection details below to connect Azure App Service and Azure Containe
> |--|--|-| > | AZURE_SQL_SERVER | Azure SQL Database server | `<sql-server>.database.windows.net` | > | AZURE_SQL_PORT | Azure SQL Database port | `1433` |
-> | AZURE_SQL_DATABASE | Azure SQL Database database | `<sql-database>` |
-> | AZURE_SQL_USER | Azure SQL Database user | `<sql-username>` |
-> | AZURE_SQL_PASSWORD | Azure SQL Database password | `<sql-password>` |
+> | AZURE_SQL_DATABASE | Azure SQL Database database | `<sql-database>` |
+> | AZURE_SQL_USER | Azure SQL Database user | `<sql-username>` |
+> | AZURE_SQL_PASSWORD | Azure SQL Database password | `<sql-password>` |
#### Django (mssql-django)
Use the connection details below to connect Azure App Service and Azure Containe
> |--|--|-| > | AZURE_SQL_HOST | Azure SQL Database host | `<sql-server>.database.windows.net` | > | AZURE_SQL_PORT | Azure SQL Database port | `1433` |
-> | AZURE_SQL_NAME | Azure SQL Database name | `<sql-database>` |
-> | AZURE_SQL_USER | Azure SQL Database user | `<sql-username>` |
-> | AZURE_SQL_PASSWORD | Azure SQL Database password | `<sql-password>` |
+> | AZURE_SQL_NAME | Azure SQL Database name | `<sql-database>` |
+> | AZURE_SQL_USER | Azure SQL Database user | `<sql-username>` |
+> | AZURE_SQL_PASSWORD | Azure SQL Database password | `<sql-password>` |
#### Ruby
Use the connection details below to connect Azure App Service and Azure Containe
> |--|--|-| > | AZURE_SQL_HOST | Azure SQL Database host | `<sql-server>.database.windows.net` | > | AZURE_SQL_PORT | Azure SQL Database port | `1433` |
-> | AZURE_SQL_DATABASE | Azure SQL Database database | `<sql-database>` |
-> | AZURE_SQL_USERNAME | Azure SQL Database username | `<sql-username>` |
-> | AZURE_SQL_PASSWORD | Azure SQL Database password | `<sql-password>` |
+> | AZURE_SQL_DATABASE | Azure SQL Database database | `<sql-database>` |
+> | AZURE_SQL_USERNAME | Azure SQL Database username | `<sql-username>` |
+> | AZURE_SQL_PASSWORD | Azure SQL Database password | `<sql-password>` |
### Azure Spring Cloud
Use the connection details below to connect Azure Spring Cloud instances with Ja
#### Java Spring Boot (spring-boot-starter-jdbc) > [!div class="mx-tdBreakAll"]
-> | Default environment variable name | Description | Sample value |
-> |--|-|-|
+> | Default environment variable name | Description | Sample value |
+> |--|-|-|
> | spring.datasource.url | Azure SQL Database datasource URL | `jdbc:sqlserver://<sql-server>.database.windows.net:1433;databaseName=<sql-database>;` |
-> | spring.datasource.username | Azure SQL Database datasource username | `<sql-username>` |
-> | spring.datasource.password | Azure SQL Database datasource password | `<sql-password>` |
+> | spring.datasource.username | Azure SQL Database datasource username | `<sql-username>` |
+> | spring.datasource.password | Azure SQL Database datasource password | `<sql-password>` |
+
+#### Java Spring Boot (spring-boot-starter-jdbc) system-assigned managed identity
+
+> [!div class="mx-tdBreakAll"]
+> | Default environment variable name | Description | Sample value |
+> |--|-|--|
+> | spring.datasource.url | Azure SQL Database datasource URL | `jdbc:sqlserver://<sql-server>.database.windows.net:1433;databaseName=<sql-db>;authentication=ActiveDirectoryMSI;` |
## Next steps
service-fabric Service Fabric Cross Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cross-availability-zones.md
The Service Fabric node type must be enabled to support multiple Availability Zo
> * The `multipleAvailabilityZones` property on the node type can only be defined when the node type is created and can't be modified later. Existing node types can't be configured with this property. > * When `sfZonalUpgradeMode` is omitted or set to `Hierarchical`, the cluster and application deployments will be slower because there are more upgrade domains in the cluster. It's important to correctly adjust the upgrade policy timeouts to account for the upgrade time required for 15 upgrade domains. The upgrade policy for both the app and the cluster should be updated to ensure that the deployment doesn't exceed the Azure Resource Service deployment time limit of 12 hours. This means that deployment shouldn't take more than 12 hours for 15 UDs (that is, shouldn't take more than 40 minutes for each UD). > * Set the cluster reliability level to `Platinum` to ensure that the cluster survives the one zone-down scenario.
+> * Upgrading the durability level for a node type with `multipleAvailabilityZones` enabled isn't supported. Instead, create a new node type with the higher durability level.
+> * Service Fabric supports a maximum of three Availability Zones. Higher numbers aren't currently supported.
>[!TIP] > We recommend setting `sfZonalUpgradeMode` to `Hierarchical` or omitting it. Deployment will follow the zonal distribution of VMs and affect a smaller amount of replicas or instances, making them safer.
spring-apps Connect Managed Identity To Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/connect-managed-identity-to-azure-sql.md
Previously updated : 03/25/2021 Last updated : 09/26/2022
This article shows you how to create a managed identity for an Azure Spring Apps
* Follow the [Spring Data JPA tutorial](/azure/developer/java/spring-framework/configure-spring-data-jpa-with-azure-sql-server) to provision an Azure SQL Database and get it work with a Java app locally * Follow the [Azure Spring Apps system-assigned managed identity tutorial](./how-to-enable-system-assigned-managed-identity.md) to provision an Azure Spring Apps app with MI enabled
-## Grant permission to the Managed Identity
+## Connect to Azure SQL Database with a managed identity
+
+You can connect your application deployed to Azure Spring Apps to an Azure SQL Database with a managed identity by following manual steps or using [Service Connector](../service-connector/overview.md).
+
+### [Manual configuration](#tab/manual)
+
+### Grant permission to the managed identity
Connect to your SQL server and run the following SQL query:
ALTER ROLE db_ddladmin ADD MEMBER [<MIName>];
GO ```
-This `<MIName>` follows the rule: `<service instance name>/apps/<app name>`, for example: `myspringcloud/apps/sqldemo`. You can also query the MIName with Azure CLI:
+The value of the `<MIName>` placeholder follows the rule `<service-instance-name>/apps/<app-name>`; for example: `myspringcloud/apps/sqldemo`. You can also query the MIName with Azure CLI:
```azurecli
-az ad sp show --id <identity object ID> --query displayName
+az ad sp show --id <identity-object-ID> --query displayName
```
-## Configure your Java app to use Managed Identity
+### Configure your Java app to use a managed identity
-Open the *src/main/resources/application.properties* file, and add `Authentication=ActiveDirectoryMSI;` at the end of the following line. Be sure to use the correct value for $AZ_DATABASE_NAME variable.
+Open the *src/main/resources/application.properties* file, then add `Authentication=ActiveDirectoryMSI;` at the end of the `spring.datasource.url` line, as shown in the following example. Be sure to use the correct value for the $AZ_DATABASE_NAME variable.
```properties spring.datasource.url=jdbc:sqlserver://$AZ_DATABASE_NAME.database.windows.net:1433;database=demo;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;Authentication=ActiveDirectoryMSI; ```
+### [Service Connector](#tab/service-connector)
+
+Configure your app deployed to Azure Spring Apps to connect to an Azure SQL Database with a system-assigned managed identity by using the `az spring connection create` command, as shown in the following example.
+
+> [!NOTE]
+> This command requires you to run the latest [edge build of Azure CLI](https://github.com/Azure/azure-cli/blob/dev/doc/try_new_features_before_release.md). [Download and install the edge builds](https://github.com/Azure/azure-cli#edge-builds) for your platform.
+
+```azurecli-interactive
+az spring connection create sql \
+ --resource-group $SPRING_APP_RESOURCE_GROUP \
+ --service $SPRING_APP_SERVICE_NAME \
+ --app $APP_NAME --deployment $DEPLOYMENT_NAME \
+ --target-resource-group $SQL_RESOURCE_GROUP \
+ --server $SQL_SERVER_NAME \
+ --database $DATABASE_NAME \
+ --system-assigned-identity
+```
+++ ## Build and deploy the app to Azure Spring Apps
-Rebuild the app and deploy it to the Azure Spring Apps app provisioned in the second bullet point under Prerequisites. Now you have a Spring Boot application, authenticated by a Managed Identity, that uses JPA to store and retrieve data from an Azure SQL Database in Azure Spring Apps.
+Rebuild the app and deploy it to the Azure Spring Apps instance provisioned in the second bullet point under Prerequisites. Now you have a Spring Boot application, authenticated by a managed identity, that uses JPA to store and retrieve data from an Azure SQL Database in Azure Spring Apps.
## Next steps
spring-apps How To Bind Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-mysql.md
description: Learn how to bind an Azure Database for MySQL instance to your appl
Previously updated : 11/04/2019 Last updated : 09/26/2022
With Azure Spring Apps, you can bind select Azure services to your applications
## Prerequisites
-* A deployed Azure Spring Apps instance
-* An Azure Database for MySQL account
-* Azure CLI
-
-If you don't have a deployed Azure Spring Apps instance, follow the instructions in [Quickstart: Deploy your first application to Azure Spring Apps](./quickstart.md) to deploy your first Spring app.
+* An application deployed to Azure Spring Apps. For more information, see [Quickstart: Deploy your first application to Azure Spring Apps](./quickstart.md).
+* An Azure Database for MySQL Flexible Server instance.
+* [Azure CLI](/cli/azure/install-azure-cli).
## Prepare your Java project 1. In your project's *pom.xml* file, add the following dependency:
- ```xml
- <dependency>
- <groupId>org.springframework.boot</groupId>
- <artifactId>spring-boot-starter-data-jpa</artifactId>
- </dependency>
- ```
+ ```xml
+ <dependency>
+ <groupId>org.springframework.boot</groupId>
+ <artifactId>spring-boot-starter-data-jpa</artifactId>
+ </dependency>
+ <dependency>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>spring-cloud-azure-starter-jdbc-mysql</artifactId>
+ </dependency>
+ ```
1. In the *application.properties* file, remove any `spring.datasource.*` properties.
If you don't have a deployed Azure Spring Apps instance, follow the instructions
1. To ensure that the service binding is correct, select the binding name, and then verify its details. The `property` field should look like this:
- ```properties
- spring.datasource.url=jdbc:mysql://some-server.mysql.database.azure.com:3306/testdb?useSSL=true&requireSSL=false&useLegacyDatetimeCode=false&serverTimezone=UTC
- spring.datasource.username=admin@some-server
- spring.datasource.password=abc******
- spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5InnoDBDialect
- ```
+ ```properties
+ spring.datasource.url=jdbc:mysql://some-server.mysql.database.azure.com:3306/testdb?useSSL=true&requireSSL=false&useLegacyDatetimeCode=false&serverTimezone=UTC
+ spring.datasource.username=admin@some-server
+ spring.datasource.password=abc******
+ spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5InnoDBDialect
+ ```
+
+#### [Passwordless connection using a managed identity](#tab/Passwordless)
+
+Configure your Spring app to connect to an Azure Database for MySQL Flexible Server instance with a system-assigned managed identity by using the `az spring connection create` command, as shown in the following example.
+
+> [!NOTE]
+> This command requires you to run the latest [edge build of Azure CLI](https://github.com/Azure/azure-cli/blob/dev/doc/try_new_features_before_release.md). [Download and install the edge builds](https://github.com/Azure/azure-cli#edge-builds) for your platform.
+
+```azurecli
+az spring connection create mysql-flexible \
+ --resource-group $AZURE_SPRING_APPS_RESOURCE_GROUP \
+ --service $AZURE_SPRING_APPS_SERVICE_INSTANCE_NAME \
+ --app $APP_NAME \
+ --deployment $DEPLOYMENT_NAME \
+ --target-resource-group $MYSQL_RESOURCE_GROUP \
+ --server $MYSQL_SERVER_NAME \
+ --database $DATABASE_NAME \
+ --system-assigned-identity
+```
#### [Terraform](#tab/Terraform)
spring-apps How To Bind Postgres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-postgres.md
+
+ Title: How to bind an Azure Database for PostgreSQL to your application in Azure Spring Apps
+description: Learn how to bind an Azure Database for PostgreSQL instance to your application in Azure Spring Apps.
+++ Last updated : 09/26/2022+++
+# Bind an Azure Database for PostgreSQL to your application in Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ✔️ Java ❌ C#
+
+**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+
+With Azure Spring Apps, you can bind select Azure services to your applications automatically, instead of having to configure your Spring Boot application manually. This article shows you how to bind your application to your Azure Database for PostgreSQL instance.
+
+## Prerequisites
+
+* An application deployed to Azure Spring Apps. For more information, see [Quickstart: Deploy your first application to Azure Spring Apps](./quickstart.md).
+* An Azure Database for PostgreSQL Flexible Server instance.
+* [Azure CLI](/cli/azure/install-azure-cli).
+
+## Prepare your Java project
+
+1. In your project's *pom.xml* file, add the following dependency:
+
+ ```xml
+ <dependency>
+ <groupId>org.springframework.boot</groupId>
+ <artifactId>spring-boot-starter-data-jpa</artifactId>
+ </dependency>
+ <dependency>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>spring-cloud-azure-starter-jdbc-postgresql</artifactId>
+ </dependency>
+ ```
+
+1. In the *application.properties* file, remove any `spring.datasource.*` properties.
+
+1. Update the current app by running `az spring app deploy`, or create a new deployment for this change by running `az spring app deployment create`.
+
+## Bind your app to the Azure Database for PostgreSQL instance
+
+### [Using admin credentials](#tab/Secrets)
+
+1. Note the admin username and password of your Azure Database for PostgreSQL account.
+
+1. Connect to the server, create a database named **testdb** from a PostgreSQL client, and then create a new non-admin account.
+
+1. Run the following command to connect to the database with the admin username and password.
+
+ ```azurecli
+ az spring connection create postgres \
+ --resource-group $AZURE_SPRING_APPS_RESOURCE_GROUP \
+ --service $AZURE_SPRING_APPS_SERVICE_INSTANCE_NAME \
+ --app $APP_NAME \
+ --deployment $DEPLOYMENT_NAME \
+ --target-resource-group $POSTGRES_RESOURCE_GROUP \
+ --server $POSTGRES_SERVER_NAME \
+ --database testdb \
+ --secret name=$USERNAME secret=$PASSWORD
+ ```
+
+### [Using a passwordless connection with a managed identity](#tab/Passwordless)
+
+Configure Azure Spring Apps to connect to the PostgreSQL Database Single Server with a system-assigned managed identity using the `az spring connection create` command.
+
+> [!NOTE]
+> This command requires you to run the latest [edge build of Azure CLI](https://github.com/Azure/azure-cli/blob/dev/doc/try_new_features_before_release.md). [Download and install the edge builds](https://github.com/Azure/azure-cli#edge-builds) for your platform.
+
+```azurecli
+az spring connection create postgres \
+ --resource-group $SPRING_APP_RESOURCE_GROUP \
+ --service $SPRING_APP_SERVICE_NAME \
+ --app $APP_NAME --deployment $DEPLOYMENT_NAME \
+ --target-resource-group $POSTGRES_RESOURCE_GROUP \
+ --server $POSTGRES_SERVER_NAME \
+ --database $DATABASE_NAME \
+ --system-assigned-identity
+```
+++
+## Next steps
+
+In this article, you learned how to bind an application in Azure Spring Apps to an Azure Database for PostgreSQL instance. To learn more about binding services to an application, see [Bind an Azure Cosmos DB database to an application in Azure Spring Apps](./how-to-bind-cosmos.md).
static-web-apps Branch Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/branch-environments.md
Previously updated : 03/29/2022 Last updated : 09/19/2022
You can configure your site to deploy every change made to branches that aren't
## Configuration
-To enable stable URL environments, make the following changes to your [configuration file](configuration.md).
+To enable stable URL environments, make the following changes to your [configuration YAML file](build-configuration.md?tabs=github-actions).
-- Set the `production_branch` input to your production branch name on the `static-web-apps-deploy` job in GitHub action or on the AzureStaticWebApp task. This ensures changes to your production branch are deployed to the production environment, while changes to other branches are deployed to a preview environment.
+- Set the `production_branch` input to your production branch name on the `static-web-apps-deploy` job in your GitHub Actions workflow or on the `AzureStaticWebApp` task in Azure Pipelines. This setting ensures changes to your production branch are deployed to the production environment, while changes to other branches are deployed to a preview environment.
- List the branches you want to deploy to preview environments in the trigger array in your workflow configuration so that changes to those branches also trigger the GitHub Actions or Azure Pipelines deployment. - Set this array to `**` for GitHub Actions or `*` for Azure Pipelines if you want to track all branches.
steps:
In this example, the preview environments are defined for the `dev` and `staging` branches. Each branch is deployed to a separate preview environment.
-## Next Steps
+## Next steps
> [!div class="nextstepaction"] > [Create named preview environments](./named-environments.md)
static-web-apps Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/get-started-portal.md
Title: 'Quickstart: Building your first static web app with Azure Static Web Apps using the Azure portal'
+ Title: 'Quickstart: Build your first static web app'
description: Learn to deploy a static site to Azure Static Web Apps with the Azure portal. Previously updated : 05/07/2021 Last updated : 09/19/2022 zone_pivot_groups: devops-or-github
-# Quickstart: Building your first static site in the Azure portal
+# Quickstart: Build your first static web app
Azure Static Web Apps publishes a website to a production environment by building apps from an Azure DevOps or GitHub repository. In this quickstart, you deploy a web application to Azure Static Web Apps using the Azure portal.
Azure Static Web Apps publishes a website to a production environment by buildin
::: zone pivot="azure-devops" - If you don't have an Azure subscription, [create a free trial account](https://azure.microsoft.com/free).-- [Azure DevOps](https://azure.microsoft.com/services/devops) account
+- [Azure DevOps](https://azure.microsoft.com/services/devops) organization
::: zone-end ::: zone pivot="github"
After you sign in with GitHub, enter the repository information.
:::image type="content" source="media/getting-started-portal/quickstart-portal-source-control.png" alt-text="Repository details"::: > [!NOTE]
-> If you don't see any repositories, you may need to authorize Azure Static Web Apps in GitHub. Browse to your GitHub repository and go to **Settings > Applications > Authorized OAuth Apps**, select **Azure Static Web Apps**, and then select **Grant**. For organization repositories, you must be an owner of the organization to grant the permissions.
+> If you don't see any repositories:
+> - You may need to authorize Azure Static Web Apps in GitHub. Browse to your GitHub repository and go to **Settings > Applications > Authorized OAuth Apps**, select **Azure Static Web Apps**, and then select **Grant**.
+> - You may need to authorize Azure Static Web Apps in your Azure DevOps organization. You must be an owner of the organization to grant the permissions. Request third-party application access via OAuth. For more information, see [Authorize access to REST APIs with OAuth 2.0](https://learn.microsoft.com/azure/devops/integrate/get-started/authentication/oauth?view=azure-devops#2-authorize-your-app).
::: zone-end
The Static Web Apps *Overview* window displays a series of links that help you i
:::image type="content" source="./media/getting-started/overview-window.png" alt-text="The Azure Static Web Apps overview window.":::
-1. Clicking on the banner that says, _Click here to check the status of your GitHub Actions runs_ takes you to the GitHub Actions running against your repository. Once you verify the deployment job is complete, then you can navigate to your website via the generated URL.
+1. Select the banner that says, _Click here to check the status of your GitHub Actions runs_, which takes you to the GitHub Actions runs for your repository. Once you verify that the deployment job is complete, you can navigate to your website via the generated URL.
-2. Once GitHub Actions workflow is complete, you can click on the _URL_ link to open the website in new tab.
+2. Once the GitHub Actions workflow is complete, select the _URL_ link to open the website in a new tab.
::: zone-end ::: zone pivot="azure-devops"
-Once the workflow is complete, you can click on the _URL_ link to open the website in new tab.
+Once the workflow is complete, select the _URL_ link to open the website in a new tab.
::: zone-end
If you're not going to continue to use this application, you can delete the Azur
1. Open the [Azure portal](https://portal.azure.com). 1. Search for **my-first-web-static-app** from the top search bar. 1. Select the app name.
-1. Select the **Delete** button.
-1. Select **Yes** to confirm the delete action (this action may take a few moments to complete).
+1. Select **Delete**.
+1. Select **Yes** to confirm the delete action. This action may take a few moments to complete.
## Next steps
static-web-apps Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/monitor.md
Title: Monitoring Azure Static Web Apps
+ Title: Monitor Azure Static Web Apps
description: Monitor requests, failures, and tracing information in Azure Static Web Apps Previously updated : 4/23/2021 Last updated : 09/19/2022
Use the following steps to add Application Insights monitoring to your static we
:::image type="content" source="media/monitoring/azure-static-web-apps-application-insights-add.png" alt-text="Add Application Insights to Azure Static Web Apps":::
+Once you create the Application Insights instance, an associated application setting is created in the Azure Static Web Apps instance to link the two services together.
+ > [!NOTE]
-> Once you create the Application Insights instance, an associated application setting is created in the Azure Static Web Apps instance used to link the services together.
+> If you want to track how the different features of your web app are used end to end on the client side, you can insert trace calls in your JavaScript code. For more information, see [Application Insights for webpages](/azure-monitor/app/javascript?tabs=snippet).
## Access data
Use the following steps to add Application Insights monitoring to your static we
1. From the list, select the Application Insights instance prefixed with the same name as your static web app.
-The following highlights a few locations in the portal used to inspect aspects of your application's API endpoints.
+The following table highlights a few locations in the portal you can use to inspect aspects of your application's API endpoints.
> [!NOTE]
-> For more detail on Application Insights usage, refer to [Application Insights overview](../azure-monitor/app/app-insights-overview.md).
+> For more information on Application Insights usage, see the [Application Insights overview](../azure-monitor/app/app-insights-overview.md).
| Type | Menu location | Description | | | | |
Use the following steps to view traces in your application.
## Limit logging
-In some cases you may want to limit logging while still capturing details on errors and warnings by making the following changes to your _host.json_ file of the Azure Functions app.
+In some cases, you may want to limit logging while still capturing details on errors and warnings. You can do so by making the following changes to your _host.json_ file of the Azure Functions app.
```json {
In some cases you may want to limit logging while still capturing details on err
## Next steps > [!div class="nextstepaction"]
-> [Setup authentication and authorization](authentication-authorization.md)
+> [Set up authentication and authorization](authentication-authorization.md)
storage Data Lake Storage Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-known-issues.md
If parent directories for soft-deleted files or directories are renamed, the sof
## Events
-If your account has an event subscription, read operations on the secondary endpoint will result in an error. To resolve this issue, remove event subscriptions.
+If your account has an event subscription, read operations on the secondary endpoint will result in an error. To resolve this issue, remove event subscriptions. Using the DFS endpoint (`abfss://` URI) for accounts that don't have a hierarchical namespace enabled won't generate events, but the Blob endpoint (`wasb://` URI) will generate events.
> [!TIP] > Read access to the secondary endpoint is available only when you enable read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS).
storage Storage Blob Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-dotnet-get-started.md
public static void GetBlobServiceClient(ref BlobServiceClient blobServiceClient,
You can also create a [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) by using a connection string. ```csharp
- BlobServiceClient blobServiceClient = new BlobServiceClient(connectionString);
+BlobServiceClient blobServiceClient = new BlobServiceClient(connectionString);
``` For information about how to obtain account keys and best practice guidelines for properly managing and safeguarding your keys, see [Manage storage account access keys](../common/storage-account-keys-manage.md).
The following guides show you how to use each of these classes to build your app
- [Samples](../common/storage-samples-dotnet.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#blob-samples) - [API reference](/dotnet/api/azure.storage.blobs) - [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Azure.Storage.Blobs)-- [Give Feedback](https://github.com/Azure/azure-sdk-for-net/issues)
+- [Give Feedback](https://github.com/Azure/azure-sdk-for-net/issues)
storage Storage Retry Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-retry-policy.md
+
+ Title: Implement a retry policy for .NET
+
+description: Learn about retry policies and how to implement them for Blob Storage. This article helps you set up a retry policy for Blob Storage requests using the .NET v12 SDK.
++++ Last updated : 09/26/2022+++
+# Implement a retry policy for .NET
+
+Any application that runs in the cloud or communicates with remote services and resources must be able to handle transient faults. It's common for these applications to experience faults due to a momentary loss of network connectivity, a request timeout when a service or resource is busy, or other factors. Developers should build applications to handle transient faults transparently to improve stability and resiliency.
+
+This article shows you how to use .NET client libraries to set up a retry policy for an application that connects to Azure Blob Storage. Retry policies define how the application handles failed requests, and should always be tuned to match the business requirements of the application and the nature of the failure.
+
+> [!NOTE]
+> The examples in this article assume that you're working with an existing app or that you've created a sample console app using the guidance in the [Get started with Azure Blob Storage and .NET](storage-blob-dotnet-get-started.md) article.
+
+## Configure retry options
+Retry policies for Blob Storage are configured programmatically, offering control over how retry options are applied to various service requests and scenarios. For example, a web app issuing requests based on user interaction might implement a policy with fewer retries and shorter delays to increase responsiveness and notify the user when an error occurs. Alternatively, an app or component running batch requests in the background might increase the number of retries and use an exponential backoff strategy to allow the request time to complete successfully.
+
+In this example for Blob Storage, we'll configure the retry options in the `Retry` property of the [BlobClientOptions](/dotnet/api/azure.storage.blobs.blobclientoptions) class. Then, we'll create a client object for the blob service using the retry options.
++
+In the code above, each service request issued from the `BlobServiceClient` object will use the retry options as defined in the `BlobClientOptions` object. You can configure various retry strategies for service clients based on the needs of your app.
+
+## Use geo-redundancy to improve app resiliency
+If your app requires high availability and greater resiliency against failures, you can use Azure Storage geo-redundancy options as part of your retry policy. Storage accounts configured for geo-redundant replication are synchronously replicated in the primary region, and asynchronously replicated to a secondary region that is hundreds of miles away.
+
+Azure Storage offers two options for geo-redundant replication: [Geo-redundant storage (GRS)](../common/storage-redundancy.md#geo-redundant-storage) and [Geo-zone-redundant storage (GZRS)](../common/storage-redundancy.md#geo-zone-redundant-storage). In addition to enabling geo-redundancy for your storage account, you also need to configure read access to the data in the secondary region. To learn how to change replication options for your storage account, see [Change how a storage account is replicated](../common/redundancy-migration.md).
+
+In this example, we set the `GeoRedundantSecondaryUri` property in `BlobClientOptions`. When this property is set and a read request failure occurs in the primary region, the app will seamlessly switch to perform retries against the secondary region endpoint.
++
+Apps that make use of geo-redundancy need to keep in mind some specific design considerations. To learn more, see [Use geo-redundancy to design highly available applications](../common/geo-redundant-design.md).
+
+## Next steps
+Now that you understand how to implement a retry policy using the .NET client library, see the following articles for more detailed architectural guidance:
+- For architectural guidance and general best practices for retry policies, see [Transient fault handling](/azure/architecture/best-practices/transient-faults).
+- For guidance on implementing a retry pattern for transient failures, see [Retry pattern](/azure/architecture/patterns/retry).
storsimple Storsimple Update52 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-update52-release-notes.md
+
+ Title: StorSimple 8000 Series Update 5.2 release notes
+description: Describes new features, issues, and workarounds for StorSimple 8000 Series Update 5.2.
+
+ms.assetid:
++ Last updated : 09/26/2022+++
+# StorSimple 8000 Series Update 5.2 release notes
++
+## Overview
+
+The following release notes describe new features and identify critical open issues for StorSimple 8000 Series Update 5.2. They also contain a list of the StorSimple software updates included in this release.
+
+The release notes are continuously updated. As critical issues are discovered, they're added to the update. Before you deploy StorSimple 8000 Series, carefully review the information contained in these release notes.
+
+Update 5.2 corresponds to software version 6.3.9600.17886.
+
+> [!IMPORTANT]
+>
+> * Update 5.2 is a mandatory security update. It must be installed immediately to ensure the operation of the device. Microsoft implements a phased rollout, so your device might not detect all available updates immediately. To ensure a complete update to 5.2, wait a few days and then scan for updates again.
+> * If you're not notified about Update 5.2 via a banner in the Azure portal UI, contact Microsoft Support.
+
+## What's new in Update 5.2
+
+* **Automatic remediation for failed backups caused by a device controller left active for long periods.** When a device controller is continuously active for a long period (more than a year), scheduled and manually triggered backups may fail. No alert or other notification is raised in the Azure portal. The only way to recover is to initiate a controller failover. Update 5.2 detects this condition and remediates it by initiating a controller failover. An alert informs the customer.
+
+* **Reliability issue fixed in the backup code path.** In a rare scenario, the issue could cause a backup to be corrupted.
+
+* **Issue with Local Only volume conversion fixed.** In earlier releases, Local Only volume conversion might get stuck if the system restarts during a specific window of the conversion.
+
+* **SHA 256 hashing algorithm is supported for the remote management certificate.** Remote management certificates are used when connecting to the PowerShell interface of the appliance, or during a support session using remote PowerShell over Secure Sockets Layer (SSL). Earlier releases use an SHA 128 hashing algorithm, which is considered weak. Update 5.2 uses SHA 256, which is considered more secure.
+
+## Install Update 5.2
+
+Use the following steps to install Update 5.2:
+
+1. [Connect to Windows PowerShell on the StorSimple 8000 series device](storsimple-8000-deployment-walkthrough-u2.md#use-putty-to-connect-to-the-device-serial-console), or connect directly to the appliance via serial cable.
+
+1. Use [Start-HcsUpdate](/powershell/module/hcs/start-hcsupdate.md?view=winserver2012r2-ps&preserve-view=true) to update the device. For detailed steps, see [Install regular updates via Windows PowerShell](storsimple-update-device.md#to-install-regular-updates-via-windows-powershell-for-storsimple). This update is non-disruptive.
+
+1. If `Start-HcsUpdate` doesn't work because of firewall issues, contact Microsoft Support.
+
+## Verify the updates
+
+To verify Update 5.2, check for these software versions after installation:
+
+* FriendlySoftwareVersion: StorSimple 8000 Series Update 5.2
+* HcsSoftwareVersion: 6.3.9600.17886
+* CisAgentVersion: 1.0.9777.0
+* MdsAgentVersion: 35.2.2.0
+* Lsisas2Version: 2.0.78.00
+
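+As a rough sketch, you can retrieve these values from the Windows PowerShell interface of the device with the HCS cmdlet below (property names in the output may differ slightly between releases):
+
+```powershell
+# Run from the StorSimple device serial console or a remote PowerShell session.
+# Get-HcsSystem reports the installed software versions, including
+# FriendlySoftwareVersion and HcsSoftwareVersion.
+Get-HcsSystem
+```
+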
+## Next steps
+
+Install StorSimple 8000 Series Update 5.2. Steps to install Update 5.2 are largely the same as for installation of Update 5.1. For more information, see detailed steps in [Installing via the hotfix method](storsimple-8000-install-update-51.md).
stream-analytics Cosmos Db Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/cosmos-db-managed-identity.md
Previously updated : 08/30/2022 Last updated : 09/22/2022
New-AzCosmosDBSqlRoleAssignment -AccountName $accountName -ResourceGroupName $re
``` > [!NOTE]
-> Due to global replication or caching latency, there may be a delay when permissions are revoked or granted. Changes should be reflected within 8 minutes.
+> Due to global replication or caching latency, there may be a delay when permissions are revoked or granted. Changes should be reflected within 10 minutes. Even if the test connection passes initially, jobs may fail if they're started before the permissions fully propagate.
### Add the Cosmos DB as an output
synapse-analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new.md
The following table lists the features of Azure Synapse Analytics that have tran
| July 2022 | **Apache Spark in Azure Synapse Intelligent Cache feature** | Intelligent Cache for Spark automatically stores each read within the allocated cache storage space, detecting underlying file changes and refreshing the files to provide the most recent data. To learn more, see how to [Enable/Disable the cache for your Apache Spark pool](./spark/apache-spark-intelligent-cache-concept.md).| | June 2022 | **Map Data tool** | The Map Data tool is a guided process to help you create ETL mappings and mapping data flows from your source data to Synapse without writing code. To learn more about the Map Data tool, read [Map Data in Azure Synapse Analytics](./database-designer/overview-map-data.md).| | June 2022 | **User Defined Functions** | User defined functions (UDFs) are now generally available. To learn more, read [User defined functions in mapping data flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-user-defined-functions-preview-for-mapping-data/ba-p/3414628). |
-| May 2022 | **Azure Synapse Data Explorer connector for Power Automate, Logic Apps, and Power Apps** | The Azure Data Explorer connector for Power Automate enables you to orchestrate and schedule flows, send notifications, and alerts, as part of a scheduled or triggered task. To learn more, read [Azure Data Explorer connector for Microsoft Power Automate](/data-explorer/flow) and [Usage examples for Azure Data Explorer connector to Power Automate](/azure/data-explorer/flow-usage). |
+| May 2022 | **Azure Synapse Data Explorer connector for Power Automate, Logic Apps, and Power Apps** | The Azure Data Explorer connector for Power Automate enables you to orchestrate and schedule flows, send notifications, and alerts, as part of a scheduled or triggered task. To learn more, read [Azure Data Explorer connector for Microsoft Power Automate](/azure/data-explorer/flow) and [Usage examples for Azure Data Explorer connector to Power Automate](/azure/data-explorer/flow-usage). |
| April 2022 | **Cross-subscription restore for Azure Synapse SQL** | With the PowerShell `Az.Sql` module 3.8 update, the [Restore-AzSqlDatabase](/powershell/module/az.sql/restore-azsqldatabase) cmdlet can be used for cross-subscription restore of dedicated SQL pools. To learn more, see [Blog: Restore a dedicated SQL pool (formerly SQL DW) to a different subscription](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-april-update-2022/ba-p/3280185). This feature is now generally available for dedicated SQL pools (formerly SQL DW) and dedicated SQL pools in a Synapse workspace. [What's the difference?](https://aka.ms/dedicatedSQLpooldiff) | | April 2022 | **Database Designer** | The database designer allows users to visually create databases within Synapse Studio without writing a single line of code. For more information, see [Announcing General Availability of Database Designer](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/announcing-general-availability-of-database-designer-amp/ba-p/3294234). Read more about [lake databases](database-designer/concepts-lake-database.md) and learn [How to modify an existing lake database using the database designer](database-designer/modify-lake-database.md).| | April 2022 | **Database Templates** | New industry-specific database templates were introduced in the [Synapse Database Templates General Availability blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-database-templates-general-availability-and-new-synapse/ba-p/3289790). Learn more about [Database templates](database-designer/concepts-database-templates.md) and [the improved exploration experience](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-april-update-2022/ba-p/3295633#TOCREF_5).|
This section summarizes recent new security features and settings in Azure Synap
| August 2022 | **Execute Azure Synapse Spark Notebooks with system-assigned managed identity** | You can [now execute Spark Notebooks with the system-assigned managed identity (or workspace managed identity)](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-august-update-2022/ba-p/3535126#TOCREF_30) by enabling *Run as managed identity* from the **Configure** session menu. With this feature, you will be able to validate that your notebook works as expected when using the system-assigned managed identity, before using the notebook in a pipeline. For more information, see [Managed identity for Azure Synapse](synapse-service-identity.md).| | July 2022 | **Changes to permissions needed for publishing to Git** | Now, only Git permissions and the Synapse Artifact Publisher (Synapse RBAC) role are needed to commit changes in Git-mode. For more information, see [Access control enforcement in Synapse Studio](security/synapse-workspace-access-control-overview.md#access-control-enforcement-in-synapse-studio).| | April 2022 | **Synapse Monitoring Operator RBAC role** | The Synapse Monitoring Operator role-based access control (RBAC) role allows a user persona to monitor the execution of Synapse Pipelines and Spark applications without having the ability to run or cancel the execution of these applications. For more information, review the [Synapse RBAC Roles](security/synapse-workspace-synapse-rbac-roles.md).|
-| March 2022 | **Enforce minimal TLS version** | You can now raise or lower the minimum TLS version for dedicated SQL pools in Synapse workspaces. To learn more, see [Azure SQL connectivity settings](/sql/azure-sql/database/connectivity-settings#minimal-tls-version). The [workspace managed SQL API](/rest/api/synapse/sqlserver/workspace-managed-sql-server-dedicated-sql-minimal-tls-settings/update) can be used to modify the minimum TLS settings.|
+| March 2022 | **Enforce minimal TLS version** | You can now raise or lower the minimum TLS version for dedicated SQL pools in Synapse workspaces. To learn more, see [Azure SQL connectivity settings](/azure/azure-sql/database/connectivity-settings#minimal-tls-version). The [workspace managed SQL API](/rest/api/synapse/sqlserver/workspace-managed-sql-server-dedicated-sql-minimal-tls-settings/update) can be used to modify the minimum TLS settings.|
| March 2022 | **Azure Synapse Analytics now supports Azure Active Directory (Azure AD) only authentication** | You can now use Azure Active Directory authentication to centrally manage access to all Azure Synapse resources, including SQL pools. You can [disable local authentication](sql/active-directory-authentication.md#disable-local-authentication) upon creation or after a workspace is created through the Azure portal.| | December 2021 | **User-Assigned managed identities** | Now you can use user-assigned managed identities in linked services for authentication in Synapse Pipelines and Dataflows. To learn more, see [Credentials in Azure Data Factory and Azure Synapse](../data-factory/credentials.md?context=%2Fazure%2Fsynapse-analytics%2Fcontext%2Fcontext&tabs=data-factory).| | December 2021 | **Browse ADLS Gen2 folders in the Azure Synapse Analytics workspace** | You can now [browse and secure an Azure Data Lake Storage Gen2 (ADLS Gen2) container or folder](how-to-access-container-with-access-control-lists.md) in your Azure Synapse Analytics workspace by connecting to a specific container or folder in Synapse Studio.|
virtual-desktop Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/authentication.md
Previously updated : 08/25/2022 Last updated : 09/22/2022
Once you're connected to your remote app or desktop, you may be prompted for aut
### In-session passwordless authentication (preview) > [!IMPORTANT]
-> In-session passwordless authentication is currently in Insider preview.
+> In-session passwordless authentication is currently in public preview.
> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-Azure Virtual Desktop supports in-session passwordless authentication (preview) using [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview) or security devices like FIDO keys. Passwordless authentication is currently only available for certain versions of Windows Insider. When deploying new session hosts, choose one of the following images:
+Azure Virtual Desktop supports in-session passwordless authentication (preview) using [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview) or security devices like FIDO keys when using the [Windows Desktop client](user-documentation/connect-windows-7-10.md). Passwordless authentication is enabled automatically when the session host and local PC are using the following operating systems:
-- Windows 11 version 22H2 Enterprise, (Preview) - X64 Gen 2.-- Windows 11 version 22H2 Enterprise multi-session, (Preview) - X64 Gen2.
+ - Windows 11 Enterprise single or multi-session with the [2022-09 Cumulative Updates for Windows 11 Preview (KB5017383)](https://support.microsoft.com/kb/KB5017383) or later installed.
+ - Windows 10 Enterprise single or multi-session, versions 20H2 or later with the [2022-09 Cumulative Updates for Windows 10 Preview (KB5017380)](https://support.microsoft.com/kb/KB5017380) or later installed.
+ - Windows Server 2022 with the [2022-09 Cumulative Update for Microsoft server operating system preview (KB5017381)](https://support.microsoft.com/kb/KB5017381) or later installed.
-Passwordless authentication is enabled by default when the local PC and session hosts use one of the supported operating systems above. You can disable it using the [WebAuthn redirection](configure-device-redirections.md#webauthn-redirection) RDP property.
+To disable passwordless authentication on your host pool, you must [customize an RDP property](customize-rdp-properties.md). You can find the **WebAuthn redirection** property under the **Device redirection** tab in the Azure portal or set the **redirectwebauthn** property to **0** using PowerShell.
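+As a rough PowerShell sketch, assuming the Az.DesktopVirtualization module and placeholder resource group and host pool names (note that `-CustomRdpProperty` replaces the host pool's existing custom RDP properties, so include any other properties you already use in the same string):
+
+```powershell
+# Disable WebAuthn redirection so passwordless authentication requests aren't redirected to the local PC.
+Update-AzWvdHostPool -ResourceGroupName "<resource-group>" `
+                     -Name "<host-pool>" `
+                     -CustomRdpProperty "redirectwebauthn:i:0"
+```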
When enabled, all WebAuthn requests in the session are redirected to the local PC. You can use Windows Hello for Business or locally attached security devices to complete the authentication process.
virtual-desktop Configure Single Sign On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-single-sign-on.md
Previously updated : 08/25/2022 Last updated : 09/22/2022 # Configure single sign-on for Azure Virtual Desktop > [!IMPORTANT]
-> Single sign-on using Azure AD authentication is currently in Insider preview.
+> Single sign-on using Azure AD authentication is currently in public preview.
> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
This article will walk you through the process of configuring single sign-on (SS
## Prerequisites
-Single sign-on is currently only available for certain versions of Windows Insider. When deploying new session hosts, you must choose one of the following images:
+Single sign-on is available on session hosts using the following operating systems:
- - Windows 11 version 22H2 Enterprise, (Preview) - X64 Gen 2.
- - Windows 11 version 22H2 Enterprise multi-session, (Preview) - X64 Gen2.
+ - Windows 11 Enterprise single or multi-session with the [2022-09 Cumulative Updates for Windows 11 Preview (KB5017383)](https://support.microsoft.com/kb/KB5017383) or later installed.
+ - Windows 10 Enterprise single or multi-session, versions 20H2 or later with the [2022-09 Cumulative Updates for Windows 10 Preview (KB5017380)](https://support.microsoft.com/kb/KB5017380) or later installed.
+ - Windows Server 2022 with the [2022-09 Cumulative Update for Microsoft server operating system preview (KB5017381)](https://support.microsoft.com/kb/KB5017381) or later installed.
You can enable SSO for connections to Azure Active Directory (AD)-joined VMs. You can also use SSO to access Hybrid Azure AD-joined VMs, but only after creating a Kerberos Server object. Azure Virtual Desktop doesn't support this solution with VMs joined to Azure AD Domain Services.
-> [!NOTE]
-> Hybrid Azure AD-joined Windows Server 2019 VMs don't support SSO.
-
-Currently, the [Windows Desktop client](./user-documentation/connect-windows-7-10.md) is the only client that supports SSO. The local PC must be running Windows 10 or later. There's no domain join requirement for the local PC.
+You can use the [Windows Desktop client](user-documentation/connect-windows-7-10.md) on local PCs running Windows 10 or later. There's no requirement for the local PC to be joined to a domain or Azure AD. You can also have a single sign-on experience when using the [web client](user-documentation/connect-web.md).
SSO is currently supported in the Azure Public cloud.
If your host pool contains Hybrid Azure AD-joined session hosts, you must first enable Azure AD Kerberos in your environment by creating a Kerberos Server object. Azure AD Kerberos enables the authentication needed with the domain controller. We recommended you also enable Azure AD Kerberos for Azure AD-joined session hosts if you have a Domain Controller (DC). Azure AD Kerberos provides a single sign-on experience when accessing legacy Kerberos-based applications or network shares. To enable Azure AD Kerberos in your environment, follow the steps to [Create a Kerberos Server object](../active-directory/authentication/howto-authentication-passwordless-security-key-on-premises.md#create-a-kerberos-server-object) on your DC.
-To enable SSO on your host pool, you must [customize an RDP property](customize-rdp-properties.md). You can find the **Azure AD Authentication** property under the **Connection information** tab in the Azure portal or set the **enablerdsaadauth:i:1** property using PowerShell.
+To enable SSO on your host pool, you must [customize an RDP property](customize-rdp-properties.md). You can find the **Azure AD Authentication** property under the **Connection information** tab in the Azure portal or set the **enablerdsaadauth** property to **1** using PowerShell.
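+As a rough PowerShell sketch, assuming the Az.DesktopVirtualization module and placeholder resource group and host pool names (note that `-CustomRdpProperty` replaces the host pool's existing custom RDP properties, so include any other properties you already use in the same string):
+
+```powershell
+# Enable Azure AD authentication for RDP connections to the host pool.
+Update-AzWvdHostPool -ResourceGroupName "<resource-group>" `
+                     -Name "<host-pool>" `
+                     -CustomRdpProperty "enablerdsaadauth:i:1"
+```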
> [!IMPORTANT] > If you enable SSO on your Hybrid Azure AD-joined VMs before you create the Kerberos server object, you won't be able to connect to the VMs, and you'll see an error message saying the specific log on session doesn't exist.
When enabling single sign-on, you'll currently be prompted to authenticate to Az
- Check out [In-session passwordless authentication (preview)](authentication.md#in-session-passwordless-authentication-preview) to learn how to enable passwordless authentication. - If you're accessing Azure Virtual Desktop from our Windows Desktop client, see [Connect with the Windows Desktop client](./user-documentation/connect-windows-7-10.md).
+- If you're accessing Azure Virtual Desktop from our web client, see [Connect with the web client](./user-documentation/connect-web.md).
- If you encounter any issues, go to [Troubleshoot connections to Azure AD-joined VMs](troubleshoot-azure-ad-connections.md).
virtual-desktop Connection Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/connection-latency.md
Title: Azure Virtual Desktop user connection latency - Azure
-description: Connection latency for Azure Virtual Desktop users.
+ Title: Azure Virtual Desktop user connection quality - Azure
+description: Connection quality for Azure Virtual Desktop users.
Previously updated : 03/16/2022 Last updated : 09/26/2022 # Connection quality in Azure Virtual Desktop
+>[!IMPORTANT]
+>The Connection Graphics Data Logs are currently in preview. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+ Azure Virtual Desktop helps users host client sessions on their session hosts running on Azure. When a user starts a session, they connect from their end-user device, also known as a "client," over a network to access the session host. It's important that the user experience feels as much like a local session on a physical device as possible. In this article, we'll talk about how you can measure and improve the connection quality of your end-users. There are currently two ways you can analyze connection quality in your Azure Virtual Desktop deployment: Azure Log Analytics and Azure Front Door. This article will describe how to use each method to optimize graphics quality and improve end-user experience. ## Monitor connection quality with Azure Log Analytics
-If you're already using [Azure Log Analytics](diagnostics-log-analytics.md), you can monitor network data with the Azure Virtual Desktop connection network data diagnostics. The connection network data Log Analytics collects can help you discover areas that impact your end-user's graphical experience. The service collects data for reports regularly throughout the session. Azure Virtual Desktop connection network data reports have the following advantages over RemoteFX network performance counters:
+If you're already using [Azure Log Analytics](diagnostics-log-analytics.md), you can monitor network and graphics data for Azure Virtual Desktop connections. The connection network and graphics data Log Analytics collects can help you discover areas that impact your end-user's graphical experience. The service collects data for reports regularly throughout the session. Azure Virtual Desktop connection network data reports have the following advantages over RemoteFX network performance counters:
- Each record is connection-specific and includes the correlation ID of the connection that can be tied back to the user. - The round trip time measured in this table is protocol-agnostic and will record the measured latency for Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) connections.
-To start collecting this data, you'll need to make sure you have diagnostics and the **NetworkData** table enabled in your Azure Virtual Desktop host pools.
+To start collecting this data, you'll need to make sure you have diagnostics and the **Network Data Logs** and **Connection Graphics Data Logs Preview** tables enabled in your Azure Virtual Desktop host pools.
+
+>[!NOTE]
+>Normal storage charges for Log Analytics will apply. Learn more at [Azure Monitor Logs pricing details](../azure-monitor/logs/cost-logs.md).
To check and modify your diagnostics settings in the Azure portal:
3. Select **Diagnostic settings**, then create a new setting if you haven't configured your diagnostic settings yet. If you've already configured your diagnostic settings, select **Edit setting**.
-4. Select **allLogs** or select the names of the diagnostics tables you want to collect data for, including **NetworkData**. The *allLogs* parameter will automatically add new tables to your data table in the future.
+4. Select **allLogs** or select the names of the diagnostics tables you want to collect data for, including **Network Data Logs** and **Connection Graphics Data Logs Preview**. The *allLogs* parameter will automatically add new tables to your data table in the future.
5. Select where you want to send the collected data. Azure Virtual Desktop Insights users should select a Log Analytics workspace.
7. Repeat this process for all other host pools you want to measure.
-8. Make sure the network data is going to your selected destination by returning to the host pool's resource page, selecting **Logs**, then running one of the queries in [Sample queries for Azure Log Analytics](#sample-queries-for-azure-log-analytics). In order for your query to get results, your host pool must have active users who have been connecting to sessions. Keep in mind that it can take up to 15 minutes for network data to appear in the Azure portal.
+8. Make sure the network data is going to your selected destination by returning to the host pool's resource page, selecting **Logs**, then running one of the queries in [Sample queries for Azure Log Analytics](#sample-queries-for-azure-log-analytics-network-data). In order for your query to get results, your host pool must have active users who have been connecting to sessions. Keep in mind that it can take up to 15 minutes for network data to appear in the Azure portal.
+
+ To check network data, return to the host pool's resource page, select **Logs**, then run one of the queries in [Sample queries for Azure Log Analytics](connection-latency.md#sample-queries-for-azure-log-analytics-network-data). In order for your query to get results, your host pool must have active users who've connected to sessions before. Keep in mind that it can take up to 15 minutes for network data to appear in the Azure portal.
### Connection network data
The network data you collect for your data tables includes the following informa
- The **estimated available bandwidth (kilobytes per second)** is the average estimated available network bandwidth during each connection time interval. -- The **estimated round trip time (milliseconds)**, which is the average estimated round trip time during each connection time interval. Round trip time is how long it takes a network request takes to go from the end-user's device over the network to the session host, then return to the device.
+- The **estimated round trip time (milliseconds)** is the average estimated round trip time during each connection time interval. Round trip time is how long a network request takes to go from the end-user's device to the session host through the network, then return from the session host to the end-user device.
+
+- The **Correlation ID** is the ActivityId of a specific Azure Virtual Desktop connection that's assigned to every diagnostic within that connection.
+
+- The **time generated** is a timestamp in Coordinated Universal Time (UTC) time that marks when an event the data counter is tracking happened on the virtual machine (VM). All averages are measured by the time window that ends at the marked timestamp.
+
+- The **Resource ID** is a unique ID assigned to the Azure Virtual Desktop host pool associated with the data the diagnostics service collects for this table.
+
+- The **source system**, **Subscription ID**, **Tenant ID**, and **type** (table name).
+
+#### Frequency
+
+The service generates these network data points every two minutes during an active session.
+
+### Connection graphics data (preview)
+
+You should consult the Graphics data table (preview) when users report slow or choppy experiences in their Azure Virtual Desktop sessions. The Graphics data table will give you useful information whenever graphical indicators, end-to-end delay, and dropped frames percentage fall below the "healthy" threshold for Azure Virtual Desktop. This table will help your admins track and understand factors across the server, client, and network that could be contributing to the user's slow or choppy experience. However, because the Graphics data table isn't populated at regular intervals throughout a session, it's a useful troubleshooting tool but not a reliable baseline for your environment.
+
+The Graphics table only captures performance data from the Azure Virtual Desktop graphics stream. This table doesn't capture performance degradation or "slowness" caused by application-specific factors or the virtual machine (CPU or storage constraints). You should use this table with other VM performance metrics to determine if the delay is caused by the remote desktop service (graphics and network) or something inherent in the VM or app itself.
+
+The graphics data you collect for your data tables includes the following information:
+
+- The **Last evaluated connection time interval** is the two minutes leading up to the time graphics indicators fell below the quality threshold.
-- The **Correlation ID**, which is the activity ID of a specific Azure Virtual Desktop connection that's assigned to every diagnostic within that connection.
+- The **end-to-end delay (milliseconds)** is the time between when a frame is captured on the server and when it's rendered on the client, measured as the sum of the encoding delay on the server, the network delay, the decoding delay on the client, and the rendering time on the client. The delay reflected is the highest (worst) delay recorded in the last evaluated connection time interval.
+
+- The **compressed frame size (bytes)** is the compressed size of the frame with the highest end-to-end delay in the last evaluated connection time interval.
+
+- The **encoding delay on the server (milliseconds)** is the time it takes to encode the frame with the highest end-to-end delay in the last evaluated connection time interval on the server.
+
+- The **decoding delay on the client (milliseconds)** is the time it takes to decode the frame with the highest end-to-end delay in the last evaluated connection time interval on the client.
+
+- The **rendering delay on the client (milliseconds)** is the time it takes to render the frame with the highest end-to-end delay in the last evaluated connection time interval on the client.
+
+- The **percentage of frames skipped** is the total percentage of frames dropped by these three sources:
+
+ - The client (slow client decoding).
+ - The network (insufficient network bandwidth).
+ - The server (the server is busy).
+
+ The recorded values (one each for client, server, and network) are from the second with the highest dropped frames in the last evaluated connection time interval.
+
+- The **estimated available bandwidth (kilobytes per second)** is the average estimated available network bandwidth during the second with the highest end-to-end delay in the time interval.
+
+- The **estimated round trip time (milliseconds)**, which is the average estimated round trip time during the second with the highest end-to-end delay in the time interval. Round trip time is how long a network request takes to go from the end-user's device to the session host through the network, then return from the session host to the end-user device.
+
+- The **Correlation ID**, which is the ActivityId of a specific Azure Virtual Desktop connection that's assigned to every diagnostic within that connection.
- The **time generated**, which is a timestamp in UTC time that marks when an event the data counter is tracking happened on the virtual machine (VM). All averages are measured by the time window that ends at the marked timestamp. -- The **Resource ID**, which is a unique ID assigned to the Azure Virtual Desktop host pool associated with the data the diagnostics service collects for this table.
+- The **Resource ID** is a unique ID assigned to the Azure Virtual Desktop host pool associated with the data the diagnostics service collects for this table.
- The **source system**, **Subscription ID**, **Tenant ID**, and **type** (table name).
-## Sample queries for Azure Log Analytics
+#### Frequency
+
+In contrast to other diagnostics tables that report data at regular intervals throughout a session, the frequency of data collection for the graphics data varies depending on the graphical health of a connection. The table won't record data for "Good" scenarios, but will record data if any of the following metrics are rated "Bad" or "Okay," and the resulting data will be sent to your storage account. Data is recorded at most once every two minutes. The metrics involved in data collection are listed in the following table:
+
+| Metric | Bad | Okay | Good |
+|--|--|--|--|
+| Percentage of dropped frames with low frame rate (less than 15 fps) | Greater than 15% | 10%–15% | Less than 10% |
+| Percentage of dropped frames with high frame rate (greater than 15 fps) | Greater than 50% | 20%–50% | Less than 20% |
+| End-to-end delay per frame | Greater than 300 ms | 150 ms–300 ms | Less than 150 ms |
+
+>[!NOTE]
+>For end-to-end delay per frame, if any frame in a single second is delayed by over 300 ms, the service registers it as "Bad". If all frames in a single second take between 150 ms and 300 ms, the service marks it as "Okay."
+
+## Sample queries for Azure Log Analytics: network data
In this section, we have a list of queries that will help you review connection quality information. You can run queries in the [Log Analytics query editor](../azure-monitor/logs/log-analytics-tutorial.md#write-a-query).
virtual-desktop Store Fslogix Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/store-fslogix-profile.md
The following tables compare the storage solutions Azure Storage offers for Azur
|Use case|General purpose|Ultra performance or migration from NetApp on-premises|Cross-platform| |Platform service|Yes, Azure-native solution|Yes, Azure-native solution|No, self-managed| |Regional availability|All regions|[Select regions](https://azure.microsoft.com/global-infrastructure/services/?products=netapp&regions=all)|All regions|
-|Redundancy|Locally redundant/zone-redundant/geo-redundant/geo-zone-redundant|Locally redundant|Locally redundant/zone-redundant/geo-redundant|
-|Tiers and performance| Standard (Transaction optimized)<br>Premium<br>Up to max 100K IOPS per share with 10 GBps per share at about 3 ms latency|Standard<br>Premium<br>Ultra<br>Up to 4.5GBps per volume at about 1 ms latency. For IOPS and performance details, see [Azure NetApp Files performance considerations](../azure-netapp-files/azure-netapp-files-performance-considerations.md) and [the FAQ](../azure-netapp-files/faq-performance.md#how-do-i-convert-throughput-based-service-levels-of-azure-netapp-files-to-iops).|Standard HDD: up to 500 IOPS per-disk limits<br>Standard SSD: up to 4k IOPS per-disk limits<br>Premium SSD: up to 20k IOPS per-disk limits<br>We recommend Premium disks for Storage Spaces Direct|
+|Redundancy|Locally redundant/zone-redundant/geo-redundant/geo-zone-redundant|Locally redundant/geo-redundant [with cross-region replication](/azure/azure-netapp-files/cross-region-replication-introduction)|Locally redundant/zone-redundant/geo-redundant|
+|Tiers and performance| Standard (Transaction optimized)<br>Premium<br>Up to max 100K IOPS per share with 10 GBps per share at about 3 ms latency|Standard<br>Premium<br>Ultra<br>Up to 4.5GBps per volume at about 1 ms latency. For IOPS and performance details, see [Azure NetApp Files performance considerations](/azure/azure-netapp-files/azure-netapp-files-performance-considerations) and [the FAQ](/azure/azure-netapp-files/faq-performance#how-do-i-convert-throughput-based-service-levels-of-azure-netapp-files-to-iops).|Standard HDD: up to 500 IOPS per-disk limits<br>Standard SSD: up to 4k IOPS per-disk limits<br>Premium SSD: up to 20k IOPS per-disk limits<br>We recommend Premium disks for Storage Spaces Direct|
|Capacity|100 TiB per share, Up to 5 PiB per general purpose account |100 TiB per volume, up to 12.5 PiB per subscription|Maximum 32 TiB per disk| |Required infrastructure|Minimum share size 1 GiB|Minimum capacity pool 4 TiB, min volume size 100 GiB|Two VMs on Azure IaaS (+ Cloud Witness) or at least three VMs without and costs for disks| |Protocols|SMB 3.0/2.1, NFSv4.1 (preview), REST|NFSv3, NFSv4.1 (preview), SMB 3.x/2.x|NFSv3, NFSv4.1, SMB 3.1|
The following table lists our recommendations for which performance tier to use
For more information about Azure Files performance, see [File share and file scale targets](../storage/files/storage-files-scale-targets.md#azure-files-scale-targets). For more information about pricing, see [Azure Files pricing](https://azure.microsoft.com/pricing/details/storage/files/).
+## Azure NetApp Files tiers
+
+Azure NetApp Files volumes are organized in capacity pools. Volume performance is defined by the service level of the hosting capacity pool. Three service levels are offered: Ultra, Premium, and Standard. For more information, see [Storage hierarchy of Azure NetApp Files](/azure/azure-netapp-files/azure-netapp-files-understand-storage-hierarchy).
+ ## Next steps To learn more about FSLogix profile containers, user profile disks, and other user profile technologies, see the table in [FSLogix profile containers and Azure Files](fslogix-containers-azure-files.md).
virtual-desktop Troubleshoot Connection Quality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-connection-quality.md
Title: Troubleshoot Azure Virtual Desktop connection quality
description: How to troubleshoot connection quality issues in Azure Virtual Desktop. Previously updated : 03/16/2022 Last updated : 09/26/2022
To reduce round trip time:
The [Azure Virtual Desktop Experience Estimator tool](https://azure.microsoft.com/services/virtual-desktop/assessment/) can help you determine the best location to optimize the latency of your VMs. We recommend you use the tool every two to three months to make sure the optimal location hasn't changed as Azure Virtual Desktop rolls out to new areas.
+## My connection data isn't going to Azure Log Analytics
+
+If your **Connection Network Data Logs** aren't going to Azure Log Analytics every two minutes, you'll need to check the following things:
+
+- Make sure you've [configured the diagnostic settings correctly](diagnostics-log-analytics.md).
+- Make sure you've configured the VM and [monitoring agents](azure-monitor.md) correctly.
+- Make sure you're actively using the session. Sessions that aren't actively used won't send data to Azure Log Analytics as frequently.
+ ## Next steps For more information about how to diagnose connection quality, see [Connection quality in Azure Virtual Desktop](connection-latency.md).
virtual-machines Easv5 Eadsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/easv5-eadsv5-series.md
Easv5-series virtual machines support Standard SSD, Standard HDD, and Premium SS
[VM Generation Support](generation-2.md): Generation 1 and 2 <br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
-[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs | Max network bandwidth (Mbps) |
Eadsv5-series virtual machines support Standard SSD, Standard HDD, and Premium S
[VM Generation Support](generation-2.md): Generation 1 and 2 <br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
-[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs | Max network bandwidth (Mbps) |
virtual-machines Key Vault Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/key-vault-linux.md
The following JSON shows the schema for the Key Vault VM extension. The extensio
"observedCertificates": <list of KeyVault URIs representing monitored certificates, e.g.: ["https://myvault.vault.azure.net/secrets/mycertificate", "https://myvault.vault.azure.net/secrets/mycertificate2"]> }, "authenticationSettings": {
- "msiEndpoint": <Optional MSI endpoint e.g.: "http://169.254.169.254/metadata/identity">,
- "msiClientId": <Optional MSI identity e.g.: "c7373ae5-91c2-4165-8ab6-7381d6e75619">
+ "msiEndpoint": <Required when msiClientId is provided. MSI endpoint e.g. for most Azure VMs: "http://169.254.169.254/metadata/identity">,
+ "msiClientId": <Required when VM has any user assigned identities. MSI identity e.g.: "c7373ae5-91c2-4165-8ab6-7381d6e75619".>
} } }
The following JSON shows the schema for the Key Vault VM extension. The extensio
> This is because the `/secrets` path returns the full certificate, including the private key, while the `/certificates` path does not. More information about certificates can be found here: [Key Vault Certificates](../../key-vault/general/about-keys-secrets-certificates.md) > [!IMPORTANT]
-> The 'authenticationSettings' property is **required** for VMs with **user assigned identities**.
+> The 'authenticationSettings' property is **required** for VMs with any **user assigned identities**. Even if you want to use a system-assigned identity, this property is still required; otherwise the VM extension won't know which identity to use. Without this section, the Key Vault extension on a VM with user-assigned identities will fail and be unable to download certificates.
> Set msiClientId to the identity that will authenticate to Key Vault. > > Also **required** for **Azure Arc-enabled VMs**.
virtual-machines Key Vault Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/key-vault-windows.md
The following JSON shows the schema for the Key Vault VM extension. The extensio
"observedCertificates": <list of KeyVault URIs representing monitored certificates, e.g.: "https://myvault.vault.azure.net/secrets/mycertificate" }, "authenticationSettings": {
- "msiEndpoint": <Optional MSI endpoint e.g.: "http://169.254.169.254/metadata/identity">,
- "msiClientId": <Optional MSI identity e.g.: "c7373ae5-91c2-4165-8ab6-7381d6e75619">
+ "msiEndpoint": <Required when msiClientId is provided. MSI endpoint e.g. for most Azure VMs: "http://169.254.169.254/metadata/identity">,
+ "msiClientId": <Required when VM has any user assigned identities. MSI identity e.g.: "c7373ae5-91c2-4165-8ab6-7381d6e75619".>
} } }
The following JSON shows the schema for the Key Vault VM extension. The extensio
> This is because the `/secrets` path returns the full certificate, including the private key, while the `/certificates` path does not. More information about certificates can be found here: [Key Vault Certificates](../../key-vault/general/about-keys-secrets-certificates.md) > [!IMPORTANT]
-> The 'authenticationSettings' property is **required** only for VMs with **user assigned identities**.
+> The 'authenticationSettings' property is **required** for VMs with any **user assigned identities**. Even if you want to use a system-assigned identity, this is still required; otherwise the VM extension will not know which identity to use. Without this section, a VM with user-assigned identities will result in the Key Vault extension failing and being unable to download certificates.
> It specifies identity to use for authentication to Key Vault. > [!IMPORTANT]
virtual-machines Create Upload Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-upload-generic.md
In this case, resize the VM using either the Hyper-V Manager console or the [Res
4. Now, convert the RAW disk back to a fixed-size VHD. ```bash
- qemu-img convert -f raw -o subformat=fixed -O vpc MyLinuxVM.raw MyLinuxVM.vhd
+ qemu-img convert -f raw -o subformat=fixed,force_size -O vpc MyLinuxVM.raw MyLinuxVM.vhd
```
- Or, with qemu version 2.6+, include the `force_size` option.
+ Or, with qemu versions before 2.6, remove the `force_size` option.
```bash
- qemu-img convert -f raw -o subformat=fixed,force_size -O vpc MyLinuxVM.raw MyLinuxVM.vhd
+ qemu-img convert -f raw -o subformat=fixed -O vpc MyLinuxVM.raw MyLinuxVM.vhd
``` ## Linux Kernel Requirements
virtual-machines Byos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/byos.md
Red Hat Enterprise Linux (RHEL) images are available in Azure via a pay-as-you-g
- Standard support policies apply to VMs created from these images. - The VMs provisioned from Red Hat Gold Images don't carry RHEL fees associated with RHEL pay-as-you-go images. - The images are unentitled. You must use Red Hat Subscription-Manager to register and subscribe the VMs to get updates from Red Hat directly.-- It's possible to switch from pay-as-you-go images to BYOS using the [Azure Hybrid Benefit](../../linux/azure-hybrid-benefit-linux.md). However it's not possible to switch from an initially deployed BYOS to pay-as-you-go billing models for Linux images. To switch the billing model from BYOS to pay-as-you-go, you must redeploy the VM from the respective image.
+- It's possible to switch from pay-as-you-go images to BYOS using the [Azure Hybrid Benefit](../../linux/azure-hybrid-benefit-linux.md). To convert from RHEL BYOS to pay-as-you-go, follow the steps in [Azure Hybrid Benefit for bring-your-own-subscription Linux virtual machines](../../linux/azure-hybrid-benefit-byos-linux.md#get-started).
## Requirements and conditions to access the Red Hat Gold Images
virtual-network Nat Gateway Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-gateway-resource.md
User-defined routes aren't necessary.
## Design guidance
-Review this section to familiarize yourself with considerations for designing virtual networks with NAT.
+Review this section to familiarize yourself with considerations for designing virtual networks with NAT gateway.
### Connect to Azure services with Private Link
-When you connect your private network to Azure services such as Storage, SQL, Cosmos DB, or any other [Azure service listed here](../../private-link/availability.md), the recommended approach is to use [Private Link](../../private-link/private-link-overview.md).
+Connecting from your Azure virtual network to Azure PaaS services can be done directly over the Azure backbone and bypass the internet. When you bypass the internet to connect to other Azure PaaS services, you free up SNAT ports and reduce the risk of SNAT port exhaustion. [Private Link](../../private-link/private-link-overview.md) should be used when possible to connect to Azure PaaS services in order to free up SNAT port inventory.
-Private Link uses the private IP addresses of your virtual machines or other compute resources from your Azure network to connect privately and securely to Azure PaaS services over the Azure backbone network instead of over the internet. Private Link should be used when possible to connect to Azure services since it frees up SNAT ports for making outbound connections to the internet. To learn more about how NAT gateway uses SNAT ports, see [Source Network Address Translation](#source-network-address-translation).
+Private Link uses the private IP addresses of your virtual machines or other compute resources from your Azure network to connect privately and securely to Azure PaaS services over the Azure backbone. For a list of the Azure services that support Private Link, see [Azure Private Link availability](../../private-link/availability.md).
### Connect to the internet with NAT gateway NAT gateway is recommended for outbound scenarios for all production workloads where you need to connect to a public endpoint. When NAT gateway is configured on subnets, all previous outbound configurations, such as Load balancer or instance-level public IPs (IL PIPs), are superseded and NAT gateway directs all outbound traffic to the internet. Return traffic in response to an outbound initiated flow will also go through NAT gateway. Inbound initiated traffic is not affected by the addition of NAT gateway. Inbound traffic through Load balancer or IL PIPs is translated separately from outbound traffic through NAT gateway. This separation allows inbound and outbound services to coexist seamlessly.
+### Coexistence of outbound and inbound connectivity
+ The following scenarios are examples of how to ensure coexistence of Load balancer or instance level public IPs for inbound with NAT gateway for outbound. #### NAT and VM with an instance-level public IP
Any outbound configuration from a load-balancing rule or outbound rules is super
Any outbound configuration from a load-balancing rule or outbound rules is superseded by NAT gateway. The VM will also use NAT gateway for outbound. Inbound originated isn't affected.
-### Scale NAT gateway
+### Monitor outbound network traffic with NSG flow logs
-Scaling NAT gateway is primarily a function of managing the shared, available SNAT port inventory. NAT needs sufficient SNAT port inventory for expected peak outbound flows for all subnets that are attached to a NAT gateway. You can use public IP addresses, public IP prefixes, or both to create SNAT port inventory.
+A network security group allows you to filter inbound and outbound traffic to and from a virtual machine. To monitor outbound traffic flowing from NAT, you can enable NSG flow logs.
-> [!NOTE]
-> If you assign a public IP prefix, the entire public IP prefix is used. You can't assign a public IP prefix and then break out individual IP addresses to assign to other resources. If you want to assign individual IP addresses from a public IP prefix to multiple resources, you need to create individual public IP addresses and assign them as needed instead of using the public IP prefix itself.
+To learn more about NSG flow logs, see [NSG Flow Log Overview](../../network-watcher/network-watcher-nsg-flow-logging-overview.md).
-SNAT maps private addresses to one or more public IP addresses, rewriting the source address and source port in the process. A single NAT gateway can scale up to 16 IP addresses. If a public IP prefix is provided, each IP address within the prefix provides SNAT port inventory. Adding more public IP addresses increases the available inventory of SNAT ports. TCP and UDP are separate SNAT port inventories and are unrelated to NAT gateway.
+For guides on how to enable NSG flow logs, see [Enabling NSG Flow Logs](../../network-watcher/network-watcher-nsg-flow-logging-overview.md#enabling-nsg-flow-logs).
-When you scale your workload, assume that each flow requires a new SNAT port, and then scale the total number of available IP addresses for outbound traffic. Carefully consider the scale you're designing for, and then allocate IP addresses quantities accordingly.
+## Performance
-SNAT ports sent to different destinations will most likely be reused when possible. As SNAT port exhaustion approaches, flows may not succeed.
+Each NAT gateway can provide up to 50 Gbps of throughput. This data throughput includes data processed both outbound and inbound through a NAT gateway resource. You can split your deployments into multiple subnets and assign each subnet or group of subnets a NAT gateway to scale out.
-For a SNAT example, see [SNAT fundamentals](#source-network-address-translation).
+NAT gateway can support up to 50,000 concurrent connections per public IP address to the same destination endpoint over the internet for TCP and UDP. NAT gateway can process 1M packets per second and scale up to 5M packets per second.
-### Monitor outbound network traffic
+Review the following section for details and the [troubleshooting article](./troubleshoot-nat.md) for specific problem resolution guidance.
-A network security group allows you to filter inbound and outbound traffic to and from a virtual machine. To monitor outbound traffic flowing from NAT, you can enable NSG flow logs.
+## Scalability
-To learn more about NSG flow logs, see [NSG Flow Log Overview](../../network-watcher/network-watcher-nsg-flow-logging-overview.md).
+Scaling NAT gateway is primarily a function of managing the shared, available SNAT port inventory. NAT needs sufficient SNAT port inventory for expected peak outbound flows for all subnets that are attached to a NAT gateway. You can use public IP addresses, public IP prefixes, or both to create SNAT port inventory.
-For guides on how to enable NSG flow logs, see [Enabling NSG Flow Logs](../../network-watcher/network-watcher-nsg-flow-logging-overview.md#enabling-nsg-flow-logs).
+A single NAT gateway can scale up to 16 IP addresses. Each NAT gateway public IP address provides 64,512 SNAT ports to make outbound connections, so a NAT gateway with 16 IP addresses can scale up to 16 × 64,512 = 1,032,192 SNAT ports, which is over 1 million. TCP and UDP are separate SNAT port inventories and are unrelated to NAT gateway.
-## Performance
+> [!NOTE]
+> If you assign a public IP prefix, the entire public IP prefix is used. You can't assign a public IP prefix and then break out individual IP addresses to assign to other resources. If you want to assign individual IP addresses from a public IP prefix to multiple resources, you need to create individual public IP addresses and assign them as needed instead of using the public IP prefix itself.
-Each NAT gateway can provide up to 50 Gbps of throughput. This data throughput includes data processed both outbound and inbound through a NAT gateway resource. You can split your deployments into multiple subnets and assign each subnet or group of subnets a NAT gateway to scale out.
+When you scale your workload, assume that each flow requires a new SNAT port, and then scale the total number of available IP addresses for outbound traffic. Carefully consider the scale you're designing for, and then allocate IP addresses quantities accordingly.
-Each NAT gateway public IP address provides 64,512 SNAT ports to make outbound connections. NAT gateway can support up to 50,000 concurrent connections per public IP address to the same destination endpoint over the internet for TCP and UDP. NAT gateway can process 1M packets per second and scale up to 5M packets per second.
+SNAT maps private addresses in your subnet to one or more public IP addresses attached to NAT gateway, rewriting the source address and source port in the process. SNAT ports sent to different destinations will most likely be reused when possible. As SNAT port exhaustion approaches, flows may not succeed.
-Review the following section for details and the [troubleshooting article](./troubleshoot-nat.md) for specific problem resolution guidance.
+For a SNAT example, see [SNAT fundamentals](#source-network-address-translation).
## Protocols
The following illustrates this concept as an additional flow to the preceding se
|::|::|::| | 4 | 192.168.0.16:4285 | 65.52.0.2:80 |
-A NAT gateway will translate flow 4 to a source port that may already be in use for other destinations as well (see flow 1 from table above). See [Scale NAT gateway](#scale-nat-gateway) for more discussion on correctly sizing your IP address provisioning.
+A NAT gateway will translate flow 4 to a source port that may already be in use for other destinations as well (see flow 1 from table above). See [Scale NAT gateway](#scalability) for more discussion on correctly sizing your IP address provisioning.
| Flow | Source tuple | Source tuple after SNAT | Destination tuple | |::|::|::|::|
virtual-network Nat Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-metrics.md
To create the alert, use the following steps:
5. From the **Aggregation type** drop-down menu, select **Total**.
-6. In the **Threshold value** box, enter a percentage value that the Total SNAT connection count must drop below before an alert is fired. When deciding what threshold value to use, keep in mind how much you've scaled out your NAT gateway outbound connectivity with public IP addresses. For more information, see [Scale NAT gateway](./nat-gateway-resource.md#scale-nat-gateway).
+6. In the **Threshold value** box, enter a percentage value that the Total SNAT connection count must drop below before an alert is fired. When deciding what threshold value to use, keep in mind how much you've scaled out your NAT gateway outbound connectivity with public IP addresses. For more information, see [Scale NAT gateway](./nat-gateway-resource.md#scalability).
7. From the **Unit** drop-down menu, select **Count**.
web-application-firewall Application Gateway Web Application Firewall Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-web-application-firewall-portal.md
To do this, you'll:
6. On the **Networking** tab, verify that **myVNet** is selected for the **Virtual network** and the **Subnet** is set to **myBackendSubnet**. 1. For **Public IP**, select **None**. 1. Accept the other defaults and then select **Next: Management**.
-1. On the **Management** tab, set **Boot diagnostics** to **Disable**. Accept the other defaults and then select **Review + create**.
+1. On the **Monitoring** tab, set **Boot diagnostics** to **Disable**. Accept the other defaults and then select **Review + create**.
1. On the **Review + create** tab, review the settings, correct any validation errors, and then select **Create**. 1. Wait for the virtual machine creation to complete before continuing.