Updates from: 06/18/2022 01:09:49
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Api Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-api-connector.md
Content-type: application/json
} ], "displayName": "John Smith",
- "objectId": "11111111-0000-0000-0000-000000000000"
+ "objectId": "11111111-0000-0000-0000-000000000000",
"givenName":"John", "surname":"Smith", "jobTitle":"Supplier",
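The diff above adds a trailing comma after `objectId` so the JSON stays valid once the properties that follow are appended. A minimal sketch of building such a payload programmatically, which avoids hand-placed commas entirely; property names mirror the snippet above, and the values are placeholders, not real data:

```python
import json

# Hypothetical user object like the one in the snippet above.
user = {
    "displayName": "John Smith",
    "objectId": "11111111-0000-0000-0000-000000000000",
    "givenName": "John",
    "surname": "Smith",
    "jobTitle": "Supplier",
}

# json.dumps emits commas between properties automatically, which is
# exactly the manual fix the diff above makes.
body = json.dumps(user, indent=2)
parsed = json.loads(body)  # round-trips cleanly because the JSON is valid
```

Serializing from a dictionary rather than editing raw JSON text is the easy way to keep an API connector response well-formed as properties are added.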
active-directory-b2c Tutorial Create User Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-create-user-flows.md
Previously updated : 03/30/2022 Last updated : 06/17/2022 zone_pivot_groups: b2c-policy-type
zone_pivot_groups: b2c-policy-type
In your applications you may have user flows that enable users to sign up, sign in, or manage their profile. You can create multiple user flows of different types in your Azure Active Directory B2C (Azure AD B2C) tenant and use them in your applications as needed. User flows can be reused across applications. ::: zone pivot="b2c-user-flow"
-A user flow lets you determine how users interact with your application when they do things like sign in, sign up, edit a profile, or reset a password. In this article, you learn how to:
+A user flow lets you determine how users interact with your application when they do things like sign-in, sign-up, edit a profile, or reset a password. In this article, you learn how to:
::: zone-end ::: zone pivot="b2c-custom-policy"
The sign-up and sign-in user flow handles both sign-up and sign-in experiences w
1. On the **Create a user flow** page, select the **Sign up and sign in** user flow.
- ![Select a user flow page with Sign up and sign in flow highlighted](./media/tutorial-create-user-flows/select-user-flow-type.png)
+ ![Select a user flow page with Sign-up and sign-in flow highlighted](./media/tutorial-create-user-flows/select-user-flow-type.png)
1. Under **Select a version**, select **Recommended**, and then select **Create**. ([Learn more](user-flow-versions.md) about user flow versions.)
The sign-up and sign-in user flow handles both sign-up and sign-in experiences w
1. Enter a **Name** for the user flow. For example, *signupsignin1*. 1. For **Identity providers**, select **Email signup**.
-1. For **User attributes and claims**, choose the claims and attributes that you want to collect and send from the user during sign-up. For example, select **Show more**, and then choose attributes and claims for **Country/Region**, **Display Name**, and **Postal Code**. Click **OK**.
+1. For **User attributes and claims**, choose the claims and attributes that you want to collect and send from the user during sign-up. For example, select **Show more**, and then choose attributes and claims for **Country/Region**, **Display Name**, and **Postal Code**. Select **OK**.
![Attributes and claims selection page with three claims selected](./media/tutorial-create-user-flows/signup-signin-attributes.png)
-1. Click **Create** to add the user flow. A prefix of *B2C_1* is automatically prepended to the name.
+1. Select **Create** to add the user flow. A prefix of *B2C_1_* is automatically prepended to the name.
### Test the user flow
-1. Select the user flow you created to open its overview page, then select **Run user flow**.
+1. Select the user flow you created to open its overview page.
+1. At the top of the user flow overview page, select **Run user flow**. A pane opens at the right side of the page.
1. For **Application**, select the web application named *webapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
-1. Click **Run user flow**, and then select **Sign up now**.
+1. Select **Run user flow**, and then select **Sign up now**.
![Run user flow page in portal with Run user flow button highlighted](./media/tutorial-create-user-flows/signup-signin-run-now.PNG)
-1. Enter a valid email address, click **Send verification code**, enter the verification code that you receive, then select **Verify code**.
+1. Enter a valid email address, select **Send verification code**, enter the verification code that you receive, then select **Verify code**.
1. Enter a new password and confirm the password.
-1. Select your country and region, enter the name that you want displayed, enter a postal code, and then click **Create**. The token is returned to `https://jwt.ms` and should be displayed to you.
+1. Select your country and region, enter the name that you want displayed, enter a postal code, and then select **Create**. The token is returned to `https://jwt.ms` and should be displayed to you.
1. You can now run the user flow again and you should be able to sign in with the account that you created. The returned token includes the claims that you selected of country/region, name, and postal code. > [!NOTE]
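The token returned to `https://jwt.ms` is a JWT, whose payload segment is base64url-encoded JSON containing the claims selected in the user flow. A hedged sketch of decoding such a payload; the claim names used here (`name`, `country`, `postalCode`) are illustrative assumptions, not the exact claim names Azure AD B2C emits:

```python
import base64
import json

def b64url_decode(segment: str) -> bytes:
    # JWT segments drop base64 padding; restore it before decoding.
    padding = "=" * (-len(segment) % 4)
    return base64.urlsafe_b64decode(segment + padding)

# Build a sample payload segment like the one jwt.ms would display.
claims = {"name": "John Smith", "country": "US", "postalCode": "98052"}
payload = (
    base64.urlsafe_b64encode(json.dumps(claims).encode())
    .rstrip(b"=")
    .decode()
)

decoded = json.loads(b64url_decode(payload))
```

This is the same decoding `https://jwt.ms` performs to render the token's claims in the browser.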
If you want to enable users to edit their profile in your application, you use a
1. On the **Create a user flow** page, select the **Profile editing** user flow. 1. Under **Select a version**, select **Recommended**, and then select **Create**. 1. Enter a **Name** for the user flow. For example, *profileediting1*.
-1. For **Identity providers**, select **Local Account SignIn**.
-2. For **User attributes**, choose the attributes that you want the customer to be able to edit in their profile. For example, select **Show more**, and then choose both attributes and claims for **Display name** and **Job title**. Click **OK**.
-3. Click **Create** to add the user flow. A prefix of *B2C_1* is automatically appended to the name.
+1. For **Identity providers**, under **Local accounts**, select **Email signup**.
+2. For **User attributes**, choose the attributes that you want the customer to be able to edit in their profile. For example, select **Show more**, and then choose both attributes and claims for **Display name** and **Job title**. Select **OK**.
+3. Select **Create** to add the user flow. A prefix of *B2C_1_* is automatically appended to the name.
### Test the user flow
-1. Select the user flow you created to open its overview page, then select **Run user flow**.
+1. Select the user flow you created to open its overview page.
+1. At the top of the user flow overview page, select **Run user flow**. A pane opens at the right side of the page.
1. For **Application**, select the web application named *webapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
-1. Click **Run user flow**, and then sign in with the account that you previously created.
-1. You now have the opportunity to change the display name and job title for the user. Click **Continue**. The token is returned to `https://jwt.ms` and should be displayed to you.
+1. Select **Run user flow**, and then sign in with the account that you previously created.
+1. You now have the opportunity to change the display name and job title for the user. Select **Continue**. The token is returned to `https://jwt.ms` and should be displayed to you.
::: zone-end ::: zone pivot="b2c-custom-policy"
Add the application IDs to the extensions file *TrustFrameworkExtensions.xml*.
## Add Facebook as an identity provider
-The **SocialAndLocalAccounts** starter pack includes Facebook social sign in. Facebook is *not* required for using custom policies, but we use it here to demonstrate how you can enable federated social login in a custom policy. If you don't need to enable federated social login, use the **LocalAccounts** starter pack instead, and skip [Add Facebook as an identity provider](tutorial-create-user-flows.md?pivots=b2c-custom-policy#add-facebook-as-an-identity-provider) section.
+The **SocialAndLocalAccounts** starter pack includes Facebook social sign in. Facebook isn't required for using custom policies, but we use it here to demonstrate how you can enable federated social login in a custom policy. If you don't need to enable federated social login, use the **LocalAccounts** starter pack instead, and skip [Add Facebook as an identity provider](tutorial-create-user-flows.md?pivots=b2c-custom-policy#add-facebook-as-an-identity-provider) section.
### Create Facebook application
-Use the steps outlined in [Create a Facebook application](identity-provider-facebook.md#create-a-facebook-application) to obtain Facebook *App ID* and *App Secret*. Skip the prerequisites and the rest of the steps in the [Set up sign-up and sign-in with a Facebook account](identity-provider-facebook.md) article.
+Use the steps outlined in [Create a Facebook application](identity-provider-facebook.md#create-a-facebook-application) to obtain Facebook *App ID* and *App Secret*. Skip the prerequisites and the rest of the steps in the [Set up sign up and sign in with a Facebook account](identity-provider-facebook.md) article.
### Create the Facebook key
As you upload the files, Azure adds the prefix `B2C_1A_` to each.
In this article, you learned how to: > [!div class="checklist"]
-> * Create a sign-up and sign-in user flow
+> * Create a sign up and sign in user flow
> * Create a profile editing user flow > * Create a password reset user flow
active-directory-domain-services Csp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/csp.md
Previously updated : 07/09/2020 Last updated : 06/16/2022
active-directory-domain-services Powershell Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/powershell-create-instance.md
Previously updated : 05/19/2021 Last updated : 06/17/2022
To complete this article, you need the following resources:
Azure AD DS requires a service principal to authenticate and communicate and an Azure AD group to define which users have administrative permissions in the managed domain.
-First, create an Azure AD service principal by using a specific application ID named *Domain Controller Services*. In public Azure, the ID value is *2565bd9d-da50-47d4-8b85-4c97f669dc36*. In other clouds, the value is *6ba9a5d4-8456-4118-b521-9c5ca10cdf84*. Don't change this application ID.
+First, create an Azure AD service principal by using a specific application ID named *Domain Controller Services*. The ID value is *2565bd9d-da50-47d4-8b85-4c97f669dc36*. Don't change this application ID.
Create an Azure AD service principal using the [New-AzureADServicePrincipal][New-AzureADServicePrincipal] cmdlet:
else {
} ```
-With the *AAD DC Administrators* group created, get the desired user's object ID using the [Get-AzureADUser][Get-AzureADUser] cmdlet, then add the user to the group using the [Add-AzureADGroupMember][Add-AzureADGroupMember] cmdlet..
+With the *AAD DC Administrators* group created, get the desired user's object ID using the [Get-AzureADUser][Get-AzureADUser] cmdlet, then add the user to the group using the [Add-AzureADGroupMember][Add-AzureADGroupMember] cmdlet.
In the following example, the user object ID for the account with a UPN of `admin@contoso.onmicrosoft.com`. Replace this user account with the UPN of the user you wish to add to the *AAD DC Administrators* group:
$vnet | Set-AzVirtualNetwork
Now let's create a managed domain. Set your Azure subscription ID, and then provide a name for the managed domain, such as *aaddscontoso.com*. You can get your subscription ID using the [Get-AzSubscription][Get-AzSubscription] cmdlet.
-If you choose a region that supports Availability Zones, the Azure AD DS resources are distributed across zones for additional redundancy.
+If you choose a region that supports Availability Zones, the Azure AD DS resources are distributed across zones for redundancy.
Availability Zones are unique physical locations within an Azure region. Each zone is made up of one or more datacenters equipped with independent power, cooling, and networking. To ensure resiliency, there's a minimum of three separate zones in all enabled regions.
Connect-AzureAD
Connect-AzAccount # Create the service principal for Azure AD Domain Services.
-New-AzureADServicePrincipal -AppId "2565bd9d-da50-47d4-8b85-4c97f669dc36"
+New-AzureADServicePrincipal -AppId "6ba9a5d4-8456-4118-b521-9c5ca10cdf84"
# First, retrieve the object ID of the 'AAD DC Administrators' group. $GroupObjectId = Get-AzureADGroup `
active-directory-domain-services Tutorial Configure Ldaps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-configure-ldaps.md
Previously updated : 03/07/2022 Last updated : 06/16/2022 #Customer intent: As an identity administrator, I want to secure access to an Azure Active Directory Domain Services managed domain using secure lightweight directory access protocol (LDAPS)
active-directory-domain-services Tutorial Configure Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-configure-networking.md
Previously updated : 03/07/2022 Last updated : 06/16/2022 #Customer intent: As an identity administrator, I want to create and configure a virtual network subnet or network peering for application workloads in an Azure Active Directory Domain Services managed domain
active-directory-domain-services Tutorial Configure Password Hash Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-configure-password-hash-sync.md
Previously updated : 07/06/2020 Last updated : 06/16/2022 #Customer intent: As an server administrator, I want to learn how to enable password hash synchronization with Azure AD Connect to create a hybrid environment using an on-premises AD DS domain.
To complete this tutorial, you need the following resources:
## Password hash synchronization using Azure AD Connect
-Azure AD Connect is used to synchronize objects like user accounts and groups from an on-premises AD DS environment into an Azure AD tenant. As part of the process, password hash synchronization enables accounts to use the same password in the on-prem AD DS environment and Azure AD.
+Azure AD Connect is used to synchronize objects like user accounts and groups from an on-premises AD DS environment into an Azure AD tenant. As part of the process, password hash synchronization enables accounts to use the same password in the on-premises AD DS environment and Azure AD.
To authenticate users on the managed domain, Azure AD DS needs password hashes in a format that's suitable for NTLM and Kerberos authentication. Azure AD doesn't store password hashes in the format that's required for NTLM or Kerberos authentication until you enable Azure AD DS for your tenant. For security reasons, Azure AD also doesn't store any password credentials in clear-text form. Therefore, Azure AD can't automatically generate these NTLM or Kerberos password hashes based on users' existing credentials.
active-directory-domain-services Tutorial Create Instance Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance-advanced.md
Previously updated : 04/04/2022 Last updated : 06/16/2022 #Customer intent: As an identity administrator, I want to create an Azure Active Directory Domain Services managed domain and define advanced configuration options so that I can synchronize identity information with my Azure Active Directory tenant and provide Domain Services connectivity to virtual machines and applications in Azure.
active-directory-domain-services Tutorial Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance.md
Previously updated : 04/04/2022 Last updated : 06/16/2022 #Customer intent: As an identity administrator, I want to create an Azure Active Directory Domain Services managed domain so that I can synchronize identity information with my Azure Active Directory tenant and provide Domain Services connectivity to virtual machines and applications in Azure.
active-directory-domain-services Tutorial Create Management Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-management-vm.md
Previously updated : 07/06/2020 Last updated : 06/16/2022 #Customer intent: As an identity administrator, I want to create a management VM and install the required tools to connect to and manage an Azure Active Directory Domain Services managed domain.
active-directory-domain-services Tutorial Create Replica Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-replica-set.md
Previously updated : 03/22/2021 Last updated : 06/16/2022 #Customer intent: As an identity administrator, I want to create and use replica sets in Azure Active Directory Domain Services to provide resiliency or geographical distributed managed domain data.
To delete a replica set, complete the following steps:
1. Choose your managed domain, such as *aaddscontoso.com*. 1. On the left-hand side, select **Replica sets**. From the list of replica sets, select the **...** context menu next to the replica set you want to delete. 1. Select **Delete** from the context menu, then confirm you want to delete the replica set.
-1. In the Azure ADDS management VM, access the DNS console and manually delete DNS records for the domain controllers from the deleted replica set.
+1. In the Azure AD DS management VM, access the DNS console and manually delete DNS records for the domain controllers from the deleted replica set.
> [!NOTE] > Replica set deletion may be a time-consuming operation.
active-directory-domain-services Tutorial Perform Disaster Recovery Drill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-perform-disaster-recovery-drill.md
Previously updated : 09/22/2021 Last updated : 06/16/2022 #Customer intent: As an identity administrator, I want to perform a disaster recovery drill by using replica sets in Azure Active Directory Domain Services to demonstrate resiliency for geographically distributed domain data.
The following requirements must be in place to complete the DR drill:
## Environment validation
-1. Log into the client machine with a domain account.
+1. Log in to the client machine with a domain account.
1. Install the Active Directory Domain Services RSAT tools. 1. Start an elevated PowerShell window. 1. Perform basic domain validation checks:
The following requirements must be in place to complete the DR drill:
## Perform the disaster recovery drill
-You will be performing these operations for each replica set in the Azure AD DS instance. This will simulate an outage for each replica set. When domain controllers are not reachable, the client will automatically failover to a reachable domain controller and this experience should be seamless to the end user or workload. Therefore it is critical that applications and services don't point to a specific domain controller.
+You will be performing these operations for each replica set in the Azure AD DS instance. This will simulate an outage for each replica set. When domain controllers are not reachable, the client will automatically fail over to a reachable domain controller and this experience should be seamless to the end user or workload. Therefore it is critical that applications and services don't point to a specific domain controller.
1. Identify the domain controllers in the replica set that you want to simulate going offline. 1. On the client machine, connect to one of the domain controllers using `nltest /sc_reset:[domain]\[domain controller name]`.
active-directory-domain-services Use Azure Monitor Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/use-azure-monitor-workbooks.md
Previously updated : 07/09/2020 Last updated : 06/16/2022
active-directory Concept Certificate Based Authentication Technical Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-technical-deep-dive.md
Last updated 06/15/2022 +
active-directory Concept Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-registration-mfa-sspr-combined.md
Previously updated : 05/24/2022 Last updated : 06/17/2022
Azure AD combined security information registration is available for Azure US Go
> [!IMPORTANT] > Users that are enabled for both the original preview and the enhanced combined registration experience see the new behavior. Users that are enabled for both experiences see only the My Account experience. The *My Account* aligns with the look and feel of combined registration and provides a seamless experience for users. Users can see My Account by going to [https://myaccount.microsoft.com](https://myaccount.microsoft.com). >
+> You can set **Require users to register when signing in** to **Yes** to require all users to register when signing in, ensuring that all users are protected.
+>
> You might encounter an error message while trying to access the Security info option, such as, "Sorry, we can't sign you in". Confirm that you don't have any configuration or group policy object that blocks third-party cookies on the web browser. *My Account* pages are localized based on the language settings of the computer accessing the page. Microsoft stores the most recent language used in the browser cache, so subsequent attempts to access the pages continue to render in the last language used. If you clear the cache, the pages re-render.
Combined registration supports the following authentication methods and actions:
Users can set one of the following options as the default Multi-Factor Authentication method: -- Microsoft Authenticator – push notification
+- Microsoft Authenticator – push notification or passwordless
- Authenticator app or hardware token – code - Phone call - Text message
Users can access manage mode by going to [https://aka.ms/mysecurityinfo](https:/
An admin has enforced registration.
-A user has not set up all required security info and goes to the Azure portal. After entering the user name and password, the user is prompted to set up security info. The user then follows the steps shown in the wizard to set up the required security info. If your settings allow it, the user can choose to set up methods other than those shown by default. After completing the wizard, users review the methods they set up and their default method for Multi-Factor Authentication. To complete the setup process, the user confirms the info and continues to the Azure portal.
+A user has not set up all required security info and goes to the Azure portal. After the user enters the user name and password, the user is prompted to set up security info. The user then follows the steps shown in the wizard to set up the required security info. If your settings allow it, the user can choose to set up methods other than those shown by default. After users complete the wizard, they review the methods they set up and their default method for Multi-Factor Authentication. To complete the setup process, the user confirms the info and continues to the Azure portal.
### Set up security info from My Account
In addition, users who access a resource tenant may be confused when they change
For example, a user sets Microsoft Authenticator app push notification as the primary authentication to sign-in to home tenant and also has SMS/Text as another option. This user is also configured with SMS/Text option on a resource tenant.
-If this user removes SMS/Text as one of the authentication option on their home tenant, they get confused when access to the resource tenant asks them to respond to SMS/Text message.
+If this user removes SMS/Text as one of the authentication options on their home tenant, they get confused when access to the resource tenant asks them to respond to SMS/Text message.
To switch the directory in the Azure portal, click the user account name in the upper right corner and click **Switch directory**.
active-directory Howto Authentication Passwordless Phone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-phone.md
Title: Passwordless sign-in with the Microsoft ENtra Authenticator app - Azure Active Directory
+ Title: Passwordless sign-in with the Microsoft Entra Authenticator app - Azure Active Directory
description: Enable passwordless sign-in to Azure AD using the Microsoft Entra Authenticator app Previously updated : 01/07/2022 Last updated : 06/15/2022+
# Enable passwordless sign-in with the Microsoft Entra Authenticator app
-The Microsoft Entra Authenticator app can be used to sign in to any Azure AD account without using a password. The Authenticator app uses key-based authentication to enable a user credential that is tied to a device, where the device uses a PIN or biometric. [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-identity-verification) uses a similar technology.
+The Microsoft Entra Authenticator app can be used to sign in to any Azure AD account without using a password. Microsoft Authenticator uses key-based authentication to enable a user credential that is tied to a device, where the device uses a PIN or biometric. [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-identity-verification) uses a similar technology.
This authentication technology can be used on any device platform, including mobile. This technology can also be used with any app or website that integrates with Microsoft Authentication Libraries. :::image type="content" border="false" source="./media/howto-authentication-passwordless-phone/phone-sign-in-microsoft-authenticator-app.png" alt-text="Example of a browser sign-in asking for the user to approve the sign-in.":::
-People who enabled phone sign-in from the Authenticator app see a message that asks them to tap a number in their app. No username or password is asked for. To complete the sign-in process in the app, a user must next take the following actions:
+People who enabled phone sign-in from Microsoft Authenticator see a message that asks them to tap a number in their app. No username or password is asked for. To complete the sign-in process in the app, a user must next take the following actions:
-1. Enter the number they see on the login screen into the Authenticator app dialog.
+1. Enter the number they see on the login screen into the Microsoft Authenticator dialog.
1. Choose **Approve**. 1. Provide their PIN or biometric. ## Prerequisites
-To use passwordless phone sign-in with the Authenticator app, the following prerequisites must be met:
+To use passwordless phone sign-in with Microsoft Authenticator, the following prerequisites must be met:
-- Recommended: Azure AD Multi-Factor Authentication, with push notifications allowed as a verification method. Push notifications to your smartphone or tablet help the Authenticator app to prevent unauthorized access to accounts and stop fraudulent transactions. The Authenticator app automatically generates codes when set up to do push notifications so a user has a backup sign-in method even if their device doesn't have connectivity.
+- Recommended: Azure AD Multi-Factor Authentication, with push notifications allowed as a verification method. Push notifications to your smartphone or tablet help Microsoft Authenticator to prevent unauthorized access to accounts and stop fraudulent transactions. Microsoft Authenticator can either perform traditional MFA push notifications to a device that a user must approve or deny, or it can perform passwordless authentication that requires a user to type a matching number. Microsoft Authenticator automatically generates codes when set up to do push notifications so a user has a backup sign-in method even if their device doesn't have connectivity.
- Latest version of Authenticator installed on devices running iOS 8.0 or greater, or Android 6.0 or greater.-- The device on which the Authenticator app is installed must be registered within the Azure AD tenant to an individual user.
+- The device on which Microsoft Authenticator is installed must be registered within the Azure AD tenant to an individual user.
> [!NOTE]
-> If you enabled the Authenticator app for passwordless sign-in using Azure AD PowerShell, it was enabled for your entire directory. If you enable using this new method, it supercedes the PowerShell policy. We recommend you enable for all users in your tenant via the new *Authentication Methods* menu, otherwise users not in the new policy are no longer be able to sign in without a password.
+> If you enabled Microsoft Authenticator for passwordless sign-in using Azure AD PowerShell, it was enabled for your entire directory. If you enable it using this new method, it supersedes the PowerShell policy. We recommend you enable it for all users in your tenant via the new *Authentication Methods* menu, otherwise users not in the new policy are no longer able to sign in without a password.
## Enable passwordless authentication methods
To enable the authentication method for passwordless phone sign-in, complete the
1. Under **Microsoft Entra Authenticator**, choose the following options: 1. **Enable** - Yes or No 1. **Target** - All users or Select users
-1. Each added group or user is enabled by default to use the Authenticator app in both passwordless and push notification modes ("Any" mode). To change this, for each row:
+1. Each added group or user is enabled by default to use Microsoft Authenticator in both passwordless and push notification modes ("Any" mode). To change this, for each row:
1. Browse to **...** > **Configure**. 1. For **Authentication mode** - choose **Any**, or **Passwordless**. Choosing **Push** prevents the use of the passwordless phone sign-in credential. 1. To apply the new policy, click **Save**.
To enable the authentication method for passwordless phone sign-in, complete the
>[!NOTE] >If you see an error when you try to save, the cause might be due to the number of users or groups being added. As a workaround, replace the users and groups you are trying to add with a single group, in the same operation, and then click **Save** again.
-## User registration and management of the Authenticator app
+## User registration and management of Microsoft Authenticator
Users register themselves for the passwordless authentication method of Azure AD by using the following steps: 1. Browse to [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo).
-1. Sign in, then click **Add method** > **Authenticator app** > **Add** to add the Authenticator app.
+1. Sign in, then click **Add method** > **Authenticator app** > **Add** to add Microsoft Authenticator.
1. Follow the instructions to install and configure the Microsoft Authenticator app on your device. 1. Select **Done** to complete Authenticator configuration. 1. In **Microsoft Entra Authenticator**, choose **Enable phone sign-in** from the drop-down menu for the account registered. 1. Follow the instructions in the app to finish registering the account for passwordless phone sign-in.
-An organization can direct its users to sign in with their phones, without using a password. For further assistance configuring the Authenticator app and enabling phone sign-in, see [Sign in to your accounts using the Microsoft Entra Authenticator app](https://support.microsoft.com/account-billing/sign-in-to-your-accounts-using-the-microsoft-authenticator-app-582bdc07-4566-4c97-a7aa-56058122714c).
+An organization can direct its users to sign in with their phones, without using a password. For further assistance configuring Microsoft Authenticator and enabling phone sign-in, see [Sign in to your accounts using the Microsoft Entra Authenticator app](https://support.microsoft.com/account-billing/sign-in-to-your-accounts-using-the-microsoft-authenticator-app-582bdc07-4566-4c97-a7aa-56058122714c).
> [!NOTE]
-> Users who aren't allowed by policy to use phone sign-in are no longer able to enable it within the Authenticator app.
+> Users who aren't allowed by policy to use phone sign-in are no longer able to enable it within Microsoft Authenticator.
## Sign in with passwordless credential

A user can start to utilize passwordless sign-in after all the following actions are completed:

- An admin has enabled the user's tenant.
-- The user has updated her Authenticator app to enable phone sign-in.
+- The user has added Microsoft Authenticator as a sign-in method.
The first time a user starts the phone sign-in process, the user performs the following steps:
-1. Enters her name at the sign-in page.
+1. Enters their name at the sign-in page.
2. Selects **Next**.
3. If necessary, selects **Other ways to sign in**.
4. Selects **Approve a request on my Authenticator app**.
In one scenario, a user can have an unanswered passwordless phone sign-in verifi
To resolve this scenario, the following steps can be used:
-1. Open the Authenticator app.
+1. Open Microsoft Authenticator.
2. Respond to any notification prompts. Then the user can continue to utilize passwordless phone sign-in.
When a user has enabled any passwordless credential, the Azure AD login process
This logic generally prevents a user in a hybrid tenant from being directed to Active Directory Federated Services (AD FS) for sign-in verification. However, the user retains the option of clicking **Use your password instead**.
-### Azure MFA server
+### On-premises users
-An end user can be enabled for multifactor authentication (MFA) through an on-premises Azure MFA server. The user can still create and utilize a single passwordless phone sign-in credential.
+An end user can be enabled for multifactor authentication (MFA) through an on-premises MFA server. The user can still create and utilize a single passwordless phone sign-in credential.
-If the user attempts to upgrade multiple installations (5+) of the Authenticator app with the passwordless phone sign-in credential, this change might result in an error.
+If the user attempts to upgrade multiple installations (5+) of Microsoft Authenticator with the passwordless phone sign-in credential, this change might result in an error.
### Device registration
-Before you can create this new strong credential, there are prerequisites. One prerequisite is that the device on which the Authenticator app is installed must be registered within the Azure AD tenant to an individual user.
+Before you can create this new strong credential, there are prerequisites. One prerequisite is that the device on which Microsoft Authenticator is installed must be registered within the Azure AD tenant to an individual user.
-Currently, a device can only be enabled for passwordless sign-in in a single tenant. This limit means that only one work or school account in the Authenticator app can be enabled for phone sign-in.
+Currently, a device can only be enabled for passwordless sign-in in a single tenant. This limit means that only one work or school account in Microsoft Authenticator can be enabled for phone sign-in.
> [!NOTE]
> Device registration is not the same as device management or mobile device management (MDM). Device registration only associates a device ID and a user ID together, in the Azure AD directory.
active-directory Howto Authentication Temporary Access Pass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-temporary-access-pass.md
Previously updated : 05/24/2022 Last updated : 06/17/2022
These roles can perform the following actions related to a Temporary Access Pass
![Screenshot of Temporary Access Pass details.](./media/how-to-authentication-temporary-access-pass/details.png)
-The following commands show how to create and get a Temporary Access Pass by using PowerShell:
+The following commands show how to create and get a Temporary Access Pass by using PowerShell.
```powershell
# Create a Temporary Access Pass for a user
c5dbd20a-8b8f-4791-a23f-488fcbde3b38 5/22/2022 11:19:17 PM False True
```
+For more information, see [New-MgUserAuthenticationTemporaryAccessPassMethod](/powershell/module/microsoft.graph.identity.signins/new-mguserauthenticationtemporaryaccesspassmethod?view=graph-powershell-beta) and [Get-MgUserAuthenticationTemporaryAccessPassMethod](/powershell/module/microsoft.graph.identity.signins/get-mguserauthenticationtemporaryaccesspassmethod?view=graph-powershell-beta).
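As an illustration, creating and retrieving a pass with those cmdlets might look like the following sketch. The scope name and the `-BodyParameter` shape are assumptions; confirm them against the cmdlet references linked above.

```powershell
# Sketch only: connect with an account allowed to manage authentication methods.
# The scope name and request-body shape below are assumptions.
Connect-MgGraph -Scopes "UserAuthenticationMethod.ReadWrite.All"

# Build the pass properties: reusable, starting now.
$properties = @{}
$properties.isUsableOnce = $false
$properties.startDateTime = Get-Date
$propertiesJSON = $properties | ConvertTo-Json

# Create a Temporary Access Pass for the user.
New-MgUserAuthenticationTemporaryAccessPassMethod -UserId user3@contoso.com -BodyParameter $propertiesJSON

# List the user's existing Temporary Access Passes.
Get-MgUserAuthenticationTemporaryAccessPassMethod -UserId user3@contoso.com
```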
+
## Use a Temporary Access Pass

The most common use for a Temporary Access Pass is for a user to register authentication details during the first sign-in or device setup, without the need to complete additional security prompts. Authentication methods are registered at [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo). Users can also update existing authentication methods here.
You can also use PowerShell:
Remove-MgUserAuthenticationTemporaryAccessPassMethod -UserId user3@contoso.com -TemporaryAccessPassAuthenticationMethodId c5dbd20a-8b8f-4791-a23f-488fcbde3b38
```
+For more information, see [Remove-MgUserAuthenticationTemporaryAccessPassMethod](/powershell/module/microsoft.graph.identity.signins/remove-mguserauthenticationtemporaryaccesspassmethod?view=graph-powershell-beta).
+
## Replace a Temporary Access Pass
- A user can only have one Temporary Access Pass. The passcode can be used during the start and end time of the Temporary Access Pass.
active-directory Howto Authentication Use Email Signin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-use-email-signin.md
Previously updated : 07/07/2021 Last updated : 06/17/2022
Some organizations haven't moved to hybrid authentication for the following reasons:

* Changing the Azure AD UPN creates a mismatch between on-premises and Azure AD environments that could cause problems with certain applications and services.
* Due to business or compliance reasons, the organization doesn't want to use the on-premises UPN to sign in to Azure AD.
-To help with the move to hybrid authentication, you can configure Azure AD to let users sign in with their email as an alternate login ID. For example, if *Contoso* rebranded to *Fabrikam*, rather than continuing to sign in with the legacy `ana@contoso.com` UPN, email as an alternate login ID can be used. To access an application or service, users would sign in to Azure AD using their non-UPN email, such as `ana@fabrikam.com`.
+To move toward hybrid authentication, you can configure Azure AD to let users sign in with their email as an alternate login ID. For example, if *Contoso* rebranded to *Fabrikam*, rather than continuing to sign in with the legacy `ana@contoso.com` UPN, email as an alternate login ID can be used. To access an application or service, users would sign in to Azure AD using their non-UPN email, such as `ana@fabrikam.com`.
-![Diagram of email as an alternate login I D.](media/howto-authentication-use-email-signin/email-alternate-login-id.png)
+![Diagram of email as an alternate login ID.](media/howto-authentication-use-email-signin/email-alternate-login-id.png)
This article shows you how to enable and use email as an alternate login ID.
Here's what you need to know about email as an alternate login ID:
* The feature supports managed authentication with Password Hash Sync (PHS) or Pass-Through Authentication (PTA).
* There are two options for configuring the feature:
  * [Home Realm Discovery (HRD) policy](#enable-user-sign-in-with-an-email-address) - Use this option to enable the feature for the entire tenant. Global administrator privileges required.
- * [Staged rollout policy](#enable-staged-rollout-to-test-user-sign-in-with-an-email-address) - Use this option to test the feature with specific Azure AD groups. Global administrator privileges required.
+ * [Staged rollout policy](#enable-staged-rollout-to-test-user-sign-in-with-an-email-address) - Use this option to test the feature with specific Azure AD groups. Global administrator privileges required. When you first add a security group for staged rollout, you're limited to 200 users to avoid a UX time-out. After you've added the group, you can add more users directly to it, as required.
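For example, once the staged rollout group exists, members beyond the initial 200 could be added with Graph PowerShell. This is a sketch; the scope name and the placeholder IDs are assumptions.

```powershell
# Sketch: add one more user to an existing staged-rollout group.
# The scope name and placeholder IDs below are assumptions.
Connect-MgGraph -Scopes "GroupMember.ReadWrite.All"
New-MgGroupMember -GroupId "<staged-rollout-group-object-id>" -DirectoryObjectId "<user-object-id>"
```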
## Preview limitations
One of the user attributes that's automatically synchronized by Azure AD Connect
## B2B guest user sign-in with an email address
-![Diagram of email as an alternate login I D for B 2 B guest user sign-in.](media/howto-authentication-use-email-signin/email-alternate-login-id-b2b.png)
+![Diagram of email as an alternate login ID for B2B guest user sign-in.](media/howto-authentication-use-email-signin/email-alternate-login-id-b2b.png)
-Email as an alternate login ID applies to [Azure AD B2B collaboration](../external-identities/what-is-b2b.md) under a "bring your own sign-in identifiers" model. When email as an alternate login ID is enabled in the home tenant, Azure AD users can perform guest sign in with non-UPN email on the resource tenanted endpoint. No action is required from the resource tenant to enable this functionality.
+Email as an alternate login ID applies to [Azure AD B2B collaboration](../external-identities/what-is-b2b.md) under a "bring your own sign-in identifiers" model. When email as an alternate login ID is enabled in the home tenant, Azure AD users can perform guest sign in with non-UPN email on the resource tenant endpoint. No action is required from the resource tenant to enable this functionality.
## Enable user sign-in with an email address
During preview, you currently need *global administrator* permissions to enable
1. Search for and select **Azure Active Directory**.
1. From the navigation menu on the left-hand side of the Azure Active Directory window, select **Azure AD Connect > Email as alternate login ID**.
- ![Screenshot of email as alternate login I D option in the Azure portal.](media/howto-authentication-use-email-signin/azure-ad-connect-screen.png)
+ ![Screenshot of email as alternate login ID option in the Azure portal.](media/howto-authentication-use-email-signin/azure-ad-connect-screen.png)
1. Click the checkbox next to *Email as an alternate login ID*.
1. Click **Save**.
- ![Screenshot of email as alternate login I D blade in the Azure portal.](media/howto-authentication-use-email-signin/email-alternate-login-id-screen.png)
+ ![Screenshot of email as alternate login ID blade in the Azure portal.](media/howto-authentication-use-email-signin/email-alternate-login-id-screen.png)
After the policy is applied, it can take up to 1 hour for it to propagate and for users to be able to sign in using their alternate login ID.
If users have trouble signing in with their email address, review the following
### Sign-in logs

You can review the [sign-in logs in Azure AD][sign-in-logs] for more information. Sign-ins with email as an alternate login ID will emit `proxyAddress` in the *Sign-in identifier type* field and the entered username in the *Sign-in identifier* field.
active-directory Howto Mfa Nps Extension Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-advanced.md
Previously updated : 06/01/2022 Last updated : 06/17/2022
# Advanced configuration options for the NPS extension for Multi-Factor Authentication
-The Network Policy Server (NPS) extension extends your cloud-based Azure AD Multi-Factor Authentication features into your on-premises infrastructure. This article assumes that you already have the extension installed, and now want to know how to customize the extension for you needs.
+The Network Policy Server (NPS) extension extends your cloud-based Azure AD Multi-Factor Authentication features into your on-premises infrastructure. This article assumes that you already have the extension installed, and now want to know how to customize the extension for your needs.
## Alternate login ID
To configure an IP allowed list, go to `HKLM\SOFTWARE\Microsoft\AzureMfa` and co
> [!NOTE]
> This registry key is not created by default by the installer, and an error appears in the AuthZOptCh log when the service is restarted. You can ignore this error, but if you create this registry key and leave it empty when it's not needed, the error message does not return.
-When a request comes in from an IP address that exists in the `IP_WHITELIST`, two-step verification is skipped. The IP list is compared to the IP address that is provided in the *ratNASIPAddress* attribute of the RADIUS request. If a RADIUS request comes in without the ratNASIPAddress attribute, a warning is logged: "IP_WHITE_LIST_WARNING::IP Whitelist is being ignored as the source IP is missing in the RADIUS request NasIpAddress attribute.
+When a request comes in from an IP address that exists in the `IP_WHITELIST`, two-step verification is skipped. The IP list is compared to the IP address that is provided in the *ratNASIPAddress* attribute of the RADIUS request. If a RADIUS request comes in without the ratNASIPAddress attribute, a warning is logged: "IP_WHITE_LIST_WARNING::IP Whitelist is being ignored as the source IP is missing in the RADIUS request NasIpAddress attribute."
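As a sketch, the allowed list could be populated from an elevated PowerShell prompt. Only the key path and the `IP_WHITELIST` value name come from this article; the value type and the semicolon-separated address format shown here are assumptions.

```powershell
# Sketch: create the IP_WHITELIST value under the NPS extension key.
# Run elevated; the REG_SZ type and the address-list format are assumptions.
$path = "HKLM:\SOFTWARE\Microsoft\AzureMfa"
New-ItemProperty -Path $path -Name "IP_WHITELIST" -PropertyType String -Value "10.0.1.5;10.0.1.6" -Force
```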
## Next steps
-[Resolve error messages from the NPS extension for Azure AD Multi-Factor Authentication](howto-mfa-nps-extension-errors.md)
+- [Resolve error messages from the NPS extension for Azure AD Multi-Factor Authentication](howto-mfa-nps-extension-errors.md)
+- [Use REQUIRE_USER_MATCH to prepare for users that aren't enrolled for MFA](howto-mfa-nps-extension.md#configure-your-nps-extension)
active-directory Concept Conditional Access Grant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-grant.md
The following client apps have been confirmed to support this setting:
- Microsoft Cortana
- Microsoft Edge
- Microsoft Excel
-- Microsoft Lists (iOS)
+- Microsoft Lists
- Microsoft Office
- Microsoft OneDrive
- Microsoft OneNote
active-directory Scenario Spa Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-spa-production.md
Check out a [deployment sample](https://github.com/Azure-Samples/ms-identity-jav
These code samples demonstrate several key operations for a single-page app.

- [SPA with an ASP.NET back-end](https://github.com/Azure-Samples/ms-identity-javascript-angular-spa-aspnetcore-webapi): How to get tokens for your own back-end web API (ASP.NET Core) by using **MSAL.js**.
-- [Node.js Web API (Azure AD](https://github.com/Azure-Samples/active-directory-javascript-nodejs-webapi-v2): How to validate access tokens for your back-end web API (Node.js) by using **passport-azure-ad**.
+- [Node.js Web API (Azure AD)](https://github.com/Azure-Samples/active-directory-javascript-nodejs-webapi-v2): How to validate access tokens for your back-end web API (Node.js) by using **passport-azure-ad**.
- [SPA with Azure AD B2C](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa): How to use **MSAL.js** to sign in users in an app that's registered with **Azure Active Directory B2C** (Azure AD B2C).
These code samples demonstrate several key operations for a single-page app.
## Next steps -- [JavaScript SPA tutorial](./tutorial-v2-javascript-auth-code.md): Deep dive to how to sign in users and get an access token to call the **Microsoft Graph API** by using **MSAL.js**.
+- [JavaScript SPA tutorial](./tutorial-v2-javascript-auth-code.md): A deep dive into how to sign in users and get an access token to call the **Microsoft Graph API** by using **MSAL.js**.
active-directory Howto Vm Sign In Azure Ad Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-linux.md
Previously updated : 11/22/2021 Last updated : 06/16/2022
Ensure your VM is configured with the following functionality:
Ensure your client meets the following requirements:

-- SSH client must support OpenSSH based certificates for authentication. You can use Az CLI (2.21.1 or higher) with OpenSSH (included in Windows 10 version 1803 or higher) or Azure Cloud Shell to meet this requirement.
-- SSH extension for Az CLI. You can install this using `az extension add --name ssh`. You don't need to install this extension when using Azure Cloud Shell as it comes pre-installed.
-- If you're using any other SSH client other than Az CLI or Azure Cloud Shell that supports OpenSSH certificates, you'll still need to use Az CLI with SSH extension to retrieve ephemeral SSH cert and optionally a config file and then use the config file with your SSH client.
+- SSH client must support OpenSSH based certificates for authentication. You can use Azure CLI (2.21.1 or higher) with OpenSSH (included in Windows 10 version 1803 or higher) or Azure Cloud Shell to meet this requirement.
+- SSH extension for Azure CLI. You can install this using `az extension add --name ssh`. You don't need to install this extension when using Azure Cloud Shell as it comes pre-installed.
+- If you're using any SSH client other than Azure CLI or Azure Cloud Shell that supports OpenSSH certificates, you'll still need to use Azure CLI with the SSH extension to retrieve an ephemeral SSH certificate and, optionally, a config file, and then use the config file with your SSH client.
- TCP connectivity from the client to either the public or private IP of the VM (ProxyCommand or SSH forwarding to a machine with connectivity also works).

> [!IMPORTANT]
Ensure your client meets the following requirements:
## Enabling Azure AD login for a Linux VM in Azure
-To use Azure AD login in for Linux VM in Azure, you need to first enable Azure AD login option for your Linux VM, configure Azure role assignments for users who are authorized to login in to the VM and then use SSH client that supports OpensSSH such as Az CLI or Az Cloud Shell to SSH to your Linux VM. There are multiple ways you can enable Azure AD login for your Linux VM, as an example you can use:
+To use Azure AD login for a Linux VM in Azure, you first need to enable the Azure AD login option for your Linux VM, configure Azure role assignments for users who are authorized to log in to the VM, and then use an SSH client that supports OpenSSH, such as Azure CLI or Azure Cloud Shell, to SSH to your Linux VM. There are multiple ways you can enable Azure AD login for your Linux VM. For example, you can use:
- Azure portal experience when creating a Linux VM
- Azure Cloud Shell experience when creating a Linux VM or for an existing Linux VM
az role assignment create \
For more information on how to use Azure RBAC to manage access to your Azure subscription resources, see the article [Steps to assign an Azure role](../../role-based-access-control/role-assignments-steps.md).
-## Install SSH extension for Az CLI
+## Install SSH extension for Azure CLI
-If you're using Azure Cloud Shell, then no other setup is needed as both the minimum required version of Az CLI and SSH extension for Az CLI are already included in the Cloud Shell environment.
+If you're using Azure Cloud Shell, then no other setup is needed as both the minimum required version of Azure CLI and the SSH extension for Azure CLI are already included in the Cloud Shell environment.
-Run the following command to add SSH extension for Az CLI
+Run the following command to add the SSH extension for Azure CLI:
```azurecli
az extension add --name ssh
az extension show --name ssh
You can enforce Conditional Access policies, such as requiring multifactor authentication, requiring a compliant or hybrid Azure AD joined device for the device running the SSH client, or checking for risk, before authorizing access to Linux VMs in Azure that are enabled with Azure AD login. The application that appears in the Conditional Access policy is called "Azure Linux VM Sign-In".

> [!NOTE]
-> Conditional Access policy enforcement requiring device compliance or Hybrid Azure AD join on the client device running SSH client only works with Az CLI running on Windows and macOS. It is not supported when using Az CLI on Linux or Azure Cloud Shell.
+> Conditional Access policy enforcement requiring device compliance or Hybrid Azure AD join on the client device running SSH client only works with Azure CLI running on Windows and macOS. It is not supported when using Azure CLI on Linux or Azure Cloud Shell.
## Login using Azure AD user account to SSH into the Linux VM
-### Using Az CLI
+### Using Azure CLI
First run `az login`, and then run `az ssh vm`.
The following example automatically resolves the appropriate IP address for the
az ssh vm -n myVM -g AzureADLinuxVM
```
-If prompted, enter your Azure AD login credentials at the login page, perform an MFA, and/or satisfy device checks. You'll only be prompted if your az CLI session doesn't already meet any required Conditional Access criteria. Close the browser window, return to the SSH prompt, and you'll be automatically connected to the VM.
+If prompted, enter your Azure AD login credentials at the login page, perform an MFA, and/or satisfy device checks. You'll only be prompted if your Azure CLI session doesn't already meet any required Conditional Access criteria. Close the browser window, return to the SSH prompt, and you'll be automatically connected to the VM.
You're now signed in to the Azure Linux virtual machine with the role permissions as assigned, such as VM User or VM Administrator. If your user account is assigned the Virtual Machine Administrator Login role, you can use sudo to run commands that require root privileges.
Use the following example to authenticate to Azure CLI using the service princip
az login --service-principal -u <sp-app-id> -p <password-or-cert> --tenant <tenant-id>
```
-Once authentication with a service principal is complete, use the normal Az CLI SSH commands to connect to the VM.
+Once authentication with a service principal is complete, use the normal Azure CLI SSH commands to connect to the VM.
```azurecli
az ssh vm -n myVM -g AzureADLinuxVM
For customers who are using previous version of Azure AD login for Linux that wa
## Using Azure Policy to ensure standards and assess compliance
-Use Azure Policy to ensure Azure AD login is enabled for your new and existing Linux virtual machines and assess compliance of your environment at scale on your Azure Policy compliance dashboard. With this capability, you can use many levels of enforcement: you can flag new and existing Linux VMs within your environment that don't have Azure AD login enabled. You can also use Azure Policy to deploy the Azure AD extension on new Linux VMs that don't have Azure AD login enabled, as well as remediate existing Linux VMs to the same standard. In addition to these capabilities, you can also use Azure Policy to detect and flag Linux VMs that have non-approved local accounts created on their machines. To learn more, review [Azure Policy](../../governance/policy/overview.md).
+Use Azure Policy to ensure Azure AD login is enabled for your new and existing Linux virtual machines and assess compliance of your environment at scale on your Azure Policy compliance dashboard. With this capability, you can use many levels of enforcement: you can flag new and existing Linux VMs within your environment that don't have Azure AD login enabled. You can also use Azure Policy to deploy the Azure AD extension on new Linux VMs that don't have Azure AD login enabled, and remediate existing Linux VMs to the same standard. In addition to these capabilities, you can also use Azure Policy to detect and flag Linux VMs that have non-approved local accounts created on their machines. To learn more, review [Azure Policy](../../governance/policy/overview.md).
## Troubleshoot sign-in issues

Some common errors when you try to SSH with Azure AD credentials include no Azure roles assigned, and repeated prompts to sign in. Use the following sections to correct these issues.
+### Missing application
+
+If the Azure Linux VM Sign-in application is missing from Conditional Access, use the following steps to remediate the issue:
+
+1. Check whether the application is present in the tenant:
+ 1. Sign in to the **Azure portal**.
+ 1. Browse to **Azure Active Directory** > **Enterprise applications**.
+ 1. Remove the filters to see all applications, and search for "VM". If you don't see Azure Linux VM Sign-in as a result, the service principal is missing from the tenant.
+
+Another way to verify it is via Graph PowerShell:
+
+1. [Install the Graph PowerShell SDK](/powershell/microsoftgraph/installation) if you haven't already done so.
+1. `Connect-MgGraph -Scopes "ServicePrincipalEndpoint.ReadWrite.All","Application.ReadWrite.All"`
+1. Sign in with a Global Admin account.
+1. Consent to the permission prompt.
+1. `Get-MgServicePrincipal -ConsistencyLevel eventual -Search '"DisplayName:Azure Linux VM"'`
+ 1. If this command results in no output and returns you to the PowerShell prompt, you can create the Service Principal with the following Graph PowerShell command:
+ 1. `New-MgServicePrincipal -AppId ce6ff14a-7fdc-4685-bbe0-f6afdfcfa8e0`
+ 1. Successful output will show that the AppID and the Application Name Azure Linux VM Sign-in were created.
+1. Sign out of Graph PowerShell when complete with the following command: `Disconnect-MgGraph`
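The Graph PowerShell steps above can be combined into one short sketch; it reuses only the cmdlets and the well-known app ID already listed.

```powershell
# Sketch: recreate the Azure Linux VM Sign-in service principal if it's missing.
Connect-MgGraph -Scopes "ServicePrincipalEndpoint.ReadWrite.All","Application.ReadWrite.All"

$sp = Get-MgServicePrincipal -ConsistencyLevel eventual -Search '"DisplayName:Azure Linux VM"'
if (-not $sp) {
    # Well-known app ID for Azure Linux VM Sign-in (from the steps above)
    New-MgServicePrincipal -AppId ce6ff14a-7fdc-4685-bbe0-f6afdfcfa8e0
}

Disconnect-MgGraph
```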
+
### Couldn't retrieve token from local cache

You must run `az login` again and go through an interactive sign-in flow. Review the section [Using Az Cloud Shell](#using-az-cloud-shell).
Virtual machine scale set VM connections may fail if the virtual machine scale s
### AllowGroups / DenyGroups statements in sshd_config cause first login to fail for Azure AD users
-Cause 1: If sshd_config contains either AllowGroups or DenyGroups statements, the very first login fails for Azure AD users. If the statement was added after a user already has a successful login, they can log in.
+Cause 1: If sshd_config contains either AllowGroups or DenyGroups statements, the first login fails for Azure AD users. If the statement was added after a user already has a successful login, they can log in.
Solution 1: Remove AllowGroups and DenyGroups statements from sshd_config.
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
Previously updated : 04/01/2022 Last updated : 06/16/2022
If you haven't deployed Windows Hello for Business and if that isn't an option f
Share your feedback about this feature or report issues using it on the [Azure AD feedback forum](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789).
+### Missing application
+
+If the Azure Windows VM Sign-In application is missing from Conditional Access, use the following steps to remediate the issue:
+
+1. Check whether the application is present in the tenant:
+ 1. Sign in to the **Azure portal**.
+ 1. Browse to **Azure Active Directory** > **Enterprise applications**.
+ 1. Remove the filters to see all applications, and search for "VM". If you don't see Azure Windows VM Sign-In as a result, the service principal is missing from the tenant.
+
+Another way to verify it is via Graph PowerShell:
+
+1. [Install the Graph PowerShell SDK](/powershell/microsoftgraph/installation) if you haven't already done so.
+1. `Connect-MgGraph -Scopes "ServicePrincipalEndpoint.ReadWrite.All","Application.ReadWrite.All"`
+1. Sign in with a Global Admin account.
+1. Consent to the permission prompt.
+1. `Get-MgServicePrincipal -ConsistencyLevel eventual -Search '"DisplayName:Azure Windows VM Sign-In"'`
+ 1. If this command results in no output and returns you to the PowerShell prompt, you can create the Service Principal with the following Graph PowerShell command:
+ 1. `New-MgServicePrincipal -AppId 372140e0-b3b7-4226-8ef9-d57986796201`
+ 1. Successful output will show that the AppID and the Application Name Azure Windows VM Sign-In were created.
+1. Sign out of Graph PowerShell when complete with the following command: `Disconnect-MgGraph`
+
## Next steps

For more information on Azure Active Directory, see [What is Azure Active Directory](../fundamentals/active-directory-whatis.md).
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
Previously updated : 05/20/2022 Last updated : 06/17/2022
This article lists the Azure AD built-in roles you can assign to allow managemen
> | [Hybrid Identity Administrator](#hybrid-identity-administrator) | Can manage AD to Azure AD cloud provisioning, Azure AD Connect, Pass-through Authentication (PTA), Password hash synchronization (PHS), Seamless Single sign-on (Seamless SSO), and federation settings. | 8ac3fc64-6eca-42ea-9e69-59f4c7b60eb2 |
> | [Identity Governance Administrator](#identity-governance-administrator) | Manage access using Azure AD for identity governance scenarios. | 45d8d3c5-c802-45c6-b32a-1d70b5e1e86e |
> | [Insights Administrator](#insights-administrator) | Has administrative access in the Microsoft 365 Insights app. | eb1f4a8d-243a-41f0-9fbd-c7cdf6c5ef7c |
+> | [Insights Analyst](#insights-analyst) | Access the analytical capabilities in Microsoft Viva Insights and run custom queries. | 25df335f-86eb-4119-b717-0ff02de207e9 |
> | [Insights Business Leader](#insights-business-leader) | Can view and share dashboards and insights via the Microsoft 365 Insights app. | 31e939ad-9672-4796-9c2e-873181342d2d |
> | [Intune Administrator](#intune-administrator) | Can manage all aspects of the Intune product. | 3a2c62db-5318-420d-8d74-23affee5d9d5 |
> | [Kaizala Administrator](#kaizala-administrator) | Can manage settings for Microsoft Kaizala. | 74ef975b-6605-40af-a5d2-b9539d836353 |
Users with this role can manage Azure AD identity governance configuration, incl
## Insights Administrator
-Users in this role can access the full set of administrative capabilities in the [Microsoft 365 Insights application](https://go.microsoft.com/fwlink/?linkid=2129521). This role has the ability to read directory information, monitor service health, file support tickets, and access the Insights Administrator settings aspects.
+Users in this role can access the full set of administrative capabilities in the Microsoft Viva Insights app. This role has the ability to read directory information, monitor service health, file support tickets, and access the Insights Administrator settings aspects.
+
+[Learn more](https://go.microsoft.com/fwlink/?linkid=2129521)
> [!div class="mx-tableFixed"]
> | Actions | Description |
Users in this role can access the full set of administrative capabilities in the
> | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Microsoft 365 service requests |
> | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
+## Insights Analyst
+
+Assign the Insights Analyst role to users who need to do the following:
+
+- Analyze data in the Microsoft Viva Insights app (but not manage any configuration settings)
+- Create, manage, and run queries
+- View basic settings and reports in the Microsoft 365 admin center
+- Create and manage service requests in the Microsoft 365 admin center
+
+[Learn more](https://go.microsoft.com/fwlink/?linkid=2129521)
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | microsoft.insights/queries/allProperties/allTasks | Run and manage queries in Viva Insights |
+> | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Microsoft 365 service requests |
+> | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
+
## Insights Business Leader
-Users in this role can access a set of dashboards and insights via the [Microsoft 365 Insights application](https://go.microsoft.com/fwlink/?linkid=2129521). This includes full access to all dashboards and presented insights and data exploration functionality. Users in this role do not have access to product configuration settings, which is the responsibility of the Insights Administrator role.
+Users in this role can access a set of dashboards and insights via the Microsoft Viva Insights app. This includes full access to all dashboards, presented insights, and data exploration functionality. Users in this role do not have access to product configuration settings, which is the responsibility of the Insights Administrator role.
+
+[Learn more](https://go.microsoft.com/fwlink/?linkid=2129521)
> [!div class="mx-tableFixed"]
> | Actions | Description |
active-directory Credential Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/credential-design.md
Previously updated : 04/01/2021 Last updated : 06/08/2022 # Customer intent: As a developer I am looking for information on how to enable my users to control their own information
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
-Verifiable credentials are made up of two components, the rules and display files. The rules file determines what the user needs to provide before they receive a verifiable credential. The display file controls the branding of the credential and styling of the claims. In this guide, we will explain how to modify both files to meet the requirements of your organization.
+Verifiable credentials are made up of two components: the rules and display definitions. The rules definition determines what the user needs to provide before they receive a verifiable credential. The display definition controls the branding of the credential and the styling of the claims. In this guide, we explain how to modify both definitions to meet the requirements of your organization.
> [!IMPORTANT]
> Azure Active Directory Verifiable Credentials is currently in public preview.
> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-## Rules file: Requirements from the user
+## Rules definition: Requirements from the user
-The rules file is a simple JSON file that describes important properties of verifiable credentials. In particular, it describes how claims are used to populate your verifiable credential.
+The rules definition is a simple JSON document that describes important properties of verifiable credentials. In particular, it describes how claims are used to populate your verifiable credential.
-There are currently three input types that are available to configure in the rules file. These types are used by the verifiable credential issuing service to insert claims into a verifiable credential and attest to that information with your DID. The following are the three types with explanations.
+There are currently four input types that are available to configure in the rules definition. These types are used by the verifiable credential issuing service to insert claims into a verifiable credential and attest to that information with your DID. The following are the four types with explanations.
- ID Token
+- ID Token Hint
- Verifiable credentials via a verifiable presentation.
- Self-Attested Claims
-**ID token:** The sample App and Tutorial use the ID Token. When this option is configured, you will need to provide an Open ID Connect configuration URI and include the claims that should be included in the VC. The user will be prompted to 'Sign in' on the Authenticator app to meet this requirement and add the associated claims from their account.
+**ID token:** When this option is configured, you will need to provide an OpenID Connect configuration URI and include the claims that should be included in the VC. The user will be prompted to 'Sign in' in the Authenticator app to meet this requirement and add the associated claims from their account.
-**Verifiable credentials:** The end result of an issuance flow is to produce a Verifiable Credential but you may also ask the user to Present a Verifiable Credential in order to issue one. The Rules File is able to take specific claims from the presented Verifiable Credential and include those claims in the newly issued Verifiable Credential from your organization.
+**ID token hint:** The sample app and tutorial use the ID token hint. When this option is configured, the relying party app needs to provide the claims that should be included in the VC in the Request Service API issuance request. Where the relying party app gets the claims is up to the app; they can come from the current sign-in session, from backend CRM systems, or even from self-asserted user input.
+
+**Verifiable credentials:** The end result of an issuance flow is to produce a Verifiable Credential but you may also ask the user to Present a Verifiable Credential in order to issue one. The rules definition is able to take specific claims from the presented Verifiable Credential and include those claims in the newly issued Verifiable Credential from your organization.
**Self-attested claims:** When this option is selected, the user will be able to directly type information into Authenticator. At this time, strings are the only supported input for self-attested claims.

![detailed view of verifiable credential card](media/credential-design/issuance-doc.png)
-**Static claims:** Additionally we are able to declare a static claim in the Rules file, however this input does not come from the user. The Issuer defines a static claim in the Rules file and would look like any other claim in the Verifiable Credential. Simply add a credentialSubject after vc.type and declare the attribute and the claim.
+**Static claims:** Additionally, we're able to declare a static claim in the rules definition; however, this input doesn't come from the user. The issuer defines a static claim in the rules definition, and it looks like any other claim in the verifiable credential. Add a credentialSubject after vc.type and declare the attribute and the claim.
```json
"vc": {
There are currently three input types that are available to configure in the rul
}
```

## Input type: ID token
-To get ID Token as input, the rules file needs to configure the well-known endpoint of the OIDC compatible Identity system. In that system you need to register an application with the correct information from [Issuer service communication examples](issuer-openid.md). Additionally, the client_id needs to be put in the rules file, as well as a scope parameter needs to be filled in with the correct scopes. For example, Azure Active Directory needs the email scope if you want to return an email claim in the ID token.
+To get an ID token as input, the rules definition needs to configure the well-known endpoint of the OIDC-compatible identity system. In that system, you need to register an application with the correct information from [Issuer service communication examples](issuer-openid.md). Additionally, the client_id needs to be put in the rules definition, and the scope parameter needs to be filled in with the correct scopes. For example, Azure Active Directory needs the email scope if you want to return an email claim in the ID token.
+ ```json
- {
+ {
  "attestations": {
    "idTokens": [
      {
- "mapping": {
- "firstName": { "claim": "given_name" },
- "lastName": { "claim": "family_name" }
- },
+ "mapping": [
+ {
+ "outputClaim": "firstName",
+ "inputClaim": "given_name",
+ "required": true,
+ "indexed": false
+ },
+ {
+ "outputClaim": "lastName",
+ "inputClaim": "family_name",
+ "required": true,
+ "indexed": true
+ }
+ ],
        "configuration": "https://dIdPlayground.b2clogin.com/dIdPlayground.onmicrosoft.com/B2C_1_sisu/v2.0/.well-known/openid-configuration",
        "client_id": "8d5b446e-22b2-4e01-bb2e-9070f6b20c90",
        "redirect_uri": "vcclient://openid/",
To get ID Token as input, the rules file needs to configure the well-known endpo
}
```
-| Property | Description |
-| -- | -- |
-| `attestations.idTokens` | An array of OpenID Connect identity providers that are supported for sourcing user information. |
-| `...mapping` | An object that describes how claims in each ID token are mapped to attributes in the resulting verifiable credential. |
-| `...mapping.{attribute-name}` | The attribute that should be populated in the resulting Verifiable Credential. |
-| `...mapping.{attribute-name}.claim` | The claim in ID tokens whose value should be used to populate the attribute. |
-| `...configuration` | The location of your identity provider's configuration document. This URL must adhere to the [OpenID Connect standard for identity provider metadata](https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderMetadata). The configuration document must include the `issuer`, `authorization_endpoint`, `token_endpoint`, and `jwks_uri` fields. |
-| `...client_id` | The client ID obtained during the client registration process. |
-| `...scope` | A space-delimited list of scopes the IDP needs to be able to return the correct claims in the ID token. |
-| `...redirect_uri` | Must always use the value `vcclient://openid/`. |
-| `validityInterval` | A time duration, in seconds, representing the lifetime of your verifiable credentials. After this time period elapses, the verifiable credential will no longer be valid. Omitting this value means that each Verifiable Credential will remain valid until is it explicitly revoked. |
-| `vc.type` | An array of strings indicating the schema(s) that your Verifiable Credential satisfies. See the section below for more detail. |
+See [idToken attestation](rules-and-display-definitions-model.md#idtokenattestation-type) for a reference of the properties.
+
+## Input type: ID token hint
+
+To get an ID token hint as input, the rules definition shouldn't contain configuration for an OIDC identity system; instead, it should have the special value `https://self-issued.me` for the configuration property. The claims mappings are the same as for the ID token type, but the difference is that the claim values need to be provided by the relying party app in the Request Service API issuance request.
+
+```json
+ {
+ "attestations": {
+ "idTokenHints": [
+ {
+ "configuration": "https://self-issued.me",
+ "mapping": [
+ {
+ "outputClaim": "firstName",
+ "inputClaim": "given_name",
+ "required": true,
+ "indexed": false
+ },
+ {
+ "outputClaim": "lastName",
+ "inputClaim": "family_name",
+ "required": true,
+ "indexed": true
+ }
+ ]
+ }
+ ]
+ },
+ "validityInterval": 2592000,
+ "vc": {
+ "type": ["VerifiedCredentialExpert" ]
+ }
+ }
+```
+
+See [idTokenHint attestation](rules-and-display-definitions-model.md#idtokenhintattestation-type) for a reference of the properties.
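
To illustrate how the relying party supplies the claim values, here's a hedged sketch of a Request Service API issuance request body matching the mapping above. The `authority`, `registration`, and `manifest` values are placeholders, and the exact payload shape should be checked against the Request Service API reference.

```json
{
  "authority": "did:web:verifiedid.contoso.com",
  "registration": {
    "clientName": "Contoso sample app"
  },
  "type": "VerifiedCredentialExpert",
  "manifest": "<credential-manifest-url>",
  "claims": {
    "given_name": "Megan",
    "family_name": "Bowen"
  }
}
```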
### vc.type: Choose credential type(s)
-All verifiable credentials must declare their "type" in their rules file. The type of a credential distinguishes your verifiable credentials from credentials issued by other organizations and ensures interoperability between issuers and verifiers. To indicate a credential type, you must provide one or more credential types that the credential satisfies. Each type is represented by a unique string - often a URI will be used to ensure global uniqueness. The URI does not need to be addressable; it is treated as a string.
+All verifiable credentials must declare their "type" in their rules definition. The type of a credential distinguishes your verifiable credentials from credentials issued by other organizations and ensures interoperability between issuers and verifiers. To indicate a credential type, you must provide one or more credential types that the credential satisfies. Each type is represented by a unique string - often a URI will be used to ensure global uniqueness. The URI doesn't need to be addressable; it is treated as a string.
As an example, a diploma credential issued by Contoso University might declare the following types:
To ensure interoperability of your credentials, it's recommended that you work c
## Input type: Verifiable credential

>[!NOTE]
->Rules files that ask for a verifiable credential do not use the presentation exchange format for requesting credentials. This will be updated when the Issuing Service supports the standard, Credential Manifest.
+>Rules definitions that ask for a verifiable credential don't use the presentation exchange format for requesting credentials. This will be updated when the issuing service supports the standard, Credential Manifest.
```json
{
  "attestations": {
    "presentations": [
      {
- "mapping": {
- "first_name": {
- "claim": "$.vc.credentialSubject.firstName"
- },
- "last_name": {
- "claim": "$.vc.credentialSubject.lastName",
- "indexed": true
- }
- },
+ "mapping": [
+ {
+ "outputClaim": "first_name",
+        "inputClaim": "$.vc.credentialSubject.firstName",
+ "required": true,
+ "indexed": false
+ },
+ {
+ "outputClaim": "last_name",
+        "inputClaim": "$.vc.credentialSubject.lastName",
+ "required": true,
+ "indexed": true
+      }
+      ],
    "credentialType": "VerifiedCredentialNinja",
    "contracts": [
      "https://beta.did.msidentity.com/v1.0/3c32ed40-8a10-465b-8ba4-0b1e86882668/verifiableCredential/contracts/VerifiedCredentialNinja"
To ensure interoperability of your credentials, it's recommended that you work c
          "ProofOfNinjaNinja"
        ]
      }
- }
- ```
-
-| Property | Description |
-| -- | -- |
-| `attestations.presentations` | An array of verifiable credentials being requested as inputs. |
-| `...mapping` | An object that describes how claims in each presented Verifiable Credential are mapped to attributes in the resulting Verifiable Credential. |
-| `...mapping.{attribute-name}` | The attribute that should be populated in the resulting verifiable credential. |
-| `...mapping.{attribute-name}.claim` | The claim in the Verifiable Credential whose value should be used to populate the attribute. |
-| `...mapping.{attribute-name}.indexed` | Only one can be enabled per Verifiable Credential to save for revoke. Please see the [article on how to revoke a credential](how-to-issuer-revoke.md) for more information. |
-| `credentialType` | The credentialType of the Verifiable Credential you are asking the user to present. |
-| `contracts` | The URI of the contract in the Verifiable Credential Service portal. |
-| `issuers.iss` | The issuer DID for the Verifiable Credential being asked of the user. |
-| `validityInterval` | A time duration, in seconds, representing the lifetime of your verifiable credentials. After this time period elapses, the Verifiable Credential will no longer be valid. Omitting this value means that each Verifiable Credential will remain valid until is it explicitly revoked. |
-| `vc.type` | An array of strings indicating the schema(s) that your verifiable credential satisfies. |
+}
+```
+See [verifiablePresentation attestation](rules-and-display-definitions-model.md#verifiablepresentationattestation-type) for a reference of the properties.
-## Input type: Self-attested claims
+## Input type: Self-attested claims
During the issuance flow, the user can be asked to input some self-attested information. As of now, the only input type is a 'string'.
+
```json
{
  "attestations": {
    "selfIssued": {
- "mapping": {
- "firstName": { "claim": "firstName" },
- "lastName": { "claim": "lastName" }
- }
+ "mapping": [
+ {
+ "outputClaim": "firstName",
+ "inputClaim": "firstName",
+ "required": true,
+ "indexed": false
+ },
+ {
+        "outputClaim": "lastName",
+ "inputClaim": "lastName",
+ "required": true,
+ "indexed": true
+ }
+      ]
+    }
  },
  "validityInterval": 2592001,
During the issuance flow, the user can be asked to input some self-attested info
    "type": [ "VerifiedCredentialExpert" ]
  }
}
```
-| Property | Description |
-| -- | -- |
-| `attestations.selfIssued` | An array of self-issued claims that require input from the user. |
-| `...mapping` | An object that describes how self-issued claims are mapped to attributes in the resulting Verifiable Credential. |
-| `...mapping.alias` | The attribute that should be populated in the resulting Verifiable Credential. |
-| `...mapping.alias.claim` | The claim in the Verifiable Credential whose value should be used to populate the attribute. |
-| `validityInterval` | A time duration, in seconds, representing the lifetime of your verifiable credentials. After this time period elapses, the Verifiable Credential will no longer be valid. Omitting this value means that each Verifiable Credential will remain valid until is it explicitly revoked. |
-| `vc.type` | An array of strings indicating the schema(s) that your Verifiable Credential satisfies. |
+See [selfIssued attestation](rules-and-display-definitions-model.md#selfissuedattestation-type) for a reference of the properties.
-## Display file: Verifiable credentials in Microsoft Authenticator
+## Display definition: Verifiable credentials in Microsoft Authenticator
Verifiable credentials offer a limited set of options that can be used to reflect your brand. This article provides instructions how to customize your credentials, and best practices for designing credentials that look great once issued to users.
Verifiable credentials issued to users are displayed as cards in Microsoft Authe
Cards also contain customizable fields that you can use to let users know the purpose of the card, the attributes it contains, and more.
-## Create a credential display file
+## Create a credential display definition
-Much like the rules file, the display file is a simple JSON file that describes how the contents of your verifiable credentials should be displayed in the Microsoft Authenticator app.
+Much like the rules definition, the display definition is a simple JSON document that describes how the contents of your verifiable credentials should be displayed in the Microsoft Authenticator app.
>[!NOTE] > At this time, this display model is only used by Microsoft Authenticator.
-The display file has the following structure.
+The display definition has the following structure.
```json
{
The display file has the following structure.
    "title": "Do you want to get your digital diploma from Contoso U?",
    "instructions": "Please log in with your Contoso U account to receive your digital diploma."
  },
- "claims": {
- "vc.credentialSubject.name": {
+ "claims": [
+ {
+ "claim": "vc.credentialSubject.name",
        "type": "String",
        "label": "Name"
      }
- }
+ ]
  }
}
```
-| Property | Description |
-| -- | -- |
-| `locale` | The language of the Verifiable Credential. Reserved for future use. |
-| `card.title` | Displays the type of credential to the user. Recommended maximum length of 25 characters. |
-| `card.issuedBy` | Displays the name of the issuing organization to the user. Recommended maximum length of 40 characters. |
-| `card.backgroundColor` | Determines the background color of the card, in hex format. A subtle gradient will be applied to all cards. |
-| `card.textColor` | Determines the text color of the card, in hex format. Recommended to use black or white. |
-| `card.logo` | A logo that is displayed on the card. The URL provided must be publicly addressable. Recommended maximum height of 36 px, and maximum width of 100 px regardless of phone size. Recommend PNG with transparent background. |
-| `card.description` | Supplemental text displayed alongside each card. Can be used for any purpose. Recommended maximum length of 100 characters. |
-| `consent.title` | Supplemental text displayed when a card is being issued. Used to provide details about the issuance process. Recommended length of 100 characters. |
-| `consent.instructions` | Supplemental text displayed when a card is being issued. Used to provide details about the issuance process. Recommended length of 100 characters. |
-| `claims` | Allows you to provide labels for attributes included in each credential. |
-| `claims.{attribute}` | Indicates the attribute of the credential to which the label applies. |
-| `claims.{attribute}.type` | Indicates the attribute type. Currently we only support 'String'. |
-| `claims.{attribute}.label` | The value that should be used as a label for the attribute, which will show up in Authenticator. This maybe different than the label that was used in the rules file. Recommended maximum length of 40 characters. |
-
->[!NOTE]
- >If a claim is included in the rules file and then omitted in the display file, there are two different types of experiences. On iOS, the claim will not be displayed in details section shown in the above image, while on Android the claim will be shown.
+See the [Display definition model](rules-and-display-definitions-model.md#displaymodel-type) for a reference of the properties.
## Next steps

Now you have a better understanding of verifiable credential design and how you can create your own to meet your needs.

- [Issuer service communication examples](issuer-openid.md)
+- Reference for [Rules and Display definitions](rules-and-display-definitions-model.md)
active-directory Decentralized Identifier Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/decentralized-identifier-overview.md
editor:
Previously updated : 04/01/2021 Last updated : 06/16/2022
Our digital and physical lives are increasingly linked to the apps, services, and devices we use to access a rich set of experiences. This digital transformation allows us to interact with hundreds of companies and thousands of other users in ways that were previously unimaginable.
-But identity data has too often been exposed in security breaches. These breaches affect our social, professional, and financial lives. Microsoft believes that there's a better way. Every person has a right to an identity that they own and control, one that securely stores elements of their digital identity and preserves privacy. This primer explains how we are joining hands with a diverse community to build an open, trustworthy, interoperable, and standards-based Decentralized Identity (DID) solution for individuals and organizations.
+But identity data has too often been exposed in security breaches. These breaches affect our social, professional, and financial lives. Microsoft believes that there's a better way. Every person has a right to an identity that they own and control, one that securely stores elements of their digital identity and preserves privacy. We are building an open, trustworthy, interoperable, and standards-based Decentralized Identity (DID) solution for individuals and organizations.
## Why we need Decentralized Identity
-Today we use our digital identity at work, at home, and across every app, service, and device we use. It's made up of everything we say, do, and experience in our lives: purchasing tickets for an event, checking into a hotel, or even ordering lunch. Currently, our identity and all our digital interactions are owned and controlled by other parties, some of whom we aren't even aware of.
+Today we use our digital identity at work, at home, and across every app, service, and device we use. Our digital identity is made up of everything we say, do, and experience in our lives. Activities like purchasing tickets for an event, checking into a hotel, or even ordering lunch become part of our identity. Today, our identity and information about our online activity are owned and controlled by others, in some cases even without our knowledge.
Generally, users grant consent to several apps and devices. This approach requires a high degree of vigilance on the user's part to track who has access to what information. On the enterprise front, collaboration with consumers and partners requires high-touch orchestration to securely exchange data in a way that maintains privacy and security for all involved.
We believe a standards-based Decentralized Identity system can unlock a new set
## Lead with open standards
-We're committed to working closely with customers, partners, and the community to unlock the next generation of Decentralized Identity-based experiences, and we're excited to partner with the individuals and organizations that are making incredible contributions in this space. If the DID ecosystem is to grow, standards, technical components, and code deliverables must be open source and accessible to all.
+We're committed to working closely with customers, partners, and the community to unlock the next generation of Decentralized Identity-based experiences. We are excited to partner with individuals and organizations making incredible contributions in this space. If the DID ecosystem is to grow, standards, technical components, and code deliverables must be open source and accessible to all.
Microsoft is actively collaborating with members of the Decentralized Identity Foundation (DIF), the W3C Credentials Community Group, and the wider identity community. We've worked with these groups to identify and develop critical standards, and the following standards have been implemented in our services.
Microsoft is actively collaborating with members of the Decentralized Identity F
Before we can understand DIDs, it helps to compare them with current identity systems. Email addresses and social network IDs are human-friendly aliases for collaboration but are now overloaded to serve as the control points for data access across many scenarios beyond collaboration. This creates a potential problem, because access to these IDs can be removed at any time by external parties.
-Decentralized Identifiers (DIDs) are different. DIDs are user-generated, self-owned, globally unique identifiers rooted in decentralized systems like ION. They possess unique characteristics, like greater assurance of immutability, censorship resistance, and tamper evasiveness. These attributes are critical for any ID system that is intended to provide self-ownership and user control.
+Decentralized Identifiers (DIDs) are different. DIDs are user-generated, self-owned, globally unique identifiers rooted in decentralized systems like ION. They possess unique characteristics, like greater assurance of immutability, censorship resistance, and tamper evasiveness. These attributes are critical for any ID system that is intended to provide self-ownership and user control.
Microsoft's verifiable credential solution uses decentralized identifiers (DIDs) to cryptographically sign as proof that a relying party (verifier) is attesting to information proving they are the owners of a verifiable credential. A basic understanding of DIDs is recommended for anyone creating a verifiable credential solution based on the Microsoft offering.
To deliver on these promises, we need a technical foundation made up of seven ke
**1. W3C Decentralized Identifiers (DIDs)**
IDs users create, own, and control independently of any organization or government. DIDs are globally unique identifiers linked to Decentralized Public Key Infrastructure (DPKI) metadata composed of JSON documents that contain public key material, authentication descriptors, and service endpoints.
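
As an illustrative sketch (not taken from this article), a DID document following the W3C DID Core data model might look like the following; the identifier, key material, and service endpoint are placeholders:

```json
{
  "@context": "https://www.w3.org/ns/did/v1",
  "id": "did:example:123456789abcdefghi",
  "verificationMethod": [
    {
      "id": "did:example:123456789abcdefghi#key-1",
      "type": "JsonWebKey2020",
      "controller": "did:example:123456789abcdefghi",
      "publicKeyJwk": { "kty": "EC", "crv": "P-256", "x": "<x>", "y": "<y>" }
    }
  ],
  "service": [
    {
      "id": "did:example:123456789abcdefghi#hub",
      "type": "IdentityHub",
      "serviceEndpoint": "https://hub.example.com/"
    }
  ]
}
```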
-**2. Decentralized system: ION (Identity Overlay Network)**
-ION is a Layer 2 open, permissionless network based on the purely deterministic Sidetree protocol, which requires no special tokens, trusted validators, or other consensus mechanisms; the linear progression of Bitcoin's time chain is all that's required for its operation. We have [open sourced a npm package](https://www.npmjs.com/package/@decentralized-identity/ion-tools) to make working with the ION network easy to integrate into your apps and services. Libraries include creating a new DID, generating keys and anchoring your DID on the Bitcoin blockchain.
+**2. Decentralized system**
+
+- ION (Identity Overlay Network): a Layer 2 open, permissionless network based on the purely deterministic Sidetree protocol, which requires no special tokens, trusted validators, or other consensus mechanisms. The linear progression of Bitcoin's time chain is all that's required for its operation. We have open sourced an [npm package](https://www.npmjs.com/package/@decentralized-identity/ion-tools) to make working with the ION network easy to integrate into your apps and services. Libraries include creating a new DID, generating keys, and anchoring your DID on the Bitcoin blockchain.
+
+- `did:web` is a permission-based model that establishes trust using a web domain's existing reputation.
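
For example, under the did:web method a DID such as `did:web:contoso.com` resolves to a DID document hosted at `https://contoso.com/.well-known/did.json` (the domain here is a placeholder); a minimal hosted document would be:

```json
{
  "@context": "https://www.w3.org/ns/did/v1",
  "id": "did:web:contoso.com"
}
```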
**3. DID User Agent/Wallet: Microsoft Authenticator App**
Enables real people to use decentralized identities and verifiable credentials. Authenticator creates DIDs, facilitates issuance and presentation requests for verifiable credentials, and manages the backup of your DID's seed through an encrypted wallet file.
An issuance and verification service in Azure and a REST API for [W3C Verifiable
## A sample scenario
-The scenario we use to explain how VCs work involves:
-- Woodgrove Inc. a company.
-- Proseware, a company that offers Woodgrove employees discounts.
-- Alice, an employee at Woodgrove, Inc. who wants to get a discount from Proseware
-
+The scenario we use to explain how Verifiable Credentials work involves:
+- WoodGrove Inc., a company.
+- ProseWare, a company that offers WoodGrove employees discounts.
+- Alice, an employee at WoodGrove, Inc. who wants to get a discount from ProseWare.
-Today, Alice provides a username and password to log onto Woodgrove's networked environment. Woodgrove is deploying a verifiable credential solution to provide a more manageable way for Alice to prove that she is an employee of Woodgrove. Proseware accepts verifiable credentials issued by Woodgrove as proof of employment to offer corporate discounts as part of their corporate discount program.
+Today, Alice provides a username and password to sign in to WoodGrove's networked environment. WoodGrove is deploying a verifiable credential solution to provide a more manageable way for Alice to prove that she's an employee of WoodGrove. ProseWare accepts verifiable credentials issued by WoodGrove as proof of employment to offer corporate discounts as part of their corporate discount program.
-Alice requests Woodgrove Inc for a proof of employment verifiable credential. Woodgrove Inc attests Alice's identity and issues a signed verfiable credential that Alice can accept and store in her digital wallet application. Alice can now present this verifiable credential as a proof of employement on the Proseware site. After a succesfull presentation of the credential, Prosware offers discount to Alice and the transaction is logged in Alice's wallet application so that she can track where and to whom she has presented her proof of employment verifiable credential.
+Alice requests a proof of employment verifiable credential from WoodGrove Inc. WoodGrove Inc attests Alice's identity and issues a signed verifiable credential that Alice can accept and store in her digital wallet application. Alice can now present this verifiable credential as proof of employment on the ProseWare site. After a successful presentation of the credential, ProseWare offers a discount to Alice, and the transaction is logged in Alice's wallet application so that she can track where and to whom she's presented her proof of employment verifiable credential.
![microsoft-did-overview](media/decentralized-identifier-overview/did-overview.png)
The roles in this scenario are:
![roles in a verifiable credential environment](media/decentralized-identifier-overview/issuer-user-verifier.png)
-**issuer** ΓÇô The issuer is an organization that creates an issuance solution requesting information from a user. The information is used to verify the userΓÇÖs identity. For example, Woodgrove, Inc. has an issuance solution that enables them to create and distribute verifiable credentials (VCs) to all their employees. The employee uses the Authenticator app to sign in with their username and password, which passes an ID token to the issuing service. Once Woodgrove, Inc. validates the ID token submitted, the issuance solution creates a VC that includes claims about the employee and is signed with Woodgrove, Inc. DID. The employee now has a verifiable credential that is signed by their employer, which includes the employees DID as the subject DID.
+**issuer** – The issuer is an organization that creates an issuance solution requesting information from a user. The information is used to verify the user's identity. For example, WoodGrove, Inc. has an issuance solution that enables them to create and distribute verifiable credentials (VCs) to all their employees. The employee uses the Authenticator app to sign in with their username and password, which passes an ID token to the issuing service. Once WoodGrove, Inc. validates the ID token submitted, the issuance solution creates a VC that includes claims about the employee and is signed with WoodGrove, Inc.'s DID. The employee now has a verifiable credential that is signed by their employer, which includes the employee's DID as the subject DID.
-**user** – The user is the person or entity that is requesting a VC. For example, Alice is a new employee of Woodgrove, Inc. and was previously issued her proof of employment verifiable credential. When Alice needs to provide proof of employment in order to get a discount at Proseware, she can grant access to the credential in her Authenticator app by signing a verifiable presentation that proves Alice is the owner of the DID. Proseware is able to validate the credential was issued by Woodgrove, Inc.and Alice is the owner of the credential.
+**user** – The user is the person or entity that is requesting a VC. For example, Alice is a new employee of WoodGrove, Inc. and was previously issued her proof of employment verifiable credential. When Alice needs to provide proof of employment in order to get a discount at ProseWare, she can grant access to the credential in her Authenticator app by signing a verifiable presentation that proves Alice is the owner of the DID. ProseWare is able to validate that the credential was issued by WoodGrove, Inc. and that Alice is the owner of the credential.
-**verifier** – The verifier is a company or entity who needs to verify claims from one or more issuers they trust. For example, Proseware trusts Woodgrove, Inc. does an adequate job of verifying their employees' identity and issuing authentic and valid VCs. When Alice tries to order the equipment she needs for her job, Proseware will use open standards such as SIOP and Presentation Exchange to request credentials from the User proving they are an employee of Woodgrove, Inc. For example, Proseware might provide Alice a link to a website with a QR code she scans with her phone camera. This initiates the request for a specific VC, which Authenticator will analyze and give Alice the ability to approve the request to prove her employment to Proseware. Proseware can use the verifiable credentials service API or SDK, to verify the authenticity of the verifiable presentation. Based on the information provided by Alice they give Alice the discount. If other companies and organizations know that Woodgrove, Inc. issues VCs to their employees, they can also create a verifier solution and use the Woodgrove, Inc. verifiable credential to provide special offers reserved for Woodgrove, Inc. employees.
+**verifier** – The verifier is a company or entity that needs to verify claims from one or more issuers it trusts. For example, ProseWare trusts that WoodGrove, Inc. does an adequate job of verifying their employees' identity and issuing authentic and valid VCs. When Alice tries to order the equipment she needs for her job, ProseWare will use open standards such as SIOP and Presentation Exchange to request credentials from the user proving they are an employee of WoodGrove, Inc. For example, ProseWare might provide Alice a link to a website with a QR code she scans with her phone camera. This initiates the request for a specific VC, which Authenticator will analyze and give Alice the ability to approve the request to prove her employment to ProseWare. ProseWare can use the verifiable credentials service API or SDK to verify the authenticity of the verifiable presentation. Based on the information provided by Alice, they give Alice the discount. If other companies and organizations know that WoodGrove, Inc. issues VCs to their employees, they can also create a verifier solution and use the WoodGrove, Inc. verifiable credential to provide special offers reserved for WoodGrove, Inc. employees.
## Next steps
active-directory Get Started Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/get-started-request-api.md
Previously updated : 05/03/2022 Last updated : 06/16/2022 #Customer intent: As an administrator, I am trying to learn how to use the Request Service API and integrate it into my business application.
Azure Active Directory (Azure AD) Verifiable Credentials includes the Request Se
## API access token
-For your application to access the Request Service REST API, you need to include a valid access token with the required permissions. Access tokens issued by the Microsoft identity platform contain information (scopes) that the Request Service REST API uses to validate the caller. An access token ensures that the caller has the proper permissions to perform the operation they're requesting.
+Your application needs to include a valid access token with the required permissions so that it can access the Request Service REST API. Access tokens issued by the Microsoft identity platform contain information (scopes) that the Request Service REST API uses to validate the caller. An access token ensures that the caller has the proper permissions to perform the operation they're requesting.
To get an access token, your app must be registered with the Microsoft identity platform, and be authorized by an administrator for access to the Request Service REST API. If you haven't registered the *verifiable-credentials-app* application, see [how to register the app](verifiable-credentials-configure-tenant.md#register-an-application-in-azure-ad) and then [generate an application secret](verifiable-credentials-configure-issuer.md#configure-the-verifiable-credentials-app).
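As an illustrative sketch only (the tenant ID, client ID, secret, and the Request Service resource app ID are placeholders that come from your own app registration, not literal values), the client credentials token request has this shape:

```http
POST https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials
&client_id={client-id}
&client_secret={client-secret}
&scope={request-service-app-id}/.default
```

The returned `access_token` is then sent in an `Authorization: Bearer` header on calls to the Request Service REST API.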
active-directory How To Dnsbind https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-dnsbind.md
Previously updated : 02/22/2022 Last updated : 06/02/2022 #Customer intent: Why are we doing this?
Linking a DID to a domain solves the initial trust problem by allowing any entit
## When do you need to update the domain in your DID?
-In the event where the domain associated with your company changes, you would also need to change the domain in your DID document that is also published in the ION network. You can update the domain in your DID directly from the Azure AD Verifiable Credential portal.
+In the event where the domain associated with your company changes, you would also need to change the domain in your DID document. You can update the domain in your DID directly from the Azure AD Verifiable Credential portal.
## How do we link DIDs and domains?
It is of high importance that you link your DID to a domain recognizable to the
:::image type="content" source="media/how-to-dnsbind/publish-update-domain.png" alt-text="Choose the publish button so your changes become":::
-It might take up to two hours for your DID document to be updated in the [ION network](https://identity.foundation/ion) with the new domain information. No other changes to the domain are possible before the changes are published.
+If the trust system is ION, it might take up to two hours for your DID document to be updated in the [ION network](https://identity.foundation/ion) with the new domain information. No other changes to the domain are possible before the changes are published. If the trust system is Web, the changes are public as soon as you replace the did-configuration.json file on your web server.
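For orientation, a `did-configuration.json` file follows the DIF Well Known DID Configuration shape sketched below; the JWT value is a placeholder, and the real file for your tenant is generated for you by the portal:

```json
{
  "@context": "https://identity.foundation/.well-known/did-configuration/v1",
  "linked_dids": [
    "<domain-linkage-credential-as-a-signed-JWT>"
  ]
}
```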
>[!NOTE]
>If your changes are successful, you'll need to [verify](#verified-domain) your newly added domain.
Yes. You need to wait until the config.json file gets updated before you publish
### How do I know when the linked domain update has successfully completed?
-Once the domain changes are publised to ION, the domain section inside the Azure AD Verifiable Credentials service will display `Published` as the status and you should be able to make new changes to the domain.
+If the trust system is ION, once the domain changes are published to ION, the domain section inside the Azure AD Verifiable Credentials service will display Published as the status and you should be able to make new changes to the domain. If the trust system is Web, the changes are public as soon as you replace the did-configuration.json file on your web server.
>[!IMPORTANT]
> No changes to your domain are possible while publishing is in progress.
active-directory How To Issuer Revoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-issuer-revoke.md
Previously updated : 04/01/2021 Last updated : 06/03/2022 #Customer intent: As an administrator, I am trying to learn the process of revoking verifiable credentials that I have issued
See below for an example of how the Rules file is modified to include the index.
"attestations": { "idTokens": [ {
- "mapping": {
- "Name": { "claim": "name" },
- "email": { "claim": "email", "indexed": true}
- },
+ "mapping": [
+ {
+ "outputClaim": "Name",
+ "inputClaim": "name",
+ "required": true,
+ "indexed": false
+ },
+ {
+ "outputClaim": "email",
+ "inputClaim": "email",
+ "required": true,
+ "indexed": true
+ }
+ ],
"configuration": "https://login.microsoftonline.com/tenant-id-here7/v2.0/.well-known/openid-configuration", "client_id": "c0d6b785-7a08-494e-8f63-c30744c3be2f", "redirect_uri": "vcclient://openid"
See below for an example of how the Rules file is modified to include the index.
```

>[!NOTE]
->Only one attribute can be indexed from a Rules file.
+>Only one attribute can be indexed from a rules claims mapping.
## How do I revoke a verifiable credential
active-directory How To Opt Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-opt-out.md
Previously updated : 02/08/2022 Last updated : 06/16/2022 #Customer intent: As an administrator, I am looking for information to help me disable
In this article:
## When do you need to opt out?
-Opting out is a one-way operation, after you opt-out your Azure Active Directory Verifiable Credentials environment will be reset. During the Public Preview opting out may be required to:
+Opting out is a one-way operation; after you opt out, your Azure Active Directory Verifiable Credentials environment will be reset. During the public preview, opting out may be required to:
+- Enable new service capabilities.
+- Reset your service configuration.
+- Switch between the ION and Web trust systems.
-## What happens to your data when you opt-out?
+## What happens to your data when you opt out?
When you complete opting out of the Azure Active Directory Verifiable Credentials service, the following actions will take place:
Once an opt-out takes place, you won't be able to recover your DID or conduct an
## Effect on existing verifiable credentials All verifiable credentials already issued will continue to exist. They won't be cryptographically invalidated as your DID will remain resolvable through ION.
-However, when relying parties call the status API, they will always receive back a failure message.
+However, when relying parties call the status API, they'll always receive back a failure message.
-## How to opt-out from the Azure Active Directory Verifiable Credentials Public Preview?
+## How to opt out of the Azure Active Directory Verifiable Credentials public preview?
1. From the Azure portal, search for verifiable credentials.
2. Choose **Organization Settings** from the left-side menu.
active-directory How To Register Didwebsite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-register-didwebsite.md
+
+ Title: How to register your website ID
+description: Learn how to register your website ID for did:web
+documentationCenter: ''
+++++ Last updated : 06/14/2022++
+#Customer intent: As an administrator, I am looking for information to help me disable
++
+# How to register your website ID for did:web
++
+> [!IMPORTANT]
+> Azure Active Directory Verifiable Credentials is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+- Complete verifiable credentials onboarding with Web as the selected trust system.
+- Complete the Linked Domain setup.
+
+## Why do I need to register my website ID?
+
+If your trust system for the tenant is Web, you need to register your website ID to be able to issue and verify your credentials. When you use the ION-based trust system, information like your issuer's public keys is published to the blockchain. When the trust system is Web, you have to make this information available on your website.
+
+## How do I register my website ID?
+
+1. Navigate to the Verifiable Credentials | Getting Started page.
+1. On the left side of the page, select Domain.
+1. At the Website ID registration, select Review.
+
+ ![Screenshot of website registration page.](media/how-to-register-didwebsite/how-to-register-didwebsite-domain.png)
+1. Copy or download the DID document displayed in the box.
+
+ ![Screenshot of did.json.](media/how-to-register-didwebsite/how-to-register-didwebsite-diddoc.png)
+1. Upload the file to your web server. The DID document JSON file needs to be uploaded to the location `/.well-known/did.json` on your web server.
+1. Once the file is available on your web server, select the **Refresh registration status** button to verify that the system can request the file.
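The `/.well-known/did.json` location comes from the did:web method's convention for mapping an identifier to a document URL. This small Python sketch (illustrative only; not code the service itself exposes) shows that mapping:

```python
def did_web_to_url(did: str) -> str:
    """Map a did:web identifier to the URL of its DID document."""
    if not did.startswith("did:web:"):
        raise ValueError("not a did:web identifier")
    parts = did[len("did:web:"):].split(":")
    # %3A in the host segment encodes a port number
    host = parts[0].replace("%3A", ":")
    if len(parts) == 1:
        # A bare domain resolves to the well-known location
        return f"https://{host}/.well-known/did.json"
    # Extra colon-separated segments become URL path segments
    return f"https://{host}/{'/'.join(parts[1:])}/did.json"

# A DID for contoso.com resolves to its well-known document location
print(did_web_to_url("did:web:contoso.com"))
# → https://contoso.com/.well-known/did.json
```

Wallets and verifiers fetch the document from that derived URL, which is why the file must be publicly reachable at exactly that path.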
+
+## When is the DID document in the did.json file used?
+
+The DID document contains the public keys for your issuer and is used during both issuance and presentation. An example of how the public keys are used is when Authenticator, as a wallet, validates the signature of an issuance or presentation request.
+
+## When does the did.json file need to be republished to the webserver?
+
+The DID document in the did.json file needs to be republished if you change the Linked Domain or rotate your signing keys.
+
+## Next steps
+
+- [Tutorial for issuing a verifiable credential](verifiable-credentials-configure-issuer.md)
active-directory How To Use Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-use-quickstart.md
+
+ Title: How to create credentials using the QuickStart
+description: Learn how to use the QuickStart to create custom credentials
+documentationCenter: ''
+++++ Last updated : 06/16/2022++
+#Customer intent: As an administrator, I am looking for information to help me disable
++
+# How to create credentials using the Quickstart
++
+> [!IMPORTANT]
+> Azure Active Directory Verifiable Credentials is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+To use the Azure Active Directory Verifiable Credentials QuickStart, you only need to complete verifiable credentials onboarding.
+
+## What is the QuickStart?
+
+Azure AD Verifiable Credentials now comes with a QuickStart in the portal for creating custom credentials. When you use the QuickStart, you don't need to edit and upload display and rules files to Azure Storage. Instead, you enter all the details in the portal and create the credential on one page.
+
+>[!NOTE]
+>When working with custom credentials, you provide display and rules definitions in JSON documents. These definitions are now stored together with the credential's details.
+
+## Create a Custom credential
+
+When you select **+ Add credential** in the portal, you get the option to launch two QuickStarts. Select **Custom credential**, and then select **Next**.
+
+![Screenshot of VC quickstart](media/how-to-use-quickstart/quickstart-startscreen.png)
+
+On the next screen, enter the JSON for the Display and Rules definitions, and give the credential a type name. Select **Create** to create the credential.
+
+![screenshot of create new credential section with JSON sample](media/how-to-use-quickstart/quickstart-create-new.png)
+
+## Sample JSON Display definitions
+
+The expected JSON for the Display definitions is the inner content of the displays collection. The JSON is a collection, so if you want to support multiple locales, you add multiple entries separated by commas.
+
+```json
+{
+ "locale": "en-US",
+ "card": {
+ "title": "Verified Credential Expert",
+ "issuedBy": "Microsoft",
+ "backgroundColor": "#000000",
+ "textColor": "#ffffff",
+ "logo": {
+ "uri": "https://didcustomerplayground.blob.core.windows.net/public/VerifiedCredentialExpert_icon.png",
+ "description": "Verified Credential Expert Logo"
+ },
+ "description": "Use your verified credential to prove to anyone that you know all about verifiable credentials."
+ },
+ "consent": {
+ "title": "Do you want to get your Verified Credential?",
+ "instructions": "Sign in with your account to get your card."
+ },
+ "claims": [
+ {
+ "claim": "vc.credentialSubject.firstName",
+ "label": "First name",
+ "type": "String"
+ },
+ {
+ "claim": "vc.credentialSubject.lastName",
+ "label": "Last name",
+ "type": "String"
+ }
+ ]
+}
+```
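To illustrate the multiple-locales point above (the second locale and its strings are invented for illustration), two display entries sit side by side in the collection, separated by a comma:

```json
{
  "locale": "en-US",
  "card": { "title": "Verified Credential Expert", "issuedBy": "Microsoft" }
},
{
  "locale": "sv-SE",
  "card": { "title": "Verifierad legitimationsexpert", "issuedBy": "Microsoft" }
}
```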
+
+## Sample JSON Rules definitions
+
+The expected JSON for the Rules definitions is the inner content of the rules attribute, which starts with the attestations attribute.
+
+```json
+{
+ "attestations": {
+ "idTokenHints": [
+ {
+ "mapping": [
+ {
+ "outputClaim": "firstName",
+ "required": true,
+ "inputClaim": "$.given_name",
+ "indexed": false
+ },
+ {
+ "outputClaim": "lastName",
+ "required": true,
+ "inputClaim": "$.family_name",
+ "indexed": false
+ }
+ ],
+ "required": false
+ }
+ ]
+ }
+}
+```
+
+## Configure the samples to issue and verify your Custom credential
+
+To configure your sample code to issue and verify using custom credentials, you need:
+
+- Your tenant's issuer DID
+- The credential type
+- The manifest URL of your credential.
+
+The easiest way to find this information for a custom credential is to go to your credential in the portal, select **Issue credential**, and switch to **Custom issue**.
+
+![Screenshot of QuickStart issue credential screen.](media/how-to-use-quickstart/quickstart-config-sample-1.png)
+
+After switching to custom issue, you have access to a text box with a JSON payload for the Request Service API. Replace the placeholder values with your environment's information. The issuer's DID is the authority value.
+
+![Screenshot of Quickstart custom issue.](media/how-to-use-quickstart/quickstart-config-sample-2.png)
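As a trimmed sketch of where those three values land in the payload (field names reflect the public preview Request Service API and may change; treat the payload generated in the portal as authoritative), the placeholders below stand in for your environment's values:

```json
{
  "authority": "<your-issuer-DID>",
  "issuance": {
    "type": "<your-credential-type>",
    "manifest": "<your-credential-manifest-URL>"
  }
}
```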
+
+## Next steps
+
+- Reference for [Rules and Display definitions model](rules-and-display-definitions-model.md)
active-directory Introduction To Verifiable Credentials Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/introduction-to-verifiable-credentials-architecture.md
Previously updated : 07/20/2021 Last updated : 06/02/2022
Terminology for verifiable credentials (VCs) might be confusing if you're not fa
“A ***distributed ledger*** is a non-centralized system for recording events. These systems establish sufficient confidence for participants to rely upon the data recorded by others to make operational decisions. They typically use distributed databases where different nodes use a consensus protocol to confirm the ordering of cryptographically signed transactions. The linking of digitally signed transactions over time often makes the history of the ledger effectively immutable.”
-* The Microsoft solution uses the ***Identity Overlay Network (ION)*** to provide decentralized public key infrastructure (PKI) capability.
-
-
+* The Microsoft solution uses the ***Identity Overlay Network (ION)*** to provide decentralized public key infrastructure (PKI) capability. As an alternative to ION, Microsoft also offers DID Web as the trust system.
### Combining centralized and decentralized identity architectures
These use cases demonstrate how centralized identities and decentralized identit
### Distributing initial credentials
-Alice accepts employment with Woodgrove. As part of the onboarding process, an Azure Active Directory (AD) account is created for Alice to use inside of the Woodgrove trust boundary. Alice's manager must figure out how to enable Alice, who works remotely, to receive initial sign in information in a secure way. In the past, the IT department might have provided those credentials to their manager, who would print them and hand them to Alice. This doesn't work with remote employees.
+Alice accepts employment with Woodgrove. As part of the onboarding process, an Azure Active Directory (AD) account is created for Alice to use inside of the Woodgrove trust boundary. Alice's manager must figure out how to enable Alice, who works remotely, to receive initial sign-in information in a secure way. In the past, the IT department might have provided those credentials to their manager, who would print them and hand them to Alice. This doesn't work with remote employees.
VCs can add value to centralized systems by augmenting the credential distribution process. Instead of needing the manager to provide credentials, Alice can use their VC as proof of identity to receive their initial username and credentials for centralized systems access. Alice presents the proof of identity they added to their wallet as part of the onboarding process.
By combining centralized and decentralized identity architectures for onboarding
![Accessing resources inside of the trust boundary](media/introduction-to-verifiable-credentials-architecture/inside-trust-boundary.png)
-As an employee, Alice is operating inside of the trust boundary of Woodgrove. Woodgrove acts as the identity provider (IDP) and maintains complete control of the identity and the configuration of the apps Alice uses to interact within the Woodgrove trust boundary. To use resources in the Azure AD trust boundary, Alice provides potentially multiple forms of proof of identification to log on to Woodgrove's trust boundary and access the resources inside of Woodgrove's technology environment. This is a typical scenario that is well served using a centralized identity architecture.
+As an employee, Alice is operating inside of the trust boundary of Woodgrove. Woodgrove acts as the identity provider (IDP) and maintains complete control of the identity and the configuration of the apps Alice uses to interact within the Woodgrove trust boundary. To use resources in the Azure AD trust boundary, Alice provides potentially multiple forms of proof of identification to sign in to Woodgrove's trust boundary and access the resources inside of Woodgrove's technology environment. This is a typical scenario that is well served using a centralized identity architecture.
* Woodgrove manages the trust boundary and using good security practices provides the least-privileged level of access to Alice based on the job performed. To maintain a strong security posture, and potentially for compliance reasons, Woodgrove must also be able to track employees' permissions and access to resources and must be able to revoke permissions when the employment is terminated.
As an employee, Alice is operating inside of the trust boundary of Woodgrove. Wo
Individual employees have changing identity needs, and VCs can augment centralized systems to manage those changes.
-* While employed by Woodgrove Alice might need additional access to resources based on meeting specific requirements. For example, when Alice completes privacy training, she can be issued a new employee VC with that claim, and that VC can be used to access restricted resources.
+* While employed by Woodgrove, Alice might need to gain access to resources based on meeting specific requirements. For example, when Alice completes privacy training, she can be issued a new employee VC with that claim, and that VC can be used to access restricted resources.
* VCs can be used inside of the trust boundary for account recovery. For example, if the employee has lost their phone and computer, they can regain access by getting a new VC from the identity verification service trusted by Woodgrove, and then use that VC to get new credentials.
By combining centralized and decentralized identity architectures for operating
Woodgrove will add and end business relationships with other organizations and will need to determine when centralized and decentralized identity architectures are used.
-By combining centralized and decentralized identity architectures, the responsibility and effort associated with identity and proof of identity is distributed, risk is reduced, and the user does not risk releasing their private information as often or to as many unknown verifiers. Specifically:
+By combining centralized and decentralized identity architectures, the responsibility and effort associated with identity and proof of identity is distributed, risk is reduced, and the user doesn't risk releasing their private information as often or to as many unknown verifiers. Specifically:
* In centralized identity architectures, the IDP issues credentials and performs verification of those issued credentials. Information about all identities is processed by the IDP, either storing them in or retrieving them from a directory. IDPs may also dynamically accept security tokens from other IDP systems, such as social sign-ins or business partners. For a relying party to use identities in the IDP trust boundary, they must be configured to accept the tokens issued by the IDP.
By combining centralized and decentralized identity architectures, the responsib
In decentralized identity architectures, the issuer, user, and relying party (RP) each have a role in establishing and ensuring ongoing trusted exchange of each other's credentials. The public keys of the actors' DIDs are resolvable in ION, which allows signature validation and therefore trust of any artifact, including a verifiable credential. Relying parties can consume verifiable credentials without establishing trust relationships with the issuer. Instead, the issuer provides the subject a credential to present as proof to relying parties. All messages between actors are signed with the actor's DID; DIDs from issuers and verifiers also need to own the DNS domains that generated the requests.
-For example: When VC holders needs to access a resource, they must present the VC to that relying party. They do so by using a wallet application to read the RP's request to present a VC. As a part of reading the request, the wallet application uses the RP's DID to find the RPs public keys using ION, validating that the request to present the VC has not been tampered with. The wallet also checks that the DID is referenced in a metadata document that is hosted in the DNS domain of the RP, to prove domain ownership.
+For example: When VC holders need to access a resource, they must present the VC to that relying party. They do so by using a wallet application to read the RP's request to present a VC. As a part of reading the request, the wallet application uses the RP's DID to find the RP's public keys using ION, validating that the request to present the VC hasn't been tampered with. The wallet also checks that the DID is referenced in a metadata document hosted in the DNS domain of the RP, to prove domain ownership.
-
![How a decentralized identity system works](media/introduction-to-verifiable-credentials-architecture/how-decentralized-works.png)
In this flow, the credential holder interacts with the issuer to request a verif
1. The wallet validates the issuance requests and processes the contract requirements:
 - 1. Validates that the issuance request message is signed by the issuer's keys found in the DID document resolved in ION. This ensures that the message has not been tampered with.
 + 1. Validates that the issuance request message is signed by the issuer's keys found in the DID document resolved in ION. This ensures that the message hasn't been tampered with.
1. Validates that the DNS domain referenced in the issuerΓÇÖs DID document is owned by the issuer.
Decentralized architectures can be used to enhance existing solutions and provid
To deliver on the aspirations of the [Decentralized Identity Foundation](https://identity.foundation/) (DIF) and W3C [Design goals](https://www.w3.org/TR/did-core/), the following should be considered when creating a verifiable credential solution:
-* There are no central points of trust establishment between actors in the system. That is, trust boundaries are not expanded through federation because actors trust specific VCs
+* There are no central points of trust establishment between actors in the system. That is, trust boundaries aren't expanded through federation because actors trust specific VCs.
* ION enables the discovery of any actor's decentralized identifier (DID).
* The solution enables verifiers to validate any verifiable credentials (VCs) from any issuer.
- * The solution does not enable the issuer to control authorization of the subject or the verifier (relying party).
+ * The solution doesn't enable the issuer to control authorization of the subject or the verifier (relying party).
* The actors operate in a decoupled manner, each capable of completing the tasks for their roles.
- * Issuers service every VC request and do not discriminate on the requests serviced.
+ * Issuers service every VC request and don't discriminate on the requests serviced.
* Subjects own their VC once issued and can present their VC to any verifier.
active-directory Plan Issuance Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/plan-issuance-solution.md
Previously updated : 07/20/2021 Last updated : 06/03/2022
The **Azure Key Vault** service stores your issuer keys, which are generated whe
Each issuer has a single key set used for signing, updating, and recovery. This key set is used for every issuance of every verifiable credential you produce.
-**Azure Storage** is used to store credential metadata and definitions; specifically, the rules and display files for your credentials.
+**Azure AD Verifiable Credentials Service** is used to store credential metadata and definitions; specifically, the rules and display definitions for your credentials.
-* Display files determine which claims are stored in the VC and how it's displayed in the holder's wallet. The display file also includes branding and other elements. Rules files are limited in size to 50 KB, while display files are limited to 150 KB. See [How to customize your verifiable credentials](../verifiable-credentials/credential-design.md).
+* Display definitions determine which claims are stored in the VC and how it's displayed in the holder's wallet. The display definition also includes branding and other elements. Rules definitions are limited in size to 50 KB, while display definitions are limited to 150 KB. See [How to customize your verifiable credentials](../verifiable-credentials/credential-design.md).
-* Rules are an issuer-defined model that describes the required inputs of a verifiable credential, the trusted sources of the inputs, and the mapping of input claims to output claims.
+* Rules are an issuer-defined model that describes the required inputs of a verifiable credential, the trusted sources of the inputs, and the mapping of input claims to output claims.
* **Input** – A subset of the model in the rules file, for client consumption. The subset must describe the set of inputs, where to obtain the inputs, and the endpoint to call to obtain a verifiable credential.
-* Rules and display files for different credentials can be configured to use different containers, subscriptions, and storage. For example, you can delegate permissions to different teams that own management of specific VCs.
### Azure AD Verifiable Credentials service
The Azure AD Verifiable Credentials service enables you to issue and revoke VCs
![ION](media/plan-issuance-solution/plan-for-issuance-solution-ion.png)
-Microsoft uses the [Identity Overlay Network (ION)](https://identity.foundation/ion/), [a Sidetree-based network](https://identity.foundation/sidetree/spec/) that uses Bitcoin's blockchain for decentralized identifier (DID) implementation. The DID document of the issuer is stored in ION and is used to perform cryptographic signature checks by parties to the transaction.
+As one alternative for the tenant's trust system, Microsoft uses the [Identity Overlay Network (ION)](https://identity.foundation/ion/), [a Sidetree-based network](https://identity.foundation/sidetree/spec/) that uses Bitcoin's blockchain for decentralized identifier (DID) implementation. The DID document of the issuer is stored in ION and is used to perform cryptographic signature checks by parties to the transaction. The other alternative for the trust system is Web, where the DID document is hosted on the issuer's web server.
### Microsoft Authenticator application
For scalability, consider implementing metrics for the following:
* [Azure Key Vault monitoring and alerting](../../key-vault/general/alert.md)
- * [Monitoring Azure Blob Storage](../../storage/blobs/monitor-blob-storage.md)
- * Monitor the components used for your business logic layer.

### Plan for reliability
To plan for reliability, we recommend:
* [Azure Key Vault availability and redundancy - Azure Key Vault](../../key-vault/general/disaster-recovery-guidance.md)
- * [Disaster recovery and storage account failover - Azure Storage](../../storage/common/storage-disaster-recovery-guidance.md)
- * For frontend and business layer, your solution can manifest in an unlimited number of ways. As with any solution, for the dependencies you identify, ensure that the dependencies are resilient and monitored. If the rare event that the Azure AD Verifiable Credentials issuance service, Azure Key Vault, or Azure Storage services become unavailable, the entire solution will become unavailable.
active-directory Plan Verification Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/plan-verification-solution.md
Previously updated : 07/20/2021 Last updated : 06/02/2022
This content covers the technical aspects of planning for a verifiable credentia
Supporting technologies that aren't specific to verification solutions are out of scope. For example, websites are used in a verifiable credential verification solution but planning a website deployment isn't covered in detail.
-As you plan your verification solution, you must consider what business capability is being added or modified. You must also consider what IT capabilities can be reused, and what capabilities must be added to create the solution. Also consider what training is needed for the people involved in the business process and the people that support the end users and staff of the solution. These topics aren't covered in this content. We recommend reviewing the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/) for information covering these topics.
+As you plan your verification solution, you must consider what business capability is being added or modified. You must also consider what IT capabilities can be reused, and what capabilities must be added to create the solution. Also consider what training is needed for the people involved in the business process and the people that support the end users and staff of the solution. These considerations aren't covered in this content. We recommend reviewing the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/) for information covering them.
## Components of the solution
Application programming interfaces (APIs) provide developers a method to abstrac
![Azure AD VC ION](./media/plan-verification-solution/plan-verification-solution-ion.png)
-Verifiable credential solutions use a decentralized ledger system to record transactions. Microsoft uses the [Identity Overlay Network (ION)](https://identity.foundation/ion/), [a Sidetree-based network](https://identity.foundation/sidetree/spec/) that uses Bitcoin as its blockchain-styled ledger for decentralized identifier (DID) implementation. The DID document of the issuer is stored in ION and used by parties to the transaction to perform cryptographic signature checks.
+As one alternative for the tenant's trust system, Microsoft uses the [Identity Overlay Network (ION)](https://identity.foundation/ion/), [a Sidetree-based network](https://identity.foundation/sidetree/spec/) that uses Bitcoin's blockchain for decentralized identifier (DID) implementation. The DID document of the issuer is stored in ION and is used to perform cryptographic signature checks by parties to the transaction. The other alternative for the trust system is Web, where the DID document is hosted on the issuer's web server.
+ ### Microsoft Authenticator application

![Microsoft Authenticator application](media/plan-verification-solution/plan-verification-solution-authenticator.png)
-Microsoft Authenticator is the mobile application that orchestrates the interactions between the relying party, the user, the Azure AD Verifiable Credentials issuance service, and dependencies described in the contract used to issue VCs. Microsoft Authenticator acts as a digital wallet in which the holder of the VC stores the VC. It is also the mechanism used to present VCs for verification.
+Microsoft Authenticator is the mobile application that orchestrates the interactions between the relying party, the user, the Azure AD Verifiable Credentials issuance service, and dependencies described in the contract used to issue VCs. Microsoft Authenticator acts as a digital wallet in which the holder of the VC stores the VC. It's also the mechanism used to present VCs for verification.
### Relying party (RP)
Verifiable credentials can be used as other proof to access to sensitive applica
* **Goal**: The goal of the scenario determines what kind of credential and issuer is needed. Typical scenarios include:
- * **Authorization**: In this scenario the user presents the VC to make an authorization decision. VCs designed for proof of completion of a training or holding a specific certification, are a good fit for this scenario. The VC attributes should contain fine-grained information conducive to authorization decisions and auditing. For example, if the VC is used to certify the individual is trained and can access sensitive financial apps, the app logic can check the department claim for fine-grained authorization, and use the employee ID for audit purposes.
+ * **Authorization**: In this scenario, the user presents the VC to make an authorization decision. VCs designed for proof of completion of a training or holding a specific certification, are a good fit for this scenario. The VC attributes should contain fine-grained information conducive to authorization decisions and auditing. For example, if the VC is used to certify the individual is trained and can access sensitive financial apps, the app logic can check the department claim for fine-grained authorization, and use the employee ID for audit purposes.
* **Confirmation of identity verification**: In this scenario, the goal is to confirm that the same person who initially onboarded is indeed the one attempting to access the high-value application. A credential from an identity verification issuer would be a good fit and the application logic should validate that the attributes from the VC align with the user who logged in the application.
The decentralized nature of verifiable credentials enables this scenario without
* **Goal**: The goal of the scenario determines what kind of credential and issuer is needed. Typical scenarios include:
- * **Authentication**: In this scenario a user must have possession of VC to prove employment or relationship to a particular organization(s). In this case, the RP should be configured to accept VCs issued by the target organizations.
+ * **Authentication**: In this scenario, a user must have possession of a VC to prove employment or a relationship to a particular organization or organizations. In this case, the RP should be configured to accept VCs issued by the target organizations.
* **Authorization**: Based on the application requirements, the applications might consume the VC attributes for fine-grained authorization decisions and auditing. For example, if an e-commerce website offers discounts to employees of the organizations in a particular location, they can validate this based on the country claim in the VC (if present).
Note: While the scenario we describe in this section is specific to recover Azur
**VC Attribute correlation with Azure AD**: When defining the attributes of the VC in collaboration with the issuer, establish a mechanism to correlate information with internal systems based on the claims in the VC and user input. For example, if you have an identity verification provider (IDV) verify identity prior to onboarding employees, ensure that the issued VC includes claims that would also be present in an internal system such as a human resources system for correlation. This might be a phone number, address, or date of birth. In addition to claims in the VC, the RP can ask for some information such as the last four digits of their social security number (SSN) as part of this process.
-**Role of VCs with Existing Azure AD Credential Reset Capabilities**: Azure AD has a built-in self-service password reset (SSPR) capability. Verifiable Credentials can be used to provide an other way to recover, particularly in cases where users do not have access to or lost control of the SSPR method, for example they've lost both computer and mobile device. In this scenario, the user can reobtain a VC from an identity proof issuer and present it to recover their account.
+**Role of VCs with Existing Azure AD Credential Reset Capabilities**: Azure AD has a built-in self-service password reset (SSPR) capability. Verifiable Credentials can be used to provide another way to recover, particularly in cases where users don't have access to or have lost control of the SSPR method, for example, they've lost both their computer and mobile device. In this scenario, the user can reobtain a VC from an identity proof issuer and present it to recover their account.
Similarly, you can use a VC to generate a temporary access pass that will allow users to reset their MFA authentication methods without a password.
You can use information in presented VCs to build a user profile. If you want to
## Plan for performance
-As with any solution, you must plan for performance. Focus areas include latency, throughput, storage, and scalability. During initial phases of a release cycle, performance should not be a concern. However, when adoption of your issuance solution results in many verifiable credentials being issued, performance planning might become a critical part of your solution.
+As with any solution, you must plan for performance. Focus areas include latency, throughput, and scalability. During initial phases of a release cycle, performance should not be a concern. However, when adoption of your solution results in many verifiable credentials being verified, performance planning might become a critical part of your solution.
The following provides areas to consider when planning for performance:
 * Maximum signing performance of a Key Vault is 2,000 signings per ~10 seconds. This means your solution can support up to 12,000 VC validation requests per minute.
- * You cannot control throttling; however, we recommend you read [Azure Key Vault throttling guidance](../../key-vault/general/overview-throttling.md) so that you understand how throttling might impact performance.
+ * You can't control throttling; however, we recommend you read [Azure Key Vault throttling guidance](../../key-vault/general/overview-throttling.md) so that you understand how throttling might impact performance.
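The quoted limits can be sanity-checked with quick arithmetic; a minimal sketch, assuming the stated rate of 2,000 signings per 10-second window:

```python
# Back-of-the-envelope check of the Key Vault signing limit quoted above.
signings_per_window = 2000   # signings per throttling window (stated above)
window_seconds = 10          # approximate window length

# Scale the per-window rate up to one minute.
signings_per_minute = signings_per_window * (60 // window_seconds)
print(signings_per_minute)  # 12000
```

This is the same arithmetic behind the "12,000 VC validation requests per minute" figure above; real throughput also depends on Key Vault throttling behavior.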
## Plan for reliability
As you are designing for security, consider the following:
* Only the Azure AD Verifiable Credentials service and the website service principals should have permissions to use Key Vault to sign messages with the private key.
-* Do not assign any human identity administrative permissions to the Key Vault. For more information on Key Vault best practices, refer to [Azure Security Baseline for Key Vault](../../key-vault/general/security-baseline.md).
+* Don't assign any human identity administrative permissions to the Key Vault. For more information on Key Vault best practices, see [Azure Security Baseline for Key Vault](../../key-vault/general/security-baseline.md).
* Review [Securing Azure environments with Azure Active Directory](https://azure.microsoft.com/resources/securing-azure-environments-with-azure-active-directory/) for best practices for managing the supporting services for your solution.
active-directory Presentation Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/presentation-request-api.md
Previously updated : 05/26/2022 Last updated : 06/02/2022 #Customer intent: As an administrator, I am trying to learn the process of revoking verifiable credentials that I have issued.
The following permission is required to call the Request Service REST API. For m
| Permission type | Permission |
|--|--|
-| Application | bbb94529-53a3-4be5-a069-7eaf2712b826/.default|
+| Application | 3db474b9-6a0c-4840-96ac-1fceb342124f/.default |
## Presentation request payload
The Request Service REST API generates several events to the callback endpoint.
|Property |Type |Description |
|--|--|--|
-| `url` | string| URI to the callback endpoint of your application. The URI must point to a reacheable endpoint on the internet otherwise the service will throw a callback URL unreadable error. Accepted inputs IPv4, IPv6 or DNS resolvable hostname. |
+| `url` | string| URI to the callback endpoint of your application. The URI must point to a reachable endpoint on the internet; otherwise, the service throws a callback URL unreadable error. Accepted inputs: IPv4, IPv6, or a DNS-resolvable hostname. |
| `state` | string| Associates with the state passed in the original payload. |
| `headers` | string| Optional. You can include a collection of HTTP headers required by the receiving end of the POST message. The currently supported header values are the `api-key` or the `Authorization` headers. Any other header will throw an invalid callback header error.|
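As an illustration of the constraints in the table above, a hypothetical client-side pre-flight check might validate the callback object before sending the request. The `validate_callback` helper and the `https` requirement are assumptions for this sketch, not the service's own validation:

```python
from urllib.parse import urlparse

# Only these callback headers are accepted, per the table above.
ALLOWED_HEADERS = {"api-key", "Authorization"}

def validate_callback(callback: dict) -> list[str]:
    """Illustrative pre-flight checks mirroring the documented rules."""
    errors = []
    parsed = urlparse(callback.get("url", ""))
    # Assumption: the endpoint is an https URI with a resolvable host.
    if parsed.scheme != "https" or not parsed.hostname:
        errors.append("url must be an https URI with a reachable host")
    unknown = set(callback.get("headers", {})) - ALLOWED_HEADERS
    if unknown:
        errors.append(f"unsupported callback headers: {sorted(unknown)}")
    return errors

print(validate_callback({
    "url": "https://contoso.example/api/callback",
    "state": "11111111-2222-3333-4444-555555555555",
    "headers": {"api-key": "secret"},
}))  # []
```

A failing check here would surface the same class of problem the service reports as a callback URL unreadable or invalid callback header error.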
active-directory Rules And Display Definitions Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/rules-and-display-definitions-model.md
+
+ Title: Rules and Display Definition Reference
+description: Rules and Display Definition Reference
+documentationCenter: ''
+ Last updated : 06/16/2022
+#Customer intent: As an administrator, I am looking for information to help me disable
++
+# Rules and Display Definition Reference
++
+Rules and Display definitions are used to define a credential. You can read more about them in [How to customize your credentials](credential-design.md).
+
+## rulesModel type
+
+| Property | Type | Description |
+| -- | -- | -- |
+|`attestations`| [idTokenAttestation](#idtokenattestation-type) and/or [idTokenHintAttestation](#idtokenhintattestation-type) and/or [verifiablePresentationAttestation](#verifiablepresentationattestation-type) and/or [selfIssuedAttestation](#selfissuedattestation-type) | describes the attestation types required for issuance |
+|`validityInterval` | number | represents the lifespan of the credential, in seconds |
+|`vc`| vcType array | types for this contract |
++
+### idTokenAttestation type
+
+When you sign in the user from within Authenticator, you can use the returned ID token from an OIDC-compatible provider as input.
+
+| Property | Type | Description |
+| -- | -- | -- |
+| `mapping` | [claimMapping](#claimmapping-type) (optional) | rules to map input claims into output claims in the verifiable credential |
+| `configuration` | string (url) | location of the identity provider's configuration document |
+| `clientId` | string | client ID to use when obtaining the ID token |
+| `redirectUri` | string | redirect URI to use when obtaining the ID token; must be `vcclient://openid/` |
+| `scope` | string | space delimited list of scopes to use when obtaining the ID token |
+| `required` | boolean (default false) | indicating whether this attestation is required or not |
+| `trustedIssuers` | optional string (array) | a list of DIDs allowed to issue the verifiable credential for this contract. This property is only used for specific scenarios where the `idtoken` hint can come from another issuer |
+
+### idTokenHintAttestation type
+
+This flow uses the IDTokenHint, which is provided as a payload through the Request REST API. The mapping is the same as for the ID token attestation.
+
+| Property | Type | Description |
+| -- | -- | -- |
+| `mapping` | [claimMapping](#claimmapping-type) (optional) | rules to map input claims into output claims in the verifiable credential |
+| `required` | boolean (default false) | indicating whether this attestation is required or not |
+| `trustedIssuers` | optional string (array) | a list of DIDs allowed to issue the verifiable credential for this contract. This property is only used for specific scenarios where the idtoken hint can come from another issuer |
+
+### verifiablePresentationAttestation type
+
+Use this type when you want the user to present another VC as input for a newly issued VC. The wallet allows the user to select the VC during issuance.
+
+| Property | Type | Description |
+| -- | -- | -- |
+| `mapping` | [claimMapping](#claimmapping-type) (optional) | rules to map input claims into output claims in the verifiable credential |
+| `credentialType` | string (optional) | required credential type of the input |
+| `required` | boolean (default false) | indicating whether this attestation is required or not |
+| `trustedIssuers` | string (array) | a list of DIDs allowed to issue the verifiable credential for this contract. The service defaults to your issuer, so you don't need to provide this value yourself. |
+
+### selfIssuedAttestation type
+
+Use this type when you want the user to enter information themselves. This type is also called self-attested input.
+
+| Property | Type | Description |
+| -- | -- | -- |
+| `mapping` | [claimMapping](#claimmapping-type) (optional) | rules to map input claims into output claims in the verifiable credential |
+| `required` | boolean (default false) | indicating whether this attestation is required or not |
++
+### claimMapping type
+
+| Property | Type | Description |
+| -- | -- | -- |
+| `inputClaim` | string | the name of the claim to use from the input |
+| `outputClaim` | string | the name of the claim in the verifiable credential |
+| `indexed` | boolean (default false) | indicating whether the value of this claim is used for searching; only one claimMapping object is indexable for a given contract |
+| `required` | boolean (default false) | indicating whether this mapping is required or not |
+| `type` | string (optional) | type of claim |
+
+## Example rules definition
+```json
+{
+ "attestations": {
+ "idTokenHints": [
+ {
+ "mapping": [
+ {
+ "outputClaim": "givenName",
+ "required": false,
+ "inputClaim": "given_name",
+ "indexed": false
+ },
+ {
+ "outputClaim": "familyName",
+ "required": false,
+ "inputClaim": "family_name",
+ "indexed": false
+ }
+ ],
+ "required": false
+ }
+ ]
+ },
+ "validityInterval": 2592000,
+ "vc": {
+ "type": [
+ "VerifiedCredentialExpert"
+ ]
+ }
+}
+
+```
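To make the mapping semantics concrete, here is a small sketch that applies the `claimMapping` entries from the rules definition above to a sample ID token hint payload. The `map_claims` helper is illustrative, not service code, and it assumes `validityInterval` is expressed in seconds (2592000 seconds is 30 days):

```python
# The example rules definition above, expressed as a Python dict.
rules = {
    "attestations": {
        "idTokenHints": [
            {
                "mapping": [
                    {"outputClaim": "givenName", "required": False,
                     "inputClaim": "given_name", "indexed": False},
                    {"outputClaim": "familyName", "required": False,
                     "inputClaim": "family_name", "indexed": False},
                ],
                "required": False,
            }
        ]
    },
    "validityInterval": 2592000,  # assumed to be seconds: 30 days
    "vc": {"type": ["VerifiedCredentialExpert"]},
}

def map_claims(mapping, input_claims):
    """Map input claims to output claims, enforcing 'required'."""
    output = {}
    for rule in mapping:
        value = input_claims.get(rule["inputClaim"])
        if value is None:
            if rule.get("required"):
                raise ValueError(f"missing required claim {rule['inputClaim']}")
            continue
        output[rule["outputClaim"]] = value
    return output

mapping = rules["attestations"]["idTokenHints"][0]["mapping"]
claims = map_claims(mapping, {"given_name": "John", "family_name": "Smith"})
print(claims)  # {'givenName': 'John', 'familyName': 'Smith'}
```

The resulting `givenName`/`familyName` values are what end up as claims in the issued verifiable credential.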
+
+## displayModel type
+| Property | Type | Description |
+| -- | -- | -- |
+|`locale`| string | the locale of this display |
+|`credential` | [displayCredential](#displaycredential-type) | the display properties of the verifiable credential |
+|`consent` | [displayConsent](#displayconsent-type) | supplemental data when the verifiable credential is issued |
+|`claims`| [displayClaims](#displayclaims-type) array | labels for the claims included in the verifiable credential |
+
+### displayCredential type
+| Property | Type | Description |
+| -- | -- | -- |
+|`title`| string | title of the credential |
+|`issuedBy` | string | the name of the issuer of the credential |
+|`backgroundColor` | number (hex)| background color of the credential in hex format, for example, #FFAABB |
+|`textColor`| number (hex)| text color of the credential in hex format, for example, #FFAABB |
+|`description`| string | supplemental text displayed alongside each credential |
+|`logo`| [displayCredentialLogo](#displaycredentiallogo-type) | the logo to use for the credential |
+
+### displayCredentialLogo type
+
+| Property | Type | Description |
+| -- | -- | -- |
+|`url`| string (url) | url of the logo (optional if image is specified) |
+|`description` | string | the description of the logo |
+|`image` | string | the base-64 encoded image (optional if url is specified) |
+
+### displayConsent type
+
+| Property | Type | Description |
+| -- | -- | -- |
+|`title`| string | title of the consent |
+|`instructions` | string | supplemental text to use when displaying consent |
+
+### displayClaims type
++
+| Property | Type | Description |
+| -- | -- | -- |
+|`label`| string | the label of the claim in display |
+|`claim`| string | the name of the claim to which the label applies |
+|`type`| string | the type of the claim |
+|`description` | string (optional) | the description of the claim |
+
+## Example display definition
+```json
+{
+ "locale": "en-US",
+ "card": {
+ "backgroundColor": "#FFA500",
+ "description": "This is your Verifiable Credential",
+ "issuedBy": "Contoso",
+ "textColor": "#FFFF00",
+ "title": "Verifiable Credential Expert",
+ "logo": {
+ "description": "Default VC logo",
+ "uri": "https://didcustomerplayground.blob.core.windows.net/public/VerifiedCredentialExpert_icon.png"
+ }
+ },
+ "consent": {
+ "instructions": "Please click accept to add this credentials",
+ "title": "Do you want to accept the verified credential expert dentity?"
+ },
+ "claims": [
+ {
+ "claim": "vc.credentialSubject.givenName",
+ "label": "Name",
+ "type": "String"
+ },
+ {
+ "claim": "vc.credentialSubject.familyName",
+ "label": "Surname",
+ "type": "String"
+ }
+ ]
+}
+```
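A hedged sketch of a client-side lint for a display definition like the one above; the `check_display` helper and its rules (`#RRGGBB` colors, claim/label pairs) are illustrative assumptions drawn from the tables in this article, not the service's validation:

```python
import re

# Hex colors in display definitions use the #RRGGBB form, per the tables above.
HEX_COLOR = re.compile(r"^#[0-9A-Fa-f]{6}$")

def check_display(display: dict) -> list[str]:
    """Lint a display definition: hex colors and claim/label pairs."""
    problems = []
    card = display.get("card", {})
    for field in ("backgroundColor", "textColor"):
        if not HEX_COLOR.match(card.get(field, "")):
            problems.append(f"{field} is not a #RRGGBB value")
    for claim in display.get("claims", []):
        if not claim.get("claim") or not claim.get("label"):
            problems.append("each claims entry needs 'claim' and 'label'")
    return problems

display = {
    "locale": "en-US",
    "card": {"backgroundColor": "#FFA500", "textColor": "#FFFF00"},
    "claims": [
        {"claim": "vc.credentialSubject.givenName",
         "label": "Name", "type": "String"},
    ],
}
print(check_display(display))  # []
```

Running such a check before saving the definition can catch the most common authoring mistakes, such as a non-hex color value or a claim entry missing its label.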
+## Next steps
+
+- Learn more on [how to customize your credentials](credential-design.md)
active-directory Verifiable Credentials Configure Issuer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-issuer.md
Previously updated : 05/03/2022 Last updated : 06/16/2022 # Customer intent: As an enterprise, we want to enable customers to manage information about themselves by using verifiable credentials.
In this article, you learn how to:
> [!div class="checklist"] >
-> - Set up Azure Blob Storage for storing your Azure AD Verifiable Credentials configuration files.
-> - Create and upload your Verifiable Credentials configuration files.
> - Create the verified credential expert card in Azure.
> - Gather credentials and environment details to set up the sample application.
> - Download the sample application code to your local computer.
The following diagram illustrates the Azure AD Verifiable Credentials architectu
- Android version 6.2108.5654 or later installed.
- iOS version 6.5.82 or later installed.
-## Create a storage account
-
-Azure Blob Storage is an object storage solution for the cloud. Azure AD Verifiable Credentials uses [Azure Blob Storage](../../storage/blobs/storage-blobs-introduction.md) to store the configuration files when the service is issuing verifiable credentials.
-
-Create and configure Blob Storage by following these steps:
-
-1. If you don't have an Azure Blob Storage account, [create one](../../storage/common/storage-account-create.md).
-1. After you've created the storage account, create a container. In the left menu for the storage account, scroll to the **Data storage** section, and select **Containers**.
-1. Select **+ Container**.
-1. Type a name for your new container. The container name must be lowercase, must start with a letter or number, and can include only letters, numbers, and the dash (-) character. For example, *vc-container*.
-1. Set **Public access level** to **Private** (no anonymous access).
-1. Select **Create**.
-
- ![Screenshot that shows how to create a container.](media/verifiable-credentials-configure-issuer/create-container.png)
-
-## Grant access to the container
-
-After you create your container, grant the signed-in user the correct role assignment so they can access the files in Blob Storage.
-
-1. From the list of containers, select **vc-container**.
-
-1. From the menu, select **Access Control (IAM)**.
-
-1. Select **+ Add,** and then select **Add role assignment**.
-
- ![Screenshot that shows how to add a new role assignment to the blob container.](media/verifiable-credentials-configure-issuer/add-role-assignment.png)
-
-1. In **Add role assignment**:
-
- 1. For the **Role**, select **Storage Blob Data Reader**.
-
- 1. For the **Assign access to**, select **User, group, or service
- principal**.
-
- 1. Then, search the account that you're using to perform these steps, and
- select it.
-
- ![Screenshot that shows how to set up the new role assignment.](media/verifiable-credentials-configure-issuer/add-role-assignment-container.png)
-
->[!IMPORTANT]
->By default, container creators get the owner role assigned. The owner role isn't enough on its own. Your account needs the storage blob data reader role. For more information, see [Use the Azure portal to assign an Azure role for access to blob and queue data](../../storage/blobs/assign-azure-role-data-access.md).
-
-### Upload the configuration files
-
-Azure AD Verifiable Credentials uses two JSON configuration files, the rules file and the display file.
-- The *rules* file describes important properties of verifiable credentials. In particular, it describes the claims that subjects (users) need to provide before a verifiable credential is issued for them.
-- The *display* file controls the branding of the credential and styling of the claims.
+## Create the verified credential expert card in Azure
-In this section, you upload sample rules and display files to your storage. For more information, see [How to customize your verifiable credentials](credential-design.md).
+In this step, you create the verified credential expert card by using Azure AD Verifiable Credentials. After you create the credential, your Azure AD tenant can issue it to users who initiate the process.
-To upload the configuration files, follow these steps:
+1. Using the [Azure portal](https://portal.azure.com/), search for *verifiable credentials*. Then select **Verifiable Credentials (Preview)**.
+1. After you [set up your tenant](verifiable-credentials-configure-tenant.md), the **Create credential** page should appear. Alternatively, you can select **Credentials** in the left-hand menu and select **+ Add a credential**.
+1. In **Create a new credential**, do the following:
-1. Copy the following JSON, and save the content into a file called *VerifiedCredentialExpertDisplay.json*.
+ 1. For **Credential name**, enter **VerifiedCredentialExpert**. This name is used in the portal to identify your verifiable credentials. It's included as part of the verifiable credentials contract.
+ 1. Copy the following JSON and paste it in the **Display definition** textbox.
```json {
- "default": {
"locale": "en-US", "card": { "title": "Verified Credential Expert", "issuedBy": "Microsoft",
- "backgroundColor": "#2E4053",
+ "backgroundColor": "#000000",
"textColor": "#ffffff", "logo": { "uri": "https://didcustomerplayground.blob.core.windows.net/public/VerifiedCredentialExpert_icon.png",
To upload the configuration files, follow these steps:
"title": "Do you want to get your Verified Credential?", "instructions": "Sign in with your account to get your card." },
- "claims": {
- "vc.credentialSubject.firstName": {
- "type": "String",
- "label": "First name"
+ "claims": [
+ {
+ "claim": "vc.credentialSubject.firstName",
+ "label": "First name",
+ "type": "String"
},
- "vc.credentialSubject.lastName": {
- "type": "String",
- "label": "Last name"
+ {
+ "claim": "vc.credentialSubject.lastName",
+ "label": "Last name",
+ "type": "String"
}
- }
- }
+ ]
} ```
-1. Copy the following JSON, and save the content into a file called *VerifiedCredentialExpertRules.json*. The following verifiable credential defines a couple of simple claims in it: `firstName` and `lastName`.
-
- ```json
+ 1. Copy the following JSON and paste it in the **Rules definition** textbox.
+ ```JSON
{ "attestations": {
- "idTokens": [
+ "idTokenHints": [
{
- "id": "https://self-issued.me",
- "mapping": {
- "firstName": { "claim": "$.given_name" },
- "lastName": { "claim": "$.family_name" }
- },
- "configuration": "https://self-issued.me",
- "client_id": "",
- "redirect_uri": ""
+ "mapping": [
+ {
+ "outputClaim": "firstName",
+ "required": true,
+ "inputClaim": "$.given_name",
+ "indexed": false
+ },
+ {
+ "outputClaim": "lastName",
+ "required": true,
+ "inputClaim": "$.family_name",
+ "indexed": false
+ }
+ ],
+ "required": false
} ]
- },
- "validityInterval": 2592001,
- "vc": {
- "type": [ "VerifiedCredentialExpert" ]
} } ```
-
-1. In the Azure portal, go to the Azure Blob Storage container that [you created](#create-a-storage-account).
-
-1. In the left menu, select **Containers** to show a list of blobs it contains. Then select the **vc-container** that you created earlier.
-
-1. Select **Upload** to open the upload pane and browse your local file system to find a file to upload. Select the **VerifiedCredentialExpertDisplay.json** and **VerifiedCredentialExpertRules.json** files. Then select **Upload** to upload the files to your container.
-
-## Create the verified credential expert card in Azure
-
-In this step, you create the verified credential expert card by using Azure AD Verifiable Credentials. After creating a verified credential, your Azure AD tenant can issue this credential to users who initiate the process.
-
-1. Using the [Azure portal](https://portal.azure.com/), search for *verifiable credentials*. Then select **Verifiable Credentials (Preview)**.
-1. After you [set up your tenant](verifiable-credentials-configure-tenant.md), the **Create a new credential** window should appear. If it's not opened, or you want to create more credentials, in the left menu, select **Credentials**. Then select **+ Credential**.
-1. In **Create a new credential**, do the following:
-
- 1. For **Name**, enter **VerifiedCredentialExpert**. This name is used in the portal to identify your verifiable credentials. It's included as part of the verifiable credentials contract.
-
- 1. For **Subscription**, select your Azure AD subscription where you created Blob Storage.
-
- 1. Under the **Display file**, select **Select display file**. In the Storage accounts section, select **vc-container**. Then select the **VerifiedCredentialExpertDisplay.json** file and click **Select**.
-
- 1. Under the **Rules file**, **Select rules file**. In the Storage accounts section, select the **vc-container**. Then select the **VerifiedCredentialExpertRules.json** file, and choose **Select**.
1. Select **Create**.
The following screenshot demonstrates how to create a new credential:
Now that you have a new credential, you're going to gather some information about your environment and the credential that you created. You use these pieces of information when you set up your sample application.
-1. In Verifiable Credentials, select **Credentials**. From the list of credentials, select **VerifiedCredentialExpert**, which you created earlier.
+1. In Verifiable Credentials, select **Issue credential** and switch to **Custom issue**.
- ![Screenshot that shows how to select the newly created verified credential.](media/verifiable-credentials-configure-issuer/select-verifiable-credential.png)
+ ![Screenshot that shows how to select the newly created verified credential.](media/verifiable-credentials-configure-issuer/issue-credential-custom-view.png)
-1. Copy the **Issue Credential URL**. This URL is the combination of the rules and display files. It's the URL that Authenticator evaluates before it displays to the user verifiable credential issuance requirements. Record it for later use.
+1. Copy the **authority**, which is the Decentralized Identifier, and record it for later.
-1. Copy the **Decentralized identifier**, and record it for later.
+1. Copy the **manifest** URL. It's the URL that Authenticator evaluates before it displays the verifiable credential issuance requirements to the user. Record it for later use.
-1. Copy your **Tenant ID**, and record it for later.
-
- ![Screenshot that shows how to copy the verifiable credentials required values.](media/verifiable-credentials-configure-issuer/copy-the-issue-credential-url.png)
+1. Copy your **Tenant ID**, and record it for later. The tenant ID is the GUID in the manifest URL, highlighted in red in the screenshot above.
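Because the tenant ID is the GUID segment of the manifest URL, you can also extract it programmatically. A small sketch (the URL shown uses the placeholder values from this article, not a real tenant):

```python
# Illustrative: pull the tenant ID (a GUID) out of the credential manifest URL.
import re

manifest = ("https://beta.did.msidentity.com/v1.0/"
            "12345678-0000-0000-0000-000000000000/"
            "verifiableCredential/contracts/VerifiedCredentialExpert")

match = re.search(r"[0-9a-fA-F]{8}(?:-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}", manifest)
tenant_id = match.group(0)
print(tenant_id)  # 12345678-0000-0000-0000-000000000000
```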
## Download the sample code
At this point, you should have all the required information that you need to set
Now you'll make modifications to the sample app's issuer code to update it with your verifiable credential URL. This step allows you to issue verifiable credentials by using your own tenant.
-1. Under the *active-directory-verifiable-credentials-dotnet-main* folder, open Visual Studio Code, and select the project inside the *1.asp-net-core-api-idtokenhint* folder.
+1. Under the *active-directory-verifiable-credentials-dotnet-main* folder, open Visual Studio Code, and select the project inside the *1-asp-net-core-api-idtokenhint* folder.
1. Under the project root folder, open the *appsettings.json* file. This file contains information about your Azure AD Verifiable Credentials. Update the following properties with the information that you recorded in earlier steps: 1. **Tenant ID:** your tenant ID 1. **Client ID:** your client ID 1. **Client Secret**: your client secret
- 1. **IssuerAuthority**: Your decentralized identifier
- 1. **VerifierAuthority**: Your decentralized identifier
- 1. **Credential Manifest**: Your issue credential URL
+ 1. **IssuerAuthority**: Your Decentralized Identifier
+ 1. **VerifierAuthority**: Your Decentralized Identifier
+ 1. **Credential Manifest**: Your manifest URL
1. Save the *appsettings.json* file.
The following JSON demonstrates a complete *appsettings.json* file:
{ "AppSettings": { "Endpoint": "https://beta.did.msidentity.com/v1.0/{0}/verifiablecredentials/request",
- "VCServiceScope": "bbb94529-53a3-4be5-a069-7eaf2712b826/.default",
+ "VCServiceScope": "3db474b9-6a0c-4840-96ac-1fceb342124f/.default",
"Instance": "https://login.microsoftonline.com/{0}",- "TenantId": "12345678-0000-0000-0000-000000000000", "ClientId": "33333333-0000-0000-0000-000000000000", "ClientSecret": "123456789012345678901234567890", "CertificateName": "[Or instead of client secret: Enter here the name of a certificate (from the user cert store) as registered with your application]",
- "IssuerAuthority": "did:ion:EiCcn9dz_OC6HY60AYBXF2Dd8y5_2UYIx0Ni6QIwRarjzg:eyJkZWx0YSI6eyJwYXRjaGVzIjpbeyJhY3Rpb24iOiJyZXBsYWNlIiwiZG9jdW1lbnQiOnsicHVibGljS2V5cyI6W3siaWQiOiJzaWdfN2U4MmYzNjUiLCJwdWJsaWNLZXlKd2siOnsiY3J2Ijoic2VjcDI1NmsxIiwia3R5IjoiRUMiLCJ4IjoiaUo0REljV09aWVA...",
- "VerifierAuthority": " did:ion:EiCcn9dz_OC6HY60AYBXF2Dd8y5_2UYIx0Ni6QIwRarjzg:eyJkZWx0YSI6eyJwYXRjaGVzIjpbeyJhY3Rpb24iOiJyZXBsYWNlIiwiZG9jdW1lbnQiOnsicHVibGljS2V5cyI6W3siaWQiOiJzaWdfN2U4MmYzNjUiLCJwdWJsaWNLZXlKd2siOnsiY3J2Ijoic2VjcDI1NmsxIiwia3R5IjoiRUMiLCJ4IjoiaUo0REljV09aWVA...",
+ "IssuerAuthority": "did:web:example.com...",
+ "VerifierAuthority": "did:web:example.com...",
"CredentialManifest": "https://beta.did.msidentity.com/v1.0/12345678-0000-0000-0000-000000000000/verifiableCredential/contracts/VerifiedCredentialExpert" } }
Now you're ready to issue your first verified credential expert card by running
![Screenshot that shows how to scan the Q R code.](media/verifiable-credentials-configure-issuer/scan-issuer-qr-code.png)
-1. At this time, you will see a message warning that this app or website might be risky. Select **Advanced**.
+1. At this time, you'll see a message warning that this app or website might be risky. Select **Advanced**.
![Screenshot that shows how to respond to the warning message.](media/verifiable-credentials-configure-issuer/at-risk.png)
Now you're ready to issue your first verified credential expert card by running
![Screenshot that shows how to proceed with the risky warning.](media/verifiable-credentials-configure-issuer/proceed-anyway.png)
-1. You will be prompted to enter a PIN code that is displayed in the screen where you scanned the QR code. The PIN adds an extra layer of protection to the issuance. The PIN code is randomly generated every time an issuance QR code is displayed.
+1. You'll be prompted to enter a PIN code that is displayed in the screen where you scanned the QR code. The PIN adds an extra layer of protection to the issuance. The PIN code is randomly generated every time an issuance QR code is displayed.
![Screenshot that shows how to type the pin code.](media/verifiable-credentials-configure-issuer/enter-verification-code.png)
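For illustration only, a random numeric PIN of the kind described above could be generated like this (the service's actual implementation isn't shown in this article):

```python
# Illustrative: generate a fresh random numeric PIN, as is done for
# every issuance QR code that's displayed.
import secrets

def issuance_pin(length=4):
    return "".join(secrets.choice("0123456789") for _ in range(length))

pin = issuance_pin()
print(pin)
```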
-1. After entering the PIN number, the **Add a credential** screen appears. At the top of the screen, you see a **Not verified** message (in red). This warning is related to the domain validation warning mentioned earlier.
+1. After you enter the PIN, the **Add a credential** screen appears. At the top of the screen, you see a **Not verified** message (in red). This warning is related to the domain validation warning mentioned earlier.
1. Select **Add** to accept your new verifiable credential.
active-directory Verifiable Credentials Configure Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant.md
Previously updated : 05/06/2022 Last updated : 06/16/2022 # Customer intent: As an enterprise, we want to enable customers to manage information about themselves by using verifiable credentials.
Specifically, you learn how to:
> - Set up the Verifiable Credentials service. > - Register an application in Azure AD. - The following diagram illustrates the Azure AD Verifiable Credentials architecture and the component you configure. ![Diagram that illustrates the Azure AD Verifiable Credentials architecture.](media/verifiable-credentials-configure-tenant/verifiable-credentials-architecture.png)
-See a [video walkthrough](https://www.youtube.com/watch?v=8jqjHjQo-3c) going over the setup of the Azure AD Verifiable Credential service.
- ## Prerequisites - You need an Azure tenant with an active subscription. If you don't have Azure subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
See a [video walkthrough](https://www.youtube.com/watch?v=8jqjHjQo-3c) going ove
## Create a key vault
-[Azure Key Vault](../../key-vault/general/basic-concepts.md) is a cloud service that enables the secure storage and access of secrets and keys. Your Verifiable
+[Azure Key Vault](../../key-vault/general/basic-concepts.md) is a cloud service that enables the secure storage and access of secrets and keys. The Verifiable
Credentials service stores public and private keys in Azure Key Vault. These keys are used to sign and verify credentials.
-If you don't have an instance of Azure Key Vault, follow these steps to create a key vault by using the Azure portal.
+If you don't have an Azure Key Vault instance available, follow [these steps](../../key-vault/general/quick-create-portal.md) to create a key vault using the Azure portal.
>[!NOTE]
->By default, the account that creates the key vault is the only one with access. The Verifiable Credentials service needs access to the key vault. You must configure the key vault with an access policy that allows the account used during configuration to create and delete keys. The account used during configuration also requires permission to sign to create the domain binding for Verifiable Credentials. If you use the same account while testing, modify the default policy to grant the account sign permission, in addition to the default permissions granted to vault creators.
+>By default, the account that creates a vault is the only one with access. The Verifiable Credentials service needs access to the key vault. You must configure the key vault with an access policy that allows the account used during configuration to create and delete keys. The account used during configuration also requires permission to sign to create the domain binding for Verifiable Credentials. If you use the same account while testing, modify the default policy to grant the account sign permission, in addition to the default permissions granted to vault creators.
### Set access policies for the key vault
+A Key Vault [access policy](../../key-vault/general/assign-access-policy.md) defines whether a specified security principal can perform operations on Key Vault secrets and keys. Set access policies in your key vault for both the Azure AD Verifiable Credentials service administrator account, and for the Request Service API principal that you created.
After you create your key vault, Verifiable Credentials generates a set of keys used to provide message security. These keys are stored in Key Vault. You use a key set for signing, updating, and recovering verifiable credentials.
-A Key Vault [access policy](../../key-vault/general/assign-access-policy.md) defines whether a specified security principal can perform operations on Key Vault secrets and keys. Set access policies in your key vault for both the administrator account of the Azure AD Verifiable Credentials service, and for the Request Service API principal that you created.
- ### Set access policies for the Verifiable Credentials Admin user 1. In the [Azure portal](https://portal.azure.com/), go to the key vault you use for this tutorial.
A Key Vault [access policy](../../key-vault/general/assign-access-policy.md) def
1. To save the changes, select **Save**.
+### Set access policies for the Verifiable Credentials Service Request service principal
+
+The Verifiable Credentials Service Request is the Request Service API, and it needs access to Key Vault in order to sign issuance and presentation requests.
+
+1. Select **+ Add Access Policy** and select the service principal **Verifiable Credentials Service Request** with AppId **3db474b9-6a0c-4840-96ac-1fceb342124f**.
+
+1. For **Key permissions**, select permissions **Get** and **Sign**.
+
+ ![Screenshot that shows how to grant key vault access to a security principal.](media/verifiable-credentials-configure-tenant/set-key-vault-sp-access-policy.png)
+
+1. To save the changes, select **Save**.
+ ## Set up Verifiable Credentials To set up Azure AD Verifiable Credentials, follow these steps:
To set up Azure AD Verifiable Credentials, follow these steps:
>[!IMPORTANT] > The domain can't be a redirect. Otherwise, the DID and domain can't be linked. Make sure to use HTTPS for the domain. For example: `https://contoso.com`.
- 1. **Key vault**: Enter the name of the key vault that you created earlier.
+ 1. **Key vault**: Select the key vault that you created earlier.
+
+ 1. Under **Advanced**, you may choose the **trust system** that you want to use for your tenant. You can choose either **Web** or **ION**. Web means your tenant uses [did:web](https://w3c-ccg.github.io/did-method-web/) as the DID method, and ION means it uses [did:ion](https://identity.foundation/ion/).
+
+ >[!IMPORTANT]
+ > The only way to change the trust system is to opt out of verifiable credentials and redo the onboarding.
+
-1. Select **Save and create credential**.
+1. Select **Save and get started**.
![Screenshots that shows how to set up Verifiable Credentials.](media/verifiable-credentials-configure-tenant/verifiable-credentials-getting-started.png) ## Register an application in Azure AD
-Azure AD Verifiable Credentials Request Service needs to be able to get access tokens to issue and verify. To get access tokens, register a web application and grant API permission for the API Verifiable Credential Request Service that you set up in the previous step.
+Azure AD Verifiable Credentials Service Request needs to get access tokens to issue and verify. To get access tokens, register a web application and grant API permissions for the Verifiable Credentials Service Request service that you set up in the previous step.
1. Sign in to the [Azure portal](https://portal.azure.com/) with your administrative account.
-1. If you have access to multiple tenants, select the **Directory + subscription** :::image type="icon" source="media/verifiable-credentials-configure-tenant/portal-directory-subscription-filter.png" border="false"::: icon. Then, search for and select your **Azure Active Directory**.
+1. If you have access to multiple tenants, select the **Directory + subscription**. Then, search for and select your **Azure Active Directory**.
1. Under **Manage**, select **App registrations** > **New registration**.
Azure AD Verifiable Credentials Request Service needs to be able to get access t
### Grant permissions to get access tokens
-In this step, you grant permissions to the Verifiable Credential Request Service principal.
+In this step, you grant permissions to the Verifiable Credentials Service Request Service principal.
To add the required permissions, follow these steps:
To add the required permissions, follow these steps:
1. Select **APIs my organization uses**.
-1. Search for the service principal that you created earlier, **Verifiable Credential Request Service**, and select it.
+1. Search for the service principal that you created earlier, **Verifiable Credentials Service Request**, and select it.
![Screenshot that shows how to select the service principal.](media/verifiable-credentials-configure-tenant/add-app-api-permissions-select-service-principal.png)
To add the required permissions, follow these steps:
1. Select **Grant admin consent for \<your tenant name\>**.
+## Service endpoint configuration
+
+1. In the Azure portal, navigate to the Verifiable credentials page.
+1. Select **Registration**.
+1. Notice that there are two sections:
+ 1. Website ID registration
+ 1. Domain verification
+1. Select each section and download the JSON file under it.
+1. Create a website that you can use to distribute the files. If you specified **https://contoso.com** as your domain, the URLs for each of the files would look like this:
+ - https://contoso.com/.well-known/did.json
+ - https://contoso.com/.well-known/did-configuration.json
+
+Once you've successfully completed the verification steps, you're ready to continue to the next tutorial.
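The two linked-domain files are always served from the same fixed `.well-known` paths, so their URLs can be derived directly from your domain. A quick sketch:

```python
# Illustrative: build the well-known URLs for the DID document and the
# DID configuration file from a verified domain.
domain = "https://contoso.com"
urls = [f"{domain}/.well-known/{name}"
        for name in ("did.json", "did-configuration.json")]
print(urls)
```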
## Next steps
active-directory Verifiable Credentials Configure Verifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-verifier.md
Previously updated : 05/18/2022 Last updated : 06/16/2022 # Customer intent: As an enterprise, we want to enable customers to manage information about themselves by using verifiable credentials.
Create a client secret for the registered application you created. The sample ap
1. Under **Expires**, select a duration for which the secret is valid (for example, six months). Then select **Add**.
- 1. Record the secret's **Value**. You'll use this value for configuration in a later step. The secret's value will not be displayed again, and is not retrievable by any other means, so you should record it as soon as it is visible.
+ 1. Record the secret's **Value**. This value is needed in a later step. The secret's value won't be displayed again, and isn't retrievable by **any** other means, so you should record it once it's visible.
At this point, you should have all the required information that you need to set up your sample application.
Now make modifications to the sample app's issuer code to update it with your ve
1. In the *active-directory-verifiable-credentials-dotnet-main* directory, open **Visual Studio Code**. Select the project inside the *1-asp-net-core-api-idtokenhint* directory.
-1. Under the project root folder, open the *appsettings.json* file. This file contains information about your credentials in Azure AD Verifiable Credentials. Update the following properties with the information that you have previously recorded during the earlier steps.
+1. Under the project root folder, open the *appsettings.json* file. This file contains information about your credentials in Azure AD Verifiable Credentials. Update the following properties with the information that you collected during earlier steps.
1. **Tenant ID**: Your tenant ID 1. **Client ID**: Your client ID
active-directory Verifiable Credentials Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-faq.md
Previously updated : 04/28/2022 Last updated : 06/02/2022 # Customer intent: As a developer I am looking for information on how to enable my users to control their own information
We implement [the Decentralized Identity Foundation's Well Known DID Configurati
### Why does the Verifiable Credential preview use ION as its DID method, and therefore Bitcoin to provide decentralized public key infrastructure?
-ION is a decentralized, permissionless, scalable decentralized identifier Layer 2 network that runs atop Bitcoin. It achieves scalability without including a special crypto asset token, trusted validators, or centralized consensus mechanisms. We use Bitcoin for the base Layer 1 substrate because of the strength of the decentralized network to provide a high degree of immutability for a chronological event record system.
+Microsoft now offers two different trust systems, Web and ION. You may choose to use either one of them during tenant onboarding. ION is a decentralized, permissionless, scalable decentralized identifier Layer 2 network that runs atop Bitcoin. It achieves scalability without including a special crypto asset token, trusted validators, or centralized consensus mechanisms. We use Bitcoin for the base Layer 1 substrate because of the strength of the decentralized network to provide a high degree of immutability for a chronological event record system.
## Using the preview
active-directory Verifiable Credentials Standards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-standards.md
+
+ Title: Current and upcoming standards
+description: This article outlines current and upcoming standards
++++++ Last updated : 06/16/2021+
+# Customer intent: As a developer I am looking for information around the open standards supported by Microsoft Entra verified ID
++
+# Entra Verified ID supported standards
++
+This page outlines currently supported open standards for Microsoft Entra Verified ID. The full document outlining how to build an implementation that interoperates with Microsoft is [JWT VC Presentation Profile](https://identity.foundation/jwt-vc-presentation-profile/).
+
+## Standard bodies
+
+- [OpenID Foundation (OIDF)](https://openid.net/foundation/)
+- [Decentralized Identity Foundation (DIF)](https://identity.foundation/)
+- [World Wide Web Consortium (W3C)](https://www.w3.org/)
+- [Internet Engineering Task Force (IETF)](https://www.ietf.org/)
+
+## Supported standards
+
+Entra Verified ID supports the following open standards:
+
+| Component in a Tech Stack | Open Standard | Standard Body |
+|:--|:--|:--|
+| Data Model | [Verifiable Credentials Data Model v1.1](https://www.w3.org/TR/vc-data-model) | W3C VC WG |
+| Credential Format | [JSON Web Token VC (JWT-VC)](https://www.w3.org/TR/vc-data-model/#json-web-token) - encoded as JSON and signed as a JWS ([RFC7515](https://datatracker.ietf.org/doc/html/rfc7515)) | W3C VC WG /IETF |
+| Entity Identifier (Issuer, Verifier) | [did:web](https://github.com/w3c-ccg/did-method-web) | W3C CCG |
+| Entity Identifier (Issuer, Verifier, User) | [did:ion](https://github.com/decentralized-identity/ion)| DIF |
+| User Authentication | [Self-Issued OpenID Provider v2](https://openid.net/specs/openid-connect-self-issued-v2-1_0.html)| OIDF |
+| Presentation | [OpenID for Verifiable Credentials](https://openid.net/specs/openid-connect-4-verifiable-presentations-1_0.html) | OIDF|
+| Query language | [Presentation Exchange v1.0](https://identity.foundation/presentation-exchange/spec/v1.0.0/)| DIF |
+| Trust in DID Owner | [Well Known DID Configuration](https://identity.foundation/.well-known/resources/did-configuration)| DIF |
+| Revocation |[Verifiable Credential Status List 2021](https://github.com/w3c-ccg/vc-status-list-2021/tree/343b8b59cddba4525e1ef355356ae760fc75904e)| W3C CCG |
+
+## Supported algorithms
+
+Entra Verified ID supports the following key types for JWS signature verification:
+
+|Key Type|JWT Algorithm|
+|--|-|
+|secp256k1|ES256K|
+|Ed25519|EdDSA|
+
+## Next steps
+
+- [Get started with verifiable credentials](verifiable-credentials-configure-tenant.md)
advisor Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Advisor description: Sample Azure Resource Graph queries for Azure Advisor showing use of resource types and tables to access Azure Advisor related resources and properties. Previously updated : 03/08/2022 Last updated : 06/16/2022
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni.md
Learn more about networking in AKS in the following articles:
[services]: https://kubernetes.io/docs/concepts/services-networking/service/ [portal]: https://portal.azure.com [cni-networking]: https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md
-[kubenet]: https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/#kubenet
+[kubenet]: concepts-network.md#kubenet-basic-networking
+ <!-- LINKS - Internal --> [az-aks-create]: /cli/azure/aks#az_aks_create
aks Configure Kubenet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet.md
This article shows you how to use *kubenet* networking to create and use a virtu
## Before you begin
+### [Azure CLI](#tab/azure-cli)
+ You need the Azure CLI version 2.0.65 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+### [Azure PowerShell](#tab/azure-powershell)
+
+You need the Azure PowerShell version 7.5.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][install-azure-powershell].
+++ ## Overview of kubenet networking with your own subnet In many environments, you have defined virtual networks and subnets with allocated IP address ranges. These virtual network resources are used to support multiple services and applications. To provide network connectivity, AKS clusters can use *kubenet* (basic networking) or Azure CNI (*advanced networking*).
With *kubenet*, only the nodes receive an IP address in the virtual network subn
![Kubenet network model with an AKS cluster](media/use-kubenet/kubenet-overview.png)
-Azure supports a maximum of 400 routes in a UDR, so you can't have an AKS cluster larger than 400 nodes. AKS [Virtual Nodes][virtual-nodes] and Azure Network Policies aren't supported with *kubenet*. You can use [Calico Network Policies][calico-network-policies], as they are supported with kubenet.
+Azure supports a maximum of 400 routes in a UDR, so you can't have an AKS cluster larger than 400 nodes. AKS [Virtual Nodes][virtual-nodes] and Azure Network Policies aren't supported with *kubenet*. You can use [Calico Network Policies][calico-network-policies], as they are supported with kubenet.
With *Azure CNI*, each pod receives an IP address in the IP subnet, and can directly communicate with other pods and services. Your clusters can be as large as the IP address range you specify. However, the IP address range must be planned in advance, and all of the IP addresses are consumed by the AKS nodes based on the maximum number of pods that they can support. Advanced network features and scenarios such as [Virtual Nodes][virtual-nodes] or Network Policies (either Azure or Calico) are supported with *Azure CNI*.
Use *kubenet* when:
- You have limited IP address space. - Most of the pod communication is within the cluster.-- You don't need advanced AKS features such as virtual nodes or Azure Network Policy. Use [Calico network policies][calico-network-policies].
+- You don't need advanced AKS features such as virtual nodes or Azure Network Policy. Use [Calico network policies][calico-network-policies].
Use *Azure CNI* when: - You have available IP address space. - Most of the pod communication is to resources outside of the cluster. - You don't want to manage user defined routes for pod connectivity.-- You need AKS advanced features such as virtual nodes or Azure Network Policy. Use [Calico network policies][calico-network-policies].
+- You need AKS advanced features such as virtual nodes or Azure Network Policy. Use [Calico network policies][calico-network-policies].
For more information to help you decide which network model to use, see [Compare network models and their support scope][network-comparisons]. ## Create a virtual network and subnet
+### [Azure CLI](#tab/azure-cli)
+ To get started with using *kubenet* and your own virtual network subnet, first create a resource group using the [az group create][az-group-create] command. The following example creates a resource group named *myResourceGroup* in the *eastus* location: ```azurecli-interactive az group create --name myResourceGroup --location eastus ```
-If you don't have an existing virtual network and subnet to use, create these network resources using the [az network vnet create][az-network-vnet-create] command. In the following example, the virtual network is named *myVnet* with the address prefix of *192.168.0.0/16*. A subnet is created named *myAKSSubnet* with the address prefix *192.168.1.0/24*.
+If you don't have an existing virtual network and subnet to use, create these network resources using the [az network vnet create][az-network-vnet-create] command. In the following example, the virtual network is named *myAKSVnet* with the address prefix of *192.168.0.0/16*. A subnet is created named *myAKSSubnet* with the address prefix *192.168.1.0/24*.
```azurecli-interactive az network vnet create \
az network vnet create \
--subnet-prefix 192.168.1.0/24 ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+To get started with using *kubenet* and your own virtual network subnet, first create a resource group using the [New-AzResourceGroup][new-azresourcegroup] cmdlet. The following example creates a resource group named *myResourceGroup* in the *eastus* location:
+
+```azurepowershell-interactive
+New-AzResourceGroup -Name myResourceGroup -Location eastus
+```
+
+If you don't have an existing virtual network and subnet to use, create these network resources using the [New-AzVirtualNetwork][new-azvirtualnetwork] and [New-AzVirtualNetworkSubnetConfig][new-azvirtualnetworksubnetconfig] cmdlets. In the following example, the virtual network is named *myAKSVnet* with the address prefix of *192.168.0.0/16*. A subnet is created named *myAKSSubnet* with the address prefix *192.168.1.0/24*.
+
+```azurepowershell-interactive
+$myAKSSubnet = New-AzVirtualNetworkSubnetConfig -Name myAKSSubnet -AddressPrefix 192.168.1.0/24
+$params = @{
+ ResourceGroupName = 'myResourceGroup'
+ Location = 'eastus'
+ Name = 'myAKSVnet'
+ AddressPrefix = '192.168.0.0/16'
+ Subnet = $myAKSSubnet
+}
+New-AzVirtualNetwork @params
+```
+++ ## Create a service principal and assign permissions
+### [Azure CLI](#tab/azure-cli)
+ To allow an AKS cluster to interact with other Azure resources, an Azure Active Directory service principal is used. The service principal needs to have permissions to manage the virtual network and subnet that the AKS nodes use. To create a service principal, use the [az ad sp create-for-rbac][az-ad-sp-create-for-rbac] command: ```azurecli-interactive
The following example output shows the application ID and password for your serv
{ "appId": "476b3636-5eda-4c0e-9751-849e70b5cfad", "displayName": "azure-cli-2019-01-09-22-29-24",
- "name": "http://azure-cli-2019-01-09-22-29-24",
- "password": "a1024cd7-af7b-469f-8fd7-b293ecbb174e",
- "tenant": "72f998bf-85f1-41cf-92ab-2e7cd014db46"
+ "password": "tzG8Q~DRYSJtMPhajpHfYaG~.4Yp2VonoZfU9bjy",
+ "tenant": "00000000-0000-0000-0000-000000000000"
} ```
Now assign the service principal for your AKS cluster *Network Contributor* perm
az role assignment create --assignee <appId> --scope $VNET_ID --role "Network Contributor" ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+To allow an AKS cluster to interact with other Azure resources, an Azure Active Directory service principal is used. The service principal needs to have permissions to manage the virtual network and subnet that the AKS nodes use. To create a service principal, use the [New-AzADServicePrincipal][new-azadserviceprincipal] command:
+
+```azurepowershell-interactive
+$servicePrincipal = New-AzADServicePrincipal
+```
+
+The following example output shows the application ID and password for your service principal. These values are used in additional steps to assign a role to the service principal and then create the AKS cluster:
+
+```azurepowershell-interactive
+$servicePrincipal.AppId
+$servicePrincipal.PasswordCredentials[0].SecretText
+```
+
+```output
+476b3636-5eda-4c0e-9751-849e70b5cfad
+tzG8Q~DRYSJtMPhajpHfYaG~.4Yp2VonoZfU9bjy
+```
+
+To assign the correct delegations in the remaining steps, use the [Get-AzVirtualNetwork][get-azvirtualnetwork] command to get the required resource IDs. These resource IDs are stored as variables and referenced in the remaining steps:
+
+> [!NOTE]
+> If you're using the Azure CLI, you can skip this step. With an ARM template or other clients, you need to do the role assignment below.
+
+```azurepowershell-interactive
+$myAKSVnet = Get-AzVirtualNetwork -ResourceGroupName myResourceGroup -Name myAKSVnet
+$VNET_ID = $myAKSVnet.Id
+$SUBNET_ID = $myAKSVnet.Subnets[0].Id
+```
+
+Now assign the service principal for your AKS cluster *Network Contributor* permissions on the virtual network using the [New-AzRoleAssignment][new-azroleassignment] cmdlet. Provide your application ID as shown in the output from the previous command to create the service principal:
+
+```azurepowershell-interactive
+New-AzRoleAssignment -ApplicationId $servicePrincipal.AppId -Scope $VNET_ID -RoleDefinitionName "Network Contributor"
+```
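+
+You can optionally verify the assignment (a verification sketch; assumes the `$VNET_ID` variable from the previous step):
+
+```azurepowershell-interactive
+# List role assignments scoped to the virtual network
+Get-AzRoleAssignment -Scope $VNET_ID -RoleDefinitionName 'Network Contributor'
+```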
+++

## Create an AKS cluster in the virtual network
+### [Azure CLI](#tab/azure-cli)
+ You've now created a virtual network and subnet, and created and assigned permissions for a service principal to use those network resources. Now create an AKS cluster in your virtual network and subnet using the [az aks create][az-aks-create] command. Define your own service principal *\<appId>* and *\<password>*, as shown in the output from the earlier command that created the service principal. The following IP address ranges are also defined as part of the cluster create process:
az aks create \
--client-secret <password>
```
+### [Azure PowerShell](#tab/azure-powershell)
+
+You've now created a virtual network and subnet, and created and assigned permissions for a service principal to use those network resources. Now create an AKS cluster in your virtual network and subnet using the [New-AzAksCluster][new-azakscluster] cmdlet. The service principal's application ID and password from the earlier steps are used to build the credential passed to the cluster.
+
+The following IP address ranges are also defined as part of the cluster create process:
+
+* The *-ServiceCidr* is used to assign internal services in the AKS cluster an IP address. This IP address range should be an address space that isn't in use elsewhere in your network environment, including any on-premises network ranges if you connect, or plan to connect, your Azure virtual networks using Express Route or a Site-to-Site VPN connection.
+
+* The *-DnsServiceIP* address should be the *.10* address of your service IP address range.
+
+* The *-PodCidr* should be a large address space that isn't in use elsewhere in your network environment, including any on-premises network ranges if you connect, or plan to connect, your Azure virtual networks using Express Route or a Site-to-Site VPN connection.
+ * This address range must be large enough to accommodate the number of nodes that you expect to scale up to. You can't change this address range after the cluster is deployed, so choose a range large enough to cover additional nodes.
+ * The pod IP address range is used to assign a */24* address space to each node in the cluster. In the following example, the *-PodCidr* of *10.244.0.0/16* assigns the first node *10.244.0.0/24*, the second node *10.244.1.0/24*, and the third node *10.244.2.0/24*.
+ * As the cluster scales or upgrades, the Azure platform continues to assign a pod IP address range to each new node.
+
+* The *-DockerBridgeCidr* lets the AKS nodes communicate with the underlying management platform. This IP address must not be within the virtual network IP address range of your cluster, and shouldn't overlap with other address ranges in use on your network.
+
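+The */24*-per-node allocation described above can be illustrated with plain PowerShell (illustration only; no Azure calls, and the node numbering is an assumption):
+
+```azurepowershell-interactive
+# Each node receives the next /24 out of the 10.244.0.0/16 PodCidr
+0..2 | ForEach-Object { "Node $($_): 10.244.$($_).0/24" }
+```
+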
+```azurepowershell-interactive
+# Create a PSCredential object using the service principal's ID and secret
+$password = ConvertTo-SecureString -String $servicePrincipal.PasswordCredentials[0].SecretText -AsPlainText -Force
+$credential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $servicePrincipal.AppId, $password
+
+$params = @{
+ ResourceGroupName = 'myResourceGroup'
+ Name = 'myAKSCluster'
+ NodeCount = 3
+ NetworkPlugin = 'kubenet'
+ ServiceCidr = '10.0.0.0/16'
+ DnsServiceIP = '10.0.0.10'
+ PodCidr = '10.244.0.0/16'
+ DockerBridgeCidr = '172.17.0.1/16'
+ NodeVnetSubnetID = $SUBNET_ID
+ ServicePrincipalIdAndSecret = $credential
+}
+New-AzAksCluster @params
+```
+
+> [!Note]
+> If you want to enable your AKS cluster to use a [Calico network policy][calico-network-policies], you can use the following command.
+
+```azurepowershell-interactive
+$params = @{
+ ResourceGroupName = 'myResourceGroup'
+ Name = 'myAKSCluster'
+ NodeCount = 3
+ NetworkPlugin = 'kubenet'
+ NetworkPolicy = 'calico'
+ ServiceCidr = '10.0.0.0/16'
+ DnsServiceIP = '10.0.0.10'
+ PodCidr = '10.244.0.0/16'
+ DockerBridgeCidr = '172.17.0.1/16'
+ NodeVnetSubnetID = $SUBNET_ID
+ ServicePrincipalIdAndSecret = $credential
+}
+New-AzAksCluster @params
+```
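+
+After the deployment completes, you can optionally confirm that the cluster picked up the kubenet settings (a verification sketch; the property names are assumptions about the Az.Aks output object):
+
+```azurepowershell-interactive
+$cluster = Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster
+$cluster.NetworkProfile.NetworkPlugin   # expect: kubenet
+$cluster.NetworkProfile.PodCidr         # expect: 10.244.0.0/16
+```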
+++

When you create an AKS cluster, a network security group and route table are automatically created. These network resources are managed by the AKS control plane. The network security group is automatically associated with the virtual NICs on your nodes. The route table is automatically associated with the virtual network subnet. Network security group rules and route tables are automatically updated as you create and expose services.

## Bring your own subnet and route table with kubenet
Limitations:
After you create a custom route table and associate it to your subnet in your virtual network, you can create a new AKS cluster that uses your route table. You need to use the subnet ID for where you plan to deploy your AKS cluster. This subnet also must be associated with your custom route table.
+### [Azure CLI](#tab/azure-cli)
+ ```azurecli-interactive # Find your subnet ID az network vnet subnet list --resource-group
az network vnet subnet list --resource-group
```

```azurecli-interactive
-# Create a kubernetes cluster with with a custom subnet preconfigured with a route table
-az aks create -g MyResourceGroup -n MyManagedCluster --vnet-subnet-id MySubnetID
+# Create a kubernetes cluster with a custom subnet preconfigured with a route table
+az aks create -g MyResourceGroup -n MyManagedCluster --vnet-subnet-id <MySubnetID>
```
+### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+# Find your subnet ID
+Get-AzVirtualNetwork -ResourceGroupName MyResourceGroup -Name myAKSVnet |
+ Select-Object -ExpandProperty subnets |
+ Select-Object -Property Id
+```
+
+```azurepowershell-interactive
+# Create a kubernetes cluster with a custom subnet preconfigured with a route table
+New-AzAksCluster -ResourceGroupName MyResourceGroup -Name MyManagedCluster -NodeVnetSubnetID <MySubnetID>
+```
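+
+You can confirm that the subnet is associated with your custom route table (a verification sketch; the virtual network name is an assumption based on the earlier examples):
+
+```azurepowershell-interactive
+# RouteTable is empty if no custom route table is associated with the subnet
+(Get-AzVirtualNetwork -ResourceGroupName MyResourceGroup -Name myAKSVnet).Subnets[0].RouteTable.Id
+```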
+++

## Next steps

With an AKS cluster deployed into your existing virtual network subnet, you can now use the cluster as normal. Get started with [creating new apps using Helm][develop-helm] or [deploy existing apps using Helm][use-helm].
With an AKS cluster deployed into your existing virtual network subnet, you can
<!-- LINKS - Internal --> [install-azure-cli]: /cli/azure/install-azure-cli
+[install-azure-powershell]: /powershell/azure/install-az-ps
[aks-network-concepts]: concepts-network.md [aks-network-nsg]: concepts-network.md#network-security-groups [az-group-create]: /cli/azure/group#az_group_create
+[new-azresourcegroup]: /powershell/module/az.resources/new-azresourcegroup
[az-network-vnet-create]: /cli/azure/network/vnet#az_network_vnet_create
+[new-azvirtualnetwork]: /powershell/module/az.network/new-azvirtualnetwork
+[new-azvirtualnetworksubnetconfig]: /powershell/module/az.network/new-azvirtualnetworksubnetconfig
[az-ad-sp-create-for-rbac]: /cli/azure/ad/sp#az_ad_sp_create_for_rbac
+[new-azadserviceprincipal]: /powershell/module/az.resources/new-azadserviceprincipal
[az-network-vnet-show]: /cli/azure/network/vnet#az_network_vnet_show
+[get-azvirtualnetwork]: /powershell/module/az.network/get-azvirtualnetwork
[az-network-vnet-subnet-show]: /cli/azure/network/vnet/subnet#az_network_vnet_subnet_show [az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create
+[new-azroleassignment]: /powershell/module/az.resources/new-azroleassignment
[az-aks-create]: /cli/azure/aks#az_aks_create
+[new-azakscluster]: /powershell/module/az.aks/new-azakscluster
[byo-subnet-route-table]: #bring-your-own-subnet-and-route-table-with-kubenet [develop-helm]: quickstart-helm.md [use-helm]: kubernetes-helm.md
aks Csi Secrets Store Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-driver.md
The Azure Key Vault Provider for Secrets Store CSI Driver allows for the integra
- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. - Before you start, ensure that your version of the Azure CLI is 2.30.0 or later. If it's an earlier version, [install the latest version](/cli/azure/install-azure-cli).
+- If you're restricting ingress to the cluster, ensure ports 9808 and 8095 are open.
### Supported Kubernetes versions
aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/faq.md
Title: Frequently asked questions for Azure Kubernetes Service (AKS) description: Find answers to some of the common questions about Azure Kubernetes Service (AKS). Previously updated : 05/23/2021 Last updated : 06/17/2022
Azure automatically applies security patches to the Linux nodes in your cluster
- By upgrading your AKS cluster. The cluster upgrades [cordon and drain nodes][cordon-drain] automatically and then bring a new node online with the latest Ubuntu image and a new patch version or a minor Kubernetes version. For more information, see [Upgrade an AKS cluster][aks-upgrade]. - By using [node image upgrade](node-image-upgrade.md).
+## What is the size limit on a container image in AKS?
+
+AKS does not set a limit on the container image size. However, it is important to understand that the larger the image, the higher the memory demand. This could potentially exceed resource limits or the overall available memory of worker nodes. By default, memory for VM size Standard_DS2_v2 for an AKS cluster is set to 7 GiB.
+
+When a container image is excessively large, as in the terabyte (TB) range, kubelet might not be able to pull it from your container registry to a node due to lack of disk space.
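+A quick way to see how much disk and memory a node has available for image pulls is to inspect the node (standard kubectl commands, shown as a sketch; replace `<node-name>` with one of your nodes):
+
+```console
+kubectl get nodes
+kubectl describe node <node-name>
+```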
+ ### Windows Server nodes For Windows Server nodes, Windows Update does not automatically run and apply the latest updates. On a regular schedule around the Windows Update release cycle and your own validation process, you should perform an upgrade on the cluster and the Windows Server node pool(s) in your AKS cluster. This upgrade process creates nodes that run the latest Windows Server image and patches, then removes the older nodes. For more information on this process, see [Upgrade a node pool in AKS][nodepool-upgrade].
-### Are there additional security threats relevant to AKS that customers should be aware of?
+### Are there security threats targeting AKS that customers should be aware of?
-Microsoft provides guidance on additional actions you can take to secure your workloads through services like [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md?tabs=defender-for-container-arch-aks). The following is a list of additional security threats related to AKS and Kubernetes that customers should be aware of:
+Microsoft provides guidance on other actions you can take to secure your workloads through services like [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md?tabs=defender-for-container-arch-aks). The following is a security threat related to AKS and Kubernetes that customers should be aware of:
* [New large-scale campaign targets Kubeflow](https://techcommunity.microsoft.com/t5/azure-security-center/new-large-scale-campaign-targets-kubeflow/ba-p/2425750) - June 8, 2021 ## How does the managed Control Plane communicate with my Nodes?
-AKS uses a secure tunnel communication to allow the api-server and individual node kubelets to communicate even on separate virtual networks. The tunnel is secured through TLS encryption. The current main tunnel that is used by AKS is [Konnectivity, previously known as apiserver-network-proxy](https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/). Please ensure that all network rules follow the [Azure required network rules and FQDNs](limit-egress-traffic.md).
+AKS uses a secure tunnel communication to allow the api-server and individual node kubelets to communicate even on separate virtual networks. The tunnel is secured through TLS encryption. The current main tunnel that is used by AKS is [Konnectivity, previously known as apiserver-network-proxy](https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/). Verify all network rules follow the [Azure required network rules and FQDNs](limit-egress-traffic.md).
## Why are two resource groups created with AKS?
-AKS builds upon a number of Azure infrastructure resources, including virtual machine scale sets, virtual networks, and managed disks. This enables you to leverage many of the core capabilities of the Azure platform within the managed Kubernetes environment provided by AKS. For example, most Azure virtual machine types can be used directly with AKS and Azure Reservations can be used to receive discounts on those resources automatically.
+AKS builds upon many Azure infrastructure resources, including virtual machine scale sets, virtual networks, and managed disks. This enables you to apply many of the core capabilities of the Azure platform within the managed Kubernetes environment provided by AKS. For example, most Azure virtual machine types can be used directly with AKS and Azure Reservations can be used to receive discounts on those resources automatically.
To enable this architecture, each AKS deployment spans two resource groups:
AKS firewalls the API server egress so your admission controller webhooks need t
To protect the stability of the system and prevent custom admission controllers from impacting internal services in the kube-system namespace, AKS has an **Admissions Enforcer**, which automatically excludes kube-system and AKS internal namespaces. This service ensures the custom admission controllers don't affect the services running in kube-system.
-If you have a critical use case for having something deployed on kube-system (not recommended) which you require to be covered by your custom admission webhook, you may add the below label or annotation so that Admissions Enforcer ignores it.
+If you have a critical use case for deploying something on kube-system (not recommended) in support of your custom admission webhook, you may add the below label or annotation so that Admissions Enforcer ignores it.
Label: ```"admissions.enforcer/disabled": "true"``` or Annotation: ```"admissions.enforcer/disabled": true```
Windows Server support for node pool includes some limitations that are part of
AKS provides SLA guarantees as an optional feature with [Uptime SLA][uptime-sla].
-The Free SKU offered by default doesn't have a associated Service Level *Agreement*, but has a Service Level *Objective* of 99.5%. It could happen that transient connectivity issues are observed in case of upgrades, unhealthy underlay nodes, platform maintenance, application overwhelming the API Server with requests, etc. If your workload doesn't tolerate API Server restarts, then we suggest using Uptime SLA.
+The Free SKU offered by default doesn't have an associated Service Level *Agreement*, but has a Service Level *Objective* of 99.5%. You might observe transient connectivity issues during upgrades, when underlay nodes are unhealthy, during platform maintenance, or when an application overwhelms the API Server with requests. If your workload doesn't tolerate API Server restarts, we suggest using Uptime SLA.
## Can I apply Azure reservation discounts to my AKS agent nodes?
-AKS agent nodes are billed as standard Azure virtual machines, so if you've purchased [Azure reservations][reservation-discounts] for the VM size that you're using in AKS, those discounts are automatically applied.
+AKS agent nodes are billed as standard Azure virtual machines. If you've purchased [Azure reservations][reservation-discounts] for the VM size that you're using in AKS, those discounts are automatically applied.
## Can I move/migrate my cluster between Azure tenants?
Moving or renaming your AKS cluster and its associated resources isn't supported
## Why is my cluster delete taking so long?
-Most clusters are deleted upon user request; in some cases, especially where customers are bringing their own Resource Group, or doing cross-RG tasks deletion can take additional time or fail. If you have an issue with deletes, double-check that you do not have locks on the RG, that any resources outside of the RG are disassociated from the RG, and so on.
+Most clusters are deleted upon user request. In some cases, especially where customers bring their own resource group or do cross-resource-group tasks, deletion can take more time or fail. If you have an issue with deletes, double-check that you don't have locks on the resource group, that any resources outside of the resource group are disassociated from it, and so on.
## If I have pod / deployments in state 'NodeLost' or 'Unknown' can I still upgrade my cluster?
Confirm your service principal hasn't expired. See: [AKS service principal](./k
Confirm your service principal hasn't expired. See: [AKS service principal](./kubernetes-service-principal.md) and [AKS update credentials](./update-credentials.md).

## Can I scale my AKS cluster to zero?

You can completely [stop a running AKS cluster](start-stop-cluster.md), saving on the respective compute costs. Additionally, you may also choose to [scale or autoscale all or specific `User` node pools](scale-cluster.md#scale-user-node-pools-to-0) to 0, maintaining only the necessary cluster configuration. You can't directly scale [system node pools](use-system-pools.md) to zero.
The following images have functional requirements to "Run as Root" and exception
## What is Azure CNI Transparent Mode vs. Bridge Mode?
-From v1.2.0 Azure CNI will have Transparent mode as default for single tenancy Linux CNI deployments. Transparent mode is replacing bridge mode. In this section, we will discuss more about the differences about both modes and what are the benefits/limitation for using Transparent mode in Azure CNI.
+Starting with version 1.2.0, Azure CNI sets Transparent mode as the default for single-tenancy Linux CNI deployments. Transparent mode is replacing Bridge mode. This section discusses the differences between the two modes and the benefits and limitations of using Transparent mode in Azure CNI.
### Bridge mode
root@k8s-agentpool1-20465682-1:/#
```

### Transparent mode

Transparent mode takes a straightforward approach to setting up Linux networking. In this mode, Azure CNI won't change any properties of the eth0 interface in the Linux VM. This minimal approach of changing the Linux networking properties helps reduce complex corner case issues that clusters could face with Bridge mode. In Transparent mode, Azure CNI creates and adds host-side pod `veth` pair interfaces that are added to the host network. Intra-VM pod-to-pod communication is through ip routes that the CNI adds. Essentially, pod-to-pod communication is over layer 3 and pod traffic is routed by L3 routing rules.

:::image type="content" source="media/faq/transparent-mode.png" alt-text="Transparent mode topology":::
Below is an example ip route setup of transparent mode, each Pod's interface wil
Traditionally, if your pod runs as a non-root user (which you should), you must specify an `fsGroup` inside the pod's security context so that the volume is readable and writable by the pod. This requirement is covered in more detail [here](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/).
-But one side-effect of setting `fsGroup` is that, each time a volume is mounted, Kubernetes must recursively `chown()` and `chmod()` all the files and directories inside the volume - with a few exceptions noted below. This happens even if group ownership of the volume already matches the requested `fsGroup`, and can be pretty expensive for larger volumes with lots of small files, which causes pod startup to take a long time. This scenario has been a known problem before v1.20 and the workaround is setting the Pod run as root:
+But one side effect of setting `fsGroup` is that each time a volume is mounted, Kubernetes must recursively `chown()` and `chmod()` all the files and directories inside the volume, with a few exceptions noted below. This happens even if group ownership of the volume already matches the requested `fsGroup`, and can be expensive for larger volumes with lots of small files, which causes pod startup to take a long time. This scenario was a known problem before v1.20, and the workaround is setting the pod to run as root:
```yaml apiVersion: v1
spec:
fsGroup: 0 ```
-The issue has been resolved by Kubernetes v1.20, refer [Kubernetes 1.20: Granular Control of Volume Permission Changes](https://kubernetes.io/blog/2020/12/14/kubernetes-release-1.20-fsgroupchangepolicy-fsgrouppolicy/) for more details.
+The issue has been resolved with Kubernetes version 1.20. For more information, see [Kubernetes 1.20: Granular Control of Volume Permission Changes](https://kubernetes.io/blog/2020/12/14/kubernetes-release-1.20-fsgroupchangepolicy-fsgrouppolicy/).
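+
+Starting with Kubernetes 1.20, you can instead keep the pod non-root and set `fsGroupChangePolicy` so that the recursive permission change is skipped when the volume root already matches. A sketch of the pod `securityContext` fragment (the numeric IDs are placeholders):
+
+```yaml
+securityContext:
+  runAsUser: 1000
+  fsGroup: 1000
+  # Skip the recursive chown/chmod when the volume's root already matches fsGroup
+  fsGroupChangePolicy: "OnRootMismatch"
+```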
## Can I use FIPS cryptographic libraries with deployments on AKS?
-FIPS-enabled nodes are currently are now Generally Available on Linux-based node pools. For more details, see [Add a FIPS-enabled node pool](use-multiple-node-pools.md#add-a-fips-enabled-node-pool).
+FIPS-enabled nodes are now supported on Linux-based node pools. For more information, see [Add a FIPS-enabled node pool](use-multiple-node-pools.md#add-a-fips-enabled-node-pool).
## Can I configure NSGs with AKS?
-AKS doesn't apply Network Security Groups (NSGs) to its subnet and will not modify any of the NSGs associated with that subnet. AKS will only modify the NSGs at the NIC level. If you're using CNI, you also must ensure the security rules in the NSGs allow traffic between the node and pod CIDR ranges. If you're using kubenet, you also must ensure the security rules in the NSGs allow traffic between the node and pod CIDR. For more details, see [Network security groups](concepts-network.md#network-security-groups).
+AKS doesn't apply Network Security Groups (NSGs) to its subnet and doesn't modify any of the NSGs associated with that subnet. AKS only modifies the NSG settings at the network interface (NIC) level. If you're using CNI, you must also ensure the security rules in the NSGs allow traffic between the node and pod CIDR ranges. If you're using kubenet, you must also ensure the security rules in the NSGs allow traffic between the node and pod CIDR. For more information, see [Network security groups](concepts-network.md#network-security-groups).
<!-- LINKS - internal -->
aks Kubernetes Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-portal.md
To access the Kubernetes resources, you must have access to the AKS cluster, the
For existing clusters, you may need to enable the Kubernetes resource view. To enable the resource view, follow the prompts in the portal for your cluster.
+### [Azure CLI](#tab/azure-cli)
> [!TIP]
> The AKS feature for [**API server authorized IP ranges**](api-server-authorized-ip-ranges.md) can be added to limit API server access to only the firewall's public endpoint. Another option for such clusters is updating `--api-server-authorized-ip-ranges` to include access for a local client computer or IP address range (from which the portal is being browsed). To allow this access, you need the computer's public IPv4 address. You can find this address with the following command or by searching "what is my IP address" in an internet browser.
CURRENT_IP=$(dig +short myip.opendns.com @resolver1.opendns.com)
az aks update -g $RG -n $AKSNAME --api-server-authorized-ip-ranges $CURRENT_IP/32 ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+> [!TIP]
+> The AKS feature for [**API server authorized IP ranges**](api-server-authorized-ip-ranges.md) can be added to limit API server access to only the firewall's public endpoint. Another option for such clusters is updating `-ApiServerAccessAuthorizedIpRange` to include access for a local client computer or IP address range (from which the portal is being browsed). To allow this access, you need the computer's public IPv4 address. You can find this address with the following command or by searching "what is my IP address" in an internet browser.
+
+```azurepowershell
+# Retrieve your IP address
+$CURRENT_IP = (Invoke-RestMethod -Uri http://ipinfo.io/json).ip
+
+# Add to AKS approved list
+Set-AzAksCluster -ResourceGroupName $RG -Name $AKSNAME -ApiServerAccessAuthorizedIpRange $CURRENT_IP/32
+```
+++

## Next steps

This article showed you how to access Kubernetes resources for your AKS cluster. See [Deployments and YAML manifests][deployments] for a deeper understanding of cluster resources and the YAML files that are accessed with the Kubernetes resource viewer.
aks Quick Kubernetes Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-cli.md
To learn more about creating a Windows Server node pool, see [Create an AKS clus
- The identity you are using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md). - If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the
-[Az account](/cli/azure/account) command.
+[az account](/cli/azure/account) command.
- Verify *Microsoft.OperationsManagement* and *Microsoft.OperationalInsights* are registered on your subscription. To check the registration status:
- ```azurecli
+ ```azurecli-interactive
az provider show -n Microsoft.OperationsManagement -o table
    az provider show -n Microsoft.OperationalInsights -o table
    ```

    If they are not registered, register *Microsoft.OperationsManagement* and *Microsoft.OperationalInsights* using:
- ```azurecli
+ ```azurecli-interactive
az provider register --namespace Microsoft.OperationsManagement
    az provider register --namespace Microsoft.OperationalInsights
    ```
Two [Kubernetes Services][kubernetes-service] are also created:
## Test the application
-When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.
+When the application runs, a Kubernetes service exposes the application front-end to the internet. This process can take a few minutes to complete.
Monitor progress using the [kubectl get service][kubectl-get] command with the `--watch` argument.
aks Quick Kubernetes Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-portal.md
This quickstart assumes a basic understanding of Kubernetes concepts. For more i
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] -- If you are unfamiliar with using the Bash environment in Azure Cloud Shell, review [Overview of Azure Cloud Shell](../../cloud-shell/overview.md).
+- If you're unfamiliar with the Azure Cloud Shell, review [Overview of Azure Cloud Shell](../../cloud-shell/overview.md).
-- The identity you are using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
+- The identity you're using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
## Create an AKS cluster
This quickstart assumes a basic understanding of Kubernetes concepts. For more i
## Connect to the cluster
-To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell. If you are unfamiliar with the Cloud Shell, review [Overview of Azure Cloud Shell](../../cloud-shell/overview.md).
+To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell. If you're unfamiliar with the Cloud Shell, review [Overview of Azure Cloud Shell](../../cloud-shell/overview.md).
1. Open Cloud Shell using the `>_` button on the top of the Azure portal.
To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl
> [!NOTE] > To perform these operations in a local shell installation:
- >
- > 1. Verify Azure CLI is installed.
- > 2. Connect to Azure via the `az login` command.
+ > 1. Verify Azure CLI or Azure PowerShell is installed.
+ > 2. Connect to Azure via the `az login` or `Connect-AzAccount` command.
+
+### [Azure CLI](#tab/azure-cli)
2. Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. The following command downloads credentials and configures the Kubernetes CLI to use them.
- ```azurecli
+ ```azurecli-interactive
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+2. Configure `kubectl` to connect to your Kubernetes cluster using the [Import-AzAksCredential][import-azakscredential] cmdlet. The following command downloads credentials and configures the Kubernetes CLI to use them.
+
+ ```azurepowershell-interactive
+ Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster
+ ```
+
++ 3. Verify the connection to your cluster using `kubectl get` to return a list of the cluster nodes. ```console
To see the Azure Vote app in action, open a web browser to the external IP addre
## Delete cluster
-To avoid Azure charges, if you don't plan on going through the tutorials that follow, clean up your unnecessary resources. Select the **Delete** button on the AKS cluster dashboard. You can also use the [az aks delete][az-aks-delete] command in the Cloud Shell:
+If you don't plan on going through the tutorials that follow, clean up your unnecessary resources to avoid Azure charges. Select the **Delete** button on the AKS cluster dashboard. You can also use the [az group delete][az-group-delete] command or the [Remove-AzResourceGroup][remove-azresourcegroup] cmdlet to remove the resource group, container service, and all related resources.
-```azurecli
-az aks delete --resource-group myResourceGroup --name myAKSCluster --yes --no-wait
+### [Azure CLI](#tab/azure-cli)
+
+```azurecli-interactive
+az group delete --name myResourceGroup --yes --no-wait
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name myResourceGroup
``` ++ > [!NOTE]
-> When you delete the cluster, system-assigned managed identity is managed by the platform and does not require removal.
+> The AKS cluster was created with a system-assigned managed identity. This identity is managed by the platform and doesn't require removal.
## Next steps
To learn more about AKS by walking through a complete example, including buildin
<!-- LINKS - internal --> [kubernetes-concepts]: ../concepts-clusters-workloads.md [az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
+[import-azakscredential]: /powershell/module/az.aks/import-azakscredential
+[az-group-delete]: /cli/azure/group#az-group-delete
+[remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup
[az-aks-delete]: /cli/azure/aks#az_aks_delete [aks-monitor]: ../azure-monitor/containers/container-insights-overview.md [aks-network]: ../concepts-network.md
aks Quick Kubernetes Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-powershell.md
An [Azure resource group](../../azure-resource-manager/management/overview.md) i
The following example creates a resource group named *myResourceGroup* in the *eastus* region.
-Create a resource group using the [New-AzResourceGroup][new-azresourcegroup]
-cmdlet.
+Create a resource group using the [New-AzResourceGroup][new-azresourcegroup] cmdlet.
```azurepowershell-interactive New-AzResourceGroup -Name myResourceGroup -Location eastus
ResourceId : /subscriptions/00000000-0000-0000-0000-000000000000/resource
## Create AKS cluster
-Create an AKS cluster using the [New-AzAksCluster][new-azakscluster] cmdlet with the *--WorkspaceResourceId* parameter to enable [Azure Monitor container insights][azure-monitor-containers].
-
-1. Generate an SSH key pair using the `ssh-keygen` command-line utility. For more details, see:
- * [Quick steps: Create and use an SSH public-private key pair for Linux VMs in Azure](../../virtual-machines/linux/mac-create-ssh-keys.md)
- * [How to use SSH keys with Windows on Azure](../../virtual-machines/linux/ssh-from-windows.md)
+Create an AKS cluster using the [New-AzAksCluster][new-azakscluster] cmdlet with the *-WorkspaceResourceId* parameter to enable [Azure Monitor container insights][azure-monitor-containers].
1. Create an AKS cluster named **myAKSCluster** with one node. ```azurepowershell-interactive
- New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 1
+ New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 1 -GenerateSshKey -WorkspaceResourceId <WORKSPACE_RESOURCE_ID>
``` After a few minutes, the command completes and returns information about the cluster.
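The `<WORKSPACE_RESOURCE_ID>` placeholder is the full ARM resource ID of a Log Analytics workspace (retrievable, for example, with the `Get-AzOperationalInsightsWorkspace` cmdlet). As a sketch of what that ID looks like — every name and the subscription GUID below are placeholders — it can be assembled from its parts:

```shell
# All values below are placeholders, not a real subscription or workspace.
subscription_id="00000000-0000-0000-0000-000000000000"
resource_group="myResourceGroup"
workspace_name="myWorkspace"

# Assemble the full ARM resource ID expected by -WorkspaceResourceId
workspace_resource_id="/subscriptions/${subscription_id}/resourceGroups/${resource_group}/providers/Microsoft.OperationalInsights/workspaces/${workspace_name}"
echo "$workspace_resource_id"
```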
Two [Kubernetes Services][kubernetes-service] are also created:
* An external service to access the Azure Vote application from the internet. 1. Create a file named `azure-vote.yaml`.
- * If you use the Azure Cloud Shell, this file can be created using `vi` or `nano` as if working on a virtual or physical system
+ * If you use the Azure Cloud Shell, this file can be created using `code`, `vi`, or `nano` as if working on a virtual or physical system
1. Copy in the following YAML definition: ```yaml
aks Quick Kubernetes Deploy Rm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-rm-template.md
If your environment meets the prerequisites and you're familiar with using ARM t
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+### [Azure CLI](#tab/azure-cli)
+ [!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)] - This article requires version 2.0.64 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+### [Azure PowerShell](#tab/azure-powershell)
+
+- If you're running PowerShell locally, install the Az PowerShell module and connect to your Azure account using the [Connect-AzAccount][connect-azaccount] cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell][install-azure-powershell]. If using Azure Cloud Shell, the latest version is already installed.
+---
+
- To create an AKS cluster using a Resource Manager template, you provide an SSH public key. If you need this resource, see the following section; otherwise skip to the [Review the template](#review-the-template) section.
- Ensure the identity you're using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
It takes a few minutes to create the AKS cluster. Wait for the cluster to be suc
To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell.
+### [Azure CLI](#tab/azure-cli)
+ 1. Install `kubectl` locally using the [az aks install-cli][az-aks-install-cli] command: ```azurecli
To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl
kubectl get nodes ```
- The following output example shows the single node created in the previous steps. Make sure the node status is *Ready*:
+ The following output example shows the three nodes created in the previous steps. Make sure the node status is *Ready*:
```output NAME STATUS ROLES AGE VERSION
To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl
aks-agentpool-41324942-2 Ready agent 6m45s v1.12.6 ```
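If you'd rather script the readiness check than eyeball the table, the output can be filtered with standard tools. A sketch over the sample output above (node names and versions are illustrative):

```shell
# Sample `kubectl get nodes` output captured as a string (illustrative values)
nodes_output='NAME                       STATUS   ROLES   AGE     VERSION
aks-agentpool-41324942-0   Ready    agent   6m44s   v1.12.6
aks-agentpool-41324942-1   Ready    agent   6m46s   v1.12.6
aks-agentpool-41324942-2   Ready    agent   6m45s   v1.12.6'

# Count rows (header excluded) whose STATUS column is not "Ready"
not_ready=$(printf '%s\n' "$nodes_output" | awk 'NR > 1 && $2 != "Ready"' | wc -l)
echo "Nodes not Ready: $not_ready"
```

In practice you would pipe `kubectl get nodes` directly into the same `awk` filter instead of a captured string.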
+### [Azure PowerShell](#tab/azure-powershell)
+
+1. Install `kubectl` locally using the [Install-AzAksKubectl][install-azakskubectl] cmdlet:
+
+ ```azurepowershell
+ Install-AzAksKubectl
+ ```
+
+2. Configure `kubectl` to connect to your Kubernetes cluster using the [Import-AzAksCredential][import-azakscredential] cmdlet. The following cmdlet downloads credentials and configures the Kubernetes CLI to use them.
+
+ ```azurepowershell-interactive
+ Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster
+ ```
+
+3. Verify the connection to your cluster using the [kubectl get][kubectl-get] command. This command returns a list of the cluster nodes.
+
+ ```azurepowershell-interactive
+ kubectl get nodes
+ ```
+
+ The following output example shows the three nodes created in the previous steps. Make sure the node status is *Ready*:
+
+ ```plaintext
+ NAME STATUS ROLES AGE VERSION
+ aks-agentpool-41324942-0 Ready agent 6m44s v1.12.6
+ aks-agentpool-41324942-1 Ready agent 6m46s v1.12.6
+ aks-agentpool-41324942-2 Ready agent 6m45s v1.12.6
+ ```
+---
+
### Deploy the application

A [Kubernetes manifest file][kubernetes-deployment] defines a cluster's desired state, such as which container images to run.
Two [Kubernetes Services][kubernetes-service] are also created:
* An external service to access the Azure Vote application from the internet. 1. Create a file named `azure-vote.yaml`.
- * If you use the Azure Cloud Shell, this file can be created using `vi` or `nano` as if working on a virtual or physical system
+ * If you use the Azure Cloud Shell, this file can be created using `code`, `vi`, or `nano` as if working on a virtual or physical system
1. Copy in the following YAML definition: ```yaml
To see the Azure Vote app in action, open a web browser to the external IP addre
## Clean up resources
+### [Azure CLI](#tab/azure-cli)
+ To avoid Azure charges, if you don't plan on going through the tutorials that follow, clean up your unnecessary resources. Use the [az group delete][az-group-delete] command to remove the resource group, container service, and all related resources. ```azurecli-interactive az group delete --name myResourceGroup --yes --no-wait ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+To avoid Azure charges, if you don't plan on going through the tutorials that follow, clean up your unnecessary resources. Use the [Remove-AzResourceGroup][remove-azresourcegroup] cmdlet to remove the resource group, container service, and all related resources.
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name myResourceGroup
+```
+---
+
> [!NOTE]
> The AKS cluster was created with a system-assigned managed identity (the default identity option used in this quickstart). This identity is managed by the platform and doesn't require removal.
To learn more about AKS, and walk through a complete code to deployment example,
[az-aks-browse]: /cli/azure/aks#az_aks_browse [az-aks-create]: /cli/azure/aks#az_aks_create [az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
+[import-azakscredential]: /powershell/module/az.aks/import-azakscredential
[az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli
+[install-azakskubectl]: /powershell/module/az.aks/install-azakskubectl
[az-group-create]: /cli/azure/group#az_group_create [az-group-delete]: /cli/azure/group#az_group_delete
+[remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup
[azure-cli-install]: /cli/azure/install-azure-cli
+[install-azure-powershell]: /powershell/azure/install-az-ps
+[connect-azaccount]: /powershell/module/az.accounts/Connect-AzAccount
[sp-delete]: ../kubernetes-service-principal.md#additional-considerations [azure-portal]: https://portal.azure.com [kubernetes-deployment]: ../concepts-clusters-workloads.md#deployments-and-yaml-manifests
aks Quick Windows Container Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-cli.md
This article assumes a basic understanding of Kubernetes concepts. For more info
- The identity you're using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md). - If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the
-[Az account](/cli/azure/account) command.
+[az account](/cli/azure/account) command.
### Limitations
The following example output shows the resource group created successfully:
To run an AKS cluster that supports node pools for Windows Server containers, your cluster needs to use the [Azure CNI][azure-cni-about] (advanced) network plugin. For more detailed information to help plan out the required subnet ranges and network considerations, see [configure Azure CNI networking][use-advanced-networking].

Use the [az aks create][az-aks-create] command to create an AKS cluster named *myAKSCluster*. This command will create the necessary network resources if they don't exist.

* The cluster is configured with two nodes.
-* The `--windows-admin-password` and `--windows-admin-username` parameters set the administrator credentials for any Windows Server nodes on the cluster and must meet [Windows Server password requirements][windows-server-password]. If you don't specify the *windows-admin-password* parameter, you will be prompted to provide a value.
+* The `--windows-admin-password` and `--windows-admin-username` parameters set the administrator credentials for any Windows Server nodes on the cluster and must meet [Windows Server password requirements][windows-server-password]. If you don't specify the `--windows-admin-password` parameter, you will be prompted to provide a value.
* The node pool uses `VirtualMachineScaleSets`. > [!NOTE]
Create a username to use as administrator credentials for the Windows Server nod
echo "Please enter the username to use as administrator credentials for Windows Server nodes on your cluster: " && read WINDOWS_USERNAME ```
-Create your cluster ensuring you specify `--windows-admin-username` parameter. The following example command creates a cluster using the value from *WINDOWS_USERNAME* you set in the previous command. Alternatively you can provide a different username directly in the parameter instead of using *WINDOWS_USERNAME*. The following command will also prompt you to create a password for the administrator credentials for the Windows Server nodes on your cluster. Alternatively, you can use the *windows-admin-password* parameter and specify your own value there.
+Create your cluster, ensuring you specify the `--windows-admin-username` parameter. The following example command creates a cluster using the value from *WINDOWS_USERNAME* you set in the previous command. Alternatively, you can provide a different username directly in the parameter instead of using *WINDOWS_USERNAME*. The following command will also prompt you to create a password for the administrator credentials for the Windows Server nodes on your cluster. Alternatively, you can use the `--windows-admin-password` parameter and specify your own value there.
```azurecli-interactive az aks create \
After a few minutes, the command completes and returns JSON-formatted informatio
By default, an AKS cluster is created with a node pool that can run Linux containers. Use `az aks nodepool add` command to add an additional node pool that can run Windows Server containers alongside the Linux node pool.
-```azurecli
+```azurecli-interactive
az aks nodepool add \ --resource-group myResourceGroup \ --cluster-name myAKSCluster \
When creating a Windows node pool, the default operating system will be Windows
[!INCLUDE [preview features callout](../includes/preview/preview-callout.md)]
-### Install the `aks-preview` Azure CLI
+### Install the `aks-preview` extension
You also need the *aks-preview* Azure CLI extension version `0.5.68` or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command, or install any available updates by using the [az extension update][az-extension-update] command.
Beginning in Kubernetes version 1.20 and greater, you can specify `containerd` a
> [!IMPORTANT] > When using `containerd` with Windows Server 2019 node pools: > - Both the control plane and Windows Server 2019 node pools must use Kubernetes version 1.20 or greater.
-> - When creating or updating a node pool to run Windows Server containers, the default value for *node-vm-size* is *Standard_D2s_v3* which was minimum recommended size for Windows Server 2019 node pools prior to Kubernetes 1.20. The minimum recommended size for Windows Server 2019 node pools using `containerd` is *Standard_D4s_v3*. When setting the *node-vm-size* parameter, please check the list of [restricted VM sizes][restricted-vm-sizes].
+> - When creating or updating a node pool to run Windows Server containers, the default value for `--node-vm-size` is *Standard_D2s_v3*, which was the minimum recommended size for Windows Server 2019 node pools prior to Kubernetes 1.20. The minimum recommended size for Windows Server 2019 node pools using `containerd` is *Standard_D4s_v3*. When setting the `--node-vm-size` parameter, check the list of [restricted VM sizes][restricted-vm-sizes].
> - It is highly recommended that you use [taints or labels][aks-taints] with your Windows Server 2019 node pools running `containerd` and tolerations or node selectors with your deployments to guarantee your workloads are scheduled correctly. ### Add a Windows Server node pool with `containerd`
Use the `az aks nodepool add` command to add a node pool that can run Windows Se
> [!NOTE] > If you do not specify the *WindowsContainerRuntime=containerd* custom header, the node pool will use Docker as the container runtime.
-```azurecli
+```azurecli-interactive
az aks nodepool add \ --resource-group myResourceGroup \ --cluster-name myAKSCluster \
The above command creates a new Windows Server node pool using `containerd` as t
Use the `az aks nodepool upgrade` command to upgrade a specific node pool from Docker to `containerd`.
-```azurecli
+```azurecli-interactive
az aks nodepool upgrade \ --resource-group myResourceGroup \ --cluster-name myAKSCluster \
The above command upgrades a node pool named *npwd* to the `containerd` runtime.
To upgrade all existing node pools in a cluster to use the `containerd` runtime for all Windows Server node pools:
-```azurecli
+```azurecli-interactive
az aks upgrade \ --resource-group myResourceGroup \ --name myAKSCluster \
To verify the connection to your cluster, use the [kubectl get][kubectl-get] com
kubectl get nodes -o wide ```
-The following example output shows the all the nodes in the cluster. Make sure that the status of all nodes is *Ready*:
+The following example output shows all nodes in the cluster. Make sure that the status of all nodes is *Ready*:
```output NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
A Kubernetes manifest file defines a desired state for the cluster, such as what
The ASP.NET sample application is provided as part of the [.NET Framework Samples][dotnet-samples] and runs in a Windows Server container. AKS requires Windows Server containers to be based on images of *Windows Server 2019* or greater. The Kubernetes manifest file must also define a [node selector][node-selector] to tell your AKS cluster to run your ASP.NET sample application's pod on a node that can run Windows Server containers.
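The node selector portion of such a manifest typically looks like the following fragment — a sketch of the pod template's selector stanza only, not the article's full manifest:

```yaml
# Pod template fragment: schedule this workload onto Windows Server nodes only
spec:
  nodeSelector:
    "kubernetes.io/os": windows
```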
-Create a file named `sample.yaml` and copy in the following YAML definition. If you use the Azure Cloud Shell, this file can be created using `vi` or `nano` as if working on a virtual or physical system:
+Create a file named `sample.yaml` and copy in the following YAML definition. If you use the Azure Cloud Shell, this file can be created using `code`, `vi`, or `nano` as if working on a virtual or physical system:
```yaml apiVersion: apps/v1
aks Quick Windows Container Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-powershell.md
module and connect to your Azure account using the
[Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell][install-azure-powershell].
-* You also must install the [Az.Aks](/powershell/module/az.aks) PowerShell module:
-
- ```azurepowershell-interactive
- Install-Module Az.Aks
- ```
[!INCLUDE [cloud-shell-try-it](../../../includes/cloud-shell-try-it.md)]
ResourceId : /subscriptions/00000000-0000-0000-0000-000000000000/resource
## Create an AKS cluster
-Use the `ssh-keygen` command-line utility to generate an SSH key pair. For more details, see
-[Quick steps: Create and use an SSH public-private key pair for Linux VMs in Azure](../../virtual-machines/linux/mac-create-ssh-keys.md).
-
To run an AKS cluster that supports node pools for Windows Server containers, your cluster needs to use the [Azure CNI][azure-cni-about] (advanced) network plugin. For more detailed information to help plan out the required subnet ranges and network considerations, see
network resources if they don't exist.
> node pool. ```azurepowershell-interactive
-$Username = Read-Host -Prompt 'Please create a username for the administrator credentials on your Windows Server containers: '
-$Password = Read-Host -Prompt 'Please create a password for the administrator credentials on your Windows Server containers: ' -AsSecureString
-New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 2 -NetworkPlugin azure -NodeVmSetType VirtualMachineScaleSets -WindowsProfileAdminUserName $Username -WindowsProfileAdminUserPassword $Password
+$AdminCreds = Get-Credential -Message 'Please create the administrator credentials for your Windows Server containers'
+New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 2 -NetworkPlugin azure -NodeVmSetType VirtualMachineScaleSets -WindowsProfileAdminUserName $AdminCreds.UserName -WindowsProfileAdminUserPassword $AdminCreds.Password -GenerateSshKey
``` > [!Note]
New-AzAksNodePool -ResourceGroupName myResourceGroup -ClusterName myAKSCluster -
``` The above command creates a new node pool named **npwin** and adds it to the **myAKSCluster**. When
-creating a node pool to run Windows Server containers, the default value for **VmSize** is
-**Standard_D2s_v3**. If you choose to set the **VmSize** parameter, check the list of
+creating a node pool to run Windows Server containers, the default value for `-VmSize` is
+**Standard_D2s_v3**. If you choose to set the `-VmSize` parameter, check the list of
[restricted VM sizes][restricted-vm-sizes]. The minimum recommended size is **Standard_D2s_v3**. The previous command also uses the default subnet in the default vnet created when running `New-AzAksCluster`.
To manage a Kubernetes cluster, you use [kubectl][kubectl], the Kubernetes comma
you use Azure Cloud Shell, `kubectl` is already installed. To install `kubectl` locally, use the `Install-AzAksKubectl` cmdlet:
-```azurepowershell-interactive
+```azurepowershell
Install-AzAksKubectl ```
of **Windows Server 2019** or greater. The Kubernetes manifest file must also de
[node selector][node-selector] to tell your AKS cluster to run your ASP.NET sample application's pod on a node that can run Windows Server containers.
-Create a file named `sample.yaml` and copy in the following YAML definition. If you use the Azure
-Cloud Shell, this file can be created using `vi` or `nano` as if working on a virtual or physical
-system:
+Create a file named `sample.yaml` and copy in the following YAML definition. If you use the Azure Cloud Shell, this file can be created using `code`, `vi`, or `nano` as if working on a virtual or physical system:
```yaml apiVersion: apps/v1
service/sample created
## Test the application
-When the application runs, a Kubernetes service exposes the application frontend to the internet.
+When the application runs, a Kubernetes service exposes the application front end to the internet.
This process can take a few minutes to complete. Occasionally the service can take longer than a few minutes to provision. Allow up to 10 minutes in these cases.
aks Operator Best Practices Cluster Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-cluster-security.md
New features typically move through *alpha* and *beta* status before they become
AKS supports three minor versions of Kubernetes. Once a new minor version is introduced, the oldest supported minor version and its patch releases are retired. Minor Kubernetes updates happen on a periodic basis. To stay within support, ensure you have a governance process to check for necessary upgrades. For more information, see [Supported Kubernetes versions in AKS][aks-supported-versions].
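The rolling three-minor-version window can be illustrated with a quick sketch (the version numbers below are illustrative, not a current support statement):

```shell
# If 1.24 were the newest supported minor version, the three-version
# window would cover 1.24, 1.23, and 1.22.
newest_minor=24
supported=""
for offset in 0 1 2; do
  supported="$supported 1.$((newest_minor - offset))"
done
echo "Supported minor versions:$supported"
```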
+### [Azure CLI](#tab/azure-cli)
+ To check the versions that are available for your cluster, use the [az aks get-upgrades][az-aks-get-upgrades] command as shown in the following example: ```azurecli-interactive
-az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster
+az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table
``` You can then upgrade your AKS cluster using the [az aks upgrade][az-aks-upgrade] command. The upgrade process safely:
You can then upgrade your AKS cluster using the [az aks upgrade][az-aks-upgrade]
* Schedules pods on remaining nodes. * Deploys a new node running the latest OS and Kubernetes versions.
+### [Azure PowerShell](#tab/azure-powershell)
+
+To check the versions that are available for your cluster, use the [Get-AzAksUpgradeProfile][get-azaksupgradeprofile] cmdlet as shown in the following example:
+
+```azurepowershell-interactive
+Get-AzAksUpgradeProfile -ResourceGroupName myResourceGroup -ClusterName myAKSCluster |
+ Select-Object -Property Name, ControlPlaneProfileKubernetesVersion -ExpandProperty ControlPlaneProfileUpgrade |
+ Format-Table -Property *
+```
+
+You can then upgrade your AKS cluster using the [Set-AzAksCluster][set-azakscluster] command. The upgrade process safely:
+* Cordons and drains one node at a time.
+* Schedules pods on remaining nodes.
+* Deploys a new node running the latest OS and Kubernetes versions.
+---
+
>[!IMPORTANT]
> Test new minor versions in a dev/test environment and validate that your workload remains healthy with the new Kubernetes version.
>
> Kubernetes may deprecate APIs (like in version 1.16) that your workloads rely on. When bringing new versions into production, consider using [multiple node pools on separate versions](use-multiple-node-pools.md) and upgrading individual pools one at a time to progressively roll the update across a cluster. If running multiple clusters, upgrade one cluster at a time to progressively monitor for impact or changes.
>
+>### [Azure CLI](#tab/azure-cli)
+>
>```azurecli-interactive >az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version KUBERNETES_VERSION >```
+>
+>### [Azure PowerShell](#tab/azure-powershell)
+>
+>```azurepowershell-interactive
+>Set-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -KubernetesVersion <KUBERNETES_VERSION>
+>```
+>
+>
+ For more information about upgrades in AKS, see [Supported Kubernetes versions in AKS][aks-supported-versions] and [Upgrade an AKS cluster][aks-upgrade].
For Windows Server nodes, regularly perform a node image upgrade operation to sa
<!-- INTERNAL LINKS --> [az-aks-get-upgrades]: /cli/azure/aks#az_aks_get_upgrades
+[get-azaksupgradeprofile]: /powershell/module/az.aks/get-azaksupgradeprofile
[az-aks-upgrade]: /cli/azure/aks#az_aks_upgrade
+[set-azakscluster]: /powershell/module/az.aks/set-azakscluster
[aks-supported-versions]: supported-kubernetes-versions.md [aks-upgrade]: upgrade-cluster.md [aks-best-practices-identity]: concepts-identity.md
aks Quickstart Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-dapr.md
In this quickstart, you will get familiar with using the [Dapr cluster extension
## Prerequisites * An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
-* [Azure CLI installed](/cli/azure/install-azure-cli).
+* [Azure CLI][azure-cli-install] or [Azure PowerShell][azure-powershell-install] installed.
* An AKS or Arc-enabled Kubernetes cluster with the [Dapr cluster extension][dapr-overview] enabled ## Clone the repository
You should see the latest JSON in the response.
## Clean up resources
+### [Azure CLI](#tab/azure-cli)
+ Use the [az group delete][az-group-delete] command to remove the resource group, the cluster, the namespace, and all related resources.
-```azurecli-interactive
+```azurecli
az group delete --name MyResourceGroup ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+Use the [Remove-AzResourceGroup][remove-azresourcegroup] command to remove the resource group, the cluster, the namespace, and all related resources.
+
+```azurepowershell
+Remove-AzResourceGroup -Name MyResourceGroup
+```
+++ ## Next steps After successfully deploying this sample application:
After successfully deploying this sample application:
<!-- LINKS --> <!-- INTERNAL -->
+[azure-cli-install]: /cli/azure/install-azure-cli
+[azure-powershell-install]: /powershell/azure/install-az-ps
[cluster-extensions]: ./cluster-extensions.md [dapr-overview]: ./dapr.md [az-group-delete]: /cli/azure/group#az-group-delete
+[remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup
<!-- EXTERNAL --> [hello-world-gh]: https://github.com/dapr/quickstarts/tree/v1.4.0/hello-kubernetes
aks Quickstart Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-event-grid.md
In this quickstart, you'll create an AKS cluster and subscribe to AKS events.
## Prerequisites * An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
-* [Azure CLI installed](/cli/azure/install-azure-cli).
+* [Azure CLI][azure-cli-install] or [Azure PowerShell][azure-powershell-install] installed.
### Register the `EventgridPreview` preview feature To use the feature, you must also enable the `EventgridPreview` feature flag on your subscription.
+### [Azure CLI](#tab/azure-cli)
+ Register the `EventgridPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example: ```azurecli-interactive
az provider register --namespace Microsoft.ContainerService
[!INCLUDE [event-grid-register-provider-cli.md](../../includes/event-grid-register-provider-cli.md)]
+### [Azure PowerShell](#tab/azure-powershell)
+
+Register the `EventgridPreview` feature flag by using the [Register-AzProviderPreviewFeature][register-azproviderpreviewfeature] cmdlet, as shown in the following example:
+
+```azurepowershell-interactive
+Register-AzProviderPreviewFeature -ProviderNamespace Microsoft.ContainerService -Name EventgridPreview
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [Get-AzProviderPreviewFeature][get-azproviderpreviewfeature] cmdlet:
+
+```azurepowershell-interactive
+Get-AzProviderPreviewFeature -ProviderNamespace Microsoft.ContainerService -Name EventgridPreview |
+ Format-Table -Property Name, @{name='State'; expression={$_.Properties.State}}
+```
+
+When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [Register-AzResourceProvider][register-azresourceprovider] command:
+
+```azurepowershell-interactive
+Register-AzResourceProvider -ProviderNamespace Microsoft.ContainerService
+```
++++ ## Create an AKS cluster
+### [Azure CLI](#tab/azure-cli)
+ Create an AKS cluster using the [az aks create][az-aks-create] command. The following example creates a resource group *MyResourceGroup* and a cluster named *MyAKS* with one node in the *MyResourceGroup* resource group:
-```azurecli
+```azurecli-interactive
az group create --name MyResourceGroup --location eastus az aks create -g MyResourceGroup -n MyAKS --location eastus --node-count 1 --generate-ssh-keys ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+Create an AKS cluster using the [New-AzAksCluster][new-azakscluster] command. The following example creates a resource group *MyResourceGroup* and a cluster named *MyAKS* with one node in the *MyResourceGroup* resource group:
+
+```azurepowershell-interactive
+New-AzResourceGroup -Name MyResourceGroup -Location eastus
+New-AzAksCluster -ResourceGroupName MyResourceGroup -Name MyAKS -Location eastus -NodeCount 1 -GenerateSshKey
+```
+++ ## Subscribe to AKS events
+### [Azure CLI](#tab/azure-cli)
+ Create a namespace and event hub using [az eventhubs namespace create][az-eventhubs-namespace-create] and [az eventhubs eventhub create][az-eventhubs-eventhub-create]. The following example creates a namespace *MyNamespace* and an event hub *MyEventGridHub* in *MyNamespace*, both in the *MyResourceGroup* resource group.
-```azurecli
+```azurecli-interactive
az eventhubs namespace create --location eastus --name MyNamespace -g MyResourceGroup az eventhubs eventhub create --name MyEventGridHub --namespace-name MyNamespace -g MyResourceGroup ```
az eventhubs eventhub create --name MyEventGridHub --namespace-name MyNamespace
Subscribe to the AKS events using [az eventgrid event-subscription create][az-eventgrid-event-subscription-create]:
-```azurecli
+```azurecli-interactive
SOURCE_RESOURCE_ID=$(az aks show -g MyResourceGroup -n MyAKS --query id --output tsv) ENDPOINT=$(az eventhubs eventhub show -g MyResourceGroup -n MyEventGridHub --namespace-name MyNamespace --query id --output tsv) az eventgrid event-subscription create --name MyEventGridSubscription \
az eventgrid event-subscription create --name MyEventGridSubscription \
Verify your subscription to AKS events using `az eventgrid event-subscription list`:
-```azurecli
+```azurecli-interactive
az eventgrid event-subscription list --source-resource-id $SOURCE_RESOURCE_ID ```
The following example output shows you're subscribed to events from the *MyAKS*
] ```
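The values captured above with `--query id --output tsv` are full ARM resource IDs. If a script needs individual pieces of such an ID, the segments can be picked apart with plain shell — a sketch using an illustrative ID (the subscription GUID is a placeholder):

```shell
# Illustrative ARM resource ID (the subscription GUID is a placeholder)
source_resource_id="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/MyResourceGroup/providers/Microsoft.ContainerService/managedClusters/MyAKS"

# The last path segment is the resource name; field 5 (splitting on "/")
# is the resource group name.
cluster_name=${source_resource_id##*/}
resource_group=$(printf '%s' "$source_resource_id" | cut -d '/' -f 5)
echo "$cluster_name in $resource_group"   # → MyAKS in MyResourceGroup
```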
+### [Azure PowerShell](#tab/azure-powershell)
+
+Create a namespace and event hub using [New-AzEventHubNamespace][new-azeventhubnamespace] and [New-AzEventHub][new-azeventhub]. The following example creates a namespace *MyNamespace* and an event hub *MyEventGridHub* in *MyNamespace*, both in the *MyResourceGroup* resource group.
+
+```azurepowershell-interactive
+New-AzEventHubNamespace -Location eastus -Name MyNamespace -ResourceGroupName MyResourceGroup
+New-AzEventHub -Name MyEventGridHub -Namespace MyNamespace -ResourceGroupName MyResourceGroup
+```
+
+> [!NOTE]
+> The *name* of your namespace must be unique.
+
+Subscribe to the AKS events using [New-AzEventGridSubscription][new-azeventgridsubscription]:
+
+```azurepowershell-interactive
+$SOURCE_RESOURCE_ID = (Get-AzAksCluster -ResourceGroupName MyResourceGroup -Name MyAKS).Id
+$ENDPOINT = (Get-AzEventHub -ResourceGroupName MyResourceGroup -EventHubName MyEventGridHub -Namespace MyNamespace).Id
+$params = @{
+ EventSubscriptionName = 'MyEventGridSubscription'
+ ResourceId = $SOURCE_RESOURCE_ID
+ EndpointType = 'eventhub'
+ Endpoint = $ENDPOINT
+}
+New-AzEventGridSubscription @params
+```
+
+Verify your subscription to AKS events using `Get-AzEventGridSubscription`:
+
+```azurepowershell-interactive
+Get-AzEventGridSubscription -ResourceId $SOURCE_RESOURCE_ID | Select-Object -ExpandProperty PSEventSubscriptionsList
+```
+
+The following example output shows you're subscribed to events from the *MyAKS* cluster and those events are delivered to the *MyEventGridHub* event hub:
+
+```Output
+EventSubscriptionName : MyEventGridSubscription
+Id : /subscriptions/SUBSCRIPTION_ID/resourceGroups/MyResourceGroup/providers/Microsoft.ContainerService/managedClusters/MyAKS/providers/Microsoft.EventGrid/eventSubscriptions/MyEventGridSubscription
+Type : Microsoft.EventGrid/eventSubscriptions
+Topic : /subscriptions/SUBSCRIPTION_ID/resourceGroups/myresourcegroup/providers/microsoft.containerservice/managedclusters/myaks
+Filter : Microsoft.Azure.Management.EventGrid.Models.EventSubscriptionFilter
+Destination : Microsoft.Azure.Management.EventGrid.Models.EventHubEventSubscriptionDestination
+ProvisioningState : Succeeded
+Labels :
+EventTtl : 1440
+MaxDeliveryAttempt : 30
+EventDeliverySchema : EventGridSchema
+ExpirationDate :
+DeadLetterEndpoint :
+Endpoint : /subscriptions/SUBSCRIPTION_ID/resourceGroups/MyResourceGroup/providers/Microsoft.EventHub/namespaces/MyNamespace/eventhubs/MyEventGridHub
+```
+++

When AKS events occur, you'll see those events appear in your event hub. For example, when the list of available Kubernetes versions for your clusters changes, you'll see a `Microsoft.ContainerService.NewKubernetesVersionAvailable` event. For more information on the events AKS emits, see [Azure Kubernetes Service (AKS) as an Event Grid source][aks-events].

## Delete the cluster and subscriptions
+### [Azure CLI](#tab/azure-cli)
+
Use the [az group delete][az-group-delete] command to remove the resource group, the AKS cluster, namespace, and event hub, and all related resources.

```azurecli-interactive
az group delete --name MyResourceGroup --yes --no-wait
```
+### [Azure PowerShell](#tab/azure-powershell)
+
+Use the [Remove-AzResourceGroup][remove-azresourcegroup] cmdlet to remove the resource group, the AKS cluster, namespace, and event hub, and all related resources.
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name MyResourceGroup
+```
+++

> [!NOTE]
> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete].
>
To learn more about AKS, and walk through a complete code to deployment example,
> [!div class="nextstepaction"]
> [AKS tutorial][aks-tutorial]
+[azure-cli-install]: /cli/azure/install-azure-cli
+[azure-powershell-install]: /powershell/azure/install-az-ps
[aks-events]: ../event-grid/event-schema-aks.md
[aks-tutorial]: ./tutorial-kubernetes-prepare-app.md
[az-aks-create]: /cli/azure/aks#az_aks_create
+[new-azakscluster]: /powershell/module/az.aks/new-azakscluster
[az-eventhubs-namespace-create]: /cli/azure/eventhubs/namespace#az-eventhubs-namespace-create
+[new-azeventhubnamespace]: /powershell/module/az.eventhub/new-azeventhubnamespace
[az-eventhubs-eventhub-create]: /cli/azure/eventhubs/eventhub#az-eventhubs-eventhub-create
+[new-azeventhub]: /powershell/module/az.eventhub/new-azeventhub
[az-eventgrid-event-subscription-create]: /cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-create
+[new-azeventgridsubscription]: /powershell/module/az.eventgrid/new-azeventgridsubscription
[az-feature-register]: /cli/azure/feature#az_feature_register
+[register-azproviderpreviewfeature]: /powershell/module/az.resources/register-azproviderpreviewfeature
[az-feature-list]: /cli/azure/feature#az_feature_list
+[get-azproviderpreviewfeature]: /powershell/module/az.resources/get-azproviderpreviewfeature
[az-provider-register]: /cli/azure/provider#az_provider_register
+[register-azresourceprovider]: /powershell/module/az.resources/register-azresourceprovider
[az-group-delete]: /cli/azure/group#az_group_delete
[sp-delete]: kubernetes-service-principal.md#other-considerations
+[remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup
aks Quickstart Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-helm.md
In this quickstart, you'll use Helm to package and run an application on AKS. Fo
## Prerequisites

* An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
-* [Azure CLI installed](/cli/azure/install-azure-cli).
+* [Azure CLI][azure-cli-install] or [Azure PowerShell][azure-powershell-install] installed.
* [Helm v3 installed][helm-install].

## Create an Azure Container Registry

You'll need to store your container images in an Azure Container Registry (ACR) to run your application in your AKS cluster using Helm. Provide your own registry name unique within Azure and containing 5-50 alphanumeric characters. The *Basic* SKU is a cost-optimized entry point for development purposes that provides a balance of storage and throughput.
+### [Azure CLI](#tab/azure-cli)
+
The below example uses [az acr create][az-acr-create] to create an ACR named *MyHelmACR* in *MyResourceGroup* with the *Basic* SKU.
-```azurecli
+```azurecli-interactive
az group create --name MyResourceGroup --location eastus
az acr create --resource-group MyResourceGroup --name MyHelmACR --sku Basic
```
Output will be similar to the following example. Take note of your *loginServer*
}
```
+### [Azure PowerShell](#tab/azure-powershell)
+
+The below example uses the [New-AzContainerRegistry][new-azcontainerregistry] cmdlet to create an ACR named *MyHelmACR* in *MyResourceGroup* with the *Basic* SKU.
+
+```azurepowershell-interactive
+New-AzResourceGroup -Name MyResourceGroup -Location eastus
+New-AzContainerRegistry -ResourceGroupName MyResourceGroup -Name MyHelmACR -Sku Basic
+```
+
+Output will be similar to the following example. Take note of your *LoginServer* value for your ACR since you'll use it in a later step. In the below example, *myhelmacr.azurecr.io* is the *LoginServer* for *MyHelmACR*.
+
+```output
+Registry Name Sku   LoginServer          CreationDate         ProvisioningState AdminUserEnabled StorageAccountName
+------------- ---   -----------          ------------         ----------------- ---------------- ------------------
+MyHelmACR     Basic myhelmacr.azurecr.io 5/30/2022 9:16:14 PM Succeeded         False
+```
+++

## Create an AKS cluster

Your new AKS cluster needs access to your ACR to pull the container images and run them. Use the following command to:

* Create an AKS cluster called *MyAKS* and attach *MyHelmACR*.
* Grant the *MyAKS* cluster access to your *MyHelmACR* ACR.
+### [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az aks create --resource-group MyResourceGroup --name MyAKS --location eastus --attach-acr MyHelmACR --generate-ssh-keys
```
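For context, `--attach-acr` works by granting the cluster's kubelet identity the `AcrPull` role on the registry. A rough sketch of the equivalent manual step — both IDs below are illustrative placeholders, not real resources, and the command is printed rather than executed:

```shell
# --attach-acr is roughly shorthand for a role assignment like this one.
# Placeholder values for illustration only.
KUBELET_OBJECT_ID="00000000-0000-0000-0000-000000000000"
ACR_ID="/subscriptions/SUB/resourceGroups/MyResourceGroup/providers/Microsoft.ContainerRegistry/registries/MyHelmACR"

# Print (rather than run) the equivalent role-assignment command:
echo az role assignment create --assignee "$KUBELET_OBJECT_ID" --role AcrPull --scope "$ACR_ID"
```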
+### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+New-AzAksCluster -ResourceGroupName MyResourceGroup -Name MyAKS -Location eastus -AcrNameToAttach MyHelmACR -GenerateSshKey
+```
+++

## Connect to your AKS cluster

To connect a Kubernetes cluster locally, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell.
-1. Install `kubectl` locally using the `az aks install-cli` command:
+### [Azure CLI](#tab/azure-cli)
+
+1. Install `kubectl` locally using the [az aks install-cli][az-aks-install-cli] command:
    ```azurecli
    az aks install-cli
    ```
-2. Configure `kubectl` to connect to your Kubernetes cluster using the `az aks get-credentials` command. The following command example gets credentials for the AKS cluster named *MyAKS* in the *MyResourceGroup*:
+2. Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. The following command example gets credentials for the AKS cluster named *MyAKS* in the *MyResourceGroup*:
- ```azurecli
+ ```azurecli-interactive
    az aks get-credentials --resource-group MyResourceGroup --name MyAKS
    ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+1. Install `kubectl` locally using the [Install-AzAksKubectl][install-azakskubectl] cmdlet:
+
+ ```azurepowershell
+ Install-AzAksKubectl
+ ```
+
+2. Configure `kubectl` to connect to your Kubernetes cluster using the [Import-AzAksCredential][import-azakscredential] cmdlet. The following command example gets credentials for the AKS cluster named *MyAKS* in the *MyResourceGroup*:
+
+ ```azurepowershell-interactive
+ Import-AzAksCredential -ResourceGroupName MyResourceGroup -Name MyAKS
+ ```
+++

## Download the sample application

This quickstart uses the [Azure Vote application][azure-vote-app]. Clone the application from GitHub and navigate to the `azure-vote` directory.
cd azure-voting-app-redis/azure-vote/
Using the preceding Dockerfile, run the [az acr build][az-acr-build] command to build and push an image to the registry. The `.` at the end of the command provides the location of the source code directory path (in this case, the current directory). The `--file` parameter takes in the path of the Dockerfile relative to this source code directory path.
-```azurecli
+```azurecli-interactive
az acr build --image azure-vote-front:v1 --registry MyHelmACR --file Dockerfile .
```
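The chart you deploy later needs the fully qualified image reference, which is simply the ACR login server joined to the repository and tag passed to `az acr build`. A small sketch (the login server value is the example one from earlier, not a live registry):

```shell
# Build the fully qualified image reference that Helm chart values would need.
LOGIN_SERVER="myhelmacr.azurecr.io"   # your ACR loginServer from the earlier step
IMAGE_REPO="azure-vote-front"
IMAGE_TAG="v1"

IMAGE_REF="$LOGIN_SERVER/$IMAGE_REPO:$IMAGE_TAG"
echo "$IMAGE_REF"   # myhelmacr.azurecr.io/azure-vote-front:v1
```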
Navigate to your application's load balancer in a browser using the `<EXTERNAL-I
## Delete the cluster
+### [Azure CLI](#tab/azure-cli)
+
Use the [az group delete][az-group-delete] command to remove the resource group, the AKS cluster, the container registry, the container images stored in the ACR, and all related resources.

```azurecli-interactive
az group delete --name MyResourceGroup --yes --no-wait
```
+### [Azure PowerShell](#tab/azure-powershell)
+
+Use the [Remove-AzResourceGroup][remove-azresourcegroup] cmdlet to remove the resource group, the AKS cluster, the container registry, the container images stored in the ACR, and all related resources.
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name MyResourceGroup
+```
+++

> [!NOTE]
> If the AKS cluster was created with system-assigned managed identity (default identity option used in this quickstart), the identity is managed by the platform and does not require removal.
>
For more information about using Helm, see the Helm documentation.
> [!div class="nextstepaction"]
> [Helm documentation][helm-documentation]
+[azure-cli-install]: /cli/azure/install-azure-cli
+[azure-powershell-install]: /powershell/azure/install-az-ps
[az-acr-create]: /cli/azure/acr#az_acr_create
+[new-azcontainerregistry]: /powershell/module/az.containerregistry/new-azcontainerregistry
[az-acr-build]: /cli/azure/acr#az_acr_build
[az-group-delete]: /cli/azure/group#az_group_delete
-[az aks get-credentials]: /cli/azure/aks#az_aks_get_credentials
-[az aks install-cli]: /cli/azure/aks#az_aks_install_cli
+[remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup
+[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
+[import-azakscredential]: /powershell/module/az.aks/import-azakscredential
+[az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli
+[install-azakskubectl]: /powershell/module/az.aks/install-azakskubectl
[azure-vote-app]: https://github.com/Azure-Samples/azure-voting-app-redis.git
[kubectl]: https://kubernetes.io/docs/user-guide/kubectl/
[helm]: https://helm.sh/
aks Resize Node Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/resize-node-pool.md
This lack of persistence also applies to the resize operation, thus, resizing AK
Suppose you want to resize an existing node pool, called `nodepool1`, from SKU size Standard_DS2_v2 to Standard_DS3_v2. To accomplish this task, you'll need to create a new node pool using Standard_DS3_v2, move workloads from `nodepool1` to the new node pool, and remove `nodepool1`. In this example, we'll call this new node pool `mynodepool`.

```bash
kubectl get nodes
kube-system metrics-server-774f99dbf4-h52hn 1/1 Running 1
## Create a new node pool with the desired SKU
+### [Azure CLI](#tab/azure-cli)
Use the [az aks nodepool add][az-aks-nodepool-add] command to create a new node pool called `mynodepool` with three nodes using the `Standard_DS3_v2` VM SKU:

```azurecli-interactive
az aks nodepool add \
```

> [!NOTE]
-> Every AKS cluster must contain at least one system node pool with at least one node. In the below example, we are using a `--mode` of `System`, as the cluster is assumed to have only one node pool, necessitating a `System` node pool to replace it. A node pool's mode can be [updated at any time][update-node-pool-mode].
+> Every AKS cluster must contain at least one system node pool with at least one node. In the example above, we are using a `--mode` of `System`, as the cluster is assumed to have only one node pool, necessitating a `System` node pool to replace it. A node pool's mode can be [updated at any time][update-node-pool-mode].
When resizing, be sure to consider other requirements and configure your node pool accordingly. You may need to modify the above command. For a full list of the configuration options, see the [az aks nodepool add][az-aks-nodepool-add] reference page.

After a few minutes, the new node pool has been created:

```bash
kubectl get nodes
aks-nodepool1-31721111-vmss000001 Ready agent 10d v1.21.9
aks-nodepool1-31721111-vmss000002 Ready agent 10d v1.21.9
```
+### [Azure PowerShell](#tab/azure-powershell)
+
+Use the [New-AzAksNodePool][new-azaksnodepool] cmdlet to create a new node pool called `mynodepool` with three nodes using the `Standard_DS3_v2` VM SKU:
+
+```azurepowershell-interactive
+$params = @{
+ ResourceGroupName = 'myResourceGroup'
+ ClusterName = 'myAKSCluster'
+ Name = 'mynodepool'
+ Count = 3
+ VMSize = 'Standard_DS3_v2'
+}
+New-AzAksNodePool @params
+```
+
+After a few minutes, the new node pool has been created:
++
+```bash
+kubectl get nodes
+
+NAME STATUS ROLES AGE VERSION
+aks-mynodepool-20823458-vmss000000 Ready agent 23m v1.21.9
+aks-mynodepool-20823458-vmss000001 Ready agent 23m v1.21.9
+aks-mynodepool-20823458-vmss000002 Ready agent 23m v1.21.9
+aks-nodepool1-31721111-vmss000000 Ready agent 10d v1.21.9
+aks-nodepool1-31721111-vmss000001 Ready agent 10d v1.21.9
+aks-nodepool1-31721111-vmss000002 Ready agent 10d v1.21.9
+```
+
+> [!NOTE]
+> Every AKS cluster must contain at least one system node pool with at least one node. In the example below, we are updating a node pool's mode to `System`, as the cluster is assumed to have only one node pool, necessitating a `System` node pool to replace it. A node pool's mode can be [updated at any time][update-node-pool-mode].
+
+```azurepowershell-interactive
+$myAKSCluster = Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster
+($myAKSCluster.AgentPoolProfiles | Where-Object Name -eq 'mynodepool').Mode = 'System'
+$myAKSCluster | Set-AzAksCluster
+```
+
+When resizing, be sure to consider other requirements and configure your node pool accordingly. You may need to modify the above command. For a full list of the configuration options, see the [New-AzAksNodePool][new-azaksnodepool] reference page.
+++

## Cordon the existing nodes

Cordoning marks specified nodes as unschedulable and prevents any more pods from being added to the nodes.
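To cordon a whole pool at once, you can generate one `kubectl cordon` per node. A sketch that only prints the commands — the node names below are the illustrative ones from this article, and nothing is executed against a cluster; in practice you'd feed in the output of `kubectl get nodes`:

```shell
# Print a kubectl cordon command for each node in the old pool.
NODES="aks-nodepool1-31721111-vmss000000
aks-nodepool1-31721111-vmss000001
aks-nodepool1-31721111-vmss000002"

for node in $NODES; do
  echo "kubectl cordon $node"
done
```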
By default, your cluster has AKS_managed pod disruption budgets (such as `coredn
## Remove the existing node pool
-To delete the existing node pool, use the Azure portal or the [az aks delete][az-aks-delete] command:
+### [Azure CLI](#tab/azure-cli)
+
+To delete the existing node pool, use the Azure portal or the [az aks nodepool delete][az-aks-nodepool-delete] command:
> [!IMPORTANT]
> When you delete a node pool, AKS doesn't perform cordon and drain. To minimize the disruption of rescheduling pods currently running on the node pool you are going to delete, perform a cordon and drain on all nodes in the node pool before deleting.
az aks nodepool delete \
    --name nodepool1
```
+### [Azure PowerShell](#tab/azure-powershell)
+
+To delete the existing node pool, use the Azure portal or the [Remove-AzAksNodePool][remove-azaksnodepool] cmdlet:
+
+> [!IMPORTANT]
+> When you delete a node pool, AKS doesn't perform cordon and drain. To minimize the disruption of rescheduling pods currently running on the node pool you are going to delete, perform a cordon and drain on all nodes in the node pool before deleting.
+
+```azurepowershell-interactive
+$params = @{
+ ResourceGroupName = 'myResourceGroup'
+ ClusterName = 'myAKSCluster'
+ Name = 'nodepool1'
+ Force = $true
+}
+Remove-AzAksNodePool @params
+```
+++

After completion, the final result is the AKS cluster having a single, new node pool with the new, desired SKU size and all the applications and pods properly running:

```bash
kubectl get nodes
After resizing a node pool by cordoning and draining, learn more about [using mu
<!-- LINKS -->
[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az_aks_nodepool_add
-[az-aks-delete]: /cli/azure/aks#az_aks_delete
+[new-azaksnodepool]: /powershell/module/az.aks/new-azaksnodepool
+[az-aks-nodepool-delete]: /cli/azure/aks/nodepool#az_aks_nodepool_delete
+[remove-azaksnodepool]: /powershell/module/az.aks/remove-azaksnodepool
[aks-support-policies]: support-policies.md#user-customization-of-agent-nodes
[update-node-pool-mode]: use-system-pools.md#update-existing-cluster-system-and-user-node-pools
[pod-disruption-budget]: operator-best-practices-scheduler.md#plan-for-availability-using-pod-disruption-budgets
aks Scale Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/scale-cluster.md
If the resource needs of your applications change, you can manually scale an AKS
## Scale the cluster nodes
+### [Azure CLI](#tab/azure-cli)
+
First, get the *name* of your node pool using the [az aks show][az-aks-show] command. The following example gets the node pool name for the cluster named *myAKSCluster* in the *myResourceGroup* resource group:

```azurecli-interactive
The following example output shows the cluster has successfully scaled to one no
}
```
+### [Azure PowerShell](#tab/azure-powershell)
+
+First, get the *name* of your node pool using the [Get-AzAksCluster][get-azakscluster] command. The following example gets the node pool name for the cluster named *myAKSCluster* in the *myResourceGroup* resource group:
+
+```azurepowershell-interactive
+Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster |
+ Select-Object -ExpandProperty AgentPoolProfiles
+```
+
+The following example output shows that the *name* is *nodepool1*:
+
+```Output
+Name : nodepool1
+Count : 1
+VmSize : Standard_D2_v2
+OsDiskSizeGB : 128
+VnetSubnetID :
+MaxPods : 30
+OsType : Linux
+MaxCount :
+MinCount :
+Mode : System
+EnableAutoScaling :
+Type : VirtualMachineScaleSets
+OrchestratorVersion : 1.23.3
+ProvisioningState : Succeeded
+...
+```
+
+Use the [Set-AzAksCluster][set-azakscluster] command to scale the cluster nodes. The following example scales a cluster named *myAKSCluster* to a single node. Provide your own `-NodeName` from the previous command, such as *nodepool1*:
+
+```azurepowershell-interactive
+Set-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 1 -NodeName <your node pool name>
+```
+
+The following example output shows the cluster has successfully scaled to one node, as shown in the *AgentPoolProfiles* property:
+
+```Output
+Name : nodepool1
+Count : 1
+VmSize : Standard_D2_v2
+OsDiskSizeGB : 128
+VnetSubnetID :
+MaxPods : 30
+OsType : Linux
+MaxCount :
+MinCount :
+Mode : System
+EnableAutoScaling :
+Type : VirtualMachineScaleSets
+OrchestratorVersion : 1.23.3
+ProvisioningState : Succeeded
+...
+```
++

## Scale `User` node pools to 0

Unlike `System` node pools that always require running nodes, `User` node pools allow you to scale to 0. To learn more about the differences between system and user node pools, see [System and user node pools](use-system-pools.md).
-To scale a user pool to 0, you can use the [az aks nodepool scale][az-aks-nodepool-scale] in alternative to the above `az aks scale` command, and set 0 as your node count.
+### [Azure CLI](#tab/azure-cli)
+To scale a user pool to 0, you can use the [az aks nodepool scale][az-aks-nodepool-scale] command as an alternative to the above `az aks scale` command, and set 0 as your node count.
```azurecli-interactive
az aks nodepool scale --name <your node pool name> --cluster-name myAKSCluster --resource-group myResourceGroup --node-count 0
az aks nodepool scale --name <your node pool name> --cluster-name myAKSCluster -
You can also autoscale `User` node pools to 0 nodes, by setting the `--min-count` parameter of the [Cluster Autoscaler](cluster-autoscaler.md) to 0.
+### [Azure PowerShell](#tab/azure-powershell)
+
+To scale a user pool to 0, you can use the [Update-AzAksNodePool][update-azaksnodepool] cmdlet as an alternative to the above `Set-AzAksCluster` command, and set 0 as your node count.
+
+```azurepowershell-interactive
+Update-AzAksNodePool -Name <your node pool name> -ClusterName myAKSCluster -ResourceGroupName myResourceGroup -NodeCount 0
+```
+
+You can also autoscale `User` node pools to 0 nodes, by setting the `-NodeMinCount` parameter of the [Cluster Autoscaler](cluster-autoscaler.md) to 0.
+++

## Next steps

In this article, you manually scaled an AKS cluster to increase or decrease the number of nodes. You can also use the [cluster autoscaler][cluster-autoscaler] to automatically scale your cluster.
In this article, you manually scaled an AKS cluster to increase or decrease the
<!-- LINKS - internal -->
[aks-tutorial]: ./tutorial-kubernetes-prepare-app.md
[az-aks-show]: /cli/azure/aks#az_aks_show
+[get-azakscluster]: /powershell/module/az.aks/get-azakscluster
[az-aks-scale]: /cli/azure/aks#az_aks_scale
+[set-azakscluster]: /powershell/module/az.aks/set-azakscluster
[cluster-autoscaler]: cluster-autoscaler.md
-[az-aks-nodepool-scale]: /cli/azure/aks/nodepool#az_aks_nodepool_scale
+[az-aks-nodepool-scale]: /cli/azure/aks/nodepool#az_aks_nodepool_scale
+[update-azaksnodepool]: /powershell/module/az.aks/update-azaksnodepool
aks Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Kubernetes Service (AKS)
description: Lists Azure Policy Regulatory Compliance controls available for Azure Kubernetes Service (AKS). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
Previously updated : 05/10/2022
Last updated : 06/16/2022
aks Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-cluster.md
For AKS clusters that use multiple node pools or Windows Server nodes, see [Upgr
## Before you begin
-This article requires Azure CLI version 2.0.65 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+### [Azure CLI](#tab/azure-cli)
+
+This article requires that you are running the Azure CLI version 2.0.65 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+This tutorial requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
++

> [!WARNING]
> An AKS cluster upgrade triggers a cordon and drain of your nodes. If you have a low compute quota available, the upgrade may fail. For more information, see [increase quotas](../azure-portal/supportability/regional-quota-requests.md)

## Check for available AKS cluster upgrades
+### [Azure CLI](#tab/azure-cli)
+
To check which Kubernetes releases are available for your cluster, use the [az aks get-upgrades][az-aks-get-upgrades] command. The following example checks for available upgrades to *myAKSCluster* in *myResourceGroup*:

```azurecli-interactive
ERROR: Table output unavailable. Use the --query option to specify an appropriat
> [!IMPORTANT]
> If no upgrade is available, create a new cluster with a supported version of Kubernetes and migrate your workloads from the existing cluster to the new cluster. Attempting to upgrade a cluster to a newer Kubernetes version when `az aks get-upgrades` shows no upgrades available is not supported.
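AKS only allows sequential minor-version upgrades (for example *1.14.x* to *1.15.x*, but not *1.14.x* straight to *1.16.x*). A small helper you could use locally to sanity-check a planned upgrade path — the function names are ours for illustration, not part of the CLI, and major-version changes aren't considered:

```shell
# Succeeds when the target is at most one Kubernetes minor version ahead of the source.
minor_of() { echo "$1" | cut -d. -f2; }

upgrade_allowed() {
  from_minor=$(minor_of "$1")
  to_minor=$(minor_of "$2")
  [ $((to_minor - from_minor)) -ge 0 ] && [ $((to_minor - from_minor)) -le 1 ]
}

upgrade_allowed 1.18.10 1.19.1 && echo "allowed"        # sequential step
upgrade_allowed 1.18.10 1.20.0 || echo "skips a minor"  # 1.19 would be skipped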
+### [Azure PowerShell](#tab/azure-powershell)
+
+To check which Kubernetes releases are available for your cluster, use the [Get-AzAksUpgradeProfile][get-azaksupgradeprofile] command. The following example checks for available upgrades to *myAKSCluster* in *myResourceGroup*:
+
+```azurepowershell-interactive
+Get-AzAksUpgradeProfile -ResourceGroupName myResourceGroup -ClusterName myAKSCluster |
+ Select-Object -Property Name, ControlPlaneProfileKubernetesVersion -ExpandProperty ControlPlaneProfileUpgrade |
+ Format-Table -Property *
+```
+
+> [!NOTE]
+> When you upgrade a supported AKS cluster, Kubernetes minor versions can't be skipped. All upgrades must be performed sequentially by minor version number. For example, upgrades between *1.14.x* -> *1.15.x* or *1.15.x* -> *1.16.x* are allowed, however *1.14.x* -> *1.16.x* is not allowed.
+>
+> Skipping multiple versions can only be done when upgrading from an _unsupported version_ back to a _supported version_. For example, an upgrade from an unsupported *1.10.x* -> a supported *1.15.x* can be completed if available.
+
+The following example output shows that the cluster can be upgraded to versions *1.19.1* and *1.19.3*:
+
+```Output
+Name    ControlPlaneProfileKubernetesVersion IsPreview KubernetesVersion
+----    ------------------------------------ --------- -----------------
+default 1.18.10                                        1.19.1
+default 1.18.10                                        1.19.3
+```
+
+> [!IMPORTANT]
+> If no upgrade is available, create a new cluster with a supported version of Kubernetes and migrate your workloads from the existing cluster to the new cluster. Attempting to upgrade a cluster to a newer Kubernetes version when `Get-AzAksUpgradeProfile` shows no upgrades available is not supported.
+++

## Customize node surge upgrade

> [!Important]
az aks nodepool update -n mynodepool -g MyResourceGroup --cluster-name MyManaged
## Upgrade an AKS cluster
+### [Azure CLI](#tab/azure-cli)
+
With a list of available versions for your AKS cluster, use the [az aks upgrade][az-aks-upgrade] command to upgrade. During the upgrade process, AKS will:

- add a new buffer node (or as many nodes as configured in [max surge](#customize-node-surge-upgrade)) to the cluster that runs the specified Kubernetes version.
- [cordon and drain][kubernetes-drain] one of the old nodes to minimize disruption to running applications (if you're using max surge it will [cordon and drain][kubernetes-drain] as many nodes at the same time as the number of buffer nodes specified).
To confirm that the upgrade was successful, use the [az aks show][az-aks-show] c
az aks show --resource-group myResourceGroup --name myAKSCluster --output table
```
-The following example output shows that the cluster now runs *1.18.10*:
+The following example output shows that the cluster now runs *1.19.1*:
```json
Name Location ResourceGroup KubernetesVersion ProvisioningState Fqdn
---- -------- ------------- ----------------- ----------------- ----
-myAKSCluster eastus myResourceGroup 1.18.10 Succeeded myakscluster-dns-379cbbb9.hcp.eastus.azmk8s.io
+myAKSCluster eastus myResourceGroup 1.19.1 Succeeded myakscluster-dns-379cbbb9.hcp.eastus.azmk8s.io
+```
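The upgrade's drain step needs room to evict at least one replica at a time; with a Pod Disruption Budget whose `minAvailable` equals the replica count, every eviction would violate the budget and the drain fails. A quick arithmetic sketch of that rule — the values are illustrative, not read from a cluster:

```shell
# A drain can evict a pod only if replicas - minAvailable >= 1.
REPLICAS=3
MIN_AVAILABLE=3

if [ $((REPLICAS - MIN_AVAILABLE)) -lt 1 ]; then
  echo "drain blocked: PDB leaves no evictable replicas"
else
  echo "drain can proceed"
fi
```

Lowering `minAvailable` to 2 (or raising replicas to 4) is the kind of change that unblocks the drain.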
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+With a list of available versions for your AKS cluster, use the [Set-AzAksCluster][set-azakscluster] cmdlet to upgrade. During the upgrade process, AKS will:
+- add a new buffer node (or as many nodes as configured in [max surge](#customize-node-surge-upgrade)) to the cluster that runs the specified Kubernetes version.
+- [cordon and drain][kubernetes-drain] one of the old nodes to minimize disruption to running applications (if you're using max surge it will [cordon and drain][kubernetes-drain] as many nodes at the same time as the number of buffer nodes specified).
+- When the old node is fully drained, it will be reimaged to receive the new version and it will become the buffer node for the following node to be upgraded.
+- This process repeats until all nodes in the cluster have been upgraded.
+- At the end of the process, the last buffer node will be deleted, maintaining the existing agent node count and zone balance.
++
+```azurepowershell-interactive
+Set-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -KubernetesVersion <KUBERNETES_VERSION>
```
+It takes a few minutes to upgrade the cluster, depending on how many nodes you have.
+
+> [!IMPORTANT]
+> Ensure that any `PodDisruptionBudgets` (PDBs) allow for at least 1 pod replica to be moved at a time otherwise the drain/evict operation will fail.
+> If the drain operation fails, the upgrade operation will fail by design to ensure that the applications are not disrupted. Please correct what caused the operation to stop (incorrect PDBs, lack of quota, and so on) and re-try the operation.
+
+To confirm that the upgrade was successful, use the [Get-AzAksCluster][get-azakscluster] command:
+
+```azurepowershell-interactive
+Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster |
+ Format-Table -Property Name, Location, KubernetesVersion, ProvisioningState, Fqdn
+```
+
+The following example output shows that the cluster now runs *1.19.1*:
+
+```Output
+Name         Location KubernetesVersion ProvisioningState Fqdn
+----         -------- ----------------- ----------------- ----
+myAKSCluster eastus   1.19.1            Succeeded         myakscluster-dns-379cbbb9.hcp.eastus.azmk8s.io
+```
+++

## View the upgrade events

When you upgrade your cluster, the following Kubernetes events may occur on each node:
AKS uses best-effort zone balancing in node groups. During an Upgrade surge, zon
If you have PVCs backed by Azure LRS Disks, they'll be bound to a particular zone and may fail to recover immediately if the surge node doesn't match the zone of the PVC. This could cause downtime on your application when the Upgrade operation continues to drain nodes but the PVs are bound to a zone. To handle this case and maintain high availability, configure a [Pod Disruption Budget](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) on your application. This allows Kubernetes to respect your availability requirements during Upgrade's drain operation.

## Next steps

This article showed you how to upgrade an existing AKS cluster. To learn more about deploying and managing AKS clusters, see the set of tutorials.
This article showed you how to upgrade an existing AKS cluster. To learn more ab
<!-- LINKS - internal -->
[aks-tutorial-prepare-app]: ./tutorial-kubernetes-prepare-app.md
[azure-cli-install]: /cli/azure/install-azure-cli
+[azure-powershell-install]: /powershell/azure/install-az-ps
[az-aks-get-upgrades]: /cli/azure/aks#az_aks_get_upgrades
+[get-azaksupgradeprofile]: /powershell/module/az.aks/get-azaksupgradeprofile
[az-aks-upgrade]: /cli/azure/aks#az_aks_upgrade
+[set-azakscluster]: /powershell/module/az.aks/set-azakscluster
[az-aks-show]: /cli/azure/aks#az_aks_show
+[get-azakscluster]: /powershell/module/az.aks/get-azakscluster
[az-extension-add]: /cli/azure/extension#az_extension_add
[az-extension-update]: /cli/azure/extension#az_extension_update
[az-feature-list]: /cli/azure/feature#az_feature_list
aks Use System Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-system-pools.md
In Azure Kubernetes Service (AKS), nodes of the same configuration are grouped t
## Before you begin
-* You need the Azure CLI version 2.3.1 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+### [Azure CLI](#tab/azure-cli)
+
+You need the Azure CLI version 2.3.1 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+You need the Azure PowerShell version 7.5.0 or later installed and configured. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][install-azure-powershell].
++

## Limitations
You can do the following operations with node pools:
## Create a new AKS cluster with a system node pool
+### [Azure CLI](#tab/azure-cli)
+ When you create a new AKS cluster, you automatically create a system node pool with a single node. The initial node pool defaults to a mode of type system. When you create new node pools with `az aks nodepool add`, those node pools are user node pools unless you explicitly specify the mode parameter. The following example creates a resource group named *myResourceGroup* in the *eastus* region.
Use the [az aks create][az-aks-create] command to create an AKS cluster. The fol
az aks create -g myResourceGroup --name myAKSCluster --node-count 1 --generate-ssh-keys ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+When you create a new AKS cluster, you automatically create a system node pool with a single node. The initial node pool defaults to a mode of type system. When you create new node pools with `New-AzAksNodePool`, those node pools are user node pools. A node pool's mode can be [updated at any time][update-node-pool-mode].
+
+The following example creates a resource group named *myResourceGroup* in the *eastus* region.
+
+```azurepowershell-interactive
+New-AzResourceGroup -ResourceGroupName myResourceGroup -Location eastus
+```
+
+Use the [New-AzAksCluster][new-azakscluster] cmdlet to create an AKS cluster. The following example creates a cluster named *myAKSCluster* with one dedicated system pool containing one node. For your production workloads, ensure you are using system node pools with at least three nodes. This operation may take several minutes to complete.
+
+```azurepowershell-interactive
+# Create a new AKS cluster with a single system pool
+New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 1 -GenerateSshKey
+```
+++ ## Add a dedicated system node pool to an existing AKS cluster
+### [Azure CLI](#tab/azure-cli)
+ > [!Important] > You can't change node taints through the CLI after the node pool is created.
az aks nodepool add \
--node-taints CriticalAddonsOnly=true:NoSchedule \ --mode System ```+
+### [Azure PowerShell](#tab/azure-powershell)
+
+You can add one or more system node pools to existing AKS clusters. It's recommended to schedule your application pods on user node pools, and dedicate system node pools to only critical system pods. This prevents rogue application pods from accidentally killing system pods. Enforce this behavior with the `CriticalAddonsOnly=true:NoSchedule` [taint][aks-taints] for your system node pools.
+
+The following command adds a dedicated node pool of mode type system with a default count of three nodes.
+
+```azurepowershell-interactive
+# By default, New-AzAksNodePool creates a user node pool
+# We need to update the node pool's mode to System later
+New-AzAksNodePool -ResourceGroupName myResourceGroup -ClusterName myAKSCluster -Name systempool -Count 3
+
+# Update the node pool's mode to System and add the 'CriticalAddonsOnly=true:NoSchedule' taint
+$myAKSCluster = Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster
+$systemPool = $myAKSCluster.AgentPoolProfiles | Where-Object Name -eq 'systempool'
+$systemPool.Mode = 'System'
+$nodeTaints = [System.Collections.Generic.List[string]]::new()
+$nodeTaints.Add('CriticalAddonsOnly=true:NoSchedule')
+$systemPool.NodeTaints = $nodeTaints
+$myAKSCluster | Set-AzAksCluster
+```
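Whichever tab you follow, only pods that tolerate the `CriticalAddonsOnly=true:NoSchedule` taint can be scheduled onto the dedicated system pool. As an illustrative sketch (not part of the article's steps; it follows standard Kubernetes toleration syntax), a critical pod would declare a matching toleration in its spec:

```yaml
# Hypothetical pod spec fragment: only pods carrying this toleration can be
# scheduled onto the tainted system node pool; all other pods are repelled.
tolerations:
- key: "CriticalAddonsOnly"
  operator: "Equal"
  value: "true"
  effect: "NoSchedule"
```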
+++ ## Show details for your node pool
-You can check the details of your node pool with the following command.
+You can check the details of your node pool with the following command.
+
+### [Azure CLI](#tab/azure-cli)
```azurecli-interactive az aks nodepool show -g myResourceGroup --cluster-name myAKSCluster -n systempool
A mode of type **System** is defined for system node pools, and a mode of type *
{ "agentPoolType": "VirtualMachineScaleSets", "availabilityZones": null,
- "count": 1,
+ "count": 3,
"enableAutoScaling": null, "enableNodePublicIp": false, "id": "/subscriptions/yourSubscriptionId/resourcegroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/myAKSCluster/agentPools/systempool",
A mode of type **System** is defined for system node pools, and a mode of type *
"orchestratorVersion": "1.16.10", "osDiskSizeGb": 128, "osType": "Linux",
- "provisioningState": "Failed",
+ "provisioningState": "Succeeded",
"proximityPlacementGroupId": null, "resourceGroup": "myResourceGroup", "scaleSetEvictionPolicy": null,
A mode of type **System** is defined for system node pools, and a mode of type *
} ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+Get-AzAksNodePool -ResourceGroupName myResourceGroup -ClusterName myAKSCluster -Name systempool
+```
+
+A mode of type **System** is defined for system node pools, and a mode of type **User** is defined for user node pools. For a system pool, verify the taint is set to `CriticalAddonsOnly=true:NoSchedule`, which will prevent application pods from being scheduled on this node pool.
+
+```Output
+Count : 3
+VmSize : Standard_D2_v2
+OsDiskSizeGB : 128
+VnetSubnetID :
+MaxPods : 30
+OsType : Linux
+MaxCount :
+MinCount :
+EnableAutoScaling :
+AgentPoolType : VirtualMachineScaleSets
+OrchestratorVersion : 1.23.3
+ProvisioningState : Succeeded
+AvailabilityZones : {}
+EnableNodePublicIP :
+ScaleSetPriority :
+ScaleSetEvictionPolicy :
+NodeTaints : {CriticalAddonsOnly=true:NoSchedule}
+Id : /subscriptions/yourSubscriptionId/resourcegroups/myResourceGroup/providers
+ /Microsoft.ContainerService/managedClusters/myAKSCluster/agentPools/systempool
+Name : systempool
+Type : Microsoft.ContainerService/managedClusters/agentPools
+```
+++ ## Update existing cluster system and user node pools
+### [Azure CLI](#tab/azure-cli)
+ > [!NOTE] > An API version of 2020-03-01 or greater must be used to set a system node pool mode. Clusters created on API versions older than 2020-03-01 contain only user node pools as a result. To receive system node pool functionality and benefits on older clusters, update the mode of existing node pools with the following commands on the latest Azure CLI version.
This command changes a user node pool to a system node pool.
az aks nodepool update -g myResourceGroup --cluster-name myAKSCluster -n mynodepool --mode system ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+> [!NOTE]
+> An API version of 2020-03-01 or greater must be used to set a system node pool mode. Clusters created on API versions older than 2020-03-01 contain only user node pools as a result. To receive system node pool functionality and benefits on older clusters, update the mode of existing node pools with the following commands on the latest Azure PowerShell version.
+
+You can change modes for both system and user node pools. You can change a system node pool to a user pool only if another system node pool already exists on the AKS cluster.
+
+This command changes a system node pool to a user node pool.
+
+```azurepowershell-interactive
+$myAKSCluster = Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster
+($myAKSCluster.AgentPoolProfiles | Where-Object Name -eq 'mynodepool').Mode = 'User'
+$myAKSCluster | Set-AzAksCluster
+```
+
+This command changes a user node pool to a system node pool.
+
+```azurepowershell-interactive
+$myAKSCluster = Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster
+($myAKSCluster.AgentPoolProfiles | Where-Object Name -eq 'mynodepool').Mode = 'System'
+$myAKSCluster | Set-AzAksCluster
+```
+++ ## Delete a system node pool > [!Note]
az aks nodepool update -g myResourceGroup --cluster-name myAKSCluster -n mynodep
You must have at least two system node pools on your AKS cluster before you can delete one of them.
+### [Azure CLI](#tab/azure-cli)
+ ```azurecli-interactive az aks nodepool delete -g myResourceGroup --cluster-name myAKSCluster -n mynodepool ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+Remove-AzAksNodePool -ResourceGroupName myResourceGroup -ClusterName myAKSCluster -Name mynodepool
+```
+++ ## Clean up resources
+### [Azure CLI](#tab/azure-cli)
+ To delete the cluster, use the [az group delete][az-group-delete] command to delete the AKS resource group: ```azurecli-interactive az group delete --name myResourceGroup --yes --no-wait ```
+### [Azure PowerShell](#tab/azure-powershell)
+To delete the cluster, use the [Remove-AzResourceGroup][remove-azresourcegroup] command to delete the AKS resource group:
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name myResourceGroup
+```
++ ## Next steps
In this article, you learned how to create and manage system node pools in an AK
[aks-windows]: windows-container-cli.md [az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials [az-aks-create]: /cli/azure/aks#az_aks_create
+[new-azakscluster]: /powershell/module/az.aks/new-azakscluster
[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az_aks_nodepool_add [az-aks-nodepool-list]: /cli/azure/aks/nodepool#az_aks_nodepool_list [az-aks-nodepool-update]: /cli/azure/aks/nodepool#az_aks_nodepool_update
In this article, you learned how to create and manage system node pools in an AK
[az-extension-update]: /cli/azure/extension#az_extension_update [az-group-create]: /cli/azure/group#az_group_create [az-group-delete]: /cli/azure/group#az_group_delete
+[remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup
[az-group-deployment-create]: /cli/azure/group/deployment#az_group_deployment_create [gpu-cluster]: gpu-cluster.md [install-azure-cli]: /cli/azure/install-azure-cli
+[install-azure-powershell]: /powershell/azure/install-az-ps
[operator-best-practices-advanced-scheduler]: operator-best-practices-advanced-scheduler.md [quotas-skus-regions]: quotas-skus-regions.md [supported-versions]: supported-kubernetes-versions.md
In this article, you learned how to create and manage system node pools in an AK
[vm-sizes]: ../virtual-machines/sizes.md [use-multiple-node-pools]: use-multiple-node-pools.md [maximum-pods]: configure-azure-cni.md#maximum-pods-per-node
+[update-node-pool-mode]: use-system-pools.md#update-existing-cluster-system-and-user-node-pools
api-management Api Management Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-features.md
Each API Management [pricing tier](https://aka.ms/apimpricing) offers a distinct
| Azure Monitor logs and metrics | No | Yes | Yes | Yes | Yes | | Static IP | No | Yes | Yes | Yes | Yes | | [WebSocket APIs](websocket-api.md) | No | Yes | Yes | Yes | Yes |
-| [GraphQL APIs (preview)](graphql-api.md) | Yes | Yes | Yes | Yes | Yes |
+| [GraphQL APIs](graphql-api.md) | Yes | Yes | Yes | Yes | Yes |
+| [GraphQL resolvers (preview)](graphql-schema-resolve-api.md) | Yes | Yes | Yes | Yes | Yes |
<sup>1</sup> Enables the use of Azure AD (and Azure AD B2C) as an identity provider for user sign in on the developer portal.<br/> <sup>2</sup> Including related functionality e.g. users, groups, issues, applications and email templates and notifications.<br/>
api-management Api Management Howto Integrate Internal Vnet Appgateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-integrate-internal-vnet-appgateway.md
In the first setup example, all your APIs are managed only from within your virt
* **Listener**: The listener has a front-end port, a protocol (Http or Https, these values are case sensitive), and the TLS/SSL certificate name (if configuring TLS offload). * **Rule**: The rule binds a listener to a back-end server pool. * **Custom health probe**: Application Gateway, by default, uses IP address-based probes to figure out which servers in `BackendAddressPool` are active. API Management only responds to requests with the correct host header, so the default probes fail. You define a custom health probe to help the application gateway determine that the service is alive and should forward requests.
-* **Custom domain certificates**: To access API Management from the internet, create a CNAME mapping of its host name to the Application Gateway front-end DNS name. This mapping ensures that the host name header and certificate sent to Application Gateway and forwarded to API Management are ones that API Management recognizes as valid. In this example, we'll use three certificates. They're for API Management's gateway (the back end), the developer portal, and the management endpoint.
+* **Custom domain certificates**: To access API Management from the internet, create DNS records to map its host names to the Application Gateway front-end IP address. This mapping ensures that the host name header and certificate sent to Application Gateway and forwarded to API Management are ones that API Management recognizes as valid. In this example, we'll use three certificates. They're for API Management's gateway (the back end), the developer portal, and the management endpoint.
### Expose the developer portal and management endpoint externally through Application Gateway
To create an Application Gateway resource:
Ensure that the health status of each back-end pool is Healthy. If you need to troubleshoot an unhealthy back end or a back end with unknown health status, see [Troubleshoot back-end health issues in Application Gateway](../application-gateway/application-gateway-backend-health-troubleshooting.md).
-## Create a CNAME record from the public DNS name
+## Create DNS records to access API Management endpoints from the internet
-After the gateway is created, configure the front end for communication. When you use a public IP address, Application Gateway requires a dynamically assigned DNS name, which might not be easy to use.
+After the gateway is created, configure communication to API Management from the internet. Create DNS A-records that map each of the API Management endpoint host names that you configured to the application gateway's static public IP address. In this article, example host names are `api.contoso.net`, `portal.contoso.net`, and `management.contoso.net`.
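If the `contoso.net` zone happens to be hosted in Azure DNS, one way to sketch the record creation (the resource group and zone names here are assumptions, and `<appgw-public-ip>` is a placeholder for the gateway's static public IP) is:

```azurecli-interactive
# Hypothetical example: point api.contoso.net at the application gateway's
# static public IP. Repeat for the portal and management host names.
az network dns record-set a add-record \
  --resource-group myResourceGroup \
  --zone-name contoso.net \
  --record-set-name api \
  --ipv4-address <appgw-public-ip>
```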
-Use the application gateway's DNS name to create a CNAME record that points the API Management gateway host name (`api.contoso.net` in the preceding examples) to this DNS name. To configure the front-end IP CNAME record, retrieve the details of the application gateway and its associated IP/DNS name by using the `PublicIPAddress` element. Don't use A-records because the VIP might change when the gateway restarts.
-
-```powershell
-Get-AzPublicIpAddress -ResourceGroupName $resGroupName -Name "publicIP01"
-```
-
-For testing purposes, you might update the hosts file on your local machine with entries that map the application gateway's public IP address to each of the API Management endpoint host names that you configured. Examples are `api.contoso.net`, `portal.contoso.net`, and `management.contoso.net`.
+For testing purposes, you might update the hosts file on your local machine with entries that map the application gateway's public IP address to the API Management endpoint host names.
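For example, such test entries might look like the following (203.0.113.10 is a placeholder from the documentation address range, standing in for your application gateway's public IP):

```
# Hosts file test entries
# Linux/macOS: /etc/hosts  Windows: C:\Windows\System32\drivers\etc\hosts
203.0.113.10 api.contoso.net
203.0.113.10 portal.contoso.net
203.0.113.10 management.contoso.net
```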
## Summary
API Management configured in a virtual network provides a single gateway interfa
## Next steps
-* Set up using an [Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/private-webapp-with-app-gateway-and-apim).
+* Set up using an [Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.apimanagement/api-management-create-with-internal-vnet-application-gateway).
* Learn more about Application Gateway: * [Application Gateway overview](../application-gateway/overview.md)
api-management Api Management Howto Mutual Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-mutual-certificates.md
Using key vault certificates is recommended because it helps improve API Managem
1. For steps to create a key vault, see [Quickstart: Create a key vault using the Azure portal](../key-vault/general/quick-create-portal.md). 1. Enable a system-assigned or user-assigned [managed identity](api-management-howto-use-managed-service-identity.md) in the API Management instance.
-1. Assign a [key vault access policy](../key-vault/general/assign-access-policy-portal.md) to the managed identity with permissions to get and list certificates from the vault. To add the policy:
+1. Assign a [key vault access policy](../key-vault/general/assign-access-policy-portal.md) to the managed identity with permissions to get and list secrets from the vault. To add the policy:
1. In the portal, navigate to your key vault. 1. Select **Settings > Access policies > + Add Access Policy**. 1. Select **Secret permissions**, then select **Get** and **List**. 1. In **Select principal**, select the resource name of your managed identity. If you're using a system-assigned identity, the principal is the name of your API Management instance. 1. Create or import a certificate to the key vault. See [Quickstart: Set and retrieve a certificate from Azure Key Vault using the Azure portal](../key-vault/certificates/quick-create-portal.md).
+1. When adding a key vault certificate to your API Management instance, you must have permissions to list secrets from the key vault.
[!INCLUDE [api-management-key-vault-network](../../includes/api-management-key-vault-network.md)]
api-management Api Management Howto Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-properties.md
Using key vault secrets is recommended because it helps improve API Management s
1. Select **Secret permissions**, then select **Get** and **List**. 1. In **Select principal**, select the resource name of your managed identity. If you're using a system-assigned identity, the principal is the name of your API Management instance. 1. Create or import a secret to the key vault. See [Quickstart: Set and retrieve a secret from Azure Key Vault using the Azure portal](../key-vault/secrets/quick-create-portal.md).-
-To use the key vault secret, [add or edit a named value](#add-or-edit-a-named-value), and specify a type of **Key vault**. Select the secret from the key vault.
+1. When adding a key vault secret to your API Management instance, you must have permissions to list secrets from the key vault.
[!INCLUDE [api-management-key-vault-network](../../includes/api-management-key-vault-network.md)]
api-management Policy Fragments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-fragments.md
After creating a policy fragment, you can view and update the properties of a po
1. Review **Policy document references** for policy definitions that include the fragment. Before a fragment can be deleted, you must remove the fragment references from all policy definitions. 1. After all references are removed, select **Delete**.
+## Next steps
+ For more information about working with policies, see: + [Tutorial: Transform and protect APIs](transform-api.md)
api-management Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API Management description: Lists Azure Policy Regulatory Compliance controls available for Azure API Management. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 05/10/2022 Last updated : 06/16/2022
app-service App Service Web Restore Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-restore-snapshots.md
- Title: Restore app from a snapshot
-description: Learn how to restore your app from a snapshot. Recover from unexpected data loss in Premium tier with the automatic shadow copies.
-- Previously updated : 02/17/2022----
-# Restore an app in Azure from a snapshot
-This article shows you how to restore an app in [Azure App Service](../app-service/overview.md) from a snapshot. You can restore your app to a previous state, based on one of your app's snapshots. You do not need to enable snapshot backups; the platform automatically saves an hourly snapshot of each app's content and configuration for data recovery purposes. Hourly snapshots for the last 30 days are available. The retention period and snapshot frequency are not configurable.
-
-Restoring from snapshots is available to apps running in one of the **Standard** or **Premium** tiers. For information about scaling up your app, see [Scale up an app in Azure](manage-scale-up.md).
-
-> [!NOTE]
-> Snapshot restore is not available for:
->
-> - App Service environments (**Isolated** tier)
-> - Azure Functions in the [**Consumption**](../azure-functions/consumption-plan.md) or [**Elastic Premium**](../azure-functions/functions-premium-plan.md) pricing plans.
->
-> Snapshot restore is available in preview for Azure Functions in [dedicated (App Service)](../azure-functions/dedicated-plan.md) **Standard** or **Premium** tiers.
-
-## Snapshots vs Backups
-
-Snapshots are incremental shadow copies and offer several advantages over [standard backups](manage-backup.md):
-- No file copy errors due to file locks.
-- Higher snapshot size (maximum 30 GB).
-- Enabled by default in supported pricing tiers and no configuration required.
-- Restore to a new or existing App Service app or slot in any Azure region.
-
-## What snapshot restore includes
-
-The following table shows which content is restored when you restore a snapshot:
-
-|Settings| Restored?|
-|-|-|
-| **Windows apps**: All app content under `%HOME%` directory<br/>**Linux apps**: All app content under `/home` directory<br/>**Custom containers (Windows and Linux)**: Content in [persistent storage](configure-custom-container.md?pivots=container-linux#use-persistent-shared-storage)| Yes |
-| Content of the [run-from-ZIP package](deploy-run-package.md)| No |
-| Content from any [custom mounted Azure storage](configure-connect-to-azure-storage.md?pivots=container-windows)| No |
-
-> [!NOTE]
-> Maximum supported size for snapshot restore is 30 GB. Snapshot restore fails if your storage size is greater than 30 GB. To reduce your storage size, consider moving files such as logs, images, audio, and video to [Azure Storage](../storage/index.yml), for example.
-
-The following table shows which app configuration is restored:
-
-|Settings| Restored?|
-|-|-|
-|[Native log settings](troubleshoot-diagnostic-logs.md), including the Azure Storage account and container settings | Yes |
-|Application Insights configuration | Yes |
-|[Health check](monitor-instances-health-check.md) | Yes |
-| Network features, such as [private endpoints](networking/private-endpoint.md), [hybrid connections](app-service-hybrid-connections.md), and [virtual network integration](overview-vnet-integration.md) | No|
-|[Authentication](overview-authentication-authorization.md)| No|
-|[Managed identities](overview-managed-identity.md)| No |
-|[Custom domains](app-service-web-tutorial-custom-domain.md)| No |
-|[TLS/SSL](configure-ssl-bindings.md)| No |
-|[Scale out](../azure-monitor/autoscale/autoscale-get-started.md?toc=/azure/app-service/toc.json)| No |
-|[Diagnostics with Azure Monitor](troubleshoot-diagnostic-logs.md#send-logs-to-azure-monitor)| No |
-|[Alerts and Metrics](../azure-monitor/alerts/alerts-classic-portal.md)| No |
-|[Backup](manage-backup.md)| No |
-|Associated [deployment slots](deploy-staging-slots.md)| No |
-|Any connected database that [standard backup](manage-backup.md#what-gets-backed-up) supports| No |
-
-## Restore from a snapshot
-
-> [!NOTE]
-> App Service stops the target app or target slot while restoring a snapshot. To minimize downtime for the production app, restore the snapshot to a [deployment slot](deploy-staging-slots.md) first, then [swap](deploy-staging-slots.md#swap-two-slots) into production.
-
-# [Azure portal](#tab/portal)
-
-1. On the **Settings** page of your app in the [Azure portal](https://portal.azure.com), click **Backups** to display the **Backups** page. Then click **Restore** under the **Snapshot** section.
-
- :::image type="content" source="./media/app-service-web-restore-snapshots/1.png" alt-text="Screenshot that shows how to restore an app from a snapshot.":::
-
-2. In the **Restore** page, select the snapshot to restore.
-
- <!-- ![Screenshot that shows how to select the snapshot to restore. ](./media/app-service-web-restore-snapshots/2.png) -->
-
-3. Specify the destination for the app restore in **Restore destination**. To restore to a [deployment slot](deploy-staging-slots.md), select **Existing app**.
-
- <!-- ![Screenshot that shows how to specify the restoration destination.](./media/app-service-web-restore-snapshots/3.png) -->
-
- > [!NOTE]
- > It's recommended that you restore to a deployment slot and then perform a swap into production. If you choose **Overwrite**, all existing data in your app's current file system is erased and overwritten. Before you click **OK**, make sure that it is what you want to do.
- >
-
-4. You can choose to restore your site configuration.
-
- :::image type="content" source="./media/app-service-web-restore-snapshots/4.png" alt-text="Screenshot that shows how to restore site configuration.":::
-
-5. Click **OK**.
-
-# [Azure CLI](#tab/cli)
-
-1. List the restorable snapshots for your app and copy the timestamp of the one you want to restore.
-
- ```azurecli-interactive
- az webapp config snapshot list --name <app-name> --resource-group <group-name>
- ```
-
-2. To restore the snapshot by overwriting the app's content and configuration:
-
- ```azurecli-interactive
- az webapp config snapshot restore --name <app-name> --resource-group <group-name> --time <snapshot-timestamp>
- ```
-
- To restore the snapshot to a different app:
-
- ```azurecli-interactive
- az webapp config snapshot restore --name <target-app-name> --resource-group <target-group-name> --source-name <source-app-name> --source-resource-group <source-group-name> --time <source-snapshot-timestamp>
- ```
-
- To restore app content only and not the app configuration, use the `--restore-content-only` parameter. For more information, see [az webapp config snapshot restore](/cli/azure/webapp/config/snapshot#az-webapp-config-snapshot-restore).
-
app-service Configure Common https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-common.md
For ASP.NET and ASP.NET Core developers, setting connection strings in App Servi
For other language stacks, it's better to use [app settings](#configure-app-settings) instead, because connection strings require special formatting in the variable keys in order to access the values. > [!NOTE]
-> There is one case where you may want to use connection strings instead of app settings for non-.NET languages: certain Azure database types are backed up along with the app _only_ if you configure a connection string for the database in your App Service app. For more information, see [What gets backed up](manage-backup.md#what-gets-backed-up). If you don't need this automated backup, then use app settings.
+> There is one case where you may want to use connection strings instead of app settings for non-.NET languages: certain Azure database types are backed up along with the app _only_ if you configure a connection string for the database in your App Service app. For more information, see [Create a custom backup](manage-backup.md#create-a-custom-backup). If you don't need this automated backup, then use app settings.
At runtime, connection strings are available as environment variables, prefixed with the following connection types:
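As a minimal sketch (the connection string name `MyDb` and its value are made up for illustration), a custom-type connection string is exposed to a non-.NET app with the `CUSTOMCONNSTR_` prefix:

```shell
# Simulate how App Service exposes a "Custom"-type connection string named
# "MyDb": the app sees it as the environment variable CUSTOMCONNSTR_MyDb.
export CUSTOMCONNSTR_MyDb="Server=tcp:example.net;Database=mydb"

# At runtime, the app reads the value back using the type prefix.
echo "$CUSTOMCONNSTR_MyDb"
```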
app-service Migration Alternatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migration-alternatives.md
App Service Environment v3 uses Isolated v2 App Service plans that are priced an
## Back up and restore
-The [back up](../manage-backup.md) and [restore](../web-sites-restore.md) feature allows you to keep your app configuration, file content, and database connected to your app when migrating to your new environment. Make sure you review the [requirements and restrictions](../manage-backup.md#requirements-and-restrictions) of this feature.
+The [back up and restore](../manage-backup.md) feature allows you to keep your app configuration, file content, and database connected to your app when migrating to your new environment. Make sure you review the [details](../manage-backup.md#automatic-vs-custom-backups) of this feature.
-The step-by-step instructions in the current documentation for [back up](../manage-backup.md) and [restore](../web-sites-restore.md) should be sufficient to allow you to use this feature. When restoring, the **Storage** option lets you select any backup ZIP file from any existing Azure Storage account container in your subscription. A sample of a restore configuration is given in the following screenshot.
+The step-by-step instructions in the current documentation for [backup and restore](../manage-backup.md) should be sufficient to allow you to use this feature. When restoring, the **Storage** option lets you select any backup ZIP file from any existing Azure Storage account container in your subscription. A sample of a restore configuration is given in the following screenshot.
![back up and restore sample](./media/migration/back-up-restore-sample.png) |Benefits |Limitations | |||
-|Quick - should only take 5-10 minutes per app |Support is limited to [certain database types](../manage-backup.md#what-gets-backed-up) |
+|Quick - should only take 5-10 minutes per app |Support is limited to [certain database types](../manage-backup.md#automatic-vs-custom-backups) |
|Multiple apps can be restored at the same time (restoration needs to be configured for each app individually) |Old and new environments as well as supporting resources (for example apps, databases, storage accounts, and containers) must all be in the same subscription | |In-app MySQL databases are automatically backed up without any configuration |Backups can be up to 10 GB of app and database content, up to 4 GB of which can be the database backup. If the backup size exceeds this limit, you get an error. | |Can restore the app to a snapshot of a previous state |Using a [firewall enabled storage account](../../storage/common/storage-network-security.md) as the destination for your backups isn't supported |
The step-by-step instructions in the current documentation for [back up](../mana
## Clone your app to an App Service Environment v3
-[Cloning your apps](../app-service-web-app-cloning.md) is another feature that can be used to get your **Windows** apps onto your App Service Environment v3. There are limitations with cloning apps. These limitations are the same as those for the App Service Backup feature, see [Back up an app in Azure App Service](../manage-backup.md#requirements-and-restrictions).
+[Cloning your apps](../app-service-web-app-cloning.md) is another feature that can be used to get your **Windows** apps onto your App Service Environment v3. There are limitations with cloning apps. These limitations are the same as those for the App Service Backup feature, see [Back up an app in Azure App Service](../manage-backup.md#whats-included-in-an-automatic-backup).
> [!NOTE] > Cloning apps is supported on Windows App Service only.
To clone an app using the [Azure portal](https://www.portal.azure.com), navigate
|Benefits |Limitations | ||| |Can be automated using PowerShell |Only supported on Windows App Service |
-|Multiple apps can be cloned at the same time (cloning needs to be configured for each app individually or using a script) |Support is limited to [certain database types](../manage-backup.md#what-gets-backed-up) |
+|Multiple apps can be cloned at the same time (cloning needs to be configured for each app individually or using a script) |Support is limited to [certain database types](../manage-backup.md#automatic-vs-custom-backups) |
|Can integrate with [Azure Traffic Manager](../../traffic-manager/traffic-manager-overview.md) and [Azure Application Gateway](../../application-gateway/overview.md) to distribute traffic across old and new environments |Old and new environments as well as supporting resources (for example apps, databases, storage accounts, and containers) must all be in the same subscription |

## Manually create your apps on an App Service Environment v3
app-service Manage Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-backup.md
Title: Back up an app
-description: Learn how to create backups of your apps in Azure App Service. Run manual or scheduled backups. Customize backups by including the attached database.
+description: Learn how to restore backups of your apps in Azure App Service or configure custom backups. Customize backups by including the linked database.
ms.assetid: 6223b6bd-84ec-48df-943f-461d84605694 Previously updated : 09/02/2021 Last updated : 06/17/2022
-# Back up your app in Azure
+# Back up and restore your app in Azure App Service
-The Backup and Restore feature in [Azure App Service](overview.md) lets you easily
-create app backups manually or on a schedule. You can configure the backups to be retained up to an indefinite amount of time. You can restore the app to a snapshot of a previous state by overwriting the existing app or restoring to another app.
+In [Azure App Service](overview.md), you can easily restore app backups. You can also make on-demand custom backups or configure scheduled custom backups. You can restore a backup by overwriting an existing app or by restoring to a new app or slot. This article shows you how to restore a backup and make custom backups.
-For information on restoring an app from backup, see [Restore an app in Azure](web-sites-restore.md).
+Backup and restore require the App Service plan to be in the **Standard**, **Premium**, or **Isolated** tier. For more information about scaling your App Service plan to use a higher tier, see [Scale up an app in Azure](manage-scale-up.md).
-<a name="whatsbackedup"></a>
+## Automatic vs custom backups
+
+There are two types of backups in App Service. Automatic backups are made for your app regularly as long as it's in a supported pricing tier. Custom backups require initial configuration and can be made on-demand or on a schedule. The following table shows the differences between the two types.
+
+||Automatic backups | Custom backups |
+|-|-|-|
+| Pricing tiers | **Standard**, **Premium**. | **Standard**, **Premium**, **Isolated**. |
+| Configuration required | No. | Yes. |
+| Backup size | 30 GB. | 10 GB, 4 GB of which can be the linked database. |
+| Linked database | Not backed up. | The following linked databases can be backed up: [SQL Database](/azure/azure-sql/database/), [Azure Database for MySQL](../mysql/index.yml), [Azure Database for PostgreSQL](../postgresql/index.yml), [MySQL in-app](https://azure.microsoft.com/blog/mysql-in-app-preview-app-service/). |
+| [Storage account](../storage/index.yml) required | No. | Yes. |
+| Backup frequency | Hourly, not configurable. | Configurable. |
+| Retention | 30 days, not configurable. | 0-30 days or indefinite. |
+| Downloadable | No. | Yes, as Azure Storage blobs. |
+| Partial backups | Not supported. | Supported. |
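To make the size limits in the table concrete, here's a minimal illustrative sketch (not App Service code) that checks a proposed backup against them:

```python
# Illustrative only: validates a backup request against the size limits
# from the table above (custom: 10 GB total, 4 GB linked database;
# automatic: 30 GB of app content, no database).

GIB = 1024 ** 3

def check_custom_backup(app_bytes: int, db_bytes: int) -> bool:
    """A custom backup fails if the total exceeds 10 GB or the
    linked-database portion exceeds 4 GB."""
    return (app_bytes + db_bytes) <= 10 * GIB and db_bytes <= 4 * GIB

def check_automatic_backup(app_bytes: int) -> bool:
    """An automatic backup covers app content only, up to 30 GB."""
    return app_bytes <= 30 * GIB

print(check_custom_backup(5 * GIB, 3 * GIB))   # within both limits
print(check_custom_backup(5 * GIB, 6 * GIB))   # database portion too large
```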
+
+
+## Restore a backup
+
+> [!NOTE]
+> App Service stops the target app or target slot while restoring a backup. To minimize downtime for the production app, restore the backup to a [deployment slot](deploy-staging-slots.md) first, then [swap](deploy-staging-slots.md#swap-two-slots) into production.
+
+# [Azure portal](#tab/portal)
-## What gets backed up
+1. In your app management page in the [Azure portal](https://portal.azure.com), in the left menu, select **Backups**. The **Backups** page lists all the automatic and custom backups for your app and the status of each.
+
+ :::image type="content" source="./media/manage-backup/open-backups-page.png" alt-text="Screenshot that shows how to open the backups page.":::
-App Service can back up the following information to an Azure storage account and container that you have configured your app to use.
+1. Select the backup to restore by selecting its **Restore** link.
+
+ :::image type="content" source="./media/manage-backup/click-restore-link.png" alt-text="Screenshot that shows how to select the restore link.":::
+
+1. The **Backup details** section is automatically populated for you.
-* App configuration
-* File content
-* Database connected to your app
+1. Specify the restore destination in **Choose a destination**. To restore to a new app, select **Create new** under the **App Service** box. To restore to a new [deployment slot](deploy-staging-slots.md), select **Create new** under the **Deployment slot** box.
-The following database solutions are supported with backup feature:
+ If you choose an existing slot, all existing data in its file system is erased and overwritten. The production slot has the same name as the app name.
+
+1. You can choose to restore your site configuration under **Advanced options**.
+
+1. Click **Restore**.
-- [SQL Database](https://azure.microsoft.com/services/sql-database/)
-- [Azure Database for MySQL](https://azure.microsoft.com/services/mysql)
-- [Azure Database for PostgreSQL](https://azure.microsoft.com/services/postgresql)
-- [MySQL in-app](https://azure.microsoft.com/blog/mysql-in-app-preview-app-service/)
+# [Azure CLI](#tab/cli)
+
+<!-- # [Automatic backups](#tab/auto)
+ -->
> [!NOTE]
-> Each backup is a complete offline copy of your app, not an incremental update.
->
+> These CLI steps apply to automatic backups only.
-<a name="requirements"></a>
+1. List the automatic backups for your app. In the command output, copy the `time` property of the backup you want to restore.
-## Requirements and restrictions
+ ```azurecli-interactive
+ az webapp config snapshot list --name <app-name> --resource-group <group-name>
+ ```
-* The Backup and Restore feature requires the App Service plan to be in the **Standard**, **Premium**, or **Isolated** tier. For more information about scaling your App Service plan to use a higher tier, see [Scale up an app in Azure](manage-scale-up.md). **Premium** and **Isolated** tiers allow a greater number of daily back ups than **Standard** tier.
-* You need an Azure storage account and container in the same subscription as the app that you want to back up. For more information on Azure storage accounts, see [Azure storage account overview](../storage/common/storage-account-overview.md).
-* Backups can be up to 10 GB of app and database content, up to 4GB of which can be the database backup. If the backup size exceeds this limit, you get an error.
-* Backups of [TLS enabled Azure Database for MySQL](../mysql/concepts-ssl-connection-security.md) is not supported. If a backup is configured, you will encounter backup failures.
-* Backups of [TLS enabled Azure Database for PostgreSQL](../postgresql/concepts-ssl-connection-security.md) is not supported. If a backup is configured, you will encounter backup failures.
-* In-app MySQL databases are automatically backed up without any configuration. If you make manual settings for in-app MySQL databases, such as adding connection strings, the backups may not work correctly.
-* Using a [firewall enabled storage account](../storage/common/storage-network-security.md) as the destination for your backups is not supported. If a backup is configured, you will encounter backup failures.
-* Using a [private endpoint enabled storage account](../storage/common/storage-private-endpoints.md) for backup and restore is not supported.
+2. To restore the automatic backup by overwriting the app's content and configuration:
-<a name="manualbackup"></a>
+ ```azurecli-interactive
+ az webapp config snapshot restore --name <app-name> --resource-group <group-name> --time <snapshot-timestamp>
+ ```
-## Create a manual backup
+ To restore the automatic backup to a different app:
-1. In the [Azure portal](https://portal.azure.com), navigate to your app's page, select **Backups**. The **Backups** page is displayed.
+ ```azurecli-interactive
+ az webapp config snapshot restore --name <target-app-name> --resource-group <target-group-name> --source-name <source-app-name> --source-resource-group <source-group-name> --time <source-snapshot-timestamp>
+ ```
- ![Backups page](./media/manage-backup/access-backup-page.png)
- To restore app content only and not the app configuration, use the `--restore-content-only` parameter. For more information, see [az webapp config snapshot restore](/cli/azure/webapp/config/snapshot#az-webapp-config-snapshot-restore).
- > [!NOTE]
- > If you see the following message, click it to upgrade your App Service plan before you can proceed with backups.
- > For more information, see [Scale up an app in Azure](manage-scale-up.md).
- > :::image type="content" source="./media/manage-backup/upgrade-plan.png" alt-text="Screenshot of a banner with a message to upgrade the App Service plan to access the Backup and Restore feature.":::
- >
- >
+<!-- # [Custom backups](#tab/custom)
+
+1. List the custom backups for your app and copy the `namePropertiesName` and `storageAccountUrl` properties of the backup you want to restore.
+
+ ```azurecli-interactive
+ az webapp config backup list --webapp-name <app-name> --resource-group <group-name>
+ ```
+
+2. To restore the custom backup by overwriting the app's content and configuration:
-2. In the **Backup** page, select **Backup is not configured. Click here to configure backup for your app**.
+ ```azurecli-interactive
+ az webapp config backup restore --webapp-name <app-name> --resource-group <group-name> --backup-name <namePropertiesName> --container-url <storageAccountUrl> --overwrite
+ ```
- ![Click Configure](./media/manage-backup/configure-start.png)
+ To restore the custom backup to a different app:
-3. In the **Backup Configuration** page, click **Storage not configured** to configure a storage account.
+ ```azurecli-interactive
+ az webapp config backup restore --webapp-name <source-app-name> --resource-group <group-name> --target-name <target-app-name> --time <source-snapshot-timestamp> --backup-name <namePropertiesName> --container-url <storageAccountUrl>
+ ```
+
+-- -->
+
+--
+
+<a name="manualbackup"></a>
- :::image type="content" source="./media/manage-backup/configure-storage.png" alt-text="Screenshot of the Backup Storage section with the Storage not configured setting selected.":::
+## Create a custom backup
-4. Choose your backup destination by selecting a **Storage Account** and **Container**. The storage account must belong to the same subscription as the app you want to back up. If you wish, you can create a new storage account or a new container in the respective pages. When you're done, click **Select**.
+1. In your app management page in the [Azure portal](https://portal.azure.com), in the left menu, select **Backups**.
-5. In the **Backup Configuration** page that is still left open, you can configure **Backup Database**, then select the databases you want to include in the backups (SQL Database or MySQL), then click **OK**.
+ :::image type="content" source="./media/manage-backup/open-backups-page.png" alt-text="Screenshot that shows how to open the backups page.":::
- :::image type="content" source="./media/manage-backup/configure-database.png" alt-text="Screenshot of Backup Database section showing the Include in backup selection.":::
+1. At the top of the **Backups** page, select **Configure custom backups**.
+
+1. In **Storage account**, select an existing storage account (in the same subscription) or select **Create new**. Do the same with **Container**.
+
+ To back up the linked database(s), select **Next: Advanced** > **Include database**, and select the database(s) to back up.
> [!NOTE]
- > For a database to appear in this list, its connection string must exist in the **Connection strings** section of the **Application settings** page for your app.
+ > For a supported database to appear in this list, its connection string must exist in the **Connection strings** section of the **Configuration** page for your app.
>
- > In-app MySQL databases are automatically backed up without any configuration. If you make settings for in-app MySQL databases manually, such as adding connection strings, the backups may not work correctly.
+ > In-app MySQL databases are always backed up without any configuration. If you make settings for in-app MySQL databases manually, such as adding connection strings, the backups may not work correctly.
> >
-6. In the **Backup Configuration** page, click **Save**.
-7. In the **Backups** page, click **Backup**.
+1. Click **Configure**.
- ![BackUpNow button](./media/manage-backup/manual-backup.png)
+ Once the storage account and container are configured, you can initiate an on-demand backup at any time. On-demand backups are retained indefinitely.
- You see a progress message during the backup process.
+1. At the top of the **Backups** page, select **Backup Now**.
-Once the storage account and container is configured, you can initiate a manual backup at any time. Manual backups are retained indefinitely.
+ :::image type="content" source="./media/manage-backup/manual-backup.png" alt-text="Screenshot that shows how to make an on-demand backup.":::
-<a name="automatedbackups"></a>
+ The custom backup is displayed in the list with a progress indicator. If it fails with an error, you can select the line item to see the error message.
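If you prefer scripting, an on-demand custom backup can also be started with the Azure CLI once backup storage is configured. This is a sketch; `<sas-url-of-container>` stands in for a SAS URL that grants write access to your backup container:

```azurecli-interactive
az webapp config backup create --resource-group <group-name> --webapp-name <app-name> --backup-name <backup-name> --container-url <sas-url-of-container>
```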
-## Configure automated backups
+<a name="automatedbackups"></a>
-1. In the **Backup Configuration** page, set **Scheduled backup** to **On**.
+## Configure custom scheduled backups
- ![Enable automated backups](./media/manage-backup/scheduled-backup.png)
+1. In the **Configure custom backups** page, select **Set schedule**.
-2. Configure the backup schedule as desired and select **OK**.
+1. Configure the backup schedule as desired and select **Configure**.
<a name="partialbackups"></a>
-## Configure Partial Backups
+## Configure partial backups
-Sometimes you don't want to back up everything on your app. Here are a few examples:
+Partial backups are supported for custom backups. Sometimes you don't want to back up everything in your app. Here are a few examples:
-* You [set up weekly backups](#configure-automated-backups) of your app that contains static content that never changes, such as old blog posts or images.
+* You [set up weekly backups](#configure-custom-scheduled-backups) of your app that contains static content that never changes, such as old blog posts or images.
* Your app has over 10 GB of content (that's the max amount you can back up at a time).
* You don't want to back up the log files.
-Partial backups allow you choose exactly which files you want to back up.
-
-> [!NOTE]
-> Individual databases in the backup can be 4GB max but the total max size of the backup is 10GB
-
-### Exclude files from your backup
+To exclude folders and files from being stored in your future backups, create a `_backup.filter` file in the [`%HOME%\site\wwwroot` folder](operating-system-functionality.md#network-drives-unc-shares) of your app. Specify the list of files and folders you want to exclude in this file.
-Suppose you have an app that contains log files and static images that have been backup once and are not going to change. In such cases, you can exclude those folders and files from being stored in your future backups. To exclude files and folders from your backups, create a `_backup.filter` file in the `D:\home\site\wwwroot` folder of your app. Specify the list of files and folders you want to exclude in this file.
-
-You can access your files by navigating to `https://<app-name>.scm.azurewebsites.net/DebugConsole`. If prompted, sign in to your Azure account.
+> [!TIP]
+> You can access your files by navigating to `https://<app-name>.scm.azurewebsites.net/DebugConsole`. If prompted, sign in to your Azure account.
Identify the folders that you want to exclude from your backups. For example, you want to filter out the highlighted folder and files.
-![Images Folder](./media/manage-backup/kudu-images.png)
-Create a file called `_backup.filter` and put the preceding list in the file, but remove `D:\home`. List one directory or file per line. So the content of the file should be:
+Create a file called `_backup.filter` and put the preceding list in the file, but remove the root `%HOME%`. List one directory or file per line. So the content of the file should be:
```
\site\wwwroot\Images\brand.png
Create a file called `_backup.filter` and put the preceding list in the file, bu
Upload `_backup.filter` file to the `D:\home\site\wwwroot\` directory of your site using [ftp](deploy-ftp.md) or any other method. If you wish, you can create the file directly using Kudu `DebugConsole` and insert the content there.
-Run backups the same way you would normally do it, [manually](#create-a-manual-backup) or [automatically](#configure-automated-backups). Now, any files and folders that are specified in `_backup.filter` is excluded from the future backups scheduled or manually initiated.
+Run backups the same way you normally would, [custom on-demand](#create-a-custom-backup) or [custom scheduled](#configure-custom-scheduled-backups). Any files and folders specified in `_backup.filter` are excluded from future backups.
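Conceptually, each `_backup.filter` line excludes one file or one directory subtree, matched as a path prefix relative to the `%HOME%` root. The following Python sketch is illustrative only (it isn't the App Service implementation), and the `Logs` entry is a hypothetical example:

```python
# Illustrative sketch of _backup.filter semantics: a path is excluded if it
# matches a filter entry exactly, or lives under an excluded directory.
# The "Logs" entry below is a hypothetical example.

filter_lines = [
    r"\site\wwwroot\Images\brand.png",  # exclude a single file
    r"\site\wwwroot\Logs",              # exclude a whole directory subtree
]

def is_excluded(path: str) -> bool:
    return any(
        path == entry or path.startswith(entry + "\\")
        for entry in filter_lines
    )

print(is_excluded(r"\site\wwwroot\Images\brand.png"))    # True
print(is_excluded(r"\site\wwwroot\Logs\2022\app.log"))   # True
print(is_excluded(r"\site\wwwroot\index.php"))           # False
```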
> [!NOTE]
-> You restore partial backups of your site the same way you would [restore a regular backup](web-sites-restore.md). The restore process does the right thing.
->
-> When a full backup is restored, all content on the site is replaced with whatever is in the backup. If a file is on the site, but not in the backup it gets deleted. But when a partial backup is restored, any content that is located in one of the restricted directories, or any restricted file, is left as is.
+> `_backup.filter` changes the way a restore works. Without `_backup.filter`, restoring a backup deletes all existing files in the app and replaces them with the files in the backup. With `_backup.filter`, any content in the app's file system that's included in `_backup.filter` is left as is (not deleted).
> <a name="aboutbackups"></a>
The database backup for the app is stored in the root of the .zip file. For SQL
> [!WARNING]
> Altering any of the files in your **websitebackups** container can cause the backup to become invalid and therefore non-restorable.
-## Troubleshooting
+## Error messages
-The **Backups** page shows you the status of each backup. If you click on a failed backup, you can get log details regarding the failure. Use the following table to help you troubleshoot your backup. If the failure isn't documented in the table, open a support ticket.
+The **Backups** page shows you the status of each backup. To get log details regarding a failed backup, select the line item in the list. Use the following table to help you troubleshoot your backup. If the failure isn't documented in the table, open a support ticket.
| Error | Fix | | - | - |
The **Backups** page shows you the status of each backup. If you click on a fail
| A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server). | Check that the connection string is valid. Allow the app's [outbound IPs](overview-inbound-outbound-ips.md) in the database server settings. | | Cannot open server "\<name>" requested by the login. The login failed. | Check that the connection string is valid. | | Missing mandatory parameters for valid Shared Access Signature. | Delete the backup schedule and reconfigure it. |
-| SSL connection is required. Please specify SSL options and retry. when trying to connect. | Use the built-in backup feature in Azure MySQL or Azure Postgressql instead. |
+| `SSL connection is required. Please specify SSL options and retry.` when trying to connect. | SSL connectivity to Azure Database for MySQL and Azure Database for PostgreSQL isn't supported for database backups. Use the native backup feature in the respective database instead. |
## Automate with scripts
For samples, see:
- [Azure CLI samples](samples-cli.md)
- [Azure PowerShell samples](samples-powershell.md)
+## Frequently asked questions
+
+<a name="requirements"></a>
+
+<a name="whatsbackedup"></a>
+
+- [Are the backups incremental updates or complete backups?](#are-the-backups-incremental-updates-or-complete-backups)
+- [Does Azure Functions support automatic backups?](#does-azure-functions-support-automatic-backups)
+- [What's included in an automatic backup?](#whats-included-in-an-automatic-backup)
+- [What's included in a custom backup?](#whats-included-in-a-custom-backup)
+- [Why is my linked database not backed up?](#why-is-my-linked-database-not-backed-up)
+- [What happens if the backup size exceeds the allowable maximum?](#what-happens-if-the-backup-size-exceeds-the-allowable-maximum)
+- [Can I use a storage account that has security features enabled?](#can-i-use-a-storage-account-that-has-security-features-enabled)
+- [How do I restore to an app in a different subscription?](#how-do-i-restore-to-an-app-in-a-different-subscription)
+
+#### Are the backups incremental updates or complete backups?
+
+Each backup is a complete offline copy of your app, not an incremental update.
+
+#### Does Azure Functions support automatic backups?
+
+Automatic backups are available in preview for Azure Functions in [dedicated (App Service)](../azure-functions/dedicated-plan.md) **Standard** or **Premium** tiers. Function apps in the [**Consumption**](../azure-functions/consumption-plan.md) or [**Elastic Premium**](../azure-functions/functions-premium-plan.md) pricing tiers aren't supported for automatic backups.
+
+#### What's included in an automatic backup?
+
+The following table shows which content is backed up in an automatic backup:
+
+|Content| Backed up?|
+|-|-|
+| **Windows apps**: All app content under `%HOME%` directory<br/>**Linux apps**: All app content under `/home` directory<br/>**Custom containers (Windows and Linux)**: Content in [persistent storage](configure-custom-container.md?pivots=container-linux#use-persistent-shared-storage)| Yes |
+| Content of the [run-from-ZIP package](deploy-run-package.md)| No |
+| Content from any [custom mounted Azure storage](configure-connect-to-azure-storage.md?pivots=container-windows)| No |
+
+The following table shows which app configuration is restored when you choose to restore app configuration:
+
+|Settings| Restored?|
+|-|-|
+|[Native log settings](troubleshoot-diagnostic-logs.md), including the Azure Storage account and container settings | Yes |
+|Application Insights configuration | Yes |
+|[Health check](monitor-instances-health-check.md) | Yes |
+| Network features, such as [private endpoints](networking/private-endpoint.md), [hybrid connections](app-service-hybrid-connections.md), and [virtual network integration](overview-vnet-integration.md) | No|
+|[Authentication](overview-authentication-authorization.md)| No|
+|[Managed identities](overview-managed-identity.md)| No |
+|[Custom domains](app-service-web-tutorial-custom-domain.md)| No |
+|[TLS/SSL](configure-ssl-bindings.md)| No |
+|[Scale out](../azure-monitor/autoscale/autoscale-get-started.md?toc=/azure/app-service/toc.json)| No |
+|[Diagnostics with Azure Monitor](troubleshoot-diagnostic-logs.md#send-logs-to-azure-monitor)| No |
+|[Alerts and Metrics](../azure-monitor/alerts/alerts-classic-portal.md)| No |
+|[Backup](manage-backup.md)| No |
+|Associated [deployment slots](deploy-staging-slots.md)| No |
+|Any linked database that [custom backup](#whats-included-in-a-custom-backup) supports| No |
+
+#### What's included in a custom backup?
+
+A custom backup (on-demand backup or scheduled backup) includes all content and configuration that's included in an [automatic backup](#whats-included-in-an-automatic-backup), plus any linked database, up to the allowable maximum size.
+
+#### Why is my linked database not backed up?
+
+Linked databases are backed up only for custom backups, up to the allowable maximum size. If the maximum backup size (10 GB) or the maximum database size (4 GB) is exceeded, your backup fails. Here are a few common reasons why your linked database isn't backed up:
+
+* Backups of [TLS enabled Azure Database for MySQL](../mysql/concepts-ssl-connection-security.md) aren't supported. If a backup is configured, you'll encounter backup failures.
+* Backups of [TLS enabled Azure Database for PostgreSQL](../postgresql/concepts-ssl-connection-security.md) aren't supported. If a backup is configured, you'll encounter backup failures.
+* In-app MySQL databases are automatically backed up without any configuration. If you make manual settings for in-app MySQL databases, such as adding connection strings, the backups may not work correctly.
+
+#### What happens if the backup size exceeds the allowable maximum?
+
+Automatic backups can't be restored if the backup size exceeds the maximum size. Similarly, custom backups fail if the maximum backup size or the maximum database size is exceeded. To reduce your storage size, consider moving files like logs, images, audio, and videos to Azure Storage, for example.
+
+#### Can I use a storage account that has security features enabled?
+
+The following security features in Azure storage aren't supported for custom backups:
+
+* Using a [firewall enabled storage account](../storage/common/storage-network-security.md) as the destination for your backups isn't supported. If a backup is configured, you will encounter backup failures.
+* Using a [private endpoint enabled storage account](../storage/common/storage-private-endpoints.md) for backup and restore isn't supported.
+
+#### How do I restore to an app in a different subscription?
+
+1. Make a custom backup to an Azure Storage container.
+1. [Download the backup ZIP file](../storage/blobs/storage-quickstart-blobs-portal.md) to your local machine.
+1. In the **Backups** page for your target app, select **Restore** in the top menu.
+1. In **Backup details**, select **Storage** in **Source**.
+1. Select the preferred storage account.
+1. In **Zip file**, select **Upload file**.
+1. In **Name**, select **Browse** and select the downloaded ZIP file.
+1. Configure the rest of the sections like in [Restore a backup](#restore-a-backup).
+ <a name="nextsteps"></a> ## Next Steps
-For information on restoring an app from a backup, see [Restore an app in Azure](web-sites-restore.md).
+[Azure Blob Storage documentation](../storage/blobs/index.yml)
app-service Manage Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-disaster-recovery.md
App Service resources are region-specific and can't be moved across regions. You
## Prerequisites

-- None. [Restoring from snapshot](app-service-web-restore-snapshots.md) usually requires **Premium** tier, but in disaster recovery mode, it's automatically enabled for your impacted app, regardless which tier the impacted app is in.
+- None. [Restoring an automatic backup](manage-backup.md#restore-a-backup) usually requires **Standard** or **Premium** tier, but in disaster recovery mode, it's automatically enabled for your impacted app, regardless of which tier the impacted app is in.
## Prepare
If you only want to recover the files from the impacted app without restoring it
![Screenshot of a FileZilla file hierarchy. The wwwroot folder is highlighted, and its shortcut menu is visible. In that menu, Download is highlighted.](media/manage-disaster-recovery/download-content.png)

## Next steps
-[Restore an app in Azure from a snapshot](app-service-web-restore-snapshots.md)
+[Backup and restore](manage-backup.md)
app-service Manage Move Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-move-across-regions.md
Certain resources, such as imported certificates or hybrid connections, contain
1. [Create a backup of the source app](manage-backup.md).
1. [Create an app in a new App Service plan, in the target region](app-service-plan-manage.md#create-an-app-service-plan).
-2. [Restore the back up in the target app](web-sites-restore.md)
+2. [Restore the backup in the target app](manage-backup.md).
2. If you use a custom domain, [bind it preemptively to the target app](manage-custom-dns-migrate-domain.md#bind-the-domain-name-preemptively) with `awverify.` and [enable the domain in the target app](manage-custom-dns-migrate-domain.md#enable-the-domain-for-your-app).
3. Configure everything else in your target app to be the same as the source app and verify your configuration.
4. When you're ready for the custom domain to point to the target app, [remap the domain name](manage-custom-dns-migrate-domain.md#remap-the-active-dns-name).
app-service Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 06/03/2022 Last updated : 06/16/2022
app-service Tutorial Dotnetcore Sqldb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-dotnetcore-sqldb-app.md
When prompted, provide the administrator username and password for the SQL datab
> [!NOTE]
> The CLI command does everything the app needs to successfully connect to the database, including:
>
-> - In your App Service app, adds a connection string with the name `AZURE_SQL_CONNECTIONSTRING`, which your code can use for its database connection. If the connection string is already in use, `AZURE_SQL_<connection-name>_CONNECTIONSTRING` is used for the name instead.
+> - In your App Service app, detects the platform as .NET and adds a connection string with the name `AZURE_SQL_CONNECTIONSTRING`, which your code can use for its database connection. If the connection string is already in use, `AZURE_SQL_<connection-name>_CONNECTIONSTRING` is used for the name instead.
> - In your SQL database server, allows Azure services to access the SQL database server.

Copy this connection string value from the output for later.
app-service Tutorial Php Mysql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-php-mysql-app.md
description: Learn how to get a PHP app working in Azure, with connection to a M
ms.assetid: 14feb4f3-5095-496e-9a40-690e1414bd73 ms.devlang: php Previously updated : 03/04/2022 Last updated : 06/13/2022 zone_pivot_groups: app-service-platform-windows-linux
zone_pivot_groups: app-service-platform-windows-linux
::: zone-end In this tutorial, you learn how to:
In this tutorial, you learn how to:
> * Stream diagnostic logs from Azure > * Manage the app in the Azure portal ## Prerequisites To complete this tutorial: - [Install Git](https://git-scm.com/)-- [Install PHP 5.6.4 or above](https://php.net/downloads.php)
+- [Install PHP 7.4](https://php.net/downloads.php)
- [Install Composer](https://getcomposer.org/doc/00-intro.md)
-- Enable the following PHP extensions Laravel needs: OpenSSL, PDO-MySQL, Mbstring, Tokenizer, XML
- [Install and start MySQL](https://dev.mysql.com/doc/refman/5.7/en/installing.html)
+- <a href="/cli/azure/install-azure-cli" target="_blank">Azure CLI</a> to run commands in any shell to provision and configure Azure resources.
## Prepare local MySQL
In this step, you create a database in your local MySQL server for your use in t
### Connect to local MySQL server
-In a terminal window, connect to your local MySQL server. You can use this terminal window to run all the commands in this tutorial.
+In a local terminal window, connect to your local MySQL server. You can use this terminal window to run all the commands in this tutorial.
-```bash
+```terminal
mysql -u root -p
```
In this step, you get a Laravel sample application, configure its database connection, and run it locally.
### Clone the sample
-In the terminal window, `cd` to a working directory.
+1. `cd` to a working directory.
1. Clone the sample repository and change to the repository root.
- ```bash
+ ```terminal
   git clone https://github.com/Azure-Samples/laravel-tasks
   cd laravel-tasks
   ```
-1. Ensure the default branch is `main`.
-
- ```bash
- git branch -m main
- ```
-
- > [!TIP]
- > The branch name change isn't required by App Service. But, since many repositories are changing their default branch to `main`, this tutorial also shows you how to deploy a repository from `main`. For more information, see [Change deployment branch](deploy-local-git.md#change-deployment-branch).
- 1. Install the required packages.
- ```bash
+ ```terminal
   composer install
   ```
DB_USERNAME=root
DB_PASSWORD=<root_password>
```
-For information on how Laravel uses the _.env_ file, see [Laravel Environment Configuration](https://laravel.com/docs/5.4/configuration#environment-configuration).
+For information on how Laravel uses the _.env_ file, see [Laravel Environment Configuration](https://laravel.com/docs/8.x#environment-based-configuration).
### Run the sample locally
-1. Run [Laravel database migrations](https://laravel.com/docs/5.4/migrations) to create the tables the application needs. To see which tables are created in the migrations, look in the _database/migrations_ directory in the Git repository.
+1. Run [Laravel database migrations](https://laravel.com/docs/8.x/migrations) to create the tables the application needs. To see which tables are created in the migrations, look in the _database/migrations_ directory in the Git repository.
- ```bash
+ ```terminal
   php artisan migrate
   ```

1. Generate a new Laravel application key.
- ```bash
+ ```terminal
   php artisan key:generate
   ```

1. Run the application.
- ```bash
+ ```terminal
   php artisan serve
   ```
1. To stop PHP, enter `Ctrl + C` in the terminal.
-## Create MySQL in Azure
-
-In this step, you create a MySQL database in [Azure Database for MySQL](../mysql/index.yml). Later, you set up the PHP application to connect to this database.
-
-### Create a resource group
+## Deploy Laravel sample to App Service
+### Deploy sample code
-### Create a MySQL server
-
-In the Cloud Shell, create a server in Azure Database for MySQL with the [`az mysql server create`](/cli/azure/mysql/server#az-mysql-server-create) command.
-
-In the following command, substitute a unique server name for the *\<mysql-server-name>* placeholder, a user name for the *\<admin-user>*, and a password for the *\<admin-password>* placeholder. The server name is used as part of your MySQL endpoint (`https://<mysql-server-name>.mysql.database.azure.com`), so the name needs to be unique across all servers in Azure. For details on selecting MySQL DB SKU, see [Create an Azure Database for MySQL server](../mysql/quickstart-create-mysql-server-database-using-azure-cli.md#create-an-azure-database-for-mysql-server).
-
-```azurecli-interactive
-az mysql server create --resource-group myResourceGroup --name <mysql-server-name> --location "West Europe" --admin-user <admin-user> --admin-password <admin-password> --sku-name B_Gen5_1
-```
-
-When the MySQL server is created, the Azure CLI shows information similar to the following example:
-
-<pre>
-{
- "administratorLogin": "&lt;admin-user&gt;",
- "administratorLoginPassword": null,
- "fullyQualifiedDomainName": "&lt;mysql-server-name&gt;.mysql.database.azure.com",
- "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.DBforMySQL/servers/&lt;mysql-server-name&gt;",
- "location": "westeurope",
- "name": "&lt;mysql-server-name&gt;",
- "resourceGroup": "myResourceGroup",
- ...
- - &lt; Output has been truncated for readability &gt;
-}
-</pre>
-
-### Configure server firewall
-
-1. In the Cloud Shell, create a firewall rule for your MySQL server to allow client connections by using the [`az mysql server firewall-rule create`](/cli/azure/mysql/server/firewall-rule#az-mysql-server-firewall-rule-create) command. When both starting IP and end IP are set to 0.0.0.0, the firewall is only opened for other Azure resources.
-
- ```azurecli-interactive
- az mysql server firewall-rule create --name allAzureIPs --server <mysql-server-name> --resource-group myResourceGroup --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
- ```
-
- > [!TIP]
- > You can be even more restrictive in your firewall rule by [using only the outbound IP addresses your app uses](overview-inbound-outbound-ips.md#find-outbound-ips).
- >
-
-1. In the Cloud Shell, run the command again to allow access from your local computer by replacing *\<your-ip-address>* with [your local IPv4 IP address](https://www.whatsmyip.org/).
-
- ```azurecli-interactive
- az mysql server firewall-rule create --name AllowLocalClient --server <mysql-server-name> --resource-group myResourceGroup --start-ip-address=<your-ip-address> --end-ip-address=<your-ip-address>
- ```
-
-### Create a production database
-
-1. In the local terminal window, connect to the MySQL server in Azure. Use the value you specified previously for _&lt;admin-user>_ and _&lt;mysql-server-name>_. When prompted for a password, use the password you specified when you created the database in Azure.
-
- ```bash
- mysql -u <admin-user>@<mysql-server-name> -h <mysql-server-name>.mysql.database.azure.com -P 3306 -p
- ```
-
-1. At the `mysql` prompt, create a database.
-
- ```sql
- CREATE DATABASE sampledb;
- ```
-1. Create a database user called _phpappuser_ and give it all privileges in the `sampledb` database. For simplicity of the tutorial, use _MySQLAzure2017_ as the password.
+1. In the root directory of the repository, add a file called *.deployment*. This file tells App Service to run a custom deployment script during build automation. Copy the following text into it as its content:
- ```sql
- CREATE USER 'phpappuser' IDENTIFIED BY 'MySQLAzure2017';
- GRANT ALL PRIVILEGES ON sampledb.* TO 'phpappuser';
    ```
-
-1. Exit the server connection by entering `quit`.
-
- ```sql
- quit
+ [config]
+ command = bash deploy.sh
```
-## Connect app to Azure MySQL
-
-In this step, you connect the PHP application to the MySQL database you created in Azure Database for MySQL.
-
-<a name="devconfig"></a>
-
-### Configure the database connection
-
-In the repository root, create an _.env.production_ file and copy the following variables into it. Replace the placeholder_&lt;mysql-server-name>_ in both *DB_HOST* and *DB_USERNAME*.
-
-```
-APP_ENV=production
-APP_DEBUG=true
-APP_KEY=
-
-DB_CONNECTION=mysql
-DB_HOST=<mysql-server-name>.mysql.database.azure.com
-DB_DATABASE=sampledb
-DB_USERNAME=phpappuser@<mysql-server-name>
-DB_PASSWORD=MySQLAzure2017
-MYSQL_SSL=true
-```
-
-Save the changes.
-
-> [!TIP]
-> To secure your MySQL connection information, this file is already excluded from the Git repository (See _.gitignore_ in the repository root). Later, you learn how to set up the environment variables in App Service to connect to your database in Azure Database for MySQL. With environment variables, you don't need the *.env* file in App Service.
->
-
-### Configure TLS/SSL certificate
-
-By default, Azure Database for MySQL enforces TLS connections from clients. To connect to your MySQL database in Azure, you must use the [_.pem_ certificate supplied by Azure Database for MySQL](../mysql/howto-configure-ssl.md).
-
-Open _config/database.php_ and add the `sslmode` and `options` parameters to `connections.mysql`, as shown in the following code.
--
-```php
-'mysql' => [
- ...
- 'sslmode' => env('DB_SSLMODE', 'prefer'),
- 'options' => (env('MYSQL_SSL')) ? [
- PDO::MYSQL_ATTR_SSL_KEY => '/ssl/BaltimoreCyberTrustRoot.crt.pem',
- ] : []
-],
-```
---
-```php
-'mysql' => [
- ...
- 'sslmode' => env('DB_SSLMODE', 'prefer'),
- 'options' => (env('MYSQL_SSL') && extension_loaded('pdo_mysql')) ? [
- PDO::MYSQL_ATTR_SSL_KEY => '/ssl/BaltimoreCyberTrustRoot.crt.pem',
- ] : []
-],
-```
+ > [!NOTE]
+ > The deployment process installs [Composer](https://getcomposer.org/) packages at the end. App Service on Windows does not run these automations during default deployment, so this sample repository has two additional files in its root directory to enable it:
+ >
+ > - `deploy.sh` - The custom deployment script. If you review the file, you see that it runs `php composer.phar install`.
+ > - `composer.phar` - The Composer package manager.
+ >
+ > You can use this approach to add any step to your [Git-based](deploy-local-git.md) or [ZIP](deploy-zip.md) deployment to App Service. For more information, see [Custom Deployment Script](https://github.com/projectkudu/kudu/wiki/Custom-Deployment-Script).
+ >
+
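For illustration, the core of such a custom deployment step might look like the following sketch. This is hypothetical; the sample repository's actual `deploy.sh` does more, including file sync and error handling.

```shell
#!/bin/bash
# Sketch of a custom App Service deployment step (hypothetical;
# the real deploy.sh in the sample repo also syncs files and checks errors).
set -e

msg="Installing Composer packages..."
echo "$msg"
# The actual install step would be:
# php composer.phar install
```

Because Kudu runs whatever command *.deployment* names, any build step you can script this way runs during deployment.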
::: zone-end
-The certificate `BaltimoreCyberTrustRoot.crt.pem` is provided in the repository for convenience in this tutorial.
+1. From the command line, sign in to Azure using the [`az login`](/cli/azure#az_login) command.
-### Test the application locally
-
-1. Run Laravel database migrations with _.env.production_ as the environment file to create the tables in your MySQL database in Azure Database for MySQL. Remember that _.env.production_ has the connection information to your MySQL database in Azure.
-
- ```bash
- php artisan migrate --env=production --force
+ ```azurecli
+ az login
```
-1. _.env.production_ doesn't have a valid application key yet. Generate a new one for it in the terminal.
+1. Deploy the code in your local folder using the [`az webapp up`](/cli/azure/webapp#az_webapp_up) command. Replace *\<app-name>* with a unique name for your app.
- ```bash
- php artisan key:generate --env=production --force
+ ::: zone pivot="platform-windows"
+
+ ```azurecli
+ az webapp up --resource-group myResourceGroup --name <app-name> --location "West Europe" --sku FREE --runtime "php|7.4" --os-type=windows
    ```
-
-1. Run the sample application with _.env.production_ as the environment file.
-
- ```bash
- php artisan serve --env=production
+
+ ::: zone-end
+
+ ::: zone pivot="platform-linux"
+
+ ```azurecli
+ az webapp up --resource-group myResourceGroup --name <app-name> --location "West Europe" --sku FREE --runtime "php|7.4" --os-type=linux
```
+
+ ::: zone-end
-1. Go to `http://localhost:8000`. If the page loads without errors, the PHP application is connecting to the MySQL database in Azure.
-
-1. Add a few tasks in the page.
-
- ![PHP connects successfully to Azure Database for MySQL](./media/tutorial-php-mysql-app/mysql-connect-success.png)
-
-1. To stop PHP, enter `Ctrl + C` in the terminal.
-
-### Commit your changes
-
-Run the following Git commands to commit your changes:
-
-```bash
-git add .
-git commit -m "database.php updates"
-```
-
-Your app is ready to be deployed.
-
-## Deploy to Azure
-
-In this step, you deploy the MySQL-connected PHP application to Azure App Service.
-
-### Configure a deployment user
--
-### Create an App Service plan
-------
-<a name="create"></a>
-### Create a web app
-------
-### Configure database settings
-
-In App Service, you set environment variables as _app settings_ by using the [`az webapp config appsettings set`](/cli/azure/webapp/config/appsettings#az-webapp-config-appsettings-set) command.
-
-The following command configures the app settings `DB_HOST`, `DB_DATABASE`, `DB_USERNAME`, and `DB_PASSWORD`. Replace the placeholders _&lt;app-name>_ and _&lt;mysql-server-name>_.
+ [!include [az webapp up command note](../../includes/app-service-web-az-webapp-up-note.md)]
-```azurecli-interactive
-az webapp config appsettings set --name <app-name> --resource-group myResourceGroup --settings DB_HOST="<mysql-server-name>.mysql.database.azure.com" DB_DATABASE="sampledb" DB_USERNAME="phpappuser@<mysql-server-name>" DB_PASSWORD="MySQLAzure2017" MYSQL_SSL="true"
-```
-
-You can use the PHP [getenv](https://www.php.net/manual/en/function.getenv.php) method to access the settings. the Laravel code uses an [env](https://laravel.com/docs/5.4/helpers#method-env) wrapper over the PHP `getenv`. For example, the MySQL configuration in _config/database.php_ looks like the following code:
-
-```php
-'mysql' => [
- 'driver' => 'mysql',
- 'host' => env('DB_HOST', 'localhost'),
- 'database' => env('DB_DATABASE', 'forge'),
- 'username' => env('DB_USERNAME', 'forge'),
- 'password' => env('DB_PASSWORD', ''),
- ...
-],
-```
### Configure Laravel environment variables

Laravel needs an application key in App Service. You can configure it with app settings.
-1. In the local terminal window, use `php artisan` to generate a new application key without saving it to _.env_.
+1. Use `php artisan` to generate a new application key without saving it to _.env_.
- ```bash
+ ```terminal
   php artisan key:generate --show
   ```
-1. In the Cloud Shell, set the application key in the App Service app by using the [`az webapp config appsettings set`](/cli/azure/webapp/config/appsettings#az-webapp-config-appsettings-set) command. Replace the placeholders _&lt;app-name>_ and _&lt;outputofphpartisankey:generate>_.
+1. Set the application key in the App Service app by using the [`az webapp config appsettings set`](/cli/azure/webapp/config/appsettings#az-webapp-config-appsettings-set) command. Replace the placeholders _&lt;app-name>_ and _&lt;outputofphpartisankey:generate>_.
```azurecli-interactive
- az webapp config appsettings set --name <app-name> --resource-group myResourceGroup --settings APP_KEY="<output_of_php_artisan_key:generate>" APP_DEBUG="true"
+ az webapp config appsettings set --settings APP_KEY="<output_of_php_artisan_key:generate>" APP_DEBUG="true"
   ```

   `APP_DEBUG="true"` tells Laravel to return debugging information when the deployed app encounters errors. When running a production application, set it to `false`, which is more secure.
::: zone pivot="platform-windows"
-Set the virtual application path for the app. This step is required because the [Laravel application lifecycle](https://laravel.com/docs/5.4/lifecycle) begins in the _public_ directory instead of the application's root directory. Other PHP frameworks whose lifecycle start in the root directory can work without manual configuration of the virtual application path.
+Set the virtual application path for the app. This step is required because the [Laravel application lifecycle](https://laravel.com/docs/8.x/lifecycle#lifecycle-overview) begins in the _public_ directory instead of the application's root directory. Other PHP frameworks whose lifecycle start in the root directory can work without manual configuration of the virtual application path.
-In the Cloud Shell, set the virtual application path by using the [`az resource update`](/cli/azure/resource#az-resource-update) command. Replace the _&lt;app-name>_ placeholder.
+Set the virtual application path by using the [`az resource update`](/cli/azure/resource#az-resource-update) command. Replace the _&lt;app-name>_ placeholder.
```azurecli-interactive
az resource update --name web --resource-group myResourceGroup --namespace Microsoft.Web --resource-type config --parent sites/<app_name> --set properties.virtualApplications[0].physicalPath="site\wwwroot\public" --api-version 2015-06-01
```
By default, Azure App Service points the root virtual application path (_/_) to
::: zone pivot="platform-linux"
-[Laravel application lifecycle](https://laravel.com/docs/5.4/lifecycle) begins in the _public_ directory instead of the application's root directory. The default PHP Docker image for App Service uses Apache, and it doesn't let you customize the `DocumentRoot` for Laravel. But you can use `.htaccess` to rewrite all requests to point to _/public_ instead of the root directory. In the repository root, an `.htaccess` is added already for this purpose. With it, your Laravel application is ready to be deployed.
+[Laravel application lifecycle](https://laravel.com/docs/8.x/lifecycle#lifecycle-overview) begins in the _public_ directory instead of the application's root directory. The default PHP Docker image for App Service uses Apache, and it doesn't let you customize the `DocumentRoot` for Laravel. But you can use `.htaccess` to rewrite all requests to point to _/public_ instead of the root directory. In the repository root, an `.htaccess` is added already for this purpose. With it, your Laravel application is ready to be deployed.
For more information, see [Change site root](configure-language-php.md#change-site-root).

::: zone-end
-### Push to Azure from Git
---
- <pre>
- Counting objects: 3, done.
- Delta compression using up to 8 threads.
- Compressing objects: 100% (3/3), done.
- Writing objects: 100% (3/3), 291 bytes | 0 bytes/s, done.
- Total 3 (delta 2), reused 0 (delta 0)
- remote: Updating branch 'main'.
- remote: Updating submodules.
- remote: Preparing deployment for commit id 'a5e076db9c'.
- remote: Running custom deployment command...
- remote: Running deployment command...
- ...
- &lt; Output has been truncated for readability &gt;
- </pre>
-
- > [!NOTE]
- > You may notice that the deployment process installs [Composer](https://getcomposer.org/) packages at the end. App Service does not run these automations during default deployment, so this sample repository has three additional files in its root directory to enable it:
- >
- > - `.deployment` - This file tells App Service to run `bash deploy.sh` as the custom deployment script.
- > - `deploy.sh` - The custom deployment script. If you review the file, you will see that it runs `php composer.phar install` after `npm install`.
- > - `composer.phar` - The Composer package manager.
- >
- > You can use this approach to add any step to your Git-based deployment to App Service. For more information, see [Custom Deployment Script](https://github.com/projectkudu/kudu/wiki/Custom-Deployment-Script).
- >
--
+If you browse to `https://<app-name>.azurewebsites.net` now and see a `Whoops, looks like something went wrong` message, then you have configured your App Service app properly and it's running in Azure. It just doesn't have database connectivity yet. In the next step, you create a MySQL database in [Azure Database for MySQL](../mysql/index.yml).
-
- <pre>
- Counting objects: 3, done.
- Delta compression using up to 8 threads.
- Compressing objects: 100% (3/3), done.
- Writing objects: 100% (3/3), 291 bytes | 0 bytes/s, done.
- Total 3 (delta 2), reused 0 (delta 0)
- remote: Updating branch 'main'.
- remote: Updating submodules.
- remote: Preparing deployment for commit id 'a5e076db9c'.
- remote: Running custom deployment command...
- remote: Running deployment command...
- ...
- &lt; Output has been truncated for readability &gt;
- </pre>
-
+## Create MySQL in Azure
-### Browse to the Azure app
+1. Create a MySQL server in Azure with the [`az mysql server create`](/cli/azure/mysql/server#az-mysql-server-create) command.
-Browse to `http://<app-name>.azurewebsites.net` and add a few tasks to the list.
+ In the following command, substitute a unique server name for the *\<mysql-server-name>* placeholder, a user name for the *\<admin-user>*, and a password for the *\<admin-password>* placeholder. The server name is used as part of your MySQL endpoint (`https://<mysql-server-name>.mysql.database.azure.com`), so the name needs to be unique across all servers in Azure. For details on selecting MySQL DB SKU, see [Create an Azure Database for MySQL server](../mysql/quickstart-create-mysql-server-database-using-azure-cli.md#create-an-azure-database-for-mysql-server).
+ ```azurecli-interactive
+ az mysql server create --resource-group myResourceGroup --name <mysql-server-name> --location "West Europe" --admin-user <admin-user> --admin-password <admin-password> --sku-name B_Gen5_1
+ ```
-Congratulations, you're running a data-driven PHP app in Azure App Service.
+1. Create a database called `sampledb` by using the [`az mysql db create`](/cli/azure/mysql/db#az-mysql-db-create) command.
-## Update model locally and redeploy
+ ```azurecli-interactive
+ az mysql db create --resource-group myResourceGroup --server-name <mysql-server-name> --name sampledb
+ ```
-In this step, you make a simple change to the `task` data model and the webapp, and then publish the update to Azure.
+## Connect the app to the database
-For the tasks scenario, you change the application so that you can mark a task as complete.
+Configure the connection between your app and the MySQL database by using the [az webapp connection create mysql](/cli/azure/webapp/connection/create#az-webapp-connection-create-mysql) command. `--target-resource-group` is the resource group that contains the MySQL database.
-### Add a column
-1. In the local terminal window, go to the root of the Git repository.
+```azurecli-interactive
+az webapp connection create mysql --resource-group myResourceGroup --name <app-name> --target-resource-group myResourceGroup --server <mysql-server-name> --database sampledb --connection my_laravel_db --client-type php
+```
-1. Generate a new database migration for the `tasks` table:
- ```bash
- php artisan make:migration add_complete_column --table=tasks
- ```
-1. This command shows you the name of the migration file that's generated. Find this file in _database/migrations_ and open it.
+```azurecli-interactive
+az webapp connection create mysql --resource-group myResourceGroup --name <app-name> --target-resource-group myResourceGroup --server <mysql-server-name> --database sampledb --connection my_laravel_db
+```
-1. Replace the `up` method with the following code:
- ```php
- public function up()
- {
- Schema::table('tasks', function (Blueprint $table) {
- $table->boolean('complete')->default(False);
- });
- }
- ```
+When prompted, provide the administrator username and password for the MySQL database.
- The preceding code adds a boolean column in the `tasks` table called `complete`.
+> [!NOTE]
+> The CLI command does everything the app needs to successfully connect to the database, including:
+>
+> - In your App Service app, adds [six app settings](../service-connector/how-to-integrate-mysql.md#php-mysqli-secret--connection-string) with the names `AZURE_MYSQL_<setting>`, which your code can use for its database connection. If the app setting names are already in use, the `AZURE_MYSQL_<connection-name>_<setting>` format is used instead.
+> - In your MySQL database server, allows Azure services to access the MySQL database server.
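Your app code then reads these settings like ordinary environment variables. A quick local simulation with made-up values (in App Service, the real values are injected for you):

```shell
# Hypothetical values standing in for what Service Connector injects.
export AZURE_MYSQL_HOST="contoso-mysql.mysql.database.azure.com"
export AZURE_MYSQL_DBNAME="sampledb"

# Application code can read them like any environment variable.
echo "Connecting to $AZURE_MYSQL_DBNAME on $AZURE_MYSQL_HOST"
```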
-1. Replace the `down` method with the following code for the rollback action:
+## Generate the database schema
- ```php
- public function down()
- {
- Schema::table('tasks', function (Blueprint $table) {
- $table->dropColumn('complete');
- });
- }
- ```
+<a name="devconfig"></a>
-1. In the local terminal window, run Laravel database migrations to make the change in the local database.
+1. Allow access to the Azure database from your local computer by using the [az mysql server firewall-rule create](/cli/azure/mysql/server/firewall-rule#az-mysql-server-firewall-rule-create) command, replacing *\<your-ip-address>* with [your local IPv4 IP address](https://www.whatsmyip.org/).
- ```bash
- php artisan migrate
+ ```azurecli-interactive
+ az mysql server firewall-rule create --name AllowLocalClient --server <mysql-server-name> --resource-group myResourceGroup --start-ip-address=<your-ip-address> --end-ip-address=<your-ip-address>
```
- Based on the [Laravel naming convention](https://laravel.com/docs/5.4/eloquent#defining-models), the model `Task` (see _app/Task.php_) maps to the `tasks` table by default.
-
-### Update application logic
+ > [!TIP]
+ > Once the firewall rule for your local computer is enabled, you can connect to the server like any MySQL server with the `mysql` client. For example:
+ > ```terminal
+ > mysql -u <admin-user>@<mysql-server-name> -h <mysql-server-name>.mysql.database.azure.com -P 3306 -p
+ > ```
-1. Open the *routes/web.php* file. The application defines its routes and business logic here.
+1. Generate the environment variables from the [service connector you created earlier](#connect-the-app-to-the-database) by running the [`az webapp connection list-configuration`](/cli/azure/webapp/connection#az-webapp-connection-list-configuration) command.
-1. At the end of the file, add a route with the following code:
+ ::: zone pivot="platform-windows"
- ```php
- /**
- * Toggle Task completeness
- */
- Route::post('/task/{id}', function ($id) {
- error_log('INFO: post /task/'.$id);
- $task = Task::findOrFail($id);
+ ```powershell
+ $Settings = az webapp connection list-configuration --resource-group myResourceGroup --name <app-name> --connection my_laravel_db --query configurations | ConvertFrom-Json
+ foreach ($s in $Settings) { New-Item -Path Env:$($s.name) -Value $s.value}
+ ```
- $task->complete = !$task->complete;
- $task->save();
+ > [!TIP]
+ > These commands are equivalent to setting the database variables manually like this:
+ >
+ > ```powershell
+ > New-Item -Path Env:AZURE_MYSQL_DBNAME -Value ...
+ > New-Item -Path Env:AZURE_MYSQL_HOST -Value ...
+ > New-Item -Path Env:AZURE_MYSQL_PORT -Value ...
+ > New-Item -Path Env:AZURE_MYSQL_FLAG -Value ...
+ > New-Item -Path Env:AZURE_MYSQL_USERNAME -Value ...
+ > New-Item -Path Env:AZURE_MYSQL_PASSWORD -Value ...
+ > ```
+ ::: zone-end
- return redirect('/');
- });
+ ::: zone pivot="platform-linux"
+
+ ```bash
+ export $(az webapp connection list-configuration --resource-group myResourceGroup --name <app-name> --connection my_laravel_db --query "configurations[].[name,value] | [*].join('=',@)" --output tsv)
```
- The preceding code makes a simple update to the data model by toggling the value of `complete`.
+ > [!TIP]
+ > The [JMESPath query](https://jmespath.org/) in `--query` and the `--output tsv` formatting let you feed the output directly into the `export` command. It's equivalent to setting the database variables manually like this:
+ >
+ > ```bash
+ > export AZURE_MYSQL_DBNAME=...
+ > export AZURE_MYSQL_HOST=...
+ > export AZURE_MYSQL_PORT=...
+ > export AZURE_MYSQL_FLAG=...
+ > export AZURE_MYSQL_USERNAME=...
+ > export AZURE_MYSQL_PASSWORD=...
+ > ```
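To see how this export pattern behaves without calling Azure, you can feed hand-written `name=value` lines to `export` (a sketch with placeholder values):

```shell
# Stand-in for the CLI's tab-separated name=value output (placeholder values).
settings="AZURE_MYSQL_HOST=contoso-mysql.mysql.database.azure.com
AZURE_MYSQL_PORT=3306"

# Unquoted expansion word-splits on newlines, so each line becomes one export.
export $(echo "$settings")
echo "$AZURE_MYSQL_HOST:$AZURE_MYSQL_PORT"
```

This only works cleanly when the values contain no spaces, which holds for these connection settings.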
-### Update the view
+ <!-- export $(az webapp connection list-configuration -g myResourceGroup -n <app-name> --connection my-laravel-db | jq -r '.configurations[] | "\(.name)=\(.value)"')
+ -->
+ ::: zone-end
-1. Open the *resources/views/tasks.blade.php* file. Find the `<tr>` opening tag and replace it with:
+1. Open _config/database.php_ and find the `mysql` section. It's already set up to retrieve connection secrets from environment variables.
- ```html
- <tr class="{{ $task->complete ? 'success' : 'active' }}" >
+ ```php
+ 'mysql' => [
+ 'driver' => 'mysql',
+ 'url' => env('DATABASE_URL'),
+ 'host' => env('DB_HOST', '127.0.0.1'),
+ 'port' => env('DB_PORT', '3306'),
+ 'database' => env('DB_DATABASE', 'forge'),
+ 'username' => env('DB_USERNAME', 'forge'),
+ 'password' => env('DB_PASSWORD', ''),
+ ...
+ ],
```
- The preceding code changes the row color depending on whether the task is complete.
-
-1. In the next line, you have the following code:
+ Change the default environment variables to the ones that the service connector created:
- ```html
- <td class="table-text"><div>{{ $task->name }}</div></td>
+ ```php
+ 'mysql' => [
+ 'driver' => 'mysql',
+ 'url' => env('DATABASE_URL'),
+ 'host' => env('AZURE_MYSQL_HOST', '127.0.0.1'),
+ 'port' => env('AZURE_MYSQL_PORT', '3306'),
+ 'database' => env('AZURE_MYSQL_DBNAME', 'forge'),
+ 'username' => env('AZURE_MYSQL_USERNAME', 'forge'),
+ 'password' => env('AZURE_MYSQL_PASSWORD', ''),
+ ...
+ ],
```
- Replace the entire line with the following code:
-
- ```html
- <td>
- <form action="{{ url('task/'.$task->id) }}" method="POST">
- {{ csrf_field() }}
-
- <button type="submit" class="btn btn-xs">
- <i class="fa {{$task->complete ? 'fa-check-square-o' : 'fa-square-o'}}"></i>
- </button>
- {{ $task->name }}
- </form>
- </td>
- ```
+ > [!TIP]
+ > PHP uses the [getenv](https://www.php.net/manual/en/function.getenv.php) function to access the settings. The Laravel code uses an [env](https://laravel.com/docs/8.x/helpers#method-env) wrapper over the PHP `getenv`.
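The default-value behavior of Laravel's `env()` helper can be mimicked with shell parameter expansion, which is a handy way to check what a setting resolves to (a sketch; `env('AZURE_MYSQL_HOST', '127.0.0.1')` returns the variable if set, else the default):

```shell
# Mimic Laravel's env('AZURE_MYSQL_HOST', '127.0.0.1') fallback behavior.
unset AZURE_MYSQL_HOST
host="${AZURE_MYSQL_HOST:-127.0.0.1}"   # variable unset, so the default wins

export AZURE_MYSQL_HOST="contoso-mysql.mysql.database.azure.com"
host2="${AZURE_MYSQL_HOST:-127.0.0.1}"  # variable set, so it takes precedence
echo "$host $host2"
```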
- The preceding code adds the submit button that references the route that you defined earlier.
+1. By default, Azure Database for MySQL enforces TLS connections from clients. To connect to your MySQL database in Azure, you must use the [_.pem_ certificate supplied by Azure Database for MySQL](../mysql/single-server/how-to-configure-ssl.md). The certificate `BaltimoreCyberTrustRoot.crt.pem` is provided in the sample repository for convenience in this tutorial. At the bottom of the `mysql` section in _config/database.php_, change the `options` parameter to the following code:
-### Test the changes locally
+ ```php
+ 'options' => extension_loaded('pdo_mysql') ? array_filter([
+ PDO::MYSQL_ATTR_SSL_KEY => '/ssl/BaltimoreCyberTrustRoot.crt.pem',
+ ]) : [],
+ ```
-1. In the local terminal window, run the development server from the root directory of the Git repository.
+1. Your sample app is now configured to connect to the Azure MySQL database. Run Laravel database migrations again to create the tables and run the sample app.
```bash
+ php artisan migrate
   php artisan serve
   ```
-1. To see the task status change, go to `http://localhost:8000` and select the checkbox.
+1. Go to `http://localhost:8000`. If the page loads without errors, the PHP application is connecting to the MySQL database in Azure.
- ![Added check box to task](./media/tutorial-php-mysql-app/complete-checkbox.png)
+1. Add a few tasks in the page.
+
+ ![PHP connects successfully to Azure Database for MySQL](./media/tutorial-php-mysql-app/mysql-connect-success.png)
1. To stop PHP, enter `Ctrl + C` in the terminal.
-### Publish changes to Azure
+## Deploy changes to Azure
-1. In the local terminal window, run Laravel database migrations with the production connection string to make the change in the Azure database.
+1. Deploy your code changes by running `az webapp up` again.
- ```bash
- php artisan migrate --env=production --force
+ ::: zone pivot="platform-windows"
+
+ ```azurecli
+ az webapp up --os-type=windows
    ```
-
-1. Commit all the changes in Git, and then push the code changes to Azure.
-
- ```bash
- git add .
- git commit -m "added complete checkbox"
- git push azure main
+
+ ::: zone-end
+
+ ::: zone pivot="platform-linux"
+
+ ```azurecli
+ az webapp up --runtime "php|7.4" --os-type=linux
```
+
+ > [!NOTE]
+ > `--runtime` is still needed for deployment with `az webapp up`. Otherwise, the runtime is detected to be Node.js due to the presence of *package.json*.
-1. Once the `git push` is complete, go to the Azure app and test the new functionality.
+ ::: zone-end
- ![Model and database changes published to Azure](media/tutorial-php-mysql-app/complete-checkbox-published.png)
+1. Browse to `http://<app-name>.azurewebsites.net` and add a few tasks to the list.
-If you add any task, they're retained in the database. Updates to the data schema leave existing data intact.
+ :::image type="content" source="./media/tutorial-php-mysql-app/php-mysql-in-azure.png" alt-text="Screenshot of the Azure app example titled Task List showing new tasks added.":::
-## Stream diagnostic logs
+Congratulations, you're running a data-driven PHP app in Azure App Service.
+## Stream diagnostic logs
While the PHP application runs in Azure App Service, you can get the console logs piped to your terminal. That way, you can get the same diagnostic messages to help you debug application errors.
-To start log streaming, use the [`az webapp log tail`](/cli/azure/webapp/log#az-webapp-log-tail) command in the Cloud Shell.
+To start log streaming, use the [`az webapp log tail`](/cli/azure/webapp/log#az-webapp-log-tail) command.
```azurecli-interactive
-az webapp log tail --name <app_name> --resource-group myResourceGroup
+az webapp log tail
```

Once log streaming has started, refresh the Azure app in the browser to get some web traffic. You can now see console logs piped to the terminal. If you don't see console logs immediately, check again in 30 seconds. To stop log streaming at any time, enter `Ctrl`+`C`.

- ::: zone pivot="platform-linux"
+> [!NOTE]
+> You can also inspect the log files from the browser at `https://<app-name>.scm.azurewebsites.net/api/logs/docker`.
::: zone-end

> [!TIP]
> A PHP application can use the standard [error_log()](https://php.net/manual/function.error-log.php) to output to the console. The sample application uses this approach in _app/Http/routes.php_.
>
-> As a web framework, [Laravel uses Monolog](https://laravel.com/docs/5.4/errors) as the logging provider. To see how to get Monolog to output messages to the console, see [PHP: How to use monolog to log to console (php://out)](https://stackoverflow.com/questions/25787258/php-how-to-use-monolog-to-log-to-console-php-out).
->
+> As a web framework, [Laravel uses Monolog](https://laravel.com/docs/8.x/logging) as the logging provider. To see how to get Monolog to output messages to the console, see [PHP: How to use monolog to log to console (php://out)](https://stackoverflow.com/questions/25787258/php-how-to-use-monolog-to-log-to-console-php-out).
>
-## Manage the Azure app
-
-1. Go to the [Azure portal](https://portal.azure.com) to manage the app you created.
-
-1. From the left menu, select **App Services**, and then select the name of your Azure app.
-
- ![Portal navigation to Azure app](./media/tutorial-php-mysql-app/access-portal.png)
-
- You see your app's Overview page. In this page, you can do basic management tasks like stop, start, restart, browse, and delete.
-
- The left menu provides pages for configuring your app.
-
- ![App Service page in Azure portal](./media/tutorial-php-mysql-app/web-app-blade.png)
- [!INCLUDE [cli-samples-clean-up](../../includes/cli-samples-clean-up.md)] <a name="next"></a>
app-service Web Sites Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/web-sites-restore.md
- Title: Restore app from backup
-description: Learn how to restore your app from a backup. Certain linked databases can be restored along with the app in one operation.
-- Previously updated : 07/06/2016---
-# Restore an app in Azure
-This article shows you how to restore an app in [Azure App Service](../app-service/overview.md)
-that you have previously backed up (see [Back up your app in Azure](manage-backup.md)). You can restore your app
-with its linked databases on-demand to a previous state, or create a new app based on one of
-your original app's backups. Azure App Service supports the following databases for backup and restore:
-- [SQL Database](https://azure.microsoft.com/services/sql-database/)-- [Azure Database for MySQL](https://azure.microsoft.com/services/mysql)-- [Azure Database for PostgreSQL](https://azure.microsoft.com/services/postgresql)-- [MySQL in-app](https://blogs.msdn.microsoft.com/appserviceteam/2017/03/06/announcing-general-availability-for-mysql-in-app)-
-Restoring from backups is available to apps running in **Standard** and **Premium** tier. For information about scaling
-up your app, see [Scale up an app in Azure](manage-scale-up.md). **Premium** tier allows a greater number of daily
-backups to be performed than **Standard** tier.
-
-<a name="PreviousBackup"></a>
-
-## Restore an app from an existing backup
-1. On the **Settings** page of your app in the Azure portal, click **Backups** to display the **Backups** page. Then click **Restore**.
-
- ![Choose restore now][ChooseRestoreNow]
-2. In the **Restore** page, first select the backup source.
-
- ![Screenshot that shows where to select the backup source.](./media/web-sites-restore/021ChooseSource1.png)
-
- The **App backup** option shows you all the existing backups of the current app, and you can easily select one.
- The **Storage** option lets you select any backup ZIP file from any existing Azure Storage account and container in your subscription.
- If you're trying to restore a backup of another app, use the **Storage** option.
-3. Then, specify the destination for the app restore in **Restore destination**.
-
- ![Screenshot that shows where to specify the destination for the app restore.](./media/web-sites-restore/022ChooseDestination1.png)
-
- > [!WARNING]
- > If you choose **Overwrite**, all existing data in your current app is erased and overwritten. Before you click **OK**,
- > make sure that it is exactly what you want to do.
- >
- >
-
- > [!WARNING]
- > If the App Service is writing data to the database while you are restoring it, it may result in symptoms such as violation of PRIMARY KEY and data loss. It is suggested to stop the App Service first before you start to restore the database.
- >
- >
-
- You can select **Existing App** to restore the app backup to another app in the same resource group. Before you use this option, you should have already created another app in your resource group with mirroring database configuration to the one defined in the app backup. You can also Create a **New** app to restore your content to.
-
-4. Click **OK**.
-
-<a name="StorageAccount"></a>
-
-## Download or delete a backup from a storage account
-1. From the main **Browse** page of the Azure portal, select **Storage accounts**. A list of your existing storage accounts is displayed.
-2. Select the storage account that contains the backup that you want to download or delete. The page for the storage account is displayed.
-3. In the storage account page, select the container you want
-
- ![View Containers][ViewContainers]
-4. Select backup file you want to download or delete.
-
- ![ViewContainers](./media/web-sites-restore/03ViewFiles.png)
-5. Click **Download** or **Delete** depending on what you want to do.
-
-<a name="OperationLogs"></a>
-
-## Monitor a restore operation
-To see details about the success or failure of the app restore operation, navigate to the **Activity Log** page in the Azure portal.
-
-
-Scroll down to find the desired restore operation and click to select it.
-
-The details page displays the available information related to the restore operation.
-
-## Automate with scripts
-
-You can automate backup management with scripts, using the [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/).
-
-For samples, see:
--- [Azure CLI samples](samples-cli.md)-- [Azure PowerShell samples](samples-powershell.md)-
-<!-- ## Next Steps
-You can backup and restore App Service apps using REST API. -->
--
-<!-- IMAGES -->
-[ChooseRestoreNow]: ./media/web-sites-restore/02ChooseRestoreNow1.png
-[ViewContainers]: ./media/web-sites-restore/03ViewContainers.png
automation Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Automation description: Lists Azure Policy Regulatory Compliance controls available for Azure Automation. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 05/10/2022 Last updated : 06/16/2022
azure-app-configuration Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Configuration description: Lists Azure Policy Regulatory Compliance controls available for Azure App Configuration. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 05/10/2022 Last updated : 06/16/2022
azure-arc Create Postgresql Hyperscale Server Group Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-postgresql-hyperscale-server-group-azure-data-studio.md
In a few minutes, your creation should successfully complete.
- [Scale out your Azure Database for PostgreSQL Hyperscale server group](scale-out-in-postgresql-hyperscale-server-group.md) - [Storage configuration and Kubernetes storage concepts](storage-configuration.md)-- [Kubernetes resource model](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/resources.md#resource-quantities)
+- [Kubernetes resource model](https://github.com/kubernetes/design-proposals-archive/blob/main/scheduling/resources.md#resource-quantities)
azure-arc Create Postgresql Hyperscale Server Group Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-postgresql-hyperscale-server-group-azure-portal.md
You might want to read the following important topics before you proceed. (If you'r
- [Overview of Azure Arc-enabled data services](overview.md) - [Connectivity modes and requirements](connectivity.md) - [Storage configuration and Kubernetes storage concepts](storage-configuration.md)-- [Kubernetes resource model](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/resources.md#resource-quantities)
+- [Kubernetes resource model](https://github.com/kubernetes/design-proposals-archive/blob/main/scheduling/resources.md#resource-quantities)
If you prefer to try things out without provisioning a full environment yourself, get started quickly with [Azure Arc jumpstart](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_data/). You can do this on Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE), or in an Azure virtual machine (VM).
Be aware of the following considerations when you're deploying:
- [Scale out your Azure Arc-enabled for PostgreSQL Hyperscale server group](scale-out-in-postgresql-hyperscale-server-group.md) - [Storage configuration and Kubernetes storage concepts](storage-configuration.md) - [Expanding persistent volume claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims)-- [Kubernetes resource model](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/resources.md#resource-quantities)
+- [Kubernetes resource model](https://github.com/kubernetes/design-proposals-archive/blob/main/scheduling/resources.md#resource-quantities)
azure-arc Create Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-postgresql-hyperscale-server-group.md
There are important topics you may want to read before you proceed with creation:
- [Overview of Azure Arc-enabled data services](overview.md) - [Connectivity modes and requirements](connectivity.md) - [Storage configuration and Kubernetes storage concepts](storage-configuration.md)-- [Kubernetes resource model](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/resources.md#resource-quantities)
+- [Kubernetes resource model](https://github.com/kubernetes/design-proposals-archive/blob/main/scheduling/resources.md#resource-quantities)
If you prefer to try out things without provisioning a full environment yourself, get started quickly with [Azure Arc Jumpstart](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_data/) on Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE) or in an Azure VM.
psql postgresql://postgres:<EnterYourPassword>@10.0.0.4:30655
- [Scale out your Azure Arc-enabled for PostgreSQL Hyperscale server group](scale-out-in-postgresql-hyperscale-server-group.md) - [Storage configuration and Kubernetes storage concepts](storage-configuration.md) - [Expanding Persistent volume claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims)-- [Kubernetes resource model](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/resources.md#resource-quantities)
+- [Kubernetes resource model](https://github.com/kubernetes/design-proposals-archive/blob/main/scheduling/resources.md#resource-quantities)
azure-arc Scale Out In Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/scale-out-in-postgresql-hyperscale-server-group.md
The scale-in operation is an online operation. Your applications continue to acc
> \* In the documents above, skip the sections **Sign in to the Azure portal** and **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud, but the other parts of the documents are directly applicable to your Azure Arc-enabled PostgreSQL Hyperscale. - [Storage configuration and Kubernetes storage concepts](storage-configuration.md)-- [Kubernetes resource model](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/resources.md#resource-quantities)
+- [Kubernetes resource model](https://github.com/kubernetes/design-proposals-archive/blob/main/scheduling/resources.md#resource-quantities)
azure-arc Scale Up Down Postgresql Hyperscale Server Group Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/scale-up-down-postgresql-hyperscale-server-group-using-cli.md
In a default configuration, only the minimum memory is set to 256Mi as it is the
> Setting a minimum does not mean the server group will necessarily use that minimum. It means that if the server group needs it, it is guaranteed to be allocated at least this minimum. For example, let's consider we set `--minCpu 2`. It does not mean that the server group will be using at least 2 vCores at all times. It instead means that the server group may start using less than 2 vCores if it does not need that much and it is guaranteed to be allocated at least 2 vCores if it needs them later on. It implies that the Kubernetes cluster allocates resources to other workloads in such a way that it can allocate 2 vCores to the server group if it ever needs them. Also, scaling up and down is not an online operation, as it requires the restart of the Kubernetes pods. >[!NOTE]
->Before you modify the configuration of your system please make sure to familiarize yourself with the Kubernetes resource model [here](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/resources.md#resource-quantities)
+>Before you modify the configuration of your system please make sure to familiarize yourself with the Kubernetes resource model [here](https://github.com/kubernetes/design-proposals-archive/blob/main/scheduling/resources.md#resource-quantities)
## Scale up and down the server group
az postgres arc-server edit -n postgres01 --cores-request coordinator='',worker=
- [Scale out your Azure Database for PostgreSQL Hyperscale server group](scale-out-in-postgresql-hyperscale-server-group.md) - [Storage configuration and Kubernetes storage concepts](storage-configuration.md)-- [Kubernetes resource model](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/resources.md#resource-quantities)
+- [Kubernetes resource model](https://github.com/kubernetes/design-proposals-archive/blob/main/scheduling/resources.md#resource-quantities)
azure-arc Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Arc-enabled Kubernetes description: Sample Azure Resource Graph queries for Azure Arc-enabled Kubernetes showing use of resource types and tables to access Azure Arc-enabled Kubernetes related resources and properties. Previously updated : 03/08/2022 Last updated : 06/16/2022
azure-arc Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Arc description: Sample Azure Resource Graph queries for Azure Arc showing use of resource types and tables to access Azure Arc related resources and properties. Previously updated : 03/08/2022 Last updated : 06/16/2022
azure-arc Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Arc-enabled servers description: Sample Azure Resource Graph queries for Azure Arc-enabled servers showing use of resource types and tables to access Azure Arc-enabled servers related resources and properties. Previously updated : 03/08/2022 Last updated : 06/16/2022
azure-arc Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Arc-enabled servers (preview) description: Lists Azure Policy Regulatory Compliance controls available for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 05/10/2022 Last updated : 06/16/2022
azure-cache-for-redis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cache for Redis description: Lists Azure Policy Regulatory Compliance controls available for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 05/10/2022 Last updated : 06/16/2022
azure-functions Durable Functions Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-billing.md
Title: Durable functions billing - Azure Functions
description: Learn about the internal behaviors of Durable Functions and how they affect billing for Azure Functions. Previously updated : 08/31/2019 Last updated : 05/26/2022 #Customer intent: As a developer, I want to understand how using Durable Functions influences my Azure consumption bill.
When executing orchestrator functions in Azure Functions [Consumption plan](../c
## Awaiting and yielding in orchestrator functions
-When an orchestrator function waits for an asynchronous action to finish by using **await** in C# or **yield** in JavaScript, the runtime considers that particular execution to be finished. The billing for the orchestrator function stops at that point. It doesn't resume until the next orchestrator function replay. You aren't billed for any time spent awaiting or yielding in an orchestrator function.
+When an orchestrator function waits for an asynchronous task to complete, the runtime considers that particular function invocation to be finished. The billing for the orchestrator function stops at that point. It doesn't resume until the next orchestrator function replay. You aren't billed for any time spent awaiting or yielding in an orchestrator function.
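This billing boundary can be sketched in plain Python, with no Azure SDK. The generator-based runner, activity name, and history format below are illustrative stand-ins, not Durable Functions APIs: each `yield` ends the current (billable) invocation, and the function only runs again on the next replay.

```python
# Toy replay sketch: an orchestrator modeled as a generator. The runtime
# drives it with send(); each yield ends the current billable invocation.
def orchestrator():
    result = yield "call_activity:SayHello"   # billing stops here
    return f"Got: {result}"

def replay(history):
    """Re-run the generator from the start, feeding completed results."""
    gen = orchestrator()
    request = next(gen)                       # run until the first yield
    for completed in history:
        try:
            request = gen.send(completed)     # resume with a stored result
        except StopIteration as done:
            return done.value                 # orchestration finished
    return None  # still waiting; invocation ends, no billing while idle

# First replay: activity not done yet -> invocation ends, nothing billed
# while waiting. Second replay: result available -> orchestrator completes.
assert replay([]) is None
assert replay(["Hello!"]) == "Got: Hello!"
```

Each replay re-executes the whole generator from the top, which is why you're billed for replays but not for the idle time between them.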
> [!NOTE]
-> Functions calling other functions is considered by some to be an antipattern. This is because of a problem known as _double billing_. When a function calls another function directly, both run at the same time. The called function is actively running code while the calling function is waiting for a response. In this case, you must pay for the time the calling function spends waiting for the called function to run.
+> Functions calling other functions is considered by some to be a serverless anti-pattern. This is because of a problem known as _double billing_. When a function calls another function directly, both run at the same time. The called function is actively running code while the calling function is waiting for a response. In this case, you must pay for the time the calling function spends waiting for the called function to run.
>
> There is no double billing in orchestrator functions. An orchestrator function's billing stops while it waits for the result of an activity function or sub-orchestration.

## Durable HTTP polling
-Orchestrator functions can make long-running HTTP calls to external endpoints as described in the [HTTP features article](durable-functions-http-features.md). The **CallHttpAsync** method in C# and the **callHttp** method in JavaScript might internally poll an HTTP endpoint while following the [asynchronous 202 pattern](durable-functions-http-features.md#http-202-handling).
+Orchestrator functions can make long-running HTTP calls to external endpoints as described in the [HTTP features article](durable-functions-http-features.md). The _"call HTTP"_ APIs might internally poll an HTTP endpoint while following the [asynchronous 202 pattern](durable-functions-http-features.md#http-202-handling).
There currently isn't direct billing for internal HTTP polling operations. However, internal polling might cause the orchestrator function to periodically replay. You'll be billed standard charges for these internal function replays.
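The asynchronous 202 pattern mentioned above boils down to re-fetching a status endpoint until it stops answering `202 Accepted`. A minimal plain-Python sketch (the `fetch` callable and response tuples are illustrative, not Durable Functions APIs):

```python
# Toy sketch of the asynchronous HTTP 202 polling pattern: keep calling
# the status endpoint until it returns something other than 202 Accepted.
def poll_until_done(fetch, max_polls=10):
    for _ in range(max_polls):
        status, body = fetch()
        if status != 202:          # 200 (or an error) means we're done
            return status, body
    raise TimeoutError("operation did not complete in time")

# Simulate an endpoint that answers 202 twice, then 200 with the result.
responses = iter([(202, None), (202, None), (200, "result payload")])
assert poll_until_done(lambda: next(responses)) == (200, "result payload")
```

In Durable Functions, each such internal poll can trigger a replay of the orchestrator, which is where the standard replay charges come from.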
azure-functions Durable Functions Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-bindings.md
Title: Bindings for Durable Functions - Azure description: How to use triggers and bindings for the Durable Functions extension for Azure Functions. Previously updated : 02/08/2022 Last updated : 05/27/2022
The [Durable Functions](durable-functions-overview.md) extension introduces thre
The orchestration trigger enables you to author [durable orchestrator functions](durable-functions-types-features-overview.md#orchestrator-functions). This trigger executes when a new orchestration instance is scheduled and when an existing orchestration instance receives an event. Examples of events that can trigger orchestrator functions include durable timer expirations, activity function responses, and events raised by external clients.
-When you author functions in .NET, the orchestration trigger is configured using the [OrchestrationTriggerAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.orchestrationtriggerattribute) .NET attribute.
+When you author functions in .NET, the orchestration trigger is configured using the [OrchestrationTriggerAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.orchestrationtriggerattribute) .NET attribute. For Java, the `@DurableOrchestrationTrigger` annotation is used.
When you write orchestrator functions in scripting languages, like JavaScript, Python, or PowerShell, the orchestration trigger is defined by the following JSON object in the `bindings` array of the *function.json* file:
The orchestration trigger binding supports both inputs and outputs. Here are som
### Trigger sample
-The following example code shows what the simplest "Hello World" orchestrator function might look like:
+The following example code shows what the simplest "Hello World" orchestrator function might look like. Note that this example orchestrator doesn't actually schedule any tasks.
# [C#](#tab/csharp)
The following example code shows what the simplest "Hello World" orchestrator fu
public static string Run([OrchestrationTrigger] IDurableOrchestrationContext context) { string name = context.GetInput<string>();
- // ... do some work ...
return $"Hello {name}!"; } ```
const df = require("durable-functions");
module.exports = df.orchestrator(function*(context) { const name = context.df.getInput();
- // ... do some work ...
return `Hello ${name}!`; }); ```
import azure.durable_functions as df
def orchestrator_function(context: df.DurableOrchestrationContext): input = context.get_input()
- # Do some work
return f"Hello {input['name']}!" main = df.Orchestrator.create(orchestrator_function)
main = df.Orchestrator.create(orchestrator_function)
param($Context) $input = $Context.Input
+$input
+```
-# Do some work
+# [Java](#tab/java)
-$output
+```java
+@FunctionName("HelloWorldOrchestration")
+public String helloWorldOrchestration(
+ @DurableOrchestrationTrigger(name = "runtimeState") String runtimeState) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ return String.format("Hello %s!", ctx.getInput(String.class));
+ });
+}
```
$output = Invoke-DurableActivity -FunctionName 'SayHello' -Input $name
$output ```
+# [Java](#tab/java)
+
+```java
+@FunctionName("HelloWorld")
+public String helloWorldOrchestration(
+ @DurableOrchestrationTrigger(name = "runtimeState") String runtimeState) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ String input = ctx.getInput(String.class);
+ String result = ctx.callActivity("SayHello", input, String.class).await();
+ return result;
+ });
+}
+```
+ ## Activity trigger The activity trigger enables you to author functions that are called by orchestrator functions, known as [activity functions](durable-functions-types-features-overview.md#activity-functions).
-If you're authoring functions in .NET, the activity trigger is configured using the [ActivityTriggerAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.activitytriggerattribute) .NET attribute.
+If you're authoring functions in .NET, the activity trigger is configured using the [ActivityTriggerAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.activitytriggerattribute) .NET attribute. For Java, the `@DurableActivityTrigger` annotation is used.
If you're using JavaScript, Python, or PowerShell, the activity trigger is defined by the following JSON object in the `bindings` array of *function.json*:
param($name)
"Hello $name!"
```
+# [Java](#tab/java)
+
+```java
+@FunctionName("SayHello")
+public String sayHello(@DurableActivityTrigger(name = "name") String name) {
+ return String.format("Hello %s!", name);
+}
+```
+ ### Using input and output bindings
The orchestration client binding enables you to write functions that interact wi
* Send events to them while they're running. * Purge instance history.
-If you're using .NET, you can bind to the orchestration client by using the [DurableClientAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.durableclientattribute) attribute ([OrchestrationClientAttribute](/dotnet/api/microsoft.azure.webjobs.orchestrationclientattribute?view=azure-dotnet-legacy&preserve-view=true) in Durable Functions v1.x).
+If you're using .NET, you can bind to the orchestration client by using the [DurableClientAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.durableclientattribute) attribute ([OrchestrationClientAttribute](/dotnet/api/microsoft.azure.webjobs.orchestrationclientattribute?view=azure-dotnet-legacy&preserve-view=true) in Durable Functions v1.x). For Java, use the `@DurableClientInput` annotation.
If you're using scripting languages, like JavaScript, Python, or PowerShell, the durable client trigger is defined by the following JSON object in the `bindings` array of *function.json*:
If you're using scripting languages, like JavaScript, Python, or PowerShell, the
### Client usage
-In .NET functions, you typically bind to [IDurableClient](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.idurableclient) ([DurableOrchestrationClient](/dotnet/api/microsoft.azure.webjobs.durableorchestrationclient?view=azure-dotnet-legacy&preserve-view=true) in Durable Functions v1.x), which gives you full access to all orchestration client APIs supported by Durable Functions. In other languages, you must use the language-specific SDK to get access to a client object.
+In .NET functions, you typically bind to [IDurableClient](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.idurableclient) ([DurableOrchestrationClient](/dotnet/api/microsoft.azure.webjobs.durableorchestrationclient?view=azure-dotnet-legacy&preserve-view=true) in Durable Functions v1.x), which gives you full access to all orchestration client APIs supported by Durable Functions. For Java, you bind to the `DurableClientContext` class. In other languages, you must use the language-specific SDK to get access to a client object.
Here's an example queue-triggered function that starts a "HelloWorld" orchestration.
module.exports = async function (context) {
# [Python](#tab/python)
-**function.json**
+**`function.json`**
```json { "bindings": [
module.exports = async function (context) {
} ```
-**__init__.py**
+**`__init__.py`**
```python import json import azure.functions as func
param([string] $input, $TriggerMetadata)
$InstanceId = Start-DurableOrchestration -FunctionName $FunctionName -Input $input ```
+# [Java](#tab/java)
+
+```java
+@FunctionName("QueueStart")
+public void queueStart(
+ @QueueTrigger(name = "input", queueName = "durable-function-trigger", connection = "Storage") String input,
+ @DurableClientInput(name = "durableContext") DurableClientContext durableContext) {
+ // Orchestration input comes from the queue message content.
+ durableContext.getClient().scheduleNewOrchestrationInstance("HelloWorld", input);
+}
+```
+ More details on starting instances can be found in [Instance management](durable-functions-instance-management.md).
If you're using JavaScript, Python, or PowerShell, the entity trigger is defined
} ```
+> [!NOTE]
+> Entity triggers are not yet supported in Java.
+ By default, the name of an entity is the name of the function. ### Trigger behavior
If you're using scripting languages (like C# scripting, JavaScript, or Python) f
} ```
+> [!NOTE]
+> Entity clients are not yet supported in Java.
+ * `taskHub` - Used in scenarios where multiple function apps share the same storage account but need to be isolated from each other. If not specified, the default value from `host.json` is used. This value must match the value used by the target entity functions. * `connectionName` - The name of an app setting that contains a storage account connection string. The storage account represented by this connection string must be the same one used by the target entity functions. If not specified, the default storage account connection string for the function app is used.
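For illustration, a *function.json* client binding that sets both of these optional properties might look like the following sketch (the property values are placeholders, and the exact `type` name depends on your Durable Functions extension version):

```json
{
  "name": "client",
  "type": "durableClient",
  "direction": "in",
  "taskHub": "MyTaskHub",
  "connectionName": "MyStorageSetting"
}
```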
azure-functions Durable Functions Code Constraints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-code-constraints.md
Title: Durable orchestrator code constraints - Azure Functions
description: Orchestration function replay and code constraints for Azure Durable Functions. Previously updated : 11/02/2019 Last updated : 05/06/2022 #Customer intent: As a developer, I want to learn what coding restrictions exist for durable orchestrations and why they exist so that I can avoid introducing bugs in my app logic.
This section provides some simple guidelines that help ensure your code is deter
Orchestrator functions can call any API in their target languages. However, it's important that orchestrator functions call only deterministic APIs. A *deterministic API* is an API that always returns the same value given the same input, no matter when or how often it's called.
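To make the distinction concrete, here is a plain-Python illustration (not Durable Functions code; the function names are made up): a deterministic function returns the same output for the same input on every call, while one that reads the clock or a random source does not, so it would diverge between replays.

```python
import random
import time

def deterministic(x):
    # Same input always yields the same output: safe in an orchestrator.
    return x * 2

def nondeterministic(x):
    # Depends on wall-clock time and randomness: differs on every replay.
    return x + time.time() + random.random()

# Calling the deterministic function twice with the same input agrees.
assert deterministic(21) == deterministic(21) == 42
```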
-The following table shows examples of APIs that you should avoid because they are *not* deterministic. These restrictions apply only to orchestrator functions. Other function types don't have such restrictions.
+The following sections provide guidance on APIs and patterns that you should avoid because they are *not* deterministic. These restrictions apply only to orchestrator functions. Other function types don't have such restrictions.
+
+> [!NOTE]
+> Several types of code constraints are described below. This list is unfortunately not comprehensive, and some use cases might not be covered. The most important thing to consider when writing orchestrator code is whether an API you're using is deterministic. Once you're comfortable thinking this way, you can recognize which APIs are safe to use without needing to refer to this list.
+
+#### Dates and times
+
+APIs that return the current date or time are nondeterministic and should never be used in orchestrator functions. This is because each orchestrator function replay will produce a different value. You should instead use the Durable Functions equivalent API for getting the current date or time, which remains consistent across replays.
+
+# [C#](#tab/csharp)
+
+Do not use `DateTime.Now`, `DateTime.UtcNow`, or equivalent APIs for getting the current time. Classes such as [`Stopwatch`](/dotnet/api/system.diagnostics.stopwatch) should also be avoided. For .NET in-process orchestrator functions, use the `IDurableOrchestrationContext.CurrentUtcDateTime` property to get the current time. For .NET isolated orchestrator functions, use the `TaskOrchestrationContext.CurrentDateTimeUtc` property to get the current time.
+
+```csharp
+DateTime startTime = context.CurrentUtcDateTime;
+// do some work
+TimeSpan totalTime = context.CurrentUtcDateTime.Subtract(startTime);
+```
+
+# [JavaScript](#tab/javascript)
+
+Do not use APIs like `new Date()` or `Date.now()` to get the current date and time. Instead, use `DurableOrchestrationContext.currentUtcDateTime`.
+
+```javascript
+// create a timer that expires 2 minutes from now
+const expiration = moment.utc(context.df.currentUtcDateTime).add(2, "m");
+const timeoutTask = context.df.createTimer(expiration.toDate());
+```
+
+# [Python](#tab/python)
+
+Do not use `datetime.now()`, `gmtime()`, or similar APIs to get the current time. Instead, use `DurableOrchestrationContext.current_utc_datetime`.
+
+```python
+# create a timer that expires 2 minutes from now
+expiration = context.current_utc_datetime + timedelta(seconds=120)
+timeout_task = context.create_timer(expiration)
+```
+
+# [PowerShell](#tab/powershell)
+
+Do not use cmdlets like `Get-Date` or .NET APIs like `[System.DateTime]::Now` to get the current time. Instead, use `$Context.CurrentUtcDateTime`.
+
+```powershell
+$expiryTime = $Context.Input.ExpiryTime
+while ($Context.CurrentUtcDateTime -lt $expiryTime) {
+ # do work
+}
+```
+
+# [Java](#tab/java)
+
+Do not use APIs like `LocalDateTime.now()` or `Instant.now()` to get the current date and time. Instead, use `TaskOrchestrationContext.getCurrentInstant()`.
+
+```java
+Instant startTime = ctx.getCurrentInstant();
+// do some work
+Duration totalTime = Duration.between(startTime, ctx.getCurrentInstant());
+```
+++
+#### GUIDs and UUIDs
+
+APIs that return a random GUID or UUID are nondeterministic because the generated value is different for each replay. Depending on which language you use, a built-in API for generating deterministic GUIDs or UUIDs may be available. Otherwise, use an activity function to return a randomly generated GUID or UUID.
+
+# [C#](#tab/csharp)
+
+Do not use APIs like `Guid.NewGuid()` to generate random GUIDs. Instead, use the context object's `NewGuid()` API to generate a random GUID that's safe for orchestrator replay.
+
+```csharp
+Guid randomGuid = context.NewGuid();
+```
+
+> [!NOTE]
+> GUIDs generated with orchestration context APIs are [version 5 UUIDs](https://en.wikipedia.org/wiki/Universally_unique_identifier#Versions_3_and_5_(namespace_name-based)).
++
+# [JavaScript](#tab/javascript)
+
+Do not use the `uuid` module or the `crypto.randomUUID()` function to generate random UUIDs. Instead, use the context object's built-in `newGuid()` method to generate a random GUID that's safe for orchestrator replay.
+
+```javascript
+const randomGuid = context.df.newGuid();
+```
+
+> [!NOTE]
+> UUIDs generated with orchestration context APIs are [version 5 UUIDs](https://en.wikipedia.org/wiki/Universally_unique_identifier#Versions_3_and_5_(namespace_name-based)).
-| API category | Reason | Workaround |
-| | | - |
-| Dates and times | APIs that return the current date or time are nondeterministic because the returned value is different for each replay. | Use the [CurrentUtcDateTime](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.idurableorchestrationcontext.currentutcdatetime) property in .NET, the `currentUtcDateTime` API in JavaScript, or the `current_utc_datetime` API in Python, which are safe for replay. Similarly, avoid "stopwatch" type objects (like the [Stopwatch class in .NET](/dotnet/api/system.diagnostics.stopwatch)). If you need to measure elapsed time, store the value of `CurrentUtcDateTime` at the beginning of execution, and subtract that value from `CurrentUtcDateTime` when execution concludes. |
-| GUIDs and UUIDs | APIs that return a random GUID or UUID are nondeterministic because the generated value is different for each replay. | Use [NewGuid](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.idurableorchestrationcontext.newguid) in .NET, `newGuid` in JavaScript, and `new_guid` in Python to safely generate random GUIDs. |
-| Random numbers | APIs that return random numbers are nondeterministic because the generated value is different for each replay. | Use an activity function to return random numbers to an orchestration. The return values of activity functions are always safe for replay. |
-| Bindings | Input and output bindings typically do I/O and are nondeterministic. An orchestrator function must not directly use even the [orchestration client](durable-functions-bindings.md#orchestration-client) and [entity client](durable-functions-bindings.md#entity-client) bindings. | Use input and output bindings inside client or activity functions. |
-| Network | Network calls involve external systems and are nondeterministic. | Use activity functions to make network calls. If you need to make an HTTP call from your orchestrator function, you also can use the [durable HTTP APIs](durable-functions-http-features.md#consuming-http-apis). |
-| Blocking APIs | Blocking APIs like `Thread.Sleep` in .NET and similar APIs can cause performance and scale problems for orchestrator functions and should be avoided. In the Azure Functions Consumption plan, they can even result in unnecessary runtime charges. | Use alternatives to blocking APIs when they're available. For example, use `CreateTimer` to introduce delays in orchestration execution. [Durable timer](durable-functions-timers.md) delays don't count towards the execution time of an orchestrator function. |
-| Async APIs | Orchestrator code must never start any async operation except by using the `IDurableOrchestrationContext` API, the `context.df` API in JavaScript, or the `context` API in Python. For example, you can't use `Task.Run`, `Task.Delay`, and `HttpClient.SendAsync` in .NET or `setTimeout` and `setInterval` in JavaScript. The Durable Task Framework runs orchestrator code on a single thread. It can't interact with any other threads that might be called by other async APIs. | An orchestrator function should make only durable async calls. Activity functions should make any other async API calls. |
-| Async JavaScript functions | You can't declare JavaScript orchestrator functions as `async` because the Node.js runtime doesn't guarantee that asynchronous functions are deterministic. | Declare JavaScript orchestrator functions as synchronous generator functions |
-| Python Coroutines | You can't declare Python orchestrator functions as coroutines, i.e declare them with the `async` keyword, because coroutine semantics do not align with the Durable Functions replay model. | Declare Python orchestrator functions as generators, meaning that you should expect the `context` API to use `yield` instead of `await`. |
-| Threading APIs | The Durable Task Framework runs orchestrator code on a single thread and can't interact with any other threads. Introducing new threads into an orchestration's execution can result in nondeterministic execution or deadlocks. | Orchestrator functions should almost never use threading APIs. For example, in .NET, avoid using `ConfigureAwait(continueOnCapturedContext: false)`; this ensures task continuations run on the orchestrator function's original `SynchronizationContext`. If such APIs are necessary, limit their use to only activity functions. |
-| Static variables | Avoid using nonconstant static variables in orchestrator functions because their values can change over time, resulting in nondeterministic runtime behavior. | Use constants, or limit the use of static variables to activity functions. |
-| Environment variables | Don't use environment variables in orchestrator functions. Their values can change over time, resulting in nondeterministic runtime behavior. | Environment variables must be referenced only from within client functions or activity functions. |
-| Infinite loops | Avoid infinite loops in orchestrator functions. Because the Durable Task Framework saves execution history as the orchestration function progresses, an infinite loop can cause an orchestrator instance to run out of memory. | For infinite loop scenarios, use APIs like `ContinueAsNew` in .NET, `continueAsNew` in JavaScript, or `continue_as_new` in Python to restart the function execution and to discard previous execution history. |
-Although applying these constraints might seem difficult at first, in practice they're easy to follow.
+# [Python](#tab/python)
-The Durable Task Framework attempts to detect violations of the preceding rules. If it finds a violation, the framework throws a **NonDeterministicOrchestrationException** exception. However, this detection behavior won't catch all violations, and you shouldn't depend on it.
+Do not use the `uuid` module to generate random UUIDs. Instead, use the context object's built-in `new_guid()` method to generate a random UUID that's safe for orchestrator replay.
+
+```python
+random_guid = context.new_guid()
+```
+
+> [!NOTE]
+> UUIDs generated with orchestration context APIs are [version 5 UUIDs](https://en.wikipedia.org/wiki/Universally_unique_identifier#Versions_3_and_5_(namespace_name-based)).
+
+# [PowerShell](#tab/powershell)
+
+Do not use cmdlets like `New-Guid` or .NET APIs like `[System.Guid]::NewGuid()` directly in orchestrator functions. Instead, generate random GUIDs in activity functions and return them to the orchestrator functions.
+
+# [Java](#tab/java)
+
+Do not use the `java.util.UUID.randomUUID()` or similar methods for generating new UUIDs directly in orchestrator functions. Instead, generate random UUIDs in activity functions and return them to the orchestrator functions.
+++
+#### Random numbers
+
+Use an activity function to return random numbers to an orchestrator function. The return values of activity functions are always safe for replay because they are saved into the orchestration history.
+
+Alternatively, a random number generator with a fixed seed value can be used directly in an orchestrator function. This approach is safe as long as the same sequence of numbers is generated for each orchestration replay.
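The fixed-seed approach can be illustrated outside of any orchestration framework (this is a plain-Python sketch, not Durable Functions API code): a generator seeded with a replay-stable value, such as a hash of the orchestration instance ID, produces the identical sequence every time it's regenerated.

```python
import random

def deterministic_randoms(seed, count):
    """Return a reproducible sequence of pseudo-random numbers.

    Seeding with a value that is stable across replays (for example,
    a hash of the orchestration instance ID) makes the sequence safe
    to regenerate during orchestrator replay.
    """
    rng = random.Random(seed)  # isolated generator; doesn't touch global state
    return [rng.randint(0, 999) for _ in range(count)]

# Two "replays" with the same seed yield the identical sequence.
first_replay = deterministic_randoms(seed=42, count=5)
second_replay = deterministic_randoms(seed=42, count=5)
assert first_replay == second_replay
```

Note the use of a `random.Random` instance rather than the module-level functions, so the seeded state can't be perturbed by other code sharing the global generator.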
+
+#### Bindings
+
+An orchestrator function must not use any bindings, including even the [orchestration client](durable-functions-bindings.md#orchestration-client) and [entity client](durable-functions-bindings.md#entity-client) bindings. Always use input and output bindings from within a client or activity function. This is important because orchestrator functions may be replayed multiple times, causing nondeterministic and duplicate I/O with external systems.
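Why replay duplicates I/O can be sketched in framework-agnostic terms (an illustrative model, not actual Durable Functions code): the orchestrator body is re-executed on every replay, so a side effect performed directly in it repeats, while a call whose result is recorded in history executes only once.

```python
# Simplified replay model: direct I/O in the orchestrator body repeats on
# every replay; a history-backed "activity" call executes exactly once.

side_effects = []   # stands in for an external system (queue, blob, database)
history = {}        # stands in for the orchestration's recorded history

def run_activity_once(call_id, action):
    """Execute `action` on the first replay only; later replays reuse the recorded result."""
    if call_id not in history:
        history[call_id] = action()
    return history[call_id]

def write_from_activity():
    side_effects.append("activity write")   # I/O isolated in an "activity"
    return "done"

def orchestrator_body():
    side_effects.append("direct write")     # unsafe: repeats on every replay
    return run_activity_once("write-1", write_from_activity)

for _ in range(3):   # simulate three replays of the same orchestration
    orchestrator_body()

assert side_effects.count("direct write") == 3    # the direct I/O was duplicated
assert side_effects.count("activity write") == 1  # the history-backed call ran once
```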
+
+#### Static variables
+
+Avoid using static variables in orchestrator functions because their values can change over time, resulting in nondeterministic runtime behavior. Instead, use constants, or limit the use of static variables to activity functions.
+
+> [!NOTE]
+> Even outside of orchestrator functions, using static variables in Azure Functions can be problematic because there's no guarantee that static state will persist across multiple function executions. Static variables should be avoided except in very specific use cases, such as best-effort in-memory caching in activity or entity functions.
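The hazard can be shown with a minimal sketch (illustrative Python, not Durable Functions code): mutable state that survives across replays means each replay of the same orchestrator logic observes a different value.

```python
# Simplified model: a mutable module-level variable shared across replays
# produces a different observation on each replay of the orchestrator body.

request_count = 0   # nonconstant "static" state shared across replays

def orchestrator_body():
    global request_count
    request_count += 1      # mutating shared state: unsafe in an orchestrator
    return request_count    # first replay sees 1, second sees 2, ...

observed = [orchestrator_body() for _ in range(3)]   # three replays
assert observed == [1, 2, 3]   # each replay observed a different value
```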
+
+#### Environment variables
+
+Do not use environment variables in orchestrator functions. Their values can change over time, resulting in nondeterministic runtime behavior. If an orchestrator function needs configuration that's defined in an environment variable, you must pass the configuration value into the orchestrator function as an input or as the return value of an activity function.
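The recommended pattern can be sketched as follows (illustrative only; the function and variable names are placeholders, not Durable Functions APIs): resolve the environment variable once in a client or activity function, then hand the resolved value to the orchestrator as input so that every replay observes the same configuration.

```python
import os

def get_config_activity(name, default):
    """Activity-style helper: environment lookups belong here, not in orchestrators."""
    return os.environ.get(name, default)

def orchestrator_body(config_value):
    # The orchestrator only ever sees the value it was given as input,
    # so every replay observes the same configuration.
    return "processing with endpoint " + config_value

# Resolve once outside the orchestrator, then pass the value in.
resolved = get_config_activity("SERVICE_ENDPOINT", "https://example.invalid")
result = orchestrator_body(resolved)
```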
+
+#### Network and HTTP
+
+Use activity functions to make outbound network calls. If you need to make an HTTP call from your orchestrator function, you also can use the [durable HTTP APIs](durable-functions-http-features.md#consuming-http-apis).
+
+#### Thread-blocking APIs
+
+Blocking APIs like "sleep" can cause performance and scale problems for orchestrator functions and should be avoided. In the Azure Functions Consumption plan, they can even result in unnecessary execution time charges. Use alternatives to blocking APIs when they're available. For example, use [Durable timers](durable-functions-timers.md) to create delays that are safe for replay and don't count towards the execution time of an orchestrator function.
+
+#### Async APIs
+
+Orchestrator code must never start any async operation except those defined by the orchestration trigger's context object. For example, never use `Task.Run`, `Task.Delay`, or `HttpClient.SendAsync` in .NET, or `setTimeout` and `setInterval` in JavaScript. An orchestrator function should schedule async work only through Durable SDK APIs, such as scheduling activity functions. Any other async invocations should be made inside activity functions.
+
+#### Async JavaScript functions
+
+Always declare JavaScript orchestrator functions as synchronous generator functions. You must not declare JavaScript orchestrator functions as `async` because the Node.js runtime doesn't guarantee that asynchronous functions are deterministic.
+
+#### Python coroutines
+
+You must not declare Python orchestrator functions as coroutines. In other words, never declare Python orchestrator functions with the `async` keyword because coroutine semantics do not align with the Durable Functions replay model. You must always declare Python orchestrator functions as generators, meaning that you should expect the `context` API to use `yield` instead of `await`.
+
+#### .NET threading APIs
+
+The Durable Task Framework runs orchestrator code on a single thread and can't interact with any other threads. Running async continuations on a worker pool thread during an orchestration's execution can result in nondeterministic execution or deadlocks. For this reason, orchestrator functions should almost never use threading APIs. For example, never use `ConfigureAwait(continueOnCapturedContext: false)` in an orchestrator function; avoiding it ensures that task continuations run on the orchestrator function's original `SynchronizationContext`.
+
+> [!NOTE]
+> The Durable Task Framework attempts to detect accidental use of non-orchestrator threads in orchestrator functions. If it finds a violation, the framework throws a **NonDeterministicOrchestrationException** exception. However, this detection behavior won't catch all violations, and you shouldn't depend on it.
## Versioning
A durable orchestration might run continuously for days, months, years, or even
> [!NOTE] > This section describes internal implementation details of the Durable Task Framework. You can use durable functions without knowing this information. It is intended only to help you understand the replay behavior.
-Tasks that can safely wait in orchestrator functions are occasionally referred to as *durable tasks*. The Durable Task Framework creates and manages these tasks. Examples are the tasks returned by **CallActivityAsync**, **WaitForExternalEvent**, and **CreateTimer** in .NET orchestrator functions.
+Tasks that can safely wait in orchestrator functions are occasionally referred to as *durable tasks*. The Durable Task Framework creates and manages these tasks. Examples are the tasks returned by `CallActivityAsync`, `WaitForExternalEvent`, and `CreateTimer` in .NET orchestrator functions.
These durable tasks are internally managed by a list of `TaskCompletionSource` objects in .NET. During replay, these tasks are created as part of orchestrator code execution. They're finished as the dispatcher enumerates the corresponding history events.
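A drastically simplified model of this replay mechanic can be sketched in Python (an assumed illustration only; the real framework uses `TaskCompletionSource` objects in .NET, while here a generator stands in for the orchestrator and a list for the history events): the dispatcher advances the orchestrator, completing each pending durable task with its recorded result.

```python
# Toy replay dispatcher: each yield represents a pending durable task, and
# the dispatcher completes it with the corresponding recorded history event.

def orchestrator(ctx):
    a = yield ("CallActivity", "SayHello", "Tokyo")   # durable task 1
    b = yield ("CallActivity", "SayHello", "London")  # durable task 2
    return [a, b]

def replay(generator_fn, history):
    """Drive the orchestrator, resolving each yielded task from history."""
    gen = generator_fn(None)
    pending = next(gen)               # start: the first pending durable task
    for result in history:
        try:
            pending = gen.send(result)  # complete the task with its recorded result
        except StopIteration as done:
            return done.value           # orchestration finished
    return None                         # history exhausted: still awaiting a task

output = replay(orchestrator, ["Hello Tokyo!", "Hello London!"])
assert output == ["Hello Tokyo!", "Hello London!"]
```

With a partial history (only one recorded event), the replay stops at the second pending task, which mirrors an orchestration that's checkpointed mid-flight.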
azure-functions Durable Functions Custom Orchestration Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-custom-orchestration-status.md
param($name)
"Hello $name" ```+
+# [Java](#tab/java)
+
+```java
+@FunctionName("HelloCities")
+public String helloCitiesOrchestrator(
+ @DurableOrchestrationTrigger(name = "runtimeState") String runtimeState) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ String result = "";
+ result += ctx.callActivity("SayHello", "Tokyo", String.class).await() + ", ";
+ ctx.setCustomStatus("Tokyo");
+ result += ctx.callActivity("SayHello", "London", String.class).await() + ", ";
+ ctx.setCustomStatus("London");
+ result += ctx.callActivity("SayHello", "Seattle", String.class).await();
+ ctx.setCustomStatus("Seattle");
+ return result;
+ });
+}
+
+@FunctionName("SayHello")
+public String sayHello(@DurableActivityTrigger(name = "name") String name) {
+ return String.format("Hello %s!", name);
+}
+```
The client then receives the output of the orchestration only when the `CustomStatus` field is set to "London":
async def main(req: func.HttpRequest, starter: str) -> func.HttpResponse:
The feature is not currently implemented in PowerShell
+# [Java](#tab/java)
+
+```java
+@FunctionName("StartHelloCities")
+public HttpResponseMessage startHelloCities(
+ @HttpTrigger(name = "req") HttpRequestMessage<Void> req,
+ @DurableClientInput(name = "durableContext") DurableClientContext durableContext,
+ final ExecutionContext context) throws InterruptedException {
+
+ DurableTaskClient client = durableContext.getClient();
+ String instanceId = client.scheduleNewOrchestrationInstance("HelloCities");
+ context.getLogger().info("Created new Java orchestration with instance ID = " + instanceId);
+
+ OrchestrationMetadata metadata = client.waitForInstanceStart(instanceId, Duration.ofMinutes(5), true);
+ while (!"London".equals(metadata.readCustomStatusAs(String.class))) {
+ Thread.sleep(200);
+ metadata = client.getInstanceMetadata(instanceId, true);
+ }
+
+ return req.createResponseBuilder(HttpStatus.OK).build();
+}
+```
+ ### Output customization
if ($userChoice -eq 3) {
# Wait for user selection and refine the recommendation ```+
+# [Java](#tab/java)
+
+```java
+@FunctionName("CityRecommender")
+public String cityRecommender(
+ @DurableOrchestrationTrigger(name = "runtimeState") String runtimeState) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ int userChoice = ctx.getInput(int.class);
+ switch (userChoice) {
+ case 1:
+ ctx.setCustomStatus(new Recommendation(
+ new String[]{ "Tokyo", "Seattle" },
+ new String[]{ "Spring", "Summer" }));
+ break;
+ case 2:
+ ctx.setCustomStatus(new Recommendation(
+ new String[]{ "Seattle", "London" },
+ new String[]{ "Summer" }));
+ break;
+ case 3:
+ ctx.setCustomStatus(new Recommendation(
+ new String[]{ "Tokyo", "London" },
+ new String[]{ "Spring", "Summer" }));
+ break;
+ }
+
+ // Wait for user selection with an external event handler
+ });
+}
+
+class Recommendation {
+ public Recommendation() { }
+
+ public Recommendation(String[] cities, String[] seasons) {
+ this.recommendedCities = cities;
+ this.recommendedSeasons = seasons;
+ }
+
+ public String[] recommendedCities;
+ public String[] recommendedSeasons;
+}
+```
+ ### Instruction specification
if ($isBookingConfirmed) {
return $isBookingConfirmed ```+
+# [Java](#tab/java)
+
+```java
+@FunctionName("ReserveTicket")
+public String reserveTicket(
+ @DurableOrchestrationTrigger(name = "runtimeState") String runtimeState) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ String userID = ctx.getInput(String.class);
+ int discount = ctx.callActivity("CalculateDiscount", userID, int.class).await();
+ ctx.setCustomStatus(new DiscountInfo(discount, 60, "https://www.myawesomebookingweb.com"));
+
+ boolean isConfirmed = ctx.waitForExternalEvent("BookingConfirmed", boolean.class).await();
+ if (isConfirmed) {
+ ctx.setCustomStatus("Thank you for confirming your booking.");
+ } else {
+ ctx.setCustomStatus("There was a problem confirming your booking. Please try again.");
+ }
+
+ return isConfirmed;
+ });
+}
+
+class DiscountInfo {
+ public DiscountInfo() { }
+ public DiscountInfo(int discount, int discountTimeout, String bookingUrl) {
+ this.discount = discount;
+ this.discountTimeout = discountTimeout;
+ this.bookingUrl = bookingUrl;
+ }
+ public int discount;
+ public int discountTimeout;
+ public String bookingUrl;
+}
+```
+
-## Sample
+## Querying custom status with HTTP
-In the following sample, the custom status is set first;
+The following example shows how custom status values can be queried using the built-in HTTP APIs.
# [C#](#tab/csharp)
Set-DurableCustomStatus -CustomStatus @{ nextActions = @('A', 'B', 'C');
# ...do more work... ```+
+# [Java](#tab/java)
+
+```java
+@FunctionName("MyCustomStatusOrchestrator")
+public String myCustomStatusOrchestrator(
+ @DurableOrchestrationTrigger(name = "runtimeState") String runtimeState) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ // ... do work ...
+
+ // update the status of the orchestration with some arbitrary data
+ CustomStatusPayload payload = new CustomStatusPayload();
+ payload.nextActions = new String[] { "A", "B", "C" };
+ payload.foo = 2;
+ ctx.setCustomStatus(payload);
+
+ // ... do more work ...
+ });
+}
+
+class CustomStatusPayload {
+ public String[] nextActions;
+ public int foo;
+}
+```
+ While the orchestration is running, external clients can fetch this custom status:
azure-functions Durable Functions Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-diagnostics.md
Title: Diagnostics in Durable Functions - Azure
description: Learn how to diagnose problems with the Durable Functions extension for Azure Functions. Previously updated : 06/29/2021 Last updated : 05/26/2022
+ms.devlang: csharp, java, javascript, python
# Diagnostics in Durable Functions in Azure
The verbosity of tracking data emitted to Application Insights can be configured
{ "logging": { "logLevel": {
- "Host.Triggers.DurableTask": "Information",
+ "Host.Triggers.DurableTask": "Information",
}, } } ```
-By default, all _non-replay_ tracking events are emitted. The volume of data can be reduced by setting `Host.Triggers.DurableTask` to `"Warning"` or `"Error"` in which case tracking events will only be emitted for exceptional situations. To enable emitting the verbose orchestration replay events, set the `logReplayEvents` to `true` in the [host.json](durable-functions-bindings.md#host-json) configuration file.
+By default, all *non-replay* tracking events are emitted. The volume of data can be reduced by setting `Host.Triggers.DurableTask` to `"Warning"` or `"Error"` in which case tracking events will only be emitted for exceptional situations. To enable emitting the verbose orchestration replay events, set the `logReplayEvents` to `true` in the [host.json](durable-functions-bindings.md#host-json) configuration file.
> [!NOTE] > By default, Application Insights telemetry is sampled by the Azure Functions runtime to avoid emitting data too frequently. This can cause tracking information to be lost when many lifecycle events occur in a short period of time. The [Azure Functions Monitoring article](../configure-monitoring.md#configure-sampling) explains how to configure this behavior.
module.exports = df.orchestrator(function*(context){
``` # [Python](#tab/python)+ ```python import logging import azure.functions as func
def orchestrator_function(context: df.DurableOrchestrationContext):
main = df.Orchestrator.create(orchestrator_function) ```
+# [Java](#tab/java)
+
+```java
+@FunctionName("FunctionChain")
+public String functionChain(
+ @DurableOrchestrationTrigger(name = "runtimeState") String runtimeState,
+ ExecutionContext functionContext) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ Logger log = functionContext.getLogger();
+ log.info("Calling F1.");
+ ctx.callActivity("F1").await();
+ log.info("Calling F2.");
+ ctx.callActivity("F2").await();
+ log.info("Calling F3.");
+ ctx.callActivity("F3").await();
+ log.info("Done!");
+ });
+}
+```
+ The resulting log data is going to look something like the following example output:
def orchestrator_function(context: df.DurableOrchestrationContext):
if not context.is_replaying: logging.info("Calling F3.") yield context.call_activity("F3")
+ logging.info("Done!")
return None main = df.Orchestrator.create(orchestrator_function) ```
+# [Java](#tab/java)
+
+```java
+@FunctionName("FunctionChain")
+public String functionChain(
+ @DurableOrchestrationTrigger(name = "runtimeState") String runtimeState,
+ ExecutionContext functionContext) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ Logger log = functionContext.getLogger();
+ if (!ctx.getIsReplaying()) log.info("Calling F1.");
+ ctx.callActivity("F1").await();
+ if (!ctx.getIsReplaying()) log.info("Calling F2.");
+ ctx.callActivity("F2").await();
+ if (!ctx.getIsReplaying()) log.info("Calling F3.");
+ ctx.callActivity("F3").await();
+ log.info("Done!");
+ });
+}
+```
+ With the previously mentioned changes, the log output is as follows:
def orchestrator_function(context: df.DurableOrchestrationContext):
main = df.Orchestrator.create(orchestrator_function) ```
+# [Java](#tab/java)
+
+```java
+@FunctionName("SetStatusTest")
+public String setStatusTest(
+ @DurableOrchestrationTrigger(name = "runtimeState") String runtimeState) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ // ...do work...
+
+ // update the status of the orchestration with some arbitrary data
+ ctx.setCustomStatus(new Object() {
+ public final double completionPercentage = 90.0;
+ public final String status = "Updating database records";
+ });
+
+ // ...do more work...
+ });
+}
+```
+ While the orchestration is running, external clients can fetch this custom status:
azure-functions Durable Functions Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-entities.md
Title: Durable entities - Azure Functions
description: Learn what durable entities are and how to use them in the Durable Functions extension for Azure Functions. Previously updated : 12/17/2019 Last updated : 05/10/2022
+ms.devlang: csharp, java, javascript, python
#Customer intent: As a developer, I want to learn what durable entities are and how to use them to solve distributed, stateful problems in my applications.
Entity functions define operations for reading and updating small pieces of stat
Entities provide a means for scaling out applications by distributing the work across many entities, each with a modestly sized state. > [!NOTE]
-> Entity functions and related functionality are only available in [Durable Functions 2.0](durable-functions-versions.md#migrate-from-1x-to-2x) and above. They are currently supported in .NET, JavaScript, and Python.
+> Entity functions and related functionality are only available in [Durable Functions 2.0](durable-functions-versions.md#migrate-from-1x-to-2x) and above. They are currently supported in .NET, JavaScript, and Python, but not in PowerShell or Java.
## General concepts
azure-functions Durable Functions Error Handling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-error-handling.md
Title: Handling errors in Durable Functions - Azure description: Learn how to handle errors in the Durable Functions extension for Azure Functions. Previously updated : 07/13/2020 Last updated : 05/09/2022
+ms.devlang: csharp, javascript, powershell, python, java
# Handling errors in Durable Functions (Azure Functions)
try {
} ```
+# [Java](#tab/java)
+
+```java
+@FunctionName("TransferFunds")
+public String transferFunds(
+ @DurableOrchestrationTrigger(name = "runtimeState") String runtimeState) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ TransferOperation transfer = ctx.getInput(TransferOperation.class);
+ ctx.callActivity(
+ "DebitAccount",
+ new OperationArgs(transfer.sourceAccount, transfer.amount)).await();
+ try {
+ ctx.callActivity(
+ "CreditAccount",
+ new OperationArgs(transfer.destinationAccount, transfer.amount)).await();
+ } catch (TaskFailedException ex) {
+ // Refund the source account on failure
+ ctx.callActivity(
+ "CreditAccount",
+ new OperationArgs(transfer.sourceAccount, transfer.amount)).await();
+ }
+ });
+}
+```
main = df.Orchestrator.create(orchestrator_function)
param($Context) $retryOptions = New-DurableRetryOptions `
- -FirstRetryInterval (New-Timespan -Seconds 5) `
+ -FirstRetryInterval (New-TimeSpan -Seconds 5) `
-MaxNumberOfAttempts 3 Invoke-DurableActivity -FunctionName 'FlakyFunction' -RetryOptions $retryOptions ```
+# [Java](#tab/java)
+
+```java
+@FunctionName("TimerOrchestratorWithRetry")
+public String timerOrchestratorWithRetry(
+ @DurableOrchestrationTrigger(name = "runtimeState") String runtimeState) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ final int maxAttempts = 3;
+ final Duration firstRetryInterval = Duration.ofSeconds(5);
+ RetryPolicy policy = new RetryPolicy(maxAttempts, firstRetryInterval);
+ ctx.callActivity("FlakyFunction", new TaskOptions(policy)).await();
+ // ...
+ });
+}
+```
+ The activity function call in the previous example takes a parameter for configuring an automatic retry policy. There are several options for customizing the automatic retry policy:
The activity function call in the previous example takes a parameter for configu
* **Backoff coefficient**: The coefficient used to determine rate of increase of backoff. Defaults to 1. * **Max retry interval**: The maximum amount of time to wait in between retry attempts. * **Retry timeout**: The maximum amount of time to spend doing retries. The default behavior is to retry indefinitely.
-* **Handle**: A user-defined callback can be specified to determine whether a function should be retried.
-> [!NOTE]
-> User-defined callbacks aren't currently supported by Durable Functions in JavaScript (`context.df.RetryOptions`).
+## Custom retry handlers
+
+When using the .NET isolated worker or Java, you can also implement retry handlers in code. This is useful when declarative retry policies aren't expressive enough. For languages that don't support custom retry handlers, you can still implement retry policies using loops, exception handling, and timers to inject delays between retries.
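The loop-based fallback for languages without custom retry handlers can be sketched in framework-agnostic terms (illustrative only; in a real orchestrator the pause between attempts would be a durable timer rather than a thread sleep):

```python
import time

def call_with_retry(action, max_attempts=3, delay_seconds=0.0):
    """Retry `action` up to `max_attempts` times, pausing between attempts."""
    for attempt in range(1, max_attempts + 1):
        try:
            return action()
        except Exception:
            if attempt == max_attempts:
                raise                  # out of attempts: surface the failure
            time.sleep(delay_seconds)  # stand-in for a durable timer delay

# A flaky operation that succeeds on its third invocation.
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("transient failure")
    return "ok"

assert call_with_retry(flaky, max_attempts=3) == "ok"
assert len(attempts) == 3
```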
+
+# [C#](#tab/csharp)
+
+```csharp
+TaskOptions retryOptions = TaskOptions.FromRetryHandler(retryContext =>
+{
+ // Don't retry anything that derives from ApplicationException
+ if (!retryContext.LastFailure.IsCausedBy<ApplicationException>())
+ {
+ return false;
+ }
+ // Quit after N attempts
+ return retryContext.LastAttemptNumber < 3;
+});
+
+try
+{
+ await ctx.CallActivityAsync("FlakeyActivity", options: retryOptions);
+}
+catch (TaskFailedException)
+{
+ // Case when the retry handler returns false...
+}
+```
+
+# [JavaScript](#tab/javascript)
+
+JavaScript doesn't currently support custom retry handlers. However, you still have the option of implementing retry logic directly in the orchestrator function using loops, exception handling, and timers for injecting delays between retries.
+
+# [Python](#tab/python)
+
+Python doesn't currently support custom retry handlers. However, you still have the option of implementing retry logic directly in the orchestrator function using loops, exception handling, and timers for injecting delays between retries.
+
+# [PowerShell](#tab/powershell)
+
+PowerShell doesn't currently support custom retry handlers. However, you still have the option of implementing retry logic directly in the orchestrator function using loops, exception handling, and timers for injecting delays between retries.
+
+# [Java](#tab/java)
+
+```java
+RetryHandler retryHandler = retryCtx -> {
+ // Don't retry anything that derives from RuntimeException
+ if (retryCtx.getLastFailure().isCausedBy(RuntimeException.class)) {
+ return false;
+ }
+
+ // Quit after N attempts
+ return retryCtx.getLastAttemptNumber() < 3;
+};
+
+TaskOptions options = new TaskOptions(retryHandler);
+try {
+ ctx.callActivity("FlakeyActivity", options).await();
+} catch (TaskFailedException ex) {
+ // Case when the retry handler returns false...
+}
+```
## Function timeouts
-You might want to abandon a function call within an orchestrator function if it's taking too long to complete. The proper way to do this today is by creating a [durable timer](durable-functions-timers.md) using `context.CreateTimer` (.NET), `context.df.createTimer` (JavaScript), or `context.create_timer` (Python) in conjunction with `Task.WhenAny` (.NET), `context.df.Task.any` (JavaScript), or `context.task_any` (Python) , as in the following example:
+You might want to abandon a function call within an orchestrator function if it's taking too long to complete. The proper way to do this today is by creating a [durable timer](durable-functions-timers.md) with an "any" task selector, as in the following example:
# [C#](#tab/csharp)
module.exports = df.orchestrator(function*(context) {
} }); ```+ # [Python](#tab/python) ```python
main = df.Orchestrator.create(orchestrator_function)
```powershell param($Context)
-$expiryTime = New-TimeSpan -Seconds 30
+$expiryTime = New-TimeSpan -Seconds 30
$activityTask = Invoke-DurableActivity -FunctionName 'FlakyFunction' -NoWait
$timerTask = Start-DurableTimer -Duration $expiryTime -NoWait
else {
} ```
+# [Java](#tab/java)
+
+```java
+@FunctionName("TimerOrchestrator")
+public String timerOrchestrator(
+ @DurableOrchestrationTrigger(name = "runtimeState") String runtimeState) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ Task<Void> activityTask = ctx.callActivity("SlowFunction");
+ Task<Void> timeoutTask = ctx.createTimer(Duration.ofMinutes(30));
+
+ Task<?> winner = ctx.anyOf(activityTask, timeoutTask).await();
+ if (winner == activityTask) {
+ // success case
+ return true;
+ } else {
+ // timeout case
+ return false;
+ }
+ });
+}
+```
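The race between the activity task and the durable timer can be sketched with plain `asyncio` (an illustration of the pattern only, not Durable Functions code — the function names are made up):

```python
import asyncio

async def slow_function():
    await asyncio.sleep(0.2)
    return "done"

async def with_timeout(timeout_seconds):
    """Race an activity against a timer; whichever finishes first wins."""
    activity = asyncio.ensure_future(slow_function())
    timer = asyncio.ensure_future(asyncio.sleep(timeout_seconds))
    done, pending = await asyncio.wait({activity, timer},
                                       return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()  # a real orchestrator would cancel the durable timer
    return activity.result() if activity in done else "timed out"

print(asyncio.run(with_timeout(0.05)))  # → timed out
print(asyncio.run(with_timeout(1.0)))   # → done
```

Note that cancelling the losing task matters: in Durable Functions, an orchestration stays in a running state until its outstanding timer fires or is cancelled.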
+ > [!NOTE]
azure-functions Durable Functions Eternal Orchestrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-eternal-orchestrations.md
Title: Eternal orchestrations in Durable Functions - Azure
description: Learn how to implement eternal orchestrations by using the Durable Functions extension for Azure Functions. Previously updated : 07/14/2020 Last updated : 05/09/2022
+ms.devlang: csharp, javascript, python, java
# Eternal orchestrations in Durable Functions (Azure Functions)
As explained in the [orchestration history](durable-functions-orchestrations.md#
## Resetting and restarting
-Instead of using infinite loops, orchestrator functions reset their state by calling the `ContinueAsNew` (.NET), `continueAsNew` (JavaScript), or `continue_as_new` (Python) method of the [orchestration trigger binding](durable-functions-bindings.md#orchestration-trigger). This method takes a single JSON-serializable parameter, which becomes the new input for the next orchestrator function generation.
+Instead of using infinite loops, orchestrator functions reset their state by calling the *continue-as-new* method of the [orchestration trigger binding](durable-functions-bindings.md#orchestration-trigger). This method takes a JSON-serializable parameter, which becomes the new input for the next orchestrator function generation.
-When `ContinueAsNew` is called, the instance enqueues a message to itself before it exits. The message restarts the instance with the new input value. The same instance ID is kept, but the orchestrator function's history is effectively truncated.
+When *continue-as-new* is called, the orchestration instance restarts itself with the new input value. The same instance ID is kept, but the orchestrator function's history is reset.
> [!NOTE]
-> The Durable Task Framework maintains the same instance ID but internally creates a new *execution ID* for the orchestrator function that gets reset by `ContinueAsNew`. This execution ID is generally not exposed externally, but it may be useful to know about when debugging orchestration execution.
+> The Durable Task Framework maintains the same instance ID but internally creates a new *execution ID* for the orchestrator function that gets reset by *continue-as-new*. This execution ID is not exposed externally, but it may be useful to know about when debugging orchestration execution.
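Conceptually, *continue-as-new* can be modeled as a loop in the runtime: the instance ID survives, the history is discarded, and the orchestrator re-runs with the new input. The sketch below is a hypothetical model, not the actual Durable Task Framework:

```python
CONTINUE_AS_NEW = object()  # sentinel marking a continue-as-new request

def run_instance(orchestrator, initial_input):
    """Re-run the orchestrator with fresh history each time it continues-as-new."""
    state = initial_input
    while True:
        history = []  # each generation starts with a truncated (empty) history
        result = orchestrator(state, history)
        if isinstance(result, tuple) and result[0] is CONTINUE_AS_NEW:
            state = result[1]  # the new input for the next generation
        else:
            return result

def counter_orchestrator(count, history):
    history.append(f"generation with input {count}")
    if count < 3:
        return (CONTINUE_AS_NEW, count + 1)  # restart with new input
    return count

print(run_instance(counter_orchestrator, 0))  # → 3
```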
## Periodic work example
def orchestrator_function(context: df.DurableOrchestrationContext):
main = df.Orchestrator.create(orchestrator_function) ```
+# [PowerShell](#tab/powershell)
+
+PowerShell doesn't support *continue-as-new*.
+
+# [Java](#tab/java)
+
+```java
+@FunctionName("Periodic_Cleanup_Loop")
+public String periodicCleanupLoop(
+ @DurableOrchestrationTrigger(name = "runtimeState") String runtimeState) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ ctx.callActivity("DoCleanup").await();
+
+ ctx.createTimer(Duration.ofHours(1)).await();
+
+ ctx.continueAsNew(null);
+ });
+}
+```
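To see why this pattern can't overlap itself, compare start times under a fixed hourly CRON schedule with delay-after-completion scheduling, where the next run begins only after the previous one finishes (a hypothetical arithmetic sketch, times in hours):

```python
def cron_starts(every_hours, count, start=1.0):
    """Fixed schedule: runs start every `every_hours`, regardless of duration."""
    return [start + i * every_hours for i in range(count)]

def delay_after_completion_starts(delay_hours, duration_hours, count, start=1.0):
    """Eternal-orchestration style: the next run starts `delay_hours` after the
    previous run *finishes*, so runs can never overlap."""
    starts, t = [], start
    for _ in range(count):
        starts.append(t)
        t += duration_hours + delay_hours
    return starts

# Hourly cleanup that takes 30 minutes:
print(cron_starts(1, 4))                         # → [1.0, 2.0, 3.0, 4.0]
print(delay_after_completion_starts(1, 0.5, 4))  # → [1.0, 2.5, 4.0, 5.5]
```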
The difference between this example and a timer-triggered function is that cleanup trigger times here are not based on a schedule. For example, a CRON schedule that executes a function every hour will execute it at 1:00, 2:00, 3:00, etc. and could potentially run into overlap issues. In this example, however, if the cleanup takes 30 minutes, then it will be scheduled at 1:00, 2:30, 4:00, etc. and there is no chance of overlap.

## Starting an eternal orchestration
-Use the `StartNewAsync` (.NET), the `startNew` (JavaScript), `start_new` (Python) method to start an eternal orchestration, just like you would any other orchestration function.
+Use the *start-new* or *schedule-new* durable client method to start an eternal orchestration, just like you would any other orchestration function.
> [!NOTE]
> If you need to ensure a singleton eternal orchestration is running, it's important to maintain the same instance `id` when starting the orchestration. For more information, see [Instance Management](durable-functions-instance-management.md).
module.exports = async function (context, req) {
return client.createCheckStatusResponse(context.bindingData.req, instanceId); }; ```+ # [Python](#tab/python) ```python
async def main(req: func.HttpRequest, starter: str) -> func.HttpResponse:
```
+# [PowerShell](#tab/powershell)
+
+PowerShell doesn't support *continue-as-new*.
+
+# [Java](#tab/java)
+
+```java
+@FunctionName("Trigger_Eternal_Orchestration")
+public HttpResponseMessage triggerEternalOrchestration(
+ @HttpTrigger(name = "req") HttpRequestMessage<?> req,
+ @DurableClientInput(name = "durableContext") DurableClientContext durableContext) {
+
+ String instanceID = "StaticID";
+ DurableTaskClient client = durableContext.getClient();
+ client.scheduleNewOrchestrationInstance("Periodic_Cleanup_Loop", null, instanceID);
+ return durableContext.createCheckStatusResponse(req, instanceID);
+}
+```
## Exit from an eternal orchestration

If an orchestrator function needs to eventually complete, then all you need to do is *not* call *continue-as-new* and let the function exit.
-If an orchestrator function is in an infinite loop and needs to be stopped, use the `TerminateAsync` (.NET), `terminate` (JavaScript), or `terminate` (Python) method of the [orchestration client binding](durable-functions-bindings.md#orchestration-client) to stop it. For more information, see [Instance Management](durable-functions-instance-management.md).
+If an orchestrator function is in an infinite loop and needs to be stopped, use the *terminate* API of the [orchestration client binding](durable-functions-bindings.md#orchestration-client) to stop it. For more information, see [Instance Management](durable-functions-instance-management.md).
## Next steps
azure-functions Durable Functions External Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-external-events.md
Title: Handling external events in Durable Functions - Azure description: Learn how to handle external events in the Durable Functions extension for Azure Functions. Previously updated : 07/13/2020 Last updated : 05/09/2022
+ms.devlang: csharp, javascript, powershell, python, java
# Handling external events in Durable Functions (Azure Functions)
Orchestrator functions have the ability to wait and listen for external events.
## Wait for events
-The [WaitForExternalEvent](/dotnet/api/microsoft.azure.webjobs.durableorchestrationcontextbase.waitforexternalevent?view=azure-dotnet-legacy&preserve-view=true) (.NET), `waitForExternalEvent` (JavaScript), and `wait_for_external_event` (Python) methods of the [orchestration trigger binding](durable-functions-bindings.md#orchestration-trigger) allow an orchestrator function to asynchronously wait and listen for an external event. The listening orchestrator function declares the *name* of the event and the *shape of the data* it expects to receive.
+The *"wait-for-external-event"* API of the [orchestration trigger binding](durable-functions-bindings.md#orchestration-trigger) allows an orchestrator function to asynchronously wait and listen for an event delivered by an external client. The listening orchestrator function declares the *name* of the event and the *shape of the data* it expects to receive.
# [C#](#tab/csharp)
if ($approved) {
} ```
+# [Java](#tab/java)
+
+```java
+@FunctionName("WaitForExternalEvent")
+public String waitForExternalEvent(
+ @DurableOrchestrationTrigger(name = "runtimeState") String runtimeState) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ boolean approved = ctx.waitForExternalEvent("Approval", boolean.class).await();
+ if (approved) {
+ // approval granted - do the approved action
+ } else {
+ // approval denied - send a notification
+ }
+ });
+}
+```
+ The preceding example listens for a specific single event and takes action when it's received.
if ($winner -eq $event1) {
# ... } ```+
+# [Java](#tab/java)
+
+```java
+@FunctionName("Select")
+public String selectOrchestrator(
+ @DurableOrchestrationTrigger(name = "runtimeState") String runtimeState) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ Task<Void> event1 = ctx.waitForExternalEvent("Event1");
+ Task<Void> event2 = ctx.waitForExternalEvent("Event2");
+ Task<Void> event3 = ctx.waitForExternalEvent("Event3");
+
+ Task<?> winner = ctx.anyOf(event1, event2, event3).await();
+ if (winner == event1) {
+ // ...
+ } else if (winner == event2) {
+ // ...
+ } else if (winner == event3) {
+ // ...
+ }
+ });
+}
+```
+ The previous example listens for *any* of multiple events. It's also possible to wait for *all* events.
Wait-DurableTask -Task @($gate1, $gate2, $gate3)
Invoke-ActivityFunction -FunctionName 'IssueBuildingPermit' -Input $applicationId ```+
+# [Java](#tab/java)
+
+```java
+@FunctionName("NewBuildingPermit")
+public String newBuildingPermit(
+ @DurableOrchestrationTrigger(name = "runtimeState") String runtimeState) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ String applicationId = ctx.getInput(String.class);
+
+ Task<Void> gate1 = ctx.waitForExternalEvent("CityPlanningApproval");
+ Task<Void> gate2 = ctx.waitForExternalEvent("FireDeptApproval");
+ Task<Void> gate3 = ctx.waitForExternalEvent("BuildingDeptApproval");
+
+ // all three departments must grant approval before a permit can be issued
+ ctx.allOf(List.of(gate1, gate2, gate3)).await();
+
+ ctx.callActivity("IssueBuildingPermit", applicationId).await();
+ });
+}
+```
+
-`WaitForExternalEvent` waits indefinitely for some input. The function app can be safely unloaded while waiting. If and when an event arrives for this orchestration instance, it is awakened automatically and immediately processes the event.
+The *"wait-for-external-event"* API waits indefinitely for some input. The function app can be safely unloaded while waiting. If and when an event arrives for this orchestration instance, it is awakened automatically and immediately processes the event.
> [!NOTE]
-> If your function app uses the Consumption Plan, no billing charges are incurred while an orchestrator function is awaiting a task from `WaitForExternalEvent` (.NET), `waitForExternalEvent` (JavaScript), or `wait_for_external_event` (Python), no matter how long it waits.
+> If your function app uses the Consumption Plan, no billing charges are incurred while an orchestrator function is awaiting an external event task, no matter how long it waits.
## Send events
-You can use the [RaiseEventAsync](/dotnet/api/microsoft.azure.webjobs.durableorchestrationclientbase.raiseeventasync?view=azure-dotnet-legacy&preserve-view=true) (.NET) or `raiseEventAsync` (JavaScript) methods to send an external event to an orchestration. These methods are exposed by the [orchestration client](durable-functions-bindings.md#orchestration-client) binding. You can also use the built-in [raise event HTTP API](durable-functions-http-api.md#raise-event) to send an external event to an orchestration.
+You can use the *"raise-event"* API defined by the [orchestration client](durable-functions-bindings.md#orchestration-client) binding to send an external event to an orchestration. You can also use the built-in [raise event HTTP API](durable-functions-http-api.md#raise-event) to send an external event to an orchestration.
-A raised event includes an *instance ID*, an *eventName*, and *eventData* as parameters. Orchestrator functions handle these events using the `WaitForExternalEvent` (.NET) or `waitForExternalEvent` (JavaScript) APIs. The *eventName* must match on both the sending and receiving ends in order for the event to be processed. The event data must also be JSON-serializable.
+A raised event includes an *instance ID*, an *eventName*, and *eventData* as parameters. Orchestrator functions handle these events using the [*"wait-for-external-event"*](#wait-for-events) APIs. The *eventName* must match on both the sending and receiving ends in order for the event to be processed. The event data must also be JSON-serializable.
-Internally, the "raise event" mechanisms enqueue a message that gets picked up by the waiting orchestrator function. If the instance is not waiting on the specified *event name,* the event message is added to an in-memory queue. If the orchestration instance later begins listening for that *event name,* it will check the queue for event messages.
+Internally, the *"raise-event"* mechanisms enqueue a message that gets picked up by the waiting orchestrator function. If the instance is not waiting on the specified *event name,* the event message is added to an in-memory queue. If the orchestration instance later begins listening for that *event name,* it will check the queue for event messages.
> [!NOTE] > If there is no orchestration instance with the specified *instance ID*, the event message is discarded.
param($instanceId)
Send-DurableExternalEvent -InstanceId $InstanceId -EventName "Approval" ```+
+# [Java](#tab/java)
+
+```java
+@FunctionName("ApprovalQueueProcessor")
+public void approvalQueueProcessor(
+ @QueueTrigger(name = "instanceID", queueName = "approval-queue") String instanceID,
+ @DurableClientInput(name = "durableContext") DurableClientContext durableContext) {
+ durableContext.getClient().raiseEvent(instanceID, "Approval", true);
+}
+```
+
-Internally, `RaiseEventAsync` (.NET), `raiseEvent` (JavaScript), `raise_event` (Python), or `Send-DurableExternalEvent` (PowerShell) enqueues a message that gets picked up by the waiting orchestrator function. If the instance is not waiting on the specified *event name,* the event message is added to an in-memory queue. If the orchestration instance later begins listening for that *event name,* it will check the queue for event messages.
+Internally, the "*raise-event*" API enqueues a message that gets picked up by the waiting orchestrator function. If the instance is not waiting on the specified *event name,* the event message is added to an in-memory buffer. If the orchestration instance later begins listening for that *event name,* it will check the buffer for event messages and trigger the task that was waiting for it.
> [!NOTE] > If there is no orchestration instance with the specified *instance ID*, the event message is discarded.
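The buffering behavior can be sketched with a plain in-memory structure (a hypothetical model, not the actual Durable Task implementation): raising an event either wakes a pending waiter or buffers the payload until a matching wait begins:

```python
from collections import defaultdict, deque

class EventBuffer:
    def __init__(self):
        self.buffered = defaultdict(deque)  # event name -> queued payloads
        self.waiters = defaultdict(deque)   # event name -> pending callbacks

    def wait_for_event(self, name, callback):
        if self.buffered[name]:
            callback(self.buffered[name].popleft())  # event arrived first
        else:
            self.waiters[name].append(callback)      # wait for a future raise

    def raise_event(self, name, data):
        if self.waiters[name]:
            self.waiters[name].popleft()(data)       # wake the waiting task
        else:
            self.buffered[name].append(data)         # no listener yet: buffer it

results = []
buf = EventBuffer()
buf.raise_event("Approval", True)               # raised before anyone listens
buf.wait_for_event("Approval", results.append)  # finds the buffered event
print(results)  # → [True]
```

Note how the *eventName* keys the lookup on both sides, which is why it must match exactly between sender and receiver.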
azure-functions Durable Functions Http Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-http-features.md
Title: HTTP features in Durable Functions - Azure Functions
description: Learn about the integrated HTTP features in the Durable Functions extension for Azure Functions. Previously updated : 05/11/2021 Last updated : 05/10/2022
+ms.devlang: csharp, java, javascript, powershell, python
# HTTP Features
Push-OutputBinding -Name Response -Value $Response
] } ```+
+# [Java](#tab/java)
+
+```java
+@FunctionName("HttpStart")
+public HttpResponseMessage httpStart(
+ @HttpTrigger(name = "req", route = "orchestrators/{functionName}") HttpRequestMessage<?> req,
+ @DurableClientInput(name = "durableContext") DurableClientContext durableContext,
+ @BindingName("functionName") String functionName,
+ final ExecutionContext context) {
+
+ DurableTaskClient client = durableContext.getClient();
+ String instanceId = client.scheduleNewOrchestrationInstance(functionName);
+ context.getLogger().info("Created new Java orchestration with instance ID = " + instanceId);
+ return durableContext.createCheckStatusResponse(req, instanceId);
+}
+```
You can use any HTTP client to start an orchestrator function through the HTTP-trigger functions shown previously. The following cURL command starts an orchestrator function named `DoWork`:
main = df.Orchestrator.create(orchestrator_function)
# [PowerShell](#tab/powershell)
-The feature is currently supported in PowerShell.
+This feature isn't available in PowerShell.
+
+# [Java](#tab/java)
+
+This feature isn't available in Java.
main = df.Orchestrator.create(orchestrator_function)
# [PowerShell](#tab/powershell)
-The feature is currently supported in PowerShell.
+This feature isn't available in PowerShell.
+
+# [Java](#tab/java)
+
+This feature isn't available in Java.
azure-functions Durable Functions Instance Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-instance-management.md
Title: Manage instances in Durable Functions - Azure
description: Learn how to manage instances in the Durable Functions extension for Azure Functions. Previously updated : 05/11/2021 Last updated : 05/25/2022
+ms.devlang: csharp, java, javascript, python
#Customer intent: As a developer, I want to understand the options provided for managing my Durable Functions orchestration instances, so I can keep my orchestrations running efficiently and make improvements.
Orchestrations in Durable Functions are long-running stateful functions that can
## Start instances
-The `StartNewAsync` (.NET), `startNew` (JavaScript), or `start_new` (Python) method on the [orchestration client binding](durable-functions-bindings.md#orchestration-client) starts a new orchestration instance. Internally, this method writes a message via the [Durable Functions storage provider](durable-functions-storage-providers.md) and then returns. This message asynchronously triggers the start of an [orchestration function](durable-functions-types-features-overview.md#orchestrator-functions) with the specified name.
+The *start-new* (or *schedule-new*) method on the [orchestration client binding](durable-functions-bindings.md#orchestration-client) starts a new orchestration instance. Internally, this method writes a message via the [Durable Functions storage provider](durable-functions-storage-providers.md) and then returns. This message asynchronously triggers the start of an [orchestration function](durable-functions-types-features-overview.md#orchestrator-functions) with the specified name.
The parameters for starting a new orchestration instance are as follows:
public static async Task Run(
<a name="javascript-function-json"></a>Unless otherwise specified, the examples on this page use the HTTP trigger with the following function.json.
-**function.json**
+**`function.json`**
```json {
public static async Task Run(
> [!NOTE] > This example targets Durable Functions version 2.x. In version 1.x, use `orchestrationClient` instead of `durableClient`.
-**index.js**
+**`index.js`**
```javascript const df = require("durable-functions");
module.exports = async function(context, input) {
# [Python](#tab/python)
-<a name="javascript-function-json"></a>Unless otherwise specified, the examples on this page use the HTTP trigger with the following function.json.
+<a name="python-function-json"></a>Unless otherwise specified, the examples on this page use the HTTP trigger with the following function.json.
-**function.json**
+**`function.json`**
```json {
module.exports = async function(context, input) {
> [!NOTE] > This example targets Durable Functions version 2.x. In version 1.x, use `orchestrationClient` instead of `durableClient`.
-**__init__.py**
+**`__init__.py`**
```python import logging
async def main(req: func.HttpRequest, starter: str) -> func.HttpResponse:
```
+# [Java](#tab/java)
+
+```java
+@FunctionName("HelloWorldQueueTrigger")
+public void helloWorldQueueTrigger(
+ @QueueTrigger(name = "input", queueName = "start-queue", connection = "Storage") String input,
+ @DurableClientInput(name = "durableContext") DurableClientContext durableContext,
+ final ExecutionContext context) {
+ DurableTaskClient client = durableContext.getClient();
+ String instanceID = client.scheduleNewOrchestrationInstance("HelloWorld");
+ context.getLogger().info("Scheduled orchestration with ID = " + instanceID);
+}
+```
+
+If you want to wait for the orchestrator to start before returning from your function, you can also use the `waitForInstanceStart()` method.
+
+```java
+// wait up to 30 seconds for the scheduled orchestration to enter the "Running" state
+client.waitForInstanceStart(instanceID, Duration.ofSeconds(30));
+```
+ ### Azure Functions Core Tools
func durable start-new --function-name HelloWorld --input @counter-data.json --t
After starting new orchestration instances, you'll most likely need to query their runtime status to learn whether they are running, have completed, or have failed.
-The `GetStatusAsync` (.NET), `getStatus` (JavaScript), or the `get_status` (Python) method on the [orchestration client binding](durable-functions-bindings.md#orchestration-client) queries the status of an orchestration instance.
+The *get-status* method on the [orchestration client binding](durable-functions-bindings.md#orchestration-client) queries the status of an orchestration instance.
It takes an `instanceId` (required), `showHistory` (optional), `showHistoryOutput` (optional), and `showInput` (optional) as parameters.
The method returns an object with the following properties:
* **History**: The execution history of the orchestration. This field is only populated if `showHistory` is set to `true`.

> [!NOTE]
-> An orchestrator is not marked as `Completed` until all of its scheduled tasks have finished _and_ the orchestrator has returned. In other words, it is not sufficient for an orchestrator to reach its `return` statement for it to be marked as `Completed`. This is particularly relevant for cases where `WhenAny` is used; those orchestrators often `return` before all the scheduled tasks have executed.
+> An orchestrator is not marked as `Completed` until all of its scheduled tasks have finished *and* the orchestrator has returned. In other words, it is not sufficient for an orchestrator to reach its `return` statement for it to be marked as `Completed`. This is particularly relevant for cases where `WhenAny` is used; those orchestrators often `return` before all the scheduled tasks have executed.
-This method returns `null` (.NET), `undefined` (JavaScript), or `None` (Python) if the instance doesn't exist.
+This method returns `null` (.NET and Java), `undefined` (JavaScript), or `None` (Python) if the instance doesn't exist.
# [C#](#tab/csharp)
async def main(req: func.HttpRequest, starter: str, instance_id: str) -> func.Ht
# example: if (existing_instance.runtime_status is df.OrchestrationRuntimeStatus.Running) { ... ```
+# [Java](#tab/java)
+
+```java
+@FunctionName("GetStatus")
+public void getStatus(
+ @QueueTrigger(name = "instanceID", queueName = "check-status-queue", connection = "Storage") String instanceID,
+ @DurableClientInput(name = "durableContext") DurableClientContext durableContext,
+ final ExecutionContext context) {
+ DurableTaskClient client = durableContext.getClient();
+ OrchestrationMetadata metadata = client.getInstanceMetadata(instanceID, false);
+ if (metadata != null) {
+ OrchestrationRuntimeStatus status = metadata.getRuntimeStatus();
+ switch (status) {
+ // do something based on the current status
+ }
+ }
+}
+```
+ ### Azure Functions Core Tools
func durable get-history --id 0ab8c55a66644d68a3a8b220b12d209c
## Query all instances
-You can use the [ListInstancesAsync](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.idurableorchestrationclient.listinstancesasync) (.NET), [getStatusAll](/javascript/api/durable-functions/durableorchestrationclient#durable-functions-durableorchestrationclient-getstatusall) (JavaScript), or `get_status_all` (Python) method to query the statuses of all orchestration instances in your [task hub](durable-functions-task-hubs.md). This method returns a list of objects that represent the orchestration instances matching the query parameters.
+You can use APIs in your language SDK to query the statuses of all orchestration instances in your [task hub](durable-functions-task-hubs.md). This *"list-instances"* or *"get-status"* API returns a list of objects that represent the orchestration instances matching the query parameters.
# [C#](#tab/csharp)
module.exports = async function(context, req) {
}; ```
+See [Start instances](#javascript-function-json) for the function.json configuration.
+ # [Python](#tab/python) ```python
async def main(req: func.HttpRequest, starter: str) -> func.HttpResponse:
logging.log(json.dumps(instance)) ```
-See [Start instances](#javascript-function-json) for the function.json configuration.
+See [Start instances](#python-function-json) for the function.json configuration.
+
+# [Java](#tab/java)
+```java
+@FunctionName("GetAllStatus")
+public String getAllStatus(
+ @HttpTrigger(name = "req", methods = {HttpMethod.GET}) HttpRequestMessage<?> req,
+ @DurableClientInput(name = "durableContext") DurableClientContext durableContext) {
+ DurableTaskClient client = durableContext.getClient();
+ OrchestrationStatusQuery noFilter = new OrchestrationStatusQuery();
+ OrchestrationStatusQueryResult result = client.queryInstances(noFilter);
+ return "Found " + result.getOrchestrationState().size() + " orchestrations.";
+}
+```
### Azure Functions Core Tools
func durable get-instances
What if you don't really need all the information that a standard instance query can provide? For example, what if you're just looking for the orchestration creation time, or the orchestration runtime status? You can narrow your query by applying filters.
-Use the [ListInstancesAsync](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.idurableorchestrationclient.listinstancesasync) (.NET) or [getStatusBy](/javascript/api/durable-functions/durableorchestrationclient#durable-functions-durableorchestrationclient-getstatusby) (JavaScript) method to get a list of orchestration instances that match a set of predefined filters.
- # [C#](#tab/csharp) ```csharp
async def main(req: func.HttpRequest, starter: str) -> func.HttpResponse:
logging.log(json.dumps(instance)) ```
+# [Java](#tab/java)
+
+```java
+@FunctionName("GetRunningInstances")
+public String getRunningInstances(
+ @HttpTrigger(name = "req", methods = {HttpMethod.GET}) HttpRequestMessage<?> req,
+ @DurableClientInput(name = "durableContext") DurableClientContext durableContext) {
+ DurableTaskClient client = durableContext.getClient();
+ OrchestrationStatusQuery filter = new OrchestrationStatusQuery()
+ .setRuntimeStatusList(List.of(OrchestrationRuntimeStatus.PENDING, OrchestrationRuntimeStatus.RUNNING))
+ .setCreatedTimeFrom(Instant.now().minus(Duration.ofDays(7)))
+ .setCreatedTimeTo(Instant.now().minus(Duration.ofDays(1)));
+ OrchestrationStatusQueryResult result = client.queryInstances(filter);
+ return "Found " + result.getOrchestrationState().size() + " orchestrations.";
+}
+```
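The filter semantics can be sketched independently of the SDK: a hypothetical predicate over instance-metadata records that matches on runtime status and a created-time window (field and parameter names below are illustrative):

```python
from datetime import datetime, timedelta

def query_instances(instances, statuses=None, created_from=None, created_to=None):
    """Return instances matching the optional status and created-time filters."""
    def matches(inst):
        if statuses is not None and inst["runtimeStatus"] not in statuses:
            return False
        if created_from is not None and inst["createdTime"] < created_from:
            return False
        if created_to is not None and inst["createdTime"] > created_to:
            return False
        return True
    return [inst for inst in instances if matches(inst)]

now = datetime(2022, 6, 17)
instances = [
    {"instanceId": "a", "runtimeStatus": "Running",   "createdTime": now - timedelta(days=2)},
    {"instanceId": "b", "runtimeStatus": "Completed", "createdTime": now - timedelta(days=2)},
    {"instanceId": "c", "runtimeStatus": "Running",   "createdTime": now - timedelta(days=10)},
]

# Pending/Running instances created between 7 days and 1 day ago
recent_running = query_instances(
    instances,
    statuses={"Pending", "Running"},
    created_from=now - timedelta(days=7),
    created_to=now - timedelta(days=1))
print([inst["instanceId"] for inst in recent_running])  # → ['a']
```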
+ ### Azure Functions Core Tools
func durable get-instances --created-after 2021-03-10T13:57:31Z --created-before
If you have an orchestration instance that is taking too long to run, or you just need to stop it before it completes for any reason, you can terminate it.
-You can use the `TerminateAsync` (.NET), `terminate` (JavaScript), or the `terminate` (Python) method of the [orchestration client binding](durable-functions-bindings.md#orchestration-client) to terminate instances. The two parameters are an `instanceId` and a `reason` string, which are written to logs and to the instance status.
+The two parameters for the terminate API are an *instance ID* and a *reason* string, which are written to logs and to the instance status.
# [C#](#tab/csharp)
async def main(req: func.HttpRequest, starter: str, instance_id: str) -> func.Ht
return client.terminate(instance_id, reason) ```
+# [Java](#tab/java)
+
+```java
+@FunctionName("TerminateInstance")
+public void terminateInstance(
+ @HttpTrigger(name = "req", methods = {HttpMethod.POST}) HttpRequestMessage<String> req,
+ @DurableClientInput(name = "durableContext") DurableClientContext durableContext) {
+ String instanceID = req.getBody();
+ String reason = "Found a bug";
+ durableContext.getClient().terminate(instanceID, reason);
+}
+```
+ A terminated instance will eventually transition into the `Terminated` state. However, this transition will not happen immediately. Rather, the terminate operation will be queued in the task hub along with other operations for that instance. You can use the [instance query](#query-instances) APIs to know when a terminated instance has actually reached the `Terminated` state.
func durable terminate --id 0ab8c55a66644d68a3a8b220b12d209c --reason "Found a b
In some scenarios, orchestrator functions need to wait and listen for external events. Example scenarios where this is useful include the [monitoring](durable-functions-overview.md#monitoring) and [human interaction](durable-functions-overview.md#human) scenarios.
-You can send event notifications to running instances by using the `RaiseEventAsync` (.NET), `raiseEvent` (JavaScript), or `raise_event` (Python) method of the [orchestration client](durable-functions-bindings.md#orchestration-client). Instances that can handle these events are those that are awaiting a call to `WaitForExternalEvent` (.NET), yielding to a `waitForExternalEvent` (JavaScript) task, or yielding a `wait_for_external_event` (Python) task.
+You can send event notifications to running instances by using the *raise event* API of the [orchestration client](durable-functions-bindings.md#orchestration-client). Orchestrations can listen and respond to these events using the *wait for external event* orchestrator API.
-The parameters to `RaiseEventAsync` (.NET) and `raiseEvent` (JavaScript) are as follows:
+The parameters for *raise event* are as follows:
-* **InstanceId**: The unique ID of the instance.
-* **EventName**: The name of the event to send.
-* **EventData**: A JSON-serializable payload to send to the instance.
+* *Instance ID*: The unique ID of the instance.
+* *Event name*: The name of the event to send.
+* *Event data*: A JSON-serializable payload to send to the instance.
# [C#](#tab/csharp)
async def main(req: func.HttpRequest, starter: str, instance_id: str) -> func.Ht
return client.raise_event(instance_id, 'MyEvent', event_data) ```
+# [Java](#tab/java)
+
+```java
+@FunctionName("RaiseEvent")
+public void raiseEvent(
+ @HttpTrigger(name = "req", methods = {HttpMethod.POST}) HttpRequestMessage<String> req,
+ @DurableClientInput(name = "durableContext") DurableClientContext durableContext) {
+ String instanceID = req.getBody();
+ String eventName = "MyEvent";
+ int[] eventData = { 1, 2, 3 };
+ durableContext.getClient().raiseEvent(instanceID, eventName, eventData);
+}
+```
+ > [!NOTE]
func durable raise-event --id 1234567 --event-name MyOtherEvent --event-data 3
In long-running orchestrations, you may want to wait and get the results of an orchestration. In these cases, it's also useful to be able to define a timeout period on the orchestration. If the timeout is exceeded, the state of the orchestration should be returned instead of the results.
-The `WaitForCompletionOrCreateCheckStatusResponseAsync` (.NET), the `waitForCompletionOrCreateCheckStatusResponse` (JavaScript), or the `wait_for_completion_or_create_check_status_response` (Python) method can be used to get the actual output from an orchestration instance synchronously. By default, these methods use a default value of 10 seconds for `timeout`, and 1 second for `retryInterval`.
+The *"wait for completion or create check status response"* API can be used to get the actual output from an orchestration instance synchronously. By default, this method has a default timeout of 10 seconds and a polling interval of 1 second.
Here is an example HTTP-trigger function that demonstrates how to use this API:
def get_time_in_seconds(req: func.HttpRequest, query_parameter_name: str):
return query_value if query_value != None else 1000 ```
+# [Java](#tab/java)
+
+Java doesn't currently have a single method for this scenario. However, it can be implemented using a few extra lines of code.
+
+<!-- Tracking issue: https://github.com/microsoft/durabletask-java/issues/64 -->
+
+```java
+@FunctionName("HttpStartAndWait")
+public HttpResponseMessage httpStartAndWait(
+ @HttpTrigger(name = "req", route = "orchestrators/{functionName}/wait", methods = {HttpMethod.POST}) HttpRequestMessage<?> req,
+ @DurableClientInput(name = "durableContext") DurableClientContext durableContext,
+ @BindingName("functionName") String functionName,
+ final ExecutionContext context) {
+
+ DurableTaskClient client = durableContext.getClient();
+ String instanceId = client.scheduleNewOrchestrationInstance(functionName);
+ context.getLogger().info("Created new Java orchestration with instance ID = " + instanceId);
+ try {
+ String timeoutString = req.getQueryParameters().get("timeout");
+ Integer timeoutInSeconds = Integer.parseInt(timeoutString);
+        OrchestrationMetadata orchestration = client.waitForInstanceCompletion(
+ instanceId,
+ Duration.ofSeconds(timeoutInSeconds),
+ true /* getInputsAndOutputs */);
+ return req.createResponseBuilder(HttpStatus.OK)
+ .body(orchestration.getSerializedOutput())
+ .header("Content-Type", "application/json")
+ .build();
+ } catch (Exception timeoutEx) {
+ // timeout expired - return a 202 response
+ return durableContext.createCheckStatusResponse(req, instanceId);
+ }
+}
+```
+ Call the function with the following line. Use 2 seconds for the timeout and 0.5 seconds for the retry interval:
Call the function with the following line. Use 2 seconds for the timeout and 0.5
curl -X POST "http://localhost:7071/orchestrators/E1_HelloSequence/wait?timeout=2&retryInterval=0.5" ```
+> [!NOTE]
+> The above cURL command assumes you have an orchestrator function named `E1_HelloSequence` in your project. Because the HTTP trigger function takes the orchestrator name from the route, you can replace `E1_HelloSequence` with the name of any orchestrator function in your project.
+ Depending on the time required to get the response from the orchestration instance, there are two cases: * The orchestration instance completes within the defined timeout (in this case 2 seconds), and the response is the actual orchestration instance output, delivered synchronously:
Transfer-Encoding: chunked
## Retrieve HTTP management webhook URLs
-You can use an external system to monitor or to raise events to an orchestration. External systems can communicate with Durable Functions through the webhook URLs that are part of the default response described in [HTTP API URL discovery](durable-functions-http-features.md#http-api-url-discovery). The webhook URLs can alternatively be accessed programmatically using the [orchestration client binding](durable-functions-bindings.md#orchestration-client). The `CreateHttpManagementPayload` (.NET) or the `createHttpManagementPayload` (JavaScript) methods can be used to get a serializable object that contains these webhook URLs.
+You can use an external system to monitor or to raise events to an orchestration. External systems can communicate with Durable Functions through the webhook URLs that are part of the default response described in [HTTP API URL discovery](durable-functions-http-features.md#http-api-url-discovery). The webhook URLs can alternatively be accessed programmatically using the [orchestration client binding](durable-functions-bindings.md#orchestration-client). Specifically, the *create HTTP management payload* API can be used to get a serializable object that contains these webhook URLs.
-The `CreateHttpManagementPayload` (.NET) and `createHttpManagementPayload` (JavaScript) methods have one parameter:
+The *create HTTP management payload* API has one parameter:
-* **instanceId**: The unique ID of the instance.
+* *Instance ID*: The unique ID of the instance.
This API returns an object with the following string properties:
async def main(req: func.HttpRequest, starter: str, instance_id: str) -> func.co
payload: payload }) ```+
+# [Java](#tab/java)
+
+<!-- Tracking issue: https://github.com/microsoft/durabletask-java/issues/63 -->
+
+> [!NOTE]
+> This feature is currently not supported in Java.
+ ## Rewind instances (preview)
async def main(req: func.HttpRequest, starter: str, instance_id: str) -> func.Ht
return client.rewind(instance_id, reason) ``` -->
+# [Java](#tab/java)
+
+> [!NOTE]
+> This feature is currently not supported in Java.
+
+<!--
+Tracking issue: https://github.com/microsoft/durabletask-java/issues/65
+
+```java
+@FunctionName("Rewind")
+public void rewind(
+ @HttpTrigger(name = "req", methods = {HttpMethod.POST}) HttpRequestMessage<String> req,
+ @DurableClientInput(name = "durableContext") DurableClientContext durableContext) {
+ String instanceID = req.getBody();
+ String reason = "Failed due to external configuration issue";
+ durableContext.getClient().rewind(instanceID, reason);
+}
+```
+-->
+ ### Azure Functions Core Tools
func durable rewind --id 0ab8c55a66644d68a3a8b220b12d209c --reason "Orchestrator
## Purge instance history
-To remove all the data associated with an orchestration, you can purge the instance history. For example, you might want to delete any Azure Table rows and large message blobs associated with a completed instance. To do so, use the `PurgeInstanceHistoryAsync` (.NET), `purgeInstanceHistory` (JavaScript), or `purge_instance_history` (Python) method of the [orchestration client](durable-functions-bindings.md#orchestration-client) object.
+To remove all the data associated with an orchestration, you can purge the instance history. For example, you might want to delete any storage resources associated with a completed instance. To do so, use the *purge instance* API defined by the [orchestration client](durable-functions-bindings.md#orchestration-client).
-This method has two overloads. The first overload purges history by the ID of the orchestration instance:
+This first example shows how to purge a single orchestration instance.
# [C#](#tab/csharp)
async def main(req: func.HttpRequest, starter: str, instance_id: str) -> func.Ht
return client.purge_instance_history(instance_id) ```
+# [Java](#tab/java)
+
+```java
+@FunctionName("PurgeInstance")
+public HttpResponseMessage purgeInstance(
+ @HttpTrigger(name = "req", methods = {HttpMethod.POST}, route = "purge/{instanceID}") HttpRequestMessage<?> req,
+ @DurableClientInput(name = "durableContext") DurableClientContext durableContext,
+ @BindingName("instanceID") String instanceID) {
+ PurgeResult result = durableContext.getClient().purgeInstance(instanceID);
+ if (result.getDeletedInstanceCount() == 0) {
+ return req.createResponseBuilder(HttpStatus.NOT_FOUND)
+ .body("No completed instance with ID '" + instanceID + "' was found!")
+ .build();
+ } else {
+ return req.createResponseBuilder(HttpStatus.OK)
+ .body("Successfully purged data for " + instanceID)
+ .build();
+ }
+}
+```
+ The next example shows a timer-triggered function that purges the history for all orchestration instances that completed after the specified time interval. In this case, it removes data for all instances completed 30 or more days ago. This example function is scheduled to run once per day, at 12:00 PM UTC:
async def main(req: func.HttpRequest, starter: str, instance_id: str) -> func.Ht
return client.purge_instance_history_by(created_time_from, created_time_to, runtime_statuses) ```+
+# [Java](#tab/java)
+
+```java
+@FunctionName("PurgeInstances")
+public void purgeInstances(
+ @TimerTrigger(name = "purgeTimer", schedule = "0 0 12 * * *") String timerInfo,
+ @DurableClientInput(name = "durableContext") DurableClientContext durableContext,
+ ExecutionContext context) {
+ PurgeInstanceCriteria criteria = new PurgeInstanceCriteria()
+ .setCreatedTimeFrom(Instant.now().minus(Duration.ofDays(60)))
+ .setCreatedTimeTo(Instant.now().minus(Duration.ofDays(30)))
+ .setRuntimeStatusList(List.of(OrchestrationRuntimeStatus.COMPLETED));
+ PurgeResult result = durableContext.getClient().purgeInstances(criteria);
+ context.getLogger().info(String.format("Purged %d instance(s)", result.getDeletedInstanceCount()));
+}
+```
+ > [!NOTE]
azure-functions Durable Functions Orchestrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-orchestrations.md
Title: Durable Orchestrations - Azure Functions
description: Introduction to the orchestration feature for Azure Durable Functions. Previously updated : 05/11/2021 Last updated : 05/06/2022
+ms.devlang: csharp, javascript, powershell, python, java
#Customer intent: As a developer, I want to understand durable orchestrations so that I can use them effectively in my applications.
-# Durable Orchestrations
+# Durable orchestrations
Durable Functions is an extension of [Azure Functions](../functions-overview.md). You can use an *orchestrator function* to orchestrate the execution of other Durable functions within a function app. Orchestrator functions have the following characteristics:
An orchestration's instance ID is a required parameter for most [instance manage
Orchestrator functions reliably maintain their execution state by using the [event sourcing](/azure/architecture/patterns/event-sourcing) design pattern. Instead of directly storing the current state of an orchestration, the Durable Task Framework uses an append-only store to record the full series of actions the function orchestration takes. An append-only store has many benefits compared to "dumping" the full runtime state. Benefits include increased performance, scalability, and responsiveness. You also get eventual consistency for transactional data and full audit trails and history. The audit trails support reliable compensating actions.
-Durable Functions uses event sourcing transparently. Behind the scenes, the `await` (C#) or `yield` (JavaScript/Python) operator in an orchestrator function yields control of the orchestrator thread back to the Durable Task Framework dispatcher. The dispatcher then commits any new actions that the orchestrator function scheduled (such as calling one or more child functions or scheduling a durable timer) to storage. The transparent commit action updates the execution history of the orchestration instance by appending all new events into storage, much like an append-only log. Similarly, the commit action creates messages in storage to schedule the actual work. At this point, the orchestrator function can be unloaded from memory. By default, Durable Functions uses Azure Storage as its runtime state store, but other [storage providers are also supported](durable-functions-storage-providers.md).
+Durable Functions uses event sourcing transparently. Behind the scenes, the `await` (C#) or `yield` (JavaScript/Python) operator in an orchestrator function yields control of the orchestrator thread back to the Durable Task Framework dispatcher. In the case of Java, there is no special language keyword. Instead, calling `.await()` on a task will yield control back to the dispatcher via a custom `Throwable`. The dispatcher then commits any new actions that the orchestrator function scheduled (such as calling one or more child functions or scheduling a durable timer) to storage. The transparent commit action updates the execution history of the orchestration instance by appending all new events into storage, much like an append-only log. Similarly, the commit action creates messages in storage to schedule the actual work. At this point, the orchestrator function can be unloaded from memory. By default, Durable Functions uses Azure Storage as its runtime state store, but other [storage providers are also supported](durable-functions-storage-providers.md).
When an orchestration function is given more work to do (for example, a response message is received or a durable timer expires), the orchestrator wakes up and re-executes the entire function from the start to rebuild the local state. During the replay, if the code tries to call a function (or do any other async work), the Durable Task Framework consults the execution history of the current orchestration. If it finds that the [activity function](durable-functions-types-features-overview.md#activity-functions) has already executed and yielded a result, it replays that function's result and the orchestrator code continues to run. Replay continues until the function code is finished or until it has scheduled new async work.
The event-sourcing behavior of the Durable Task Framework is closely coupled wit
# [C#](#tab/csharp) ```csharp
-[FunctionName("E1_HelloSequence")]
+[FunctionName("HelloCities")]
public static async Task<List<string>> Run( [OrchestrationTrigger] IDurableOrchestrationContext context) { var outputs = new List<string>();
- outputs.Add(await context.CallActivityAsync<string>("E1_SayHello", "Tokyo"));
- outputs.Add(await context.CallActivityAsync<string>("E1_SayHello", "Seattle"));
- outputs.Add(await context.CallActivityAsync<string>("E1_SayHello", "London"));
+ outputs.Add(await context.CallActivityAsync<string>("SayHello", "Tokyo"));
+ outputs.Add(await context.CallActivityAsync<string>("SayHello", "Seattle"));
+ outputs.Add(await context.CallActivityAsync<string>("SayHello", "London"));
// returns ["Hello Tokyo!", "Hello Seattle!", "Hello London!"] return outputs;
const df = require("durable-functions");
module.exports = df.orchestrator(function*(context) { const output = [];
- output.push(yield context.df.callActivity("E1_SayHello", "Tokyo"));
- output.push(yield context.df.callActivity("E1_SayHello", "Seattle"));
- output.push(yield context.df.callActivity("E1_SayHello", "London"));
+ output.push(yield context.df.callActivity("SayHello", "Tokyo"));
+ output.push(yield context.df.callActivity("SayHello", "Seattle"));
+ output.push(yield context.df.callActivity("SayHello", "London"));
// returns ["Hello Tokyo!", "Hello Seattle!", "Hello London!"] return output;
$output += Invoke-DurableActivity -FunctionName 'SayHello' -Input 'London'
$output ```
+# [Java](#tab/java)
+
+```java
+@FunctionName("HelloCities")
+public String helloCitiesOrchestrator(
+ @DurableOrchestrationTrigger(name = "runtimeState") String runtimeState) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ String result = "";
+ result += ctx.callActivity("SayHello", "Tokyo", String.class).await() + ", ";
+ result += ctx.callActivity("SayHello", "Seattle", String.class).await() + ", ";
+ result += ctx.callActivity("SayHello", "London", String.class).await();
+ return result;
+ });
+}
+```
+
-At each `await` (C#) or `yield` (JavaScript/Python) statement, the Durable Task Framework checkpoints the execution state of the function into some durable storage backend (Azure Table storage by default). This state is what's referred to as the *orchestration history*.
+Whenever an activity function is scheduled, the Durable Task Framework checkpoints the execution state of the function into some durable storage backend (Azure Table storage by default). This state is what's referred to as the *orchestration history*.
### History table
Upon completion, the history of the function shown earlier looks something like
| PartitionKey (InstanceId) | EventType | Timestamp | Input | Name | Result | Status | |-|--|-|--|-||--|
-| eaee885b | ExecutionStarted | 2021-05-05T18:45:28.852Z | null | E1_HelloSequence | | |
+| eaee885b | ExecutionStarted | 2021-05-05T18:45:28.852Z | null | HelloCities | | |
| eaee885b | OrchestratorStarted | 2021-05-05T18:45:32.362Z | | | | |
-| eaee885b | TaskScheduled | 2021-05-05T18:45:32.670Z | | E1_SayHello | | |
+| eaee885b | TaskScheduled | 2021-05-05T18:45:32.670Z | | SayHello | | |
| eaee885b | OrchestratorCompleted | 2021-05-05T18:45:32.670Z | | | | | | eaee885b | TaskCompleted | 2021-05-05T18:45:34.201Z | | | """Hello Tokyo!""" | | | eaee885b | OrchestratorStarted | 2021-05-05T18:45:34.232Z | | | | |
-| eaee885b | TaskScheduled | 2021-05-05T18:45:34.435Z | | E1_SayHello | | |
+| eaee885b | TaskScheduled | 2021-05-05T18:45:34.435Z | | SayHello | | |
| eaee885b | OrchestratorCompleted | 2021-05-05T18:45:34.435Z | | | | | | eaee885b | TaskCompleted | 2021-05-05T18:45:34.763Z | | | """Hello Seattle!""" | | | eaee885b | OrchestratorStarted | 2021-05-05T18:45:34.857Z | | | | |
-| eaee885b | TaskScheduled | 2021-05-05T18:45:34.857Z | | E1_SayHello | | |
+| eaee885b | TaskScheduled | 2021-05-05T18:45:34.857Z | | SayHello | | |
| eaee885b | OrchestratorCompleted | 2021-05-05T18:45:34.857Z | | | | | | eaee885b | TaskCompleted | 2021-05-05T18:45:34.919Z | | | """Hello London!""" | | | eaee885b | OrchestratorStarted | 2021-05-05T18:45:35.032Z | | | | |
Upon completion, the history of the function shown earlier looks something like
A few notes on the column values: * **PartitionKey**: Contains the instance ID of the orchestration.
-* **EventType**: Represents the type of the event. May be one of the following types:
- * **OrchestratorStarted**: The orchestrator function resumed from an await or is running for the first time. The `Timestamp` column is used to populate the deterministic value for the `CurrentUtcDateTime` (.NET), `currentUtcDateTime` (JavaScript), and `current_utc_datetime` (Python) APIs.
- * **ExecutionStarted**: The orchestrator function started executing for the first time. This event also contains the function input in the `Input` column.
- * **TaskScheduled**: An activity function was scheduled. The name of the activity function is captured in the `Name` column.
- * **TaskCompleted**: An activity function completed. The result of the function is in the `Result` column.
- * **TimerCreated**: A durable timer was created. The `FireAt` column contains the scheduled UTC time at which the timer expires.
- * **TimerFired**: A durable timer fired.
- * **EventRaised**: An external event was sent to the orchestration instance. The `Name` column captures the name of the event and the `Input` column captures the payload of the event.
- * **OrchestratorCompleted**: The orchestrator function awaited.
- * **ContinueAsNew**: The orchestrator function completed and restarted itself with new state. The `Result` column contains the value, which is used as the input in the restarted instance.
- * **ExecutionCompleted**: The orchestrator function ran to completion (or failed). The outputs of the function or the error details are stored in the `Result` column.
+* **EventType**: Represents the type of the event. You can find detailed descriptions of all the history event types [here](https://github.com/Azure/durabletask/tree/main/src/DurableTask.Core/History#readme).
* **Timestamp**: The UTC timestamp of the history event. * **Name**: The name of the function that was invoked. * **Input**: The JSON-formatted input of the function.
A few notes on the column values:
> [!WARNING] > While it's useful as a debugging tool, don't take any dependency on this table. It may change as the Durable Functions extension evolves.
-Every time the function resumes from an `await` (C#) or `yield` (JavaScript/Python), the Durable Task Framework reruns the orchestrator function from scratch. On each rerun, it consults the execution history to determine whether the current async operation has taken place. If the operation took place, the framework replays the output of that operation immediately and moves on to the next `await` (C#) or `yield` (JavaScript/Python). This process continues until the entire history has been replayed. Once the current history has been replayed, the local variables will have been restored to their previous values.
+Every time the function is resumed after waiting for a task to complete, the Durable Task Framework reruns the orchestrator function from scratch. On each rerun, it consults the execution history to determine whether the current async task has completed. If the execution history shows that the task has already completed, the framework replays the output of that task and moves on to the next task. This process continues until the entire execution history has been replayed. Once the current execution history has been replayed, the local variables will have been restored to their previous values.
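The replay mechanism described above can be illustrated outside the framework. The sketch below is a deliberately simplified model, not Durable Task Framework code: an append-only history records completed activity results by call position, and re-running the "orchestrator" replays recorded results instead of re-executing the work.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class ReplaySketch {
    // Append-only history of completed activity results, keyed by call position.
    static List<String> history = new ArrayList<>();
    static int executionCount = 0;

    // "Calls" an activity: replays the recorded result if this call position
    // already appears in the history; otherwise runs the work and records it.
    static String callActivity(int position, Function<String, String> work, String input) {
        if (position < history.size()) {
            return history.get(position); // replayed: the work is NOT re-executed
        }
        String result = work.apply(input);
        history.add(result);
        return result;
    }

    // The "orchestrator": re-executed from the start on every wake-up.
    static String orchestrator() {
        executionCount++;
        String a = callActivity(0, city -> "Hello " + city + "!", "Tokyo");
        String b = callActivity(1, city -> "Hello " + city + "!", "Seattle");
        return a + " " + b;
    }

    public static void main(String[] args) {
        orchestrator();                 // first run: both activities execute and are recorded
        String result = orchestrator(); // second run: both results come from the history
        System.out.println(result);         // prints "Hello Tokyo! Hello Seattle!"
        System.out.println(executionCount); // prints "2"
    }
}
```

In the real framework the orchestrator is additionally suspended and unloaded at each unfinished task, and the history lives in durable storage rather than in memory; this model only captures the "replay from history, don't re-execute" rule.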
## Features and patterns
For more information and for examples, see the [Sub-orchestrations](durable-func
### Durable timers
-Orchestrations can schedule *durable timers* to implement delays or to set up timeout handling on async actions. Use durable timers in orchestrator functions instead of `Thread.Sleep` and `Task.Delay` (C#), or `setTimeout()` and `setInterval()` (JavaScript), or `time.sleep()` (Python).
+Orchestrations can schedule *durable timers* to implement delays or to set up timeout handling on async actions. Use durable timers in orchestrator functions instead of language-native "sleep" APIs.
For more information and for examples, see the [Durable timers](durable-functions-timers.md) article.
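To see why language-native sleep APIs don't belong in orchestrator code, consider this simplified model (again a conceptual sketch, not framework code): a native sleep would block and re-run on every replay, whereas a durable timer that has already fired is recorded in the history and completes immediately during replay.

```java
import java.util.ArrayList;
import java.util.List;

public class TimerSketch {
    // History of timers that have already fired, keyed by call position.
    static List<Boolean> firedTimers = new ArrayList<>();
    static int actualWaits = 0;

    // A durable-style timer: during replay, a timer already recorded as
    // fired completes immediately instead of waiting again.
    static void durableTimer(int position) {
        if (position < firedTimers.size()) {
            return; // replayed: no real wait occurs
        }
        actualWaits++;         // a real implementation would schedule a wake-up here
        firedTimers.add(true); // record the fired timer in the history
    }

    static void orchestrator() {
        durableTimer(0);
        durableTimer(1);
    }

    public static void main(String[] args) {
        orchestrator(); // first execution: both timers actually wait
        orchestrator(); // replay: the history satisfies both timers, no new waits
        System.out.println(actualWaits); // prints "2"
    }
}
```

A `Thread.sleep` call, by contrast, is invisible to the history, so every replay would pay the delay again and block the orchestrator thread while doing so.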
def orchestrator_function(context: df.DurableOrchestrationContext):
The feature is not currently supported in PowerShell.
+# [Java](#tab/java)
+
+The feature is not currently supported in Java.
+ In addition to supporting basic request/response patterns, the method supports automatic handling of common async HTTP 202 polling patterns, and also supports authentication with external services using [Managed Identities](../../active-directory/managed-identities-azure-resources/overview.md).
param($location)
"Hello $($location.City), $($location.State)!" # ... ```+
+# [Java](#tab/java)
+
+```java
+@FunctionName("GetWeatherOrchestrator")
+public String getWeatherOrchestrator(
+ @DurableOrchestrationTrigger(name = "runtimeState") String runtimeState) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ var location = new Location();
+ location.city = "Seattle";
+ location.state = "WA";
+ String weather = ctx.callActivity("GetWeather", location, String.class).await();
+ return weather;
+ });
+}
+
+@FunctionName("GetWeather")
+public String getWeather(@DurableActivityTrigger(name = "location") Location location) {
+ if (location.city.equals("Seattle") && location.state.equals("WA")) {
+ return "Cloudy";
+ } else {
+ return "Unknown";
+ }
+}
+
+class Location {
+ public String city;
+ public String state;
+}
+```
+ ## Next steps
azure-functions Durable Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-overview.md
Durable Functions is designed to work with all Azure Functions programming langu
| Language stack | Azure Functions Runtime versions | Language worker version | Minimum bundles version | | - | - | - | - |
-| .NET / C# / F# | Functions 1.0+ | In-process (GA) <br/> Out-of-process ([preview](https://github.com/microsoft/durabletask-dotnet#usage-with-azure-functions)) | N/A |
+| .NET / C# / F# | Functions 1.0+ | In-process (GA) <br/> Out-of-process ([preview](https://github.com/microsoft/durabletask-dotnet#usage-with-azure-functions)) | n/a |
| JavaScript/TypeScript | Functions 2.0+ | Node 8+ | 2.x bundles | | Python | Functions 2.0+ | Python 3.7+ | 2.x bundles | | PowerShell | Functions 3.0+ | PowerShell 7+ | 2.x bundles |
-| Java (coming soon) | Functions 3.0+ | Java 8+ | 4.x bundles |
+| Java (preview) | Functions 3.0+ | Java 8+ | 4.x bundles |
Like Azure Functions, there are templates to help you develop Durable Functions using [Visual Studio 2019](durable-functions-create-first-csharp.md), [Visual Studio Code](quickstart-js-vscode.md), and the [Azure portal](durable-functions-create-portal.md).
Invoke-DurableActivity -FunctionName 'F4' -Input $Z
You can use the `Invoke-DurableActivity` command to invoke other functions by name, pass parameters, and return function output. Each time the code calls `Invoke-DurableActivity` without the `NoWait` switch, the Durable Functions framework checkpoints the progress of the current function instance. If the process or virtual machine recycles midway through the execution, the function instance resumes from the preceding `Invoke-DurableActivity` call. For more information, see the next section, Pattern #2: Fan out/fan in.
+# [Java](#tab/java)
+
+```java
+@FunctionName("Chaining")
+public String helloCitiesOrchestrator(
+ @DurableOrchestrationTrigger(name = "runtimeState") String runtimeState) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ String input = ctx.getInput(String.class);
+ int x = ctx.callActivity("F1", input, int.class).await();
+ int y = ctx.callActivity("F2", x, int.class).await();
+ int z = ctx.callActivity("F3", y, int.class).await();
+ return ctx.callActivity("F4", z, double.class).await();
+ });
+}
+```
+
+You can use the `ctx` object to invoke other functions by name, pass parameters, and return function output. The output of these method calls is a `Task<V>` object where `V` is the type of data returned by the invoked function. Each time you call `Task<V>.await()`, the Durable Functions framework checkpoints the progress of the current function instance. If the process unexpectedly recycles midway through the execution, the function instance resumes from the preceding `Task<V>.await()` call. For more information, see the next section, Pattern #2: Fan out/fan in.
+
+> [!NOTE]
+> The orchestrator function logic must be implemented as a lambda function and wrapped by a call to `OrchestrationRunner.loadAndRun(...)` as shown in the above example.
+ ### <a name="fan-in-out"></a>Pattern #2: Fan out/fan in
The fan-out work is distributed to multiple instances of the `F2` function. Plea
The automatic checkpointing that happens at the `Wait-ActivityFunction` call ensures that a potential midway crash or reboot doesn't require restarting an already completed task.
+# [Java](#tab/java)
+
+```java
+@FunctionName("FanOutFanIn")
+public String fanOutFanInOrchestrator(
+ @DurableOrchestrationTrigger(name = "runtimeState") String runtimeState) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ List<?> batch = ctx.callActivity("F1", List.class).await();
+
+ // Schedule each task to run in parallel
+ List<Task<Integer>> parallelTasks = IntStream.range(0, batch.size())
+ .mapToObj(i -> ctx.callActivity("F2", i, Integer.class))
+ .collect(Collectors.toList());
+
+ // Wait for all tasks to complete, then return the aggregated sum of the results
+ List<Integer> results = ctx.allOf(parallelTasks).await();
+ return results.stream().reduce(0, Integer::sum);
+ });
+}
+```
+
+The fan-out work is distributed to multiple instances of the `F2` function. The work is tracked by using a dynamic list of tasks. `ctx.allOf(parallelTasks).await()` is called to wait for all the called functions to finish. Then, the `F2` function outputs are aggregated from the dynamic task list and returned as the orchestrator function's output.
+
+The automatic checkpointing that happens at the `.await()` call on `ctx.allOf(parallelTasks)` ensures that an unexpected process recycle doesn't require restarting any already completed tasks.
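The same fan-out/fan-in shape can be expressed with plain `CompletableFuture`s, which may help clarify the aggregation semantics of the dynamic task list. Note this sketch is only an analogy: unlike `ctx.allOf`, it has no durable checkpointing, so a process recycle would lose all in-flight work. The `f2` activity here is a hypothetical doubling function standing in for real work.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class FanOutFanIn {
    // Simulated "F2" activity: here it simply doubles its input.
    static CompletableFuture<Integer> f2(int item) {
        return CompletableFuture.supplyAsync(() -> item * 2);
    }

    public static int run(List<Integer> batch) {
        // Fan out: schedule one task per work item.
        List<CompletableFuture<Integer>> tasks = batch.stream()
            .map(FanOutFanIn::f2)
            .collect(Collectors.toList());

        // Fan in: wait for all tasks, then aggregate the results into a sum.
        CompletableFuture.allOf(tasks.toArray(new CompletableFuture[0])).join();
        return tasks.stream().map(CompletableFuture::join).reduce(0, Integer::sum);
    }

    public static void main(String[] args) {
        System.out.println(run(List.of(1, 2, 3))); // prints "12"
    }
}
```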
+ > [!NOTE]
The async HTTP API pattern addresses the problem of coordinating the state of lo
![A diagram of the HTTP API pattern](./media/durable-functions-concepts/async-http-api.png)
-Durable Functions provides **built-in support** for this pattern, simplifying or even removing the code you need to write to interact with long-running function executions. For example, the Durable Functions quickstart samples ([C#](durable-functions-create-first-csharp.md) and [JavaScript](quickstart-js-vscode.md)) show a simple REST command that you can use to start new orchestrator function instances. After an instance starts, the extension exposes webhook HTTP APIs that query the orchestrator function status.
+Durable Functions provides **built-in support** for this pattern, simplifying or even removing the code you need to write to interact with long-running function executions. For example, the Durable Functions quickstart samples ([C#](durable-functions-create-first-csharp.md), [JavaScript](quickstart-js-vscode.md), [Python](quickstart-python-vscode.md), [PowerShell](quickstart-powershell-vscode.md), and [Java](quickstart-java.md)) show a simple REST command that you can use to start new orchestrator function instances. After an instance starts, the extension exposes webhook HTTP APIs that query the orchestrator function status.
The following example shows REST commands that start an orchestrator and query its status. For clarity, some protocol details are omitted from the example.
while ($Context.CurrentUtcDateTime -lt $expiryTime) {
$output ```
+# [Java](#tab/java)
+
+```java
+@FunctionName("Monitor")
+public String monitorOrchestrator(
+ @DurableOrchestrationTrigger(name = "runtimeState") String runtimeState) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ JobInfo jobInfo = ctx.getInput(JobInfo.class);
+ String jobId = jobInfo.getJobId();
+ Instant expiryTime = jobInfo.getExpirationTime();
+
+ while (ctx.getCurrentInstant().compareTo(expiryTime) < 0) {
+ String status = ctx.callActivity("GetJobStatus", jobId, String.class).await();
+
+ // Perform an action when a condition is met
+ if (status.equals("Completed")) {
+ // send an alert and exit
+ ctx.callActivity("SendAlert", jobId).await();
+ break;
+ } else {
+ // wait N minutes before doing the next poll
+ Duration pollingDelay = jobInfo.getPollingDelay();
+ ctx.createTimer(pollingDelay).await();
+ }
+ }
+
+ return "done";
+ });
+}
+```
+
-When a request is received, a new orchestration instance is created for that job ID. The instance polls a status until a condition is met and the loop is exited. A durable timer controls the polling interval. Then, more work can be performed, or the orchestration can end. When `nextCheck` exceeds `expiryTime`, the monitor ends.
+When a request is received, a new orchestration instance is created for that job ID. The instance polls a status until either a condition is met or a timeout expires. A durable timer controls the polling interval. Then, more work can be performed, or the orchestration can end.
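The monitor loop can be sketched in plain Python with a manually advanced clock standing in for durable timers. The job-status and alert callables here are hypothetical stand-ins for the `GetJobStatus` and `SendAlert` activities:

```python
from datetime import datetime, timedelta

def run_monitor(get_job_status, send_alert, start, expiry_time, polling_delay):
    """Poll a job until it completes or the monitor expires.

    The clock is advanced manually to stand in for durable timer delays,
    so the sketch runs instantly and deterministically."""
    current = start
    while current < expiry_time:
        if get_job_status() == "Completed":
            send_alert()          # condition met: act, then exit the loop
            return "alerted"
        current += polling_delay  # stand-in for awaiting a durable timer
    return "expired"

# The fake job completes on the third poll; the monitor allows five polls.
statuses = iter(["Running", "Running", "Completed"])
alerts = []
start = datetime(2022, 6, 18)
result = run_monitor(lambda: next(statuses), lambda: alerts.append("alert"),
                     start, start + timedelta(minutes=50), timedelta(minutes=10))
# result == "alerted", and exactly one alert was recorded
```

If the status never reaches `"Completed"`, the loop falls through when the clock passes `expiry_time` and the monitor ends quietly.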
### <a name="human"></a>Pattern #5: Human interaction
$output
```

To create the durable timer, call `Start-DurableTimer`. The notification is received by `Start-DurableExternalEventListener`. Then, `Wait-DurableTask` is called to decide whether to escalate (timeout happens first) or process the approval (the approval is received before timeout).
+# [Java](#tab/java)
+
+```java
+@FunctionName("ApprovalWorkflow")
+public String approvalWorkflow(
+ @DurableOrchestrationTrigger(name = "runtimeState") String runtimeState) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ ApprovalInfo approvalInfo = ctx.getInput(ApprovalInfo.class);
+ ctx.callActivity("RequestApproval", approvalInfo).await();
+
+ Duration timeout = Duration.ofHours(72);
+ try {
+ // Wait for an approval. A TaskCanceledException will be thrown if the timeout expires.
+ boolean approved = ctx.waitForExternalEvent("ApprovalEvent", timeout, boolean.class).await();
+ approvalInfo.setApproved(approved);
+
+ ctx.callActivity("ProcessApproval", approvalInfo).await();
+ } catch (TaskCanceledException timeoutEx) {
+ ctx.callActivity("Escalate", approvalInfo).await();
+ }
+ });
+}
+```
+
+The `ctx.waitForExternalEvent(...).await()` method call pauses the orchestration until it receives an event named `ApprovalEvent`, which has a `boolean` payload. If the event is received, an activity function is called to process the approval result. However, if no such event is received before the `timeout` (72 hours) expires, a `TaskCanceledException` is raised and the `Escalate` activity function is called.
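The approve-or-escalate race can be sketched with a plain `threading.Event` standing in for the external `ApprovalEvent`. The process/escalate callables are hypothetical stand-ins for the `ProcessApproval` and `Escalate` activities:

```python
import threading

def approval_workflow(approval_event, timeout_s, process, escalate):
    """Wait for an external approval signal; escalate if the timeout elapses first."""
    if approval_event.wait(timeout_s):
        process()    # approval arrived before the deadline
        return "processed"
    escalate()       # deadline passed with no approval
    return "escalated"

# Path 1: the approval arrives (the event is already set).
approved = threading.Event()
approved.set()
outcome1 = approval_workflow(approved, 0.1, lambda: None, lambda: None)
# outcome1 == "processed"

# Path 2: nobody approves within the (deliberately tiny) timeout.
outcome2 = approval_workflow(threading.Event(), 0.01, lambda: None, lambda: None)
# outcome2 == "escalated"
```

In the real orchestrator the "event" is delivered durably and the 72-hour wait survives process restarts, which is what the thread-based sketch cannot do.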
+
+> [!NOTE]
+> There is no charge for time spent waiting for external events when running in the Consumption plan.
+
An external client can deliver the event notification to a waiting orchestrator function by using the [built-in HTTP APIs](durable-functions-http-api.md#raise-event):

```bash
async def main(client: str):
Send-DurableExternalEvent -InstanceId $InstanceId -EventName "ApprovalEvent" -EventData "true"
-``````
+```
+
+# [Java](#tab/java)
+
+```java
+@FunctionName("RaiseEventToOrchestration")
+public void raiseEventToOrchestration(
+ @HttpTrigger(name = "instanceId") String instanceId,
+ @DurableClientInput(name = "durableContext") DurableClientContext durableContext) {
+
+ DurableTaskClient client = durableContext.getClient();
+ client.raiseEvent(instanceId, "ApprovalEvent", true);
+}
+```
main = df.Entity.create(entity_function)
Durable entities are currently not supported in PowerShell.
+# [Java](#tab/java)
+
+Durable entities are currently not supported in Java.
+
Clients can enqueue *operations* for an entity function (also known as "signaling" the entity) using the [entity client binding](durable-functions-bindings.md#entity-client).
async def main(req: func.HttpRequest, starter: str) -> func.HttpResponse:
Durable entities are currently not supported in PowerShell.
+# [Java](#tab/java)
+
+Durable entities are currently not supported in Java.
+ Entity functions are available in [Durable Functions 2.0](durable-functions-versions.md) and above for C#, JavaScript, and Python.
You can get started with Durable Functions in under 10 minutes by completing one
* [JavaScript using Visual Studio Code](quickstart-js-vscode.md)
* [Python using Visual Studio Code](quickstart-python-vscode.md)
* [PowerShell using Visual Studio Code](quickstart-powershell-vscode.md)
+* [Java using Maven](quickstart-java.md)
In these quickstarts, you locally create and test a "hello world" durable function. You then publish the function code to Azure. The function you create orchestrates and chains together calls to other functions.
In these quickstarts, you locally create and test a "hello world" durable functi
Durable Functions is developed in collaboration with Microsoft Research. As a result, the Durable Functions team actively produces research papers and artifacts; these include:
-* [Durable Functions: Semantics for Stateful Serverless](https://www.microsoft.com/en-us/research/uploads/prod/2021/10/DF-Semantics-Final.pdf) _(OOPSLA'21)_
-* [Serverless Workflows with Durable Functions and Netherite](https://arxiv.org/pdf/2103.00033.pdf) _(pre-print)_
+* [Durable Functions: Semantics for Stateful Serverless](https://www.microsoft.com/research/uploads/prod/2021/10/DF-Semantics-Final.pdf) *(OOPSLA'21)*
+* [Serverless Workflows with Durable Functions and Netherite](https://arxiv.org/pdf/2103.00033.pdf) *(pre-print)*
## Learn more
azure-functions Durable Functions Sequence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-sequence.md
Title: Function chaining in Durable Functions - Azure
description: Learn how to run a Durable Functions sample that executes a sequence of functions. Previously updated : 02/08/2022 Last updated : 06/16/2022 ms.devlang: csharp, javascript, python # Function chaining in Durable Functions - Hello sequence sample
-Function chaining refers to the pattern of executing a sequence of functions in a particular order. Often the output of one function needs to be applied to the input of another function. This article describes the chaining sequence that you create when you complete the Durable Functions quickstart ([C#](durable-functions-create-first-csharp.md), [JavaScript](quickstart-js-vscode.md), or [Python](quickstart-python-vscode.md)). For more information about Durable Functions, see [Durable Functions overview](durable-functions-overview.md).
+Function chaining refers to the pattern of executing a sequence of functions in a particular order. Often the output of one function needs to be applied to the input of another function. This article describes the chaining sequence that you create when you complete the Durable Functions quickstart ([C#](durable-functions-create-first-csharp.md), [JavaScript](quickstart-js-vscode.md), [Python](quickstart-python-vscode.md), [PowerShell](quickstart-powershell-vscode.md), or [Java](quickstart-java.md)). For more information about Durable Functions, see [Durable Functions overview](durable-functions-overview.md).
[!INCLUDE [durable-functions-prerequisites](../../../includes/durable-functions-prerequisites.md)]
azure-functions Durable Functions Serialization And Persistence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-serialization-and-persistence.md
Title: Data persistence and serialization in Durable Functions - Azure description: Learn how the Durable Functions extension for Azure Functions persists data-+ Previously updated : 02/11/2021 Last updated : 05/26/2022
+ms.devlang: csharp, java, javascript, python
#Customer intent: As a developer, I want to understand what data is persisted to durable storage, how that data is serialized, and how I can customize it when it doesn't work the way my app needs it to.
Alternatively, .NET users have the option of implementing custom serialization p
### Default serialization logic
-Durable Functions internally uses [Json.NET](https://www.newtonsoft.com/json/help/html/Introduction.htm) to serialize orchestration and entity data to JSON. The default settings Durable Functions uses for Json.NET are:
+Durable Functions for .NET in-process internally uses [Json.NET](https://www.newtonsoft.com/json/help/html/Introduction.htm) to serialize orchestration and entity data to JSON. The default settings Durable Functions uses for Json.NET are:
**Inputs, Outputs, and State:**
namespace MyApplication
} ```
+### .NET Isolated and System.Text.Json
+
+Durable Functions running in the [.NET Isolated worker process](../dotnet-isolated-process-guide.md) use [System.Text.Json](/dotnet/api/system.text.json) libraries for serialization rather than Newtonsoft.Json. There is currently no support for injecting serialization settings. However, attributes may be used to control aspects of serialization.
+
+For more information on the built-in support for JSON serialization in .NET, see the [JSON serialization and deserialization in .NET overview documentation](/dotnet/standard/serialization/system-text-json-overview).
+
# [JavaScript](#tab/javascript)

### Serialization and deserialization logic
It is strongly recommended to use type annotations to ensure Durable Functions s
For custom data types, you must define the JSON serialization and deserialization of a data type by exporting a static `to_json` and `from_json` method from your class.
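As a concrete sketch, a hypothetical `JobInfo` class could expose the expected static methods like this (the class and field names are illustrative, not part of any SDK):

```python
import json

class JobInfo:
    """Custom data type exposing the static JSON hooks described above."""

    def __init__(self, job_id: str, polling_delay_seconds: int):
        self.job_id = job_id
        self.polling_delay_seconds = polling_delay_seconds

    @staticmethod
    def to_json(obj: "JobInfo") -> str:
        # Serialize only plain, JSON-safe fields.
        return json.dumps({
            "job_id": obj.job_id,
            "polling_delay_seconds": obj.polling_delay_seconds,
        })

    @staticmethod
    def from_json(data: str) -> "JobInfo":
        fields = json.loads(data)
        return JobInfo(fields["job_id"], fields["polling_delay_seconds"])

# Round trip: serialize, then rehydrate.
restored = JobInfo.from_json(JobInfo.to_json(JobInfo("job-42", 60)))
# restored.job_id == "job-42" and restored.polling_delay_seconds == 60
```

Keeping `to_json`/`from_json` symmetric (every field written is read back) is what makes checkpoint-and-replay safe for the type.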
+# [Java](#tab/java)
+
+Java uses the [Jackson v2.x](https://github.com/FasterXML/jackson#jackson-project-home-github) libraries for serialization and deserialization of data payloads. You can use [Jackson annotations](https://github.com/FasterXML/jackson-annotations/wiki/Jackson-Annotations) on your POJO types to customize the serialization behavior.
+
azure-functions Durable Functions Singletons https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-singletons.md
async def main(req: func.HttpRequest, starter: str) -> func.HttpResponse:
```
+# [Java](#tab/java)
+
+```java
+@FunctionName("HttpStartSingle")
+public HttpResponseMessage runSingle(
+ @HttpTrigger(name = "req") HttpRequestMessage<?> req,
+ @DurableClientInput(name = "durableContext") DurableClientContext durableContext) {
+
+ String instanceID = "StaticID";
+ DurableTaskClient client = durableContext.getClient();
+
+ // Check to see if an instance with this ID is already running
+ OrchestrationMetadata metadata = client.getInstanceMetadata(instanceID, false);
+    if (metadata != null && metadata.isRunning()) {
+ return req.createResponseBuilder(HttpStatus.CONFLICT)
+ .body("An instance with ID '" + instanceID + "' already exists.")
+ .build();
+ }
+
+ // No such instance exists - create a new one. De-dupe is handled automatically
+ // in the storage layer if another function tries to also use this instance ID.
+ client.scheduleNewOrchestrationInstance("MyOrchestration", null, instanceID);
+ return durableContext.createCheckStatusResponse(req, instanceID);
+}
+```
+
-By default, instance IDs are randomly generated GUIDs. In the previous example, however, the instance ID is passed in route data from the URL. The code calls `GetStatusAsync`(C#), `getStatus` (JavaScript), or `get_status` (Python) to check if an instance having the specified ID is already running. If no such instance is running, a new instance is created with that ID.
+By default, instance IDs are randomly generated GUIDs. In the previous example, however, the instance ID is passed in route data from the URL. The code then fetches the orchestration instance metadata to check if an instance having the specified ID is already running. If no such instance is running, a new instance is created with that ID.
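Stripped of the HTTP and storage details, the check-then-create logic can be sketched with a plain dictionary standing in for the instance store (names and status codes are illustrative; the same check-then-create race noted below applies here too):

```python
def start_singleton(instances, instance_id, orchestration_name):
    """Start an orchestration only if no instance with this ID is already active."""
    if instances.get(instance_id) in ("Pending", "Running"):
        return 409  # Conflict: an instance with this ID already exists
    # No such instance (or it reached a terminal status): create a new one.
    instances[instance_id] = "Running"
    return 202  # Accepted: would carry the check-status response

instances = {}
first = start_singleton(instances, "StaticID", "MyOrchestration")
second = start_singleton(instances, "StaticID", "MyOrchestration")
# first == 202, second == 409
```

A terminal status (for example `"Completed"`) would not block re-creation, matching the singleton behavior described above.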
> [!NOTE]
-> There is a potential race condition in this sample. If two instances of **HttpStartSingle** execute concurrently, both function calls will report success, but only one orchestration instance will actually start. Depending on your requirements, this may have undesirable side effects. For this reason, it is important to ensure that no two requests can execute this trigger function concurrently.
+> There is a potential race condition in this sample. If two instances of **HttpStartSingle** execute concurrently, both function calls will report success, but only one orchestration instance will actually start. Depending on your requirements, this may have undesirable side effects.
The implementation details of the orchestrator function don't actually matter. It could be a regular orchestrator function that starts and completes, or it could be one that runs forever (that is, an [Eternal Orchestration](durable-functions-eternal-orchestrations.md)). The important point is that there is only ever one instance running at a time.
azure-functions Durable Functions Sub Orchestrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-sub-orchestrations.md
Title: Sub-orchestrations for Durable Functions - Azure description: How to call orchestrations from orchestrations in the Durable Functions extension for Azure Functions. Previously updated : 11/03/2019 Last updated : 05/09/2022
In addition to calling activity functions, orchestrator functions can call other orchestrator functions. For example, you can build a larger orchestration out of a library of smaller orchestrator functions. Or you can run multiple instances of an orchestrator function in parallel.
-An orchestrator function can call another orchestrator function using the `CallSubOrchestratorAsync` or the `CallSubOrchestratorWithRetryAsync` methods in .NET, the `callSubOrchestrator` or `callSubOrchestratorWithRetry` methods in JavaScript, and the `call_sub_orchestrator` or `call_sub_orchestrator_with_retry` methods in Python. The [Error Handling & Compensation](durable-functions-error-handling.md#automatic-retry-on-failure) article has more information on automatic retry.
+An orchestrator function can call another orchestrator function using the *"call-sub-orchestrator"* API. The [Error Handling & Compensation](durable-functions-error-handling.md#automatic-retry-on-failure) article has more information on automatic retry.
Sub-orchestrator functions behave just like activity functions from the caller's perspective. They can return a value, throw an exception, and can be awaited by the parent orchestrator function. > [!NOTE]
-> Sub-orchestrations are currently supported in .NET, JavaScript, and Python.
+> Sub-orchestrations are not yet supported in PowerShell.
## Example
def orchestrator_function(context: df.DurableOrchestrationContext):
# Step 4: ... ```
+# [Java](#tab/java)
+
+```java
+@FunctionName("DeviceProvisioningOrchestration")
+public String deviceProvisioningOrchestration(
+ @DurableOrchestrationTrigger(name = "runtimeState") String runtimeState) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ // Step 1: Create an installation package in blob storage and return a SAS URL.
+ String deviceId = ctx.getInput(String.class);
+ String blobUri = ctx.callActivity("CreateInstallPackage", deviceId, String.class).await();
+
+ // Step 2: Notify the device that the installation package is ready.
+ String[] args = { deviceId, blobUri };
+ ctx.callActivity("SendPackageUrlToDevice", args).await();
+
+ // Step 3: Wait for the device to acknowledge that it has downloaded the new package.
+ ctx.waitForExternalEvent("DownloadCompletedAck").await();
+
+ // Step 4: ...
+ });
+}
+```
+
-This orchestrator function can be used as-is for one-off device provisioning or it can be part of a larger orchestration. In the latter case, the parent orchestrator function can schedule instances of `DeviceProvisioningOrchestration` using the `CallSubOrchestratorAsync` (.NET), `callSubOrchestrator` (JavaScript), or `call_sub_orchestrator` (Python) API.
+This orchestrator function can be used as-is for one-off device provisioning or it can be part of a larger orchestration. In the latter case, the parent orchestrator function can schedule instances of `DeviceProvisioningOrchestration` using the *"call-sub-orchestrator"* API.
Here is an example that shows how to run multiple orchestrator functions in parallel.
def orchestrator_function(context: df.DurableOrchestrationContext):
# ... ```+
+# [Java](#tab/java)
+
+```java
+@FunctionName("ProvisionNewDevices")
+public String provisionNewDevices(
+ @DurableOrchestrationTrigger(name = "runtimeState") String runtimeState) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ List<?> deviceIDs = ctx.getInput(List.class);
+
+ // Schedule each device provisioning sub-orchestration to run in parallel
+ List<Task<Void>> parallelTasks = deviceIDs.stream()
+ .map(device -> ctx.callSubOrchestrator("DeviceProvisioningOrchestration", device))
+ .collect(Collectors.toList());
+
+ // ...
+ });
+}
+```
+ > [!NOTE]
azure-functions Durable Functions Task Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-task-hubs.md
Title: Task hubs in Durable Functions - Azure
description: Learn what a task hub is in the Durable Functions extension for Azure Functions. Learn how to configure task hubs. Previously updated : 05/12/2021 Last updated : 05/10/2022
The task hub property in the `function.json` file is set via App Setting:
} ```
+# [Java](#tab/java)
+
+```java
+@FunctionName("HttpStart")
+public HttpResponseMessage httpStart(
+ @HttpTrigger(name = "req", route = "orchestrators/{functionName}") HttpRequestMessage<?> req,
+ @DurableClientInput(name = "durableContext", taskHub = "%MyTaskHub%") DurableClientContext durableContext,
+ @BindingName("functionName") String functionName,
+ final ExecutionContext context) {
+
+ DurableTaskClient client = durableContext.getClient();
+ String instanceId = client.scheduleNewOrchestrationInstance(functionName);
+ context.getLogger().info("Created new Java orchestration with instance ID = " + instanceId);
+ return durableContext.createCheckStatusResponse(req, instanceId);
+}
+```
+ > [!NOTE]
azure-functions Durable Functions Timers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-timers.md
Title: Timers in Durable Functions - Azure description: Learn how to implement durable timers in the Durable Functions extension for Azure Functions. Previously updated : 07/13/2020 Last updated : 05/09/2022
+ms.devlang: csharp, javascript, powershell, python, java
# Timers in Durable Functions (Azure Functions)
-[Durable Functions](durable-functions-overview.md) provides *durable timers* for use in orchestrator functions to implement delays or to set up timeouts on async actions. Durable timers should be used in orchestrator functions instead of `Thread.Sleep` and `Task.Delay` (C#), or `setTimeout()` and `setInterval()` (JavaScript), or `time.sleep()` (Python).
+[Durable Functions](durable-functions-overview.md) provides *durable timers* for use in orchestrator functions to implement delays or to set up timeouts on async actions. Durable timers should be used in orchestrator functions instead of "sleep" or "delay" APIs that may be built into the language.
-You create a durable timer by calling the [`CreateTimer` (.NET)](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.idurableorchestrationcontext.createtimer), the [`createTimer` (JavaScript)](/javascript/api/durable-functions/durableorchestrationcontext#durable-functions-durableorchestrationcontext-createtimer), or the [`create_timer` (Python)](/python/api/azure-functions-durable/azure.durable_functions.durableorchestrationcontext#azure-durable-functions-durableorchestrationcontext-create-timer) method of the [orchestration trigger binding](durable-functions-bindings.md#orchestration-trigger). The method returns a task that completes on a specified date and time.
+Durable timers are tasks that are created using the appropriate "create timer" API for the provided language, as shown below, and take either a due time or a duration as an argument.
+
+# [C#](#tab/csharp)
+
+```csharp
+// Put the orchestrator to sleep for 72 hours
+DateTime dueTime = context.CurrentUtcDateTime.AddHours(72);
+await context.CreateTimer(dueTime, CancellationToken.None);
+```
+
+# [JavaScript](#tab/javascript)
+
+```javascript
+// Put the orchestrator to sleep for 72 hours
+// Note that DateTime comes from the "luxon" module
+const deadline = DateTime.fromJSDate(context.df.currentUtcDateTime, {zone: 'utc'}).plus({ hours: 72 });
+yield context.df.createTimer(deadline.toJSDate());
+```
+
+# [Python](#tab/python)
+
+```python
+# Put the orchestrator to sleep for 72 hours
+due_time = context.current_utc_datetime + timedelta(hours=72)
+durable_timeout_task = context.create_timer(due_time)
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+# Put the orchestrator to sleep for 72 hours
+$duration = New-TimeSpan -Hours 72
+Start-DurableTimer -Duration $duration
+```
+
+# [Java](#tab/java)
+
+```java
+// Put the orchestrator to sleep for 72 hours
+ctx.createTimer(Duration.ofHours(72)).await();
+```
+---
+When you "await" the timer task, the orchestrator function will sleep until the specified expiration time.
+
+> [!NOTE]
+> Orchestrations will continue to process other incoming events while waiting for a timer task to expire.
## Timer limitations
-When you create a timer that expires at 4:30 pm, the underlying Durable Task Framework enqueues a message that becomes visible only at 4:30 pm. When running in the Azure Functions Consumption plan, the newly visible timer message will ensure that the function app gets activated on an appropriate VM.
+When you create a timer that expires at 4:30 pm UTC, the underlying Durable Task Framework enqueues a message that becomes visible only at 4:30 pm UTC. If the function app is scaled down to zero instances in the meantime, the newly visible timer message will ensure that the function app gets activated again on an appropriate VM.
> [!NOTE] > * Starting with [version 2.3.0](https://github.com/Azure/azure-functions-durable-extension/releases/tag/v2.3.0) of the Durable Extension, Durable timers are unlimited for .NET apps. For JavaScript, Python, and PowerShell apps, as well as .NET apps using earlier versions of the extension, Durable timers are limited to six days. When you are using an older extension version or a non-.NET language runtime and need a delay longer than six days, use the timer APIs in a `while` loop to simulate a longer delay.
-> * Always use `CurrentUtcDateTime` instead of `DateTime.UtcNow` in .NET or `currentUtcDateTime` instead of `Date.now` or `Date.UTC` in JavaScript when computing the fire time for durable timers. For more information, see the [orchestrator function code constraints](durable-functions-code-constraints.md) article.
+> * Don't use built-in date/time APIs for getting the current time. When calculating a future date for a timer to expire, always use the orchestrator function's current time API. For more information, see the [orchestrator function code constraints](durable-functions-code-constraints.md#dates-and-times) article.
## Usage for delay
for ($num = 0 ; $num -le 9 ; $num++){
Invoke-DurableActivity -FunctionName 'SendBillingEvent' } ```+
+# [Java](#tab/java)
+
+```java
+@FunctionName("BillingIssuer")
+public String billingIssuer(
+ @DurableOrchestrationTrigger(name = "runtimeState") String runtimeState) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ for (int i = 0; i < 10; i++) {
+ ctx.createTimer(Duration.ofDays(1)).await();
+ ctx.callActivity("SendBillingEvent").await();
+ }
+ return "done";
+ });
+}
+```
+ > [!WARNING]
else {
} ```
+# [Java](#tab/java)
+
+```java
+@FunctionName("TryGetQuote")
+public String tryGetQuote(
+ @DurableOrchestrationTrigger(name = "runtimeState") String runtimeState) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ Task<Void> activityTask = ctx.callActivity("GetQuote");
+ Task<Void> timerTask = ctx.createTimer(Duration.ofSeconds(30));
+
+    Task<?> winner = ctx.anyOf(activityTask, timerTask).await();
+ if (winner == activityTask) {
+ // success case
+ return true;
+ } else {
+ // timeout case
+ return false;
+ }
+ });
+}
+```
+ > [!WARNING]
-> Use a `CancellationTokenSource` (.NET) or call `cancel()` on the returned `TimerTask` (JavaScript) to cancel a durable timer if your code will not wait for it to complete. The Durable Task Framework will not change an orchestration's status to "completed" until all outstanding tasks are completed or canceled.
+> In .NET, JavaScript, Python, and PowerShell, you must cancel any created durable timers if your code will not wait for them to complete. See the examples above for how to cancel pending timers. The Durable Task Framework will not change an orchestration's status to "Completed" until all outstanding tasks, including durable timer tasks, are either completed or canceled.
-This cancellation mechanism doesn't terminate in-progress activity function or sub-orchestration executions. Rather, it simply allows the orchestrator function to ignore the result and move on. If your function app uses the Consumption plan, you'll still be billed for any time and memory consumed by the abandoned activity function. By default, functions running in the Consumption plan have a timeout of five minutes. If this limit is exceeded, the Azure Functions host is recycled to stop all execution and prevent a runaway billing situation. The [function timeout is configurable](../functions-host-json.md#functiontimeout).
+This cancellation mechanism using the *when-any* pattern doesn't terminate in-progress activity function or sub-orchestration executions. Rather, it simply allows the orchestrator function to ignore the result and move on. If your function app uses the Consumption plan, you'll still be billed for any time and memory consumed by the abandoned activity function. By default, functions running in the Consumption plan have a timeout of five minutes. If this limit is exceeded, the Azure Functions host is recycled to stop all execution and prevent a runaway billing situation. The [function timeout is configurable](../functions-host-json.md#functiontimeout).
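In ordinary async code, the same *when-any* shape looks like this `asyncio` sketch: race the activity against a timer and cancel whichever task loses, mirroring the requirement that pending timers be cancelled (the function names are illustrative):

```python
import asyncio

async def with_timeout(activity, timeout_s):
    """Race an activity against a timer; cancel the loser so nothing stays pending."""
    activity_task = asyncio.ensure_future(activity())
    timer_task = asyncio.ensure_future(asyncio.sleep(timeout_s))
    done, pending = await asyncio.wait(
        {activity_task, timer_task}, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()  # analogous to cancelling the durable timer (or abandoning the activity)
    return activity_task in done  # True = success, False = timeout

async def main():
    fast = await with_timeout(lambda: asyncio.sleep(0), 1.0)     # activity wins
    slow = await with_timeout(lambda: asyncio.sleep(1.0), 0.01)  # timer wins
    return fast, slow
```

Note that, as with durable orchestrations, cancelling the losing task here only stops *waiting* on it; any external work it represents is simply abandoned.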
For a more in-depth example of how to implement timeouts in orchestrator functions, see the [Human Interaction & Timeouts - Phone Verification](durable-functions-phone-verification.md) article.
azure-functions Durable Functions Types Features Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-types-features-overview.md
In addition to triggering orchestrator or entity functions, the *durable client*
## Next steps
-To get started, create your first durable function in [C#](durable-functions-create-first-csharp.md) or [JavaScript](quickstart-js-vscode.md).
+To get started, create your first durable function in [C#](durable-functions-create-first-csharp.md), [JavaScript](quickstart-js-vscode.md), [Python](quickstart-python-vscode.md), [PowerShell](quickstart-powershell-vscode.md), or [Java](quickstart-java.md).
> [!div class="nextstepaction"] > [Read more about Durable Functions orchestrations](durable-functions-orchestrations.md)
azure-functions Durable Functions Unit Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-unit-testing.md
Last updated 11/03/2019
-# Durable Functions unit testing
+# Durable Functions unit testing (C#)
Unit testing is an important part of modern software development practices. Unit tests verify business logic behavior and protect from introducing unnoticed breaking changes in the future. Durable Functions can easily grow in complexity so introducing unit tests will help to avoid breaking changes. The following sections explain how to unit test the three function types - Orchestration client, orchestrator, and activity functions. > [!NOTE]
-> This article provides guidance for unit testing for Durable Functions apps targeting Durable Functions 2.x. For more information about the differences between versions, see the [Durable Functions versions](durable-functions-versions.md) article.
+> This article provides guidance for unit testing for Durable Functions apps written in C# for the .NET in-process worker and targeting Durable Functions 2.x. For more information about the differences between versions, see the [Durable Functions versions](durable-functions-versions.md) article.
## Prerequisites
azure-functions Durable Functions Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-versioning.md
Title: Versioning in Durable Functions - Azure
description: Learn how to implement versioning in the Durable Functions extension for Azure Functions. Previously updated : 05/12/2021 Last updated : 05/26/2022
There are several examples of breaking changes to be aware of. This article disc
### Changing activity or entity function signatures
-A signature change refers to a change in the name, input, or output of a function. If this kind of change is made to an activity or entity function, it could break any orchestrator function that depends on it. If you update the orchestrator function to accommodate this change, you could break existing in-flight instances.
+A signature change refers to a change in the name, input, or output of a function. If this kind of change is made to an activity or entity function, it could break any orchestrator function that depends on it. This is especially true for type-safe languages. If you update the orchestrator function to accommodate this change, you could break existing in-flight instances.
-As an example, suppose we have the following C# orchestrator function.
+As an example, suppose we have the following orchestrator function.
+
+# [C#](#tab/csharp)
```csharp [FunctionName("FooBar")]
public static Task Run([OrchestrationTrigger] IDurableOrchestrationContext conte
} ```
-This simplistic function takes the results of **Foo** and passes it to **Bar**. Let's assume we need to change the return value of **Foo** from `bool` to `string` to support a wider variety of result values. The result looks like this:
+# [Java](#tab/java)
+
+```java
+@FunctionName("FooBar")
+public String fooBarOrchestration(
+ @DurableOrchestrationTrigger(name = "runtimeState") String runtimeState) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ boolean result = ctx.callActivity("Foo", boolean.class).await();
+ ctx.callActivity("Bar", result).await();
+ });
+}
+```
+---
+This simplistic function takes the results of **Foo** and passes it to **Bar**. Let's assume we need to change the return value of **Foo** from a Boolean to a String to support a wider variety of result values. The result looks like this:
+
+# [C#](#tab/csharp)
```csharp [FunctionName("FooBar")]
public static Task Run([OrchestrationTrigger] IDurableOrchestrationContext conte
} ```
-This change works fine for all new instances of the orchestrator function but breaks any in-flight instances. For example, consider the case where an orchestration instance calls a function named `Foo`, gets back a boolean value, and then checkpoints. If the signature change is deployed at this point, the checkpointed instance will fail immediately when it resumes and replays the call to `Foo`. This failure happens because the result in the history table is `bool` but the new code tries to deserialize it into `string`, resulting in a runtime exception for type-safe languages.
+# [Java](#tab/java)
+
+```java
+@FunctionName("FooBar")
+public String fooBarOrchestration(
+ @DurableOrchestrationTrigger(name = "runtimeState") String runtimeState) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ String result = ctx.callActivity("Foo", String.class).await();
+ ctx.callActivity("Bar", result).await();
+ });
+}
+```
+---
+This change works fine for all new instances of the orchestrator function but breaks any in-flight instances. For example, consider the case where an orchestration instance calls a function named `Foo`, gets back a boolean value, and then checkpoints. If the signature change is deployed at this point, the checkpointed instance will fail immediately when it resumes and replays the call to `Foo`. This failure happens because the result in the history table is a Boolean value but the new code tries to deserialize it into a String value, resulting in a runtime exception for type-safe languages.
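The deserialization failure can be sketched in a few lines: the history holds the JSON that the *old* code produced, while the *new* code asks for a different type (the function name is illustrative):

```python
import json

def replay_activity_result(history_json, expected_type):
    """Rehydrate a checkpointed activity result; fail if the expected type changed."""
    value = json.loads(history_json)
    if not isinstance(value, expected_type):
        raise TypeError(
            f"history holds {type(value).__name__}, "
            f"but the orchestrator now expects {expected_type.__name__}")
    return value

checkpointed = json.dumps(True)             # old Foo returned a Boolean; it's in the history table
replay_activity_result(checkpointed, bool)  # old orchestrator code: replays fine
# replay_activity_result(checkpointed, str) would raise TypeError after the new deployment
```

The orchestrator itself is unchanged between the two replays; only the type it expects back from `Foo` changed, which is enough to poison existing histories.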
This example is just one of many different ways that a function signature change can break existing instances. In general, if an orchestrator needs to change the way it calls a function, then the change is likely to be problematic.
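The replay failure can be made concrete with a framework-free simulation. The class and helper names below (`ReplayMismatchDemo`, `replayFoo`) are hypothetical illustrations, not Durable Task Framework APIs: the checkpointed history holds a Boolean result for `Foo`, and the upgraded code tries to read it back as a String.

```java
import java.util.HashMap;
import java.util.Map;

public class ReplayMismatchDemo {
    /**
     * Simulates replaying the call to "Foo" from checkpointed history.
     * The history value was written by the old code (a Boolean), but the
     * new code expects a String, so the cast fails at replay time.
     */
    public static String replayFoo(Map<String, Object> history) {
        try {
            String result = (String) history.get("Foo");
            return "Replayed result: " + result;
        } catch (ClassCastException e) {
            // This is the moment a type-safe language surfaces the
            // versioning problem: the in-flight instance fails on resume.
            return "Replay failed: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        // Checkpointed history written by the old orchestrator version.
        Map<String, Object> history = new HashMap<>();
        history.put("Foo", Boolean.TRUE);

        System.out.println(replayFoo(history));
    }
}
```

The point of the sketch is that the stored history, not the new code, determines what type comes back during replay, so the failure only appears for instances that checkpointed under the old signature.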
The other class of versioning problems comes from changing the orchestrator function code in a way that changes the execution path for in-flight instances.
-Consider the following C# orchestrator function:
+Consider the following orchestrator function:
+
+# [C#](#tab/csharp)
```csharp
[FunctionName("FooBar")]
public static async Task Run([OrchestrationTrigger] IDurableOrchestrationContext context)
{
    bool result = await context.CallActivityAsync<bool>("Foo");
    await context.CallActivityAsync("Bar", result);
}
```
+# [Java](#tab/java)
+
+```java
+@FunctionName("FooBar")
+public String fooBarOrchestration(
+ @DurableOrchestrationTrigger(name = "runtimeState") String runtimeState) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ boolean result = ctx.callActivity("Foo", boolean.class).await();
+ ctx.callActivity("Bar", result).await();
+ });
+}
+```
+++

+Now let's assume you want to make a change to add a new function call in between the two existing function calls.
+# [C#](#tab/csharp)
```csharp
[FunctionName("FooBar")]
public static async Task Run([OrchestrationTrigger] IDurableOrchestrationContext context)
{
    bool result = await context.CallActivityAsync<bool>("Foo");
    if (result)
    {
        await context.CallActivityAsync("SendNotification");
    }

    await context.CallActivityAsync("Bar", result);
}
```
-This change adds a new function call to *SendNotification* between *Foo* and *Bar*. There are no signature changes. The problem arises when an existing instance resumes from the call to *Bar*. During replay, if the original call to *Foo* returned `true`, then the orchestrator replay will call into *SendNotification*, which is not in its execution history. The Durable Task Framework detects this inconsistency and throws a `NonDeterministicOrchestrationException` because it encountered a call to *SendNotification* when it expected to see a call to *Bar*. The same type of problem can occur when adding API calls to other durable operations, like creating durable timers, waiting for external events, calling sub-orchestrations, etc.
+# [Java](#tab/java)
+
+```java
+@FunctionName("FooBar")
+public String fooBarOrchestration(
+ @DurableOrchestrationTrigger(name = "runtimeState") String runtimeState) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ boolean result = ctx.callActivity("Foo", boolean.class).await();
+ if (result) {
+ ctx.callActivity("SendNotification").await();
+ }
+
+ ctx.callActivity("Bar", result).await();
+ });
+}
+```
+++
+This change adds a new function call to *SendNotification* between *Foo* and *Bar*. There are no signature changes. The problem arises when an existing instance resumes from the call to *Bar*. During replay, if the original call to *Foo* returned `true`, then the orchestrator replay will call into *SendNotification*, which is not in its execution history. The runtime detects this inconsistency and raises a *non-deterministic orchestration* error because it encountered a call to *SendNotification* when it expected to see a call to *Bar*. The same type of problem can occur when adding API calls to other durable operations, like creating durable timers, waiting for external events, calling sub-orchestrations, etc.
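To make the detection mechanism concrete, here is a heavily simplified, hypothetical sketch of how a replay engine can notice the divergence (the `ReplayChecker` class is an illustration, not the actual Durable Task Framework, which is far more involved): during replay, the activity calls the code schedules are compared in order against the calls recorded in the history.

```java
import java.util.List;

public class ReplayChecker {
    /**
     * Compares the activity calls recorded in the history against the
     * calls the (possibly updated) orchestrator code schedules during
     * replay. Throws when the code deviates from the recorded sequence.
     */
    public static void checkReplay(List<String> history, List<String> scheduled) {
        for (int i = 0; i < Math.min(history.size(), scheduled.size()); i++) {
            if (!history.get(i).equals(scheduled.get(i))) {
                throw new IllegalStateException(
                    "Non-deterministic orchestration: expected a call to '"
                    + history.get(i) + "' but found '" + scheduled.get(i) + "'");
            }
        }
    }

    public static void main(String[] args) {
        // History written by the old code: Foo, then Bar.
        List<String> history = List.of("Foo", "Bar");
        // The updated code now schedules SendNotification in between.
        List<String> scheduled = List.of("Foo", "SendNotification", "Bar");
        try {
            checkReplay(history, scheduled);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Under this model, the check passes only when the new code reproduces the recorded sequence exactly, which is why purely additive changes in the middle of an orchestration still break in-flight instances.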
## Mitigation strategies
Here are some of the strategies for dealing with versioning challenges:
The naive approach to versioning is to do nothing and let in-flight orchestration instances fail. Depending on the type of change, the following types of failures may occur.
-* Orchestrations may fail with a `NonDeterministicOrchestrationException` error.
+* Orchestrations may fail with a *non-deterministic orchestration* error.
* Orchestrations may get stuck indefinitely, reporting a `Running` status.
* If a function gets removed, any function that tries to call it may fail with an error.
* If a function gets removed after it was scheduled to run, then the app may experience low-level runtime failures in the Durable Task Framework engine, potentially resulting in severe performance degradation.
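A commonly documented mitigation is a side-by-side deployment: deploy the updated orchestrator under a new task hub name so new instances start on the new hub, while in-flight instances on the old hub can complete only if the old version of the app is also kept running. A sketch of the host.json change (the hub name `DurableFunctionsHubV2` is an example, not a required value):

```json
{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "hubName": "DurableFunctionsHubV2"
    }
  }
}
```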
azure-functions Durable Functions Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-versions.md
Title: Durable Functions versions overview - Azure Functions
description: Learn about Durable Functions versions. Previously updated : 12/23/2020 Last updated : 05/06/2022
Install the latest 2.x version of the Durable Functions bindings extension in yo
#### JavaScript, Python, and PowerShell
-Durable Functions 2.x is available in version 2.x of the [Azure Functions extension bundle](../functions-bindings-register.md#extension-bundles).
+Durable Functions 2.x is available starting in version 2.x of the [Azure Functions extension bundle](../functions-bindings-register.md#extension-bundles).
-Python support in Durable Functions requires Durable Functions 2.x.
+Python support in Durable Functions requires Durable Functions 2.x or greater.
-To update the extension bundle version in your project, open host.json and update the `extensionBundle` section to use version 2.x (`[2.*, 3.0.0)`).
+To update the extension bundle version in your project, open host.json and update the `extensionBundle` section to use version 3.x (`[3.*, 4.0.0)`).
```json
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
-    "version": "[2.*, 3.0.0)"
+    "version": "[3.*, 4.0.0)"
  }
}
```
To update the extension bundle version in your project, open host.json and updat
> [!NOTE]
> If Visual Studio Code is not displaying the correct templates after you change the extension bundle version, reload the window by running the *Developer: Reload Window* command (<kbd>Ctrl+R</kbd> on Windows and Linux, <kbd>Command+R</kbd> on macOS).
+#### Java (preview)
+
+Durable Functions 2.x is available starting in version 4.x of the [Azure Functions extension bundle](../functions-bindings-register.md#extension-bundles). You must use the Azure Functions 3.0 runtime or greater to execute Java functions.
+
+To update the extension bundle version in your project, open host.json and update the `extensionBundle` section to use version 4.x (`[4.*, 5.0.0)`). Because Java support is currently in preview, you must also use the `Microsoft.Azure.Functions.ExtensionBundle.Preview` bundle, which differs from the production-ready bundles.
+
+```json
+{
+ "version": "2.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle.Preview",
+ "version": "[4.*, 5.0.0)"
+ }
+}
+```
+ #### .NET

Update your .NET project to use the latest version of the [Durable Functions bindings extension](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.DurableTask).
In version 1.x, if a task hub name wasn't specified in host.json, it was default
#### Public interface changes (.NET only)
-In version 1.x, the various _context_ objects supported by Durable Functions have abstract base classes intended for use in unit testing. As part of Durable Functions 2.x, these abstract base classes are replaced with interfaces.
+In version 1.x, the various *context* objects supported by Durable Functions have abstract base classes intended for use in unit testing. As part of Durable Functions 2.x, these abstract base classes are replaced with interfaces.
The following table represents the main changes:
azure-functions Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-java.md
+
+ Title: Create your first durable function in Azure using Java (Preview)
+description: Create an Azure Durable Function in Java
++ Last updated : 06/14/2022+
+ms.devlang: java
+++
+# Create your first durable function in Java (Preview)
+
+_Durable Functions_ is an extension of [Azure Functions](../functions-overview.md) that lets you write stateful functions in a serverless environment. The extension manages state, checkpoints, and restarts for you.
+
+In this article, you learn how to create and test a "hello world" durable function in your Java project. This function will orchestrate and chain together calls to other functions.
+
+## Prerequisites
+
+To complete this tutorial, you need:
+
+- The [Java Developer Kit](/azure/developer/java/fundamentals/java-support-on-azure), version 11 or 8.
+
+- [Apache Maven](https://maven.apache.org), version 3.0 or above.
+
+- Latest version of the [Azure Functions Core Tools](../functions-run-local.md).
+
+- An Azure Storage account, which requires that you have an Azure subscription.
++
+## Add required dependencies and plugins
+
+Add the following to your `pom.xml`:
+
+```xml
+<properties>
+ <azure.functions.maven.plugin.version>1.18.0</azure.functions.maven.plugin.version>
+ <azure.functions.java.library.version>2.0.1</azure.functions.java.library.version>
+ <durabletask.azure.functions>1.0.0-beta.1</durabletask.azure.functions>
+ <functionAppName>your-unique-app-name</functionAppName>
+</properties>
+
+<dependencies>
+ <dependency>
+ <groupId>com.microsoft.azure.functions</groupId>
+ <artifactId>azure-functions-java-library</artifactId>
+ <version>${azure.functions.java.library.version}</version>
+ </dependency>
+ <dependency>
+ <groupId>com.microsoft</groupId>
+ <artifactId>durabletask-azure-functions</artifactId>
+ <version>${durabletask.azure.functions}</version>
+ </dependency>
+</dependencies>
+
+<build>
+ <plugins>
+ <plugin>
+ <groupId>org.apache.maven.plugins</groupId>
+ <artifactId>maven-compiler-plugin</artifactId>
+ <version>3.8.1</version>
+ </plugin>
+ <plugin>
+ <groupId>com.microsoft.azure</groupId>
+ <artifactId>azure-functions-maven-plugin</artifactId>
+ <version>${azure.functions.maven.plugin.version}</version>
+ <configuration>
+ <appName>${functionAppName}</appName>
+ <resourceGroup>java-functions-group</resourceGroup>
+ <appServicePlanName>java-functions-app-service-plan</appServicePlanName>
+ <region>westus</region>
+ <runtime>
+ <os>windows</os>
+ <javaVersion>11</javaVersion>
+ </runtime>
+ <appSettings>
+ <property>
+ <name>FUNCTIONS_EXTENSION_VERSION</name>
+ <value>~4</value>
+ </property>
+ </appSettings>
+ </configuration>
+ <executions>
+ <execution>
+ <id>package-functions</id>
+ <goals>
+ <goal>package</goal>
+ </goals>
+ </execution>
+ </executions>
+ </plugin>
+ <plugin>
+ <artifactId>maven-clean-plugin</artifactId>
+ <version>3.1.0</version>
+ </plugin>
+ </plugins>
+</build>
+```
+
+## Add required JSON files
+
+Add a `host.json` file to your project directory. It should look similar to the following:
+
+```json
+{
+ "version": "2.0",
+ "logging": {
+ "logLevel": {
+ "DurableTask.AzureStorage": "Warning",
+ "DurableTask.Core": "Warning"
+ }
+ },
+ "extensions": {
+ "durableTask": {
+ "hubName": "JavaTestHub"
+ }
+ },
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle.Preview",
+ "version": "[4.*, 5.0.0)"
+ }
+}
+```
+
+It's important to note that only the Azure Functions v4 _Preview_ bundle currently has the necessary support for Durable Functions for Java.
+
+Add a `local.settings.json` file to your project directory. You should have the connection string of your Azure Storage account configured for `AzureWebJobsStorage`:
+
+```json
+{
+ "IsEncrypted": false,
+ "Values": {
+ "AzureWebJobsStorage": "<your storage account connection string>",
+ "FUNCTIONS_WORKER_RUNTIME": "java"
+ }
+}
+```
+
+## Create your functions
+
+The most basic Durable Functions app contains three functions:
+
+- _Orchestrator function_ - describes a workflow that orchestrates other functions.
+- _Activity function_ - called by the orchestrator function, performs work, and optionally returns a value.
+- _Client function_ - a regular Azure Function that starts an orchestrator function. This example uses an HTTP triggered function.
+
+The sample code below shows a simple example of each:
+
+```java
+import java.util.Optional;
+
+import com.microsoft.azure.functions.*;
+import com.microsoft.azure.functions.annotation.*;
+
+import com.microsoft.durabletask.*;
+import com.microsoft.durabletask.azurefunctions.*;
+
+public class DurableFunctionsSample {
+ /**
+ * This HTTP-triggered function starts the orchestration.
+ */
+ @FunctionName("StartHelloCities")
+ public HttpResponseMessage startHelloCities(
+ @HttpTrigger(name = "req", methods = {HttpMethod.POST}) HttpRequestMessage<Optional<String>> req,
+ @DurableClientInput(name = "durableContext") DurableClientContext durableContext,
+ final ExecutionContext context) {
+
+ DurableTaskClient client = durableContext.getClient();
+ String instanceId = client.scheduleNewOrchestrationInstance("HelloCities");
+ context.getLogger().info("Created new Java orchestration with instance ID = " + instanceId);
+ return durableContext.createCheckStatusResponse(req, instanceId);
+ }
+
+ /**
+ * This is the orchestrator function, which can schedule activity functions, create durable timers,
+ * or wait for external events in a way that's completely fault-tolerant. The OrchestrationRunner.loadAndRun()
+ * static method is used to take the function input and execute the orchestrator logic.
+ */
+ @FunctionName("HelloCities")
+ public String helloCitiesOrchestrator(@DurableOrchestrationTrigger(name = "runtimeState") String runtimeState) {
+ return OrchestrationRunner.loadAndRun(runtimeState, ctx -> {
+ String result = "";
+ result += ctx.callActivity("SayHello", "Tokyo", String.class).await() + ", ";
+ result += ctx.callActivity("SayHello", "London", String.class).await() + ", ";
+ result += ctx.callActivity("SayHello", "Seattle", String.class).await();
+ return result;
+ });
+ }
+
+ /**
+ * This is the activity function that gets invoked by the orchestrator function.
+ */
+ @FunctionName("SayHello")
+ public String sayHello(@DurableActivityTrigger(name = "name") String name) {
+ return String.format("Hello %s!", name);
+ }
+}
+```
+
+## Test the function locally
+
+Azure Functions Core Tools lets you run an Azure Functions project on your local development computer.
+
+1. If you're using Visual Studio Code, open a new terminal window and run the following command to build the project:
+
+ ```bash
+ mvn clean package
+ ```
+
+ Then run the durable function:
+
+ ```bash
+ mvn azure-functions:run
+ ```
+
+2. In the Terminal panel, copy the URL endpoint of your HTTP-triggered function.
+
+ ![Azure local output](media/quickstart-java/mvn-functions-run.png)
+
+3. Using a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/), send an HTTP POST request to the URL endpoint. You should get a response similar to the following:
+
+ ```json
+ {
+ "id": "d1b33a60-333f-4d6e-9ade-17a7020562a9",
+ "purgeHistoryDeleteUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/d1b33a60-333f-4d6e-9ade-17a7020562a9?code=ACCupah_QfGKoFXydcOHH9ffcnYPqjkddSawzRjpp1PQAzFueJ2tDw==",
+ "sendEventPostUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/d1b33a60-333f-4d6e-9ade-17a7020562a9/raiseEvent/{eventName}?code=ACCupah_QfGKoFXydcOHH9ffcnYPqjkddSawzRjpp1PQAzFueJ2tDw==",
+ "statusQueryGetUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/d1b33a60-333f-4d6e-9ade-17a7020562a9?code=ACCupah_QfGKoFXydcOHH9ffcnYPqjkddSawzRjpp1PQAzFueJ2tDw==",
+ "terminatePostUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/d1b33a60-333f-4d6e-9ade-17a7020562a9/terminate?reason={text}&code=ACCupah_QfGKoFXydcOHH9ffcnYPqjkddSawzRjpp1PQAzFueJ2tDw=="
+ }
+ ```
+
+ The response is the initial result from the HTTP function letting you know the durable orchestration has started successfully. It is not yet the end result of the orchestration. The response includes a few useful URLs. For now, let's query the status of the orchestration.
+
+4. Copy the URL value for `statusQueryGetUri`, paste it into your browser's address bar, and execute the request. Alternatively, you can continue to use Postman or cURL to issue the GET request.
+
+ The request queries the orchestration instance for its status. Eventually, you'll get a response showing that the instance has completed, including the outputs or results of the durable function. It looks like:
+
+ ```json
+ {
+ "name": "HelloCities",
+ "instanceId": "d1b33a60-333f-4d6e-9ade-17a7020562a9",
+ "runtimeStatus": "Completed",
+ "input": null,
+ "customStatus": "",
+ "output": "Hello Tokyo!, Hello London!, Hello Seattle!",
+ "createdTime": "2022-06-15T05:00:02Z",
+ "lastUpdatedTime": "2022-06-15T05:00:06Z"
+ }
+ ```
azure-monitor Agent Linux Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux-troubleshoot.md
We've seen that a clean re-install of the Agent will fix most issues. In fact th
Additional configurations | `/etc/opt/microsoft/omsagent/<workspace id>/conf/omsagent.d/*.conf`

> [!NOTE]
- > Editing configuration files for performance counters and Syslog is overwritten if the collection is configured from the [data menu Log Analytics Advanced Settings](../agents/agent-data-sources.md#configuring-data-sources) in the Azure portal for your workspace. To disable configuration for all agents, disable collection from Log Analytics **Advanced Settings** or for a single agent run the following:
+ > Editing configuration files for performance counters and Syslog is overwritten if the collection is configured from the [Agents configuration](../agents/agent-data-sources.md#configuring-data-sources) in the Azure portal for your workspace. To disable configuration for all agents, disable collection from **Agents configuration** or for a single agent run the following:
> `sudo /opt/microsoft/omsconfig/Scripts/OMS_MetaConfigHelper.py --disable && sudo rm /etc/opt/omi/conf/omsconfig/configuration/Current.mof* /etc/opt/omi/conf/omsconfig/configuration/Pending.mof*`

## Installation error codes
azure-monitor Collect Custom Metrics Linux Telegraf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-linux-telegraf.md
Previously updated : 09/24/2018 Last updated : 06/16/2022 # Collect custom metrics for a Linux VM with the InfluxData Telegraf agent
Now the agent will collect metrics from each of the input plug-ins specified and
1. Navigate to the new **Monitor** tab. Then select **Metrics**.
- ![Monitor - Metrics (preview)](./media/collect-custom-metrics-linux-telegraf/metrics.png)
1. Select your VM in the resource selector.
azure-monitor Solution Targeting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/solution-targeting.md
Last updated 06/08/2022
When you add a monitoring solution to your subscription, it's automatically deployed by default to all Windows and Linux agents connected to your Log Analytics workspace. You may want to manage your costs and limit the amount of data collected for a solution by limiting it to a particular set of agents. This article describes how to use **Solution Targeting**, a feature that allows you to apply a scope to your solutions.

## How to target a solution

There are three steps to targeting a solution, as described in the following sections.
azure-monitor Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/solutions.md
description: Get information on prepackaged collections of logic, visualization,
Previously updated : 11/21/2021 Last updated : 06/16/2022
You can add monitoring solutions to Azure Monitor for any applications and servi
## Use monitoring solutions
-The **Overview** page in Azure Monitor displays a tile for each solution installed in a Log Analytics workspace. To open this page, go to **Azure Monitor** in the [Azure portal](https://portal.azure.com). On the **Insights** menu, select **More** to open **Insights Hub**, and then select **Log Analytics workspaces**.
+The **Overview** page displays a tile for each solution installed in a Log Analytics workspace. To open this page, go to **Log Analytics workspaces** in the [Azure portal](https://portal.azure.com) and select your workspace. In the **General** section of the menu, select **Workspace Summary**.
-[![Screenshot that shows selections for opening Insights Hub.](media/solutions/insights-hub.png)](media/solutions/insights-hub.png#lightbox)
-Use the dropdown boxes at the top of the screen to change the workspace or the time range that's used for the tiles. Select the tile for a solution to open a view that includes a more detailed analysis of its collected data.
+
+Use the dropdown boxes at the top of the screen to change the time range that's used for the tiles. Select the tile for a solution to open a view that includes a more detailed analysis of its collected data.
[![Screenshot that shows statistics for monitoring solutions in the Azure portal.](media/solutions/overview.png)](media/solutions/overview.png#lightbox)
Get-AzMonitorLogAnalyticsSolution -ResourceGroupName MyResourceGroup
Monitoring solutions from Microsoft and partners are available from [Azure Marketplace](https://azuremarketplace.microsoft.com). You can search through available solutions and install them by using the following procedure. When you install a solution, you must select a [Log Analytics workspace](../logs/manage-access.md) where the solution will be installed and where its data will be collected.
-1. From the [list of solutions for your subscription](#list-installed-monitoring-solutions), select **Add**.
+1. From the [list of solutions for your subscription](#list-installed-monitoring-solutions), select **Create**.
1. Browse or search for a solution. You can also use [this search link](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/category/management-tools?page=1&subcategories=management-solutions).
1. Find the monitoring solution that you want and read through its description.
1. Select **Create** to start the installation process.
1. When you're prompted, specify the Log Analytics workspace and provide any required configuration for the solution.
-![Screenshot that shows solutions on Azure Marketplace.](media/solutions/install-solution.png)
### Install a solution from the community

Members of the community can submit management solutions to Azure Quickstart Templates. You can install these solutions directly or download the templates for later installation.
azure-monitor Sql Insights Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/sql-insights-alerts.md
- Title: Create alerts with SQL Insights (preview)
-description: Create alerts with SQL Insights (preview) in Azure Monitor
--- Previously updated : 03/12/2021--
-# Create alerts with SQL Insights (preview)
-SQL Insights (preview) includes a set of alert rule templates you can use to create [alert rules in Azure Monitor](../alert/../alerts/alerts-overview.md) for common SQL issues. The alert rules in SQL Insights (preview) are log alert rules based on performance data stored in the *InsightsMetrics* table in Azure Monitor Logs.
-
-> [!NOTE]
-> To create an alert for SQL Insights (preview) using a resource manager template, see [Resource Manager template samples for SQL Insights (preview)](resource-manager-sql-insights.md#create-an-alert-rule-for-sql-insights).
--
-> [!NOTE]
-> If you have requests for more SQL Insights (preview) alert rule templates, please send feedback using the link at the bottom of this page or using the SQL Insights (preview) feedback link in the Azure portal.
-
-## Enable alert rules
-Use the following steps to enable the alerts in Azure Monitor from the Azure portal. The alert rules that are created will be scoped to all of the SQL resources monitored under the selected monitoring profile. When an alert rule is triggered, it will trigger on the specific SQL instance or database.
-
-> [!NOTE]
-> You can also create custom [log alert rules](../alerts/alerts-log.md) by running queries on the data sets in the *InsightsMetrics* table and then saving those queries as an alert rule.
-
-Select **SQL (preview)** from the **Insights** section of the Azure Monitor menu in the Azure portal. Click **Alerts**
--
-The **Alerts** pane opens on the right side of the page. By default, it will display fired alerts for SQL resources in the selected monitoring profile based on the alert rules you've already created. Select **Alert templates**, which will display the list of available templates you can use to create an alert rule.
--
-On the **Create Alert rule** page, review the default settings for the rule and edit them as needed. You can also select an [action group](../alerts/action-groups.md) to create notifications and actions when the alert rule is triggered. Click **Enable alert rule** to create the alert rule once you've verified all of its properties.
---
-To deploy the alert rule immediately, click **Deploy alert rule**. Click **View Template** if you want to view the rule template before actually deploying it.
--
-If you choose to view the templates, select **Deploy** from the template page to create the alert rule.
---
-## Next steps
-
-Learn more about [alerts in Azure Monitor](../alerts/alerts-overview.md).
-
azure-monitor Sql Insights Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/sql-insights-enable.md
- Title: Enable SQL Insights (preview)
-description: Enable SQL Insights (preview) in Azure Monitor
--- Previously updated : 1/18/2022--
-# Enable SQL Insights (preview)
-This article describes how to enable [SQL Insights (preview)](sql-insights-overview.md) to monitor your SQL deployments. Monitoring is performed from an Azure virtual machine that makes a connection to your SQL deployments and uses Dynamic Management Views (DMVs) to gather monitoring data. You can control what datasets are collected and the frequency of collection using a monitoring profile.
-
-> [!NOTE]
-> To enable SQL Insights (preview) by creating the monitoring profile and virtual machine using a resource manager template, see [Resource Manager template samples for SQL Insights (preview)](resource-manager-sql-insights.md).
-
-To learn how to enable SQL Insights (preview), you can also refer to this Data Exposed episode.
-> [!VIDEO https://docs.microsoft.com/Shows/Data-Exposed/How-to-Set-up-Azure-Monitor-for-SQL-Insights/player?format=ny]
-
-## Create Log Analytics workspace
-SQL Insights stores its data in one or more [Log Analytics workspaces](../logs/data-platform-logs.md#log-analytics-workspaces). Before you can enable SQL Insights, you need to either [create a workspace](../logs/quick-create-workspace.md) or select an existing one. A single workspace can be used with multiple monitoring profiles, but the workspace and profiles must be located in the same Azure region. To enable and access the features in SQL Insights, you must have the [Log Analytics contributor role](../logs/manage-access.md) in the workspace.
-
-## Create monitoring user
-You need a user (login) on the SQL deployments that you want to monitor. Follow the procedures below for different types of SQL deployments.
-
-The instructions below cover the process per type of SQL that you can monitor. To accomplish this with a script on several SQL resources at once, please refer to the following [README file](https://github.com/microsoft/Application-Insights-Workbooks/blob/master/Workbooks/Workloads/SQL/SQL%20Insights%20Onboarding%20Scripts/Permissions_LoginUser_Account_Creation-README.txt) and [example script](https://github.com/microsoft/Application-Insights-Workbooks/blob/master/Workbooks/Workloads/SQL/SQL%20Insights%20Onboarding%20Scripts/Permissions_LoginUser_Account_Creation.ps1).
-
-### Azure SQL Database
-
-> [!NOTE]
-> SQL Insights (preview) does not support the following Azure SQL Database scenarios:
-> - **Elastic pools**: Metrics cannot be gathered for elastic pools. Metrics cannot be gathered for databases within elastic pools.
-> - **Low service tiers**: Metrics cannot be gathered for databases on Basic, S0, S1, and S2 [service tiers](/azure/azure-sql/database/resource-limits-dtu-single-databases)
->
-> SQL Insights (preview) has limited support for the following Azure SQL Database scenarios:
-> - **Serverless tier**: Metrics can be gathered for databases using the [serverless compute tier](/azure/azure-sql/database/serverless-tier-overview). However, the process of gathering metrics will reset the auto-pause delay timer, preventing the database from entering an auto-paused state.
-
-Connect to an Azure SQL database with [SQL Server Management Studio](/azure/azure-sql/database/connect-query-ssms), [Query Editor (preview)](/azure/azure-sql/database/connect-query-portal) in the Azure portal, or any other SQL client tool.
-
-Run the following script to create a user with the required permissions. Replace *user* with a username and *mystrongpassword* with a strong password.
-
-```sql
-CREATE USER [user] WITH PASSWORD = N'mystrongpassword';
-GO
-GRANT VIEW DATABASE STATE TO [user];
-GO
-```
--
-Verify the user was created.
--
-```sql
-select name as username,
- create_date,
- modify_date,
- type_desc as type,
- authentication_type_desc as authentication_type
-from sys.database_principals
-where type not in ('A', 'G', 'R', 'X')
- and sid is not null
-order by username
-```
-
-### Azure SQL Managed Instance
-Connect to your Azure SQL Managed Instance using [SQL Server Management Studio](/azure/azure-sql/database/connect-query-ssms) or a similar tool, and execute the following script to create the monitoring user with the permissions needed. Replace *user* with a username and *mystrongpassword* with a strong password.
-
-
-```sql
-USE master;
-GO
-CREATE LOGIN [user] WITH PASSWORD = N'mystrongpassword';
-GO
-GRANT VIEW SERVER STATE TO [user];
-GO
-GRANT VIEW ANY DEFINITION TO [user];
-GO
-```
-
-### SQL Server
-Connect to SQL Server on your Azure virtual machine and use [SQL Server Management Studio](/azure/azure-sql/database/connect-query-ssms) or a similar tool to run the following script to create the monitoring user with the permissions needed. Replace *user* with a username and *mystrongpassword* with a strong password.
-
-```sql
-USE master;
-GO
-CREATE LOGIN [user] WITH PASSWORD = N'mystrongpassword';
-GO
-GRANT VIEW SERVER STATE TO [user];
-GO
-GRANT VIEW ANY DEFINITION TO [user];
-GO
-```
-
-Verify the user was created.
-
-```sql
-select name as username,
- create_date,
- modify_date,
- type_desc as type
-from sys.server_principals
-where type not in ('A', 'G', 'R', 'X')
- and sid is not null
-order by username
-```
-
-## Create Azure Virtual Machine
-You will need to create one or more Azure virtual machines that will be used to collect data to monitor SQL.
-
-> [!NOTE]
-> The [monitoring profiles](#create-sql-monitoring-profile) specifies what data you will collect from the different types of SQL you want to monitor. Each monitoring virtual machine can have only one monitoring profile associated with it. If you have a need for multiple monitoring profiles, then you need to create a virtual machine for each.
-
-### Azure virtual machine requirements
-The Azure virtual machine has the following requirements:
-- Operating system: Ubuntu 18.04 using Azure Marketplace [image](https://azuremarketplace.microsoft.com/marketplace/apps/canonical.0001-com-ubuntu-pro-bionic). Custom images are not supported.
-- Recommended minimum Azure virtual machine sizes: Standard_B2s (2 CPUs, 4 GiB memory)
-- Deployed in any Azure region [supported](../agents/azure-monitor-agent-overview.md#supported-regions) by the Azure Monitor agent, and meeting all Azure Monitor agent [prerequisites](../agents/azure-monitor-agent-manage.md#prerequisites).
-> [!NOTE]
-> The Standard_B2s (2 CPUs, 4 GiB memory) virtual machine size will support up to 100 connection strings. You shouldn't allocate more than 100 connections to a single virtual machine.
-
-Depending upon the network settings of your SQL resources, the virtual machines may need to be placed in the same virtual network as your SQL resources so they can make network connections to collect monitoring data.
-
-## Configure network settings
-Each type of SQL offers methods for your monitoring virtual machine to securely access SQL. The sections below cover the options based upon the SQL deployment type.
-
-### Azure SQL Database
-
-SQL Insights supports accessing your Azure SQL Database via its public endpoint as well as from its virtual network.
-
-For access via the public endpoint, you would add a rule under the **Firewall settings** page and the [IP firewall settings](/azure/azure-sql/database/network-access-controls-overview#ip-firewall-rules) section. For specifying access from a virtual network, you can set [virtual network firewall rules](/azure/azure-sql/database/network-access-controls-overview#virtual-network-firewall-rules) and set the [service tags required by the Azure Monitor agent](../agents/azure-monitor-agent-overview.md#networking). [This article](/azure/azure-sql/database/network-access-controls-overview#ip-vs-virtual-network-firewall-rules) describes the differences between these two types of firewall rules.
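For example, a public-endpoint IP firewall rule for the monitoring VM's outbound address might be added with the Azure CLI (the resource names and IP address below are hypothetical):

```shell
# Hypothetical server name, resource group, and monitoring VM public IP.
az sql server firewall-rule create \
  --resource-group sql-rg \
  --server mysqlserver \
  --name allow-sql-insights-monitoring-vm \
  --start-ip-address 203.0.113.10 \
  --end-ip-address 203.0.113.10
```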
---
-### Azure SQL Managed Instance
-
-If your monitoring virtual machine will be in the same VNet as your SQL MI resources, see [Connect inside the same VNet](/azure/azure-sql/managed-instance/connect-application-instance#connect-inside-the-same-vnet). If your monitoring virtual machine will be in a different VNet than your SQL MI resources, see [Connect inside a different VNet](/azure/azure-sql/managed-instance/connect-application-instance#connect-inside-a-different-vnet).
-
-### SQL Server
-If your monitoring virtual machine is in the same VNet as your SQL virtual machine resources, see [Connect to SQL Server within a virtual network](/azure/azure-sql/virtual-machines/windows/ways-to-connect-to-sql#connect-to-sql-server-within-a-virtual-network). If your monitoring virtual machine will be in a different VNet than your SQL virtual machine resources, see [Connect to SQL Server over the internet](/azure/azure-sql/virtual-machines/windows/ways-to-connect-to-sql#connect-to-sql-server-over-the-internet).
-
-## Store monitoring password in Key Vault
-As a security best practice, we strongly recommend that you store your SQL user (login) passwords in a Key Vault, rather than entering them directly into your monitoring profile connection strings.
-
-When setting up your profile for SQL monitoring, you need one of the following permissions on the Key Vault resource you intend to use:
-
-- Microsoft.Authorization/roleAssignments/write
-- Microsoft.Authorization/roleAssignments/delete
-If you have these permissions, a new Key Vault access policy will be automatically created as part of creating your SQL Monitoring profile that uses the Key Vault you specified.
-
-> [!IMPORTANT]
-> You need to ensure that network and security configuration allows the monitoring VM to access Key Vault. For more information, see [Access Azure Key Vault behind a firewall](../../key-vault/general/access-behind-firewall.md) and [Configure Azure Key Vault networking settings](../../key-vault/general/how-to-azure-key-vault-network-security.md).
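As an illustration, the monitoring password could be stored as a Key Vault secret with the Azure CLI (the vault and secret names below are hypothetical):

```shell
# Hypothetical vault name; the secret name is what you later reference
# from the monitoring profile configuration.
az keyvault secret set \
  --vault-name sql-monitoring-kv \
  --name sqlPassword \
  --value '<monitoring-login-password>'
```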
-
-## Create SQL monitoring profile
-Open SQL Insights (preview) by selecting **SQL (preview)** from the **Insights** section of the **Azure Monitor** menu in the Azure portal. Click **Create new profile**.
--
-The profile will store the information that you want to collect from your SQL systems. It has specific settings for:
-
-- Azure SQL Database
-- Azure SQL Managed Instance
-- SQL Server running on virtual machines
-For example, you might create one profile named *SQL Production* and another named *SQL Staging* with different settings for frequency of data collection, what data to collect, and which workspace to send the data to.
-
-The profile is stored as a [data collection rule](../essentials/data-collection-rule-overview.md) resource in the subscription and resource group you select. Each profile needs the following:
-
-- Name. Cannot be edited once created.
-- Location. This is an Azure region.
-- Log Analytics workspace to store the monitoring data.
-- Collection settings for the frequency and type of SQL monitoring data to collect.
-> [!NOTE]
-> The location of the profile should be in the same location as the Log Analytics workspace you plan to send the monitoring data to.
--
-Click **Create monitoring profile** once you've entered the details for your monitoring profile. It can take up to a minute for the profile to be deployed. If you don't see the new profile listed in the **Monitoring profile** combo box, click the refresh button; it should appear once the deployment is completed. Once you've selected the new profile, select the **Manage profile** tab to add a monitoring machine that will be associated with the profile.
-
-### Add monitoring machine
-Select **Add monitoring machine** to open a context panel where you choose the virtual machine to set up to monitor your SQL instances and provide the connection strings.
-
-Select the subscription and name of your monitoring virtual machine. If you're using Key Vault to store your password for the monitoring user, select the Key Vault resources with these secrets and enter the URI and secret name for the password to be used in the connection strings. See the next section for details on identifying the connection string for different SQL deployments.
--
-### Add connection strings
-The connection string specifies the login name that SQL Insights (preview) should use when logging into SQL to collect monitoring data. If you're using a Key Vault to store the password for your monitoring user, provide the Key Vault URI and name of the secret that contains the password.
-
-The connection string will vary for each type of SQL resource:
-
-#### Azure SQL Database
-TCP connections from the monitoring machine to the IP address and port used by the database must be allowed by any firewalls or [network security groups](../../virtual-network/network-security-groups-overview.md) (NSGs) that may exist on the network path. For details on IP addresses and ports, see [Azure SQL Database connectivity architecture](/azure/azure-sql/database/connectivity-architecture).
-
-Enter the connection string in the form:
-
-```
-"sqlAzureConnections": [
- "Server=mysqlserver.database.windows.net;Port=1433;Database=mydatabase;User Id=$username;Password=$password;"
-]
-```
-
-Get the details from the **Connection strings** menu item for the database.
--
-To monitor a readable secondary, append `;ApplicationIntent=ReadOnly` to the connection string. SQL Insights supports monitoring a single secondary. The collected data will be tagged to reflect primary or secondary.
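SQL Insights performs the token substitution itself; purely as an illustration of the resulting string, the read-only intent can be appended in shell like this (`$username` and `$password` are SQL Insights replacement tokens, not shell variables):

```shell
# Primary connection string as entered in the monitoring profile.
primary='Server=mysqlserver.database.windows.net;Port=1433;Database=mydatabase;User Id=$username;Password=$password;'

# Strip the trailing semicolon, then append ApplicationIntent=ReadOnly
# to direct monitoring at the readable secondary.
secondary="${primary%;};ApplicationIntent=ReadOnly;"
echo "$secondary"
```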
-
-#### Azure SQL Managed Instance
-TCP connections from the monitoring machine to the IP address and port used by the managed instance must be allowed by any firewalls or [network security groups](../../virtual-network/network-security-groups-overview.md) (NSGs) that may exist on the network path. For details on IP addresses and ports, see [Azure SQL Managed Instance connection types](/azure/azure-sql/managed-instance/connection-types-overview).
-
-Enter the connection string in the form:
-
-```
-"sqlManagedInstanceConnections": [
- "Server=mysqlserver.<dns_zone>.database.windows.net;Port=1433;User Id=$username;Password=$password;"
-]
-```
-Get the details from the **Connection strings** menu item for the managed instance. If you're using the managed instance [public endpoint](/azure/azure-sql/managed-instance/public-endpoint-configure), replace port 1433 with 3342.
--
-To monitor a readable secondary, append `;ApplicationIntent=ReadOnly` to the connection string. SQL Insights supports monitoring of a single secondary. Collected data will be tagged to reflect Primary or Secondary.
-
-#### SQL Server
-The TCP/IP protocol must be enabled for the SQL Server instance you want to monitor. TCP connections from the monitoring machine to the IP address and port used by the SQL Server instance must be allowed by any firewalls or [network security groups](../../virtual-network/network-security-groups-overview.md) (NSGs) that may exist on the network path.
-
-If you want to monitor SQL Server configured for high availability (using either availability groups or failover cluster instances), we recommend monitoring each SQL Server instance in the cluster individually rather than connecting via an availability group listener or a failover cluster name. This ensures that monitoring data is collected regardless of the current instance role (primary or secondary).
-
-Enter the connection string in the form:
-
-```
-"sqlVmConnections": [
- "Server=SQLServerInstanceIPAddress;Port=1433;User Id=$username;Password=$password;"
-]
-```
-
-Use the IP address that the SQL Server instance listens on.
-
-If your SQL Server instance is configured to listen on a non-default port, replace 1433 with that port number in the connection string. If you're using Azure SQL virtual machine, you can see which port to use on the **Security** page for the resource.
--
-For any SQL Server instance, you can determine all IP addresses and ports it is listening on by connecting to the instance and executing the following T-SQL query, as long as there is at least one TCP connection to the instance:
-
-```sql
-SELECT DISTINCT local_net_address, local_tcp_port
-FROM sys.dm_exec_connections
-WHERE net_transport = 'TCP'
- AND
- protocol_type = 'TSQL';
-```
-
-## Monitoring profile created
-
-Select **Add monitoring virtual machine** to configure the virtual machine to collect data from your SQL resources, and then return to the **Overview** tab. In a few minutes, the Status column should change to read **Collecting**, and you should see data for the SQL resources you have chosen to monitor.
-
-If you do not see data, see [Troubleshooting SQL Insights (preview)](sql-insights-troubleshoot.md) to identify the issue.
--
-> [!NOTE]
-> If you need to update your monitoring profile or the connection strings on your monitoring VMs, you may do so via the SQL Insights (preview) **Manage profile** tab. Once your updates have been saved, the changes will be applied in approximately 5 minutes.
-
-## Next steps
-
-- See [Troubleshooting SQL Insights (preview)](sql-insights-troubleshoot.md) if SQL Insights isn't working properly after being enabled.
azure-monitor Sql Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/sql-insights-overview.md
- Title: Monitor your SQL deployments with SQL Insights (preview)
-description: Overview of SQL Insights (preview) in Azure Monitor
-Previously updated: 04/14/2022
-# Monitor your SQL deployments with SQL Insights (preview)
-
-SQL Insights (preview) is a comprehensive solution for monitoring any product in the [Azure SQL family](/azure/azure-sql/index). SQL Insights uses [dynamic management views](/azure/azure-sql/database/monitoring-with-dmvs) to expose the data that you need to monitor health, diagnose problems, and tune performance.
-
-SQL Insights performs all monitoring remotely. Monitoring agents on dedicated virtual machines connect to your SQL resources and remotely gather data. The gathered data is stored in [Azure Monitor Logs](../logs/data-platform-logs.md) to enable easy aggregation, filtering, and trend analysis. You can view the collected data from the SQL Insights [workbook template](../visualize/workbooks-overview.md), or you can delve directly into the data by using [log queries](../logs/get-started-queries.md).
-The following diagram shows how information flows from the database engine and Azure resource logs, and how it can be surfaced. For a more detailed diagram of Azure SQL logging, see [Monitoring and diagnostic telemetry](/azure/azure-sql/database/monitor-tune-overview#monitoring-and-diagnostic-telemetry).
--
-## Pricing
-There is no direct cost for SQL Insights (preview). All costs are incurred by the virtual machines that gather the data, the Log Analytics workspaces that store the data, and any alert rules configured on the data.
-
-### Virtual machines
-
-For virtual machines, you're charged based on the pricing published on the [virtual machines pricing page](https://azure.microsoft.com/pricing/details/virtual-machines/linux/). The number of virtual machines that you need will vary based on the number of connection strings you want to monitor. We recommend allocating one virtual machine of size Standard_B2s for every 100 connection strings. See [Azure virtual machine requirements](sql-insights-enable.md#azure-virtual-machine-requirements) for more details.
-
-### Log Analytics workspaces
-
-For the Log Analytics workspaces, you're charged based on the pricing published on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/). The Log Analytics workspaces that SQL Insights uses will incur costs for data ingestion, data retention, and (optionally) data export.
-
-Exact charges will vary based on the amount of data ingested, retained, and exported. The amount of this data will vary based on your database activity and the collection settings defined in your [monitoring profiles](sql-insights-enable.md#create-sql-monitoring-profile).
-
-### Alert rules
-
-For alert rules in Azure Monitor, you're charged based on the pricing published on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/). If you choose to [create alerts with SQL Insights (preview)](sql-insights-alerts.md), you're charged for any alert rules created and any notifications sent.
-
-## Supported versions
-SQL Insights (preview) supports the following versions of SQL Server:
-- SQL Server 2012 and newer
-SQL Insights (preview) supports SQL Server running in the following environments:
-- Azure SQL Database
-- Azure SQL Managed Instance
-- SQL Server on Azure Virtual Machines (SQL Server running on virtual machines registered with the [SQL virtual machine](/azure/azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-single-vm) provider)
-- Azure VMs (SQL Server running on virtual machines not registered with the [SQL virtual machine](/azure/azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-single-vm) provider)
-SQL Insights (preview) has no support or has limited support for the following:
-- **Non-Azure instances**: SQL Server running on virtual machines outside Azure is not supported.
-- **Azure SQL Database elastic pools**: Metrics can't be gathered for elastic pools or for databases within elastic pools.
-- **Azure SQL Database low service tiers**: Metrics can't be gathered for databases on Basic, S0, S1, and S2 [service tiers](/azure/azure-sql/database/resource-limits-dtu-single-databases).
-- **Azure SQL Database serverless tier**: Metrics can be gathered for databases through the serverless compute tier. However, the process of gathering metrics will reset the auto-pause delay timer, preventing the database from entering an auto-paused state.
-- **Secondary replicas**: Metrics can be gathered for only a single secondary replica per database. If a database has more than one secondary replica, only one can be monitored.
-- **Authentication with Azure Active Directory**: The only supported method of [authentication](/azure/azure-sql/database/logins-create-manage#authentication-and-authorization) for monitoring is SQL authentication. For SQL Server on Azure Virtual Machines, authentication through Active Directory on a custom domain controller is not supported.
-## Regional availability
-
-SQL Insights (preview) is available in all Azure regions where Azure Monitor is [available](https://azure.microsoft.com/global-infrastructure/services/?products=monitor), with the exception of Azure government and national clouds.
-
-## Open SQL Insights
-
-To open SQL Insights:
-
-1. In the Azure portal, go to the **Azure Monitor** menu.
-1. In the **Insights** section, select **SQL (preview)**.
-1. Select a tile to load the experience for the SQL resource that you're monitoring.
--
-For more instructions, see [Enable SQL Insights (preview)](sql-insights-enable.md) and [Troubleshoot SQL Insights (preview)](sql-insights-troubleshoot.md).
-
-## Collected data
-SQL Insights performs all monitoring remotely. No agents are installed on the virtual machines running SQL Server.
-
-SQL Insights uses dedicated monitoring virtual machines to remotely collect data from your SQL resources. Each monitoring virtual machine has the [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) and the Workload Insights (WLI) extension installed.
-
-The WLI extension includes the open-source [Telegraf agent](https://www.influxdata.com/time-series-platform/telegraf/). SQL Insights uses [data collection rules](../essentials/data-collection-rule-overview.md) to specify the data collection settings for Telegraf's [SQL Server plug-in](https://www.influxdata.com/integration/microsoft-sql-server/).
-
-Different sets of data are available for Azure SQL Database, Azure SQL Managed Instance, and SQL Server. The following tables describe the available data. You can customize which datasets to collect and the frequency of collection when you [create a monitoring profile](sql-insights-enable.md#create-sql-monitoring-profile).
-
-The tables have the following columns:
-- **Friendly name**: Name of the query as shown in the Azure portal when you're creating a monitoring profile.
-- **Configuration name**: Name of the query as shown in the Azure portal when you're editing a monitoring profile.
-- **Namespace**: Name of the query as found in a Log Analytics workspace. This identifier appears in the **InsightsMetrics** table on the `Namespace` property in the `Tags` column.
-- **DMVs**: Dynamic management views that are used to produce the dataset.
-- **Enabled by default**: Whether the data is collected by default.
-- **Default collection frequency**: How often the data is collected by default.
-### Data for Azure SQL Database
-
-| Friendly name | Configuration name | Namespace | DMVs | Enabled by default | Default collection frequency |
-|:|:|:|:|:|:|
-| DB wait stats | AzureSQLDBWaitStats | sqlserver_azuredb_waitstats | sys.dm_db_wait_stats | No | Not applicable |
-| DBO wait stats | AzureSQLDBOsWaitstats | sqlserver_waitstats |sys.dm_os_wait_stats | Yes | 60 seconds |
-| Memory clerks | AzureSQLDBMemoryClerks | sqlserver_memory_clerks | sys.dm_os_memory_clerks | Yes | 60 seconds |
-| Database I/O | AzureSQLDBDatabaseIO | sqlserver_database_io | sys.dm_io_virtual_file_stats<br>sys.database_files<br>tempdb.sys.database_files | Yes | 60 seconds |
-| Server properties | AzureSQLDBServerProperties | sqlserver_server_properties | sys.dm_os_job_object<br>sys.database_files<br>sys.[databases]<br>sys.[database_service_objectives] | Yes | 60 seconds |
-| Performance counters | AzureSQLDBPerformanceCounters | sqlserver_performance | sys.dm_os_performance_counters<br>sys.databases | Yes | 60 seconds |
-| Resource stats | AzureSQLDBResourceStats | sqlserver_azure_db_resource_stats | sys.dm_db_resource_stats | Yes | 60 seconds |
-| Resource governance | AzureSQLDBResourceGovernance | sqlserver_db_resource_governance | sys.dm_user_db_resource_governance | Yes | 60 seconds |
-| Requests | AzureSQLDBRequests | sqlserver_requests | sys.dm_exec_sessions<br>sys.dm_exec_requests<br>sys.dm_exec_sql_text | No | Not applicable |
-| Schedulers| AzureSQLDBSchedulers | sqlserver_schedulers | sys.dm_os_schedulers | No | Not applicable |
-
-### Data for Azure SQL Managed Instance
-
-| Friendly name | Configuration name | Namespace | DMVs | Enabled by default | Default collection frequency |
-|:|:|:|:|:|:|
-| Wait stats | AzureSQLMIOsWaitstats | sqlserver_waitstats | sys.dm_os_wait_stats | Yes | 60 seconds |
-| Memory clerks | AzureSQLMIMemoryClerks | sqlserver_memory_clerks | sys.dm_os_memory_clerks | Yes | 60 seconds |
-| Database I/O | AzureSQLMIDatabaseIO | sqlserver_database_io | sys.dm_io_virtual_file_stats<br>sys.master_files | Yes | 60 seconds |
-| Server properties | AzureSQLMIServerProperties | sqlserver_server_properties | sys.server_resource_stats | Yes | 60 seconds |
-| Performance counters | AzureSQLMIPerformanceCounters | sqlserver_performance | sys.dm_os_performance_counters<br>sys.databases| Yes | 60 seconds |
-| Resource stats | AzureSQLMIResourceStats | sqlserver_azure_db_resource_stats | sys.server_resource_stats | Yes | 60 seconds |
-| Resource governance | AzureSQLMIResourceGovernance | sqlserver_instance_resource_governance | sys.dm_instance_resource_governance | Yes | 60 seconds |
-| Requests | AzureSQLMIRequests | sqlserver_requests | sys.dm_exec_sessions<br>sys.dm_exec_requests<br>sys.dm_exec_sql_text | No | Not applicable |
-| Schedulers | AzureSQLMISchedulers | sqlserver_schedulers | sys.dm_os_schedulers | No | Not applicable |
-
-### Data for SQL Server
-
-| Friendly name | Configuration name | Namespace | DMVs | Enabled by default | Default collection frequency |
-|:|:|:|:|:|:|
-| Wait stats | SQLServerWaitStatsCategorized | sqlserver_waitstats | sys.dm_os_wait_stats | Yes | 60 seconds |
-| Memory clerks | SQLServerMemoryClerks | sqlserver_memory_clerks | sys.dm_os_memory_clerks | Yes | 60 seconds |
-| Database I/O | SQLServerDatabaseIO | sqlserver_database_io | sys.dm_io_virtual_file_stats<br>sys.master_files | Yes | 60 seconds |
-| Server properties | SQLServerProperties | sqlserver_server_properties | sys.dm_os_sys_info | Yes | 60 seconds |
-| Performance counters | SQLServerPerformanceCounters | sqlserver_performance | sys.dm_os_performance_counters | Yes | 60 seconds |
-| Volume space | SQLServerVolumeSpace | sqlserver_volume_space | sys.master_files | Yes | 60 seconds |
-| SQL Server CPU | SQLServerCpu | sqlserver_cpu | sys.dm_os_ring_buffers | Yes | 60 seconds |
-| Schedulers | SQLServerSchedulers | sqlserver_schedulers | sys.dm_os_schedulers | No | Not applicable |
-| Requests | SQLServerRequests | sqlserver_requests | sys.dm_exec_sessions<br>sys.dm_exec_requests<br>sys.dm_exec_sql_text | No | Not applicable |
-| Availability replica states | SQLServerAvailabilityReplicaStates | sqlserver_hadr_replica_states | sys.dm_hadr_availability_replica_states<br>sys.availability_replicas<br>sys.availability_groups<br>sys.dm_hadr_availability_group_states | No | 60 seconds |
-| Availability database replicas | SQLServerDatabaseReplicaStates | sqlserver_hadr_dbreplica_states | sys.dm_hadr_database_replica_states<br>sys.availability_replicas | No | 60 seconds |
-
-## Next steps
-
-- For frequently asked questions about SQL Insights (preview), see [Frequently asked questions](../faq.yml).
-- [Monitoring and performance tuning in Azure SQL Database and Azure SQL Managed Instance](/azure/azure-sql/database/monitor-tune-overview)
azure-monitor Sql Insights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/sql-insights-troubleshoot.md
- Title: Troubleshoot SQL Insights (preview)
-description: Learn how to troubleshoot SQL Insights (preview) in Azure Monitor.
-Previously updated: 4/19/2022
-# Troubleshoot SQL Insights (preview)
-To troubleshoot data collection issues in SQL Insights (preview), check the status of the monitoring machine on the **Manage profile** tab. The statuses are:
-
-- **Collecting**
-- **Not collecting**
-- **Collecting with errors**
-
-Select the status to see logs and more details that might help you resolve the problem.
--
-## Status: Not collecting
-The monitoring machine has a status of **Not collecting** if there's no data in *InsightsMetrics* for SQL in the last 10 minutes.
-
-> [!NOTE]
-> Make sure that you're trying to collect data from a [supported version of SQL](sql-insights-overview.md#supported-versions). For example, trying to collect data with a valid profile and connection string but from an unsupported version of Azure SQL Database will result in a **Not collecting** status.
-
-SQL Insights (preview) uses the following query to retrieve this information:
-
-```kusto
-InsightsMetrics
- | extend Tags = todynamic(Tags)
- | extend SqlInstance = tostring(Tags.sql_instance)
- | where TimeGenerated > ago(10m) and isnotempty(SqlInstance) and Namespace == 'sqlserver_server_properties' and Name == 'uptime'
-```
-
-Check if any logs from Telegraf help identify the root cause of the problem. If there are log entries, you can select **Not collecting** and check the logs and troubleshooting info for common problems.
-
-If there are no log entries, check the logs on the monitoring virtual machine for the following services installed by two virtual machine extensions:
--- Microsoft.Azure.Monitor.AzureMonitorLinuxAgent
- - Service: mdsd
-- Microsoft.Azure.Monitor.Workloads.Workload.WLILinuxExtension
- - Service: wli
- - Service: ms-telegraf
- - Service: td-agent-bit-wli
- - Extension log to check installation failures: /var/log/azure/Microsoft.Azure.Monitor.Workloads.Workload.WLILinuxExtension/wlilogs.log
-
-### wli service logs
-
-Service logs: `/var/log/wli.log`
-
-To see recent logs: `tail -n 100 -f /var/log/wli.log`
-
-If you see the following error log, there's a problem with the mdsd service.
-
-```
-2021-01-27T06:09:28Z [Error] Failed to get config data. Error message: dial unix /var/run/mdsd/default_fluent.socket: connect: no such file or directory
-```
-
-### Telegraf service logs
-
-Service logs: `/var/log/ms-telegraf/telegraf.log`
-
-To see recent logs: `tail -n 100 -f /var/log/ms-telegraf/telegraf.log`
-
-To see recent error and warning logs: `tail -n 1000 /var/log/ms-telegraf/telegraf.log | grep "E!\|W!"`
-
-The configuration that telegraf uses is generated by the wli service and placed in: `/etc/ms-telegraf/telegraf.d/wli`
-
-If a bad configuration is generated, the ms-telegraf service might fail to start. Check if the ms-telegraf service is running by using this command: `service ms-telegraf status`
-
-To see error messages from the telegraf service, run it manually by using the following command:
-
-```
-/usr/bin/ms-telegraf --config /etc/ms-telegraf/telegraf.conf --config-directory /etc/ms-telegraf/telegraf.d/wli --test
-```
-
-### mdsd service logs
-
-Check [prerequisites](../agents/azure-monitor-agent-manage.md#prerequisites) for the Azure Monitor agent.
-
-Prior to Azure Monitoring Agent v1.12, mdsd service logs were located in:
-- `/var/log/mdsd.err`
-- `/var/log/mdsd.warn`
-- `/var/log/mdsd.info`
-From v1.12 onward, service logs are located in:
-- `/var/opt/microsoft/azuremonitoragent/log/`
-- `/etc/opt/microsoft/azuremonitoragent/`
-To see recent errors: `tail -n 100 -f /var/log/mdsd.err`
-
-If you need to contact support, collect the following information:
-
-- Logs in `/var/log/azure/Microsoft.Azure.Monitor.AzureMonitorLinuxAgent/`
-- Log in `/var/log/waagent.log`
-- Logs in `/var/log/mdsd*`, or logs in `/var/opt/microsoft/azuremonitoragent/log/` and `/etc/opt/microsoft/azuremonitoragent/`
-- Files in `/etc/mdsd.d/`
-- File `/etc/default/mdsd`
-### Invalid monitoring virtual machine configuration
-
-One cause of the **Not collecting** status is an invalid configuration for the monitoring virtual machine. Here's the simplest form of configuration:
-
-```json
-{
- "version": 1,
- "secrets": {
- "telegrafPassword": {
- "keyvault": "https://mykeyvault.vault.azure.net/",
- "name": "sqlPassword"
- }
- },
- "parameters": {
- "sqlAzureConnections": [
- "Server=mysqlserver.database.windows.net;Port=1433;Database=mydatabase;User Id=telegraf;Password=$telegrafPassword;"
- ],
- "sqlVmConnections": [
- ],
- "sqlManagedInstanceConnections": [
- ]
- }
-}
-```
-
-This configuration specifies the replacement tokens to be used in the profile configuration on your monitoring virtual machine. It also allows you to reference secrets from Azure Key Vault, so you don't have to keep secret values in any configuration. We strongly recommend this approach.
-
-In this configuration, the database connection string includes a `$telegrafPassword` replacement token. SQL Insights replaces this token with the SQL authentication password retrieved from Key Vault. The Key Vault URI is specified in the `telegrafPassword` configuration section under `secrets`.
-
-#### Secrets
-Secrets are tokens whose values are retrieved at runtime from an Azure key vault. A secret is defined by a pair of values: the key vault URI and a secret name. This definition allows SQL Insights to get the value of the secret at runtime and use it in downstream configuration.
-
-You can define as many secrets as needed, including secrets stored in multiple key vaults.
-
-```json
- "secrets": {
- "<secret-token-name-1>": {
- "keyvault": "<key-vault-uri>",
- "name": "<key-vault-secret-name>"
- },
- "<secret-token-name-2>": {
- "keyvault": "<key-vault-uri-2>",
- "name": "<key-vault-secret-name-2>"
- }
- }
-```
-
-The permission to access the key vault is provided to a managed identity on the monitoring virtual machine. This managed identity must be granted the Get permission on all Key Vault secrets referenced in the monitoring profile configuration. This can be done from the Azure portal, PowerShell, the Azure CLI, or an Azure Resource Manager template.
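For example, the Get permission can be granted to the VM's system-assigned identity with the Azure CLI (the resource names below are hypothetical):

```shell
# Look up the monitoring VM's system-assigned identity (hypothetical names).
principal_id=$(az vm identity show \
  --resource-group sql-monitoring-rg \
  --name sql-monitoring-vm \
  --query principalId --output tsv)

# Grant that identity the Get permission on secrets in the key vault.
az keyvault set-policy \
  --name sql-monitoring-kv \
  --object-id "$principal_id" \
  --secret-permissions get
```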
-
-#### Parameters
-Parameters are tokens that can be referenced in the profile configuration via JSON templates. Parameters have a name and a value. Values can be any JSON type, including objects and arrays. A parameter is referenced in the profile configuration by its name, using this convention: `.Parameters.<name>`.
-
-Parameters can reference secrets in Key Vault by using the same convention. For example, `sqlAzureConnections` references the secret `telegrafPassword` by using the convention `$telegrafPassword`.
-
-At runtime, all parameters and secrets will be resolved and merged with the profile configuration to construct the actual configuration to be used on the machine.
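The resolution step is performed by SQL Insights, not by you; as a rough local illustration of what substituting the `$telegrafPassword` token into a connection string produces (the secret value here is hypothetical):

```shell
# Connection string template containing the replacement token.
template='Server=mysqlserver.database.windows.net;Port=1433;User Id=telegraf;Password=$telegrafPassword;'

# Substitute the token with a (hypothetical) secret value, mimicking the
# runtime resolution SQL Insights performs with the Key Vault secret.
resolved=$(printf '%s' "$template" | sed 's/\$telegrafPassword/S3cr3tValue/')
echo "$resolved"
```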
-
-> [!NOTE]
-> The parameter names of `sqlAzureConnections`, `sqlVmConnections`, and `sqlManagedInstanceConnections` are all required in configuration, even if you don't provide connection strings for some of them.
-
-## Status: Collecting with errors
-The monitoring machine will have the status **Collecting with errors** if there's at least one recent *InsightsMetrics* log but there are also errors in the *Operation* table.
-
-SQL Insights uses the following queries to retrieve this information:
-
-```kusto
-InsightsMetrics
-  | extend Tags = todynamic(Tags)
-  | extend SqlInstance = tostring(Tags.sql_instance)
-  | where TimeGenerated > ago(240m) and isnotempty(SqlInstance) and Namespace == 'sqlserver_server_properties' and Name == 'uptime'
-```
-
-```kusto
-WorkloadDiagnosticLogs
-| summarize Errors = countif(Status == 'Error')
-```
-
-> [!NOTE]
-> If you don't see any data in `WorkloadDiagnosticLogs`, you might need to update your monitoring profile. From within SQL Insights in Azure portal, select **Manage profile** > **Edit profile** > **Update monitoring profile**.
-
-For common cases, we provide troubleshooting tips in our logs view:
--
-## Known issues
-
-During preview of SQL Insights, you may encounter the following known issues.
-
-* **'Login failed' error connecting to server or database**. Using certain special characters in SQL authentication passwords saved in the monitoring VM configuration or in Key Vault may prevent the monitoring VM from connecting to a SQL server or database. This set of characters includes parentheses, square and curly brackets, the dollar sign, forward and back slashes, and dot (`[ { ( ) } ] $ \ / .`).
-* Spaces in the database connection string attributes may be replaced with special characters, leading to database connection failures. For example, if the space in the `User Id` attribute is replaced with a special character, connections will fail with the **Login failed for user ''** error. To resolve this, edit the monitoring profile configuration and delete every special character appearing in place of a space. Some special characters may look indistinguishable from a space, so you may want to delete every space character, type it again, and save the configuration.
-* Data collection and visualization may not work if the OS computer name of the monitoring VM is different from the monitoring VM name.
-
-## Best practices
-
-* **Ensure access to Key Vault from the monitoring VM**. If you use Key Vault to store SQL authentication passwords (strongly recommended), you need to ensure that network and security configuration allows the monitoring VM to access Key Vault. For more information, see [Access Azure Key Vault behind a firewall](../../key-vault/general/access-behind-firewall.md) and [Configure Azure Key Vault networking settings](../../key-vault/general/how-to-azure-key-vault-network-security.md). To verify that the monitoring VM can access Key Vault, you can execute the following commands from an SSH session connected to the VM. You should be able to successfully retrieve the access token and the secret. Replace `[YOUR-KEY-VAULT-URL]`, `[YOUR-KEY-VAULT-SECRET]`, and `[YOUR-KEY-VAULT-ACCESS-TOKEN]` with actual values.
-
- ```bash
- # Get an access token for accessing Key Vault secrets
- curl 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net' -H Metadata:true
-
- # Get Key Vault secret
- curl 'https://[YOUR-KEY-VAULT-URL]/secrets/[YOUR-KEY-VAULT-SECRET]?api-version=2016-10-01' -H "Authorization: Bearer [YOUR-KEY-VAULT-ACCESS-TOKEN]"
- ```
-
-* **Update software on the monitoring VM**. We strongly recommend periodically updating the operating system and extensions on the monitoring VM. If an extension supports automatic upgrade, enable that option.
-
-* **Save previous configurations**. If you want to make changes to either monitoring profile or monitoring VM configuration, we recommend saving a working copy of your configuration data first. From the SQL Insights page in Azure portal, select **Manage profile** > **Edit profile**, and copy the text from **Current Monitoring Profile Config** to a file. Similarly, select **Manage profile** > **Configure** for the monitoring VM, and copy the text from **Current monitoring configuration** to a file. If data collection errors occur after configuration changes, you can compare the new configuration to the known working configuration using a text diff tool to help you find any changes that might have impacted collection.
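The comparison step above can be as simple as a text diff between the saved copy and the current export. This is a sketch; the file names and JSON content are illustrative:

```bash
# Keep a known-good copy of the monitoring profile config and compare it after edits
printf '{"collectionInterval": 60}\n' > profile-config.known-good.json
printf '{"collectionInterval": 30}\n' > profile-config.json   # edited copy

# diff exits non-zero when the files differ
diff -u profile-config.known-good.json profile-config.json || echo "configs differ"
```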
-
-## Next steps
--- Get details on [enabling SQL Insights (preview)](sql-insights-enable.md).
azure-monitor Profiler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler.md
To enable Profiler on Linux, walk through the [ASP.NET Core Azure Linux web apps
## Pre-requisites -- An [Azure App Services ASP.NET/ASP.NET Core app](/app-service/quickstart-dotnetcore.md).
+- An [Azure App Services ASP.NET/ASP.NET Core app](../../app-service/quickstart-dotnetcore.md).
- [Application Insights resource](../app/create-new-resource.md) connected to your App Service app. ## Verify "Always On" setting is enabled
azure-monitor Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Monitor description: Sample Azure Resource Graph queries for Azure Monitor showing use of resource types and tables to access Azure Monitor related resources and properties. Previously updated : 03/08/2022 Last updated : 06/16/2022
azure-monitor Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Monitor description: Lists Azure Policy Regulatory Compliance controls available for Azure Monitor. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 05/10/2022 Last updated : 06/16/2022
azure-monitor Vminsights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-overview.md
Last updated 06/08/2022
VM insights monitors the performance and health of your virtual machines and virtual machine scale sets, including their running processes and dependencies on other resources. It can help deliver predictable performance and availability of vital applications by identifying performance bottlenecks and network issues and can also help you understand whether an issue is related to other dependencies. > [!NOTE]
-> VM insights does not currently support [Azure Monitor agent](../agents/azure-monitor-agent-overview.md). You can
+> VM insights does not currently support [Azure Monitor agent](../agents/azure-monitor-agent-overview.md).
VM insights supports Windows and Linux operating systems on the following machines:
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 05/13/2022 Last updated : 06/17/2022 # Guidelines for Azure NetApp Files network planning
Azure NetApp Files standard network features are supported for the following reg
* Australia Central * Australia Central 2
+* Australia East
* Australia Southeast * East US * East US 2
azure-resource-manager Lock Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/lock-resources.md
Title: Protect your Azure resources with a lock
-description: You can safeguard Azure resources from updates or deletions by locking down all users and roles.
+description: You can safeguard Azure resources from updates or deletions by locking all users and roles.
Last updated 05/13/2022
As an administrator, you can lock an Azure subscription, resource group, or resource to protect them from accidental user deletions and modifications. The lock overrides any user permissions.
-You can set locks that prevent either deletions or modifications. In the portal, these locks are called Delete and Read-only. In the command line, these locks are called **CanNotDelete** or **ReadOnly**. In the left navigation panel, the subscription lock feature's name is **Resource locks**, while the resource group lock feature's name is **Locks**.
+You can set locks that prevent either deletions or modifications. In the portal, these locks are called **Delete** and **Read-only**. In the command line, these locks are called **CanNotDelete** and **ReadOnly**. In the left navigation panel, the subscription lock feature's name is **Resource locks**, while the resource group lock feature's name is **Locks**.
- **CanNotDelete** means authorized users can read and modify a resource, but they can't delete it. - **ReadOnly** means authorized users can read a resource, but they can't delete or update it. Applying this lock is similar to restricting all authorized users to the permissions that the **Reader** role provides.
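These lock levels map directly to Azure CLI commands — a sketch with illustrative names, assuming you're signed in and the resource group exists:

```azurecli
# Create a delete lock on a resource group
az lock create \
  --name LockGroup \
  --lock-type CanNotDelete \
  --resource-group exampleGroup \
  --notes "Protect against accidental deletion"

# List the locks applied to the group
az lock list --resource-group exampleGroup --output table
```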
-Unlike role-based access control, you use management locks to apply a restriction across all users and roles. To learn about setting permissions for users and roles, see [Azure role-based access control (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md).
+Unlike role-based access control (RBAC), you use management locks to apply a restriction across all users and roles. To learn about setting permissions for users and roles, see [Azure RBAC](../../role-based-access-control/role-assignments-portal.md).
## Lock inheritance
When you [cancel an Azure subscription](../../cost-management-billing/manage/can
## Understand scope of locks > [!NOTE]
-> Locks only apply to control plane Azure operations and not data plane operations.
+> Locks only apply to control plane Azure operations and not to data plane operations.
Azure control plane operations go to `https://management.azure.com`. Azure data plane operations go to your service instance, such as `https://myaccount.blob.core.windows.net/`. See [Azure control plane and data plane](control-plane-and-data-plane.md). To discover which operations use the control plane URL, see the [Azure REST API](/rest/api/azure/).
The distinction means locks protect a resource from changes, but they don't rest
## Considerations before applying your locks
-Applying locks can lead to unexpected results. Some operations, which don't seem to modify a resource, require blocked actions. Locks prevent the POST method from sending data to the Azure Resource Manager API. Some common examples of blocked operations are:
+Applying locks can lead to unexpected results. Some operations, which don't seem to modify a resource, require blocked actions. Locks prevent the POST method from sending data to the Azure Resource Manager (ARM) API. Some common examples of blocked operations are:
--A read-only lock on a **storage account** prevents users from listing the account keys. The Azure Storage [List Keys](/rest/api/storagerp/storageaccounts/listkeys) operation is handled through a POST request to protect access to the account keys, which provide complete access to data in the storage account. When a read-only lock is configured for a storage account, users who don't have the account keys must use Azure AD credentials to access blob or queue data. A read-only lock also prevents the assignment of Azure RBAC roles that are scoped to the storage account or to a data container (blob container or queue).
+- A read-only lock on a **storage account** prevents users from listing the account keys. A POST request handles the Azure Storage [List Keys](/rest/api/storagerp/storageaccounts/listkeys) operation to protect access to the account keys. The account keys provide complete access to data in the storage account. When a read-only lock is configured for a storage account, users who don't have the account keys need to use Azure AD credentials to access blob or queue data. A read-only lock also prevents the assignment of Azure RBAC roles that are scoped to the storage account or to a data container (blob container or queue).
-- A read-only lock on a **storage account** protects Azure Role-Based Access Control (RBAC) assignments scoped for a storage account or a data container (blob container or queue).
+- A read-only lock on a **storage account** protects RBAC assignments scoped for a storage account or a data container (blob container or queue).
- A cannot-delete lock on a **storage account** doesn't protect account data from deletion or modification. It only protects the storage account from deletion. If a request uses [data plane operations](control-plane-and-data-plane.md#data-plane), the lock on the storage account doesn't protect blob, queue, table, or file data within that storage account. If the request uses [control plane operations](control-plane-and-data-plane.md#control-plane), however, the lock protects those resources.
Applying locks can lead to unexpected results. Some operations, which don't seem
- A read-only lock on an **Application Gateway** prevents you from getting the backend health of the application gateway. That [operation uses a POST method](/rest/api/application-gateway/application-gateways/backend-health), which a read-only lock blocks. -- A read-only lock on a AKS cluster limits how you can access cluster resources through the portal. A read-only lock prevents you from using the AKS cluster's Kubernetes Resources section in the Azure portal to choose a cluster resource. These operations require a POST method request for authentication.
+- A read-only lock on an Azure Kubernetes Service (AKS) cluster limits how you can access cluster resources through the portal. A read-only lock prevents you from using the AKS cluster's Kubernetes resources section in the Azure portal to choose a cluster resource. These operations require a POST method request for authentication.
## Who can create or delete locks
If you try to delete the infrastructure resource group, you get an error stating
Instead, delete the service, which also deletes the infrastructure resource group.
-For managed applications, select the service you deployed.
+For managed applications, choose the service you deployed.
![Select service](./media/lock-resources/select-service.png)
To delete everything for the service, including the locked infrastructure resour
### Template
-When using an Azure Resource Manager template (ARM template) or Bicep file to deploy a lock, it's good to understand how the deployment scope and the lock scope work together. To apply a lock at the deployment scope, such as locking a resource group or a subscription, leave the scope property unset. When locking a resource, within the deployment scope, set the scope property on the lock.
+When using an ARM template or Bicep file to deploy a lock, it's good to understand how the deployment scope and the lock scope work together. To apply a lock at the deployment scope, such as locking a resource group or a subscription, leave the scope property unset. When locking a resource, within the deployment scope, set the scope property on the lock.
The following template applies a lock to the resource group it's deployed to. Notice there isn't a scope property on the lock resource because the lock scope matches the deployment scope. Deploy this template at the resource group level.
resource createRgLock 'Microsoft.Authorization/locks@2016-09-01' = {
-When applying a lock to a **resource** within the resource group, add the scope property. Set scope to the name of the resource to lock.
+When applying a lock to a **resource** within the resource group, add the scope property. Set the scope to the name of the resource to lock.
The following example shows a template that creates an app service plan, a website, and a lock on the website. The lock's scope is set to the website.
To create a lock, run:
PUT https://management.azure.com/{scope}/providers/Microsoft.Authorization/locks/{lock-name}?api-version={api-version} ```
-The scope could be a subscription, resource group, or resource. The lock name is whatever you want to call it. For the api-version, use **2016-09-01**.
+The scope could be a subscription, resource group, or resource. The lock name can be whatever you want to call it. For the API version, use **2016-09-01**.
In the request, include a JSON object that specifies the lock properties.
In the request, include a JSON object that specifies the lock properties.
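That object might look like the following — a minimal sketch where the `level` and `notes` values are illustrative:

```json
{
  "properties": {
    "level": "CanNotDelete",
    "notes": "Optional description of why the lock exists."
  }
}
```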
- To learn about logically organizing your resources, see [Using tags to organize your resources](tag-resources.md). - You can apply restrictions and conventions across your subscription with customized policies. For more information, see [What is Azure Policy?](../../governance/policy/overview.md).-- For guidance on how enterprises can use Resource Manager to effectively manage subscriptions, see [Azure enterprise scaffold - prescriptive subscription governance](/azure/architecture/cloud-adoption-guide/subscription-governance).
+- For guidance on how enterprises can use Resource Manager to effectively manage subscriptions, see [Azure enterprise scaffold - prescriptive subscription governance](/azure/architecture/cloud-adoption-guide/subscription-governance).
azure-resource-manager Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Resource Manager description: Sample Azure Resource Graph queries for Azure Resource Manager showing use of resource types and tables to access Azure Resource Manager related resources and properties. Previously updated : 03/08/2022 Last updated : 06/16/2022
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
description: Shows the rules and restrictions for naming Azure resources.
Previously updated : 05/25/2022 Last updated : 06/17/2022 # Naming rules and restrictions for Azure resources
In the following tables, the term alphanumeric refers to:
> | locks | scope of assignment | 1-90 | Alphanumerics, periods, underscores, hyphens, and parenthesis.<br><br>Can't end in period. | > | policyAssignments | scope of assignment | 1-128 display name<br><br>1-64 resource name<br><br>1-24 resource name at management group scope | Display name can contain any characters.<br><br>Resource name can't use:<br>`<>*%&:\?.+/` or control characters. <br><br>Can't end with period or space. | > | policyDefinitions | scope of definition | 1-128 display name<br><br>1-64 resource name | Display name can contain any characters.<br><br>Resource name can't use:<br>`<>*%&:\?.+/` or control characters. <br><br>Can't end with period or space. |
-> | policySetDefinitions | scope of definition | 1-128 display name<br><br>1-64 resource name<br><br>1-64 resource name at management group scope | Display name can contain any characters.<br><br>Resource name can't use:<br>`<>*%&:\?.+/` or control characters. <br><br>Can't end with period or space. |
+> | policySetDefinitions | scope of definition | 1-128 display name<br><br>1-64 resource name | Display name can contain any characters.<br><br>Resource name can't use:<br>`<>*%&:\?.+/` or control characters. <br><br>Can't end with period or space. |
> | roleAssignments | tenant | 36 | Must be a globally unique identifier (GUID). | > | roleDefinitions | tenant | 36 | Must be a globally unique identifier (GUID). |
In the following tables, the term alphanumeric refers to:
> | disks | resource group | 1-80 | Alphanumerics, underscores, and hyphens. | > | galleries | resource group | 1-80 | Alphanumerics and periods.<br><br>Start and end with alphanumeric. | > | galleries / applications | gallery | 1-80 | Alphanumerics, hyphens, and periods.<br><br>Start and end with alphanumeric. |
-> | galleries / applications/versions | application | 32-bit integer | Numbers and periods. |
+> | galleries / applications/versions | application | 32-bit integer | Numbers and periods.<br/>(Each segment is converted to an int32. So each segment has a max value of 2,147,483,647.) |
> | galleries / images | gallery | 1-80 | Alphanumerics, underscores, hyphens, and periods.<br><br>Start and end with alphanumeric. |
-> | galleries / images / versions | image | 32-bit integer | Numbers and periods. |
+> | galleries / images / versions | image | 32-bit integer | Numbers and periods.<br/>(Each segment is converted to an int32. So each segment has a max value of 2,147,483,647.) |
> | images | resource group | 1-80 | Alphanumerics, underscores, periods, and hyphens.<br><br>Start with alphanumeric. End with alphanumeric or underscore. | > | snapshots | resource group | 1-80 | Alphanumerics, underscores, periods, and hyphens.<br><br>Start with alphanumeric. End with alphanumeric or underscore. | > | virtualMachines | resource group | 1-15 (Windows)<br>1-64 (Linux)<br><br>See note below. | Can't use spaces, control characters, or these characters:<br> `~ ! @ # $ % ^ & * ( ) = + _ [ ] { } \ | ; : . ' " , < > / ?`<br><br>Windows VMs can't include period or end with hyphen.<br><br>Linux VMs can't end with period or hyphen. |
azure-resource-manager Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Resource Manager description: Lists Azure Policy Regulatory Compliance controls available for Azure Resource Manager. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 05/10/2022 Last updated : 06/16/2022
azure-resource-manager Template Tutorial Add Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-add-functions.md
Title: Tutorial - add template functions description: Add template functions to your Azure Resource Manager template (ARM template) to construct values. Previously updated : 03/27/2020 Last updated : 06/17/2022
In this tutorial, you learn how to add [template functions](template-functions.m
We recommend that you complete the [tutorial about parameters](template-tutorial-add-parameters.md), but it's not required.
-You must have Visual Studio Code with the Resource Manager Tools extension, and either Azure PowerShell or Azure CLI. For more information, see [template tools](template-tutorial-create-first-template.md#get-tools).
+You need to have [Visual Studio Code](https://code.visualstudio.com/) installed and working with the Azure Resource Manager Tools extension, and either Azure PowerShell or Azure CLI. For more information, see [template tools](template-tutorial-create-first-template.md#get-tools).
## Review template
-At the end of the previous tutorial, your template had the following JSON:
+At the end of the previous tutorial, your template had the following JSON file:
:::code language="json" source="~/resourcemanager-templates/get-started-with-templates/add-sku/azuredeploy.json":::
-The location of the storage account is hard-coded to **eastus**. However, you may need to deploy the storage account to other regions. You're again facing an issue of your template lacking flexibility. You could add a parameter for location, but it would be great if its default value made more sense than just a hard-coded value.
+Suppose you hard-coded the location of the [Azure storage account](../../storage/common/storage-account-create.md) to **eastus**, but you need to deploy it to another region. You need to add a parameter to add flexibility to your template and allow it to have a different location.
## Use function
-If you've completed the [parameters tutorial](./template-tutorial-add-parameters.md#make-template-reusable), you used a function. When you added `"[parameters('storageName')]"` you used the [parameters](template-functions-deployment.md#parameters) function. The brackets indicate that the syntax inside the brackets is a [template expression](template-expressions.md). Resource Manager resolves the syntax rather than treating it as a literal value.
+If you completed the [parameters tutorial](./template-tutorial-add-parameters.md#make-template-reusable), you used a function. When you added `"[parameters('storageName')]"`, you used the [parameters](template-functions-deployment.md#parameters) function. The brackets indicate that the syntax inside the brackets is a [template expression](template-expressions.md). Resource Manager resolves the syntax instead of treating it as a literal value.
-Functions add flexibility to your template by dynamically getting values during deployment. In this tutorial, you use a function to get the location of the resource group you're using for deployment.
+Functions add flexibility to your template by dynamically getting values during deployment. In this tutorial, you use a function to get the resource group deployment location.
-The following example highlights the changes to add a parameter called `location`. The parameter default value calls the [resourceGroup](template-functions-resource.md#resourcegroup) function. This function returns an object with information about the resource group being used for deployment. One of the properties on the object is a location property. When you use the default value, the storage account location has the same location as the resource group. The resources inside a resource group don't have to share the same location. You can also provide a different location when needed.
+The following example highlights the changes to add a parameter called `location`. The parameter default value calls the [resourceGroup](template-functions-resource.md#resourcegroup) function. This function returns an object with information about the resource group used for deployment. One of the object properties is a location property. When you use the default value, the storage account and the resource group have the same location. The resources inside a resource group can have different locations, so you can also provide a different location when needed.
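The `location` parameter described above can be sketched as the following fragment (not the complete template; the `metadata` description is illustrative):

```json
"location": {
  "type": "string",
  "defaultValue": "[resourceGroup().location]",
  "metadata": {
    "description": "Location for the storage account."
  }
}
```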
Copy the whole file and replace your template with its contents.
Copy the whole file and replace your template with its contents.
## Deploy template
-In the previous tutorials, you created a storage account in East US, but your resource group was created in Central US. For this tutorial, your storage account is created in the same region as the resource group. Use the default value for location, so you don't need to provide that parameter value. You must provide a new name for the storage account because you're creating a storage account in a different location. For example, use **store2** as the prefix instead of **store1**.
+In the previous tutorials, you created a storage account in East US, but you created your resource group in Central US. For this tutorial, you create the storage account in the same region as the resource group. Use the default value for location, so you don't need to provide that parameter value. You need to provide a new name for the storage account because you're creating it in a different location. For example, use **store2** as the prefix instead of **store1**.
If you haven't created the resource group, see [Create resource group](template-tutorial-create-first-template.md#create-resource-group). The example assumes you've set the `templateFile` variable to the path to the template file, as shown in the [first tutorial](template-tutorial-create-first-template.md#deploy-template).
New-AzResourceGroupDeployment `
# [Azure CLI](#tab/azure-cli)
-To run this deployment command, you must have the [latest version](/cli/azure/install-azure-cli) of Azure CLI.
+To run this deployment command, you need to have the [latest version](/cli/azure/install-azure-cli) of Azure CLI.
```azurecli az deployment group create \
az deployment group create \
> [!NOTE]
-> If the deployment failed, use the `verbose` switch to get information about the resources being created. Use the `debug` switch to get more information for debugging.
+> If the deployment fails, use the `verbose` switch to get information about the resources being created. Use the `debug` switch to get more information for debugging.
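For example, with Azure CLI (the command reuses the `templateFile` variable from earlier; `--verbose` and `--debug` are global Azure CLI options):

```azurecli
az deployment group create \
  --name addfunction \
  --resource-group myResourceGroup \
  --template-file $templateFile \
  --verbose
# use --debug instead for even more detail
```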
## Verify deployment
You can verify the deployment by exploring the resource group from the Azure por
1. Sign in to the [Azure portal](https://portal.azure.com). 1. From the left menu, select **Resource groups**.
-1. Select the resource group you deployed to.
-1. You see that a storage account resource has been deployed and has the same location as the resource group.
+1. Select the resource group you created. The default name is **myResourceGroup**.
+1. Notice your deployed storage account and your resource group have the same location.
+ ## Clean up resources If you're moving on to the next tutorial, you don't need to delete the resource group.
-If you're stopping now, you might want to clean up the resources you deployed by deleting the resource group.
+If you're stopping now, you might want to delete the resource group.
-1. From the Azure portal, select **Resource group** from the left menu.
-2. Enter the resource group name in the **Filter by name** field.
-3. Select the resource group name.
+1. From the Azure portal, select **Resource groups** from the left menu.
+2. Type the resource group name in the **Filter for any field...** text field.
+3. Check the box next to **myResourceGroup** and select **myResourceGroup** or your resource group name.
4. Select **Delete resource group** from the top menu. ## Next steps
-In this tutorial, you used a function when defining the default value for a parameter. In this tutorial series, you'll continue using functions. By the end of the series, you'll add functions to every section of the template.
+In this tutorial, you used a function to define the default value for a parameter. In this tutorial series, you continue to use functions. By the end of the series, you add functions to every template section.
> [!div class="nextstepaction"] > [Add variables](template-tutorial-add-variables.md)
azure-resource-manager Template Tutorial Add Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-add-parameters.md
Title: Tutorial - add parameters to template description: Add parameters to your Azure Resource Manager template (ARM template) to make it reusable. Previously updated : 03/31/2020 Last updated : 06/15/2022
# Tutorial: Add parameters to your ARM template
-In the [previous tutorial](template-tutorial-add-resource.md), you learned how to add a storage account to the template and deploy it. In this tutorial, you learn how to improve the Azure Resource Manager template (ARM template) by adding parameters. This tutorial takes about **14 minutes** to complete.
+In the [previous tutorial](template-tutorial-add-resource.md), you learned how to add an [Azure storage account](../../storage/common/storage-account-create.md) to the template and deploy it. In this tutorial, you learn how to improve the Azure Resource Manager template (ARM template) by adding parameters. This tutorial takes about **14 minutes** to complete.
## Prerequisites We recommend that you complete the [tutorial about resources](template-tutorial-add-resource.md), but it's not required.
-You must have Visual Studio Code with the Resource Manager Tools extension, and either Azure PowerShell or Azure CLI. For more information, see [template tools](template-tutorial-create-first-template.md#get-tools).
+You need to have [Visual Studio Code](https://code.visualstudio.com/) installed and working with the Azure Resource Manager Tools extension, and either Azure PowerShell or Azure CLI. For more information, see [template tools](template-tutorial-create-first-template.md#get-tools).
## Review template
-At the end of the previous tutorial, your template had the following JSON:
+At the end of the previous tutorial, your template has the following JSON file:
:::code language="json" source="~/resourcemanager-templates/get-started-with-templates/add-storage/azuredeploy.json":::
-You may have noticed that there's a problem with this template. The storage account name is hard-coded. You can only use this template to deploy the same storage account every time. To deploy a storage account with a different name, you would have to create a new template, which obviously isn't a practical way to automate your deployments.
+You may notice that there's a problem with this template. The storage account name is hard-coded. You can only use this template to deploy the same storage account every time. To deploy a storage account with a different name, you need to create a new template, which obviously isn't a practical way to automate your deployments.
## Make template reusable
-To make your template reusable, let's add a parameter that you can use to pass in a storage account name. The highlighted JSON in the following example shows what changed in your template. The `storageName` parameter is identified as a string. The maximum length is set to 24 characters to prevent any names that are too long.
+To make your template reusable, let's add a parameter that you can use to pass in a storage account name. The highlighted JSON in the following example shows the changes in your template. The `storageName` parameter is identified as a string. The storage account name uses only lowercase letters and numbers and has a maximum length of 24 characters.
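The `storageName` parameter above can be declared along these lines — a fragment sketch, where the `minLength` value is illustrative:

```json
"storageName": {
  "type": "string",
  "minLength": 3,
  "maxLength": 24
}
```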
Copy the whole file and replace your template with its contents.
Copy the whole file and replace your template with its contents.
Let's deploy the template. The following example deploys the template with Azure CLI or PowerShell. Notice that you provide the storage account name as one of the values in the deployment command. For the storage account name, provide the same name you used in the previous tutorial.
-If you haven't created the resource group, see [Create resource group](template-tutorial-create-first-template.md#create-resource-group). The example assumes you've set the `templateFile` variable to the path to the template file, as shown in the [first tutorial](template-tutorial-create-first-template.md#deploy-template).
+If you haven't created the resource group, see [Create resource group](template-tutorial-create-first-template.md#create-resource-group). The example assumes you set the `templateFile` variable to the path of the template file, as shown in the [first tutorial](template-tutorial-create-first-template.md#deploy-template).
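As a sketch (the deployment name `addnameparameter` is illustrative), the Azure CLI version of such a command passes the parameter inline:

```azurecli
az deployment group create \
  --name addnameparameter \
  --resource-group myResourceGroup \
  --template-file $templateFile \
  --parameters storageName={your-unique-name}
```

Replace `{your-unique-name}` with the storage account name you used in the previous tutorial.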
# [PowerShell](#tab/azure-powershell)
New-AzResourceGroupDeployment `
# [Azure CLI](#tab/azure-cli)
-To run this deployment command, you must have the [latest version](/cli/azure/install-azure-cli) of Azure CLI.
+To run this deployment command, you need to have the [latest version](/cli/azure/install-azure-cli) of Azure CLI.
```azurecli az deployment group create \
az deployment group create \
## Understand resource updates
-In the previous section, you deployed a storage account with the same name that you had created earlier. You may be wondering how the resource is affected by the redeployment.
+After you deploy a storage account with the same name you used earlier, you may wonder how the redeployment affects the resource.
-If the resource already exists and no change is detected in the properties, no action is taken. If the resource already exists and a property has changed, the resource is updated. If the resource doesn't exist, it's created.
+If the resource already exists and there's no change in the properties, no action is taken. If the resource exists and a property changes, the resource is updated. If the resource doesn't exist, it's created.
-This way of handling updates means your template can include all of the resources you need for an Azure solution. You can safely redeploy the template and know that resources are changed or created only when needed. For example, if you have added files to your storage account, you can redeploy the storage account without losing those files.
+This way of handling updates means your template can include all of the resources you need for an Azure solution. You can safely redeploy the template and know that resources change or are created only when needed. If you add files to your storage account, for example, you can redeploy the storage account without losing the files.
## Customize by environment
-Parameters enable you to customize the deployment by providing values that are tailored for a particular environment. For example, you can pass different values based on whether you're deploying to an environment for development, test, and production.
+Parameters let you customize the deployment by providing values that are tailored for a particular environment. You can pass different values, for example, based on whether you're deploying to a development, testing, or production environment.
-The previous template always deployed a **Standard_LRS** storage account. You might want the flexibility to deploy different SKUs depending on the environment. The following example shows the changes to add a parameter for SKU. Copy the whole file and paste over your template.
+The previous template always deploys a locally redundant storage (LRS) **Standard_LRS** account. You might want the flexibility to deploy different stock keeping units (SKUs) depending on the environment. The following example shows the changes to add a parameter for SKU. Copy the whole file and paste it over your template.
:::code language="json" source="~/resourcemanager-templates/get-started-with-templates/add-sku/azuredeploy.json" range="1-40" highlight="10-23,32":::
-The `storageSKU` parameter has a default value. This value is used when a value isn't specified during the deployment. It also has a list of allowed values. These values match the values that are needed to create a storage account. You don't want users of your template to pass in SKUs that don't work.
+The `storageSKU` parameter has a default value, which is used when the deployment doesn't specify a value. The parameter also has a list of allowed values that match the values needed to create a storage account. The allowed values keep template users from passing in SKUs that don't work.
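A sketch of how such a declaration might look (the allowed values shown are common storage SKUs; the linked template defines the authoritative list):

```json
"storageSKU": {
  "type": "string",
  "defaultValue": "Standard_LRS",
  "allowedValues": [
    "Standard_LRS",
    "Standard_GRS",
    "Standard_RAGRS",
    "Standard_ZRS",
    "Premium_LRS"
  ]
}
```

If a deployment passes a value outside `allowedValues`, validation rejects it before any resource is touched.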
## Redeploy template
-You're ready to deploy again. Because the default SKU is set to **Standard_LRS**, you don't need to provide a value for that parameter.
+You're ready to deploy again. Because the default SKU is set to **Standard_LRS**, you don't need to provide a value for that parameter.
# [PowerShell](#tab/azure-powershell)
az deployment group create \
> [!NOTE]
-> If the deployment failed, use the `verbose` switch to get information about the resources being created. Use the `debug` switch to get more information for debugging.
+> If the deployment fails, use the `verbose` switch to get information about the resources being created. Use the `debug` switch to get more information for debugging.
-To see the flexibility of your template, let's deploy again. This time set the SKU parameter to **Standard_GRS**. You can either pass in a new name to create a different storage account, or use the same name to update your existing storage account. Both options work.
+To see the flexibility of your template, let's deploy it again. This time set the SKU parameter to geo-redundant storage (GRS) **Standard_GRS**. You can either pass in a new name to create a different storage account or use the same name to update your existing storage account. Both options work.
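As a sketch (the deployment name `addskuparameter` is illustrative), the Azure CLI version of that call might look like:

```azurecli
az deployment group create \
  --name addskuparameter \
  --resource-group myResourceGroup \
  --template-file $templateFile \
  --parameters storageSKU=Standard_GRS storageName={your-unique-name}
```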
# [PowerShell](#tab/azure-powershell)
az deployment group create \
-Finally, let's run one more test and see what happens when you pass in a SKU that isn't one of the allowed values. In this case, we test the scenario where a user of your template thinks **basic** is one of the SKUs.
+Finally, let's run one more test and see what happens when you pass in an SKU that isn't one of the allowed values. In this case, we test the scenario where your template user thinks **basic** is one of the SKUs.
# [PowerShell](#tab/azure-powershell)
az deployment group create \
-The command fails immediately with an error message that states which values are allowed. Resource Manager identifies the error before the deployment starts.
+The command fails immediately with an error message that lists the allowed values. Azure Resource Manager identifies the error before the deployment starts.
## Clean up resources

If you're moving on to the next tutorial, you don't need to delete the resource group.
-If you're stopping now, you might want to clean up the resources you deployed by deleting the resource group.
+If you're stopping now, you might want to clean up your deployed resources by deleting the resource group.
1. From the Azure portal, select **Resource group** from the left menu.
-2. Enter the resource group name in the **Filter by name** field.
-3. Select the resource group name.
+2. Type the resource group name in the **Filter for any field ...** text field.
+3. Check the box next to **myResourceGroup** and select **myResourceGroup** or your resource group name.
4. Select **Delete resource group** from the top menu.

## Next steps
-You improved the template created in the [first tutorial](template-tutorial-create-first-template.md) by adding parameters. In the next tutorial, you'll learn about template functions.
+You improved the template you created in the [first tutorial](template-tutorial-create-first-template.md) by adding parameters. In the next tutorial, you learn about template functions.
> [!div class="nextstepaction"]
-> [Add template functions](template-tutorial-add-functions.md)
+> [Add template functions](template-tutorial-add-functions.md)
azure-resource-manager Template Tutorial Add Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-add-resource.md
Title: Tutorial - Add resource to template description: Describes the steps to create your first Azure Resource Manager template (ARM template). You learn about the template file syntax and how to deploy a storage account. Previously updated : 03/27/2020 Last updated : 06/14/2022 -+ # Tutorial: Add a resource to your ARM template
-In the [previous tutorial](template-tutorial-create-first-template.md), you learned how to create a blank Azure Resource Manager template (ARM template) and deploy it. Now, you're ready to deploy an actual resource. In this tutorial, you add a storage account. It takes about **9 minutes** to complete this tutorial.
+In the [previous tutorial](template-tutorial-create-first-template.md), you learned how to create and deploy your first blank Azure Resource Manager template (ARM template). Now, you're ready to deploy an actual resource: an [Azure storage account](../../storage/common/storage-account-create.md). It takes about **9 minutes** to complete this tutorial.
## Prerequisites

We recommend that you complete the [introductory tutorial about templates](template-tutorial-create-first-template.md), but it's not required.
-You must have Visual Studio Code with the Resource Manager Tools extension, and either Azure PowerShell or Azure CLI. For more information, see [template tools](template-tutorial-create-first-template.md#get-tools).
+You need to have [Visual Studio Code](https://code.visualstudio.com/) installed and working with the Azure Resource Manager Tools extension, and either Azure PowerShell or Azure CLI. For more information, see [template tools](template-tutorial-create-first-template.md#get-tools).
## Add resource
-To add a storage account definition to the existing template, look at the highlighted JSON in the following example. Instead of trying to copy sections of the template, copy the whole file and replace your template with its contents.
+To add an Azure storage account definition to the existing template, look at the highlighted JSON in the following example. Instead of trying to copy sections of the template, copy the whole file and replace your template with its contents.
Replace `{provide-unique-name}` and the curly braces `{}` with a unique storage account name.

> [!IMPORTANT]
-> The storage account name must be unique across Azure. The name must have only lowercase letters or numbers. It can be no longer than 24 characters. You might try a naming pattern like using **store1** as a prefix and then adding your initials and today's date. For example, the name you use could look like **store1abc09092019**.
+> The storage account name needs to be unique across Azure. It's only lowercase letters or numbers and has a limit of 24 characters. You can use a name like **store1** as a prefix and then add your initials and today's date. The name, for example, can be **store1abc06132022**.
:::code language="json" source="~/resourcemanager-templates/get-started-with-templates/add-storage/azuredeploy.json" range="1-19" highlight="5-17":::
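In outline, the storage account resource added to the template has this shape (a sketch, not the full linked file; the `location` value here is illustrative):

```json
{
  "type": "Microsoft.Storage/storageAccounts",
  "apiVersion": "2021-04-01",
  "name": "{provide-unique-name}",
  "location": "eastus",
  "sku": {
    "name": "Standard_LRS"
  },
  "kind": "StorageV2"
}
```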
-Guessing a unique name for a storage account isn't easy and doesn't work well for automating large deployments. Later in this tutorial series, you'll use template features that make it easier to create a unique name.
+Guessing a unique name for a storage account isn't easy and doesn't work well for automating large deployments. Later in this tutorial series, you use template features that make it easier to create a unique name.
## Resource properties
Every resource you deploy has at least the following three properties:
- `apiVersion`: Version of the REST API to use for creating the resource. Each resource provider publishes its own API versions, so this value is specific to the type.
- `name`: Name of the resource.
-Most resources also have a `location` property, which sets the region where the resource is deployed.
+Most resources also have a `location` property, which sets the region where you deploy the resource.
The other properties vary by resource type and API version. It's important to understand the connection between the API version and the available properties, so let's jump into more detail.
-In this tutorial, you added a storage account to the template. You can see that API version at [storageAccounts 2021-04-01](/azure/templates/microsoft.storage/2021-04-01/storageaccounts). Notice that you didn't add all of the properties to your template. Many of the properties are optional. The `Microsoft.Storage` resource provider could release a new API version, but the version you're deploying doesn't have to change. You can continue using that version and know that the results of your deployment will be consistent.
+In this tutorial, you add a storage account to the template. You can see the storage account's API version at [storageAccounts 2021-04-01](/azure/templates/microsoft.storage/2021-04-01/storageaccounts). Notice that you don't add all the properties to your template. Many of the properties are optional. The `Microsoft.Storage` resource provider could release a new API version, but the version you're deploying doesn't have to change. You can continue using that version and know that the results of your deployment are consistent.
-If you view an older API version, such as [storageAccounts 2016-05-01](/azure/templates/microsoft.storage/2016-05-01/storageaccounts), you'll see that a smaller set of properties are available.
+If you view an older API version, such as [storageAccounts 2016-05-01](/azure/templates/microsoft.storage/2016-05-01/storageaccounts), you see that a smaller set of properties is available.
If you decide to change the API version for a resource, make sure you evaluate the properties for that version and adjust your template appropriately.
New-AzResourceGroupDeployment `
# [Azure CLI](#tab/azure-cli)
-To run this deployment command, you must have the [latest version](/cli/azure/install-azure-cli) of Azure CLI.
+To run this deployment command, you need to have the [latest version](/cli/azure/install-azure-cli) of Azure CLI.
```azurecli az deployment group create \
az deployment group create \
> [!NOTE]
-> If the deployment failed, use the `verbose` switch to get information about the resources being created. Use the `debug` switch to get more information for debugging.
+> If the deployment fails, use the `verbose` switch to get information about the resources you're creating. Use the `debug` switch to get more information for debugging.
-Two possible deployment failures that you might encounter:
+Here are two possible deployment failures that you might encounter:
-- `Error: Code=AccountNameInvalid; Message={provide-unique-name}` is not a valid storage account name. Storage account name must be between 3 and
-24 characters in length and use numbers and lower-case letters only.
+- `Error: Code=AccountNameInvalid; Message={provide-unique-name}` isn't a valid storage account name. The storage account name needs to be between 3 and 24 characters in length and use numbers and lower-case letters only.
In the template, replace `{provide-unique-name}` with a unique storage account name. See [Add resource](#add-resource).
Two possible deployment failures that you might encounter:
In the template, try a different storage account name.
-This deployment takes longer than your blank template deployment because the storage account is created. It can take about a minute but is usually faster.
+This deployment takes longer than your blank template deployment because you're creating a storage account. It can take about a minute.
## Verify deployment
You can verify the deployment by exploring the resource group from the Azure por
1. Sign in to the [Azure portal](https://portal.azure.com).
1. From the left menu, select **Resource groups**.
+1. Check the box to the left of **myResourceGroup** and select **myResourceGroup**.
1. Select the resource group you deployed to.
1. You see that a storage account has been deployed.
1. Notice that the deployment label now says: **Deployments: 2 Succeeded**.
If you're moving on to the next tutorial, you don't need to delete the resource
If you're stopping now, you might want to clean up the resources you deployed by deleting the resource group.

1. From the Azure portal, select **Resource group** from the left menu.
-2. Enter the resource group name in the **Filter by name** field.
-3. Select the resource group name.
+2. Type the resource group name in the **Filter for any field ...** box.
+3. Check the box next to **myResourceGroup** and select **myResourceGroup** or the resource group name you chose.
4. Select **Delete resource group** from the top menu. ## Next steps
azure-resource-manager Template Tutorial Add Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-add-variables.md
Title: Tutorial - add variable to template description: Add variables to your Azure Resource Manager template (ARM template) to simplify the syntax. Previously updated : 03/27/2020 Last updated : 06/17/2022
# Tutorial: Add variables to your ARM template
-In this tutorial, you learn how to add a variable to your Azure Resource Manager template (ARM template). Variables simplify your templates by enabling you to write an expression once and reuse it throughout the template. This tutorial takes **7 minutes** to complete.
+In this tutorial, you learn how to add a variable to your Azure Resource Manager template (ARM template). Variables simplify your templates. They let you write an expression once and reuse it throughout the template. This tutorial takes **7 minutes** to complete.
## Prerequisites

We recommend that you complete the [tutorial about functions](template-tutorial-add-functions.md), but it's not required.
-You must have Visual Studio Code with the Resource Manager Tools extension, and either Azure PowerShell or Azure CLI. For more information, see [template tools](template-tutorial-create-first-template.md#get-tools).
+You need to have [Visual Studio Code](https://code.visualstudio.com/) installed and working with the Azure Resource Manager Tools extension, and either Azure PowerShell or Azure CLI. For more information, see [template tools](template-tutorial-create-first-template.md#get-tools).
## Review template
-At the end of the previous tutorial, your template had the following JSON:
+At the end of the previous tutorial, your template had the following JSON:
:::code language="json" source="~/resourcemanager-templates/get-started-with-templates/add-location/azuredeploy.json":::
-The parameter for the storage account name is hard-to-use because you have to provide a unique name. If you've completed the earlier tutorials in this series, you're probably tired of guessing a unique name. You solve this problem by adding a variable that constructs a unique name for the storage account.
+Your [Azure storage account](../../storage/common/storage-account-create.md) name needs to be unique for each deployment. If you've completed the earlier tutorials in this series, you're probably tired of coming up with a unique name. You solve this problem by adding a variable that creates a unique name for your storage account.
## Use variable
The following example highlights the changes to add a variable to your template
:::code language="json" source="~/resourcemanager-templates/get-started-with-templates/add-variable/azuredeploy.json" range="1-47" highlight="5-9,29-31,36":::
-Notice that it includes a variable named `uniqueStorageName`. This variable uses four functions to construct a string value.
+Notice that it includes a variable named `uniqueStorageName`. This variable uses four functions to create a string value.
You're already familiar with the [parameters](template-functions-deployment.md#parameters) function, so we won't examine it.
-You're also familiar with the [resourceGroup](template-functions-resource.md#resourcegroup) function. In this case, you get the `id` property instead of the `location` property, as shown in the previous tutorial. The `id` property returns the full identifier of the resource group, including the subscription ID and resource group name.
+You're also familiar with the [resourceGroup](template-functions-resource.md#resourcegroup) function. In this case, you get the `id` property instead of the `location` property, as shown in the previous tutorial. The `id` property returns the full identifier of the resource group, including the subscription ID and the resource group name.
-The [uniqueString](template-functions-string.md#uniquestring) function creates a 13 character hash value. The returned value is determined by the parameters you pass in. For this tutorial, you use the resource group ID as the input for the hash value. That means you could deploy this template to different resource groups and get a different unique string value. However, you get the same value if you deploy to the same resource group.
+The [uniqueString](template-functions-string.md#uniquestring) function creates a 13-character hash value. The parameters you pass determine the returned value. For this tutorial, you use the resource group ID as the input for the hash value. That means you could deploy this template to different resource groups and get a different unique string value. You get the same value, however, if you deploy to the same resource group.
-The [concat](template-functions-string.md#concat) function takes values and combines them. For this variable, it takes the string from the parameter and the string from the `uniqueString` function, and combines them into one string.
+The [concat](template-functions-string.md#concat) function takes values and combines them. For this variable, it takes the string from the parameter and the string from the `uniqueString` function and combines them into one string.
-The `storagePrefix` parameter enables you to pass in a prefix that helps you identify storage accounts. You can create your own naming convention that makes it easier to identify storage accounts after deployment from a long list of resources.
+The `storagePrefix` parameter lets you pass in a prefix that helps you identify storage accounts. You can create your own naming convention that makes it easier to identify storage accounts after deployment from an extensive list of resources.
-Finally, notice that the storage name is now set to the variable instead of a parameter.
+Finally, notice that the storage account name is now set to the variable instead of a parameter.
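Putting those functions together, the variable can be sketched as follows (consistent with the functions just described; see the linked template for the authoritative version):

```json
"variables": {
  "uniqueStorageName": "[concat(parameters('storagePrefix'), uniqueString(resourceGroup().id))]"
}
```

The resource's `name` property then references it with `"[variables('uniqueStorageName')]"` instead of a parameter.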
## Deploy template
-Let's deploy the template. Deploying this template is easier than the previous templates because you provide just the prefix for the storage name.
+Let's deploy the template. Deploying this template is easier than the previous templates because you provide just the prefix for the storage account name.
If you haven't created the resource group, see [Create resource group](template-tutorial-create-first-template.md#create-resource-group). The example assumes you've set the `templateFile` variable to the path to the template file, as shown in the [first tutorial](template-tutorial-create-first-template.md#deploy-template).
New-AzResourceGroupDeployment `
# [Azure CLI](#tab/azure-cli)
-To run this deployment command, you must have the [latest version](/cli/azure/install-azure-cli) of Azure CLI.
+To run this deployment command, you need to have the [latest version](/cli/azure/install-azure-cli) of Azure CLI.
```azurecli az deployment group create \
az deployment group create \
> [!NOTE]
-> If the deployment failed, use the `verbose` switch to get information about the resources being created. Use the `debug` switch to get more information for debugging.
+> If the deployment fails, use the `verbose` switch to get information about the resources being created. Use the `debug` switch to get more information for debugging.
## Verify deployment
You can verify the deployment by exploring the resource group from the Azure por
1. Sign in to the [Azure portal](https://portal.azure.com).
1. From the left menu, select **Resource groups**.
-1. Select the resource group you deployed to.
-1. You see that a storage account resource has been deployed. The name of the storage account is **store** plus a string of random characters.
+1. Select your resource group.
+1. Notice that your deployed storage account name is **store** plus a string of random characters.
## Clean up resources

If you're moving on to the next tutorial, you don't need to delete the resource group.
-If you're stopping now, you might want to clean up the resources you deployed by deleting the resource group.
+If you're stopping now, you might want to delete the resource group.
-1. From the Azure portal, select **Resource group** from the left menu.
-2. Enter the resource group name in the **Filter by name** field.
-3. Select the resource group name.
+1. From the Azure portal, select **Resource groups** from the left menu.
+2. Type the resource group name in the **Filter for any field...** text field.
+3. Check the box next to **myResourceGroup** and select **myResourceGroup** or your resource group name.
4. Select **Delete resource group** from the top menu. ## Next steps
-In this tutorial, you added a variable that creates a unique name for a storage account. In the next tutorial, you return a value from the deployed storage account.
+In this tutorial, you add a variable that creates a unique storage account name. In the next tutorial, you return a value from the deployed storage account.
> [!div class="nextstepaction"]
> [Add outputs](template-tutorial-add-outputs.md)
azure-signalr Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure SignalR description: Lists Azure Policy Regulatory Compliance controls available for Azure SignalR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 05/10/2022 Last updated : 06/16/2022
azure-video-indexer Video Indexer Use Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-use-apis.md
Title: Use the Azure Video Indexer API description: This article describes how to get started with Azure Video Indexer API. Previously updated : 06/01/2022 Last updated : 06/14/2022
Debug.WriteLine(playerWidgetLink);
After you are done with this tutorial, delete resources that you are not planning to use.
+## Considerations
+
+* The JSON output produced by the API contains `Insights` and `SummarizedInsights` elements. We highly recommend using `Insights` and not using `SummarizedInsights` (which is present for backward compatibility).
+* We do not recommend that you use data directly from the artifacts folder for production purposes. Artifacts are intermediate outputs of the indexing process. They are essentially raw outputs of the various AI engines that analyze the videos; the artifacts schema may change over time.
+
+ It is recommended that you use the [Get Video Index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Index) API, as described in [Get insights and artifacts produced by the API](video-indexer-output-json-v2.md#get-insights-produced-by-the-api) and **not** [Get-Video-Artifact-Download-Url](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Artifact-Download-Url).
+ ## See also - [Azure Video Indexer overview](video-indexer-overview.md)
azure-vmware Concepts Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-identity.md
You're responsible for NSX-T software-defined networking (SDN) configuration, fo
You can access NSX-T Manager using the built-in local user "admin" assigned to **Enterprise admin** role that gives full privileges to a user to manage NSX-T. While Microsoft manages the lifecycle of NSX-T, certain operations aren't allowed by a user. Operations not allowed include editing the configuration of host and edge transport nodes or starting an upgrade. For new users, Azure VMware Solution deploys them with a specific set of permissions needed by that user. The purpose is to provide a clear separation of control between the Azure VMware Solution control plane configuration and Azure VMware Solution private cloud user.
-For new private cloud deployments (in US West and Australia East) starting **June 2022**, NSX-T access will be provided with a built-in local user `cloudadmin` with a specific set of permissions to use only NSX-T functionality for workloads. The new **cloudadmin** user role will be rolled out in other regions in phases.
+For new private cloud deployments starting **June 2022**, NSX-T access will be provided with a built-in local user `cloudadmin` assigned to the **cloudadmin** role with a specific set of permissions to use NSX-T functionality for workloads. The new **cloudadmin** role will be rolled out in phases in all regions, starting with West US and Australia East.
> [!NOTE] > Admin access to NSX-T will not be provided to users for private cloud deployments created after **June 2022**.
The following permissions are assigned to the **cloudadmin** user in Azure VMwar
| System | All other | | Read-only |
-You can view the permissions granted to the Azure VMware Solution CloudAdmin role using the following steps:
+You can view the permissions granted to the Azure VMware Solution cloudadmin role in NSX-T Manager for your Azure VMware Solution private cloud.
1. Log in to the NSX-T Manager.
-1. Navigate to **Systems** > **Users and Roles** and locate **User Role Assignment**.
-1. The **Roles** column for the CloudAdmin user provides information on the NSX role-based access control (RBAC) roles assigned.
-1. Select the the **Roles** tab to view specific permissions associated with each of the NSX RBAC roles.
-1. To view **Permissions**, expand the **CloudAdmin** role and select a category like, Networking or Security.
+1. Navigate to **Systems** and locate **Users and Roles**.
+1. Select and expand the **cloudadmin** role, found under **Roles**.
+1. Select a category, like Networking or Security, to view the specific permissions.
> [!NOTE]
-> The current Azure VMware Solution with **NSX-T admin user** will eventually switch from **admin** user to **cloudadmin** user. You'll receive a notification through Azure Service Health that includes the timeline of this change so you can change the NSX-T credentials you've used for the other integration.
+> **Private clouds created before June 2022** will switch from **admin** role to **cloudadmin** role. You'll receive a notification through Azure Service Health that includes the timeline of this change so you can change the NSX-T credentials you've used for other integration.
## Next steps
azure-vmware Configure Site To Site Vpn Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-site-to-site-vpn-gateway.md
A virtual hub is a virtual network that is created and used by Virtual WAN. It's
>You can also [create a gateway in an existing hub](../virtual-wan/virtual-wan-expressroute-portal.md#existinghub). ## Create a VPN gateway
azure-vmware Configure Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-vmware-hcx.md
After you complete these steps, you'll have a production-ready environment for c
In your data center, you can connect or pair the VMware HCX Cloud Manager in Azure VMware Solution with the VMware HCX Connector. > [!IMPORTANT]
-> Although the VMware Configuration Maximum tool describes site pairs maximum to be 25 between the on-premises HCX Connector and HCX Cloud Manager, licensing limits this to three for HCX Advanced and 10 for HCX Enterprise Edition.
+As per the VMware Configuration Maximums tool, the maximum number of site pairs is 25 in a single HCX manager system. This maximum includes both inbound and outbound site pairings.
1. Sign in to your on-premises vCenter Server, and under **Home**, select **HCX**.
azure-vmware Install Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/install-vmware-hcx.md
Last updated 03/29/2022
VMware HCX Advanced and its associated Cloud Manager are no longer pre-deployed in Azure VMware Solution. Instead, you'll install it through the Azure portal as an add-on. You'll still download the HCX Connector OVA and deploy the virtual appliance on your on-premises vCenter Server.
-Any edition of VMware HCX supports 25 site pairings (on-premises to cloud or cloud to cloud). The default is HCX Advanced, but you can open a [support request](https://rc.portal.azure.com/#create/Microsoft.Support) to have HCX Enterprise Edition enabled. Once the service is generally available, you'll have 30 days to decide on your next steps. You can turn off or opt out of the HCX Enterprise Edition service but keep HCX Advanced as it's part of the node cost.
+Any edition of VMware HCX supports 25 site pairings (on-premises to cloud or cloud to cloud) in a single HCX manager system. The default is HCX Advanced, but you can open a [support request](https://rc.portal.azure.com/#create/Microsoft.Support) to have HCX Enterprise Edition enabled. Once the service is generally available, you'll have 30 days to decide on your next steps. You can turn off or opt out of the HCX Enterprise Edition service but keep HCX Advanced as it's part of the node cost.
Downgrading from HCX Enterprise Edition to HCX Advanced is possible without redeploying. First, ensure you've reverted to an HCX Advanced configuration state and aren't using the Enterprise features. If you plan to downgrade, ensure that there are no scheduled migrations, that features like RAV and [HCX Mobility Optimized Networking (MON)](https://docs.vmware.com/en/VMware-HCX/4.1/hcx-user-guide/GUID-0E254D74-60A9-479C-825D-F373C41F40BC.html) aren't in use, and that site pairings are three or fewer.
backup Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Backup description: Lists Azure Policy Regulatory Compliance controls available for Azure Backup. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 05/10/2022 Last updated : 06/16/2022
batch Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Batch description: Lists Azure Policy Regulatory Compliance controls available for Azure Batch. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 05/10/2022 Last updated : 06/16/2022
cognitive-services Configure Qna Maker Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/How-To/configure-qna-maker-resources.md
The high-level idea as represented above is as follows:
1. Set up two parallel [QnA Maker services](set-up-qnamaker-service-azure.md) in [Azure paired regions](../../../availability-zones/cross-region-replication-azure.md).
-1. [Backup](../../../app-service/manage-backup.md) your primary QnA Maker App service and [restore](../../../app-service/web-sites-restore.md) it in the secondary setup. This will ensure that both setups work with the same hostname and keys.
+1. [Backup](../../../app-service/manage-backup.md) your primary QnA Maker App service and [restore](../../../app-service/manage-backup.md) it in the secondary setup. This will ensure that both setups work with the same hostname and keys.
1. Keep the primary and secondary Azure search indexes in sync. Use the GitHub sample [here](https://github.com/pchoudhari/QnAMakerBackupRestore) to see how to backup-restore Azure indexes.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/overview.md
Previously updated : 05/13/2022 Last updated : 06/17/2022
Creating a CLU project typically involves several different steps.
Follow these steps to get the most out of your model:
-1. **Build schema**: Know your data and define the actions and relevant information that needs to be recognized from user's input utterances. In this step you create the [intents](glossary.md#intent) that you want to assign to user's utterances, and the relevant [entities](glossary.md#entity) you want extracted.
+1. **Define your schema**: Know your data and define the actions and relevant information that need to be recognized from a user's input utterances. In this step you create the [intents](glossary.md#intent) that you want to assign to a user's utterances, and the relevant [entities](glossary.md#entity) you want extracted.
-2. **Label data**: The quality of data labeling is a key factor in determining model performance.
+2. **Label your data**: The quality of data labeling is a key factor in determining model performance.
-3. **Train model**: Your model starts learning from your labeled data.
+3. **Train the model**: Your model starts learning from your labeled data.
-4. **Viewmodel evaluation details**: View the evaluation details for your model to determine how well it performs when introduced to new data.
+4. **View the model's performance**: View the evaluation details for your model to determine how well it performs when introduced to new data.
-5. **Deploy model**: Deploying a model makes it available for use via the [Runtime API](https://aka.ms/clu-runtime-api).
+5. **Improve the model**: After reviewing the model's performance, you can then learn how to improve it.
-6. **Predict intents and entities**: Use your custom model to predict intents and entities from user's utterances.
+6. **Deploy the model**: Deploying a model makes it available for use via the [Runtime API](https://aka.ms/clu-apis).
+
+7. **Predict intents and entities**: Use your custom model to predict intents and entities from a user's utterances.
## Reference documentation and code samples
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/overview.md
Previously updated : 05/06/2022 Last updated : 06/17/2022
Using custom NER typically involves several different steps.
2. **Label consistently**: The same entity should have the same label across all the files. 3. **Label completely**: Label all the instances of the entity in all your files.
-3. **Train model**: Your model starts learning from your labeled data.
+3. **Train the model**: Your model starts learning from your labeled data.
-4. **View the model evaluation details**: After training is completed, view the model's evaluation details and its performance.
+4. **View the model's performance**: After training is completed, view the model's evaluation details and its performance.
-5. **Improve the model**: After reviewing model evaluation details, you can go ahead and learn how you can improve the model.
+5. **Improve the model**: After reviewing the model's performance, you can then learn how to improve it.
-6. **Deploy model**: Deploying a model makes it available for use via the [Analyze API](https://aka.ms/ct-runtime-swagger).
+6. **Deploy the model**: Deploying a model makes it available for use via the [Analyze API](https://aka.ms/ct-runtime-swagger).
8. **Extract entities**: Use your custom models for entity extraction tasks.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/overview.md
Previously updated : 05/24/2022 Last updated : 06/17/2022
Creating a custom text classification project typically involves several differe
Follow these steps to get the most out of your model:
-1. **Define schema**: Know your data and identify the [classes](glossary.md#class) you want differentiate between, avoid ambiguity.
+1. **Define your schema**: Know your data and identify the [classes](glossary.md#class) you want to differentiate between, and avoid ambiguity.
-2. **Label data**: The quality of data labeling is a key factor in determining model performance. Documents that belong to the same class should always have the same class, if you have a document that can fall into two classes use **Multi label classification** projects. Avoid class ambiguity, make sure that your classes are clearly separable from each other, especially with single label classification projects.
+2. **Label your data**: The quality of data labeling is a key factor in determining model performance. Documents that belong to the same class should always have the same label. If you have a document that can fall into two classes, use **Multi label classification** projects. Avoid class ambiguity; make sure that your classes are clearly separable from each other, especially with single label classification projects.
-3. **Train model**: Your model starts learning from your labeled data.
+3. **Train the model**: Your model starts learning from your labeled data.
-4. **View model evaluation details**: View the evaluation details for your model to determine how well it performs when introduced to new data.
+4. **View the model's performance**: View the evaluation details for your model to determine how well it performs when introduced to new data.
-5. **Improve model**: Work on improving your model performance by examining the incorrect model predictions and examining data distribution.
+5. **Improve the model**: Work on improving your model performance by examining the incorrect model predictions and examining data distribution.
-6. **Deploy model**: Deploying a model makes it available for use via the [Analyze API](https://aka.ms/ct-runtime-swagger).
+6. **Deploy the model**: Deploying a model makes it available for use via the [Analyze API](https://aka.ms/ct-runtime-swagger).
7. **Classify text**: Use your custom model for custom text classification tasks.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/overview.md
Previously updated : 04/14/2022 Last updated : 06/17/2022
Creating an orchestration workflow project typically involves several different
Follow these steps to get the most out of your model:
-1. **Build schema**: Know your data and define the actions and relevant information that needs to be recognized from user's input utterances. Create the [intents](glossary.md#intent) that you want to assign to user's utterances and the projects you want to connect to your orchestration project.
+1. **Define your schema**: Know your data and define the actions and relevant information that need to be recognized from a user's input utterances. Create the [intents](glossary.md#intent) that you want to assign to a user's utterances and the projects you want to connect to your orchestration project.
-2. **Tag data**: The quality of data tagging is a key factor in determining model performance.
-<!-- TODO: TO INCLUDE MORE GUIDANCE -->
+2. **Label your data**: The quality of data tagging is a key factor in determining model performance.
-3. **Train model**: Your model starts learning from your tagged data.
+3. **Train a model**: Your model starts learning from your tagged data.
-4. **View model evaluation details**: View the evaluation details for your model to determine how well it performs when introduced to new data.
+4. **View the model's performance**: View the evaluation details for your model to determine how well it performs when introduced to new data.
-5. **Deploy model**: Deploying a model makes it available for use via the [prediction API](https://aka.ms/clu-runtime-api).
+5. **Improve the model**: After reviewing the model's performance, you can then learn how to improve it.
-6. **Predict intents**: Use your custom model to predict intents from user's utterances.
+6. **Deploy the model**: Deploying a model makes it available for use via the [prediction API](https://aka.ms/clu-runtime-api).
+
+7. **Predict intents**: Use your custom model to predict intents from a user's utterances.
## Reference documentation and code samples
cognitive-services Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cognitive Services description: Lists Azure Policy Regulatory Compliance controls available for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 05/10/2022 Last updated : 06/16/2022
communication-services Get Started Raw Media Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-raw-media-access.md
Title: Quickstart - Add RAW media access to your app (Android) description: In this quickstart, you'll learn how to add raw media access calling capabilities to your app using Azure Communication Services.-+ - Previously updated : 11/18/2021+ Last updated : 06/09/2022
-# Raw media access
+# Raw Video
[!INCLUDE [Public Preview](../../includes/public-preview-include-document.md)] In this quickstart, you'll learn how to implement raw media access using the Azure Communication Services Calling SDK for Android.
-## Outbound virtual video device
- The Azure Communication Services Calling SDK offers APIs allowing apps to generate their own video frames to send to remote participants. This quick start builds upon [QuickStart: Add 1:1 video calling to your app](./get-started-with-video-calling.md?pivots=platform-android) for Android.
-## Overview
-
-Once an outbound virtual video device is created, use DeviceManager to make a new virtual video device that behaves just like any other webcam connected to your computer or mobile phone.
+## Virtual Video Stream Overview
Since the app will be generating the video frames, the app must inform the Azure Communication Services Calling SDK about the video formats the app is capable of generating. This is required to allow the Azure Communication Services Calling SDK to pick the best video format configuration given the network conditions at any given time. The app must register a delegate to be notified when it should start or stop producing video frames. The delegate event will inform the app which video format is more appropriate for the current network conditions.
-The following is an overview of the steps required to create an outbound virtual video device.
-
-1. Create a `VirtualDeviceIdentification` with basic identification information for the new outbound virtual video device.
-
- ```java
- VirtualDeviceIdentification deviceId = new VirtualDeviceIdentification();
- deviceId.setId("QuickStartVirtualVideoDevice");
- deviceId.setName("My First Virtual Video Device");
- ```
+The following is an overview of the steps required to create a virtual video stream.
-2. Create an array of `VideoFormat` with the video formats supported by the app. It is fine to have only one video format supported, but at least one of the provided video formats must be of the `MediaFrameKind::VideoSoftware` type. When multiple formats are provided, the order of the format in the list does not influence or prioritize which one will be used. The selected format is based on external factors like network bandwidth.
+1. Create an array of `VideoFormat` with the video formats supported by the app. It is fine to have only one video format supported, but at least one of the provided video formats must be of the `VideoFrameKind::VideoSoftware` type. When multiple formats are provided, the order of the format in the list does not influence or prioritize which one will be used. The selected format is based on external factors like network bandwidth.
```java ArrayList<VideoFormat> videoFormats = new ArrayList<VideoFormat>();
The following is an overview of the steps required to create an outbound virtual
format.setWidth(1280); format.setHeight(720); format.setPixelFormat(PixelFormat.RGBA);
- format.setMediaFrameKind(MediaFrameKind.VIDEO_SOFTWARE);
+ format.setVideoFrameKind(VideoFrameKind.VIDEO_SOFTWARE);
format.setFramesPerSecond(30); format.setStride1(1280 * 4); // It is times 4 because RGBA is a 32-bit format. videoFormats.add(format); ```
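The `setStride1(1280 * 4)` call above encodes a simple invariant: with 32-bit RGBA, one row of pixels occupies width × 4 bytes, and the buffer backing a full frame must hold stride × height bytes. A minimal, SDK-free sketch of that arithmetic (the `FrameDimensions` class is hypothetical, for illustration only):

```java
// Hypothetical FrameDimensions helper (not part of the Calling SDK); it only
// illustrates the arithmetic behind setStride1(1280 * 4) above.
public class FrameDimensions {
    static final int RGBA_BYTES_PER_PIXEL = 4; // RGBA is a 32-bit format

    // Bytes occupied by one row of pixels (the "stride").
    static int stride(int width) {
        return width * RGBA_BYTES_PER_PIXEL;
    }

    // Total bytes needed to back one frame: one stride per row.
    static int bufferSize(int width, int height) {
        return stride(width) * height;
    }
}
```

This stride × height quantity is also the size the frame loop later allocates with `ByteBuffer.allocateDirect(videoFormat.getStride1() * videoFormat.getHeight())`.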
-3. Create `OutboundVirtualVideoDeviceOptions` and set `DeviceIdentification` and `VideoFormats` with the previously created objects.
+2. Create `RawOutgoingVideoStreamOptions` and set `VideoFormats` with the previously created object.
+
+ ```java
+ RawOutgoingVideoStreamOptions rawOutgoingVideoStreamOptions = new RawOutgoingVideoStreamOptions();
+ rawOutgoingVideoStreamOptions.setVideoFormats(videoFormats);
+ ```
+
+3. Subscribe to the `RawOutgoingVideoStreamOptions::addOnOutgoingVideoStreamStateChangedListener` delegate. This delegate reports the state of the current stream; it's important that you do not send frames unless the state is equal to `OutgoingVideoStreamState.STARTED`.
```java
- OutboundVirtualVideoDeviceOptions m_options = new OutboundVirtualVideoDeviceOptions();
+ private OutgoingVideoStreamState outgoingVideoStreamState;
- // ...
+ rawOutgoingVideoStreamOptions.addOnOutgoingVideoStreamStateChangedListener(event -> {
- m_options.setDeviceIdentification(deviceId);
- m_options.setVideoFormats(videoFormats);
+ outgoingVideoStreamState = event.getOutgoingVideoStreamState();
+ });
```
-4. Make sure the `OutboundVirtualVideoDeviceOptions::OnFlowChanged` delegate is defined. This delegate will inform its listener about events requiring the app to start or stop producing video frames. In this quick start, `m_mediaFrameSender` is used as trigger to let the app know when it's time to start generating frames. Feel free to use any mechanism in your app as a trigger.
+4. Make sure the `RawOutgoingVideoStreamOptions::addOnVideoFrameSenderChangedListener` delegate is defined. This delegate will inform its listener about events requiring the app to start or stop producing video frames. In this quick start, `mediaFrameSender` is used as a trigger to let the app know when it's time to start generating frames. Feel free to use any mechanism in your app as a trigger.
```java
- private MediaFrameSender m_mediaFrameSender;
+ private VideoFrameSender mediaFrameSender;
- // ...
+ rawOutgoingVideoStreamOptions.addOnVideoFrameSenderChangedListener(event -> {
- m_options.addOnFlowChangedListener(virtualDeviceFlowControlArgs -> {
- if (virtualDeviceFlowControlArgs.getMediaFrameSender().getRunningState() == VirtualDeviceRunningState.STARTED) {
- // Tell the app's frame generator to start producing frames.
- m_mediaFrameSender = virtualDeviceFlowControlArgs.getMediaFrameSender();
- } else {
- // Tell the app's frame generator to stop producing frames.
- m_mediaFrameSender = null;
- }
+ mediaFrameSender = event.getVideoFrameSender();
}); ```
-5. Use `Device
+5. Create an instance of `VirtualRawOutgoingVideoStream` using the `RawOutgoingVideoStreamOptions` we created previously.
```java
- private OutboundVirtualVideoDevice m_outboundVirtualVideoDevice;
-
- // ...
+ private VirtualRawOutgoingVideoStream virtualRawOutgoingVideoStream;
- m_outboundVirtualVideoDevice = m_deviceManager.createOutboundVirtualVideoDevice(m_options).get();
+ virtualRawOutgoingVideoStream = new VirtualRawOutgoingVideoStream(rawOutgoingVideoStreamOptions);
```
-6. Tell device manager to use the recently created virtual camera on calls.
+6. Once `outgoingVideoStreamState` is equal to `OutgoingVideoStreamState.STARTED`, create an instance of the `FrameGenerator` class. This class starts a non-UI thread that sends frames. Call `FrameGenerator.SetVideoFrameSender` each time you get an updated `VideoFrameSender` from the previous delegate. Cast the `VideoFrameSender` to the appropriate type defined by the `VideoFrameKind` property of `VideoFormat`. For example, cast it to `SoftwareBasedVideoFrameSender` and then call the `send` method according to the number of planes defined by the `VideoFormat`.
+After that, create the `ByteBuffer` backing the video frame if needed. Then, update the content of the video frame. Finally, send the video frame to other participants with the `sendFrame` API.
```java
- private LocalVideoStream m_localVideoStream;
+ public class FrameGenerator implements VideoFrameSenderChangedListener {
+
+ private VideoFrameSender videoFrameSender;
+ private Thread frameIteratorThread;
+ private final Random random;
+ private volatile boolean stopFrameIterator = false;
+
+ public FrameGenerator() {
+
+ random = new Random();
+ }
+
+ public void FrameIterator() {
+
+ ByteBuffer plane = null;
+ while (!stopFrameIterator && videoFrameSender != null) {
+
+ plane = GenerateFrame(plane);
+ }
+ }
+
+ private ByteBuffer GenerateFrame(ByteBuffer plane) {
+
+ try {
+
+ VideoFormat videoFormat = videoFrameSender.getVideoFormat();
+ if (plane == null || videoFormat.getStride1() * videoFormat.getHeight() != plane.capacity()) {
+
+ plane = ByteBuffer.allocateDirect(videoFormat.getStride1() * videoFormat.getHeight());
+ plane.order(ByteOrder.nativeOrder());
+ }
+
+ int bandsCount = random.nextInt(15) + 1;
+ int bandBegin = 0;
+ int bandThickness = videoFormat.getHeight() * videoFormat.getStride1() / bandsCount;
+
+            for (int i = 0; i < bandsCount; ++i) {
+
+                byte greyValue = (byte) random.nextInt(254);
+                // Direct ByteBuffers have no backing array, so fill with absolute
+                // put calls rather than java.util.Arrays.fill(plane.array(), ...).
+                for (int j = bandBegin; j < bandBegin + bandThickness; ++j) {
+                    plane.put(j, greyValue);
+                }
+                bandBegin += bandThickness;
+            }
+
+ if (videoFrameSender instanceof SoftwareBasedVideoFrameSender) {
+ SoftwareBasedVideoFrameSender sender = (SoftwareBasedVideoFrameSender) videoFrameSender;
+
+ long timeStamp = sender.getTimestampInTicks();
+ sender.sendFrame(plane, timeStamp).get();
+ } else {
+
+ HardwareBasedVideoFrameSender sender = (HardwareBasedVideoFrameSender) videoFrameSender;
+
+ int[] textureIds = new int[1];
+ int targetId = GLES20.GL_TEXTURE_2D;
+
+ GLES20.glEnable(targetId);
+ GLES20.glGenTextures(1, textureIds, 0);
+ GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
+ GLES20.glBindTexture(targetId, textureIds[0]);
+ GLES20.glTexImage2D(targetId,
+ 0,
+ GLES20.GL_RGB,
+ videoFormat.getWidth(),
+ videoFormat.getHeight(),
+ 0,
+ GLES20.GL_RGB,
+ GLES20.GL_UNSIGNED_BYTE,
+ plane);
+
+ long timeStamp = sender.getTimestampInTicks();
+ sender.sendFrame(targetId, textureIds[0], timeStamp).get();
+ }
+
+ Thread.sleep((long) (1000.0f / videoFormat.getFramesPerSecond()));
+ } catch (InterruptedException ex) {
+
+ Log.d("FrameGenerator", String.format("FrameGenerator.GenerateFrame, %s", ex.getMessage()));
+ } catch (ExecutionException ex2) {
+
+ Log.d("FrameGenerator", String.format("FrameGenerator.GenerateFrame, %s", ex2.getMessage()));
+ }
+
+ return plane;
+ }
+
+    // Called when the delegate in step 4 reports a new sender; restarts the frame loop.
+    public void SetVideoFrameSender(VideoFrameSender videoFrameSender) {
+
+        StopFrameIterator();
+        this.videoFrameSender = videoFrameSender;
+        StartFrameIterator();
+    }
+
+    private void StartFrameIterator() {

-    // ...
+        frameIteratorThread = new Thread(this::FrameIterator);
+        frameIteratorThread.start();
+    }
+
+ public void StopFrameIterator() {
+
+ try {
+
+ if (frameIteratorThread != null) {
+
+ stopFrameIterator = true;
+ frameIteratorThread.join();
+ frameIteratorThread = null;
+ stopFrameIterator = false;
+ }
+ } catch (InterruptedException ex) {
- for (VideoDeviceInfo videoDeviceInfo : m_deviceManager.getCameras())
- {
- String deviceId = videoDeviceInfo.getId();
- if (deviceId.equalsIgnoreCase("QuickStartVirtualVideoDevice")) // Same id used in step 1.
- {
- m_localVideoStream = LocalVideoStream(videoDeviceInfo, getApplicationContext());
+ Log.d("FrameGenerator", String.format("FrameGenerator.StopFrameIterator, %s", ex.getMessage()));
} } ```
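The band arithmetic inside `GenerateFrame` can be exercised without the SDK. The sketch below (a hypothetical `BandPattern` helper, not part of the Calling SDK) fills a plain `byte[]` the same way: it divides the `height * stride` bytes of a frame into one equal horizontal band per grey value.

```java
// Hypothetical BandPattern helper (not part of the Calling SDK). It reproduces
// the grey-band arithmetic from GenerateFrame on a plain byte[], so the band
// layout can be verified without a call or a direct buffer.
public class BandPattern {
    // Divides a height * stride frame into one equal band per grey value.
    static byte[] fill(int height, int stride, byte[] greyValues) {
        byte[] plane = new byte[height * stride];
        int bandThickness = height * stride / greyValues.length;
        int bandBegin = 0;
        for (int i = 0; i < greyValues.length; ++i) {
            java.util.Arrays.fill(plane, bandBegin, bandBegin + bandThickness, greyValues[i]);
            bandBegin += bandThickness;
        }
        return plane;
    }
}
```

Because integer division can leave a remainder, the real sample's last band may fall short of the buffer's end when `bandsCount` doesn't divide the frame size evenly.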
-7. In a non-UI thread or loop in the app, cast the `MediaFrameSender` to the appropriate type defined by the `MediaFrameKind` property of `VideoFormat`. For example, cast it to `SoftwareBasedVideoFrame` and then call the `send` method according to the number of planes defined by the MediaFormat.
-After that, create the ByteBuffer backing the video frame if needed. Then, update the content of the video frame. Finally, send the video frame to other participants with the `sendFrame` API.
+## Screen Share Video Stream Overview
+
+Repeat steps 1 to 4 from the previous `VirtualRawOutgoingVideoStream` tutorial.
+
+Since the Android system generates the frames, you must implement your own foreground service to capture the frames and send them by using the Azure Communication Services Calling API.
+
+The following is an overview of the steps required to create a screen share video stream.
+
+1. Add this permission to the `AndroidManifest.xml` file inside your Android project.
+
+ ```xml
+ <uses-permission android:name="android.permission.FOREGROUND_SERVICE" />
+ ```
+
+2. Create an instance of `ScreenShareRawOutgoingVideoStream` using the `RawOutgoingVideoStreamOptions` we created previously.
```java
- java.nio.ByteBuffer plane1 = null;
- Random rand = new Random();
- byte greyValue = 0;
-
- // ...
- java.nio.ByteBuffer plane1 = null;
- Random rand = new Random();
-
- while (m_outboundVirtualVideoDevice != null) {
- while (m_mediaFrameSender != null) {
- if (m_mediaFrameSender.getMediaFrameKind() == MediaFrameKind.VIDEO_SOFTWARE) {
- SoftwareBasedVideoFrame sender = (SoftwareBasedVideoFrame) m_mediaFrameSender;
- VideoFormat videoFormat = sender.getVideoFormat();
-
- // Gets the timestamp for when the video frame has been created.
- // This allows better synchronization with audio.
- int timeStamp = sender.getTimestamp();
-
- // Adjusts frame dimensions to the video format that network conditions can manage.
- if (plane1 == null || videoFormat.getStride1() * videoFormat.getHeight() != plane1.capacity()) {
- plane1 = ByteBuffer.allocateDirect(videoFormat.getStride1() * videoFormat.getHeight());
- plane1.order(ByteOrder.nativeOrder());
- }
+ private ScreenShareRawOutgoingVideoStream screenShareRawOutgoingVideoStream;
- // Generates random gray scaled bands as video frame.
- int bandsCount = rand.nextInt(15) + 1;
- int bandBegin = 0;
- int bandThickness = videoFormat.getHeight() * videoFormat.getStride1() / bandsCount;
+ screenShareRawOutgoingVideoStream = new ScreenShareRawOutgoingVideoStream(rawOutgoingVideoStreamOptions);
+ ```
- for (int i = 0; i < bandsCount; ++i) {
- byte greyValue = (byte)rand.nextInt(254);
- java.util.Arrays.fill(plane1.array(), bandBegin, bandBegin + bandThickness, greyValue);
- bandBegin += bandThickness;
- }
+3. Request the permissions needed for screen capture on Android. After this method is called, Android automatically calls `onActivityResult` with the request code you sent and the result of the operation. Expect `Activity.RESULT_OK` if the user granted the permission; if so, attach the `screenShareRawOutgoingVideoStream` to the call and start your own foreground service to capture the frames.
+
+ ```java
+ public void GetScreenSharePermissions() {
- // Sends video frame to the other participants in the call.
- FrameConfirmation fr = sender.sendFrame(plane1, timeStamp).get();
+ try {
- // Waits before generating the next video frame.
- // Video format defines how many frames per second app must generate.
- Thread.sleep((long) (1000.0f / videoFormat.getFramesPerSecond()));
- }
+ MediaProjectionManager mediaProjectionManager = (MediaProjectionManager) getSystemService(Context.MEDIA_PROJECTION_SERVICE);
+ startActivityForResult(mediaProjectionManager.createScreenCaptureIntent(), Constants.SCREEN_SHARE_REQUEST_INTENT_REQ_CODE);
+ } catch (Exception e) {
+
+ String error = "Could not start screen share due to failure to startActivityForResult for mediaProjectionManager screenCaptureIntent";
+ Log.d("FrameGenerator", error);
}
+ }
+
+ @Override
+ protected void onActivityResult(int requestCode, int resultCode, Intent data) {
+
+ super.onActivityResult(requestCode, resultCode, data);
+
+ if (requestCode == Constants.SCREEN_SHARE_REQUEST_INTENT_REQ_CODE) {
+
+ if (resultCode == Activity.RESULT_OK && data != null) {
- // Virtual camera hasn't been created yet.
- // Let's wait a little bit before checking again.
- // This is for demo only purposes.
- // Feel free to use a better synchronization mechanism.
- Thread.sleep(100);
+ // Attach the screenShareRawOutgoingVideoStream to the call
+ // Start your foreground service
+ } else {
+
+ String error = "user cancelled, did not give permission to capture screen";
+ }
+ }
} ```+
+4. Once you receive a frame in your foreground service, send it by using the provided `VideoFrameSender`.
+
+ ````java
+ public void onImageAvailable(ImageReader reader) {
+
+ Image image = reader.acquireLatestImage();
+ if (image != null) {
+
+ final Image.Plane[] planes = image.getPlanes();
+ if (planes.length > 0) {
+
+ Image.Plane plane = planes[0];
+ final ByteBuffer buffer = plane.getBuffer();
+ try {
+
+ SoftwareBasedVideoFrameSender sender = (SoftwareBasedVideoFrameSender) videoFrameSender;
+ sender.sendFrame(buffer, sender.getTimestamp()).get();
+ } catch (Exception ex) {
+
+ Log.d("MainActivity", "MainActivity.onImageAvailable trace, failed to send Frame");
+ }
+ }
+
+ image.close();
+ }
+ }
+ ````
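One caveat the snippet above glosses over: `Image.Plane` buffers from `ImageReader` often have a row stride larger than width × bytes-per-pixel, so each row carries padding bytes. If the negotiated `VideoFormat` expects tightly packed rows, copy the useful bytes of each row before sending. A minimal, SDK-free sketch (the `RowPacker` helper is hypothetical, for illustration only):

```java
// Hypothetical RowPacker helper (not part of the Calling SDK). ImageReader
// planes may pad each row to a stride wider than width * bytesPerPixel; this
// copies only the useful bytes of each row into a tightly packed buffer.
public class RowPacker {
    static byte[] tightlyPack(byte[] src, int srcStride, int rowBytes, int height) {
        byte[] dst = new byte[rowBytes * height];
        for (int row = 0; row < height; ++row) {
            System.arraycopy(src, row * srcStride, dst, row * rowBytes, rowBytes);
        }
        return dst;
    }
}
```

In the real capture path you would read the source stride from `Image.Plane.getRowStride()` and the target stride from the `VideoFormat` you registered.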
container-apps Compare Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/compare-options.md
Previously updated : 11/03/2021 Last updated : 06/10/2022
Azure Container Apps doesn't provide direct access to the underlying Kubernetes
You can get started building your first container app [using the quickstarts](get-started.md). ### Azure App Service
-Azure App Service provides fully managed hosting for web applications including websites and web APIs. These web applications may be deployed using code or containers. Azure App Service is optimized for web applications. Azure App Service is integrated with other Azure services including Azure Container Apps or Azure Functions. When building web apps, Azure App Service is an ideal option.
+[Azure App Service](/azure/app-service) provides fully managed hosting for web applications including websites and web APIs. These web applications may be deployed using code or containers. Azure App Service is optimized for web applications. Azure App Service is integrated with other Azure services including Azure Container Apps or Azure Functions. When building web apps, Azure App Service is an ideal option.
### Azure Container Instances
-Azure Container Instances (ACI) provides a single pod of Hyper-V isolated containers on demand. It can be thought of as a lower-level "building block" option compared to Container Apps. Concepts like scale, load balancing, and certificates are not provided with ACI containers. For example, to scale to five container instances, you create five distinct container instances. Azure Container Apps provide many application-specific concepts on top of containers, including certificates, revisions, scale, and environments. Users often interact with Azure Container Instances through other services. For example, Azure Kubernetes Service can layer orchestration and scale on top of ACI through [virtual nodes](../aks/virtual-nodes.md). If you need a less "opinionated" building block that doesn't align with the scenarios Azure Container Apps is optimizing for, Azure Container Instances is an ideal option.
+[Azure Container Instances (ACI)](/azure/container-instances) provides a single pod of Hyper-V isolated containers on demand. It can be thought of as a lower-level "building block" option compared to Container Apps. Concepts like scale, load balancing, and certificates are not provided with ACI containers. For example, to scale to five container instances, you create five distinct container instances. Azure Container Apps provide many application-specific concepts on top of containers, including certificates, revisions, scale, and environments. Users often interact with Azure Container Instances through other services. For example, Azure Kubernetes Service can layer orchestration and scale on top of ACI through [virtual nodes](../aks/virtual-nodes.md). If you need a less "opinionated" building block that doesn't align with the scenarios Azure Container Apps is optimizing for, Azure Container Instances is an ideal option.
### Azure Kubernetes Service
-Azure Kubernetes Service provides a fully managed Kubernetes option in Azure. It supports direct access to the Kubernetes API and runs any Kubernetes workload. The full cluster resides in your subscription, with the cluster configurations and operations within your control and responsibility. Teams looking for a fully managed version of Kubernetes in Azure, Azure Kubernetes Service is an ideal option.
+[Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) provides a fully managed Kubernetes option in Azure. It supports direct access to the Kubernetes API and runs any Kubernetes workload. The full cluster resides in your subscription, with the cluster configurations and operations within your control and responsibility. For teams looking for a fully managed version of Kubernetes in Azure, Azure Kubernetes Service is an ideal option.
### Azure Functions
-Azure Functions is a serverless Functions-as-a-Service (FaaS) solution. It's optimized for running event-driven applications using the functions programming model. It shares many characteristics with Azure Container Apps around scale and integration with events, but optimized for ephemeral functions deployed as either code or containers. The Azure Functions programming model provides productivity benefits for teams looking to trigger the execution of your functions on events and bind to other data sources. When building FaaS-style functions, Azure Functions is the ideal option. The Azure Functions programming model is available as a base container image, making it portable to other container based compute platforms allowing teams to reuse code as environment requirements change.
+[Azure Functions](../azure-functions/functions-overview.md) is a serverless Functions-as-a-Service (FaaS) solution. It's optimized for running event-driven applications using the functions programming model. It shares many characteristics with Azure Container Apps around scale and integration with events, but it's optimized for ephemeral functions deployed as either code or containers. The Azure Functions programming model provides productivity benefits for teams looking to trigger the execution of their functions on events and bind to other data sources. When building FaaS-style functions, Azure Functions is the ideal option. The Azure Functions programming model is available as a base container image, making it portable to other container-based compute platforms and allowing teams to reuse code as environment requirements change.
### Azure Spring Cloud
-Azure Spring Cloud makes it easy to deploy Spring Boot microservice applications to Azure without any code changes. The service manages the infrastructure of Spring Cloud applications so developers can focus on their code. Azure Spring Cloud provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more. If your team or organization is predominantly Spring, Azure Spring Cloud is an ideal option.
+[Azure Spring Cloud](../spring-cloud/overview.md) makes it easy to deploy Spring Boot microservice applications to Azure without any code changes. The service manages the infrastructure of Spring Cloud applications so developers can focus on their code. Azure Spring Cloud provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more. If your team or organization is predominantly Spring, Azure Spring Cloud is an ideal option.
### Azure Red Hat OpenShift
-Azure Red Hat OpenShift is jointly engineered, operated, and supported by Red Hat and Microsoft to provide an integrated product and support experience for running Kubernetes-powered OpenShift. With Azure Red Hat OpenShift, teams can choose their own registry, networking, storage, and CI/CD solutions, or use the built-in solutions for automated source code management, container and application builds, deployments, scaling, health management, and more from OpenShift. If your team or organization is using OpenShift, Azure Red Hat OpenShift is an ideal option.
+[Azure Red Hat OpenShift](../openshift/intro-openshift.md) is jointly engineered, operated, and supported by Red Hat and Microsoft to provide an integrated product and support experience for running Kubernetes-powered OpenShift. With Azure Red Hat OpenShift, teams can choose their own registry, networking, storage, and CI/CD solutions, or use the built-in solutions for automated source code management, container and application builds, deployments, scaling, health management, and more from OpenShift. If your team or organization is using OpenShift, Azure Red Hat OpenShift is an ideal option.
## Next steps
container-instances Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/availability-zones.md
Title: Deploy a zonal container group in Azure Container Instances (ACI) description: Learn how to deploy a container group in an availability zone.- Previously updated : 10/13/2021+++++ Last updated : 06/17/2022
container-instances Container Instances Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-application-gateway.md
Title: Static IP address for container group description: Create a container group in a virtual network and use an Azure application gateway to expose a static frontend IP address to a containerized web app- Previously updated : 03/16/2020+++++ Last updated : 06/17/2022 # Expose a static IP address for a container group
container-instances Container Instances Container Group Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-container-group-ssl.md
Title: Enable TLS with sidecar container description: Create an SSL or TLS endpoint for a container group running in Azure Container Instances by running Nginx in a sidecar container- Previously updated : 07/02/2020+++++ Last updated : 06/17/2022 # Enable a TLS endpoint in a sidecar container
container-instances Container Instances Container Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-container-groups.md
Title: Introduction to container groups description: Learn about container groups in Azure Container Instances, a collection of instances that share a lifecycle and resources such as CPUs, storage, and network- Previously updated : 11/01/2019+++++ Last updated : 06/17/2022
container-instances Container Instances Dedicated Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-dedicated-hosts.md
Title: Deploy on dedicated host description: Use a dedicated host to achieve true host-level isolation for your Azure Container Instances workloads-+
container-instances Container Instances Egress Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-egress-ip-address.md
Title: Configure static outbound IP description: Configure Azure firewall and user-defined routes for Azure Container Instances workloads that use the firewall's public IP address for ingress and egress-+++++ Last updated 05/03/2022
container-instances Container Instances Encrypt Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-encrypt-data.md
Title: Encrypt deployment data description: Learn about encryption of data persisted for your container instance resources and how to encrypt the data with a customer-managed key- Previously updated : 01/17/2020--+++++ Last updated : 06/17/2022 # Encrypt deployment data
container-instances Container Instances Environment Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-environment-variables.md
Title: Set environment variables in container instance description: Learn how to set environment variables in the containers you run in Azure Container Instances- Previously updated : 04/17/2019 +++++ Last updated : 06/17/2022 ms.devlang: azurecli
az container logs --resource-group myResourceGroup --name mycontainer1
az container logs --resource-group myResourceGroup --name mycontainer2 ```
-The output of the containers show how you've modified the second container's script behavior by setting environment variables.
+The outputs of the containers show how you've modified the second container's script behavior by setting environment variables.
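The log commands above presuppose that the second container was created with overriding environment variables. A hedged sketch of that setup, assuming the `aci-wordcount` sample image and the `NumWords`/`MinLength` variables from the ACI quickstart samples (not shown in this excerpt); `echo` keeps the commands inspectable offline:

```shell
#!/bin/sh
# Hedged sketch: create a second container whose script behavior is changed
# via --environment-variables, then read its logs. Image and variable names
# are assumptions based on the ACI samples; drop `echo` to run for real.
echo az container create \
  --resource-group myResourceGroup \
  --name mycontainer2 \
  --image mcr.microsoft.com/azuredocs/aci-wordcount:latest \
  --environment-variables NumWords=5 MinLength=8
echo az container logs --resource-group myResourceGroup --name mycontainer2
```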
**mycontainer1** ```output
container-instances Container Instances Exec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-exec.md
Title: Execute commands in running container instance
-description: Learn how execute a command in a container that's currently running in Azure Container Instances
- Previously updated : 03/30/2018
+description: Learn how to execute a command in a container that's currently running in Azure Container Instances
+++++ Last updated : 06/17/2022 # Execute a command in a running Azure container instance
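The article retitled above covers running a command inside an already-running instance. A minimal sketch with `az container exec`, using illustrative resource names; `echo` is used so the command can be inspected without an Azure subscription:

```shell
#!/bin/sh
# Hedged sketch: open an interactive shell in a running container instance.
# Resource group and container names are illustrative assumptions;
# drop the leading `echo` to execute against a real instance.
echo az container exec \
  --resource-group myResourceGroup \
  --name mycontainer1 \
  --exec-command "/bin/sh"
```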
container-instances Container Instances Get Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-get-logs.md
Title: Get container instance logs & events description: Learn how to retrieve container logs and events in Azure Container Instances to help troubleshoot container issues- Previously updated : 12/30/2019+++++ Last updated : 06/17/2022
container-instances Container Instances Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-github-action.md
Title: Deploy container instance by GitHub action description: Configure a GitHub action that automates steps to build, push, and deploy a container image to Azure Container Instances- Previously updated : 08/20/2020+++++ Last updated : 06/17/2022
container-instances Container Instances Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-gpu.md
Title: Deploy GPU-enabled container instance description: Learn how to deploy Azure container instances to run compute-intensive container applications using GPU resources.- Previously updated : 07/22/2020+++++ Last updated : 06/17/2022 # Deploy container instances that use GPU resources
container-instances Container Instances Image Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-image-security.md
Title: Security considerations for container instances description: Recommendations to secure images and secrets for Azure Container Instances, and general security considerations for any container platform- Previously updated : 01/10/2020+++++ Last updated : 06/17/2022
You can also minimize the potential attack surface by removing any unused or unn
### Preapprove files and executables that the container is allowed to access or run
-Reducing the number of variables or unknowns helps you maintain a stable, reliable environment. Limiting containers so they can access or run only preapproved or safelisted files and executables is a proven method of limiting exposure to risk.
+Reducing the number of variables or unknowns helps you maintain a stable, reliable environment. Limiting containers so they can access or run only preapproved or safe listed files and executables is a proven method of limiting exposure to risk.
-It's a lot easier to manage a safelist when it's implemented from the beginning. A safelist provides a measure of control and manageability as you learn what files and executables are required for the application to function correctly.
+It's a lot easier to manage a safe list when it's implemented from the beginning. A safe list provides a measure of control and manageability as you learn what files and executables are required for the application to function correctly.
-A safelist not only reduces the attack surface but can also provide a baseline for anomalies and prevent the use cases of the "noisy neighbor" and container breakout scenarios.
+A safe list not only reduces the attack surface but can also provide a baseline for anomalies and prevent the use cases of the "noisy neighbor" and container breakout scenarios.
### Enforce network segmentation on running containers
container-instances Container Instances Init Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-init-container.md
Title: Run init containers description: Run init containers in Azure Container Instances to perform setup tasks in a container group before the application containers run. - Previously updated : 06/01/2020+++++ Last updated : 06/17/2022 # Run an init container for setup tasks in a container group
container-instances Container Instances Liveness Probe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-liveness-probe.md
Title: Set up liveness probe on container instance description: Learn how to configure liveness probes to restart unhealthy containers in Azure Container Instances- Previously updated : 07/02/2020+++++ Last updated : 06/17/2022 # Configure liveness probes
container-instances Container Instances Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-log-analytics.md
Title: Collect & analyze resource logs description: Learn how to send resource logs and event data from container groups in Azure Container Instances to Azure Monitor logs- Previously updated : 07/13/2020+++++ Last updated : 06/17/2022 # Container group and instance logging with Azure Monitor logs
container-instances Container Instances Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-managed-identity.md
Title: Enable managed identity in container group description: Learn how to enable a managed identity in Azure Container Instances that can authenticate with other Azure services- Previously updated : 07/02/2020+++++ Last updated : 06/17/2022 # How to use managed identities with Azure Container Instances
container-instances Container Instances Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-monitor.md
Title: Monitor container instances description: How to monitor the consumption of compute resources like CPU and memory by your containers in Azure Container Instances.- Previously updated : 12/17/2020+++++ Last updated : 06/17/2022 # Monitor container resources in Azure Container Instances
container-instances Container Instances Multi Container Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-multi-container-group.md
Title: Tutorial - Deploy multi-container group - template description: In this tutorial, you learn how to deploy a container group with multiple containers in Azure Container Instances by using an Azure Resource Manager template with the Azure CLI.- Previously updated : 07/02/2020+++++ Last updated : 06/17/2022
container-instances Container Instances Multi Container Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-multi-container-yaml.md
Title: Tutorial - Deploy multi-container group - YAML description: In this tutorial, you learn how to deploy a container group with multiple containers in Azure Container Instances by using a YAML file with the Azure CLI.- Previously updated : 07/01/2020+++++ Last updated : 06/17/2022 # Tutorial: Deploy a multi-container group using a YAML file
container-instances Container Instances Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-nat-gateway.md
Title: Configure Container Group Egress with NAT Gateway description: Configure NAT gateway for Azure Container Instances workloads that use the NAT gateway's public IP address for static egress--++ -+ Last updated 05/03/2022
container-instances Container Instances Orchestrator Relationship https://github.co