Updates from: 01/27/2023 02:14:01
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Azure Ad B2c Global Identity Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/azure-ad-b2c-global-identity-solutions.md
The approach you choose will be based on the number of applications you host and
For globally operating businesses, using multiple tenants, in either the regional or funnel-based configuration, provides a performance advantage over using a single Azure AD B2C tenant.
-When using the funnel-based approach, although the funnel tenant will be located in one region, but serve users globally, performance improvements will be maintained.
+When using the funnel-based approach, the funnel tenant will be located in one specific region and serve users globally. Since the funnel tenant's operation utilizes a global component of the Azure AD B2C service, it maintains a consistent level of performance regardless of where users sign in from.
![Screenshot shows the Azure AD B2C architecture.](./media/azure-ad-b2c-global-identity-solutions/azure-ad-b2c-architecture.png)
-As shown in the diagram, the Azure AD B2C tenant in the funnel-based approach will only utilize the Policy Engine to perform the redirection to regional Azure AD B2C tenants. The Azure AD B2C Policy Engine component is globally distributed. Therefore, the funnel isn't constrained from a performance perspective, regardless of where the Azure AD B2C funnel tenant is provisioned. A performance loss is encountered due to the extra redirect between funnel and regional tenants in the funnel-based approach.
+As shown in the diagram above, the Azure AD B2C tenant in the funnel-based approach will only utilize the Policy Engine to perform the redirection to regional Azure AD B2C tenants. The Azure AD B2C Policy Engine component is globally distributed. Therefore, the funnel isn't constrained from a performance perspective, regardless of where the Azure AD B2C funnel tenant is provisioned. A performance loss is encountered due to the extra redirect between funnel and regional tenants in the funnel-based approach.
-The regional tenants will perform directory calls into the Directory Store, which is the regionalized component.
+In the regional-based approach, since each user is directed to their most local Azure AD B2C tenant, performance is consistent for all users signing in.
+
+The regional tenants will perform directory calls into the Directory Store, which is the only regionalized component in both the funnel-based and regional-based architectures.
Additional latency is only encountered when the user authenticates in a different region from the one where they signed up. This is because calls are made across regions to reach the Directory Store where their profile lives to complete the authentication.
active-directory-b2c Partner Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-gallery.md
Previously updated : 09/14/2022 Last updated : 1/25/2023
Microsoft partners with the following ISVs for MFA and Passwordless authenticati
|:-|:--| | ![Screenshot of an Asignio logo](./medi) is a passwordless, soft biometric, and MFA solution. Asignio uses a combination of the patented Asignio Signature and live facial verification for user authentication. The changeable biometric signature eliminates passwords, fraud, phishing, and credential reuse through omni-channel authentication. | | ![Screenshot of a bloksec logo](./medi) is a passwordless authentication and tokenless MFA solution, which provides real-time consent-based services and protects customers against identity-centric cyber-attacks such as password stuffing, phishing, and man-in-the-middle attacks. |
+| ![Screenshot of a grit biometric authentication logo.](./medi) provides users the option to sign in using fingerprint, face ID, or [Windows Hello](https://support.microsoft.com/windows/learn-about-windows-hello-and-set-it-up-dae28983-8242-bb2a-d3d1-87c9d265a5f0) for enhanced security.
| ![Screenshot of a haventec logo](./medi) is a passwordless authentication provider, which provides decentralized identity platform that eliminates passwords, shared secrets, and friction. | | ![Screenshot of a hypr logo](./medi) is a passwordless authentication provider, which replaces passwords with public key encryptions eliminating fraud, phishing, and credential reuse. | | ![Screenshot of a idemia logo](./medi) is a passwordless authentication provider, which provides real-time consent-based services with biometric authentication like faceID and fingerprinting eliminating fraud and credential reuse. |
Microsoft partners with the following ISVs to provide secure hybrid access to on
| ![Screenshot of an Akamai logo.](./medi) provides a Zero Trust Network Access (ZTNA) solution that enables secure remote access to modern and legacy applications that reside in private datacenters. | | ![Screenshot of a Datawiza logo](./medi) enables SSO and granular access control for your applications and extends Azure AD B2C to protect on-premises legacy applications. | | ![Screenshot of an F5 logo](./medi) enables legacy applications to be securely exposed to the internet through BIG-IP security combined with Azure AD B2C pre-authentication, Conditional Access (CA), and SSO. |
+| ![Screenshot of a Grit logo](./medi) enables migrating a legacy application using header-based authentication to Azure AD B2C with no application code change. |
| ![Screenshot of a Ping logo](./medi) enables secure hybrid access to on-premises legacy applications across multiple clouds. | | ![Screenshot of a strata logo](./medi) provides secure hybrid access to on-premises applications by enforcing consistent access policies, keeping identities in sync, and making it simple to transition applications from legacy identity systems to standards-based authentication and access control provided by Azure AD B2C. | | ![Screenshot of a zscaler logo](./medi) delivers policy-based, secure access to private applications and assets without the cost, hassle, or security risks of a VPN. |
Microsoft partners with the following ISVs for tools that can help with implemen
|:-|:--| | ![Screenshot of a grit ief editor logo.](./medi) provides a low code/no code experience for developers to create sophisticated authentication user journeys. The tool comes with an integrated debugger and templates for the most used scenarios.| + ## Additional information - [Custom policies in Azure AD B2C](./custom-policy-overview.md)
active-directory-b2c Partner Grit App Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-grit-app-proxy.md
+
+ Title: Migrate applications to Azure AD B2C with Grit's app proxy
+
+description: Learn how Grit's app proxy can migrate your applications to Azure AD B2C with no code change
++++++ Last updated : 1/25/2023+++++
+# Migrate applications using header-based authentication to Azure Active Directory B2C with Grit's app proxy
+
+In this sample tutorial, learn how to migrate a legacy application using header-based authentication to Azure Active Directory B2C (Azure AD B2C) with [Grit's app proxy](https://www.gritiam.com/appProxy.html).
+
+Benefits of using Grit's app proxy are as follows:
+
+- No application code change and easy deployment resulting in faster ROI
+
+- Enables users to use modern authentication experiences such as Multi-Factor authentication (MFA), biometrics, and passwordless sign-in, resulting in enhanced security.
+
+- Significant savings on the license cost of the legacy authentication solution
+
+## Prerequisites
+
+To get started, you'll need:
+
+- License to Grit's app proxy. Contact [Grit support](mailto:info@gritsoftwaresystems.com) for license details. For this tutorial, you don't need a license.
+
+- An Azure subscription. If you don't have one, get a [free account](https://azure.microsoft.com/free/).
+
+- An [Azure AD B2C tenant](tutorial-create-tenant.md) that is linked to your Azure subscription.
+
+## Scenario description
+
+Grit integration includes the following components:
+
+- **Azure AD B2C**: The authorization server that verifies user credentials. Authenticated users access on-premises applications using a local account stored in the Azure AD B2C directory.
+
+- **Grit app proxy**: The service that passes identity to applications through HTTP headers.
+
+- **Web application**: The legacy application to which the user requests access.
+
+The following architecture diagram shows the implementation.
+
+ ![Screenshot shows the architecture diagram of the implementation.](./media/partner-grit-app-proxy/grit-app-proxy-architecture.png)
+
+1. The user requests access to an on-premises application.
+
+2. Grit app proxy receives the request through [Azure Web Application Firewall (WAF)](https://azure.microsoft.com/products/web-application-firewall/) and sends it to the application.
+
+3. Grit app proxy checks user authentication state. With no session token, or an invalid token, the user goes to Azure AD B2C for authentication.
+
+4. Azure AD B2C sends the user request to the endpoint specified during Grit app proxy registration in the Azure AD B2C tenant.
+
+5. Grit app proxy evaluates access policies and calculates attribute values in HTTP headers forwarded to the application. Grit app proxy sets the header values and sends the request to the application.
+
+6. The user is authenticated, and access to the application is granted or denied.
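+The proxy's redirect-or-forward decision described in the steps above can be sketched in a few lines. This is a hypothetical illustration, not Grit's actual API; the function and field names are invented for the example.
+
+```python
+# Hypothetical sketch of the app proxy's per-request decision. "session" and
+# "claims" stand in for the session token and the Azure AD B2C token claims.
+
+def handle_request(session, claims, header_map):
+    """Return (action, headers): redirect to B2C, or forward with headers set."""
+    if session is None or session.get("expired", True):
+        # No session token, or an invalid one: send the user to Azure AD B2C.
+        return "redirect_to_b2c", {}
+    # Valid session: compute the HTTP header values forwarded to the application.
+    headers = {dest: claims.get(claim, "") for dest, claim in header_map.items()}
+    return "forward_to_app", headers
+```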
+
+## Onboard with Grit app proxy
+
+Contact [Grit support](mailto:info@gritsoftwaresystems.com) for details to get onboarded.
+
+### Configure Grit's app proxy solution with Azure AD B2C
+
+For this tutorial, Grit already has a backend application and an Azure AD B2C policy. This tutorial covers configuring the proxy to access the backend application.
+
+You can use the UX to configure security for each page of the backend application: the type of authentication required by the page and the header values it needs.
+
+If users must be denied access to certain pages based on group membership or other criteria, that's handled by the authentication user journey.
+
+1. Navigate to https://proxyeditor.z13.web.core.windows.net/.
+
+2. Once the dropdown appears, select the dropdown, and select **Create New**.
+
+3. Enter a name for the page that contains only letters and numbers.
+
+4. Enter **B2C_1A_SIGNUP_SIGNIN** into the B2C Policy box.
+
+5. Select **GET** as the HTTP method.
+
+6. Enter `https://anj-grit-legacy-backend.azurewebsites.net/Home/Page` into the endpoint field. This is the endpoint of your legacy application.
+
+ >[!NOTE]
+ >This demo is publicly available; values you enter will be visible to the public. Don't configure a secure application with this demo.
+
+ ![Screenshot shows the proxy configuration UI.](./media/partner-grit-app-proxy/proxy-configuration.png)
+
+7. Select **ADD HEADER**.
+
+8. Enter **x-iss** in the destination header field to configure the valid HTTP header that must be sent to the application.
+
+9. Enter **given_name**, the name of a claim in the B2C policy, into the Value field. The value of the claim will be passed in the header.
+
+10. Select **Token** as the source.
+
+11. Select **SAVE SETTINGS**.
+
+12. Select the link in the popup. It will take you to a sign-in page. Select the sign-up link and enter the required information. Once you complete the sign-up process, you'll be redirected to the legacy application. The application displays the name you provided in the **Given name** field during sign-up.
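+The claim-to-header mapping configured above (the `given_name` claim sent as the `x-iss` header) can be pictured as data plus a small transform. This is an illustrative sketch, not Grit's implementation:
+
+```python
+# The mapping as configured in the UX: destination header, claim source,
+# and claim name. Names mirror the configuration fields, not a real API.
+HEADER_MAPPINGS = [
+    {"destination": "x-iss", "source": "token", "value": "given_name"},
+]
+
+def build_headers(token_claims, mappings=HEADER_MAPPINGS):
+    """Build the HTTP headers forwarded to the legacy application."""
+    return {
+        m["destination"]: token_claims[m["value"]]
+        for m in mappings
+        if m["source"] == "token" and m["value"] in token_claims
+    }
+```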
+
+## Test the flow
+
+1. Navigate to the on-premises application URL.
+
+2. The Grit app proxy redirects you to the page you configured in your user flow. From the list, select the IdP.
+
+3. At the prompt, enter your credentials. If necessary, include an Azure AD Multi-Factor authentication (MFA) token.
+
+4. You're redirected to Azure AD B2C, which forwards the application request to the Grit app proxy redirect URI.
+
+5. The Grit app proxy evaluates policies, calculates headers, and sends the user to the upstream application.
+
+6. The requested application appears.
+
+## Additional resources
+
+- [Grit app proxy documentation](https://www.gritiam.com/appProxy.html)
+
+- [Configure the Grit IAM B2B2C solution with Azure AD B2C](partner-grit-iam.md)
+
+- [Edit Azure AD B2C Identity Experience Framework (IEF) XML with Grit Visual IEF Editor](partner-grit-editor.md)
+
+- [Configure Grit biometric authentication with Azure AD B2C](partner-grit-authentication.md)
active-directory-b2c Partner Grit Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-grit-authentication.md
+
+ Title: Configure Grit's biometric authentication with Azure Active Directory B2C
+
+description: Learn how Grit's biometric authentication with Azure AD B2C secures your account
++++++ Last updated : 1/25/2023+++++
+# Configure Grit's biometric authentication with Azure Active Directory B2C
+
+In this sample tutorial, learn how to integrate [Grit's](https://www.gritiam.com) biometric authentication with Azure Active Directory B2C (Azure AD B2C). Biometric authentication provides users the option to sign in using fingerprint, face ID, or [Windows Hello](https://support.microsoft.com/windows/learn-about-windows-hello-and-set-it-up-dae28983-8242-bb2a-d3d1-87c9d265a5f0). It works in both desktop and mobile applications, provided the device supports biometric authentication.
+
+Biometric authentication has the following benefits:
+
+1. For users who sign in infrequently or often forget passwords, resulting in frequent password resets, biometric authentication reduces friction.
+
+2. Compared to Multi-factor authentication (MFA), biometric authentication is cheaper and more secure.
+
+3. Improved security helps prevent phishing attacks on high-value customers.
+
+4. Adds another layer of authentication before the user performs a high-value operation, like a credit card transaction.
+
+## Prerequisites
+
+To get started, you'll need:
+
+- License to [Grit's Visual IEF builder](https://www.gritiefedit.com/). Contact [Grit support](mailto:info@gritsoftwaresystems.com) for licensing details. For this tutorial you don't need a license.
+
+- An Azure subscription. If you don't have one, get a [free account](https://azure.microsoft.com/free/).
+
+- An [Azure AD B2C tenant](tutorial-create-tenant.md) that is linked to your Azure subscription.
+
+## Scenario description
+
+In this tutorial, we'll cover the following scenario:
+
+The end user creates an account with a username and password (and MFA if needed). If their device supports biometrics, they're enrolled in biometrics, and their account is linked to the biometric authentication of the device. Future sign-ins on that device happen through biometrics, unless the user chooses otherwise.
+
+The user can link multiple devices to the same account. The user signs in with their email/password (and MFA if needed) and is then presented with an option to link a new device.
+
+For example, a user has an account with Contoso. The user accesses the account from a work computer that supports Windows Hello, from a home computer that doesn't support Windows Hello, and from an Android phone.
+
+1. After signing in on the work computer, the user is presented with an option to enroll in Windows Hello. If the user chooses to do so, future sign-ins on that computer happen through Windows Hello.
+
+1. After signing in on the home computer, the user isn't prompted to enroll in biometrics because the device doesn't support them.
+
+1. After signing in on the Android phone, the user is asked to enroll in biometrics. Future sign-ins on that phone happen through biometrics.
+
+Multiple other scenarios can be implemented using Grit's visual flow chart. Contact [Grit support](mailto:info@gritsoftwaresystems.com) to discuss your scenarios.
+
+## Onboard with Grit's biometric authentication
+
+Contact [Grit support](mailto:info@gritsoftwaresystems.com) for details to get onboarded.
+
+### Configure Grit's biometric authentication with Azure AD B2C
+
+1. Navigate to <https://www.gritiefedit.com> and enter your email if you're asked for it.
+
+1. Select **Cancel** in the quick start wizard.
+
+1. In the pop-up, select **Customize User Journey**. Under Bio Metric, select the checkbox for **Enable Biometric**.
+
+1. Scroll down and select **Generate template**, a flow chart appears.
+
+1. From the left menu, select **Run Flowcharts** > **Deploy flow charts**.
+
+1. If your device supports Windows Hello or biometric authenticator,
+ select **Test Authentication Journey Builder** link, otherwise send
+ the link to a device that supports biometric authentication.
+
+1. A web page will open on a new tab. Under **Sign in with your social account**, select **createNewAccount**.
+
+1. Go through the steps to create an account. When asked for **Setup Biometric Device sign in**, select **yes**.
+
+1. The steps to perform biometric authentication depend on the device you're using.
+
+1. A page appears that displays the token. Open the provided link.
+
+1. This time the sign-in will happen through biometrics.
+
+Repeat the same steps on another device. There's no need to sign up again; use the credentials you created to sign in.
+
+## Additional resources
+
+- [Grit documentation](https://app.archbee.com/public/PREVIEW-ddjwV0RI2eVfcBOylxFGI/PREVIEW-bjH2arQd1Kn4le6z_zH84)
+
+- [Configure the Grit IAM B2B2C solution with Azure AD B2C](partner-grit-iam.md)
+
+- [Edit Azure AD B2C Identity Experience Framework (IEF) XML with Grit Visual IEF Editor](partner-grit-editor.md)
+
+- [Migrate legacy apps to Azure AD B2C with Grit's app proxy](partner-grit-app-proxy.md)
active-directory User Provisioning Sync Attributes For Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/user-provisioning-sync-attributes-for-mapping.md
Adding missing attributes needed for an application will start in either on-prem
First, identify which users in your Azure AD tenant will need access to the application and therefore are going to be in scope of being provisioned into the application.
-If any of those users originate in on-premises Active Directory, then you must sync the attributes with the users from Active Directory to Azure AD. You will need to perform the following tasks before configuring provisioning to your application.
+>[!NOTE]
+> For users in on-premises Active Directory, you must sync the users to Azure AD. You can sync users and attributes using [Azure AD Connect](../hybrid/whatis-azure-ad-connect.md) or [Azure AD Connect cloud sync](../cloud-sync/what-is-cloud-sync.md). Both of these solutions automatically synchronize certain attributes to Azure AD, but not all attributes. Furthermore, some attributes (such as `sAMAccountName`) that are synchronized by default might not be exposed through the Graph API. In these cases, you can [use the Azure AD Connect directory extension feature to synchronize the attribute to Azure AD](#create-an-extension-attribute-using-azure-ad-connect) or [use Azure AD Connect cloud sync](#create-an-extension-attribute-using-cloud-sync). That way, the attribute will be visible to the Graph API and the Azure AD provisioning service.
1. Check with the on-premises Active Directory domain admins whether the required attributes are part of the AD DS schema, and if they are not, extend the AD DS schema in the domains where those users have accounts. 1. Configure [Azure AD Connect](../hybrid/whatis-azure-ad-connect.md) or Azure AD Connect cloud sync to synchronize the users with their extension attribute from Active Directory to Azure AD. Azure AD Connect automatically synchronizes certain attributes to Azure AD, but not all attributes. Furthermore, some attributes (such as `sAMAccountName`) that are synchronized by default might not be exposed using the Graph API. In these cases, you can [use the Azure AD Connect directory extension feature to synchronize the attribute to Azure AD](#create-an-extension-attribute-using-azure-ad-connect). That way, the attribute will be visible to the Graph API and the Azure AD provisioning service.
Set-AzureADUserExtension -objectid 0ccf8df6-62f1-4175-9e55-73da9e742690 -Extensi
Get-AzureADUser -ObjectId 0ccf8df6-62f1-4175-9e55-73da9e742690 | Select -ExpandProperty ExtensionProperty ```
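Once synchronized as a directory extension, the attribute appears in Microsoft Graph under a generated property name of the form `extension_<appId without dashes>_<attributeName>`. The helper below is an illustrative sketch (the function names and sample IDs are invented) that builds the property name and a `$select` query for it:

```python
def extension_property_name(app_id: str, attribute: str) -> str:
    """Graph property name for a directory extension attribute."""
    return f"extension_{app_id.replace('-', '')}_{attribute}"

def user_select_url(user_id: str, app_id: str, attribute: str) -> str:
    """Graph URL that selects the extension attribute for one user."""
    prop = extension_property_name(app_id, attribute)
    return f"https://graph.microsoft.com/v1.0/users/{user_id}?$select=id,{prop}"
```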
+## Create an extension attribute using cloud sync
+Cloud sync will automatically discover your extensions in on-premises Active Directory when you go to add a new mapping. Use the steps below to auto-discover these attributes and set up a corresponding mapping to Azure AD.
+
+1. Sign in to the Azure portal with a hybrid administrator account.
+2. Select **Azure AD Connect**.
+3. Select **Manage Azure AD cloud sync**.
+4. Select the configuration to which you want to add the extension attribute and mapping.
+5. Under **Manage attributes**, select **click to edit mappings**.
+6. Select **Add attribute mapping**. The attributes are automatically discovered.
+7. The new attributes are available in the drop-down under **source attribute**.
+8. Fill in the type of mapping you want and select **Apply**.
+ [![Custom attribute mapping](media/user-provisioning-sync-attributes-for-mapping/schema-1.png)](media/user-provisioning-sync-attributes-for-mapping/schema-1.png#lightbox)
+
+For more information, see [Cloud Sync Custom Attribute Mapping](../cloud-sync/custom-attribute-mapping.md)
++++ ## Create an extension attribute using Azure AD Connect
active-directory Concept Certificate Based Authentication Technical Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-technical-deep-dive.md
Now we'll walk through each step:
## Single-factor certificate-based authentication Azure AD CBA supports second factors to meet MFA requirements with single-factor certificates. Users can use either passwordless sign-in or FIDO2 security keys as second factors when the first factor is single-factor CBA. Users need to register passwordless sign-in or FIDO2 before signing in with Azure AD CBA.+
+**Steps to set up passwordless phone sign-in (PSI) with CBA**
+ For passwordless sign-in to work, users should disable legacy notification through the mobile app.
-1. Sign in to the Azure portal.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Follow the steps at [Enable passwordless phone sign-in authentication](../authentication/howto-authentication-passwordless-phone.md#enable-passwordless-phone-sign-in-authentication-methods)
+
+ >[!IMPORTANT]
+ >In step 4 of the configuration above, choose the **Passwordless** option. For **Authentication mode**, change the mode for each group added for PSI to **Passwordless** so that passwordless sign-in works with CBA.
+ 1. Select **Azure Active Directory** > **Security** > **Multifactor authentication** > **Additional cloud-based multifactor authentication settings**. :::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/configure.png" alt-text="Screenshot of how to configure multifactor authentication settings.":::
active-directory Concept Sspr Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-policy.md
Previously updated : 11/10/2022 Last updated : 01/25/2023
When self-service password reset (SSPR) is used to change or reset a password in
This article describes the password policy settings and complexity requirements associated with user accounts in your Azure AD tenant, and how you can use PowerShell to check or set password expiration settings.
-## <a name="userprincipalname-policies-that-apply-to-all-user-accounts"></a>Username policies
+## Username policies
Every account that signs in to Azure AD must have a unique user principal name (UPN) attribute value associated with their account. In hybrid environments with an on-premises Active Directory Domain Services (AD DS) environment synchronized to Azure AD using Azure AD Connect, by default the Azure AD UPN is set to the on-prem UPN.
The following table outlines the username policies that apply to both on-premise
| Property | UserPrincipalName requirements | | | |
-| Characters allowed |<ul> <li>A - Z</li> <li>a - z</li><li>0 - 9</li> <li> ' \. - \_ ! \# ^ \~</li></ul> |
-| Characters not allowed |<ul> <li>Any "\@\" character that's not separating the username from the domain.</li> <li>Can't contain a period character "." immediately preceding the "\@\" symbol</li></ul> |
-| Length constraints |<ul> <li>The total length must not exceed 113 characters</li><li>There can be up to 64 characters before the "\@\" symbol</li><li>There can be up to 48 characters after the "\@\" symbol</li></ul> |
+| Characters allowed |A - Z<br>a - z<br>0 - 9<br>' \. - \_ ! \# ^ \~ |
+| Characters not allowed |Any "\@\" character that's not separating the username from the domain.<br>Can't contain a period character "." immediately preceding the "\@\" symbol |
+| Length constraints |The total length must not exceed 113 characters<br>There can be up to 64 characters before the "\@\" symbol<br>There can be up to 48 characters after the "\@\" symbol |
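+As an illustrative sketch (not an official API; Azure AD performs its own validation), the character and length rules in the table above can be checked like this:
+
+```python
+import re
+
+# Allowed username characters per the table: letters, digits, ' . - _ ! # ^ ~
+UPN_LOCAL = re.compile(r"^[A-Za-z0-9'.\-_!#^~]+$")
+
+def is_valid_upn(upn):
+    """Check a UPN against the length and character rules above."""
+    if len(upn) > 113 or upn.count("@") != 1:
+        return False
+    local, domain = upn.split("@")
+    if len(local) > 64 or len(domain) > 48:
+        return False
+    # No "." immediately preceding the "@" symbol.
+    if local.endswith("."):
+        return False
+    return bool(UPN_LOCAL.match(local))
+```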
-## <a name="password-policies-that-only-apply-to-cloud-user-accounts"></a>Azure AD password policies
+## Azure AD password policies
A password policy is applied to all user accounts that are created and managed directly in Azure AD. Some of these password policy settings can't be modified, though you can [configure custom banned passwords for Azure AD password protection](tutorial-configure-custom-password-protection.md) or account lockout parameters.
The following Azure AD password policy options are defined. Unless noted, you ca
| Property | Requirements | | | |
-| Characters allowed |<ul><li>A - Z</li><li>a - z</li><li>0 - 9</li> <li>@ # $ % ^ & * - _ ! + = [ ] { } &#124; \ : ' , . ? / \` ~ " ( ) ; < ></li> <li>blank space</li></ul> |
-| Characters not allowed | Unicode characters. |
-| Password restrictions |<ul><li>A minimum of 8 characters and a maximum of 256 characters.</li><li>Requires three out of four of the following:<ul><li>Lowercase characters.</li><li>Uppercase characters.</li><li>Numbers (0-9).</li><li>Symbols (see the previous password restrictions).</li></ul></li></ul> |
-| Password expiry duration (Maximum password age) |<ul><li>Default value: **90** days. If the tenant was created after 2021, it has no default expiration value. You can check current policy with [Get-MsolPasswordPolicy](/powershell/module/msonline/get-msolpasswordpolicy).</li><li>The value is configurable by using the `Set-MsolPasswordPolicy` cmdlet from the Azure Active Directory Module for Windows PowerShell.</li></ul> |
-| Password expiry notification (When users are notified of password expiration) |<ul><li>Default value: **14** days (before password expires).</li><li>The value is configurable by using the `Set-MsolPasswordPolicy` cmdlet.</li></ul> |
-| Password expiry (Let passwords never expire) |<ul><li>Default value: **false** (indicates that password's have an expiration date).</li><li>The value can be configured for individual user accounts by using the `Set-MsolUser` cmdlet.</li></ul> |
+| Characters allowed |A - Z<br>a - z<br>0 - 9<br>@ # $ % ^ & * - _ ! + = [ ] { } &#124; \ : ' , . ? / \` ~ " ( ) ; < ><br>Blank space |
+| Characters not allowed | Unicode characters |
+| Password restrictions |A minimum of 8 characters and a maximum of 256 characters.<br>Requires three out of four of the following:<br>- Lowercase characters<br>- Uppercase characters<br>- Numbers (0-9)<br>- Symbols (see the previous password restrictions) |
+| Password expiry duration (Maximum password age) |Default value: **90** days. If the tenant was created after 2021, it has no default expiration value. You can check current policy with [Get-MsolPasswordPolicy](/powershell/module/msonline/get-msolpasswordpolicy).<br>The value is configurable by using the `Set-MsolPasswordPolicy` cmdlet from the Azure Active Directory Module for Windows PowerShell.|
+| Password expiry (Let passwords never expire) |Default value: **false** (indicates that passwords have an expiration date).<br>The value can be configured for individual user accounts by using the `Set-MsolUser` cmdlet. |
| Password change history | The last password *can't* be used again when the user changes a password. | | Password reset history | The last password *can* be used again when the user resets a forgotten password. |
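As an illustrative sketch (not an official API; Azure AD performs its own validation, and the exact symbol set follows the "Characters allowed" row), the length and complexity restrictions above can be checked like this:

```python
# Symbols from the "Characters allowed" row, counted toward complexity.
SYMBOLS = set("@#$%^&*-_!+=[]{}|\\:',.?/`~\"();<> ")

def meets_password_policy(pw):
    """Check 8-256 length, ASCII-only, and three of four character classes."""
    if not 8 <= len(pw) <= 256 or not pw.isascii():
        return False
    classes = [
        any(c.islower() for c in pw),
        any(c.isupper() for c in pw),
        any(c.isdigit() for c in pw),
        any(c in SYMBOLS for c in pw),
    ]
    return sum(classes) >= 3
```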
You can disable the use of SSPR for administrator accounts using the [Set-MsolCo
A one-gate policy requires one piece of authentication data, such as an email address or phone number. A one-gate policy applies in the following circumstances:
-* It's within the first 30 days of a trial subscription; or
-* A custom domain hasn't been configured for your Azure AD tenant so is using the default **.onmicrosoft.com*. The default **.onmicrosoft.com* domain isn't recommended for production use; and
-* Azure AD Connect isn't synchronizing identities
+- It's within the first 30 days of a trial subscription
-## <a name="set-password-expiration-policies-in-azure-ad"></a>Password expiration policies
+ -Or-
+
+- A custom domain isn't configured (the tenant is using the default **.onmicrosoft.com*, which isn't recommended for production use) and Azure AD Connect isn't synchronizing identities.
+
+## Password expiration policies
A *global administrator* or *user administrator* can use the [Microsoft Azure AD Module for Windows PowerShell](/powershell/module/Azuread/) to set user passwords not to expire.
active-directory How To Create Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-role-policy.md
This article describes how you can use the **Remediation** dashboard in Permissi
## Create a policy for AWS
+> [!NOTE]
+> For information on AWS service quotas, and to request an AWS service quota increase, visit [the AWS documentation](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html).
+ 1. On the Entra home page, select the **Remediation** tab, and then select the **Role/Policies** tab. 1. Use the dropdown lists to select the **Authorization System Type** and **Authorization System**. 1. Select **Create Policy**.
active-directory Custom Attribute Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/custom-attribute-mapping.md
+
+ Title: 'Azure AD Connect cloud sync directory extensions and custom attribute mapping'
+description: This topic provides information on custom attribute mapping in cloud sync.
+++++++ Last updated : 01/12/2023+++++++
+# Cloud Sync directory extensions and custom attribute mapping
+
+## Directory extensions
+You can use directory extensions to extend the schema in Azure Active Directory (Azure AD) with your own attributes from on-premises Active Directory. This feature enables you to build LOB apps by consuming attributes that you continue to manage on-premises.
+
+For more information on directory extensions, see [Using directory extension attributes in claims](../develop/active-directory-schema-extensions.md).
+
+ You can see the available attributes by using [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). You can also use this feature to create dynamic groups in Azure AD.
+
+>[!NOTE]
+> To discover new Active Directory extension attributes, restart the provisioning agent after the directory extensions have been created. For Azure AD extension attributes, the agent doesn't need to be restarted.
+
+## Syncing directory extensions for Azure Active Directory Connect cloud sync
+
+You can use [directory extensions](/graph/api/resources/extensionproperty?view=graph-rest-1.0&preserve-view=true) to extend the synchronization schema directory definition in Azure Active Directory (Azure AD) with your own attributes.
+
+>[!Important]
+> Directory extension for Azure Active Directory Connect cloud sync is only supported for applications with the identifier URI "api://&lt;tenantId&gt;/CloudSyncCustomExtensionsApp" and the [Tenant Schema Extension App](../hybrid/how-to-connect-sync-feature-directory-extensions.md#configuration-changes-in-azure-ad-made-by-the-wizard) created by Azure AD Connect.
+
+### Create application and service principal for directory extension
+
+You need to create an [application](/graph/api/resources/application?view=graph-rest-1.0&preserve-view=true) with the identifier URI "api://&lt;tenantId&gt;/CloudSyncCustomExtensionsApp" if it doesn't exist, and a service principal for that application if one doesn't exist.
++
+ 1. Check if an application with the identifier URI "api://&lt;tenantId&gt;/CloudSyncCustomExtensionsApp" exists.
+
+ - Using Microsoft Graph
+
+ ```http
+ GET /applications?$filter=identifierUris/any(uri:uri eq 'api://<tenantId>/CloudSyncCustomExtensionsApp')
+ ```
+
+ For more information, see [Get application](/graph/api/application-get?view=graph-rest-1.0&tabs=http&preserve-view=true)
+
+ - Using PowerShell
+
+ ```powershell
+ Get-AzureADApplication -Filter "identifierUris/any(uri:uri eq 'api://<tenantId>/CloudSyncCustomExtensionsApp')"
+ ```
+
+ For more information, see [Get-AzureADApplication](/powershell/module/azuread/get-azureadapplication?view=azureadps-2.0&preserve-view=true)
+
+ 2. If the application doesn't exist, create the application with the identifier URI "api://&lt;tenantId&gt;/CloudSyncCustomExtensionsApp".
+
+ - Using Microsoft Graph
+ ```http
+ POST https://graph.microsoft.com/v1.0/applications
+ Content-type: application/json
+
+ {
+ "displayName": "CloudSyncCustomExtensionsApp",
+ "identifierUris": ["api://<tenant id>/CloudSyncCustomExtensionsApp"]
+ }
+ ```
+ For more information, see [create application](/graph/api/application-post-applications?view=graph-rest-1.0&tabs=http&preserve-view=true)
+
+ - Using PowerShell
+ ```powershell
+ New-AzureADApplication -DisplayName "CloudSyncCustomExtensionsApp" -IdentifierUris "api://<tenant id>/CloudSyncCustomExtensionsApp"
+ ```
+ For more information, see [New-AzureADApplication](/powershell/module/azuread/new-azureadapplication?view=azureadps-2.0&preserve-view=true)
+
+
+
+ 3. Check if the service principal exists for the application with the identifier URI "api://&lt;tenantId&gt;/CloudSyncCustomExtensionsApp".
+
+ - Using Microsoft Graph
+ ```http
+ GET /servicePrincipals?$filter=(appId eq '{appId}')
+ ```
+ For more information, see [get service principal](/graph/api/serviceprincipal-get?view=graph-rest-1.0&tabs=http&preserve-view=true)
+
+ - Using PowerShell
+ ```powershell
+ Get-AzureADServicePrincipal -Filter "appId eq '<application appId>'"
+ ```
+ For more information, see [Get-AzureADServicePrincipal](/powershell/module/azuread/get-azureadserviceprincipal?view=azureadps-2.0&preserve-view=true)
+
+
+ 4. If a service principal doesn't exist, create a new service principal for the application with the identifier URI "api://&lt;tenantId&gt;/CloudSyncCustomExtensionsApp".
+
+ - Using Microsoft Graph
+ ```http
+ POST https://graph.microsoft.com/v1.0/servicePrincipals
+ Content-type: application/json
+
+ {
+ "appId": "<application appId>"
+ }
+ ```
+ For more information, see [create servicePrincipal](/graph/api/serviceprincipal-post-serviceprincipals?view=graph-rest-1.0&tabs=http&preserve-view=true)
+
+ - Using PowerShell
+
+ ```powershell
+ New-AzureADServicePrincipal -AppId '<appId>'
+ ```
+ For more information, see [New-AzureADServicePrincipal](/powershell/module/azuread/new-azureadserviceprincipal?view=azureadps-2.0&preserve-view=true)
+
+ 5. You can create directory extensions in Azure AD in several different ways.
+
+|Method|Description|URL|
+|--|--|--|
+|MS Graph|Create extensions using Microsoft Graph|[Create extensionProperty](/graph/api/application-post-extensionproperty?view=graph-rest-1.0&tabs=http&preserve-view=true)|
+|PowerShell|Create extensions using PowerShell|[New-AzureADApplicationExtensionProperty](/powershell/module/azuread/new-azureadapplicationextensionproperty?view=azureadps-2.0&preserve-view=true)|
+|Cloud Sync and Azure AD Connect|Create extensions using Azure AD Connect|[Create an extension attribute using Azure AD Connect](../app-provisioning/user-provisioning-sync-attributes-for-mapping.md#create-an-extension-attribute-using-azure-ad-connect)|
+|Customizing attributes to sync|Information on customizing which attributes to synchronize|[Customize which attributes to synchronize with Azure AD](../hybrid/how-to-connect-sync-feature-directory-extensions.md#customize-which-attributes-to-synchronize-with-azure-ad)|
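+
+As a sketch of the PowerShell method from the table above, the following creates a string-valued directory extension on the `CloudSyncCustomExtensionsApp` application. The attribute name `myCustomAttribute` is a placeholder for illustration, not an attribute the service expects.
+
+```powershell
+# Connect with an account that can manage applications.
+Connect-AzureAD
+
+# Find the CloudSyncCustomExtensionsApp application created earlier.
+$app = Get-AzureADApplication -Filter "DisplayName eq 'CloudSyncCustomExtensionsApp'"
+
+# Create a string-valued directory extension that targets user objects.
+# "myCustomAttribute" is a placeholder name used here for illustration.
+New-AzureADApplicationExtensionProperty -ObjectId $app.ObjectId `
+    -Name "myCustomAttribute" -DataType "String" -TargetObjects "User"
+```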
+
+## Use attribute mapping to map Directory Extensions
+If you have extended Active Directory to include custom attributes, you can add these attributes and map them to users.
+
+To discover and map attributes, select **Add attribute mapping**. The attributes are automatically discovered and made available in the drop-down under **source attribute**. Fill in the type of mapping you want and select **Apply**.
+ [![Custom attribute mapping](media/custom-attribute-mapping/schema-1.png)](media/custom-attribute-mapping/schema-1.png#lightbox)
+
+For information on new attributes that are added and updated in Azure AD, see the [user resource type](/graph/api/resources/user?view=graph-rest-1.0#properties&preserve-view=true) and consider subscribing to [change notifications](/graph/webhooks).
+
+For more information on extension attributes, see [Syncing extension attributes for Azure Active Directory Application Provisioning](../app-provisioning/user-provisioning-sync-attributes-for-mapping.md).
+
+## Additional resources
+
+- [Understand the Azure AD schema and custom expressions](concept-attributes.md)
+- [Azure AD Connect sync: Directory extensions](../hybrid/how-to-connect-sync-feature-directory-extensions.md)
+- [Attribute mapping in Azure AD Connect cloud sync](how-to-attribute-mapping.md)
active-directory How To Cloud Sync Workbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-cloud-sync-workbook.md
+
+ Title: 'Azure AD cloud sync insights workbook'
+description: This article describes the Azure Monitor workbook for cloud sync.
++++++ Last updated : 01/26/2023+++++++
+# Azure AD cloud sync insights workbook
+The Cloud sync workbook provides a flexible canvas for data analysis. The workbook allows you to create rich visual reports within the Azure portal. To learn more, see Azure Monitor Workbooks overview.
+
+This workbook is intended for Hybrid Identity Admins who use cloud sync to sync users from AD to Azure AD. It allows admins to gain insights into sync status and details.
+
+The workbook can be accessed by selecting **Insights** on the left-hand side of the cloud sync page.
++
+ :::image type="content" source="media/how-to-cloud-sync-workbook/workbook-1.png" alt-text="Screenshot of the cloud sync workbook." lightbox="media/how-to-cloud-sync-workbook/workbook-1.png":::
+
+>[!NOTE]
+>The Insights node is available at both the all-configurations level and the individual configuration level. To view information on individual configurations, select the Job Id for the configuration.
+
+This workbook:
+
+- Provides a synchronization summary of users and groups synchronized from AD to Azure AD
+- Provides a detailed view of information captured by the cloud sync provisioning logs.
+- Allows you to customize the data to tailor it to your specific needs
+++
+|Field|Description|
+|--|--|
+|Date|The range that you want to view data on.|
+|Status|View the provisioning status such as Success or Skipped.|
+|Action|View the provisioning actions taken such as Create or Delete.|
+|Job Id|Allows you to target specific Job Ids. This can be used to see individual configuration data if you have multiple configurations.|
+|SyncType|Filter by type of synchronization such as object or password.|
++
+## Enabling provisioning logs
+
+You should already be familiar with Azure monitoring and Log Analytics. If not, jump over to learn about them and then come back to learn about provisioning logs. To learn more about Azure monitoring, see [Azure Monitor overview](../../azure-monitor/overview.md). To learn more about Azure Monitor logs and Log Analytics, see [Overview of log queries in Azure Monitor](../../azure-monitor/logs/log-query-overview.md) and [Provisioning logs for troubleshooting cloud sync](how-to-troubleshoot.md).
+
+## Sync summary
+The sync summary section provides a summary of your organization's synchronization activities. These activities include:
+ - Sync actions per day by action
+ - Sync actions per day by status
+ - Unique sync count by status
+ - Recent sync errors
+++
+ :::image type="content" source="media/how-to-cloud-sync-workbook/workbook-2.png" alt-text="Screenshot of the cloud sync summary." lightbox="media/how-to-cloud-sync-workbook/workbook-2.png":::
++
+## Sync details
+The sync details tab allows you to drill into the synchronization data and get more information. This information includes:
+ - Objects synced by status
+ - Sync log details
+
+ :::image type="content" source="media/how-to-cloud-sync-workbook/workbook-3.png" alt-text="Screenshot of the cloud sync details." lightbox="media/how-to-cloud-sync-workbook/workbook-3.png":::
+
+You can further drill into the sync log details for additional information.
+
+ :::image type="content" source="media/how-to-cloud-sync-workbook/workbook-4.png" alt-text="Screenshot of the log details." lightbox="media/how-to-cloud-sync-workbook/workbook-4.png":::
+
+## Job Id
+A Job Id is created for each configuration when it runs and is populated with data. You can look at individual configurations based on Job Id.
+++
+## Custom queries
+
+You can create custom queries and show the data on Azure dashboards. To learn how, see [Create and share dashboards of Log Analytics data](../../azure-monitor/logs/get-started-queries.md). Also, be sure to check out [Overview of log queries in Azure Monitor](../../azure-monitor/logs/log-query-overview.md).
+
+## Custom alerts
+
+Azure Monitor lets you configure custom alerts so that you can get notified about key events related to provisioning. For example, you might want to receive an alert on spikes in failures, disables, or deletes. Another example of where you might want to be alerted is a lack of any provisioning, which indicates something is wrong.
+
+To learn more about alerts, see [Azure Monitor Log Alerts](../../azure-monitor/alerts/alerts-log.md).
+
+## Next steps
+
+- [What is provisioning?](what-is-provisioning.md)
+- [What is Azure AD Connect cloud sync?](what-is-cloud-sync.md)
+- [Known limitations](how-to-prerequisites.md#known-limitations)
+- [Error codes](reference-error-codes.md)
active-directory How To Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-configure.md
After saving, you should see a message telling you what you still need to do to
For more information, see [attribute mapping](how-to-attribute-mapping.md).
+## Directory extensions and custom attribute mapping
+Azure AD Connect cloud sync allows you to extend the directory with extensions and provides for custom attribute mapping. For more information, see [Directory extensions and custom attribute mapping](custom-attribute-mapping.md).
+ ## On-demand provisioning Azure AD Connect cloud sync allows you to test configuration changes, by applying these changes to a single user or group.
active-directory Active Directory Enterprise App Role Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-enterprise-app-role-management.md
Last updated 11/11/2021 + # Configure the role claim issued in the SAML token for enterprise applications
active-directory Active Directory Jwt Claims Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-jwt-claims-customization.md
Last updated 12/19/2022 + # Customize claims issued in the JSON web token (JWT) for enterprise applications (Preview)
active-directory Active Directory Saml Claims Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-saml-claims-customization.md
Last updated 12/19/2022 + # Customize claims issued in the SAML token for enterprise applications
active-directory Active Directory Schema Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-schema-extensions.md
Last updated 01/06/2023 -+ # Using directory extension attributes in claims
active-directory Reference Claims Mapping Policy Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-claims-mapping-policy-type.md
Last updated 01/06/2023 -+ # Claims mapping policy type
active-directory Workload Identity Federation Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-considerations.md
The creation of federated identity credentials is available on user-assigned man
- Germany North - Sweden South - Sweden Central-- Switzerland West-- Brazil Southeast - East Asia-- Southeast Asia-- South Africa West - Qatar Central-- Australia Central-- Australia Central2-- Norway West Support for creating federated identity credentials in these regions will be rolled out gradually except East Asia where support won't be provided.
active-directory Users Bulk Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-add.md
The rows in a downloaded CSV template are as follows:
- We don't recommend adding new columns to the template. Any additional columns you add are ignored and not processed. - We recommend that you download the latest version of the CSV template as often as possible. - Make sure to check there is no unintended whitespace before/after any field. For **User principal name**, having such whitespace would cause import failure.-- Ensure that values in **Initial password** comply with the currently active [password policy](../authentication/concept-sspr-policy.md#password-policies-that-only-apply-to-cloud-user-accounts).
+- Ensure that values in **Initial password** comply with the currently active [password policy](../authentication/concept-sspr-policy.md#username-policies).
## To create users in bulk
active-directory Check Status Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/check-status-workflow.md
Previously updated : 03/10/2022 Last updated : 01/26/2023
active-directory Configure Logic App Lifecycle Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/configure-logic-app-lifecycle-workflows.md
Previously updated : 08/28/2022- Last updated : 01/26/2023+
active-directory Create Lifecycle Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-lifecycle-workflow.md
Previously updated : 02/15/2022 Last updated : 01/26/2023
active-directory Customize Workflow Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/customize-workflow-schedule.md
Previously updated : 01/20/2022 Last updated : 01/26/2023
active-directory Delete Lifecycle Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/delete-lifecycle-workflow.md
Previously updated : 01/20/2022 Last updated : 01/26/2023
active-directory Entitlement Management Group Licenses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-group-licenses.md
na Previously updated : 08/18/2021- Last updated : 01/25/2023+
active-directory Entitlement Management Onboard External User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-onboard-external-user.md
na Previously updated : 08/18/2021- Last updated : 01/25/2023+
You can use entitlement management as a way of onboarding external users. This feature allows external users to request access to a set of resources, and lets you set up approvals before they gain access to your directory. For external users onboarded through entitlement management, you can manage their lifecycle through access packages. When their last access package expires, they'll be removed from your directory.
-In this tutorial, you work for WoodGrove Bank as an IT administrator. YouΓÇÖve been asked to create an access package to onboard partners from an outside organization that your business group is working with. They will need access to a Teams group called **External collaboration**.
+In this tutorial, you work for WoodGrove Bank as an IT administrator. You've been asked to create an access package to onboard partners from an outside organization that your business group is working with. They'll need access to a Teams group called **External collaboration**.
Approval is needed by an internal sponsor for collaborating organizations. Also, you've been informed that the partner's access needs to expire after 60 days. To use entitlement management, you must have one of the following licenses:
For more information, see [License requirements](entitlement-management-overview
**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager
-1. In the Azure portal, in the left navigation, click **Azure Active Directory**.
+1. In the Azure portal, in the left navigation, select **Azure Active Directory**.
-2. In the left menu, click **Identity Governance**.
+2. In the left menu, select **Identity Governance**.
-3. In the left menu, click **Access packages**. If you see Access denied, ensure that an Azure AD Premium P2 license is present in your directory.
+3. In the left menu, select **Access packages**. If you see Access denied, ensure that an Azure AD Premium P2 license is present in your directory.
-4. Click **New access package**.
+4. Select **New access package**.
5. On the **Basics** tab, enter the name **External user package** and description **Access for external users pending approval**.
For more information, see [License requirements](entitlement-management-overview
## Step 2: Configure resources
-1. Click **Next** to open the **Resource roles** tab.
+1. Select **Next** to open the **Resource roles** tab.
On this tab, you select the resources and the resource role to include in the access package.
-2. Click on **Groups and Teams** and search for your group **External collaboration**.
+2. Select **Groups and Teams** and search for your group **External collaboration**.
## Step 3: Configure requests
-1. Click **Next** to open the **Requests** tab.
+1. Select **Next** to open the **Requests** tab.
On this tab, you create a request policy. A *policy* defines the rules or guardrails to access an access package. You create a policy that allows a specific user in the resource directory to request this access package.
-2. In the **Users who can request access** section, click **For users not in your directory** and then click **All users (All connected organizations + any new external users)**.
+2. In the **Users who can request access** section, select **For users not in your directory** and then select **All users (All connected organizations + any new external users)**.
-3. Because any user who is not yet in your directory can view and submit a request for this access package, **Yes** is mandatory for the **Require approval** setting.
+3. Because any user who isn't yet in your directory can view and submit a request for this access package, **Yes** is mandatory for the **Require approval** setting.
4. The following settings allow you to configure how your approvals work for your external users:
For more information, see [License requirements](entitlement-management-overview
## Step 4: Configure requestor information
-1. Click **Next** to open the **Requestor information** tab
+1. Select **Next** to open the **Requestor information** tab.
 2. On this screen, you can ask additional questions to collect more information from your requestor. These questions are shown on their request form and can be set to required or optional. For now, you can leave these empty. ## Step 5: Configure lifecycle
-1. Click **Next** to open the **Lifecycle** tab
+1. Select **Next** to open the **Lifecycle** tab.
2. In the **Expiration** section, set **Access package assignment expire** to **Number of days**.
For more information, see [License requirements](entitlement-management-overview
## Step 6: Review and create your access package
-1. Click **Next** to open the **Review + Create** tab.
+1. Select **Next** to open the **Review + Create** tab.
2. On this screen, you can review the configuration for your access package before creating. If there are any issues, you can use the tabs to navigate to a specific point in the create experience to make edits.
-3. When you're happy with your selections, click on **Create**. After a few moments, you should see a notification that the access package was successfully created.
+3. When you're happy with your selections, select **Create**. After a few moments, you should see a notification that the access package was successfully created.
4. Once created, you'll be brought to the **Overview** page for your access package. You can find the **My Access portal link** and copy the value here. Share this link with your external users so they can request this package and start collaborating.
In this step, you can delete the **External user package** access package.
**Prerequisite role:** Global administrator, Identity Governance administrator or Access package manager
-1. In the **Azure portal**, in the left navigation, click **Azure Active Directory**.
+1. In the **Azure portal**, in the left navigation, select **Azure Active Directory**.
-2. In the left menu, click **Identity Governance**.
+2. In the left menu, select **Identity Governance**.
-3. In the left menu, click **Access Packages**.
+3. In the left menu, select **Access Packages**.
4. Open the **External user package** access package.
-5. Click **Resource Roles**.
+5. Select **Resource Roles**.
6. Select the **External collaboration** group you added to this access package, and in the **Details** pane, select **Remove resource role**. In the message that appears, select **Yes**.
active-directory Entitlement Management Reprocess Access Package Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-reprocess-access-package-assignments.md
na Previously updated : 06/25/2021 Last updated : 01/26/2023
active-directory Entitlement Management Reprocess Access Package Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-reprocess-access-package-requests.md
na Previously updated : 06/25/2021 Last updated : 01/26/2023
active-directory Entitlement Management Request Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-request-access.md
na Previously updated : 3/30/2022 Last updated : 01/26/2023
The first step is to sign in to the My Access portal where you can request acces
**Prerequisite role:** Requestor
-1. Look for an email or a message from the project or business manager you are working with. The email should include a link to the access package you will need access to. The link starts with `myaccess`, includes a directory hint, and ends with an access package ID. (For US Government, the domain may be `https://myaccess.microsoft.us` instead.)
+1. Look for an email or a message from the project or business manager you're working with. The email should include a link to the access package you'll need access to. The link starts with `myaccess`, includes a directory hint, and ends with an access package ID. (For US Government, the domain may be `https://myaccess.microsoft.us` instead.)
`https://myaccess.microsoft.com/@<directory_hint>#/access-packages/<access_package_id>`
Once you have found the access package in the My Access portal, you can submit a
1. To request access, you can either:
- 1. Click the row to see Access package details, and then select Request access.
+ 1. Select the row to see Access package details, and then select Request access.
- 1. Or click **Request access** directly.
+ 1. Or select **Request access** directly.
1. You may have to answer questions and provide business justification for your request. If there are questions that you need to answer, type in your responses in the fields.
Once you have found the access package in the My Access portal, you can submit a
![My Access portal - Request access](./media/entitlement-management-shared/my-access-request-access.png)
-1. When finished, click **Submit** to submit your request.
+1. When finished, select **Submit** to submit your request.
-1. Click **Request history** to see a list of your requests and the status.
+1. Select **Request history** to see a list of your requests and the status.
If the access package requires approval, the request is now in a pending approval state.
When you request access to an access package, your request might be denied or yo
1. Sign in to the **My Access** portal.
-1. Click **Request history** from the navigation menu to the left.
+1. Select **Request history** from the navigation menu to the left.
-1. Find the access package for which you are resubmitting a request.
+1. Find the access package for which you're resubmitting a request.
-1. Click the check mark to select the access package.
+1. Select the check mark to select the access package.
-1. Click the blue **View** link to the right of the selected access package.
+1. Select the blue **View** link to the right of the selected access package.
![Select access package and view link](./media/entitlement-management-request-access/resubmit-request-select-request-and-view.png)
When you request access to an access package, your request might be denied or yo
![Select resubmit button](./media/entitlement-management-request-access/resubmit-request-select-resubmit.png)
-1. Click the **Resubmit** button at the bottom of the pane.
+1. Select the **Resubmit** button at the bottom of the pane.
## Cancel a request
If you submit an access request and the request is still in the **pending approv
**Prerequisite role:** Requestor
-1. In the My Access portal, on the left, click **Request history** to see a list of your requests and the status.
+1. In the My Access portal, on the left, select **Request history** to see a list of your requests and the status.
-1. Click the **View** link for the request you want to cancel.
+1. Select the **View** link for the request you want to cancel.
-1. If the request is still in the **pending approval** state, you can click **Cancel request** to cancel the request.
+1. If the request is still in the **pending approval** state, you can select **Cancel request** to cancel the request.
![My Access portal - Cancel request](./media/entitlement-management-request-access/my-access-cancel-request.png)
-1. Click **Request history** to confirm the request was canceled.
+1. Select **Request history** to confirm the request was canceled.
## Next steps
active-directory Entitlement Management Request Approve https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-request-approve.md
na Previously updated : 06/18/2020 Last updated : 01/26/2023
active-directory Entitlement Management Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-troubleshoot.md
na Previously updated : 12/23/2020 Last updated : 01/26/2023
This article describes some items you should check to help you troubleshoot enti
## Administration
-* If you get an access denied message when configuring entitlement management, and you are a Global administrator, ensure that your directory has an [Azure AD Premium P2 (or EMS E5) license](entitlement-management-overview.md#license-requirements). If you've recently renewed an expired Azure AD Premium P2 subscription, then it may take 8 hours for this license renewal to be visible.
+* If you get an access denied message when configuring entitlement management, and you're a Global administrator, ensure that your directory has an [Azure AD Premium P2 (or EMS E5) license](entitlement-management-overview.md#license-requirements). If you've recently renewed an expired Azure AD Premium P2 subscription, then it may take 8 hours for this license renewal to be visible.
-* If your tenant's Azure AD Premium P2 license has expired, then you will not be able to process new access requests or perform access reviews.
+* If your tenant's Azure AD Premium P2 license has expired, then you won't be able to process new access requests or perform access reviews.
-* If you get an access denied message when creating or viewing access packages, and you are a member of a Catalog creator group, you must [create a catalog](entitlement-management-catalog-create.md) prior to creating your first access package.
+* If you get an access denied message when creating or viewing access packages, and you're a member of a Catalog creator group, you must [create a catalog](entitlement-management-catalog-create.md) prior to creating your first access package.
## Resources
-* Roles for applications are defined by the application itself and are managed in Azure AD. If an application does not have any resource roles, entitlement management assigns users to a **Default Access** role.
+* Roles for applications are defined by the application itself and are managed in Azure AD. If an application doesn't have any resource roles, entitlement management assigns users to a **Default Access** role.
- Note that the Azure portal may also show service principals for services that cannot be selected as applications. In particular, **Exchange Online** and **SharePoint Online** are services, not applications that have resource roles in the directory, so they cannot be included in an access package. Instead, use group-based licensing to establish an appropriate license for a user who needs access to those services.
+ The Azure portal may also show service principals for services that can't be selected as applications. In particular, **Exchange Online** and **SharePoint Online** are services, not applications that have resource roles in the directory, so they can't be included in an access package. Instead, use group-based licensing to establish an appropriate license for a user who needs access to those services.
-* Applications which only support Personal Microsoft Account users for authentication, and do not support organizational accounts in your directory, do not have application roles and cannot be added to access package catalogs.
+* Applications that only support Personal Microsoft Account users for authentication, and don't support organizational accounts in your directory, don't have application roles and can't be added to access package catalogs.
-* For a group to be a resource in an access package, it must be able to be modifiable in Azure AD. Groups that originate in an on-premises Active Directory cannot be assigned as resources because their owner or member attributes cannot be changed in Azure AD. Groups that originate in Exchange Online as Distribution groups cannot be modified in Azure AD either.
+* For a group to be a resource in an access package, it must be modifiable in Azure AD. Groups that originate in an on-premises Active Directory can't be assigned as resources because their owner or member attributes can't be changed in Azure AD. Groups that originate in Exchange Online as Distribution groups can't be modified in Azure AD either.
-* SharePoint Online document libraries and individual documents cannot be added as resources. Instead, create an [Azure AD security group](../fundamentals/active-directory-groups-create-azure-portal.md), include that group and a site role in the access package, and in SharePoint Online use that group to control access to the document library or document.
+* SharePoint Online document libraries and individual documents can't be added as resources. Instead, create an [Azure AD security group](../fundamentals/active-directory-groups-create-azure-portal.md), include that group and a site role in the access package, and in SharePoint Online use that group to control access to the document library or document.
* If there are users that have already been assigned to a resource that you want to manage with an access package, be sure that the users are assigned to the access package with an appropriate policy. For example, you might want to include a group in an access package that already has users in the group. If those users in the group require continued access, they must have an appropriate policy for the access packages so that they don't lose their access to the group. You can assign the access package by either asking the users to request the access package containing that resource, or by directly assigning them to the access package. For more information, see [Change request and approval settings for an access package](entitlement-management-access-package-request-policy.md).
-* When you remove a member of a team, they are removed from the Microsoft 365 Group as well. Removal from the team's chat functionality might be delayed. For more information, see [Group membership](/microsoftteams/office-365-groups#group-membership).
+* When you remove a member of a team, they're removed from the Microsoft 365 Group as well. Removal from the team's chat functionality might be delayed. For more information, see [Group membership](/microsoftteams/office-365-groups#group-membership).
## Access packages
This article describes some items you should check to help you troubleshoot entitlement management.
## External users
-* When an external user wants to request access to an access package, make sure they are using the **My Access portal link** for the access package. For more information, see [Share link to request an access package](entitlement-management-access-package-settings.md). If an external user just visits **myaccess.microsoft.com** and does not use the full My Access portal link, then they will see the access packages available to them in their own organization and not in your organization.
+* When an external user wants to request access to an access package, make sure they're using the **My Access portal link** for the access package. For more information, see [Share link to request an access package](entitlement-management-access-package-settings.md). If an external user just visits **myaccess.microsoft.com** and doesn't use the full My Access portal link, then they'll see the access packages available to them in their own organization and not in your organization.
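If you distribute these links programmatically, note that the My Access portal link embeds your tenant, which is why the bare **myaccess.microsoft.com** URL falls back to the user's own organization. The helper below is illustrative only; the link format it assumes is not guaranteed, so always copy the real link from the access package's properties in the portal:

```python
def my_access_link(tenant_domain: str, access_package_id: str) -> str:
    """Illustrative My Access portal link.

    The format below is an assumption for demonstration purposes --
    copy the actual link from the access package rather than building it.
    """
    return (f"https://myaccess.microsoft.com/@{tenant_domain}"
            f"#/access-packages/{access_package_id}")

# Hypothetical tenant domain and access package ID:
link = my_access_link("contoso.onmicrosoft.com",
                      "00000000-0000-0000-0000-000000000000")
```

Because the tenant segment is part of the link, a user who follows it authenticates against your directory rather than their own.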
* If an external user is unable to request access to an access package or is unable to access resources, be sure to check your [settings for external users](entitlement-management-external-users.md#settings-for-external-users).
-* If a new external user, that has not previously signed in your directory, receives an access package including a SharePoint Online site, their access package will show as not fully delivered until their account is provisioned in SharePoint Online. For more information about sharing settings, see [Review your SharePoint Online external sharing settings](entitlement-management-external-users.md#review-your-sharepoint-online-external-sharing-settings).
+* If a new external user that hasn't previously signed in to your directory receives an access package including a SharePoint Online site, their access package will show as not fully delivered until their account is provisioned in SharePoint Online. For more information about sharing settings, see [Review your SharePoint Online external sharing settings](entitlement-management-external-users.md#review-your-sharepoint-online-external-sharing-settings).
## Requests
-* When a user wants to request access to an access package, be sure that they are using the **My Access portal link** for the access package. For more information, see [Share link to request an access package](entitlement-management-access-package-settings.md).
+* When a user wants to request access to an access package, be sure that they're using the **My Access portal link** for the access package. For more information, see [Share link to request an access package](entitlement-management-access-package-settings.md).
-* If you open the My Access portal with your browser set to in-private or incognito mode, this might conflict with the sign-in behavior. We recommend that you do not use in-private or incognito mode for your browser when you visit the My Access portal.
+* If you open the My Access portal with your browser set to in-private or incognito mode, this might conflict with the sign-in behavior. We recommend that you don't use in-private or incognito mode for your browser when you visit the My Access portal.
-* When a user who is not yet in your directory signs in to the My Access portal to request an access package, be sure they authenticate using their organizational account. The organizational account can be either an account in the resource directory, or in a directory that is included in one of the policies of the access package. If the user's account is not an organizational account, or the directory where they authenticate is not included in the policy, then the user will not see the access package. For more information, see [Request access to an access package](entitlement-management-request-access.md).
+* When a user who isn't yet in your directory signs in to the My Access portal to request an access package, be sure they authenticate using their organizational account. The organizational account can be either an account in the resource directory, or in a directory that is included in one of the policies of the access package. If the user's account isn't an organizational account, or the directory where they authenticate isn't included in the policy, then the user won't see the access package. For more information, see [Request access to an access package](entitlement-management-request-access.md).
-* If a user is blocked from signing in to the resource directory, they will not be able to request access in the My Access portal. Before the user can request access, you must remove the sign-in block from the user's profile. To remove the sign-in block, in the Azure portal, click **Azure Active Directory**, click **Users**, click the user, and then click **Profile**. Edit the **Settings** section and change **Block sign in** to **No**. For more information, see [Add or update a user's profile information using Azure Active Directory](../fundamentals/active-directory-users-profile-azure-portal.md). You can also check if the user was blocked due to an [Identity Protection policy](../identity-protection/howto-identity-protection-remediate-unblock.md).
+* If a user is blocked from signing in to the resource directory, they won't be able to request access in the My Access portal. Before the user can request access, you must remove the sign-in block from the user's profile. To remove the sign-in block, in the Azure portal, select **Azure Active Directory**, select **Users**, select the user, and then select **Profile**. Edit the **Settings** section and change **Block sign in** to **No**. For more information, see [Add or update a user's profile information using Azure Active Directory](../fundamentals/active-directory-users-profile-azure-portal.md). You can also check if the user was blocked due to an [Identity Protection policy](../identity-protection/howto-identity-protection-remediate-unblock.md).
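Removing the sign-in block can also be scripted against Microsoft Graph instead of clicking through the portal; the `accountEnabled` property on the user object corresponds to the **Block sign in** toggle. A minimal sketch in Python that only builds the request (token acquisition and the HTTP call are omitted, and the user ID is a placeholder):

```python
import json

GRAPH = "https://graph.microsoft.com/v1.0"

def build_unblock_request(user_id: str):
    """Return the HTTP method, URL, and JSON body that clear a sign-in block."""
    url = f"{GRAPH}/users/{user_id}"
    # accountEnabled=False is the API-side equivalent of "Block sign in" = Yes,
    # so PATCHing it back to True re-enables sign-in.
    body = json.dumps({"accountEnabled": True})
    return "PATCH", url, body

# Placeholder object ID -- substitute the blocked user's real ID.
method, url, body = build_unblock_request("00000000-0000-0000-0000-000000000000")
```

Sending the PATCH requires a Graph token with an appropriately privileged permission for enabling accounts; check the Graph user API reference for the exact permission your tenant requires.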
-* In the My Access portal, if a user is both a requestor and an approver, they will not see their request for an access package on the **Approvals** page. This behavior is intentional - a user cannot approve their own request. Ensure that the access package they are requesting has additional approvers configured on the policy. For more information, see [Change request and approval settings for an access package](entitlement-management-access-package-request-policy.md).
+* In the My Access portal, if a user is both a requestor and an approver, they won't see their request for an access package on the **Approvals** page. This behavior is intentional - a user can't approve their own request. Ensure that the access package they're requesting has additional approvers configured on the policy. For more information, see [Change request and approval settings for an access package](entitlement-management-access-package-request-policy.md).
### View a request's delivery errors

**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
-1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
+1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
-1. In the left menu, click **Access packages** and then open the access package.
+1. In the left menu, select **Access packages** and then open the access package.
-1. Click **Requests**.
+1. Select **Requests**.
1. Select the request you want to view.
If there are any delivery errors, a count of them is displayed in the request's details pane.
-1. Click the count to see all of the request's delivery errors.
+1. Select the count to see all of the request's delivery errors.
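The same delivery-failure information can be pulled with the Microsoft Graph entitlement management API instead of the portal. The sketch below only constructs the list URL; the `deliveryFailed` state value is taken from the Graph reference and should be verified there before use:

```python
from urllib.parse import quote

GRAPH = "https://graph.microsoft.com/v1.0"

def failed_requests_url(state: str = "deliveryFailed") -> str:
    """Build a Graph URL listing assignment requests in the given state."""
    filter_expr = f"state eq '{state}'"  # OData filter on the request state
    return (f"{GRAPH}/identityGovernance/entitlementManagement/"
            f"assignmentRequests?$filter={quote(filter_expr)}")

failed_url = failed_requests_url()
```

A GET on this URL (with a bearer token that has entitlement management read permission) returns the matching requests, whose payloads include the failure details shown in the portal.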
### Reprocess a request
You can only reprocess a request that has a status of **Delivery failed** or **Partially delivered**.
**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
-1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
+1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
-1. In the left menu, click **Access packages** and then open the access package.
+1. In the left menu, select **Access packages** and then open the access package.
-1. Click **Requests**.
+1. Select **Requests**.
-1. Click the request you want to reprocess.
+1. Select the request you want to reprocess.
-1. In the request details pane, click **Reprocess request**.
+1. In the request details pane, select **Reprocess request**.
![Reprocess a failed request](./media/entitlement-management-troubleshoot/reprocess-request.png)

### Cancel a pending request
-You can only cancel a pending request that has not yet been delivered or whose delivery has failed.The **cancel** button would be grayed out otherwise.
+You can only cancel a pending request that hasn't yet been delivered or whose delivery has failed. Otherwise, the **Cancel** button is grayed out.
**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
-1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
+1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
-1. In the left menu, click **Access packages** and then open the access package.
+1. In the left menu, select **Access packages** and then open the access package.
-1. Click **Requests**.
+1. Select **Requests**.
-1. Click the request you want to cancel.
+1. Select the request you want to cancel.
-1. In the request details pane, click **Cancel request**.
+1. In the request details pane, select **Cancel request**.
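Reprocessing and canceling can also be driven through Microsoft Graph; the **Reprocess request** and **Cancel request** buttons map to actions on the assignment request object. A hedged sketch that only builds the action URLs (the action names match the Graph reference for entitlement management, but verify them before relying on this):

```python
GRAPH = "https://graph.microsoft.com/v1.0"

def request_action_url(request_id: str, action: str) -> str:
    """URL for a POST-only action (reprocess or cancel) on an assignment request."""
    if action not in {"reprocess", "cancel"}:
        raise ValueError(f"unsupported action: {action}")
    return (f"{GRAPH}/identityGovernance/entitlementManagement/"
            f"assignmentRequests/{request_id}/{action}")

# Hypothetical request ID; both actions take a POST with an empty body.
cancel_url = request_action_url("00000000-0000-0000-0000-000000000000", "cancel")
```

As in the portal, the same status constraints apply: a reprocess only succeeds for failed or partially delivered requests, and a cancel only for requests that haven't yet been delivered.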
## Multiple policies

* Entitlement management follows least privilege best practices. When a user requests access to an access package that has multiple policies that apply, entitlement management includes logic to help ensure stricter or more specific policies are prioritized over generic policies. If a policy is generic, entitlement management might not display the policy to the requestor or might automatically select a stricter policy.
-* For example, consider an access package with two policies for internal employees in which both policies apply to the requestor. The first policy is for specific users that include the requestor. The second policy is for all users in a directory that the requestor is a member of. In this scenario, the first policy is automatically selected for the requestor because it is more strict. The requestor is not given the option to select the second policy.
+* For example, consider an access package with two policies for internal employees in which both policies apply to the requestor. The first policy is for specific users that include the requestor. The second policy is for all users in a directory that the requestor is a member of. In this scenario, the first policy is automatically selected for the requestor because it's stricter. The requestor isn't given the option to select the second policy.
* When multiple policies apply, the policy that's automatically selected, or the policies that are displayed to the requestor, are determined by the following priority logic:
active-directory Identity Governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-overview.md
Title: Identity Governance - Microsoft Entra | Microsoft Docs
description: Microsoft Entra Identity Governance allows you to balance your organization's need for security and employee productivity with the right processes and visibility. documentationcenter: ''-+ editor: markwahl-msft
na Previously updated : 8/10/2022- Last updated : 01/26/2023+
active-directory Lifecycle Workflow Audits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-audits.md
+ Previously updated : 12/02/2022 Last updated : 01/26/2023
active-directory Lifecycle Workflow Extensibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-extensibility.md
+ Previously updated : 04/12/2022 Last updated : 01/26/2023
active-directory Lifecycle Workflow History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-history.md
+ Previously updated : 08/01/2022 Last updated : 01/26/2023
active-directory Lifecycle Workflow Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-tasks.md
+ Previously updated : 03/23/2022- Last updated : 01/26/2023 # Lifecycle Workflow built-in tasks (preview)
active-directory Lifecycle Workflow Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-templates.md
+ Previously updated : 07/06/2022 Last updated : 01/26/2023
active-directory Lifecycle Workflow Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-versioning.md
+ Previously updated : 07/06/2022 Last updated : 01/26/2023
active-directory On Demand Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/on-demand-workflow.md
+ Previously updated : 03/04/2022- Last updated : 01/26/2023
active-directory Trigger Custom Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/trigger-custom-task.md
+ Previously updated : 07/05/2022 Last updated : 01/26/2023
active-directory What Are Lifecycle Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/what-are-lifecycle-workflows.md
Previously updated : 01/20/2022 Last updated : 01/26/2023
active-directory Choose Ad Authn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/choose-ad-authn.md
description: This guide helps CEOs, CIOs, CISOs, Chief Identity Architects, Ente
keywords: Previously updated : 01/05/2022 Last updated : 01/26/2023
active-directory Cloud Governed Management For On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-governed-management-for-on-premises.md
na Previously updated : 01/05/2022 Last updated : 01/26/2023
active-directory Concept Azure Ad Connect Sync Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/concept-azure-ad-connect-sync-architecture.md
na Previously updated : 01/05/2022 Last updated : 01/26/2023
active-directory Concept Azure Ad Connect Sync Declarative Provisioning Expressions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/concept-azure-ad-connect-sync-declarative-provisioning-expressions.md
na Previously updated : 01/05/2022 Last updated : 01/26/2023
active-directory Concept Azure Ad Connect Sync Declarative Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/concept-azure-ad-connect-sync-declarative-provisioning.md
na Previously updated : 01/05/2022 Last updated : 01/26/2023
active-directory Concept Azure Ad Connect Sync Default Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/concept-azure-ad-connect-sync-default-configuration.md
na Previously updated : 01/05/2022 Last updated : 01/26/2023
active-directory Deprecated Azure Ad Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/deprecated-azure-ad-connect.md
Previously updated : 12/05/2022 Last updated : 01/26/2023
active-directory How To Bypassdirsyncoverrides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-bypassdirsyncoverrides.md
Title: How to use the BypassDirSyncOverrides feature of an Azure AD tenant
description: Describes how to use bypassdirsyncoverrides tenant feature to restore synchronization of Mobile and OtherMobile attributes from on-premises Active Directory. Previously updated : 08/11/2022 Last updated : 01/26/2023
active-directory How To Connect Azure Ad Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-azure-ad-trust.md
na Previously updated : 03/24/2022 Last updated : 01/26/2023
active-directory How To Connect Azureadaccount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-azureadaccount.md
na Previously updated : 01/05/2022 Last updated : 01/26/2023
active-directory How To Connect Configure Ad Ds Connector Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-configure-ad-ds-connector-account.md
Previously updated : 01/05/2022 Last updated : 01/26/2023
active-directory How To Connect Device Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-device-options.md
na Previously updated : 01/05/2022 Last updated : 01/26/2023
active-directory How To Connect Emergency Ad Fs Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-emergency-ad-fs-certificate-rotation.md
Previously updated : 01/05/2022 Last updated : 01/26/2023
active-directory How To Connect Fed Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-compatibility.md
na Previously updated : 01/05/2022 Last updated : 01/26/2023
active-directory How To Connect Fed Group Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-group-claims.md
Previously updated : 04/05/2022 Last updated : 01/26/2023
active-directory How To Connect Fed Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-management.md
na Previously updated : 01/05/2022 Last updated : 01/26/2023
active-directory How To Connect Fed O365 Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-o365-certs.md
na Previously updated : 10/13/2022 Last updated : 01/26/2023
active-directory How To Connect Fed Saml Idp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-saml-idp.md
na Previously updated : 03/29/2022 Last updated : 01/26/2023
active-directory How To Connect Fed Sha256 Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-sha256-guidance.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Fed Single Adfs Multitenant Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-single-adfs-multitenant-federation.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Fed Ssl Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-ssl-update.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Fed Whatis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-whatis.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Fix Default Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fix-default-rules.md
Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Group Writeback Disable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-disable.md
Previously updated : 06/15/2022 Last updated : 01/26/2023
active-directory How To Connect Group Writeback Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-enable.md
Previously updated : 06/15/2022 Last updated : 01/26/2023
active-directory How To Connect Group Writeback V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-v2.md
Previously updated : 10/12/2022 Last updated : 01/26/2023
active-directory How To Connect Health Ad Fs Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-ad-fs-sign-in.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Health Adds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-adds.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Health Adfs Risky Ip Workbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-adfs-risky-ip-workbook.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Health Adfs Risky Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-adfs-risky-ip.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Health Adfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-adfs.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Health Agent Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-agent-install.md
na Previously updated : 04/27/2022 Last updated : 01/26/2023
active-directory How To Connect Health Data Freshness https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-data-freshness.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Health Data Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-data-retrieval.md
na Previously updated : 09/02/2020 Last updated : 01/26/2023
active-directory How To Connect Health Diagnose Sync Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-diagnose-sync-errors.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Health Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-operations.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Health Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-sync.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Import Export Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-import-export-config.md
Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Install Automatic Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-automatic-upgrade.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Install Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-custom.md
ms.assetid: 6d42fb79-d9cf-48da-8445-f482c4c536af
Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Install Existing Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-existing-database.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Install Existing Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-existing-tenant.md
Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Install Express https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-express.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Install Move Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-move-db.md
Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Install Multiple Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-multiple-domains.md
na Previously updated : 03/09/2022 Last updated : 01/26/2023
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
na Previously updated : 02/23/2022 Last updated : 01/26/2023
active-directory How To Connect Install Roadmap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-roadmap.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Install Select Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-select-installation.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Install Sql Delegation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-sql-delegation.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Installation Wizard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-installation-wizard.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Migrate Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-migrate-groups.md
Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Modify Group Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-modify-group-writeback.md
Previously updated : 06/15/2022 Last updated : 01/26/2023
active-directory How To Connect Monitor Federation Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-monitor-federation-changes.md
Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-password-hash-synchronization.md
ms.assetid: 05f16c3e-9d23-45dc-afca-3d0fa9dbf501
Previously updated : 01/21/2022 Last updated : 01/26/2023 search.appverid:
active-directory How To Connect Post Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-post-installation.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Pta Current Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta-current-limitations.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Pta Disable Do Not Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta-disable-do-not-configure.md
Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Pta How It Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta-how-it-works.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Pta Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta-quick-start.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Pta Security Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta-security-deep-dive.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Pta Upgrade Preview Authentication Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta-upgrade-preview-authentication-agents.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Pta User Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta-user-privacy.md
na Previously updated : 07/23/2018 Last updated : 01/26/2023
active-directory How To Connect Pta https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Selective Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-selective-password-hash-synchronization.md
Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Single Object Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-single-object-sync.md
Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Sso How It Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sso-how-it-works.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Sso Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sso-quick-start.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Sso User Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sso-user-privacy.md
na Previously updated : 05/21/2018 Last updated : 01/26/2023
active-directory How To Connect Staged Rollout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-staged-rollout.md
Previously updated : 08/24/2022 Last updated : 01/26/2023
active-directory How To Connect Sync Best Practices Changing Default Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-best-practices-changing-default-configuration.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Sync Change Addsacct Pass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-change-addsacct-pass.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Sync Change Serviceacct Pass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-change-serviceacct-pass.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Sync Change The Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-change-the-configuration.md
ms.assetid: 7b9df836-e8a5-4228-97da-2faec9238b31
Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Sync Configure Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-configure-filtering.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Sync Endpoint Api V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-endpoint-api-v2.md
editor: ''
Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Sync Feature Directory Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-feature-directory-extensions.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Sync Feature Preferreddatalocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-feature-preferreddatalocation.md
Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Sync Feature Prevent Accidental Deletes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-feature-prevent-accidental-deletes.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Sync Feature Scheduler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-feature-scheduler.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Sync Recycle Bin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-recycle-bin.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Sync Service Manager Ui Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-service-manager-ui-connectors.md
na Previously updated : 07/13/2017 Last updated : 01/26/2023
active-directory How To Connect Sync Service Manager Ui Mvdesigner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-service-manager-ui-mvdesigner.md
na Previously updated : 07/13/2017 Last updated : 01/26/2023
active-directory How To Connect Sync Service Manager Ui Mvsearch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-service-manager-ui-mvsearch.md
na Previously updated : 07/13/2017 Last updated : 01/26/2023
active-directory How To Connect Sync Service Manager Ui Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-service-manager-ui-operations.md
na Previously updated : 07/13/2017 Last updated : 01/26/2023
active-directory How To Connect Sync Service Manager Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-service-manager-ui.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Sync Staging Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-staging-server.md
na Previously updated : 5/18/2022 Last updated : 01/26/2023
active-directory How To Connect Sync Whatis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-whatis.md
na Previously updated : 11/08/2017 Last updated : 01/26/2023
The sync service consists of two components, the on-premises **Azure AD Connect
| [Functions Reference](reference-connect-sync-functions-reference.md) |Lists all functions available in declarative provisioning. | ## Additional Resources
-* [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md)
+* [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md)
active-directory How To Connect Syncservice Duplicate Attribute Resiliency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-syncservice-duplicate-attribute-resiliency.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Syncservice Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-syncservice-features.md
na Previously updated : 01/21/2022 Last updated : 01/26/2023
active-directory How To Connect Syncservice Shadow Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-syncservice-shadow-attributes.md
Previously updated : 09/29/2021 Last updated : 01/26/2023
active-directory How To Connect Uninstall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-uninstall.md
Previously updated : 12/09/2020 Last updated : 01/26/2023
active-directory How To Dirsync Upgrade Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-dirsync-upgrade-get-started.md
na Previously updated : 07/13/2017 Last updated : 01/26/2023
active-directory How To Upgrade Previous Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-upgrade-previous-version.md
na Previously updated : 04/08/2019 Last updated : 01/26/2023
This error occurs because the current Azure AD Connect configuration is not supp
If you want to install a newer version of Azure AD Connect: close the Azure AD Connect wizard, uninstall the existing Azure AD Connect, and perform a clean install of the newer Azure AD Connect. ## Next steps
-Learn more about [integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
+Learn more about [integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
active-directory Plan Connect Performance Factors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-connect-performance-factors.md
Previously updated : 10/06/2018 Last updated : 01/26/2023
active-directory Plan Connect Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-connect-topologies.md
na Previously updated : 01/14/2022 Last updated : 01/26/2023
active-directory Plan Connect User Signin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-connect-user-signin.md
na Previously updated : 05/31/2018 Last updated : 01/26/2023
active-directory Plan Connect Userprincipalname https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-connect-userprincipalname.md
description: The following document describes how the UserPrincipalName attribut
Previously updated : 06/26/2018 Last updated : 01/26/2023
Azure AD Tenant user object:
## Next Steps - [Integrate your on-premises directories with Azure Active Directory](whatis-hybrid-identity.md)-- [Custom installation of Azure AD Connect](how-to-connect-install-custom.md)
+- [Custom installation of Azure AD Connect](how-to-connect-install-custom.md)
active-directory Plan Hybrid Identity Design Considerations Accesscontrol Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-accesscontrol-requirements.md
na Previously updated : 05/30/2018 Last updated : 01/26/2023
active-directory Plan Hybrid Identity Design Considerations Business Needs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-business-needs.md
na Previously updated : 04/29/2019 Last updated : 01/26/2023
active-directory Plan Hybrid Identity Design Considerations Contentmgt Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-contentmgt-requirements.md
na Previously updated : 04/29/2019 Last updated : 01/26/2023
active-directory Plan Hybrid Identity Design Considerations Dataprotection Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-dataprotection-requirements.md
na Previously updated : 04/29/2019 Last updated : 01/26/2023
active-directory Add Application Portal Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-configure.md
Previously updated : 09/22/2021 Last updated : 01/26/2023
+zone_pivot_groups: enterprise-apps-minus-aad-powershell
+ #Customer intent: As an administrator of an Azure AD tenant, I want to configure the properties of an enterprise application.
To configure the properties of an enterprise application, you need:
Application properties control how the application is represented and how the application is accessed.

To configure the application properties:

1. Go to the [Azure Active Directory Admin Center](https://aad.portal.azure.com) and sign in using one of the roles listed in the prerequisites.
1. In the left menu, select **Enterprise applications**. The **All applications** pane opens and displays a list of the applications in your Azure AD tenant. Search for and select the application that you want to use.
1. In the **Manage** section, select **Properties** to open the **Properties** pane for editing.
-1. Configure the properties based on the needs of your application.
+1. On the **Properties** pane, you may want to configure the following properties for your application:
+ - Logo
+    - User sign-in options
+ - App visibility to users
+ - Set available URL options
+ - Choose whether app assignment is required
+
++
+Use the following Microsoft Graph PowerShell script to configure basic application properties.
+
+You'll need to consent to the `Application.ReadWrite.All` permission.
+
+```powershell
+
+Import-Module Microsoft.Graph.Applications
+
+# Sign in first and consent to the required Application.ReadWrite.All scope.
+Connect-MgGraph -Scopes 'Application.ReadWrite.All'
+
+# Object ID of the application to update (the example ID used in this article).
+$applicationId = '0d0021e2-eaab-4b9f-a5ad-38c55337d63e'
+
+$params = @{
+ Tags = @(
+ "HR"
+ "Payroll"
+ "HideApp"
+ )
+ Info = @{
+ LogoUrl = "https://cdn.pixabay.com/photo/2016/03/21/23/25/link-1271843_1280.png"
+ MarketingUrl = "https://www.contoso.com/app/marketing"
+ PrivacyStatementUrl = "https://www.contoso.com/app/privacy"
+ SupportUrl = "https://www.contoso.com/app/support"
+ TermsOfServiceUrl = "https://www.contoso.com/app/termsofservice"
+ }
+ Web = @{
+ HomePageUrl = "https://www.contoso.com/"
+ LogoutUrl = "https://www.contoso.com/frontchannel_logout"
+ RedirectUris = @(
+ "https://localhost"
+ )
+ }
+ ServiceManagementReference = "Owners aliases: Finance @ contosofinance@contoso.com; The Phone Company HR consulting @ hronsite@thephone-company.com;"
+}
+
+Update-MgApplication -ApplicationId $applicationId -BodyParameter $params
+```
++
+To configure the basic properties of an application, sign in to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) with one of the roles listed in the prerequisite section.
+
+You'll need to consent to the `Application.ReadWrite.All` permission.
+
+Run the following Microsoft Graph query to configure basic application properties.
+
+```http
+PATCH https://graph.microsoft.com/v1.0/applications/0d0021e2-eaab-4b9f-a5ad-38c55337d63e
+Content-type: application/json
+
+{
+ "tags": [
+ "HR",
+ "Payroll",
+ "HideApp"
+ ],
+ "info": {
+ "logoUrl": "https://cdn.pixabay.com/photo/2016/03/21/23/25/link-1271843_1280.png",
+ "marketingUrl": "https://www.contoso.com/app/marketing",
+ "privacyStatementUrl": "https://www.contoso.com/app/privacy",
+ "supportUrl": "https://www.contoso.com/app/support",
+ "termsOfServiceUrl": "https://www.contoso.com/app/termsofservice"
+ },
+ "web": {
+ "homePageUrl": "https://www.contoso.com/",
+ "logoutUrl": "https://www.contoso.com/frontchannel_logout",
+ "redirectUris": [
+ "https://localhost"
+ ]
+ },
+ "serviceManagementReference": "Owners aliases: Finance @ contosofinance@contoso.com; The Phone Company HR consulting @ hronsite@thephone-company.com;"
+}
+```
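The same update can be issued from any HTTP client once an access token with `Application.ReadWrite.All` has been acquired. As a hypothetical sketch (the helper name is an assumption, not part of the article), the pieces of the request can be assembled like this:

```python
import json

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def build_update_request(app_object_id: str, properties: dict):
    """Assemble the URL, headers, and JSON body for the PATCH shown above."""
    url = f"{GRAPH_BASE}/applications/{app_object_id}"
    headers = {"Content-Type": "application/json"}
    return url, headers, json.dumps(properties)

# Same object ID and a subset of the properties used in the article's example.
url, headers, body = build_update_request(
    "0d0021e2-eaab-4b9f-a5ad-38c55337d63e",
    {
        "tags": ["HR", "Payroll", "HideApp"],
        "web": {"homePageUrl": "https://www.contoso.com/"},
    },
)
```

Sending the request is then a single `PATCH` with an `Authorization: Bearer <token>` header added to these headers.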
## Use Microsoft Graph to configure application properties
-You can also configure properties of both app registrations and enterprise applications (service principals) through Microsoft Graph. These can include basic properties, permissions, and role assignments. For more information, see [Create and manage an Azure AD application using Microsoft Graph](/graph/tutorial-applications-basics#configure-other-basic-properties-for-your-app).
+You can also configure other advanced properties of both app registrations and enterprise applications (service principals) through Microsoft Graph. These include properties such as permissions and role assignments. For more information, see [Create and manage an Azure AD application using Microsoft Graph](/graph/tutorial-applications-basics#configure-other-basic-properties-for-your-app).
## Next steps
active-directory Assign App Owners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/assign-app-owners.md
Previously updated : 12/05/2022 Last updated : 01/26/2023
-#Customer intent: As an Azure AD administrator, I want to assign owners to enterprise applications.
+zone_pivot_groups: enterprise-apps-minus-aad-powershell
+#Customer intent: As an Azure AD administrator, I want to assign owners to enterprise applications.
# Assign enterprise application owners
-As an [owner of an enterprise application](overview-assign-app-owners.md) in Azure Active Directory (Azure AD), a user can manage the organization-specific configuration of it, such as single sign-on, provisioning, and user assignments. An owner can also add or remove other owners. Unlike Global Administrators, owners can manage only the enterprise applications they own. In this article, you learn how to assign an owner of an application.
+An [owner of an enterprise application](overview-assign-app-owners.md) in Azure Active Directory (Azure AD) can manage the organization-specific configuration of the application, such as single sign-on, provisioning, and user assignments. An owner can also add or remove other owners. Unlike Global Administrators, owners can manage only the enterprise applications they own. In this article, you learn how to assign an owner of an application.
## Assign an owner

To assign an owner to an enterprise application:

1. Sign in to [your Azure AD organization](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) with an account that is eligible for the **Application Administrator** role or the **Cloud Application Administrator** role for the organization.
To assign an owner to an enterprise application:
4. Search for and select the user account that you want to be an owner of the application.
5. Click **Select** to add the user account that you chose as an owner of the application.
+Use the following Microsoft Graph PowerShell cmdlet to add an owner to an enterprise application.
+
+You'll need to consent to the `Application.ReadWrite.All` permission.
+
+In the following example, the user's object ID is 8afc02cb-4d62-4dba-b536-9f6d73e9be26 and the service principal's object ID is 46e6adf4-a9cf-4b60-9390-0ba6fb00bf6b.
+
+```powershell
+Import-Module Microsoft.Graph.Applications
+
+# Sign in first and consent to the required Application.ReadWrite.All scope.
+Connect-MgGraph -Scopes 'Application.ReadWrite.All'
+
+$params = @{
+ "@odata.id" = "https://graph.microsoft.com/v1.0/directoryObjects/8afc02cb-4d62-4dba-b536-9f6d73e9be26"
+}
+
+New-MgServicePrincipalOwnerByRef -ServicePrincipalId '46e6adf4-a9cf-4b60-9390-0ba6fb00bf6b' -BodyParameter $params
+```
++
+To assign an owner to an application, sign in to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) with one of the roles listed in the prerequisite section.
+
+You'll need to consent to the `Application.ReadWrite.All` permission.
+
+Run the following Microsoft Graph query to assign an owner to an application. You need the object ID of the user you want to assign the application to. In the following example, the user's object ID is 8afc02cb-4d62-4dba-b536-9f6d73e9be26 and the appId is 46e6adf4-a9cf-4b60-9390-0ba6fb00bf6b.
+
+```http
+POST https://graph.microsoft.com/v1.0/servicePrincipals(appId='46e6adf4-a9cf-4b60-9390-0ba6fb00bf6b')/owners/$ref
+Content-Type: application/json
+
+{
+ "@odata.id": "https://graph.microsoft.com/v1.0/directoryObjects/8afc02cb-4d62-4dba-b536-9f6d73e9be26"
+}
+```
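For illustration, the same owner-assignment call can be assembled by any HTTP client. This small Python helper (a sketch with a hypothetical name, not part of the article) builds the endpoint and the `$ref` body shown above:

```python
def build_add_owner_request(sp_app_id: str, user_object_id: str):
    """Assemble the POST target and the @odata.id reference body."""
    url = ("https://graph.microsoft.com/v1.0/"
           f"servicePrincipals(appId='{sp_app_id}')/owners/$ref")
    body = {
        "@odata.id":
            f"https://graph.microsoft.com/v1.0/directoryObjects/{user_object_id}"
    }
    return url, body

# Same appId and user object ID as in the article's example.
owner_url, owner_body = build_add_owner_request(
    "46e6adf4-a9cf-4b60-9390-0ba6fb00bf6b",
    "8afc02cb-4d62-4dba-b536-9f6d73e9be26",
)
```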
> [!NOTE]
> If the user setting **Restrict access to Azure AD administration portal** is set to `Yes`, non-admin users will not be able to use the Azure portal to manage the applications they own. For more information about the actions that can be performed on owned enterprise applications, see [Owned enterprise applications](../fundamentals/users-default-permissions.md#owned-enterprise-applications).

## Next steps

- [Delegate app registration permissions in Azure Active Directory](../roles/delegate-app-roles.md)
active-directory Configure Admin Consent Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-admin-consent-workflow.md
To enable the admin consent workflow and choose reviewers:
1. Sign in to the [Azure portal](https://portal.azure.com) with one of the roles listed in the prerequisites.
1. Search for and select **Azure Active Directory**.
1. Select **Enterprise applications**.
-1. Under **Manage**, select **User settings**.
+1. Under **Security**, select **Consent and permissions**.
+1. Under **Manage**, select **Admin consent settings**.
1. Under **Admin consent requests**, select **Yes** for **Users can request admin consent to apps they are unable to consent to**.
- :::image type="content" source="media/configure-admin-consent-workflow/enable-admin-consent-workflow.png" alt-text="Configure admin consent workflow settings":::
+
+ ![Screenshot of configure admin consent workflow settings.](./media/configure-admin-consent-workflow/enable-admin-consent-workflow.png)
+
1. Configure the following settings:
   - **Select users, groups, or roles that will be designated as reviewers for admin consent requests** - Reviewers can view, block, or deny admin consent requests, but only global administrators can approve admin consent requests. People designated as reviewers can view incoming requests in the **My Pending** tab after they have been set as reviewers. Any new reviewers won't be able to act on existing or expired admin consent requests.
   - **Selected users will receive email notifications for requests** - Enable or disable email notifications to the reviewers when a request is made.
- - **Selected users will receive request expiration reminders** - Enable or disable reminder email notifications to the reviewers when a request is about to expire.
+ - **Selected users will receive request expiration reminders** - Enable or disable reminder email notifications to the reviewers when a request is about to expire.
   - **Consent request expires after (days)** - Specify how long requests stay valid.
1. Select **Save**. It can take up to an hour for the workflow to become enabled.
-> [!NOTE]
-> You can add or remove reviewers for this workflow by modifying the **Select admin consent requests reviewers** list. A current limitation of this feature is that a reviewer can retain the ability to review requests that were made while they were designated as a reviewer.
+ > [!NOTE]
+ > You can add or remove reviewers for this workflow by modifying the **Select admin consent requests reviewers** list. A current limitation of this feature is that a reviewer can retain the ability to review requests that were made while they were designated as a reviewer.
## Configure the admin consent workflow using Microsoft Graph
active-directory Manage App Consent Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-app-consent-policies.md
Previously updated : 09/02/2021 Last updated : 01/26/2023
+zone_pivot_groups: enterprise-apps-minus-portal-aad
#customer intent: As an admin, I want to manage app consent policies for enterprise applications in Azure AD
An app consent policy consists of one or more "include" condition sets and zero or more "exclude" condition sets.
Each condition set consists of several conditions. For an event to match a condition set, *all* conditions in the condition set must be met.
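To make these matching rules concrete, here's a toy Python model (an illustration of the semantics described above, not the service's implementation): an event is covered by a policy when it matches *all* conditions of at least one "include" set and matches no "exclude" set.

```python
def matches_set(event: dict, condition_set: dict) -> bool:
    """An event matches a condition set only if every condition is met;
    the value "all" behaves as a wildcard, mirroring the defaults."""
    return all(
        expected == "all" or event.get(key) == expected
        for key, expected in condition_set.items()
    )

def policy_allows(event: dict, includes: list, excludes: list) -> bool:
    """Toy model: the event must match at least one 'include' set
    and none of the 'exclude' sets."""
    return (any(matches_set(event, s) for s in includes)
            and not any(matches_set(event, s) for s in excludes))

# Hypothetical policy: allow low-classified delegated permissions,
# except against one specific resource application.
includes = [{"permissionType": "delegated",
             "permissionClassification": "low"}]
excludes = [{"permissionType": "delegated",
             "resourceApplication": "46e6adf4-a9cf-4b60-9390-0ba6fb00bf6b"}]

allowed_event = {"permissionType": "delegated",
                 "permissionClassification": "low",
                 "resourceApplication": "00000003-0000-0000-c000-000000000000"}
blocked_event = dict(allowed_event,
                     resourceApplication="46e6adf4-a9cf-4b60-9390-0ba6fb00bf6b")
```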
-App consent policies where the ID begins with "microsoft-" are built-in policies. Some of these built-in policies are used in existing built-in directory roles. For example, the `microsoft-application-admin` app consent policy describes the conditions under which the Application Administrator and Cloud Application Administrator roles are allowed to grant tenant-wide admin consent. Built-in policies can be used in custom directory roles and to configure user consent settings, but cannot be edited or deleted.
+App consent policies where the ID begins with "microsoft-" are built-in policies. Some of these built-in policies are used in existing built-in directory roles. For example, the `microsoft-application-admin` app consent policy describes the conditions under which the Application Administrator and Cloud Application Administrator roles are allowed to grant tenant-wide admin consent. Built-in policies can be used in custom directory roles and to configure user consent settings, but can't be edited or deleted.
## Pre-requisites
-1. A user or service with one of the following:
+1. A user or service with one of the following roles:
   - Global Administrator directory role
   - Privileged Role Administrator directory role
   - A custom directory role with the necessary [permissions to manage app consent policies](../roles/custom-consent-permissions.md#managing-app-consent-policies)
   - The Microsoft Graph app role (application permission) Policy.ReadWrite.PermissionGrant (when connecting as an app or a service)
-
+
+
1. Connect to [Microsoft Graph PowerShell](/powershell/microsoftgraph/get-started?view=graph-powershell-1.0&preserve-view=true). ```powershell
Follow these steps to create a custom app consent policy:
-ClientApplicationsFromVerifiedPublisherOnly ```
- Repeat this step to add additional "include" condition sets.
+ Repeat this step to add more "include" condition sets.
1. Optionally, add "exclude" condition sets.
Follow these steps to create a custom app consent policy:
-ResourceApplication $azureApi.AppId ```
- Repeat this step to add additional "exclude" condition sets.
+ Repeat this step to add more "exclude" condition sets.
Once the app consent policy has been created, you can [allow user consent](configure-user-consent.md?tabs=azure-powershell#allow-user-consent-subject-to-an-app-consent-policy) subject to this policy.
Once the app consent policy has been created, you can [allow user consent](confi
Remove-MgPolicyPermissionGrantPolicy -PermissionGrantPolicyId "my-custom-policy" ```
-> [!WARNING]
-> Deleted app consent policies cannot be restored. If you accidentally delete a custom app consent policy, you will need to re-create the policy.
-+
+To manage app consent policies, sign in to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) with one of the roles listed in the prerequisite section.
+
+## List existing app consent policies
+
+It's a good idea to start by getting familiar with the existing app consent policies in your organization:
+
+1. List all app consent policies:
+
+```http
+GET /policies/permissionGrantPolicies?$select=id,displayName,description
+```
+
+1. View the "include" condition sets of a policy:
+
+```http
+GET /policies/permissionGrantPolicies/microsoft-application-admin/includes
+```
+
+1. View the "exclude" condition sets:
+
+```http
+GET /policies/permissionGrantPolicies/microsoft-application-admin/excludes
+```
+
+## Create a custom app consent policy
+
+Follow these steps to create a custom app consent policy:
+
+1. Create a new empty app consent policy.
+```http
+POST https://graph.microsoft.com/v1.0/policies/permissionGrantPolicies
+Content-Type: application/json
+
+{
+ "id": "my-custom-policy",
+ "displayName": "My first custom consent policy",
+ "description": "This is a sample custom app consent policy"
+}
+```
+
+1. Add "include" condition sets.
+
+   Include delegated permissions classified "low" for apps from verified publishers.
+
+```http
+POST https://graph.microsoft.com/v1.0/policies/permissionGrantPolicies/my-custom-policy/includes
+Content-Type: application/json
+
+{
+ "permissionType": "delegated",
+    "permissionClassification": "low",
+ "clientApplicationsFromVerifiedPublisherOnly": true
+}
+```
+
+ Repeat this step to add more "include" condition sets.
+
+1. Optionally, add "exclude" condition sets.
+ Exclude delegated permissions for the Azure Management API (appId 46e6adf4-a9cf-4b60-9390-0ba6fb00bf6b)
+```http
+POST https://graph.microsoft.com/v1.0/policies/permissionGrantPolicies/my-custom-policy/excludes
+Content-Type: application/json
+
+{
+ "permissionType": "delegated",
+    "resourceApplication": "46e6adf4-a9cf-4b60-9390-0ba6fb00bf6b"
+}
+```
+
+ Repeat this step to add more "exclude" condition sets.
+
+Once the app consent policy has been created, you can [allow user consent](configure-user-consent.md?tabs=azure-powershell#allow-user-consent-subject-to-an-app-consent-policy) subject to this policy.
+
+## Delete a custom app consent policy
+
+1. The following shows how you can delete a custom app consent policy. **This action can't be undone.**
+
+```http
+DELETE https://graph.microsoft.com/v1.0/policies/permissionGrantPolicies/my-custom-policy
+```
++
+> [!WARNING]
+> Deleted app consent policies cannot be restored. If you accidentally delete a custom app consent policy, you will need to re-create the policy.
### Supported conditions The following table provides the list of supported conditions for app consent policies. | Condition | Description| |:|:-|
-| PermissionClassification | The [permission classification](configure-permission-classifications.md) for the permission being granted, or "all" to match with any permission classification (including permissions which are not classified). Default is "all". |
-| PermissionType | The permission type of the permission being granted. Use "application" for application permissions (e.g. app roles) or "delegated" for delegated permissions. <br><br>**Note**: The value "delegatedUserConsentable" indicates delegated permissions which have not been configured by the API publisher to require admin consentΓÇöthis value may be used in built-in permission grant policies, but cannot be used in custom permission grant policies. Required. |
-| ResourceApplication | The **AppId** of the resource application (e.g. the API) for which a permission is being granted, or "any" to match with any resource application or API. Default is "any". |
+| PermissionClassification | The [permission classification](configure-permission-classifications.md) for the permission being granted, or "all" to match with any permission classification (including permissions that aren't classified). Default is "all". |
+| PermissionType | The permission type of the permission being granted. Use "application" for application permissions (for example, app roles) or "delegated" for delegated permissions. <br><br>**Note**: The value "delegatedUserConsentable" indicates delegated permissions that haven't been configured by the API publisher to require admin consent. This value may be used in built-in permission grant policies, but can't be used in custom permission grant policies. Required. |
+| ResourceApplication | The **AppId** of the resource application (for example, the API) for which a permission is being granted, or "any" to match with any resource application or API. Default is "any". |
| Permissions | The list of permission IDs for the specific permissions to match with, or a list with the single value "all" to match with any permission. Default is the single value "all". <ul><li>Delegated permission IDs can be found in the **OAuth2Permissions** property of the API's ServicePrincipal object.</li><li>Application permission IDs can be found in the **AppRoles** property of the API's ServicePrincipal object.</li></ul> |
| ClientApplicationIds | A list of **AppId** values for the client applications to match with, or a list with the single value "all" to match any client application. Default is the single value "all". |
| ClientApplicationTenantIds | A list of Azure Active Directory tenant IDs in which the client application is registered, or a list with the single value "all" to match with client apps registered in any tenant. Default is the single value "all". |
| ClientApplicationPublisherIds | A list of Microsoft Partner Network (MPN) IDs for [verified publishers](../develop/publisher-verification-overview.md) of the client application, or a list with the single value "all" to match with client apps from any publisher. Default is the single value "all". |
-| ClientApplicationsFromVerifiedPublisherOnly | Set this switch to only match on client applications with a [verified publishers](../develop/publisher-verification-overview.md). Disable this switch (`-ClientApplicationsFromVerifiedPublisherOnly:$false`) to match on any client app, even if it does not have a verified publisher. Default is `$false`. |
+| ClientApplicationsFromVerifiedPublisherOnly | Set this switch to only match on client applications with a [verified publisher](../develop/publisher-verification-overview.md). Disable this switch (`-ClientApplicationsFromVerifiedPublisherOnly:$false`) to match on any client app, even if it doesn't have a verified publisher. Default is `$false`. |
+> [!WARNING]
+> Deleted app consent policies cannot be restored. If you accidentally delete a custom app consent policy, you will need to re-create the policy.
## Next steps To learn more:
aks Csi Secrets Store Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-driver.md
Previously updated : 9/22/2022 Last updated : 01/26/2023
The Azure Key Vault Provider for Secrets Store CSI Driver allows for the integration of an Azure key vault as a secret store with an Azure Kubernetes Service (AKS) cluster via a [CSI volume][kube-csi].
-## Limitations
-
-* A container using subPath volume mount will not receive secret updates when it is rotated. [See](https://secrets-store-csi-driver.sigs.k8s.io/known-limitations.html#secrets-not-rotated-when-using-subpath-volume-mount)
-
-## Prerequisites
+## Features
-- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- Before you start, ensure that your version of the Azure CLI is 2.30.0 or later. If it's an earlier version, [install the latest version](/cli/azure/install-azure-cli).-- If restricting Ingress to the cluster, ensure Ports 9808 and 8095 are open.
+* Mounts secrets, keys, and certificates to a pod by using a CSI volume
+* Supports CSI inline volumes
+* Supports mounting multiple secrets store objects as a single volume
+* Supports pod portability with the `SecretProviderClass` CRD
+* Supports Windows containers
+* Syncs with Kubernetes secrets
+* Supports autorotation of mounted contents and synced Kubernetes secrets
-### Supported Kubernetes versions
+## Limitations
-The minimum recommended Kubernetes version is based on the [rolling Kubernetes version support window][kubernetes-version-support]. Ensure that you're running version N-2 or later.
+A container using subPath volume mount won't receive secret updates when it's rotated. For more information, see [Secrets Store CSI Driver known limitations](https://secrets-store-csi-driver.sigs.k8s.io/known-limitations.html#secrets-not-rotated-when-using-subpath-volume-mount).
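The pattern this limitation describes can be sketched as the following pod fragment. This is illustrative only; the pod name, image, and `SecretProviderClass` reference are placeholders, not values from this article:

```yml
# Illustrative only: a pod that mounts a single key from the CSI volume
# via subPath. Files mounted this way aren't refreshed on rotation.
kind: Pod
apiVersion: v1
metadata:
  name: example-subpath-pod
spec:
  containers:
    - name: app
      image: registry.k8s.io/e2e-test-images/busybox:1.29-4
      volumeMounts:
        - name: secrets-store
          mountPath: /mnt/example-secret
          subPath: ExampleSecret      # subPath mounts miss rotation updates
          readOnly: true
  volumes:
    - name: secrets-store
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: "example-spc"   # hypothetical name
```

Mounting the whole volume at `mountPath` (without `subPath`) avoids the limitation.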
-## Features
+## Prerequisites
-- Mounts secrets, keys, and certificates to a pod by using a CSI volume-- Supports CSI inline volumes-- Supports mounting multiple secrets store objects as a single volume-- Supports pod portability with the `SecretProviderClass` CRD-- Supports Windows containers-- Syncs with Kubernetes secrets-- Supports auto rotation of mounted contents and synced Kubernetes secrets
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* Check that your version of the Azure CLI is 2.30.0 or later. If it's an earlier version, [install the latest version](/cli/azure/install-azure-cli).
+* If you're restricting Ingress to the cluster, make sure ports **9808** and **8095** are open.
+* The minimum recommended Kubernetes version is based on the [rolling Kubernetes version support window][kubernetes-version-support]. Make sure you're running version N-2 or later.
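As a sketch, the CLI version prerequisite can be checked with a `sort -V` comparison. The `az version` query in the comment is shown as an assumption; the `installed` value below is a placeholder:

```shell
# Minimal sketch: verify the installed Azure CLI meets the 2.30.0 minimum.
# In practice you'd set: installed=$(az version --query '"azure-cli"' -o tsv)
required="2.30.0"
installed="2.37.0"   # placeholder value for illustration
lowest=$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n 1)
if [ "$lowest" = "$required" ]; then
  echo "Azure CLI version OK"
else
  echo "Azure CLI version too old; install the latest version"
fi
```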
## Create an AKS cluster with Azure Key Vault Provider for Secrets Store CSI Driver support
-First, create an Azure resource group:
+1. Create an Azure resource group.
-```azurecli-interactive
-az group create -n myResourceGroup -l eastus2
-```
+ ```azurecli-interactive
+ az group create -n myResourceGroup -l eastus2
+ ```
-To create an AKS cluster with Azure Key Vault Provider for Secrets Store CSI Driver capability, use the [az aks create][az-aks-create] command with the `azure-keyvault-secrets-provider` add-on.
+2. Create an AKS cluster with Azure Key Vault Provider for Secrets Store CSI Driver capability using the [`az aks create`][az-aks-create] command with the `azure-keyvault-secrets-provider` add-on.
-```azurecli-interactive
-az aks create -n myAKSCluster -g myResourceGroup --enable-addons azure-keyvault-secrets-provider --enable-managed-identity
-```
+ ```azurecli-interactive
+ az aks create -n myAKSCluster -g myResourceGroup --enable-addons azure-keyvault-secrets-provider --enable-managed-identity
+ ```
-A user-assigned managed identity, named `azurekeyvaultsecretsprovider-*`, is created by the add-on for the purpose of accessing Azure resources. The following example uses this identity to connect to the Azure key vault where the secrets will be stored, but you can also use other [identity access methods][identity-access-methods]. Take note of the identity's `clientId` in the output:
-
-```json
-...,
- "addonProfiles": {
- "azureKeyvaultSecretsProvider": {
- ...,
- "identity": {
- "clientId": "<client-id>",
- ...
- }
- }
-```
+3. A user-assigned managed identity, named `azurekeyvaultsecretsprovider-*`, is created by the add-on to access Azure resources. The following example uses this identity to connect to the Azure key vault where the secrets will be stored, but you can also use other [identity access methods][identity-access-methods]. Take note of the identity's `clientId` in the output.
+
+ ```json
+ ...,
+ "addonProfiles": {
+ "azureKeyvaultSecretsProvider": {
+ ...,
+ "identity": {
+ "clientId": "<client-id>",
+ ...
+ }
+ }
+ ```
## Upgrade an existing AKS cluster with Azure Key Vault Provider for Secrets Store CSI Driver support
-To upgrade an existing AKS cluster with Azure Key Vault Provider for Secrets Store CSI Driver capability, use the [az aks enable-addons][az-aks-enable-addons] command with the `azure-keyvault-secrets-provider` add-on:
+* Upgrade an existing AKS cluster with Azure Key Vault Provider for Secrets Store CSI Driver capability using the [`az aks enable-addons`][az-aks-enable-addons] command with the `azure-keyvault-secrets-provider` add-on. The add-on creates a user-assigned managed identity you can use to authenticate to your Azure key vault.
-```azurecli-interactive
-az aks enable-addons --addons azure-keyvault-secrets-provider --name myAKSCluster --resource-group myResourceGroup
-```
-
-As mentioned in the preceding section, the add-on creates a user-assigned managed identity that you can use to authenticate to your Azure key vault.
+ ```azurecli-interactive
+ az aks enable-addons --addons azure-keyvault-secrets-provider --name myAKSCluster --resource-group myResourceGroup
+ ```
## Verify the Azure Key Vault Provider for Secrets Store CSI Driver installation
-The preceding command installs the Secrets Store CSI Driver and the Azure Key Vault Provider on your nodes. Verify that the installation is finished by listing all pods that have the `secrets-store-csi-driver` and `secrets-store-provider-azure` labels in the kube-system namespace, and ensure that your output looks similar to the output shown here:
+1. Verify the installation is finished using the `kubectl get pods` command to list all pods that have the `secrets-store-csi-driver` and `secrets-store-provider-azure` labels in the kube-system namespace, and ensure that your output looks similar to the following output:
-```bash
-kubectl get pods -n kube-system -l 'app in (secrets-store-csi-driver, secrets-store-provider-azure)'
+ ```bash
+ kubectl get pods -n kube-system -l 'app in (secrets-store-csi-driver,secrets-store-provider-azure)'
-NAME READY STATUS RESTARTS AGE
-aks-secrets-store-csi-driver-4vpkj 3/3 Running 2 4m25s
-aks-secrets-store-csi-driver-ctjq6 3/3 Running 2 4m21s
-aks-secrets-store-csi-driver-tlvlq 3/3 Running 2 4m24s
-aks-secrets-store-provider-azure-5p4nb 1/1 Running 0 4m21s
-aks-secrets-store-provider-azure-6pqmv 1/1 Running 0 4m24s
-aks-secrets-store-provider-azure-f5qlm 1/1 Running 0 4m25s
-```
+ NAME READY STATUS RESTARTS AGE
+ aks-secrets-store-csi-driver-4vpkj 3/3 Running 2 4m25s
+ aks-secrets-store-csi-driver-ctjq6 3/3 Running 2 4m21s
+ aks-secrets-store-csi-driver-tlvlq 3/3 Running 2 4m24s
+ aks-secrets-store-provider-azure-5p4nb 1/1 Running 0 4m21s
+ aks-secrets-store-provider-azure-6pqmv 1/1 Running 0 4m24s
+ aks-secrets-store-provider-azure-f5qlm 1/1 Running 0 4m25s
+ ```
-Be sure that a Secrets Store CSI Driver pod and a Secrets Store Provider Azure pod are running on each node in your cluster's node pools.
+2. Verify that each node in your cluster's node pool has a Secrets Store CSI Driver pod and a Secrets Store Provider Azure pod running.
## Create or use an existing Azure key vault
-In addition to an AKS cluster, you'll need an Azure key vault resource that stores the secret content. Keep in mind that the key vault's name must be globally unique.
+In addition to an AKS cluster, you'll need an Azure key vault resource that stores the secret content.
-```azurecli
-az keyvault create -n <keyvault-name> -g myResourceGroup -l eastus2
-```
+1. Create an Azure key vault using the [`az keyvault create`][az-keyvault-create] command. The name of the key vault must be globally unique.
-Your Azure key vault can store keys, secrets, and certificates. In this example, you'll set a plain-text secret called `ExampleSecret`:
+ ```azurecli
+ az keyvault create -n <keyvault-name> -g myResourceGroup -l eastus2
+ ```
-```azurecli
-az keyvault secret set --vault-name <keyvault-name> -n ExampleSecret --value MyAKSExampleSecret
-```
+2. Your Azure key vault can store keys, secrets, and certificates. In this example, use the [`az keyvault secret set`][az-keyvault-secret-set] command to set a plain-text secret called `ExampleSecret`.
-Take note of the following properties for use in the next section:
+ ```azurecli
+ az keyvault secret set --vault-name <keyvault-name> -n ExampleSecret --value MyAKSExampleSecret
+ ```
-- The name of the secret object in the key vault-- The object type (secret, key, or certificate)-- The name of your Azure key vault resource-- The Azure tenant ID that the subscription belongs to
+3. Take note of the following properties for use in the next section:
+
+ * The name of the secret object in the key vault
+ * The object type (secret, key, or certificate)
+ * The name of your Azure key vault resource
+ * The Azure tenant ID that the subscription belongs to
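These properties feed into the `SecretProviderClass` that you configure in the identity article linked below. As a hedged sketch only (placeholder values throughout; the exact schema depends on your chosen identity method):

```yml
# Illustrative mapping of the noted properties into a SecretProviderClass.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: example-spc                  # hypothetical name
spec:
  provider: azure
  parameters:
    keyvaultName: "<keyvault-name>"  # name of your Azure key vault resource
    tenantId: "<tenant-id>"          # Azure tenant ID of the subscription
    objects: |
      array:
        - |
          objectName: ExampleSecret  # name of the secret object in the vault
          objectType: secret         # secret, key, or cert
```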
## Provide an identity to access the Azure key vault The Secrets Store CSI Driver allows for the following methods to access an Azure key vault: -- An [Azure Active Directory pod identity][aad-pod-identity] (preview)-- An [Azure Active Directory workload identity][aad-workload-identity] (preview)-- A user-assigned or system-assigned managed identity
+* An [Azure Active Directory pod identity][aad-pod-identity] (preview)
+* An [Azure Active Directory workload identity][aad-workload-identity] (preview)
+* A user-assigned or system-assigned managed identity
Follow the instructions in [Provide an identity to access the Azure Key Vault Provider for Secrets Store CSI Driver][identity-access-methods] for your chosen method.
-> [!NOTE]
-> The rest of the examples on this page require that you've followed the instructions in [Provide an identity to access the Azure Key Vault Provider for Secrets Store CSI Driver][identity-access-methods], chosen one of the identity methods, and configured a SecretProviderClass. Come back to this page after completed those steps.
+> [!IMPORTANT]
+> The rest of the examples on this page require that you've followed the instructions in [Provide an identity to access the Azure Key Vault Provider for Secrets Store CSI Driver][identity-access-methods], chosen one of the identity methods, and configured a SecretProviderClass. Come back to this page after completing those steps.
## Validate the secrets After the pod starts, the mounted content at the volume path that you specified in your deployment YAML is available.
-```Bash
-## show secrets held in secrets-store
-kubectl exec busybox-secrets-store-inline -- ls /mnt/secrets-store/
+* Use the following commands to validate your secrets and print a test secret.
-## print a test secret 'ExampleSecret' held in secrets-store
-kubectl exec busybox-secrets-store-inline -- cat /mnt/secrets-store/ExampleSecret
-```
+ ```bash
+ ## show secrets held in secrets-store
+ kubectl exec busybox-secrets-store-inline -- ls /mnt/secrets-store/
+
+ ## print a test secret 'ExampleSecret' held in secrets-store
+ kubectl exec busybox-secrets-store-inline -- cat /mnt/secrets-store/ExampleSecret
+ ```
## Obtain certificates and keys
A key vault certificate also contains public x509 certificate metadata. The key
|`key`|The public key, in Privacy Enhanced Mail (PEM) format|N/A| |`cert`|The certificate, in PEM format|No| |`secret`|The private key and certificate, in PEM format|Yes|
-| | |
## Disable the Azure Key Vault Provider for Secrets Store CSI Driver on an existing AKS cluster > [!NOTE] > Before you disable the add-on, ensure that no `SecretProviderClass` is in use. Trying to disable the add-on while `SecretProviderClass` exists will result in an error.
-To disable the Azure Key Vault Provider for Secrets Store CSI Driver capability in an existing cluster, use the [az aks disable-addons][az-aks-disable-addons] command with the `azure-keyvault-secrets-provider` flag:
+* Disable the Azure Key Vault Provider for Secrets Store CSI Driver capability in an existing cluster using the [`az aks disable-addons`][az-aks-disable-addons] command with the `azure-keyvault-secrets-provider` add-on.
-```azurecli-interactive
-az aks disable-addons --addons azure-keyvault-secrets-provider -g myResourceGroup -n myAKSCluster
-```
+ ```azurecli-interactive
+ az aks disable-addons --addons azure-keyvault-secrets-provider -g myResourceGroup -n myAKSCluster
+ ```
> [!NOTE] > If the add-on is disabled, existing workloads will have no issues and will not see any updates in the mounted secrets. If the pod restarts or a new pod is created as part of scale-up event, the pod will fail to start because the driver is no longer running.
-## Additional configuration options
+## More configuration options
### Enable and disable autorotation
az aks disable-addons --addons azure-keyvault-secrets-provider -g myResourceGrou
>[!NOTE] > When a secret is updated in an external secrets store after initial pod deployment, the Kubernetes Secret and the pod mount will be periodically updated depending on how the application consumes the secret data. >
-> **Mount the Kubernetes Secret as a volume**: Use the auto rotation and Sync K8s secrets features of Secrets Store CSI Driver. The application will need to watch for changes from the mounted Kubernetes Secret volume. When the Kubernetes Secret is updated by the CSI Driver, the corresponding volume contents are automatically updated.
+> **Mount the Kubernetes Secret as a volume**: Use the autorotation and Sync K8s secrets features of Secrets Store CSI Driver. The application will need to watch for changes from the mounted Kubernetes Secret volume. When the Kubernetes Secret is updated by the CSI Driver, the corresponding volume contents are automatically updated.
> > **Application reads the data from the container's filesystem**: Use the rotation feature of Secrets Store CSI Driver. The application will need to watch for the file change from the volume mounted by the CSI driver. > > **Use the Kubernetes Secret for an environment variable**: Restart the pod to get the latest secret as an environment variable. > Use a tool such as [Reloader][reloader] to watch for changes on the synced Kubernetes Secret and perform rolling upgrades on pods.
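For the last pattern, a workload can opt in to Reloader's watch with an annotation. This fragment is a sketch; the Deployment name is illustrative, and the annotation shown is Reloader's documented auto-reload opt-in:

```yml
# Illustrative fragment: Reloader performs a rolling upgrade of this
# Deployment whenever a Secret or ConfigMap it references changes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
  annotations:
    reloader.stakater.com/auto: "true"
spec:
  # ... pod template that references the synced Kubernetes Secret ...
```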
-To enable autorotation of secrets, use the `enable-secret-rotation` flag when you create your cluster:
+#### Enable autorotation on a new AKS cluster
-```azurecli-interactive
-az aks create -n myAKSCluster2 -g myResourceGroup --enable-addons azure-keyvault-secrets-provider --enable-secret-rotation
-```
+* Enable autorotation of secrets using the `enable-secret-rotation` parameter when you create your cluster.
-Or update an existing cluster with the add-on enabled:
+ ```azurecli-interactive
+ az aks create -n myAKSCluster2 -g myResourceGroup --enable-addons azure-keyvault-secrets-provider --enable-secret-rotation
+ ```
-```azurecli-interactive
-az aks addon update -g myResourceGroup -n myAKSCluster2 -a azure-keyvault-secrets-provider --enable-secret-rotation
-```
+#### Enable autorotation on an existing AKS cluster
-To specify a custom rotation interval, use the `rotation-poll-interval` flag:
+* Update an existing cluster to enable autorotation of secrets using the [`az aks addon update`][az-aks-addon-update] command and the `enable-secret-rotation` parameter.
-```azurecli-interactive
-az aks addon update -g myResourceGroup -n myAKSCluster2 -a azure-keyvault-secrets-provider --enable-secret-rotation --rotation-poll-interval 5m
-```
+ ```azurecli-interactive
+ az aks addon update -g myResourceGroup -n myAKSCluster2 -a azure-keyvault-secrets-provider --enable-secret-rotation
+ ```
-To disable autorotation, first disable the addon. Then, re-enable the addon without the `enable-secret-rotation` flag.
+#### Specify a custom rotation interval
-### Sync mounted content with a Kubernetes secret
+* Specify a custom rotation interval using the `rotation-poll-interval` parameter.
+
+ ```azurecli-interactive
+ az aks addon update -g myResourceGroup -n myAKSCluster2 -a azure-keyvault-secrets-provider --enable-secret-rotation --rotation-poll-interval 5m
+ ```
+
+#### Disable autorotation
+
+* To disable autorotation, first disable the addon. Then, re-enable the addon without the `enable-secret-rotation` parameter.
-You might sometimes want to create a Kubernetes secret to mirror the mounted content.
+ ```azurecli-interactive
+ # disable the addon
+ az aks addon disable -g myResourceGroup -n myAKSCluster2 -a azure-keyvault-secrets-provider
-When you create a `SecretProviderClass`, use the `secretObjects` field to define the desired state of the Kubernetes secret, as shown in the following example.
+ # re-enable the addon without the `enable-secret-rotation` parameter
+ az aks addon enable -g myResourceGroup -n myAKSCluster2 -a azure-keyvault-secrets-provider
+ ```
+
+### Sync mounted content with a Kubernetes secret
> [!NOTE] > The YAML examples here are incomplete. You'll need to modify them to support your chosen method of access to your key vault identity. For details, see [Provide an identity to access the Azure Key Vault Provider for Secrets Store CSI Driver][identity-access-methods].
-The secrets will sync only after you start a pod to mount them. To rely solely on syncing with the Kubernetes secrets feature doesn't work. When all the pods that consume the secret are deleted, the Kubernetes secret is also deleted.
+You might want to create a Kubernetes secret to mirror your mounted secrets content. Your secrets sync only after you start a pod to mount them, so you can't rely on syncing alone. When all the pods that consume the secrets are deleted, the Kubernetes secret is also deleted.
+
+To sync mounted content with a Kubernetes secret, use the `secretObjects` field when creating a `SecretProviderClass` to define the desired state of the Kubernetes secret, as shown in the following example.
```yml apiVersion: secrets-store.csi.x-k8s.io/v1
spec:
``` > [!NOTE]
-> Make sure that the `objectName` in the `secretObjects` field matches the file name of the mounted content. If you use `objectAlias` instead, it should match the object alias.
+> Make sure the `objectName` in the `secretObjects` field matches the file name of the mounted content. If you use `objectAlias` instead, it should match the object alias.
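As a sketch of that pairing (all names illustrative), the alias given at mount time is what `secretObjects` must reference:

```yml
# Illustrative fragment: the alias assigned at mount time must be
# repeated as objectName under secretObjects for the sync to find it.
spec:
  parameters:
    objects: |
      array:
        - |
          objectName: ExampleSecret
          objectAlias: example-alias        # file is mounted under this name
  secretObjects:
    - secretName: example-synced-secret
      type: Opaque
      data:
        - objectName: example-alias         # matches the alias above
          key: secret-content
```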
#### Set an environment variable to reference Kubernetes secrets
-After you've created the Kubernetes secret, you can reference it by setting an environment variable in your pod, as shown in the following example code:
+After creating the Kubernetes secret, you can reference it by setting an environment variable in your pod, as shown in the following example code.
> [!NOTE]
-> The example here demonstrates access to a secret through env variables and through volume/volumeMount. This is for illustrative purposes; a typical application would use one method or the other. However, be aware that in order for a secret to be available through env variables, it first must be mounted by at least one pod.
+> The example YAML demonstrates access to a secret through environment variables and through volume/volumeMount. This is for illustrative purposes; a typical application would use one method or the other. However, be aware that for a secret to be available through environment variables, it must first be mounted by at least one pod.
```yml kind: Pod
spec:
secretProviderClass: "azure-sync" ```
-## Metrics
+## Access metrics
### The Azure Key Vault Provider
-Metrics are served via Prometheus from port 8898, but this port isn't exposed outside the pod by default. Access the metrics over localhost by using `kubectl port-forward`:
+Metrics are served via Prometheus from port 8898, but this port isn't exposed outside the pod by default.
-```bash
-kubectl port-forward -n kube-system ds/aks-secrets-store-provider-azure 8898:8898 &
-curl localhost:8898/metrics
-```
+* Access the metrics over localhost using `kubectl port-forward`.
+
+ ```bash
+ kubectl port-forward -n kube-system ds/aks-secrets-store-provider-azure 8898:8898 &
+ curl localhost:8898/metrics
+ ```
-The following table lists the metrics that are provided by the Azure Key Vault Provider for Secrets Store CSI Driver:
+#### Metrics provided by the Azure Key Vault Provider for Secrets Store CSI Driver
|Metric|Description|Tags| |-|-|-| |keyvault_request|The distribution of how long it took to get from the key vault|`os_type=<runtime os>`, `provider=azure`, `object_name=<keyvault object name>`, `object_type=<keyvault object type>`, `error=<error if failed>`| |grpc_request|The distribution of how long it took for the gRPC requests|`os_type=<runtime os>`, `provider=azure`, `grpc_method=<rpc full method>`, `grpc_code=<grpc status code>`, `grpc_message=<grpc status message>`|
-| | |
### The Secrets Store CSI Driver
-Metrics are served from port 8095, but this port is not exposed outside the pod by default. Access the metrics over localhost by using `kubectl port-forward`:
+Metrics are served from port 8095, but this port isn't exposed outside the pod by default.
-```bash
-kubectl port-forward -n kube-system ds/aks-secrets-store-csi-driver 8095:8095 &
-curl localhost:8095/metrics
-```
+* Access the metrics over localhost using `kubectl port-forward`.
+
+ ```bash
+ kubectl port-forward -n kube-system ds/aks-secrets-store-csi-driver 8095:8095 &
+ curl localhost:8095/metrics
+ ```
-The following table lists the metrics provided by the Secrets Store CSI Driver:
+#### Metrics provided by the Secrets Store CSI Driver
|Metric|Description|Tags| |-|-|-|
The following table lists the metrics provided by the Secrets Store CSI Driver:
|total_rotation_reconcile_error|The distribution of how long it took to rotate secrets-store content for pods|`os_type=<runtime os>`| ## Troubleshooting
-You can find generic troubleshooting steps for the _Azure Key Vault Provider for Secrets Store CSI Driver_ [here](https://azure.github.io/secrets-store-csi-driver-provider-azure/docs/troubleshooting/)
+
+For generic troubleshooting steps, see [Azure Key Vault Provider for Secrets Store CSI Driver troubleshooting](https://azure.github.io/secrets-store-csi-driver-provider-azure/docs/troubleshooting/).
## Next steps
-Now that you've learned how to use the Azure Key Vault Provider for Secrets Store CSI Driver with an AKS cluster, see [Enable CSI drivers for Azure Disks and Azure Files on AKS][csi-storage-drivers].
+In this article, you learned how to use the Azure Key Vault Provider for Secrets Store CSI Driver with an AKS cluster. To learn more about the Azure Key Vault Provider for Secrets Store CSI Driver, see:
+
+* [Using the Azure Key Vault Provider](https://azure.github.io/secrets-store-csi-driver-provider-azure/docs/getting-started/usage/)
+* [Upgrading the Azure Key Vault Provider](https://azure.github.io/secrets-store-csi-driver-provider-azure/docs/upgrading/)
+* [Using Secrets Store CSI with AKS and Azure Key Vault](https://github.com/Azure-Samples/secrets-store-csi-with-aks-akv)
<!-- LINKS INTERNAL --> [az-aks-create]: /cli/azure/aks#az-aks-create
Now that you've learned how to use the Azure Key Vault Provider for Secrets Stor
[identity-access-methods]: ./csi-secrets-store-identity-access.md [aad-pod-identity]: ./use-azure-ad-pod-identity.md [aad-workload-identity]: workload-identity-overview.md
+[az-keyvault-create]: /cli/azure/keyvault#az-keyvault-create
+[az-keyvault-secret-set]: /cli/azure/keyvault#az-keyvault-secret-set
+[az-aks-addon-update]: /cli/azure/aks#az-aks-addon-update
<!-- LINKS EXTERNAL --> [kube-csi]: https://kubernetes-csi.github.io/docs/
aks Operator Best Practices Cluster Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-cluster-security.md
AppArmor profiles are added using the `apparmor_parser` command.
command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ] ```
-1. With the pod deployed, verify the *hello-apparmor* pod shows a *blocked* status by running the following command:
+2. With the pod deployed, run the following command and verify the *hello-apparmor* pod shows a *Running* status:
``` kubectl get pods NAME READY STATUS RESTARTS AGE aks-ssh 1/1 Running 0 4m2s
- hello-apparmor 0/1 Blocked 0 50s
+ hello-apparmor 0/1 Running 0 50s
``` For more information about AppArmor, see [AppArmor profiles in Kubernetes][k8s-apparmor].
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/private-clusters.md
The following parameters can be used to configure private DNS zone.
- **none** - the default is public DNS. AKS won't create a private DNS zone. - **CUSTOM_PRIVATE_DNS_ZONE_RESOURCE_ID**, requires you to create a private DNS zone only in the following format for Azure global cloud: `privatelink.<region>.azmk8s.io` or `<subzone>.privatelink.<region>.azmk8s.io`. You'll need the Resource ID of that private DNS zone going forward. Additionally, you need a user assigned identity or service principal with at least the [Private DNS Zone Contributor][private-dns-zone-contributor-role] and [Network Contributor][network-contributor-role] roles. When deploying using API server VNet integration, a private DNS zone additionally supports the naming format of `private.<region>.azmk8s.io` or `<subzone>.private.<region>.azmk8s.io`. - If the private DNS zone is in a different subscription than the AKS cluster, you need to register the Azure provider **Microsoft.ContainerServices** in both subscriptions.
- - "fqdn-subdomain" can be utilized with "CUSTOM_PRIVATE_DNS_ZONE_RESOURCE_ID" only to provide subdomain capabilities to `privatelink.<region>.azmk8s.io`
+ - "fqdn-subdomain" can be utilized with "CUSTOM_PRIVATE_DNS_ZONE_RESOURCE_ID" only to provide subdomain capabilities to `privatelink.<region>.azmk8s.io`.
+  - If your AKS cluster is configured with an Active Directory service principal, AKS doesn't support using a system-assigned managed identity with a custom private DNS zone.
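As an illustrative check only (an approximation using shell glob patterns, not an official validator), a zone name intended for Azure global cloud can be sanity-checked against the two supported formats:

```shell
# Rough sanity check for the two supported global-cloud formats:
# privatelink.<region>.azmk8s.io  or  <subzone>.privatelink.<region>.azmk8s.io
is_custom_private_zone() {
  case "$1" in
    privatelink.*.azmk8s.io | *.privatelink.*.azmk8s.io) return 0 ;;
    *) return 1 ;;
  esac
}
is_custom_private_zone "privatelink.eastus2.azmk8s.io" && echo "format looks valid"
```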
### Create a private AKS cluster with private DNS zone
For associated best practices, see [Best practices for network connectivity and
[operator-best-practices-network]: operator-best-practices-network.md [install-azure-cli]: /cli/azure/install-azure-cli [private-dns-zone-contributor-role]: ../role-based-access-control/built-in-roles.md#dns-zone-contributor
-[network-contributor-role]: ../role-based-access-control/built-in-roles.md#network-contributor
+[network-contributor-role]: ../role-based-access-control/built-in-roles.md#network-contributor
aks Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-managed-identity.md
az aks get-credentials --resource-group myResourceGroup --name myManagedCluster
## Update an AKS cluster to use a managed identity
+> [!NOTE]
+> If your AKS cluster uses a custom private DNS zone, AKS doesn't support using a system-assigned managed identity.
+ To update an AKS cluster currently using a service principal to work with a system-assigned managed identity, run the following CLI command. ```azurecli-interactive
api-management Validate Azure Ad Token Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-azure-ad-token-policy.md
The `validate-azure-ad-token` policy enforces the existence and validity of a JS
| Attribute | Description | Required | Default | | - | | -- | | | tenant-id | Tenant ID or URL of the Azure Active Directory service. | Yes | N/A |
-| header-name | The name of the HTTP header holding the token. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
+| header-name | The name of the HTTP header holding the token. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | `Authorization` |
| query-parameter-name | The name of the query parameter holding the token. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A | | token-value | Expression returning a string containing the token. You must not return `Bearer` as part of the token value. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A | | failed-validation-httpcode | HTTP status code to return if the JWT doesn't pass validation. | No | 401 |
The `validate-azure-ad-token` policy enforces the existence and validity of a JS
### Simple token validation
-```xml
-<validate-jwt header-name="Authorization" require-scheme="Bearer">
- <issuer-signing-keys>
- <key>{{jwt-signing-key}}</key> <!-- signing key specified as a named value -->
- </issuer-signing-keys>
- <audiences>
- <audience>@(context.Request.OriginalUrl.Host)</audience> <!-- audience is set to API Management host name -->
- </audiences>
- <issuers>
- <issuer>http://contoso.com/</issuer>
- </issuers>
-</validate-jwt>
-```
-
-### Simple token validation
-
-The following policy is the minimal form of the `validate-azure-ad-token` policy. It expects the JWT to be provided in the `Authorization` header using the `Bearer` scheme. In this example, the Azure AD tenant ID and client application ID are provided using named values.
+The following policy is the minimal form of the `validate-azure-ad-token` policy. It expects the JWT to be provided in the default `Authorization` header using the `Bearer` scheme. In this example, the Azure AD tenant ID and client application ID are provided using named values.
```xml <validate-azure-ad-token tenant-id="{{aad-tenant-id}}">
applied-ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/service-limits.md
This article contains both a quick reference and detailed description of Azure F
| Adjustable | No | No | | **Training dataset size * Neural** | 1 GB <sup>3</sup> | 1 GB (default value) | | Adjustable | No | No |
-| **Training dataset size * Template** | 50 MB <sup>4</sup> | 50 MB (default value) |
+| **Training file size * Template** | 50 MB <sup>4</sup> | 50 MB (default value) |
+| Adjustable | No | No |
+| **Total Training dataset size * Template** | 150 MB <sup>4</sup> | 150 MB (default value) |
| Adjustable | No | No | | **Max number of pages (Training) * Template** | 500 | 500 (default value) | | Adjustable | No | No |
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable
``` * [Log in to Azure PowerShell](/powershell/azure/authenticate-azureps) using the identity (user or service principal) that you want to use for connecting your cluster to Azure Arc.
- * The identity used needs to at least have 'Read' and 'Write' permissions on the Azure Arc-enabled Kubernetes resource type (`Microsoft.Kubernetes/connectedClusters`).
+ * The identity used needs to at least have 'Read' and 'Write' permissions on the Azure Arc-enabled Kubernetes resource type (`Microsoft.Kubernetes/connectedClusters`) and 'Read' permission on the resource group the Azure Arc Cluster is targeting.
* The [Kubernetes Cluster - Azure Arc Onboarding built-in role](../../role-based-access-control/built-in-roles.md#kubernetes-clusterazure-arc-onboarding) is useful for at-scale onboarding as it has the granular permissions required to only connect clusters to Azure Arc. This role doesn't have the permissions to update, delete, or modify any other clusters or other Azure resources.
* An up-and-running Kubernetes cluster. If you don't have one, you can create a cluster using one of these options:
azure-arc Tutorial Akv Secrets Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-akv-secrets-provider.md
Benefits of the Azure Key Vault Secrets Provider extension include the following
- Canonical Kubernetes Distribution - Elastic Kubernetes Service - Tanzu Kubernetes Grid
+ - Azure Red Hat OpenShift
- Ensure you have met the [general prerequisites for cluster extensions](extensions.md#prerequisites). You must use version 0.4.0 or newer of the `k8s-extension` Azure CLI extension.

> [!TIP]
azure-arc Troubleshoot Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/troubleshoot-resource-bridge.md
This article provides information on troubleshooting and resolving issues that m
### Logs
-For issues encountered with Arc resource bridge, collect logs for further investigation using the Azure CLI [`az arcappliance logs`](/cli/azure/arcappliance/logs) command. This command needs to be run from the same deployment machine that was used to run commands to deploy the Arc resource bridge. If there is a problem collecting logs, most likely the deployment machine is unable to reach the Appliance VM, and the network administrator needs to allow communication between the deployment machine to the Appliance VM.
+For issues encountered with Arc resource bridge, collect logs for further investigation using the Azure CLI [`az arcappliance logs`](/cli/azure/arcappliance/logs) command. This command needs to be run from the same management machine that was used to run commands to deploy the Arc resource bridge. If there is a problem collecting logs, most likely the management machine is unable to reach the Appliance VM, and the network administrator needs to allow communication between the management machine and the Appliance VM.
-The `az arcappliance logs` command requires SSH to the Azure Arc resource bridge VM. The SSH key is saved to the deployment machine. To use a different machine to run the logs command, make sure the following files are copied to the machine in the same location:
+The `az arcappliance logs` command requires SSH to the Azure Arc resource bridge VM. The SSH key is saved to the management machine. To use a different machine to run the logs command, make sure the following files are copied to the machine in the same location:
```azurecli
$HOME\.KVA\.ssh\logkey.pub
When the appliance is deployed to a host resource pool, there is no high availab
### Restricted outbound connectivity
-If you are experiencing connectivity, check to make sure your network allows all of the firewall and proxy URLs that are required to enable communication from the host machine, Appliance VM, and Control Plane IP to the required Arc resource bridge URLs. For more information, see [Azure Arc resource bridge (preview) network requirements](network-requirements.md).
+If you are experiencing connectivity issues, check to make sure your network allows all of the firewall and proxy URLs that are required to enable communication from the management machine, Appliance VM, and Control Plane IP to the required Arc resource bridge URLs. For more information, see [Azure Arc resource bridge (preview) network requirements](network-requirements.md).
### Azure Arc resource bridge is unreachable
To resolve this issue, reboot the resource bridge (preview) VM, and it should re
### SSL proxy configuration issues
-Be sure that the proxy server on your client machine trusts both the SSL certificate for your SSL proxy and the SSL certificate of the Microsoft download servers.
+Be sure that the proxy server on your management machine trusts both the SSL certificate for your SSL proxy and the SSL certificate of the Microsoft download servers.
For more information, see [SSL proxy configuration](network-requirements.md#ssl-proxy-configuration).

### KVA timeout error
-While trying to deploy Arc Resource Bridge, a "KVA timeout error" may appear. The "KVA timeout error" is a generic error that can be the result of a variety of network misconfigurations that involve the deployment machine, Appliance VM, or Control Plane IP not having communication with each other, to the internet, or required URLs. This communication failure is often due to issues with DNS resolution, proxy settings, network configuration, or internet access.
+While trying to deploy Arc Resource Bridge, a "KVA timeout error" may appear. The "KVA timeout error" is a generic error that can be the result of a variety of network misconfigurations that involve the management machine, Appliance VM, or Control Plane IP not having communication with each other, to the internet, or required URLs. This communication failure is often due to issues with DNS resolution, proxy settings, network configuration, or internet access.
-For clarity, "deployment machine" refers to the machine where deployment CLI commands are being run. "Appliance VM" is the VM that hosts Arc resource bridge. "Control Plane IP" is the IP of the control plane for the Kubernetes management cluster in the Appliance VM.
+For clarity, "management machine" refers to the machine where deployment CLI commands are being run. "Appliance VM" is the VM that hosts Arc resource bridge. "Control Plane IP" is the IP of the control plane for the Kubernetes management cluster in the Appliance VM.
#### Top causes of the KVA timeout error
-- Deployment machine is unable to communicate with Control Plane IP and Appliance VM IP.
-- Appliance VM is unable to communicate with the deployment machine, vCenter endpoint (for VMware), or MOC cloud agent endpoint (for Azure Stack HCI).
+- Management machine is unable to communicate with Control Plane IP and Appliance VM IP.
+- Appliance VM is unable to communicate with the management machine, vCenter endpoint (for VMware), or MOC cloud agent endpoint (for Azure Stack HCI).
- Appliance VM does not have internet access.
- Appliance VM has internet access, but connectivity to one or more required URLs is being blocked, possibly due to a proxy or firewall.
- Appliance VM is unable to reach a DNS server that can resolve internal names, such as vCenter endpoint for vSphere or cloud agent endpoint for Azure Stack HCI. The DNS server must also be able to resolve external addresses, such as Azure service addresses and container registry names.
-- Proxy server configuration on the deployment machine or Arc resource bridge configuration files is incorrect. This can impact both the deployment machine and the Appliance VM. When the `az arcappliance prepare` command is run, the deployment machine won't be able to connect and download OS images if the host proxy isn't correctly configured. Internet access on the Appliance VM might be broken by incorrect or missing proxy configuration, which impacts the VM’s ability to pull container images.
+- Proxy server configuration on the management machine or Arc resource bridge configuration files is incorrect. This can impact both the management machine and the Appliance VM. When the `az arcappliance prepare` command is run, the management machine won't be able to connect and download OS images if the host proxy isn't correctly configured. Internet access on the Appliance VM might be broken by incorrect or missing proxy configuration, which impacts the VM’s ability to pull container images. 
#### Troubleshoot KVA timeout error

To resolve the error, one or more network misconfigurations may need to be addressed. Follow the steps below to address the most common reasons for this error.
-1. When there is a problem with deployment, the first step is to collect logs by Appliance VM IP (not by kubeconfig, as the kubeconfig may be empty if deploy command did not complete). Problems collecting logs are most likely due to the deployment machine being unable to reach the Appliance VM.
+1. When there is a problem with deployment, the first step is to collect logs by Appliance VM IP (not by kubeconfig, as the kubeconfig may be empty if deploy command did not complete). Problems collecting logs are most likely due to the management machine being unable to reach the Appliance VM.
Once logs are collected, extract the folder and open kva.log. Review the kva.log for more information on the failure to help pinpoint the cause of the KVA timeout error.
-1. The deployment machine must be able to communicate with the Appliance VM IP and Control Plane IP. Ping the Control Plane IP and Appliance VM IP from the deployment machine and verify there is a response from both IPs.
+1. The management machine must be able to communicate with the Appliance VM IP and Control Plane IP. Ping the Control Plane IP and Appliance VM IP from the management machine and verify there is a response from both IPs.
- If a request times out, the deployment machine is not able to communicate with the IP(s). This could be caused by a closed port, network misconfiguration or a firewall block. Work with your network administrator to allow communication between the deployment machine to the Control Plane IP and Appliance VM IP.
+ If a request times out, the management machine is not able to communicate with the IP(s). This could be caused by a closed port, network misconfiguration or a firewall block. Work with your network administrator to allow communication between the management machine and the Control Plane IP and Appliance VM IP.
-1. Appliance VM IP and Control Plane IP must be able to communicate with the deployment machine and vCenter endpoint (for VMware) or MOC cloud agent endpoint (for HCI). Work with your network administrator to ensure the network is configured to permit this. This may require adding a firewall rule to open port 443 from the Appliance VM IP and Control Plane IP to vCenter or port 65000 and 55000 for Azure Stack HCI MOC cloud agent. Review [network requirements for Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-prerequisites#network-port-requirements) and [VMware](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md) for Arc resource bridge.
+1. Appliance VM IP and Control Plane IP must be able to communicate with the management machine and vCenter endpoint (for VMware) or MOC cloud agent endpoint (for HCI). Work with your network administrator to ensure the network is configured to permit this. This may require adding a firewall rule to open port 443 from the Appliance VM IP and Control Plane IP to vCenter or port 65000 and 55000 for Azure Stack HCI MOC cloud agent. Review [network requirements for Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-prerequisites#network-port-requirements) and [VMware](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md) for Arc resource bridge.
1. Appliance VM IP and Control Plane IP need internet access to [these required URLs](#restricted-outbound-connectivity). Azure Stack HCI requires [additional URLs](/azure-stack/hci/manage/azure-arc-vm-management-prerequisites). Work with your network administrator to ensure that the IPs can access the required URLs.
-1. In a non-proxy environment, the deployment machine must have external and internal DNS resolution. The deployment machine must be able to reach a DNS server that can resolve internal names such as vCenter endpoint for vSphere or cloud agent endpoint for Azure Stack HCI. The DNS server also needs to be able to [resolve external addresses](#restricted-outbound-connectivity), such as Azure URLs and OS image download URLs. Work with your system administrator to ensure that the deployment machine has internal and external DNS resolution. In a proxy environment, the DNS resolution on the proxy server should resolve internal endpoints and [required external addresses](#restricted-outbound-connectivity).
+1. In a non-proxy environment, the management machine must have external and internal DNS resolution. The management machine must be able to reach a DNS server that can resolve internal names such as vCenter endpoint for vSphere or cloud agent endpoint for Azure Stack HCI. The DNS server also needs to be able to [resolve external addresses](#restricted-outbound-connectivity), such as Azure URLs and OS image download URLs. Work with your system administrator to ensure that the management machine has internal and external DNS resolution. In a proxy environment, the DNS resolution on the proxy server should resolve internal endpoints and [required external addresses](#restricted-outbound-connectivity).
- To test DNS resolution to an internal address from the deployment machine in a non-proxy scenario, open command prompt and run `nslookup <vCenter endpoint or HCI MOC cloud agent IP>`. You should receive an answer if the deployment machine has internal DNS resolution in a non-proxy scenario. 
+ To test DNS resolution to an internal address from the management machine in a non-proxy scenario, open command prompt and run `nslookup <vCenter endpoint or HCI MOC cloud agent IP>`. You should receive an answer if the management machine has internal DNS resolution in a non-proxy scenario. 
1. Appliance VM needs to be able to reach a DNS server that can resolve internal names such as vCenter endpoint for vSphere or cloud agent endpoint for Azure Stack HCI. The DNS server also needs to be able to resolve external/internal addresses, such as Azure service addresses and container registry names for download of the Arc resource bridge container images from the cloud.
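The reachability and DNS checks in the steps above can be scripted from the management machine. The following Python sketch is illustrative only: the endpoint name and IPs (`vcenter.contoso.local`, `192.168.0.10`, `192.168.0.11`) and the control-plane port are placeholders you would replace with your own vCenter/MOC endpoint, Appliance VM IP, and Control Plane IP.

```python
import socket

def can_resolve(name: str) -> bool:
    """Return True if the local DNS stack can resolve *name* to an address."""
    try:
        socket.gethostbyname(name)
        return True
    except OSError:
        return False

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within *timeout* seconds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Placeholder endpoints -- substitute your vCenter/MOC endpoint,
    # Appliance VM IP, and Control Plane IP before running.
    checks = {
        "internal DNS (vcenter.contoso.local)": can_resolve("vcenter.contoso.local"),
        "external DNS (mcr.microsoft.com)": can_resolve("mcr.microsoft.com"),
        "Appliance VM 192.168.0.10:443": port_open("192.168.0.10", 443),
        "Control Plane 192.168.0.11:6443": port_open("192.168.0.11", 6443),
    }
    for label, ok in checks.items():
        print(f"{label}: {'OK' if ok else 'FAIL'}")
```

Note that `ping` (ICMP) and a TCP connection are different checks; this sketch tests TCP reachability, which is closer to what the deployment traffic actually needs.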
value out of range.
This error occurs when you run the Azure CLI commands in a 32-bit context, which is the default behavior. The vSphere SDK only supports running in a 64-bit context. The specific error returned from the vSphere SDK is `Unable to import ova of size 6GB using govc`. When you install the Azure CLI, it's a 32-bit Windows Installer package. However, the Azure CLI `az arcappliance` extension needs to run in a 64-bit context.
-To resolve this issue, perform the following steps to configure your client machine with the Azure CLI 64-bit version:
+To resolve this issue, perform the following steps to configure your management machine with the Azure CLI 64-bit version:
1. Uninstall the current version of the Azure CLI on Windows following these [steps](/cli/azure/install-azure-cli-windows#uninstall).
1. Install version 3.6 or higher of [Python](https://www.python.org/downloads/windows/) (64-bit).
azure-functions Functions Premium Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-premium-plan.md
See the complete regional availability of Functions on the [Azure web site](http
|East Asia| 100 | 20 |
|East US | 100 | 100 |
|East US 2| 100 | 100 |
-|France Central| 100 | 20 |
+|France Central| 100 | 60 |
|Germany West Central| 100 | 20 |
|Japan East| 100 | 20 |
|Japan West| 100 | 20 |
See the complete regional availability of Functions on the [Azure web site](http
|Korea Central| 100 | 20 |
|Korea South| Not Available | 20 |
|North Central US| 100 | 20 |
-|North Europe| 100 | 80 |
+|North Europe| 100 | 100 |
|Norway East| 100 | 20 |
|South Africa North| 100 | 20 |
-|South Central US| 100 | 60 |
+|South Central US| 100 | 100 |
|South India | 100 | Not Available |
|Southeast Asia| 100 | 20 |
|Switzerland North| 100 | 20 |
|Switzerland West| 100 | 20 |
|UAE North| 100 | 20 |
-|UK South| 100 | 60 |
+|UK South| 100 | 100 |
|UK West| 100 | 20 |
|USGov Arizona| 100 | 20 |
|USGov Texas| 100 | Not Available |
See the complete regional availability of Functions on the [Azure web site](http
|West Central US| 100 | 20 |
|West Europe| 100 | 100 |
|West India| 100 | 20 |
-|West US| 100 | 40 |
+|West US| 100 | 100 |
|West US 2| 100 | 20 |
|West US 3| 100 | 20 |
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md
recommendations: false
Previously updated : 01/20/2023
Last updated : 01/26/2023

# Compare Azure Government and global Azure
Traffic Manager health checks can originate from certain IP addresses for Azure
## Security
-This section outlines variations and considerations when using Security services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-sentinel,azure-dedicated-hsm,information-protection,application-gateway,vpn-gateway,security-center,key-vault,active-directory-ds,ddos-protection,active-directory&regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
+This section outlines variations and considerations when using Security services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=payment-hsm,azure-sentinel,azure-dedicated-hsm,information-protection,application-gateway,vpn-gateway,security-center,key-vault,active-directory-ds,ddos-protection,active-directory&regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
+
+### [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/)
+
+For feature variations and limitations, see [Microsoft Defender for Endpoint for US Government customers](/microsoft-365/security/defender-endpoint/gov).
### [Microsoft Defender for IoT](../defender-for-iot/index.yml)
azure-maps About Azure Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/about-azure-maps.md
For more details, read the [Geolocation service documentation](/rest/api/maps/ge
### Render service
-[Render service V2](/rest/api/maps/render-v2) introduces a new version of the [Get Map Tile V2 API](/rest/api/maps/render-v2/get-map-tile) that supports using Azure Maps tiles not only in the Azure Maps SDKs but other map controls as well. It includes raster and vector tile formats, 256x256 or 512x512 (where applicable) tile sizes and numerous map types such as road, weather, contour, or map tiles created using Azure Maps Creator. For a complete list see [TilesetID](/rest/api/maps/render-v2/get-map-tile#tilesetid) in the REST API documentation. It's recommended that you use Render service V2 instead of Render service V1. You are required to display the appropriate copyright attribution on the map anytime you use the Azure Maps Render service V2, either as basemaps or layers, in any third-party map control. For more information see [How to use the Get Map Attribution API](how-to-show-attribution.md).
+[Render service V2](/rest/api/maps/render-v2) introduces a new version of the [Get Map Tile V2 API](/rest/api/maps/render-v2/get-map-tile) that supports using Azure Maps tiles not only in the Azure Maps SDKs but other map controls as well. It includes raster and vector tile formats, 256x256 or 512x512 (where applicable) tile sizes and numerous map types such as road, weather, contour, or map tiles created using Azure Maps Creator. For a complete list, see [TilesetID](/rest/api/maps/render-v2/get-map-tile#tilesetid) in the REST API documentation. It's recommended that you use Render service V2 instead of Render service V1. You're required to display the appropriate copyright attribution on the map anytime you use the Azure Maps Render service V2, either as basemaps or layers, in any third-party map control. For more information, see [How to use the Get Map Attribution API](how-to-show-attribution.md).
:::image type="content" source="./media/about-azure-maps/intro_map.png" border="false" alt-text="Example of a map from the Render service V2":::
For more information, see the [Traffic service documentation](/rest/api/maps/tra
### Weather services
-Weather services offer APIs that developers can use to retrieve weather information for a particular location. The information contains details such as observation date and time, brief description of the weather conditions, weather icon, precipitation indicator flags, temperature, and wind speed information. Additional details such as RealFeel™ Temperature and UV index are also returned.
+Weather services offer APIs that developers can use to retrieve weather information for a particular location. The information contains details such as observation date and time, brief description of the weather conditions, weather icon, precipitation indicator flags, temperature, and wind speed information. Other details such as RealFeel™ Temperature and UV index are also returned.
Developers can use the [Get Weather along route API](/rest/api/maps/weather/getweatheralongroute) to retrieve weather information along a particular route. Also, the service supports the generation of weather notifications for waypoints that are affected by weather hazards, such as flooding or heavy rain.
azure-maps How To Dev Guide Js Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-js-sdk.md
mapsDemo
| Service Name | npm packages | Samples |
|--|--|--|
-| [Search][search readme] | [@azure/maps-search][search package] | [search samples][search sample] |
+| [Search][search readme] | [@azure-rest/maps-search][search package] | [search samples][search sample] |
| [Route][js route readme] | [@azure-rest/maps-route][js route package] | [route samples][js route sample] |
| [Render][js render readme] | [@azure-rest/maps-render][js render package]|[render sample][js render sample] |
| [Geolocation][js geolocation readme]|[@azure-rest/maps-geolocation][js geolocation package]|[geolocation sample][js geolocation sample] |
main().catch((err) => {
[Host daemon]: ./how-to-secure-daemon-app.md#host-a-daemon-on-non-azure-resources
[dotenv]: https://github.com/motdotla/dotenv#readme
-[search package]: https://www.npmjs.com/package/@azure/maps-search
-[search readme]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/maps/maps-search/README.md
-[search sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-search/samples/v1-beta
+[search package]: https://www.npmjs.com/package/@azure-rest/maps-search
+[search readme]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/maps/maps-search-rest/README.md
+[search sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-search-rest/samples/v1-beta
[js route readme]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/maps/maps-route-rest/README.md
[js route package]: https://www.npmjs.com/package/@azure-rest/maps-route
azure-maps Rest Sdk Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/rest-sdk-developer-guide.md
For more information, see the [Java SDK Developers Guide](how-to-dev-guide-java-
<!-- JavaScript/TypeScript SDK Developers Guide -->
[Node.js]: https://nodejs.org/en/download/
-[js search readme]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/maps/maps-search/README.md
-[js search package]: https://www.npmjs.com/package/@azure/maps-search
-[js search sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-search/samples/v1-beta/javascript
+[js search readme]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/maps/maps-search-rest/README.md
+[js search package]: https://www.npmjs.com/package/@azure-rest/maps-search
+[js search sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-search-rest/samples/v1-beta/javascript
[js route readme]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/maps/maps-route-rest/README.md
[js route package]: https://www.npmjs.com/package/@azure-rest/maps-route
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
We strongly recommended to update to the latest version at all times, or opt in
| Release Date | Release notes | Windows | Linux |
|:---|:---|:---|:---|
| Nov-Dec 2022 | <ul><li>Support for air-gapped clouds added for [Windows MSI installer for clients](./azure-monitor-agent-windows-client.md) </li><li>Reliability improvements for using AMA with Custom Metrics destination</li><li>Performance and internal logging improvements</li></ul> | 1.11.0.0 | None |
-| Oct 2022 | **Windows** <ul><li>Increased default retry timeout for data upload from 4 to 8 hours</li><li>Data quality improvements</li></ul> **Linux** <ul><li>Support for `http_proxy` and `https_proxy` environment variables for [network proxy configurations](./azure-monitor-agent-data-collection-endpoint.md#proxy-configuration) for the agent</li><li>[Text logs](./data-collection-text-log.md) <ul><li>Network proxy support enabled</li><li>Fixed missing `_ResourceId`</li><li>Increased maximum line size support to 1MB</li></ul></li><li>Support ingestion of syslog events whose timestamp is in the future</li><li>Performance improvements</li><li>Fixed `diskio` metrics instance name dimension to use the disk mount path(s) instead of the device name(s)</li><li>Fixed world writable file issue to lockdown write access to certain agent logs and configuration files stored locally on the machine</li></ul> | 1.10.0.0 | 1.24.2 |
+| Oct 2022 | **Windows** <ul><li>Increased reliability of data uploads</li><li>Data quality improvements</li></ul> **Linux** <ul><li>Support for `http_proxy` and `https_proxy` environment variables for [network proxy configurations](./azure-monitor-agent-data-collection-endpoint.md#proxy-configuration) for the agent</li><li>[Text logs](./data-collection-text-log.md) <ul><li>Network proxy support enabled</li><li>Fixed missing `_ResourceId`</li><li>Increased maximum line size support to 1MB</li></ul></li><li>Support ingestion of syslog events whose timestamp is in the future</li><li>Performance improvements</li><li>Fixed `diskio` metrics instance name dimension to use the disk mount path(s) instead of the device name(s)</li><li>Fixed world writable file issue to lockdown write access to certain agent logs and configuration files stored locally on the machine</li></ul> | 1.10.0.0 | 1.24.2 |
| Sep 2022 | Reliability improvements | 1.9.0.0 | None |
| August 2022 | **Common updates** <ul><li>Improved resiliency: Default lookback (retry) time updated to last 3 days (72 hours) up from 60 minutes, for agent to collect data post interruption. This is subject to default offline cache size of 10 gigabytes</li><li>Fixes the preview custom text log feature that was incorrectly removing the *TimeGenerated* field from the raw data of each event. All events are now additionally stamped with agent (local) upload time</li><li>Reliability and supportability improvements</li></ul> **Windows** <ul><li>Fixed datetime format to UTC</li><li>Fix to use default location for firewall log collection, if not provided</li><li>Reliability and supportability improvements</li></ul> **Linux** <ul><li>Support for OpenSuse 15, Debian 11 ARM64</li><li>Support for coexistence of Azure Monitor agent with legacy Azure Diagnostic extension for Linux (LAD)</li><li>Increased max-size of UDP payload for Telegraf output to prevent dimension truncation</li><li>Prevent unconfigured upload to Azure Monitor Metrics destination</li><li>Fix for disk metrics wherein *instance name* dimension will use the disk mount path(s) instead of the device name(s), to provide parity with legacy agent</li><li>Fixed *disk free MB* metric to report megabytes instead of bytes</li></ul> | 1.8.0.0 | 1.22.2 |
| July 2022 | Fix for mismatch event timestamps for Sentinel Windows Event Forwarding | 1.7.0.0 | None |
azure-monitor Alerts Metric Multiple Time Series Single Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-multiple-time-series-single-rule.md
For example, assume we've set the preceding alert rule to monitor for CPU above
The alert rule triggers on *myVM1* but not *myVM2*. These triggered alerts are independent. They can also resolve at different times depending on the individual behavior of each of the virtual machines.
-For more information about multi-resource alert rules and the resource types supported for this capability, see [Monitoring at scale using metric alerts in Azure Monitor](alerts-metric-overview.md#monitoring-at-scale-using-metric-alerts-in-azure-monitor).
+For more information about multi-resource alert rules and the resource types supported for this capability, see [Monitoring at scale using metric alerts in Azure Monitor](alerts-types.md#monitor-multiple-resources).
> [!NOTE]
> In a metric alert rule that monitors multiple resources, only a single condition is allowed.
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
This latest update adds a new column and reorders the metrics to be alphabetical
|---|---|---|---|---|---|---|
|AddRegion |Yes |Region Added |Count |Count |Region Added |Region |
|AutoscaleMaxThroughput |No |Autoscale Max Throughput |Count |Maximum |Autoscale Max Throughput |DatabaseName, CollectionName |
-|AvailableStorage |No |(deprecated) Available Storage |Bytes |Total |"Available Storage"will be removed from Azure Monitor at the end of September 2023. Cosmos DB collection storage size is now unlimited. The only restriction is that the storage size for each logical partition key is 20GB. You can enable PartitionKeyStatistics in Diagnostic Log to know the storage consumption for top partition keys. For more info about Cosmos DB storage quota, please check this doc https://docs.microsoft.com/azure/cosmos-db/concepts-limits. After deprecation, the remaining alert rules still defined on the deprecated metric will be automatically disabled post the deprecation date. |CollectionName, DatabaseName, Region |
+|AvailableStorage |No |(deprecated) Available Storage |Bytes |Total |"Available Storage" will be removed from Azure Monitor at the end of September 2023. Cosmos DB collection storage size is now unlimited. The only restriction is that the storage size for each logical partition key is 20 GB. You can enable PartitionKeyStatistics in Diagnostic Log to know the storage consumption for top partition keys. For more info about Cosmos DB storage quota, refer to [Azure Cosmos DB service limits](../../cosmos-db/concepts-limits.md). After deprecation, the remaining alert rules still defined on the deprecated metric will be automatically disabled post the deprecation date. |CollectionName, DatabaseName, Region |
|CassandraConnectionClosures |No |Cassandra Connection Closures |Count |Total |Number of Cassandra connections that were closed, reported at a 1 minute granularity |APIType, Region, ClosureReason |
|CassandraConnectorAvgReplicationLatency |No |Cassandra Connector Average ReplicationLatency |MilliSeconds |Average |Cassandra Connector Average ReplicationLatency |No Dimensions |
|CassandraConnectorReplicationHealthStatus |No |Cassandra Connector Replication Health Status |Count |Count |Cassandra Connector Replication Health Status |NotStarted, ReplicationInProgress, Error |
azure-monitor Data Platform Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-platform-logs.md
The following table describes some of the ways that you can use Azure Monitor Lo
| Alert | Configure a [log alert rule](../alerts/alerts-log.md) that sends a notification or takes [automated action](../alerts/action-groups.md) when the results of the query match a particular result. |
| Visualize | Pin query results rendered as tables or charts to an [Azure dashboard](../../azure-portal/azure-portal-dashboards.md).<br>Create a [workbook](../visualize/workbooks-overview.md) to combine with multiple sets of data in an interactive report. <br>Export the results of a query to [Power BI](./log-powerbi.md) to use different visualizations and share with users outside Azure.<br>Export the results of a query to [Grafana](../visualize/grafana-plugin.md) to use its dashboarding and combine with other data sources.|
| Get insights | Logs support [insights](../insights/insights-overview.md) that provide a customized monitoring experience for particular applications and services. |
-| Retrieve | Access log query results from a:<ul><li>Command line via the [Azure CLI](/cli/azure/monitor/log-analytics) or [Azure PowerShell cmdlets](/powershell/module/az.operationalinsights).</li><li>Custom app via the [REST API](https://dev.loganalytics.io/) or client library for [.NET](/dotnet/api/overview/azure/Monitor.Query-readme), [Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/monitor/azquery), [Java](/java/api/overview/azure/monitor-query-readme), [JavaScript](/javascript/api/overview/azure/monitor-query-readme), or [Python](/python/api/overview/azure/monitor-query-readme).</li></ul> |
+| Retrieve | Access log query results from a:<ul><li>Command line via the [Azure CLI](/cli/azure/monitor/log-analytics) or [Azure PowerShell cmdlets](/powershell/module/az.operationalinsights).</li><li>Custom app via the [REST API](/rest/api/loganalytics/) or client library for [.NET](/dotnet/api/overview/azure/Monitor.Query-readme), [Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/monitor/azquery), [Java](/java/api/overview/azure/monitor-query-readme), [JavaScript](/javascript/api/overview/azure/monitor-query-readme), or [Python](/python/api/overview/azure/monitor-query-readme).</li></ul> |
| Export | Configure [automated export of log data](./logs-data-export.md) to an Azure Storage account or Azure Event Hubs.<br>Build a workflow to retrieve log data and copy it to an external location by using [Azure Logic Apps](./logicapp-flow-connector.md). |

![Diagram that shows an overview of Azure Monitor Logs.](media/data-platform-logs/logs-overview.png)
The experience of using Log Analytics to work with Azure Monitor queries in the
- Learn about [log queries](./log-query-overview.md) to retrieve and analyze data from a Log Analytics workspace.
- Learn about [metrics in Azure Monitor](../essentials/data-platform-metrics.md).
-- Learn about the [monitoring data available](../data-sources.md) for various resources in Azure.
+- Learn about the [monitoring data available](../data-sources.md) for various resources in Azure.
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md
description: Customers meeting the minimum commitment tier could use dedicated c
Previously updated : 05/01/2022 Last updated : 01/01/2023

# Azure Monitor Logs Dedicated Clusters
-Azure Monitor Logs Dedicated Clusters are a deployment option that enables advanced capabilities for Azure Monitor Logs customers. Customers can select which of their Log Analytics workspaces should be hosted on dedicated clusters.
-
-Dedicated clusters require customers to commit for at least 500 GB of data ingestion per day. You can link existing workspace to a dedicated cluster and unlink it with no data loss or service interruption.
+Log Analytics dedicated clusters in Azure Monitor provide advanced capabilities and higher query utilization to linked Log Analytics workspaces. Clusters require a minimum ingestion commitment of 500 GB per day. You can link and unlink workspaces from a dedicated cluster without any data loss or service interruption.
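As a hedged sketch of the workflow this paragraph describes (the resource names below are placeholder assumptions, and the exact parameter set may vary by Azure CLI version), creating a dedicated cluster and linking a workspace might look like:

```azurecli
# Create a dedicated cluster with the minimum 500 GB/day commitment tier.
# (Placeholder resource names; cluster creation is a long-running operation.)
az monitor log-analytics cluster create \
    --resource-group MyResourceGroup \
    --name MyCluster \
    --location eastus \
    --sku-capacity 500

# Link an existing workspace to the cluster. Linking and unlinking
# don't cause data loss or service interruption.
az monitor log-analytics workspace linked-service create \
    --resource-group MyResourceGroup \
    --workspace-name MyWorkspace \
    --name cluster \
    --write-access-resource-id "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.OperationalInsights/clusters/MyCluster"
```

These commands require an authenticated Azure CLI session and an existing resource group and workspace.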
Capabilities that require dedicated clusters:
azure-monitor Save Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/save-query.md
Last updated 06/22/2022
- Allow other users to run the same query.
- Create a library of common queries for your organization.
+## Permissions
+- To save a query, you need the **Log Analytics Contributor** role.
+- To view a saved query, you need the **Log Analytics Reader** role.
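As a sketch of granting these roles with the Azure CLI (the assignees and workspace resource ID below are placeholder assumptions):

```azurecli
# Assign Log Analytics Contributor so a user can save queries.
# (Placeholder assignee and workspace resource ID.)
az role assignment create \
    --assignee "user@contoso.com" \
    --role "Log Analytics Contributor" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"

# Assign Log Analytics Reader so a user can view saved queries.
az role assignment create \
    --assignee "reader@contoso.com" \
    --role "Log Analytics Reader" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
```

Scoping the assignment to the workspace limits the permission to that workspace only.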
+ ## Save options
-When you save a query, it's stored in a query pack, which has benefits over the previous method of storing the query in a workspace. Saving to a query pack is the preferred method, and it provides the following benefits:
+When you save a query, it's stored in a query pack, which has benefits over the previous method of storing the query in a workspace, including:
- Easier discoverability with the ability to filter and group queries by different properties. - Queries are available when you use a resource scope in Log Analytics.
azure-monitor Search Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/search-jobs.md
Search jobs are asynchronous queries that fetch records into a new search table within your workspace for further analytics. The search job uses parallel processing and can run for hours across large datasets. This article describes how to create a search job and how to query its resulting data.

> [!NOTE]
-> The search job feature is currently not supported in workspaces with [customer-managed keys](customer-managed-keys.md) and in the China East 2 region.
+> The search job feature is currently not supported for the following cases:
+> - Workspaces with [customer-managed keys](customer-managed-keys.md).
+> - Workspaces in the China East 2 region.
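Search job results land in a new table in the workspace; tables created by search jobs use the `_SRCH` suffix. As a hedged sketch, assuming a completed search job produced a table named `Syslog_SRCH` (a placeholder), you could query it like any other table:

```azurecli
# Query the search job's result table with the Azure CLI.
# (Placeholder workspace GUID and table name.)
az monitor log-analytics query \
    --workspace "<workspace-customer-id>" \
    --analytics-query "Syslog_SRCH | take 10" \
    --output table
```

The same query works in Log Analytics in the Azure portal.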
## When to use search jobs
azure-monitor Snapshot Debugger App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-app-service.md
reviewer: cweining Previously updated : 08/18/2022 Last updated : 01/24/2023
Snapshot Debugger currently supports ASP.NET and ASP.NET Core apps that are running on Azure App Service on Windows service plans.
-We recommend that you run your application on the Basic service tier, or higher, when using Snapshot Debugger.
-
-For most applications, the Free and Shared service tiers don't have enough memory or disk space to save snapshots.
+> [!NOTE]
+> We recommend that you run your application on the Basic service tier, or higher, when using Snapshot Debugger. For most applications, the Free and Shared service tiers don't have enough memory or disk space to save snapshots. The Consumption tier is not currently available for Snapshot Debugger.
## <a id="installation"></a> Enable Snapshot Debugger
azure-netapp-files Configure Ldap Over Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-ldap-over-tls.md
na Previously updated : 03/15/2022 Last updated : 01/25/2023

# Configure ADDS LDAP over TLS for Azure NetApp Files
You can use LDAP over TLS to secure communication between an Azure NetApp Files
## Considerations
-* LDAP over TLS must not be enabled if you are using Azure Active Directory Domain Services (AADDS). AADDS uses LDAPS (port 636) to secure LDAP traffic instead of LDAP over TLS (port 389).
+* DNS PTR records must exist for each AD DS domain controller assigned to the **AD Site Name** specified in the Azure NetApp Files Active Directory connection.
+* PTR records must exist for all domain controllers in the site for ADDS LDAP over TLS to function properly.
## Generate and export root CA certificate
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
na Previously updated : 11/28/2022 Last updated : 01/25/2023

# Create and manage Active Directory connections for Azure NetApp Files
Several features of Azure NetApp Files require that you have an Active Directory
Azure NetApp Files supports LDAP Channel Binding if both LDAP Signing and LDAP over TLS settings options are enabled in the Active Directory Connection. For more information, see [ADV190023 | Microsoft Guidance for Enabling LDAP Channel Binding and LDAP Signing](https://portal.msrc.microsoft.com/en-us/security-guidance/advisory/ADV190023).
+ >[!NOTE]
+ >DNS PTR records for the AD DS machine account(s) must be created in the AD DS **Organizational Unit** specified in the Azure NetApp Files AD connection for LDAP Signing to work.
![Screenshot of the LDAP signing checkbox.](../media/azure-netapp-files/active-directory-ldap-signing.png)

* **Allow local NFS users with LDAP**
azure-netapp-files Understand Guidelines Active Directory Domain Service Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-guidelines-active-directory-domain-service-site.md
na Previously updated : 01/06/2022 Last updated : 01/25/2023

# Understand guidelines for Active Directory Domain Services site design and planning for Azure NetApp Files
The required network ports are as follows:
| NetBIOS name | 138 | UDP |
| SAM/LSA | 445 | TCP |
| SAM/LSA | 445 | UDP |
-| w32time | 123 | UDP |
*DNS running on AD DS domain controller
Ensure that you meet the following requirements about the DNS configurations:
* Ensure that DNS servers have network connectivity to the Azure NetApp Files delegated subnet hosting the Azure NetApp Files volumes.
* Ensure that network ports UDP 53 and TCP 53 are not blocked by firewalls or NSGs.
* Ensure that [the SRV records registered by the AD DS Net Logon service](https://social.technet.microsoft.com/wiki/contents/articles/7608.srv-records-registered-by-net-logon.aspx) have been created on the DNS servers.
-* Ensure that the PTR records for the SRV records registered by the AD DS Net Logon service have been created on the DNS servers.
+* Ensure that the PTR records for the AD DS domain controllers used by Azure NetApp Files have been created on the DNS servers.
* Azure NetApp Files supports standard and secure dynamic DNS updates. If you require secure dynamic DNS updates, ensure that secure updates are configured on the DNS servers.
-* If dynamic DNS updates are not used, you need to manually create A record and PTR records for Azure NetApp Files SMB volumes.
+* If dynamic DNS updates are not used, you need to manually create an A record and a PTR record for the AD DS machine account(s) created in the AD DS **Organizational Unit** (specified in the Azure NetApp Files AD connection) to support Azure NetApp Files LDAP Signing, LDAP over TLS, SMB, dual-protocol, or Kerberos NFSv4.1 volumes.
* For complex or large AD DS topologies, [DNS Policies or DNS subnet prioritization may be required to support LDAP enabled NFS volumes](#ad-ds-ldap-discover).

### Time source requirements
azure-vmware Deploy Arc For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-arc-for-azure-vmware-solution.md
In this article, you'll learn how to deploy Arc for Azure VMware Solution. Once
Before you begin checking off the prerequisites, verify the following actions have been done:

- You deployed an Azure VMware Solution private cluster.
-- You have a connection to the Azure VMware Solution private cloud through your on-prem environment or your native Azure Virtual Network.
+- You have a connection to the Azure VMware Solution private cloud through your on-premises environment or your native Azure Virtual Network.
- There should be an isolated NSX-T Data Center segment for deploying the Arc for Azure VMware Solution Open Virtualization Appliance (OVA). If an isolated NSX-T Data Center segment doesn't exist, one will be created.

## Prerequisites
The following items are needed to ensure you're set up to begin the onboarding p
> [!NOTE]
> Only the default port of 443 is supported. If you use a different port, Appliance VM creation will fail.
-At this point, you should have already deployed an Azure VMware Solution private cloud. You need to have a connection from your on-prem environment or your native Azure Virtual Network to the Azure VMware Solution private cloud.
+At this point, you should have already deployed an Azure VMware Solution private cloud. You need to have a connection from your on-premises environment or your native Azure Virtual Network to the Azure VMware Solution private cloud.
For network planning and setup, see the [network planning checklist for Azure VMware Solution](./tutorial-network-checklist.md).
Use the following steps to guide you through the process to onboard in Arc for A
- `GatewayIPAddress` is the gateway for the segment for Arc appliance VM. - `applianceControlPlaneIpAddress` is the IP address for the Kubernetes API server that should be part of the segment IP CIDR provided. It shouldn't be part of the k8s node pool IP range. - `k8sNodeIPPoolStart`, `k8sNodeIPPoolEnd` are the starting and ending IP of the pool of IPs to assign to the appliance VM. Both need to be within the `networkCIDRForApplianceVM`.
- - `k8sNodeIPPoolStart`, `k8sNodeIPPoolEnd`, `gatewayIPAddress` ,`applianceControlPlaneIpAddress` are optional. You may choose to skip all the optional fields or provide values for all. If you choose not to provide the optional fields then you must use /28 address space for `networkCIDRForApplianceVM`
+ - `k8sNodeIPPoolStart`, `k8sNodeIPPoolEnd`, `gatewayIPAddress`, and `applianceControlPlaneIpAddress` are optional. You may choose to skip all the optional fields or provide values for all. If you choose not to provide the optional fields, then you must use /28 address space for `networkCIDRForApplianceVM`.
**Json example** ```json
When the extension installation steps are completed, they trigger deployment and
## Change Arc appliance credential
-When **cloudadmin** credentials are updated, use the following steps to update the credentials in the appliance store.
+When **cloud admin** credentials are updated, use the following steps to update the credentials in the appliance store.
-1. Log into the jumpbox VM from where onboarding was performed. Change the directory to **onboarding directory**.
+1. Log in to the jumpbox VM from where onboarding was performed. Change the directory to **onboarding directory**.
1. Run the following command for a Windows-based jumpbox VM: `./.temp/.env/Scripts/activate`
Use the following steps to perform a manual upgrade for Arc appliance virtual ma
1. Power off the VM. 1. Delete the VM. 1. Delete the download template corresponding to the VM.
-1. Delete the resource bridge ARM resource.
+1. Delete the resource bridge Azure Resource Manager resource.
1. Get the previous script `Config_avs` file and add the following configuration item: 1. `"register":false` 1. Download the latest version of the Azure VMware Solution onboarding script.
Use the following steps to uninstall extensions from the portal.
>[!NOTE] >**Steps 2-5** must be performed for all the VMs that have VM extensions installed.
-1. Log into your Azure VMware Solution private cloud.
+1. Log in to your Azure VMware Solution private cloud.
1. Select **Virtual machines** in **Private cloud**, found in the left navigation under "Arc-enabled VMware resources".
1. Search and select the virtual machine where you have **Guest management** enabled.
1. Select **Extensions**.
For the final step, you'll need to delete the resource bridge VM and the VM temp
## Preview FAQ
-**Is Arc supported in all the Azure VMware Solution regions?**
+**Region support for Azure VMware Solution**
-Arc is supported in EastUS, WestEU, UK South, Australia East, Canada Central and Southeast Asia regions however we are working to extend the regional support.
+Arc for Azure VMware Solution is supported in all regions where Arc for VMware vSphere on-premises is supported. For more details, see [Azure Arc-enabled VMware vSphere](https://learn.microsoft.com/azure/azure-arc/vmware-vsphere/overview).
**How does support work?**
Yes
**Is DHCP support available?**
-DHCP support is not available to customers at this time, we only support static IP.
+DHCP support isn't available to customers at this time. Only static IP addresses are supported.
>[!NOTE] > This is Azure VMware Solution 2.0 only. It's not available for Azure VMware Solution by Cloudsimple.
azure-web-pubsub Howto Monitor Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-monitor-azure-policy.md
# Audit compliance of Azure Web PubSub Service resources using Azure Policy
-[Azure Policy](../governance/policy/overview.md) is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources, so those resources stay compliant with your corporate standards and service level agreements.
+[Azure Policy](../governance/policy/overview.md) is a free service in Azure to create, assign, and manage policies that enforce rules and effects to ensure your resources stay compliant with your corporate standards and service level agreements. Use these policies to audit Web PubSub resources for compliance.
-This article introduces built-in policies for Azure Web PubSub Service. Use these policies to audit new and existing Web PubSub resources for compliance.
-
-There are no charges for using Azure Policy.
+This article describes the built-in policies for Azure Web PubSub Service.
## Built-in policy definitions
-The following built-in policy definitions are specific to Azure Web PubSub Service:
+
+The following table contains an index of Azure Policy built-in policy definitions for Azure Web PubSub. For Azure Policy built-ins for other services, see [Azure Policy built-in definitions](../governance/policy/samples/built-in-policies.md).
+
+The name of each built-in policy definition links to the policy definition in the Azure portal. Use the link in the Version column to view the source on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
[!INCLUDE [azure-policy-reference-policies-web-pubsub](../../includes/policy/reference/bycat/policies-web-pubsub.md)]

## Assign policy definitions
-* Assign policy definitions using the [Azure portal](../governance/policy/assign-policy-portal.md), [Azure CLI](../governance/policy/assign-policy-azurecli.md), a [Resource Manager template](../governance/policy/assign-policy-template.md), or the Azure Policy SDKs.
-* Scope a policy assignment to a resource group, a subscription, or an [Azure management group](../governance/management-groups/overview.md). Web PubSub policy assignments apply to existing and new Web PubSub resources within the scope.
-* Enable or disable [policy enforcement](../governance/policy/concepts/assignment-structure.md#enforcement-mode) at any time.
+When assigning a policy definition:
+
+* You can assign policy definitions using the [Azure portal](../governance/policy/assign-policy-portal.md), [Azure CLI](../governance/policy/assign-policy-azurecli.md), a [Resource Manager template](../governance/policy/assign-policy-template.md), or the Azure Policy SDKs.
+* Policy assignments can be scoped to a resource group, a subscription, or an [Azure management group](../governance/management-groups/overview.md).
+* You can enable or disable [policy enforcement](../governance/policy/concepts/assignment-structure.md#enforcement-mode) at any time.
+* Web PubSub policy assignments apply to existing and new Web PubSub resources within the scope.
> [!NOTE] > After you assign or update a policy, it takes some time for the assignment to be applied to resources in the defined scope. See information about [policy evaluation triggers](../governance/policy/how-to/get-compliance-data.md#evaluation-triggers).
When a resource is non-compliant, there are many possible reasons. To determine
### Policy compliance in the portal
-1. Select **All services**, and search for **Policy**.
+1. Open the Azure portal and search for **Policy**.
+1. Select **Policy**.
1. Select **Compliance**.
-1. Use the filters to limit compliance states or to search for policies
-
+1. Use the filters to display by **Scope**, **Type**, or **Compliance state**. Use the search box to filter by name or ID.
[ ![Policy compliance in portal](./media/howto-monitor-azure-policy/azure-policy-compliance.png) ](./media/howto-monitor-azure-policy/azure-policy-compliance.png#lightbox)
-2. Select a policy to review aggregate compliance details and events. If desired, then select a specific Web PubSub for resource compliance.
+1. Select a policy to review aggregate compliance details and events.
+1. Select a specific Web PubSub for resource compliance.
### Policy compliance in the Azure CLI
-You can also use the Azure CLI to get compliance data. For example, use the [az policy assignment list](/cli/azure/policy/assignment#az-policy-assignment-list) command in the CLI to get the policy IDs of the Azure Web PubSub Service policies that are applied:
+You can use the Azure CLI to get compliance data. Use the [az policy assignment list](/cli/azure/policy/assignment#az-policy-assignment-list) command to get the policy IDs of the Azure Web PubSub Service policies that are applied:
```azurecli
az policy assignment list --query "[?contains(displayName,'Web PubSub')].{name:displayName, ID:id}" --output table
```
-Sample output:
+Example output:
```
Name                                                         ID
[Preview]: Azure Web PubSub Service should use private links /subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.Authorization/policyAssignments/<assignmentId>
```
-Then run [az policy state list](/cli/azure/policy/state#az-policy-state-list) to return the JSON-formatted compliance state for all resources under a specific resource group:
+Run the [az policy state list](/cli/azure/policy/state#az-policy-state-list) command to return the JSON-formatted compliance state for all resources under a specific resource group:
```azurecli
az policy state list -g <resourceGroup>
```
-Or run [az policy state list](/cli/azure/policy/state#az-policy-state-list) to return the JSON-formatted compliance state of a specific Web PubSub resource:
+Run the [az policy state list](/cli/azure/policy/state#az-policy-state-list) command to return the JSON-formatted compliance state of a specific Web PubSub resource:
```azurecli az policy state list \
azure-web-pubsub Policy Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/policy-definitions.md
+
+ Title: Built-in policy definitions for Azure Web PubSub
+description: Lists Azure Policy built-in policy definitions for Azure Web PubSub. These built-in policy definitions provide common approaches to managing your Azure resources.
+ Last updated : 01/03/2022
+# Azure Policy built-in definitions for Azure Web PubSub
+
+This page is an index of [Azure Policy](../governance/policy/overview.md) built-in policy
+definitions for Azure Web PubSub. For Azure Policy built-ins for other services, see
+[Azure Policy built-in definitions](../governance/policy/samples/built-in-policies.md).
+
+The name of each built-in policy definition links to the policy definition in the Azure portal. Use
+the link in the **Version** column to view the source on the
+[Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
+
+## Policy definitions
+++
+## Next steps
+
+- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
+- Review the [Azure Policy definition structure](../governance/policy/concepts/definition-structure.md).
+- Review [Understanding policy effects](../governance/policy/concepts/effects.md).
backup Backup Azure Linux Database Consistent Enhanced Pre Post https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-linux-database-consistent-enhanced-pre-post.md
The new _enhanced_ pre-post script framework has the following key benefits:
The following list of databases is covered under the enhanced framework:

-- [Oracle (Generally Available)](../virtual-machines/workloads/oracle/oracle-database-backup-azure-backup.md) - [Link to support matrix](backup-support-matrix-iaas.md#support-matrix-for-managed-pre-post-scripts-for-linux-databases)
+- [Oracle (Generally Available)](../virtual-machines/workloads/oracle/oracle-database-backup-azure-backup.md) - [Link to support matrix](backup-support-matrix-iaas.md#support-matrix-for-managed-pre-and-post-scripts-for-linux-databases)
- MySQL (Preview)

## Prerequisites
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md
Title: Support matrix for Azure VM backup
-description: Provides a summary of support settings and limitations when backing up Azure VMs with the Azure Backup service.
+ Title: Support matrix for Azure VM backups
+description: Get a summary of support settings and limitations for backing up Azure VMs by using the Azure Backup service.
Last updated 12/06/2022
-# Support matrix for Azure VM backup
+# Support matrix for Azure VM backups
-You can use the [Azure Backup service](backup-overview.md) to back up on-premises machines and workloads, and Azure virtual machines (VMs). This article summarizes support settings and limitations when you back up Azure VMs with Azure Backup.
+You can use the [Azure Backup service](backup-overview.md) to back up on-premises machines and workloads, along with Azure virtual machines (VMs). This article summarizes support settings and limitations when you back up Azure VMs by using Azure Backup.
-Other support matrices:
+Other support matrices include:
- [General support matrix](backup-support-matrix.md) for Azure Backup
-- [Support matrix](backup-support-matrix-mabs-dpm.md) for Azure Backup server / System Center Data Protection Manager (DPM) backup
+- [Support matrix](backup-support-matrix-mabs-dpm.md) for Azure Backup servers and System Center Data Protection Manager (DPM) backup
- [Support matrix](backup-support-matrix-mars-agent.md) for backup with the Microsoft Azure Recovery Services (MARS) agent ## Supported scenarios
-Here's how you can back up and restore Azure VMs with the Azure Backup service.
+Here's how you can back up and restore Azure VMs by using the Azure Backup service.
**Scenario** | **Backup** | **Agent** | **Restore**
--- | --- | --- | ---
-Direct backup of Azure VMs | Back up the entire VM. | No additional agent is needed on the Azure VM. Azure Backup installs and uses an extension to the [Azure VM agent](../virtual-machines/extensions/agent-windows.md) that's running on the VM. | Restore as follows:<br/><br/> - **Create a basic VM**. This is useful if the VM has no special configuration such as multiple IP addresses.<br/><br/> - **Restore the VM disk**. Restore the disk. Then attach it to an existing VM, or create a new VM from the disk by using PowerShell.<br/><br/> - **Replace VM disk**. If a VM exists and it uses managed disks (unencrypted), you can restore a disk and use it to replace an existing disk on the VM.<br/><br/> - **Restore specific files/folders**. You can restore files/folders from a VM instead of from the entire VM.
-Direct backup of Azure VMs (Windows only) | Back up specific files/folders/volume. | Install the [Azure Recovery Services agent](backup-azure-file-folder-backup-faq.yml).<br/><br/> You can run the MARS agent alongside the backup extension for the Azure VM agent to back up the VM at file/folder level. | Restore specific folders/files.
-Back up Azure VM to the backup server | Back up files/folders/volumes; system state/bare metal files; app data to System Center DPM or to Microsoft Azure Backup Server (MABS).<br/><br/> DPM/MABS then backs up to the backup vault. | Install the DPM/MABS protection agent on the VM. The MARS agent is installed on DPM/MABS.| Restore files/folders/volumes; system state/bare metal files; app data.
+Direct backup of Azure VMs | Back up the entire VM. | No additional agent is needed on the Azure VM. Azure Backup installs and uses an extension to the [Azure VM agent](../virtual-machines/extensions/agent-windows.md) that's running on the VM. | Restore as follows:<br/><br/> - **Create a basic VM**. This is useful if the VM has no special configuration, such as multiple IP addresses.<br/><br/> - **Restore the VM disk**. Restore the disk. Then attach it to an existing VM, or create a new VM from the disk by using PowerShell.<br/><br/> - **Replace the VM disk**. If a VM exists and it uses managed disks (unencrypted), you can restore a disk and use it to replace an existing disk on the VM.<br/><br/> - **Restore specific files or folders**. You can restore files or folders from a VM instead of restoring the entire VM.
+Direct backup of Azure VMs (Windows only) | Back up specific files, folders, or volumes. | Install the [Azure Recovery Services agent](backup-azure-file-folder-backup-faq.yml).<br/><br/> You can run the MARS agent alongside the backup extension for the Azure VM agent to back up the VM at the file or folder level. | Restore specific files or folders.
+Backup of Azure VMs to the backup server | Back up files, folders, or volumes; system state or bare metal files; and app data to System Center DPM or to Microsoft Azure Backup Server (MABS).<br/><br/> DPM or MABS then backs up to the backup vault. | Install the DPM or MABS protection agent on the VM. The MARS agent is installed on DPM or MABS.| Restore files, folders, or volumes; system state or bare metal files; and app data.
-Learn more about backup [using a backup server](backup-architecture.md#architecture-back-up-to-dpmmabs) and about [support requirements](backup-support-matrix-mabs-dpm.md).
+Learn more about [using a backup server](backup-architecture.md#architecture-back-up-to-dpmmabs) and about [support requirements](backup-support-matrix-mabs-dpm.md).
## Supported backup actions

**Action** | **Support**
--- | ---
-Back up a VM that's shutdown/offline VM | Supported.<br/><br/> Snapshot is crash-consistent only, not app-consistent.
+Back up a VM that's shut down or offline | Supported.<br/><br/> Snapshot is crash consistent only, not app consistent.
Back up disks after migrating to managed disks | Supported.<br/><br/> Backup will continue to work. No action is required.
-Back up managed disks after enabling resource group lock | Not supported.<br/><br/> Azure Backup can't delete the older restore points, and backups will start to fail when the maximum limit of restore points is reached.
-Modify backup policy for a VM | Supported.<br/><br/> The VM will be backed up by using the schedule and retention settings in new policy. If retention settings are extended, existing recovery points are marked and kept. If they're reduced, existing recovery points will be pruned in the next cleanup job and eventually deleted.
-Cancel a backup job| Supported during snapshot process.<br/><br/> Not supported when the snapshot is being transferred to the vault.
+Back up managed disks after enabling a resource group lock | Not supported.<br/><br/> Azure Backup can't delete the older restore points. Backups will start to fail when the limit of restore points is reached.
+Modify backup policy for a VM | Supported.<br/><br/> The VM will be backed up according to the schedule and retention settings in the new policy. If retention settings are extended, existing recovery points are marked and kept. If they're reduced, existing recovery points will be pruned in the next cleanup job and eventually deleted.
+Cancel a backup job| Supported during the snapshot process.<br/><br/> Not supported when the snapshot is being transferred to the vault.
Back up the VM to a different region or subscription |Not supported.<br><br>For successful backup, virtual machines must be in the same subscription as the vault for backup.
-Backups per day (via the Azure VM extension) | Four backups per day - one scheduled backup as per the Backup policy, and three on-demand backups. <br><br> However, to allow user retries in case of failed attempts, hard limit for on-demand backups is set to nine attempts.
-Backups per day (via the MARS agent) | Three scheduled backups per day.
-Backups per day (via DPM/MABS) | Two scheduled backups per day.
-Monthly/yearly backup| Not supported when backing up with Azure VM extension. Only daily and weekly is supported.<br/><br/> You can set up the policy to retain daily/weekly backups for monthly/yearly retention period.
-Automatic clock adjustment | Not supported.<br/><br/> Azure Backup doesn't automatically adjust for daylight saving time changes when backing up a VM.<br/><br/> Modify the policy manually as needed.
-[Security features for hybrid backup](./backup-azure-security-feature.md) |Disabling security features isn't supported.
-Back up the VM whose machine time is changed | Not supported.<br/><br/> If the machine time is changed to a future date-time after enabling backup for that VM, however even if the time change is reverted, successful backup isn't guaranteed.
-Multiple Backups Per Day | Supported (in preview), using *Enhanced policy* (in preview). <br><br> For hourly backup, the minimum RPO is 4 hours and the maximum is 24 hours. You can set the backup schedule to 4, 6, 8, 12, and 24 hours respectively. Learn how to [back up an Azure VM using Enhanced policy](backup-azure-vms-enhanced-policy.md).
-Back up a VM with deprecated plan when publisher has removed it from Azure Marketplace | Not supported. <br><br> Backup is possible. However, restore will fail. <br><br> If you've already configured backup for VM with deprecated virtual machine offer and encounter restore error, see [Troubleshoot backup errors with Azure VMs](backup-azure-vms-troubleshoot.md#usererrormarketplacevmnotsupportedvm-creation-failed-due-to-market-place-purchase-request-being-not-present).
+Back up daily via the Azure VM extension | Four backups per day: one scheduled backup as set up in the backup policy, and three on-demand backups. <br><br> To allow user retries in case of failed attempts, the hard limit for on-demand backups is set to nine attempts.
+Back up daily via the MARS agent | Three scheduled backups per day.
+Back up daily via DPM or MABS | Two scheduled backups per day.
+Back up monthly or yearly| Not supported when you're backing up with the Azure VM extension. Only daily and weekly are supported.<br/><br/> You can set up the policy to retain daily or weekly backups for a monthly or yearly retention period.
+Automatically adjust the clock | Not supported.<br/><br/> Azure Backup doesn't automatically adjust for daylight saving time when you're backing up a VM.<br/><br/> Modify the policy manually as needed.
+[Disable security features for hybrid backup](./backup-azure-security-feature.md) |Not supported.
+Back up a VM whose machine time is changed | Not supported.<br/><br/> If you change the machine time to a future date/time after enabling backup for that VM, even if the time change is reverted, successful backup isn't guaranteed.
+Do multiple backups per day | Supported through **Enhanced policy** (in preview). <br><br> For hourly backup, the minimum recovery point objective (RPO) is 4 hours and the maximum is 24 hours. You can set the backup schedule to every 4, 6, 8, 12, or 24 hours. [Learn how to back up an Azure VM by using Enhanced policy](backup-azure-vms-enhanced-policy.md).
+Back up a VM with a deprecated plan when the publisher has removed it from Azure Marketplace | Not supported. <br><br> Backup is possible. However, restore will fail. <br><br> If you've already configured backup for a VM with a deprecated virtual machine offer and encounter a restore error, see [Troubleshoot backup errors with Azure VMs](backup-azure-vms-troubleshoot.md#usererrormarketplacevmnotsupportedvm-creation-failed-due-to-market-place-purchase-request-being-not-present).
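The Enhanced policy row above limits hourly schedules to a fixed set of intervals. As a quick sanity check, here's a minimal sketch (illustrative only, not part of any Azure SDK) that validates a requested interval against those limits and derives the resulting number of scheduled backups per day:

```python
# Illustrative helper (not part of any Azure SDK): validate an hourly
# backup interval against the Enhanced policy limits described above.
ALLOWED_INTERVALS_HOURS = (4, 6, 8, 12, 24)  # minimum RPO 4 hours, maximum 24 hours

def validate_hourly_interval(hours: int) -> int:
    """Return the interval if it's a supported Enhanced policy schedule."""
    if hours not in ALLOWED_INTERVALS_HOURS:
        raise ValueError(
            f"Unsupported interval {hours} h; choose one of {ALLOWED_INTERVALS_HOURS}"
        )
    return hours

def backups_per_day(hours: int) -> int:
    """Number of scheduled backups per day for a given interval."""
    return 24 // validate_hourly_interval(hours)
```

For example, a 4-hour schedule yields six scheduled backups per day, while a 24-hour schedule yields one.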
## Operating system support (Windows)
-The following table summarizes the supported operating systems when backing up Azure VMs running Windows.
+The following table summarizes the supported operating systems when you're backing up Azure VMs running Windows.
**Scenario** | **OS support** |
-Back up with Azure VM agent extension | - Windows 11 Client (64 bit only) <br/><br/> - Windows 10 Client (64 bit only) <br/><br/>- Windows Server 2022 (Datacenter/Datacenter Core/Standard) <br/><br/>- Windows Server 2019 (Datacenter/Datacenter Core/Standard) <br/><br/> - Windows Server 2016 (Datacenter/Datacenter Core/Standard) <br/><br/> - Windows Server 2012 R2 (Datacenter/Standard) <br/><br/> - Windows Server 2012 (Datacenter/Standard) <br/><br/> - Windows Server 2008 R2 (RTM and SP1 Standard) <br/><br/> - Windows Server 2008 (64 bit only)
-Back up with MARS agent | [Supported](backup-support-matrix-mars-agent.md#supported-operating-systems) operating systems.
-Back up with DPM/MABS | Supported operating systems for backup with [MABS](backup-mabs-protection-matrix.md) and [DPM](/system-center/dpm/dpm-protection-matrix).
+Back up with the Azure VM agent extension | - Windows 11 client (64 bit only) <br/><br/> - Windows 10 client (64 bit only) <br/><br/>- Windows Server 2022 (Datacenter, Datacenter Core, and Standard) <br/><br/>- Windows Server 2019 (Datacenter, Datacenter Core, and Standard) <br/><br/> - Windows Server 2016 (Datacenter, Datacenter Core, and Standard) <br/><br/> - Windows Server 2012 R2 (Datacenter and Standard) <br/><br/> - Windows Server 2012 (Datacenter and Standard) <br/><br/> - Windows Server 2008 R2 (RTM and SP1 Standard) <br/><br/> - Windows Server 2008 (64 bit only)
+Back up with the MARS agent | [Supported](backup-support-matrix-mars-agent.md#supported-operating-systems) operating systems
+Back up with DPM or MABS | Supported operating systems for backup with [MABS](backup-mabs-protection-matrix.md) and [DPM](/system-center/dpm/dpm-protection-matrix)
Azure Backup doesn't support 32-bit operating systems.
Here's what's supported if you want to back up Linux machines.
**Action** | **Support** |
-Back up Linux Azure VMs with the Linux Azure VM agent | File consistent backup.<br/><br/> App-consistent backup using [custom scripts](backup-azure-linux-app-consistent.md).<br/><br/> During restore, you can create a new VM, restore a disk and use it to create a VM, or restore a disk, and use it to replace a disk on an existing VM. You can also restore individual files and folders.
-Back up Linux Azure VMs with MARS agent | Not supported.<br/><br/> The MARS agent can only be installed on Windows machines.
-Back up Linux Azure VMs with DPM/MABS | Not supported.
-Back up Linux Azure VMs with docker mount points | Currently, Azure Backup doesn't support exclusion of docker mount points as these are mounted at different paths every time.
+Back up Linux Azure VMs with the Linux Azure VM agent | Supported for file-consistent backup.<br/><br/> Also supported for app-consistent backup that uses [custom scripts](backup-azure-linux-app-consistent.md).<br/><br/> During restore, you can create a new VM, restore a disk and use it to create a VM, or restore a disk and use it to replace a disk on an existing VM. You can also restore individual files and folders.
+Back up Linux Azure VMs with the MARS agent | Not supported.<br/><br/> The MARS agent can be installed only on Windows machines.
+Back up Linux Azure VMs with DPM or MABS | Not supported.
+Back up Linux Azure VMs with Docker mount points | Currently, Azure Backup doesn't support exclusion of Docker mount points because these are mounted at different paths every time.
## Operating system support (Linux)
-For Azure VM Linux backups, Azure Backup supports the list of Linux [distributions endorsed by Azure](../virtual-machines/linux/endorsed-distros.md). Note the following:
+For Linux VM backups, Azure Backup supports the list of [Linux distributions endorsed by Azure](../virtual-machines/linux/endorsed-distros.md). Note the following:
-- Azure Backup doesn't support Core OS Linux.
+- Azure Backup doesn't support CoreOS Linux.
- Azure Backup doesn't support 32-bit operating systems.
- Other bring-your-own Linux distributions might work as long as the [Azure VM agent for Linux](../virtual-machines/extensions/agent-linux.md) is available on the VM, and as long as Python is supported.
-- Azure Backup doesn't support a proxy-configured Linux VM if it doesn't have Python version 2.7 or higher installed.
-- Azure Backup doesn't support backing up NFS files that are mounted from storage, or from any other NFS server, to Linux or Windows machines. It only backs up disks that are locally attached to the VM.
+- Azure Backup doesn't support a proxy-configured Linux VM if it doesn't have Python version 2.7 or later installed.
+- Azure Backup doesn't support backing up Network File System (NFS) files that are mounted from storage, or from any other NFS server, to Linux or Windows machines. It backs up only disks that are locally attached to the VM.
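The proxy-related note above hinges on the interpreter version available on the VM. A minimal sketch (an illustrative check, not an official Azure Backup tool) of the 2.7-or-later test you might run:

```python
# Illustrative prerequisite check (not an official Azure Backup tool):
# proxy-configured Linux VMs need Python 2.7 or later, per the note above.
import sys

def python_meets_backup_prereq(version_info=sys.version_info) -> bool:
    """True if the interpreter satisfies the 2.7-or-later requirement."""
    return tuple(version_info[:2]) >= (2, 7)
```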
-## Support matrix for managed pre-post scripts for Linux databases
+## Support matrix for managed pre and post scripts for Linux databases
-Azure Backup provides support for customers to author their own pre-post scripts
+Azure Backup provides the following support for customers to author their own pre and post scripts.
|Supported database |OS version |Database version |
|---|---|---|
-|Oracle in Azure VMs | [Oracle Linux](../virtual-machines/linux/endorsed-distros.md) | Oracle 12.x or greater |
+|Oracle in Azure VMs | [Oracle Linux](../virtual-machines/linux/endorsed-distros.md) | Oracle 12.x or later |
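Pre and post scripts for database backups generally follow a quiesce/snapshot/resume pattern: the pre-script puts the database into a consistent state before the snapshot, and the post-script resumes normal operation afterward. The sketch below illustrates that pattern generically; the hook names and placeholder commands are assumptions for illustration and are not the Azure Backup plugin contract:

```python
# Generic pre/post hook pattern for application-consistent snapshots.
# The hook names and the quiesce/resume commands are illustrative
# assumptions; see the Azure Backup docs for the actual plugin contract.
import subprocess

def run_hook(commands):
    """Run each shell command, stopping at the first failure."""
    for cmd in commands:
        subprocess.run(cmd, shell=True, check=True)

def pre_script():
    # Quiesce the database so the snapshot is app consistent
    # (for Oracle, this might place the database in backup mode).
    # The command here is a placeholder.
    run_hook(['echo "BEGIN BACKUP"'])

def post_script():
    # Resume normal operation after the snapshot is taken.
    run_hook(['echo "END BACKUP"'])
```

Because `run_hook` uses `check=True`, a failing pre-script raises immediately instead of letting an inconsistent snapshot proceed silently.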
## Backup frequency and retention

**Setting** | **Limits** |
-Maximum recovery points per protected instance (machine/workload) | 9999.
+Maximum recovery points per protected instance (machine or workload) | 9999.
Maximum expiry time for a recovery point | No limit (99 years).
-Maximum backup-frequency to vault (Azure VM extension) | Once a day.
-Maximum backup-frequency to vault (MARS agent) | Three backups per day.
-Maximum backup-frequency to DPM/MABS | Every 15 minutes for SQL Server.<br/><br/> Once an hour for other workloads.
+Maximum backup frequency to a vault (Azure VM extension) | Once a day.
+Maximum backup frequency to a vault (MARS agent) | Three backups per day.
+Maximum backup frequency to DPM or MABS | Every 15 minutes for SQL Server.<br/><br/> Once an hour for other workloads.
Recovery point retention | Daily, weekly, monthly, and yearly.
Maximum retention period | Depends on backup frequency.
-Recovery points on DPM/MABS disk | 64 for file servers, and 448 for app servers.<br/><br/> Tape recovery points are unlimited for on-premises DPM.
+Recovery points on DPM or MABS disk | 64 for file servers, and 448 for app servers.<br/><br/> Tape recovery points are unlimited for on-premises DPM.
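To see how the 9999-recovery-points-per-instance limit interacts with backup frequency, here's a minimal sketch under a simple counting model (an assumption for illustration; actual point counts depend on the mix of daily, weekly, monthly, and yearly retention in the policy):

```python
# Illustrative model (an assumption, not Azure Backup's actual pruning
# logic): estimate recovery points against the 9999-per-instance limit.
def recovery_point_count(backups_per_day: int, retention_days: int) -> int:
    """Naive count: every backup in the retention window is kept."""
    return backups_per_day * retention_days

def within_limit(backups_per_day: int, retention_days: int,
                 limit: int = 9999) -> bool:
    """True if the estimated count stays under the vault limit."""
    return recovery_point_count(backups_per_day, retention_days) <= limit
```

Under this model, one backup per day retained for 10 years (3,650 points) fits comfortably, while three MARS backups per day retained for 10 years (10,950 points) would exceed the limit.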
## Supported restore methods

**Restore option** | **Details** |
-**Create a new VM** | Quickly creates and gets a basic VM up and running from a restore point.<br/><br/> You can specify a name for the VM, select the resource group and virtual network (VNet) in which it will be placed, and specify a storage account for the restored VM. The new VM must be created in the same region as the source VM.
-**Restore disk** | Restores a VM disk, which can then be used to create a new VM.<br/><br/> Azure Backup provides a template to help you customize and create a VM. <br/><br> The restore job generates a template that you can download and use to specify custom VM settings, and create a VM.<br/><br/> The disks are copied to the Resource Group you specify.<br/><br/> Alternatively, you can attach the disk to an existing VM, or create a new VM using PowerShell.<br/><br/> This option is useful if you want to customize the VM, add configuration settings that weren't there at the time of backup, or add settings that must be configured using the template or PowerShell.
-**Replace existing** | You can restore a disk, and use it to replace a disk on the existing VM.<br/><br/> The current VM must exist. If it's been deleted, this option can't be used.<br/><br/> Azure Backup takes a snapshot of the existing VM before replacing the disk, and stores it in the staging location you specify. Existing disks connected to the VM are replaced with the selected restore point.<br/><br/> The snapshot is copied to the vault, and retained in accordance with the retention policy. <br/><br/> After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren't needed. <br/><br/>Replace existing is supported for unencrypted managed VMs and for VMs [created using custom images](https://azure.microsoft.com/resources/videos/create-a-custom-virtual-machine-image-in-azure-resource-manager-with-powershell/). It's not supported for unmanaged disks and VMs, classic VMs, and [generalized VMs](../virtual-machines/windows/capture-image-resource.md).<br/><br/> If the restore point has more or less disks than the current VM, then the number of disks in the restore point will only reflect the VM configuration.<br><br> Replace existing is also supported for VMs with linked resources, like [user-assigned managed-identity](../active-directory/managed-identities-azure-resources/overview.md) and [Key Vault](../key-vault/general/overview.md).
-**Cross Region (secondary region)** | Cross Region restore can be used to restore Azure VMs in the secondary region, which is an [Azure paired region](../availability-zones/cross-region-replication-azure.md).<br><br> You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region.<br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore Disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> We don't currently support the [Replace existing disks](./backup-azure-arm-restore-vms.md#replace-existing-disks) option.<br><br> Permissions<br> The restore operation on secondary region can be performed by Backup Admins and App admins.
-**Cross Subscription (preview)** | Cross Subscription restore can be used to restore Azure managed VMs in different subscriptions.<br><br> You can restore Azure VMs or disks to any subscription (as per the Azure RBAC capabilities) from restore points. <br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore Disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> Cross Subscription Restore is unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) and [secondary region](backup-azure-arm-restore-vms.md#restore-in-secondary-region) restores. It's unsupported for [unmanaged VMs](backup-azure-arm-restore-vms.md#restoring-unmanaged-vms-and-disks-as-managed), [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup).
-**Cross Zonal Restore** | Cross Zonal restore can be used to restore Azure zone pinned VMs in available zones.<br><br> You can restore Azure VMs or disks to different zones (as per the Azure RBAC capabilities) from restore points. <br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore Disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> Cross Zonal Restore is unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) restore points. It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup).
-
+**Create a new VM** | This option quickly creates and gets a basic VM up and running from a restore point.<br/><br/> You can specify a name for the VM, select the resource group and virtual network in which it will be placed, and specify a storage account for the restored VM. The new VM must be created in the same region as the source VM.
+**Restore disk** | This option restores a VM disk, which you can then use to create a new VM.<br/><br/> Azure Backup provides a template to help you customize and create a VM. <br/><br> The restore job generates a template that you can download and use to specify custom VM settings and create a VM.<br/><br/> The disks are copied to the resource group that you specify.<br/><br/> Alternatively, you can attach the disk to an existing VM, or create a new VM by using PowerShell.<br/><br/> This option is useful if you want to customize the VM, add configuration settings that weren't there at the time of backup, or add settings that must be configured via the template or PowerShell.
+**Replace existing** | You can restore a disk and use it to replace a disk on the existing VM.<br/><br/> The current VM must exist. If it has been deleted, you can't use this option.<br/><br/> Azure Backup takes a snapshot of the existing VM before replacing the disk, and it stores the snapshot in the staging location that you specify. Existing disks connected to the VM are replaced with the selected restore point.<br/><br/> The snapshot is copied to the vault and retained in accordance with the retention policy. <br/><br/> After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren't needed. <br/><br/>This option is supported for unencrypted managed VMs and for VMs [created from custom images](https://azure.microsoft.com/resources/videos/create-a-custom-virtual-machine-image-in-azure-resource-manager-with-powershell/). It's not supported for unmanaged disks and VMs, classic VMs, and [generalized VMs](../virtual-machines/windows/capture-image-resource.md).<br/><br/> If the restore point has more or fewer disks than the current VM, the number of disks in the restore point will only reflect the VM configuration.<br><br> This option is also supported for VMs with linked resources, like [user-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md) and [Azure Key Vault](../key-vault/general/overview.md).
+**Cross Region (secondary region)** | You can use cross-region restore to restore Azure VMs in the secondary region, which is an [Azure paired region](../availability-zones/cross-region-replication-azure.md).<br><br> You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region.<br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> We don't currently support the [Replace existing disks](./backup-azure-arm-restore-vms.md#replace-existing-disks) option.<br><br> Backup admins and app admins have permissions to perform the restore operation on a secondary region.
+**Cross Subscription (preview)** | You can use cross-subscription restore to restore Azure managed VMs in different subscriptions.<br><br> You can restore Azure VMs or disks to any subscription from restore points, subject to your Azure role-based access control (RBAC) permissions. <br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> Cross-subscription restore is unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) and [secondary region](backup-azure-arm-restore-vms.md#restore-in-secondary-region) restores. It's also unsupported for [unmanaged VMs](backup-azure-arm-restore-vms.md#restoring-unmanaged-vms-and-disks-as-managed), [encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups), and [trusted launch VMs](backup-support-matrix-iaas.md#tvm-backup).
+**Cross Zonal Restore** | You can use cross-zonal restore to restore Azure zone-pinned VMs in available zones.<br><br> You can restore Azure VMs or disks to different zones, subject to your Azure RBAC permissions, from restore points. <br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> Cross-zonal restore is unsupported for [snapshot](backup-azure-vms-introduction.md#snapshot-creation) restore points. It's also unsupported for [encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [trusted launch VMs](backup-support-matrix-iaas.md#tvm-backup).
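The restore options above come with preconditions. A small decision sketch makes the main ones explicit (the option names mirror the table; the rules shown are deliberately non-exhaustive and illustrative only):

```python
# Illustrative decision helper based on the restore-methods table above.
# Not exhaustive: encryption, trusted launch, and managed-disk caveats
# from the table are omitted for brevity.
def available_restore_options(vm_still_exists: bool,
                              secondary_region: bool) -> list:
    """Return the restore options the table permits for this scenario."""
    # "Create a new VM" and "Restore disk" are broadly available,
    # including for cross-region (secondary region) restore.
    options = ["Create a new VM", "Restore disk"]
    # "Replace existing" needs the source VM to still exist and isn't
    # offered for cross-region restore.
    if vm_still_exists and not secondary_region:
        options.append("Replace existing")
    return options
```

For example, once the source VM has been deleted, only the create-VM and restore-disk paths remain.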
## Support for file-level restore

**Restore** | **Supported** |
-Restoring files across operating systems | You can restore files on any machine that has the same (or compatible) OS as the backed-up VM. See the [Compatible OS table](backup-azure-restore-files-from-vm.md#step-3-os-requirements-to-successfully-run-the-script).
-Restoring files from encrypted VMs | Not supported.
-Restoring files from network-restricted storage accounts | Not supported.
-Restoring files on VMs using Windows Storage Spaces | Restore not supported on same VM.<br/><br/> Instead, restore the files on a compatible VM.
-Restore files on Linux VM using LVM/raid arrays | Restore not supported on same VM.<br/><br/> Restore on a compatible VM.
-Restore files with special network settings | Restore not supported on same VM. <br/><br/> Restore on a compatible VM.
-Restore files from Shared disk, Temp drive, Deduplicated Disk, Ultra disk and disk with write Accelerator enabled | Restore not supported, <br/><br/>see [Azure VM storage support](#vm-storage-support).
+Restore files across operating systems | You can restore files on any machine that has the same OS as the backed-up VM, or a compatible OS. See the [compatible OS table](backup-azure-restore-files-from-vm.md#step-3-os-requirements-to-successfully-run-the-script).
+Restore files from encrypted VMs | Not supported.
+Restore files from network-restricted storage accounts | Not supported.
+Restore files on VMs by using Windows Storage Spaces | Not supported on the same VM.<br/><br/> Instead, restore the files on a compatible VM.
+Restore files on a Linux VM by using LVM or RAID arrays | Not supported on the same VM.<br/><br/> Restore on a compatible VM.
+Restore files with special network settings | Not supported on the same VM. <br/><br/> Restore on a compatible VM.
+Restore files from a shared disk, temporary drive, deduplicated disk, ultra disk, or disk with a write accelerator enabled | Not supported. <br/><br/>See [Azure VM storage support](#vm-storage-support).
## Support for VM management
The following table summarizes support for backup during VM management tasks, su
**Restore** | **Supported** |
-<a name="backup-azure-cross-subscription-restore">Restore across subscription</a> | [Cross Subscription Restore (preview)](backup-azure-arm-restore-vms.md#restore-options) is now supported in Azure VMs.
-[Restore across region](backup-azure-arm-restore-vms.md#cross-region-restore) | Supported.
-<a name="backup-azure-cross-zonal-restore">Restore across zone</a> | [Cross Zonal Restore](backup-azure-arm-restore-vms.md#restore-options) is now supported in Azure VMs.
-Restore to an existing VM | Use replace disk option.
-Restore disk with storage account enabled for Azure Storage Service Encryption (SSE) | Not supported.<br/><br/> Restore to an account that doesn't have SSE enabled.
+<a name="backup-azure-cross-subscription-restore">Restore across a subscription</a> | [Cross-subscription restore (preview)](backup-azure-arm-restore-vms.md#restore-options) is now supported in Azure VMs.
+[Restore across a region](backup-azure-arm-restore-vms.md#cross-region-restore) | Supported.
+<a name="backup-azure-cross-zonal-restore">Restore across a zone</a> | [Cross-zonal restore](backup-azure-arm-restore-vms.md#restore-options) is now supported in Azure VMs.
+Restore to an existing VM | Use the replace disk option.
+Restore a disk with a storage account enabled for Azure Storage service-side encryption (SSE) | Not supported.<br/><br/> Restore to an account that doesn't have SSE enabled.
Restore to mixed storage accounts |Not supported.<br/><br/> Based on the storage account type, all restored disks will be either premium or standard, and not mixed.
-Restore VM directly to an availability set | For managed disks, you can restore the disk and use the availability set option in the template.<br/><br/> Not supported for unmanaged disks. For unmanaged disks, restore the disk, and then create a VM in the availability set.
-Restore backup of unmanaged VMs after upgrading to managed VM| Supported.<br/><br/> You can restore disks, and then create a managed VM.
-Restore VM to restore point before the VM was migrated to managed disks | Supported.<br/><br/> You restore to unmanaged disks (default), convert the restored disks to managed disk, and create a VM with the managed disks.
-Restore a VM that's been deleted. | Supported.<br/><br/> You can restore the VM from a recovery point.
+Restore a VM directly to an availability set | For managed disks, you can restore the disk and use the availability set option in the template.<br/><br/> Not supported for unmanaged disks. For unmanaged disks, restore the disk, and then create a VM in the availability set.
+Restore backup of unmanaged VMs after upgrading to a managed VM| Supported.<br/><br/> You can restore disks and then create a managed VM.
+Restore a VM to a restore point before the VM was migrated to managed disks | Supported.<br/><br/> You restore to unmanaged disks (default), convert the restored disks to managed disks, and create a VM with the managed disks.
+Restore a VM that has been deleted | Supported.<br/><br/> You can restore the VM from a recovery point.
Restore a domain controller VM | Supported. For details, see [Restore domain controller VMs](backup-azure-arm-restore-vms.md#restore-domain-controller-vms).
-Restore VM in different virtual network |Supported.<br/><br/> The virtual network must be in the same subscription and region.
+Restore a VM in a different virtual network |Supported.<br/><br/> The virtual network must be in the same subscription and region.
## VM compute support

**Compute** | **Support** |
-VM size |Any Azure VM size with at least 2 CPU cores and 1-GB RAM.<br/><br/> [Learn more.](../virtual-machines/sizes.md)
-Back up VMs in [availability sets](../virtual-machines/availability.md#availability-sets) | Supported.<br/><br/> You can't restore a VM in an available set by using the option to quickly create a VM. Instead, when you restore the VM, restore the disk and use it to deploy a VM, or restore a disk and use it to replace an existing disk.
-Back up VMs that are deployed with [Hybrid Use Benefit (HUB)](../virtual-machines/windows/hybrid-use-benefit-licensing.md) | Supported.
-Back up VMs that are deployed from [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?filters=virtual-machine-images)<br/><br/> (Published by Microsoft, third party) |Supported.<br/><br/> The VM must be running a supported operating system.<br/><br/> When recovering files on the VM, you can restore only to a compatible OS (not an earlier or later OS). We don't restore Azure Marketplace VMs backed as VMs, as these need purchase information. They're only restored as disks.
-Back up VMs that are deployed from a custom image (third-party) |Supported.<br/><br/> The VM must be running a supported operating system.<br/><br/> When recovering files on the VM, you can restore only to a compatible OS (not an earlier or later OS).
-Back up VMs that are migrated to Azure| Supported.<br/><br/> To back up the VM, the VM agent must be installed on the migrated machine.
-Back up Multi-VM consistency | Azure Backup doesn't provide data and application consistency across multiple VMs.
-Backup with [Diagnostic Settings](../azure-monitor/essentials/platform-logs-overview.md) | Unsupported. <br/><br/> If the restore of the Azure VM with diagnostic settings is triggered using the [Create New](backup-azure-arm-restore-vms.md#create-a-vm) option, then the restore fails.
-Restore of Zone-pinned VMs | Supported (where [availability zones](https://azure.microsoft.com/global-infrastructure/availability-zones/) are available).<br/><br/>Azure Backup now supports [restoring Azure VMs to a any available zones](backup-azure-arm-restore-vms.md#restore-options) other that the zone that's pinned in VMs. This enables you to restore VMs when the primary zone is unavailable.d
-Gen2 VMs | Supported <br> Azure Backup supports backup and restore of [Gen2 VMs](https://azure.microsoft.com/updates/generation-2-virtual-machines-in-azure-public-preview/). When these VMs are restored from Recovery point, they're restored as [Gen2 VMs](https://azure.microsoft.com/updates/generation-2-virtual-machines-in-azure-public-preview/).
-Backup of Azure VMs with locks | Unsupported for unmanaged VMs. <br><br> Supported for managed VMs.
-[Spot VMs](../virtual-machines/spot-vms.md) | Unsupported. Azure Backup restores Spot VMs as regular Azure VMs.
-[Azure Dedicated Host](../virtual-machines/dedicated-hosts.md) | Supported<br></br>While restoring an Azure VM through the [Create New](backup-azure-arm-restore-vms.md#create-a-vm) option, though the restore gets successful, Azure VM can't be restored in the dedicated host. To achieve this, we recommend you to restore as disks. While [restoring as disks](backup-azure-arm-restore-vms.md#restore-disks) with the template, create a VM in dedicated host, and then attach the disks.<br></br>This is not applicable in secondary region, while performing [Cross Region Restore](backup-azure-arm-restore-vms.md#cross-region-restore).
-Windows Storage Spaces configuration of standalone Azure VMs | Supported
-[Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration) | Supported for flexible orchestration model to back up and restore Single Azure VM.
-Restore with Managed identities | Yes, supported for managed Azure VMs, and not supported for classic and unmanaged Azure VMs. <br><br> Cross Region Restore isn't supported with managed identities. <br><br> Currently, this is available in all Azure public and national cloud regions. <br><br> [Learn more](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities).
-<a name="tvm-backup">Trusted Launch VM</a> | Backup supported. <br><br> Backup of Trusted Launch VM is supported through [Enhanced policy](backup-azure-vms-enhanced-policy.md). You can enable backup through [Recovery Services vault](./backup-azure-arm-vms-prepare.md), [VM Manage blade](./backup-during-vm-creation.md#start-a-backup-after-creating-the-vm), and [Create VM blade](backup-during-vm-creation.md#create-a-vm-with-backup-configured). <br><br> **Feature details** <br><br> - Backup is supported in all regions where Trusted Launch VM is available. <br><br> - Configurations of Backup, Alerts, and Monitoring for Trusted Launch VM are currently not supported through Backup center. <br><br> - Migration of an existing [Generation 2](../virtual-machines/generation-2.md) VM (protected with Azure Backup) to Trusted Launch VM is currently not supported. Learn how to [create a Trusted Launch VM](../virtual-machines/trusted-launch-portal.md?tabs=portal#deploy-a-trusted-launch-vm). <br><br> - Item-level restore is not supported.
-[Confidential VM](../confidential-computing/confidential-vm-overview.md) | The backup support is in Limited Preview. <br><br> Backup is supported only for those Confidential VMs with no confidential disk encryption and for Confidential VMs with confidential OS disk encryption using Platform Managed Key (PMK). <br><br> Backup is currently not supported for Confidential VMs with confidential OS disk encryption using Customer Managed Key (CMK). <br><br> **Feature details** <br><br> - Backup is supported in [all regions where Confidential VM is available](../confidential-computing/confidential-vm-overview.md#regions). <br><br> - Backup is supported using [Enhanced Policy](backup-azure-vms-enhanced-policy.md) only. You can configure backup through [Create VM blade](backup-azure-arm-vms-prepare.md), [VM Manage blade](backup-during-vm-creation.md#start-a-backup-after-creating-the-vm), and [Recovery Services vault](backup-azure-arm-vms-prepare.md). <br><br> - [Cross Region Restore](backup-azure-arm-restore-vms.md#cross-region-restore) and File Recovery (Item level Restore) for Confidential VM are currently not supported.
+Back up VMs of a certain size |You can back up any Azure VM that has at least two CPU cores and 1 GB of RAM.<br/><br/> [Learn more](../virtual-machines/sizes.md).
+Back up VMs in [availability sets](../virtual-machines/availability.md#availability-sets) | Supported.<br/><br/> You can't restore a VM in an availability set by using the option to quickly create a VM. Instead, when you restore the VM, restore the disk and use it to deploy a VM, or restore a disk and use it to replace an existing disk.
+Back up VMs that are deployed with [Azure Hybrid Benefit](../virtual-machines/windows/hybrid-use-benefit-licensing.md) | Supported.
+Back up VMs that are deployed from [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?filters=virtual-machine-images) (published by Microsoft or a third party) |Supported.<br/><br/> The VMs must be running a supported operating system.<br/><br/> When you're recovering files on the VM, you can restore only to a compatible OS (not an earlier or later OS). We don't restore Azure Marketplace VMs backed up as VMs, because these need purchase information. They're restored only as disks.
+Back up VMs that are deployed from a custom image (third-party) |Supported.<br/><br/> The VMs must be running a supported operating system.<br/><br/> When you're recovering files on VMs, you can restore only to a compatible OS (not an earlier or later OS).
+Back up VMs that are migrated to Azure| Supported.<br/><br/> To back up a VM, make sure that the VM agent is installed on the migrated machine.
+Back up multiple VMs and provide consistency | Azure Backup doesn't provide data and application consistency across multiple VMs.
+Back up a VM with [diagnostic settings](../azure-monitor/essentials/platform-logs-overview.md) | Not supported. <br/><br/> If the restore of the Azure VM with diagnostic settings is triggered via the [Create new](backup-azure-arm-restore-vms.md#create-a-vm) option, the restore fails.
+Restore zone-pinned VMs | Supported (where [availability zones](https://azure.microsoft.com/global-infrastructure/availability-zones/) are available).<br/><br/>Azure Backup now supports [restoring Azure VMs to any availability zone](backup-azure-arm-restore-vms.md#restore-options) other than the zone that's pinned in the VMs. This support enables you to restore VMs when the primary zone is unavailable.
+Back up Gen2 VMs | Supported. <br><br/> Azure Backup supports backup and restore of [Gen2 VMs](https://azure.microsoft.com/updates/generation-2-virtual-machines-in-azure-public-preview/). When these VMs are restored from a recovery point, they're restored as [Gen2 VMs](https://azure.microsoft.com/updates/generation-2-virtual-machines-in-azure-public-preview/).
+Back up Azure VMs with locks | Supported for managed VMs. <br><br> Not supported for unmanaged VMs.
+[Restore spot VMs](../virtual-machines/spot-vms.md) | Not supported. <br><br/> Azure Backup restores spot VMs as regular Azure VMs.
+[Restore VMs in an Azure dedicated host](../virtual-machines/dedicated-hosts.md) | Supported.<br></br>When you're restoring an Azure VM through the [Create new](backup-azure-arm-restore-vms.md#create-a-vm) option, the VM can't be restored in the dedicated host, even when the restore is successful. To achieve this, we recommend that you [restore as disks](backup-azure-arm-restore-vms.md#restore-disks). While you're restoring as disks by using the template, create a VM in a dedicated host, and then attach the disks.<br></br>This is not applicable in a secondary region while you're performing [cross-region restore](backup-azure-arm-restore-vms.md#cross-region-restore).
+Configure standalone Azure VMs in Windows Storage Spaces | Supported.
+[Restore virtual machine scale sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration) | Supported for the flexible orchestration model to back up and restore a single Azure VM.
+Restore with managed identities | Supported for managed Azure VMs. <br><br> Not supported for classic and unmanaged Azure VMs. <br><br> Cross-region restore isn't supported with managed identities. <br><br> Currently, this is available in all Azure public and national cloud regions. <br><br> [Learn more](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities).
+<a name="tvm-backup">Back up trusted launch VMs</a> | Backup is supported. <br><br> Backup of trusted launch VMs is supported through [Enhanced policy](backup-azure-vms-enhanced-policy.md). You can enable backup through a [Recovery Services vault](./backup-azure-arm-vms-prepare.md), the [pane for managing a VM](./backup-during-vm-creation.md#start-a-backup-after-creating-the-vm), and the [pane for creating a VM](backup-during-vm-creation.md#create-a-vm-with-backup-configured). <br><br> **Feature details** <br><br> - Backup is supported in all regions where trusted launch VMs are available. <br><br> - Configuration of backups, alerts, and monitoring for trusted launch VMs is currently not supported through the backup center. <br><br> - Migration of an existing [Gen2 VM](../virtual-machines/generation-2.md) (protected with Azure Backup) to a trusted launch VM is currently not supported. [Learn how to create a trusted launch VM](../virtual-machines/trusted-launch-portal.md?tabs=portal#deploy-a-trusted-launch-vm). <br><br> - Item-level restore is not supported.
+[Back up confidential VMs](../confidential-computing/confidential-vm-overview.md) | The backup support is in limited preview. <br><br> Backup is supported only for confidential VMs that have no confidential disk encryption and for confidential VMs that have confidential OS disk encryption through a platform-managed key (PMK). <br><br> Backup is currently not supported for confidential VMs that have confidential OS disk encryption through a customer-managed key (CMK). <br><br> **Feature details** <br><br> - Backup is supported in [all regions where confidential VMs are available](../confidential-computing/confidential-vm-overview.md#regions). <br><br> - Backup is supported only if you're using [Enhanced policy](backup-azure-vms-enhanced-policy.md). You can configure backup through the [pane for creating a VM](backup-azure-arm-vms-prepare.md), the [pane for managing a VM](backup-during-vm-creation.md#start-a-backup-after-creating-the-vm), and the [Recovery Services vault](backup-azure-arm-vms-prepare.md). <br><br> - [Cross-region restore](backup-azure-arm-restore-vms.md#cross-region-restore) and file recovery (item-level restore) for confidential VMs are currently not supported.
## VM storage support

**Component** | **Support**
--- | ---
-Azure VM data disks | Support for backup of Azure VMs with up to 32 disks.<br><br> Support for backup of Azure VMs with unmanaged disks or classic VMs is up to 16 disks only.
+Azure VM data disks | Support for backup of Azure VMs is up to 32 disks.<br><br> Support for backup of Azure VMs with unmanaged disks or classic VMs is up to 16 disks only.
Data disk size | Individual disk size can be up to 32 TB and a maximum of 256 TB combined for all disks in a VM.
-Storage type | Standard HDD, Standard SSD, Premium SSD. <br><br> Backup and restore of [ZRS disks](../virtual-machines/disks-redundancy.md#zone-redundant-storage-for-managed-disks) is supported.
+Storage type | Standard HDD, Standard SSD, Premium SSD. <br><br> Backup and restore of [zone-redundant storage disks](../virtual-machines/disks-redundancy.md#zone-redundant-storage-for-managed-disks) is supported.
Managed disks | Supported.
-Encrypted disks | Supported.<br/><br/> Azure VMs enabled with Azure Disk Encryption can be backed up (with or without the Azure AD app).<br/><br/> Encrypted VMs can't be recovered at the file/folder level. You must recover the entire VM.<br/><br/> You can enable encryption on VMs that are already protected by Azure Backup. <br><br> You can back up and restore disks encrypted using platform-managed keys (PMKs) or customer-managed keys (CMKs). You can also assign a disk-encryption set while restoring in the same region (that is providing disk-encryption set while performing cross-region restore is currently not supported, however, you can assign the DES to the restored disk after the restore is complete).
-Disks with Write Accelerator enabled | Azure VM with WA disk backup is available in all Azure public regions starting from May 18, 2022. If WA disk backup is not required as part of VM backup, you can choose to remove with [**Selective disk** feature](selective-disk-backup-restore.md). <br><br>**Important** <br> Virtual machines with WA disks need internet connectivity for a successful backup (even though those disks are excluded from the backup).
-Disks enabled for access with private EndPoint | Unsupported.
-Back up & Restore deduplicated VMs/disks | Azure Backup doesn't support deduplication. For more information, see this [article](./backup-support-matrix.md#disk-deduplication-support) <br/> <br/> - Azure Backup doesn't deduplicate across VMs in the Recovery Services vault <br/> <br/> - If there are VMs in deduplication state during restore, the files can't be restored because the vault doesn't understand the format. However, you can successfully perform the full VM restore.
-Add disk to protected VM | Supported.
-Resize disk on protected VM | Supported.
-Shared storage| Backing up VMs using Cluster Shared Volume (CSV) or Scale-Out File Server isn't supported. CSV writers are likely to fail during backup. On restore, disks containing CSV volumes might not come-up.
+Encrypted disks | Supported.<br/><br/> Azure VMs enabled with Azure Disk Encryption can be backed up (with or without the Azure Active Directory app).<br/><br/> Encrypted VMs can't be recovered at the file or folder level. You must recover the entire VM.<br/><br/> You can enable encryption on VMs that Azure Backup is already protecting. <br><br> You can back up and restore disks encrypted via platform-managed keys or customer-managed keys. You can also assign a disk-encryption set while restoring in the same region. That is, providing a disk-encryption set while performing cross-region restore is currently not supported. However, you can assign the disk-encryption set to the restored disk after the restore is complete.
+Disks with a write accelerator enabled | Azure VMs with disk backup for a write accelerator became available in all Azure public regions on May 18, 2022. If disk backup for a write accelerator is not required as part of VM backup, you can choose to remove it by using the [selective disk feature](selective-disk-backup-restore.md). <br><br>**Important** <br> Virtual machines with write accelerator disks need internet connectivity for a successful backup, even though those disks are excluded from the backup.
+Disks enabled for access with a private endpoint | Not supported.
+Backup and restore of deduplicated VMs or disks | Azure Backup doesn't support deduplication. For more information, see [this article](./backup-support-matrix.md#disk-deduplication-support). <br/> <br/> Azure Backup doesn't deduplicate across VMs in the Recovery Services vault. <br/> <br/> If there are VMs in a deduplication state during restore, the files can't be restored because the vault doesn't understand the format. However, you can successfully perform the full VM restore.
+Adding a disk to a protected VM | Supported.
+Resizing a disk on a protected VM | Supported.
+Shared storage| Backing up VMs by using Cluster Shared Volumes (CSV) or Scale-Out File Server isn't supported. CSV writers are likely to fail during backup. On restore, disks that contain CSV volumes might not come up.
[Shared disks](../virtual-machines/disks-shared-enable.md) | Not supported.
-Ultra SSD disks | Not supported. For more information, see these [limitations](selective-disk-backup-restore.md#limitations).
-[Temporary disks](../virtual-machines/managed-disks-overview.md#temporary-disk) | Temporary disks aren't backed up by Azure Backup.
+Ultra SSD disks | Not supported. For more information, see [these limitations](selective-disk-backup-restore.md#limitations).
+[Temporary disks](../virtual-machines/managed-disks-overview.md#temporary-disk) | Azure Backup doesn't back up temporary disks.
NVMe/[ephemeral disks](../virtual-machines/ephemeral-os-disks.md) | Not supported.
-[ReFS](/windows-server/storage/refs/refs-overview) restore | Supported. VSS supports app-consistent backups on ReFS also like NFS.
-Dynamic disk with spanned/striped volumes | Supported <br><br> If you enable selective disk feature on an Azure VM, then this won't be supported.
+[Resilient File System (ReFS)](/windows-server/storage/refs/refs-overview) restore | Supported. Volume Shadow Copy Service (VSS) supports app-consistent backups on ReFS.
+Dynamic disk with spanned or striped volumes | Supported, unless you enable the selective disk feature on an Azure VM.
## VM network support

**Component** | **Support**
--- | ---
-Number of network interfaces (NICs) | Up to maximum number of NICs supported for a specific Azure VM size.<br/><br/> NICs are created when the VM is created during the restore process.<br/><br/> The number of NICs on the restored VM mirrors the number of NICs on the VM when you enabled protection. Removing NICs after you enable protection doesn't affect the count.
-External/internal load balancer |Supported. <br/><br/> [Learn more](backup-azure-arm-restore-vms.md#restore-vms-with-special-configurations) about restoring VMs with special network settings.
+Number of network interfaces (NICs) | Supported up to the maximum number for a specific Azure VM size.<br/><br/> NICs are created when the VM is created during the restore process.<br/><br/> The number of NICs on the restored VM mirrors the number of NICs on the VM when you enabled protection. Removing NICs after you enable protection doesn't affect the count.
+External or internal load balancer |Supported. <br/><br/> [Learn more](backup-azure-arm-restore-vms.md#restore-vms-with-special-configurations) about restoring VMs with special network settings.
Multiple reserved IP addresses |Supported. <br/><br/> [Learn more](backup-azure-arm-restore-vms.md#restore-vms-with-special-configurations) about restoring VMs with special network settings.
VMs with multiple network adapters| Supported. <br/><br/> [Learn more](backup-azure-arm-restore-vms.md#restore-vms-with-special-configurations) about restoring VMs with special network settings.
-VMs with public IP addresses| Supported.<br/><br/> Associate an existing public IP address with the NIC, or create an address and associate it with the NIC after restore is done.
-Network security group (NSG) on NIC/subnet. |Supported.
+VMs with public IP addresses| Supported.<br/><br/> Associate an existing public IP address with the NIC, or create an address and associate it with the NIC after the restore is done.
+Network security group (NSG) on a NIC or subnet |Supported.
Static IP address | Not supported.<br/><br/> A new VM that's created from a restore point is assigned a dynamic IP address.<br/><br/> For classic VMs, you can't back up a VM with a reserved IP address and no defined endpoint.
-Dynamic IP address |Supported.<br/><br/> If the NIC on the source VM uses dynamic IP addressing, by default the NIC on the restored VM will use it too.
+Dynamic IP address |Supported.<br/><br/> If the NIC on the source VM uses dynamic IP addressing, the NIC on the restored VM will also use it by default.
Azure Traffic Manager| Supported.<br/><br/>If the backed-up VM is in Traffic Manager, manually add the restored VM to the same Traffic Manager instance.
Azure DNS |Supported.
Custom DNS |Supported.
Outbound connectivity via HTTP proxy | Supported.<br/><br/> An authenticated proxy isn't supported.
-Virtual network service endpoints| Supported.<br/><br/> Firewall and virtual network storage account settings should allow access from all networks.
+Virtual network service endpoints| Supported.<br/><br/> Storage account settings for a firewall and a virtual network should allow access from all networks.
-## VM security and encryption support
+## Support for VM security and encryption
-Azure Backup supports encryption for in-transit and at-rest data:
+Azure Backup supports encryption for in-transit and at-rest data.
-Network traffic to Azure:
+For network traffic to Azure:
-- Backup-traffic from servers to the Recovery Services vault is encrypted by using Advanced Encryption Standard 256.
+- Backup traffic from servers to the Recovery Services vault is encrypted via Advanced Encryption Standard 256.
- Backup data is sent over a secure HTTPS link.
-- The backup data is stored in the Recovery Services vault in encrypted form.
+- Backup data is stored in the Recovery Services vault in encrypted form.
- Only you have the encryption key to unlock this data. Microsoft can't decrypt the backup data at any point.

> [!WARNING]
> After you set up the vault, only you have access to the encryption key. Microsoft never maintains a copy and doesn't have access to the key. If the key is misplaced, Microsoft can't recover the backup data.
-Data security:
+For data security:
-- When backing up Azure VMs, you need to set up encryption *within* the virtual machine.
-- Azure Backup supports Azure Disk Encryption, which uses BitLocker on virtual machines running Windows and uses **dm-crypt** on Linux virtual machines.
-- On the back end, Azure Backup uses [Azure Storage Service encryption](../storage/common/storage-service-encryption.md), which protects data at rest.
+- When you're backing up Azure VMs, you need to set up encryption *within* the virtual machine.
+- Azure Backup supports Azure Disk Encryption, which uses BitLocker on virtual machines running Windows and uses *dm-crypt* on Linux virtual machines.
+- On the back end, Azure Backup uses [Azure Storage service-side encryption](../storage/common/storage-service-encryption.md) to help protect data at rest.
**Machine** | **In transit** | **At rest**
--- | --- | ---
-On-premises Windows machines without DPM/MABS | ![Yes][green] | ![Yes][green]
+On-premises Windows machines without DPM or MABS | ![Yes][green] | ![Yes][green]
Azure VMs | ![Yes][green] | ![Yes][green]
-On-premises/Azure VMs with DPM | ![Yes][green] | ![Yes][green]
-On-premises/Azure VMs with MABS | ![Yes][green] | ![Yes][green]
+On-premises or Azure VMs with DPM | ![Yes][green] | ![Yes][green]
+On-premises or Azure VMs with MABS | ![Yes][green] | ![Yes][green]
## VM compression support
-Backup supports the compression of backup traffic, as summarized in the following table. Note the following:
+Azure Backup supports the compression of backup traffic. Note the following:
- For Azure VMs, the VM extension reads the data directly from the Azure storage account over the storage network. It isn't necessary to compress this traffic.
-- If you're using DPM or MABS, you can save bandwidth by compressing the data before it's backed up to DPM/MABS.
+- If you're using DPM or MABS, you can save bandwidth by compressing the data before it's backed up.
-**Machine** | **Compress to MABS/DPM (TCP)** | **Compress to vault (HTTPS)**
+**Machine** | **Compress to DPM/MABS (TCP)** | **Compress to vault (HTTPS)**
--- | --- | ---
-On-premises Windows machines without DPM/MABS | NA | ![Yes][green]
-Azure VMs | NA | NA
-On-premises/Azure VMs with DPM | ![Yes][green] | ![Yes][green]
-On-premises/Azure VMs with MABS | ![Yes][green] | ![Yes][green]
+On-premises Windows machines without DPM or MABS | Not applicable | ![Yes][green]
+Azure VMs | Not applicable | Not applicable
+On-premises or Azure VMs with DPM | ![Yes][green] | ![Yes][green]
+On-premises or Azure VMs with MABS | ![Yes][green] | ![Yes][green]
## Next steps
backup Guidance Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/guidance-best-practices.md
You can configure such critical alerts and route them to any preferred notificat
#### Automatic Retry of Failed Backup Jobs
-Many of the failure errors or the outage scenarios are transient in nature, and you can remediate by setting up the right Azure role-based access control (Azure RBAC) permissions3 or re-trigger the backup/restore job. As the solution to such failures is simple, that you don't need tp invest time waiting for an engineer to manually trigger the job or to assign the relevant permission. Therefore, the smarter way to handle this scenario is to automate the retry of the failed jobs. This will highly minimize the time taken to recover from failures.
+Many of the failure errors or outage scenarios are transient in nature, and you can remediate them by setting up the right Azure role-based access control (Azure RBAC) permissions or by re-triggering the backup or restore job. Because the solution to such failures is simple, you don't need to invest time waiting for an engineer to manually trigger the job or to assign the relevant permission. Therefore, the smarter way to handle this scenario is to automate the retry of the failed jobs, which greatly minimizes the time taken to recover from failures.
You can achieve this by retrieving relevant backup data via Azure Resource Graph (ARG) and combining it with a corrective [PowerShell/CLI procedure](/azure/architecture/framework/resiliency/auto-retry). Watch the following video to learn how to re-trigger backup for all failed jobs (across vaults, subscriptions, and tenants) by using ARG and PowerShell.
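The retry pattern described above can be sketched in a few lines. This is an illustrative model only: the `BackupJob` records and field names are hypothetical stand-ins for the results of an Azure Resource Graph query, and the actual re-trigger would be an `az backup` or PowerShell call rather than a list append.

```python
# Sketch of automated retry for failed backup jobs. The job records below stand in
# for results of an Azure Resource Graph query; field names are illustrative.
from dataclasses import dataclass

@dataclass
class BackupJob:
    vm_name: str
    status: str        # e.g. "Completed", "Failed"
    attempts: int = 0

def retry_failed_jobs(jobs, max_retries=3):
    """Return the names of VMs whose backup was re-triggered."""
    retried = []
    for job in jobs:
        # Transient failures are simply re-triggered, up to a retry budget.
        if job.status == "Failed" and job.attempts < max_retries:
            job.attempts += 1
            # In a real script, this is where you'd invoke the backup job
            # again (for example, via Azure CLI or PowerShell) for job.vm_name.
            retried.append(job.vm_name)
    return retried

jobs = [BackupJob("vm-a", "Completed"), BackupJob("vm-b", "Failed"),
        BackupJob("vm-c", "Failed", attempts=3)]
print(retry_failed_jobs(jobs))  # → ['vm-b']; vm-c has exhausted its retry budget
```

The retry budget keeps a persistently failing job (for example, one with a genuine configuration problem) from being re-triggered forever.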
batch Batch Apis Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-apis-tools.md
The Azure Resource Manager APIs for Batch provide programmatic access to Batch a
| API | API reference | Download | Tutorial | Code samples |
| --- | --- | --- | --- | --- |
| **Batch Management REST** |[Azure REST API - Docs](/rest/api/batchmanagement/) |- |- |[GitHub](https://github.com/Azure-Samples/batch-dotnet-manage-batch-accounts) |
-| **Batch Management .NET** |[Azure SDK for .NET - Docs](/dotnet/api/overview/azure/batch/management/management-batch) |[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Management.Batch/) | [Tutorial](batch-management-dotnet.md) |[GitHub](https://github.com/Azure-Samples/azure-batch-samples/tree/master/CSharp) |
+| **Batch Management .NET** |[Azure SDK for .NET - Docs](/dotnet/api/overview/azure/batch/management/management-batch(deprecated)) |[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Management.Batch/) | [Tutorial](batch-management-dotnet.md) |[GitHub](https://github.com/Azure-Samples/azure-batch-samples/tree/master/CSharp) |
| **Batch Management Python** |[Azure SDK for Python - Docs](/samples/azure-samples/azure-samples-python-management/batch/) |[PyPI](https://pypi.org/project/azure-mgmt-batch/) |- |- |
| **Batch Management JavaScript** |[Azure SDK for JavaScript - Docs](/javascript/api/overview/azure/arm-batch-readme) |[npm](https://www.npmjs.com/package/@azure/arm-batch) |- |- |
| **Batch Management Java** |[Azure SDK for Java - Docs](/java/api/overview/azure/batch/management) |[Maven](https://search.maven.org/search?q=a:azure-batch) |- |- |
batch Batch Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-customer-managed-key.md
resourceGroupName='myResourceGroup'
accountName='mybatchaccount'

az batch account create \
- -n $accountName \
- -g $resourceGroupName \
+ --name $accountName \
+ --resource-group $resourceGroupName \
    --locations regionName='West US 2' \
    --identity 'SystemAssigned'
```
After the account is created, you can verify that system-assigned managed identi
```azurecli
az batch account show \
- -n $accountName \
- -g $resourceGroupName \
+ --name $accountName \
+ --resource-group $resourceGroupName \
    --query identity
```
In the [Azure portal](https://portal.azure.com/), go to the Batch account page.
### Azure CLI
-After the Batch account is created with system-assigned managed identity and the access to Key Vault is granted, update the Batch account with the `{Key Identifier}` URL under `keyVaultProperties` parameter. Also set **encryption_key_source** as `Microsoft.KeyVault`.
+After the Batch account is created with system-assigned managed identity and access to Key Vault is granted, update the Batch account with the `{Key Identifier}` URL under the `keyVaultProperties` parameter. Also set `--encryption-key-source` to `Microsoft.KeyVault`.
```azurecli
az batch account set \
- -n $accountName \
- -g $resourceGroupName \
- --encryption_key_source Microsoft.KeyVault \
- --encryption_key_identifier {YourKeyIdentifier}
+ --name $accountName \
+ --resource-group $resourceGroupName \
+ --encryption-key-source Microsoft.KeyVault \
+ --encryption-key-identifier {YourKeyIdentifier}
```

## Create a Batch account with user-assigned managed identity and customer-managed keys
You can also use Azure CLI to update the version.
```azurecli
az batch account set \
- -n $accountName \
- -g $resourceGroupName \
- --encryption_key_identifier {YourKeyIdentifierWithNewVersion}
+ --name $accountName \
+ --resource-group $resourceGroupName \
+ --encryption-key-identifier {YourKeyIdentifierWithNewVersion}
```

## Use a different key for Batch encryption
You can also use Azure CLI to use a different key.
```azurecli
az batch account set \
- -n $accountName \
- -g $resourceGroupName \
- --encryption_key_identifier {YourNewKeyIdentifier}
+ --name $accountName \
+ --resource-group $resourceGroupName \
+ --encryption-key-identifier {YourNewKeyIdentifier}
```

## Frequently asked questions
batch Jobs And Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/jobs-and-tasks.md
A job specifies the [pool](nodes-and-pools.md#pools) in which the work is to be
You can assign an optional job priority to jobs that you create. The Batch service uses the priority value of the job to determine the order of scheduling (for all tasks within the job) within each pool.
-To update the priority of a job, call the [Update the properties of a job](/rest/api/batchservice/job/update) operation (Batch REST), or modify the [CloudJob.Priority](/dotnet/api/microsoft.azure.batch.cloudjob.priority) (Batch .NET). Priority values range from -1000 (lowest priority) to 1000 (highest priority).
+To update the priority of a job, call the [Update the properties of a job](/rest/api/batchservice/job/update) operation (Batch REST), or modify the [CloudJob.Priority](/dotnet/api/microsoft.azure.batch.cloudjob.priority) (Batch .NET). Priority values range from -1000 (lowest priority) to +1000 (highest priority).
Within the same pool, higher-priority jobs have scheduling precedence over lower-priority jobs. Tasks in lower-priority jobs that are already running won't be preempted by tasks in a higher-priority job. Jobs with the same priority level have an equal chance of being scheduled, and ordering of task execution is not defined.
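The ordering rules above can be sketched with a small model. This is an illustrative sketch, not the Batch service's actual scheduler: it sorts job IDs by priority (clamped to the documented -1000 to +1000 range), and because the sort is stable, equal-priority jobs keep their input order here, whereas the service makes no ordering guarantee between them.

```python
# Illustrative model of Batch job scheduling precedence: higher priority first,
# with values clamped to the documented range of -1000 to +1000.
def clamp_priority(p):
    return max(-1000, min(1000, p))

def scheduling_order(jobs):
    """jobs: list of (job_id, priority) tuples.
    Returns job IDs in scheduling precedence, highest priority first."""
    return [job_id for job_id, p in
            sorted(jobs, key=lambda j: -clamp_priority(j[1]))]

print(scheduling_order([("nightly", 0), ("urgent", 1000), ("bulk", -500)]))
# → ['urgent', 'nightly', 'bulk']
```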
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/Overview/overview.md
Once a QnA Maker knowledge base is published, a client application sends a quest
The QnA Maker portal provides the complete knowledge base authoring experience. You can import documents, in their current form, to your knowledge base. These documents (such as an FAQ, product manual, spreadsheet, or web page) are converted into question and answer pairs. Each pair is analyzed for follow-up prompts and connected to other pairs. The final _markdown_ format supports rich presentation including images and links.
-Once your knowledge base is edited, publish the knowledge base to a working [Azure Web App bot](https://azure.microsoft.com/services/bot-service/) without writing any code. Test your bot in the [Azure portal](https://portal.azure.com) or download it and continue development.
-
## High quality responses with layered ranking

QnA Maker's system is a layered ranking approach. The data is stored in Azure search, which also serves as the first ranking layer. The top results from Azure search are then passed through QnA Maker's NLP re-ranking model to produce the final results and confidence score.
cognitive-services Language Identification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-identification.md
Previously updated : 01/12/2023 Last updated : 01/24/2023 zone_pivot_groups: programming-languages-speech-services-nomore-variant
Language identification (LID) use cases include:
* [Speech-to-text recognition](#speech-to-text) when you need to identify the language in an audio source and then transcribe it to text.
* [Speech translation](#speech-translation) when you need to identify the language in an audio source and then translate it to another language.
-Note that for speech recognition, the initial latency is higher with language identification. You should only include this optional feature as needed.
+For speech recognition, the initial latency is higher with language identification. You should only include this optional feature as needed.
## Configuration options
+> [!IMPORTANT]
+> Language Identification (preview) APIs have been simplified in the Speech SDK version 1.25. The
+> `SpeechServiceConnection_SingleLanguageIdPriority` and `SpeechServiceConnection_ContinuousLanguageIdPriority` properties have
+> been removed and replaced by a single property, `SpeechServiceConnection_LanguageIdMode`. Prioritizing between low latency and high accuracy is no longer necessary following recent model improvements. Now, you only need to select whether to run at-start or continuous Language Identification when doing continuous speech recognition or translation.
+
Whether you use language identification with [speech-to-text](#speech-to-text) or with [speech translation](#speech-translation), there are some common concepts and configuration options.

- Define a list of [candidate languages](#candidate-languages) that you expect in the audio.
- Decide whether to use [at-start or continuous](#at-start-and-continuous-language-identification) language identification.
-- Prioritize [low latency or high accuracy](#accuracy-and-latency-prioritization) of results.

Then you make a [recognize once or continuous](#recognize-once-or-continuous) request to the Speech service.
-Code snippets are included with the concepts described next. Complete samples for each use case are provided further below.
+Code snippets are included with the concepts described next. Complete samples for each use case are provided later.
### Candidate languages
-You provide candidate languages, at least one of which is expected be in the audio. You can include up to 4 languages for [at-start LID](#at-start-and-continuous-language-identification) or up to 10 languages for [continuous LID](#at-start-and-continuous-language-identification).
+You provide candidate languages with the `AutoDetectSourceLanguageConfig` object, at least one of which is expected to be in the audio. You can include up to four languages for [at-start LID](#at-start-and-continuous-language-identification) or up to 10 languages for [continuous LID](#at-start-and-continuous-language-identification). The Speech service returns one of the candidate languages provided even if those languages weren't in the audio. For example, if `fr-FR` (French) and `en-US` (English) are provided as candidates, but German is spoken, either `fr-FR` or `en-US` would be returned.
-You must provide the full locale with dash (`-`) separator, but language identification only uses one locale per base language. Do not include multiple locales (e.g., "en-US" and "en-GB") for the same language.
+You must provide the full locale with dash (`-`) separator, but language identification only uses one locale per base language. Don't include multiple locales (for example, "en-US" and "en-GB") for the same language.
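The constraints in the two paragraphs above can be checked with a small helper. This is an illustrative sketch, not part of the Speech SDK; the limits come straight from the text (up to four candidates for at-start LID, up to 10 for continuous, full locales with a dash, and only one locale per base language).

```python
# Validate a candidate-language list against the documented constraints:
# full locales with a dash, one locale per base language, and a size limit
# of 4 for at-start LID or 10 for continuous LID.
def validate_candidates(locales, continuous=False):
    limit = 10 if continuous else 4
    if len(locales) > limit:
        raise ValueError(f"At most {limit} candidate languages are allowed")
    seen_bases = set()
    for locale in locales:
        if "-" not in locale:
            raise ValueError(f"Use the full locale with a dash: {locale!r}")
        base = locale.split("-")[0]
        if base in seen_bases:
            # e.g. "en-US" and "en-GB" share the base language "en"
            raise ValueError(f"Only one locale per base language: {locale!r}")
        seen_bases.add(base)
    return True

print(validate_candidates(["en-US", "fr-FR"]))  # → True
```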
::: zone pivot="programming-language-csharp"
Speech supports both at-start and continuous language identification (LID).
> [!NOTE]
> Continuous language identification is only supported with Speech SDKs in C#, C++, Java ([for speech to text only](#speech-to-text)), and Python.

- At-start LID identifies the language once within the first few seconds of audio. Use at-start LID if the language in the audio won't change.
- Continuous LID can identify multiple languages for the duration of the audio. Use continuous LID if the language in the audio could change. Continuous LID does not support changing languages within the same sentence. For example, if you are primarily speaking Spanish and insert some English words, it will not detect the language change per word.
-You implement at-start LID or continuous LID by calling methods for [recognize once or continuous](#recognize-once-or-continuous). Results also depend upon your [Accuracy and Latency prioritization](#accuracy-and-latency-prioritization).
-
-### Accuracy and Latency prioritization
-
-You can choose to prioritize accuracy or latency with language identification.
-
-> [!NOTE]
-> Latency is prioritized by default with the Speech SDK. You can choose to prioritize accuracy or latency with the Speech SDKs for C#, C++, Java ([for speech to text only](#speech-to-text)), and Python.
-
-Prioritize `Latency` if you need a low-latency result such as during live streaming. Set the priority to `Accuracy` if the audio quality may be poor, and more latency is acceptable. For example, a voicemail could have background noise, or some silence at the beginning. Allowing the engine more time will improve language identification results.
-
-* **At-start:** With at-start LID in `Latency` mode the result is returned in less than 5 seconds. With at-start LID in `Accuracy` mode the result is returned within 30 seconds. You set the priority for at-start LID with the `SpeechServiceConnection_SingleLanguageIdPriority` property.
-* **Continuous:** With continuous LID in `Latency` mode the results are returned every 2 seconds for the duration of the audio. Continuous LID in `Accuracy` mode isn't supported with [speech-to-text](#speech-to-text) and [speech translation](#speech-translation) continuous recognition.
-
-> [!IMPORTANT]
-> With [speech-to-text](#speech-to-text) and [speech translation](#speech-translation) continuous recognition, do not set `Accuracy` with the SpeechServiceConnection_ContinuousLanguageIdPriority property. The setting will be ignored without error, and the default priority of `Latency` will remain in effect.
-
-Speech uses at-start LID with `Latency` prioritization by default. You need to set a priority property for any other LID configuration.
-
-Here is an example of using continuous LID while still prioritizing latency.
-
-```csharp
-speechConfig.SetProperty(PropertyId.SpeechServiceConnection_ContinuousLanguageIdPriority, "Latency");
-```
--
-Here is an example of using continuous LID while still prioritizing latency.
-
-```cpp
-speechConfig->SetProperty(PropertyId::SpeechServiceConnection_ContinuousLanguageIdPriority, "Latency");
-```
+- At-start LID identifies the language once within the first few seconds of audio. Use at-start LID if the language in the audio won't change. With at-start LID, a single language is detected and returned in less than 5 seconds.
+- Continuous LID can identify multiple languages for the duration of the audio. Use continuous LID if the language in the audio could change. Continuous LID doesn't support changing languages within the same sentence. For example, if you're primarily speaking Spanish and insert some English words, it will not detect the language change per word.
-
-Here is an example of using continuous LID while still prioritizing latency.
-
-```java
-speechConfig.setProperty(PropertyId.SpeechServiceConnection_ContinuousLanguageIdPriority, "Latency");
-```
---
-Here is an example of using continuous LID while still prioritizing latency.
-
-```python
-speech_config.set_property(property_id=speechsdk.PropertyId.SpeechServiceConnection_ContinuousLanguageIdPriority, value='Latency')
-```
--
-When prioritizing `Latency`, the Speech service returns one of the candidate languages provided even if those languages were not in the audio. For example, if `fr-FR` (French) and `en-US` (English) are provided as candidates, but German is spoken, either `fr-FR` or `en-US` would be returned. When prioritizing `Accuracy`, the Speech service will return the string `Unknown` as the detected language if none of the candidate languages are detected or if the language identification confidence is low.
-
-> [!NOTE]
-> You may see cases where an empty string will be returned instead of `Unknown`, due to Speech service inconsistency.
-> While this note is present, applications should check for both the `Unknown` and empty string case and treat them identically.
+You implement at-start LID or continuous LID by calling methods for [recognize once or continuous](#recognize-once-or-continuous). Continuous LID is only supported with continuous recognition.
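The at-start versus continuous distinction can be pictured with a toy sketch. This is an illustration only, not the Speech SDK: it contrasts committing to the first detected language with reporting a language per segment.

```python
# Illustration only (not the Speech SDK): contrast at-start and continuous LID
# over a list of per-segment language detections.
def at_start_lid(segment_langs):
    # At-start LID decides once, from the first few seconds of audio.
    return segment_langs[0]

def continuous_lid(segment_langs):
    # Continuous LID keeps evaluating for the duration of the audio, so a
    # mid-stream language change shows up in later results.
    return list(segment_langs)

detections = ["es-ES", "es-ES", "en-US"]
print(at_start_lid(detections))    # the initial language only
print(continuous_lid(detections))  # every segment's language
```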
### Recognize once or continuous
-Language identification is completed with recognition objects and operations. You will make a request to the Speech service for recognition of audio.
+Language identification is completed with recognition objects and operations. You'll make a request to the Speech service for recognition of audio.
> [!NOTE]
> Don't confuse recognition with identification. Recognition can be used with or without language identification.
-Let's map these concepts to the code. You will either call the recognize once method, or the start and stop continuous recognition methods. You choose from:
+You'll either call the "recognize once" method, or the start and stop continuous recognition methods. You choose from:
-- Recognize once with at-start LID
-- Continuous recognition with at start LID
+- Recognize once with At-start LID. Continuous LID isn't supported for recognize once.
+- Continuous recognition with at-start LID
- Continuous recognition with continuous LID
-The `SpeechServiceConnection_ContinuousLanguageIdPriority` property is always required for continuous LID. Without it the speech service defaults to at-start lid.
+The `SpeechServiceConnection_LanguageIdMode` property is only required for continuous LID. Without it, the Speech service defaults to at-start LID. The supported values are "AtStart" for at-start LID or "Continuous" for continuous LID.
::: zone pivot="programming-language-csharp"

```csharp
-// Recognize once with At-start LID
+// Recognize once with At-start LID. Continuous LID isn't supported for recognize once.
var result = await recognizer.RecognizeOnceAsync();

// Start and stop continuous recognition with At-start LID
await recognizer.StartContinuousRecognitionAsync();
await recognizer.StopContinuousRecognitionAsync();

// Start and stop continuous recognition with Continuous LID
-speechConfig.SetProperty(PropertyId.SpeechServiceConnection_ContinuousLanguageIdPriority, "Latency");
+speechConfig.SetProperty(PropertyId.SpeechServiceConnection_LanguageIdMode, "Continuous");
await recognizer.StartContinuousRecognitionAsync();
await recognizer.StopContinuousRecognitionAsync();
```
::: zone pivot="programming-language-cpp"

```cpp
-// Recognize once with At-start LID
+// Recognize once with At-start LID. Continuous LID isn't supported for recognize once.
auto result = recognizer->RecognizeOnceAsync().get();

// Start and stop continuous recognition with At-start LID
recognizer->StartContinuousRecognitionAsync().get();
recognizer->StopContinuousRecognitionAsync().get();

// Start and stop continuous recognition with Continuous LID
-speechConfig->SetProperty(PropertyId::SpeechServiceConnection_ContinuousLanguageIdPriority, "Latency");
+speechConfig->SetProperty(PropertyId::SpeechServiceConnection_LanguageIdMode, "Continuous");
recognizer->StartContinuousRecognitionAsync().get();
recognizer->StopContinuousRecognitionAsync().get();
```
::: zone pivot="programming-language-java"

```java
-// Recognize once with At-start LID
+// Recognize once with At-start LID. Continuous LID isn't supported for recognize once.
SpeechRecognitionResult result = recognizer.recognizeOnceAsync().get();

// Start and stop continuous recognition with At-start LID
recognizer.startContinuousRecognitionAsync().get();
recognizer.stopContinuousRecognitionAsync().get();

// Start and stop continuous recognition with Continuous LID
-speechConfig.setProperty(PropertyId.SpeechServiceConnection_ContinuousLanguageIdPriority, "Latency");
+speechConfig.setProperty(PropertyId.SpeechServiceConnection_LanguageIdMode, "Continuous");
recognizer.startContinuousRecognitionAsync().get();
recognizer.stopContinuousRecognitionAsync().get();
```
::: zone pivot="programming-language-python"

```python
-# Recognize once with At-start LID
+# Recognize once with At-start LID. Continuous LID isn't supported for recognize once.
result = recognizer.recognize_once()

# Start and stop continuous recognition with At-start LID
recognizer.start_continuous_recognition()
recognizer.stop_continuous_recognition()

# Start and stop continuous recognition with Continuous LID
-speech_config.set_property(property_id=speechsdk.PropertyId.SpeechServiceConnection_ContinuousLanguageIdPriority, value='Latency')
+speech_config.set_property(property_id=speechsdk.PropertyId.SpeechServiceConnection_LanguageIdMode, value='Continuous')
recognizer.start_continuous_recognition()
recognizer.stop_continuous_recognition()
```
You use Speech-to-text recognition when you need to identify the language in an
> [!NOTE]
> Speech-to-text recognition with at-start language identification is supported with Speech SDKs in C#, C++, Python, Java, JavaScript, and Objective-C. Speech-to-text recognition with continuous language identification is only supported with Speech SDKs in C#, C++, Java, and Python.
+>
> Currently for speech-to-text recognition with continuous language identification, you must create a SpeechConfig from the `wss://{region}.stt.speech.microsoft.com/speech/universal/v2` endpoint string, as shown in code examples. In a future SDK release you won't need to set it.

::: zone pivot="programming-language-csharp"
-See more examples of speech-to-text recognition with language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/translation_samples.cs).
+See more examples of speech-to-text recognition with language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_with_language_id_samples.cs).
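Since the v2 endpoint string currently has to be assembled per region, here is a minimal sketch of that shape. The helper name is ours, and `westus` is just a placeholder region.

```python
# Sketch: build the universal v2 endpoint string that continuous language
# identification currently requires. The helper name is illustrative.
def universal_v2_endpoint(region: str) -> str:
    return f"wss://{region}.stt.speech.microsoft.com/speech/universal/v2"

print(universal_v2_endpoint("westus"))
```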
### [Recognize once](#tab/once)
using Microsoft.CognitiveServices.Speech.Audio;
var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey","YourServiceRegion");
-speechConfig.SetProperty(PropertyId.SpeechServiceConnection_SingleLanguageIdPriority, "Latency");
- var autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig.FromLanguages( new string[] { "en-US", "de-DE", "zh-CN" });
var endpointString = $"wss://{region}.stt.speech.microsoft.com/speech/universal/
var endpointUrl = new Uri(endpointString);
var config = SpeechConfig.FromEndpoint(endpointUrl, "YourSubscriptionKey");
-config.SetProperty(PropertyId.SpeechServiceConnection_ContinuousLanguageIdPriority, "Latency");
+
+// Set the LanguageIdMode (Optional; Either Continuous or AtStart are accepted; Default AtStart)
+config.SetProperty(PropertyId.SpeechServiceConnection_LanguageIdMode, "Continuous");
var autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig.FromLanguages(new string[] { "en-US", "de-DE", "zh-CN" });
using namespace Microsoft::Cognitive
auto speechConfig = SpeechConfig::FromSubscription("YourSubscriptionKey","YourServiceRegion");
-speechConfig->SetProperty(PropertyId::SpeechServiceConnection_SingleLanguageIdPriority, "Latency");
auto autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig::FromLanguages({ "en-US", "de-DE", "zh-CN" });
SPXAutoDetectSourceLanguageConfiguration* autoDetectSourceLanguageConfig = \
::: zone-end

::: zone pivot="programming-language-javascript"
-Language detection with a custom endpoint is not supported by the Speech SDK for JavaScript. For example, if you include "fr-FR" as shown here, the custom endpoint will be ignored.
+Language detection with a custom endpoint isn't supported by the Speech SDK for JavaScript. For example, if you include "fr-FR" as shown here, the custom endpoint will be ignored.
```javascript
var enLanguageConfig = SpeechSDK.SourceLanguageConfig.fromLanguage("en-US");
public static async Task RecognizeOnceSpeechTranslationAsync()
var config = SpeechTranslationConfig.FromEndpoint(endpointUrl, "YourSubscriptionKey");
- speechTranslationConfig.SetProperty(PropertyId.SpeechServiceConnection_SingleLanguageIdPriority, "Latency");
-
// Source language is required, but currently ignored.
string fromLanguage = "en-US";
speechTranslationConfig.SpeechRecognitionLanguage = fromLanguage;
public static async Task MultiLingualTranslation()
config.AddTargetLanguage("de");
config.AddTargetLanguage("fr");
- config.SetProperty(PropertyId.SpeechServiceConnection_ContinuousLanguageIdPriority, "Latency");
+ // Set the LanguageIdMode (Optional; Either Continuous or AtStart are accepted; Default AtStart)
+ config.SetProperty(PropertyId.SpeechServiceConnection_LanguageIdMode, "Continuous");
var autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig.FromLanguages(new string[] { "en-US", "de-DE", "zh-CN" });

var stopTranslation = new TaskCompletionSource<int>();
auto region = "YourServiceRegion";
auto endpointString = std::format("wss://{}.stt.speech.microsoft.com/speech/universal/v2", region);
auto config = SpeechTranslationConfig::FromEndpoint(endpointString, "YourSubscriptionKey");
-// Language Id feature requirement
-// Please refer to language id document for different modes
-config->SetProperty(PropertyId::SpeechServiceConnection_SingleLanguageIdPriority, "Latency");
auto autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig::FromLanguages({ "en-US", "de-DE" });

// Sets source and target languages
void MultiLingualTranslation()
auto endpointString = std::format("wss://{}.stt.speech.microsoft.com/speech/universal/v2", region);
auto config = SpeechTranslationConfig::FromEndpoint(endpointString, "YourSubscriptionKey");
- speechConfig->SetProperty(PropertyId::SpeechServiceConnection_ContinuousLanguageIdPriority, "Latency");
+ // Set the LanguageIdMode (Optional; Either Continuous or AtStart are accepted; Default AtStart)
+ speechConfig->SetProperty(PropertyId::SpeechServiceConnection_LanguageIdMode, "Continuous");
auto autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig::FromLanguages({ "en-US", "de-DE", "zh-CN" });

promise<void> recognitionEnd;
translation_config = speechsdk.translation.SpeechTranslationConfig(
    target_languages=('de', 'fr'))

audio_config = speechsdk.audio.AudioConfig(filename=weatherfilename)
-# Set the Priority (optional, default Latency, either Latency or Accuracy is accepted)
-translation_config.set_property(property_id=speechsdk.PropertyId.SpeechServiceConnection_SingleLanguageIdPriority, value='Accuracy')
-
# Specify the AutoDetectSourceLanguageConfig, which defines the number of possible languages
auto_detect_source_language_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(languages=["en-US", "de-DE", "zh-CN"])
translation_config = speechsdk.translation.SpeechTranslationConfig(
    target_languages=('de', 'fr'))

audio_config = speechsdk.audio.AudioConfig(filename=weatherfilename)
-# Set the Priority (optional, default Latency)
-translation_config.set_property(property_id=speechsdk.PropertyId.SpeechServiceConnection_SingleLanguageIdPriority, value='Latency')
+# Set the LanguageIdMode (Optional; Either Continuous or AtStart are accepted; Default AtStart)
+translation_config.set_property(property_id=speechsdk.PropertyId.SpeechServiceConnection_LanguageIdMode, value='Continuous')
# Specify the AutoDetectSourceLanguageConfig, which defines the number of possible languages
auto_detect_source_language_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(languages=["en-US", "de-DE", "zh-CN"])
recognizer.stop_continuous_recognition()
## Next steps
-* [Captioning concepts](captioning-concepts.md)
+* [Try the speech to text quickstart](get-started-speech-to-text.md)
+* [Improve recognition accuracy with custom speech](custom-speech-overview.md)
+* [Use batch transcription](batch-transcription.md)
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/releasenotes.md
Azure Cognitive Service for Speech is updated on an ongoing basis. To stay up-to
## Recent highlights
+* Speech SDK 1.25.0 was released in January 2023.
* Text-to-speech Batch synthesis API is available in public preview.
* Speech-to-text REST API version 3.1 is generally available.
-* Speech SDK 1.24.0 and Speech CLI 1.24.0 were released in October 2022.
* Speech-to-text and text-to-speech container versions were updated in October 2022.
* In the TTS Service November 2022 update, several voices for the `es-MX`, `it-IT`, and `pt-BR` locales were made generally available.
cognitive-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/reference.md
curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYM
## Next steps
-Learn about [managing deployments, models, and finetuning with the REST API](/rest/api/cognitiveservices/azureopenai/deployments/create).
+Learn about [managing deployments, models, and finetuning with the REST API](/rest/api/cognitiveservices/azureopenaistable/deployments/create).
Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md).
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/overview.md
The data flow for joining Teams meetings is available at the [client and server
High-level coding articles:
- [Authenticate as Teams external user](../../../quickstarts/identity/access-token-teams-external-users.md)
-- [Stateful Client (Meeting)](https://azure.github.io/communication-ui-library/?path=/story/composites-meeting-basicexample--basic-example)
+- [Call with Chat Composite](https://azure.github.io/communication-ui-library/?path=/docs/composites-call-with-chat-basicexample--basic-example)
Low-level coding articles:
- [Join Teams meeting audio and video as Teams external user](../../../quickstarts/voice-video-calling/get-started-teams-interop.md)
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
The following table represents the set of supported browsers which are currently
| Platform | Chrome | Safari | Edge | Firefox | Webview |
| -- | -- | -- | -- | -- | -- |
| Android | ✔️ | ❌ | ❌ | ❌ | ✔️ * |
-| iOS | ❌ | ✔️ | ❌ | ❌ | ❌ |
+| iOS | ❌ | ✔️ | ❌ | ❌ | ✔️ |
| macOS | ✔️ | ✔️ | ✔️ | ✔️ | ❌ |
| Windows | ✔️ | ❌ | ✔️ | ✔️ | ❌ |
| Ubuntu/Linux | ✔️ | ❌ | ❌ | ❌ | ❌ |
communication-services Quickstart Botframework Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/chat/quickstart-botframework-integration.md
Title: Add a bot to your chat app-
-description: This quickstart shows you how to build chat experience with a bot using Communication Services Chat SDK and Bot Services.
-+
+description: Learn how to build a chat experience with a bot by using the Azure Communication Services Chat SDK and Azure Bot Service.
+
-# Add a bot to your chat app
+# Quickstart: Add a bot to your chat app
[!INCLUDE [Public Preview Notice](../../includes/public-preview-include.md)]
-In this quickstart, you will learn how to build conversational AI experiences in a chat application using Azure Communication Services Chat messaging channel that is available under Azure Bot Services. This article will describe how to create a bot using BotFramework SDK and how to integrate this bot into any chat application that is built using Communication Services Chat SDK.
+Learn how to build conversational AI experiences in a chat application by using the Azure Communication Services Chat messaging channel that's available in Azure Bot Service. In this quickstart, you create a bot by using the BotFramework SDK. Then, you integrate the bot into a chat application you create by using the Communication Services Chat SDK.
-You will learn how to:
+In this quickstart, you learn how to:
-- [Create and deploy an Azure bot](#step-1create-and-deploy-an-azure-bot)
-- [Get an Azure Communication Services resource](#step-2get-an-azure-communication-services-resource)
-- [Enable Communication Services Chat channel for the bot](#step-3enable-azure-communication-services-chat-channel)
-- [Create a chat app and add bot as a participant](#step-4create-a-chat-app-and-add-bot-as-a-participant)
-- [Explore more features available for bot](#more-things-you-can-do-with-a-bot)
+- [Create and deploy a bot in Azure](#create-and-deploy-a-bot-in-azure)
+- [Get a Communication Services resource](#get-a-communication-services-resource)
+- [Enable the Communication Services Chat channel for the bot](#enable-the-communication-services-chat-channel)
+- [Create a chat app and add the bot as a participant](#create-a-chat-app-and-add-the-bot-as-a-participant)
+- [Explore more features for your bot](#more-things-you-can-do-with-a-bot)
## Prerequisites
-- Create an Azure account with an active subscription. For details, see [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
-- [Visual Studio (2019 and above)](https://visualstudio.microsoft.com/vs/)
-- Latest version of .NET Core. For this tutorial, we have used [.NET Core 3.1](https://dotnet.microsoft.com/download/dotnet-core/3.1) (Make sure to install the version that corresponds with your visual studio instance, 32 vs 64 bit)
-## Step 1 - Create and deploy an Azure bot
+- An Azure account and an active subscription. Create an [account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- [Visual Studio 2019 or later](https://visualstudio.microsoft.com/vs/).
+- The latest version of .NET Core. In this quickstart, we use [.NET Core 3.1](https://dotnet.microsoft.com/download/dotnet-core/3.1). Be sure to install the version that corresponds with your instance of Visual Studio, 32-bit or 64-bit.
-To use Azure Communication Services chat as a channel in Azure Bot Service, the first step is to deploy a bot. You can do so by following below steps:
+## Create and deploy a bot in Azure
-### Create an Azure bot service resource in Azure
+To use Azure Communication Services chat as a channel in Azure Bot Service, first deploy a bot. To deploy a bot, you complete these steps:
- Refer to the Azure Bot Service documentation on how to [create a bot](/azure/bot-service/abs-quickstart?tabs=userassigned).
+- Create an Azure Bot Service resource
+- Get the bot's app ID and password
+- Create a web app to hold the bot logic
+- Create a messaging endpoint for the bot
- For this example, we have selected a multitenant bot but if you wish to use single tenant or managed identity bots refer to [configuring single tenant and managed identity bots](#support-for-single-tenant-and-managed-identity-bots).
-
+### Create an Azure Bot Service resource
-### Get Bot's MicrosoftAppId and MicrosoftAppPassword
+First, [use the Azure portal to create an Azure Bot Service resource](/azure/bot-service/abs-quickstart?tabs=userassigned).
- Fetch your Azure bot's [Microsoft App ID and secret](/azure/bot-service/abs-quickstart?tabs=userassigned#to-get-your-app-or-tenant-id) as you will need those values for configurations.
+This quickstart uses a multi-tenant bot. To use a single-tenant bot or a managed identity bot, see [Support for single-tenant and managed identity bots](#support-for-single-tenant-and-managed-identity-bots).
-### Create a Web App where the bot logic resides
+### Get the bot's app ID and app password
- You can check out some samples at [Bot Builder Samples](https://github.com/Microsoft/BotBuilder-Samples) and tweak them or use [Bot Builder SDK](/composer/introduction) to create one. One of the simplest samples is [Echo Bot](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/02.echo-bot). Generally, the Azure Bot Service expects the Bot Application Web App Controller to expose an endpoint `/api/messages`, which handles all the messages reaching the bot. To create the bot application, you can either use Azure CLI to [create an App Service](/azure/bot-service/provision-app-service?tabs=singletenant%2Cexistingplan) or directly create from the portal using below steps.
+Next, [get the Microsoft app ID and password](/azure/bot-service/abs-quickstart?tabs=userassigned#to-get-your-app-or-tenant-id) that are assigned to your bot when it's deployed. You use these values for later configurations.
- 1. Select `Create a resource` and in the search box, search for web app and select `Web App`.
-
- :::image type="content" source="./media/web-app.png" alt-text="Screenshot of creating a Web app resource in Azure portal.":::
+### Create a web app to hold the bot logic
+To create a web app for your bot, you can revise [Bot Builder samples](https://github.com/Microsoft/BotBuilder-Samples) for your scenario or use the [Bot Builder SDK](/composer/introduction) to create a web app. One of the simplest samples is [Echo Bot](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/02.echo-bot).
- 2. Configure the options you want to set including the region you want to deploy it to.
-
- :::image type="content" source="./media/web-app-create-options.png" alt-text="Screenshot of specifying Web App create options to set.":::
+Azure Bot Service typically expects the Bot Application Web App Controller to expose an endpoint in the form `/api/messages`. The endpoint handles all messages that are sent to the bot.
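As a quick illustration of that endpoint shape, here is a sketch that appends `/api/messages` to a web app hostname. The helper name and the sample hostname are hypothetical.

```python
# Sketch: append the /api/messages path Azure Bot Service expects to a
# web app hostname. The helper name and sample hostname are illustrative.
def bot_messaging_endpoint(hostname: str) -> str:
    return f"https://{hostname.rstrip('/')}/api/messages"

print(bot_messaging_endpoint("contoso-echo-bot.azurewebsites.net"))
```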
- 3. Review your options and create the Web App and once it has been created, copy the hostname URL exposed by the Web App.
-
- :::image type="content" source="./media/web-app-endpoint.png" alt-text="Diagram that shows how to copy the newly created Web App endpoint.":::
+To create the bot app, either use the Azure CLI to [create an Azure App Service resource](/azure/bot-service/provision-app-service?tabs=singletenant%2Cexistingplan) or create the app in the Azure portal.
+To create a bot web app by using the Azure portal:
-### Configure the Azure Bot
+1. In the portal, select **Create a resource**. In the search box, enter **web app**. Select the **Web App** tile.
+
+ :::image type="content" source="./media/web-app.png" alt-text="Screenshot that shows creating a web app resource in the Azure portal.":::
-Configure the Azure Bot you created with its Web App endpoint where the bot logic is located. To do this configuration, copy the hostname URL of the Web App from previous step and append it with `/api/messages`
+1. In **Create Web App**, select or enter details for the app, including the region where you want to deploy the app.
+
+ :::image type="content" source="./media/web-app-create-options.png" alt-text="Screenshot that shows details to set to create a web app deployment.":::
- :::image type="content" source="./media/smaller-bot-configure-with-endpoint.png" alt-text="Diagram that shows how to set bot messaging endpoint with the copied Web App endpoint." lightbox="./media/bot-configure-with-endpoint.png":::
+1. Select **Review + Create** to validate the deployment and review the deployment details. Then, select **Create**.
+1. When the web app resource is created, copy the hostname URL that's shown in the resource details. The URL will be part of the endpoint you create for the web app.
+
+ :::image type="content" source="./media/web-app-endpoint.png" alt-text="Screenshot that shows how to copy the web app endpoint URL.":::
-### Deploy the Azure Bot
+### Create a messaging endpoint for the bot
-The final step would be to deploy the Web App we created. The Echo bot functionality is limited to echoing the user input. Here's how we deploy it to Azure Web App.
+Next, in the bot resource, create a web app messaging endpoint:
- 1. To use the samples, clone this GitHub repository using Git.
- ```
- git clone https://github.com/Microsoft/BotBuilder-Samples.git
- cd BotBuilder-Samples
- ```
- 2. Open the project located here [Echo bot](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/02.echo-bot) in Visual Studio.
+1. In the Azure portal, go to your Azure Bot resource. In the resource menu, select **Configuration**.
- 3. Go to the appsettings.json file inside the project and copy the [Microsoft application ID and secret](#get-bots-microsoftappid-and-microsoftapppassword) in their respective placeholders.
- ```js
+1. In **Configuration**, for **Messaging endpoint**, paste the hostname URL of the web app you copied in the preceding section. Append the URL with `/api/messages`.
+
+1. Select **Save**.
++
+### Deploy the web app
+
+The final step to create a bot is to deploy the web app. For this quickstart, use the Echo Bot sample. The Echo Bot functionality is limited to echoing the user input. Here's how you deploy it to your web app in Azure:
+
+1. Use Git to clone this GitHub repository:
+
+ ```console
+ git clone https://github.com/Microsoft/BotBuilder-Samples.git
+ cd BotBuilder-Samples
+ ```
+
+1. In Visual Studio, open the [Echo Bot project](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/02.echo-bot).
+
+1. In the Visual Studio project, open the *Appsettings.json* file. Paste the [Microsoft app ID and app password](#get-the-bots-app-id-and-app-password) you copied earlier:
+
+ ```json
{
- "MicrosoftAppId": "<App-registration-id>",
+ "MicrosoftAppId": "<App-registration-ID>",
"MicrosoftAppPassword": "<App-password>" }
- ```
- For deploying the bot, you can either use command line to [deploy an Azure bot](/azure/bot-service/provision-and-publish-a-bot?tabs=userassigned%2Ccsharp) or use Visual studio for C# bots as described below.
+ ```
+
+ Next, use Visual Studio for C# bots to deploy the bot.
+
+ You also can use a Command Prompt window to [deploy an Azure bot](/azure/bot-service/provision-and-publish-a-bot?tabs=userassigned%2Ccsharp).
+
+1. In Visual Studio, in Solution Explorer, right-click the **EchoBot** project and select **Publish**:
- 1. Select the project to publish the Web App code to Azure. Choose the publish option in Visual Studio.
+ :::image type="content" source="./media/publish-app.png" alt-text="Screenshot that shows publishing your web app from Visual Studio.":::
- :::image type="content" source="./media/publish-app.png" alt-text="Screenshot of publishing your Web App from Visual Studio.":::
+1. Select **New** to create a new publishing profile. For **Target**, select **Azure**:
- 2. Select New to create a new publishing profile, choose Azure as the target, and Azure App Service as the specific target.
+ :::image type="content" source="./media/select-azure-as-target.png" alt-text="Screenshot that shows how to select Azure as target in a new publishing profile.":::
+
+ For the specific target, select **Azure App Service**:
+
+ :::image type="content" source="./media/select-app-service.png" alt-text="Screenshot that shows how to select Azure App Service as the specific target.":::
- :::image type="content" source="./media/select-azure-as-target.png" alt-text="Diagram that shows how to select Azure as target in a new publishing profile.":::
-
- :::image type="content" source="./media/select-app-service.png" alt-text="Diagram that shows how to select specific target as Azure App Service.":::
+1. In the deployment configuration, select the web app in the results that appear after you sign in to your Azure account. To complete the profile, select **Finish**, and then select **Publish** to start the deployment.
+
+ :::image type="content" source="./media/smaller-deployment-config.png" alt-text="Screenshot that shows setting the deployment configuration with the web app name." lightbox="./media/deployment-config.png":::
- 3. Lastly, the above option opens the deployment config. Choose the Web App we had created from the list of options it comes up with after signing into your Azure account. Once ready select `Finish` to complete the profile, and then select `Publish` to start the deployment.
-
- :::image type="content" source="./media/smaller-deployment-config.png" alt-text="Screenshot of setting deployment config with the created Web App name." lightbox="./media/deployment-config.png":::
+## Get a Communication Services resource
-## Step 2 - Get an Azure Communication Services Resource
-Now that bot is created and deployed, you will need an Azure Communication Services resource, which you can use to configure the Azure Communication Services channel.
-1. Create an Azure Communication Services resource. For details, see [Create an Azure Communication Services resource](../../quickstarts/create-communication-resource.md).
+Now that your bot is created and deployed, create a Communication Services resource to use to set up a Communication Services channel:
-2. Create an Azure Communication Services User and issue a [User Access Token](../../quickstarts/access-tokens.md). Be sure to set the scope to **chat**, and **note the token string as well as the userId string**.
+1. Complete the steps to [create a Communication Services resource](../../quickstarts/create-communication-resource.md).
-## Step 3 - Enable Azure Communication Services Chat channel
-With the Azure Communication Services resource, you can set up the Azure Communication Services channel in Azure Bot to assign an Azure Communication Services User ID to a bot.
+1. Create a Communication Services user and issue a [user access token](../../quickstarts/access-tokens.md). Be sure to set the scope to **chat**. *Copy the token string and the user ID string*.
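   As a hedged sketch of that step (assuming the `Azure.Communication.Identity` package and a connection string from your resource; the placeholder values are assumptions), creating a user and issuing a chat-scoped token might look like:

   ```csharp
   using System;
   using Azure.Communication;
   using Azure.Communication.Identity;
   using Azure.Core;

   // Assumes a valid connection string copied from your Communication Services resource.
   var identityClient = new CommunicationIdentityClient("<CONNECTION_STRING>");

   // Create a user and issue a token scoped to chat only.
   CommunicationUserIdentifier user = identityClient.CreateUser().Value;
   AccessToken token = identityClient.GetToken(user, scopes: new[] { CommunicationTokenScope.Chat }).Value;

   // Note the user ID string and the token string for the later steps.
   Console.WriteLine(user.Id);
   Console.WriteLine(token.Token);
   ```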
-1. Go to your Bot Services resource on Azure portal. Navigate to `Channels` configuration on the left pane and select `Azure Communications Services - Chat` channel from the list provided.
-
- :::image type="content" source="./media/smaller-demoapp-launch-acs-chat.png" alt-text="Screenshot of launching Azure Communication Services Chat channel." lightbox="./media/demoapp-launch-acs-chat.png":::
+## Enable the Communication Services Chat channel
-
-2. Select the connect button to see a list of Communication resources available under your subscriptions.
+When you have a Communication Services resource, you can set up a Communication Services channel in the bot resource. In this process, a user ID is generated for the bot.
- :::image type="content" source="./media/smaller-bot-connect-acs-chat-channel.png" alt-text="Diagram that shows how to connect an Azure Communication Service Resource to this bot." lightbox="./media/bot-connect-acs-chat-channel.png":::
+1. In the Azure portal, go to your Azure Bot resource. In the resource menu, select **Channels**. In the list of available channels, select **Azure Communications Services - Chat**.
-3. Once you have selected the required Azure Communication Services resource from the resources dropdown list, press the apply button.
+ :::image type="content" source="./media/smaller-demoapp-launch-acs-chat.png" alt-text="Screenshot that shows opening the Communication Services Chat channel." lightbox="./media/demoapp-launch-acs-chat.png":::
- :::image type="content" source="./media/smaller-bot-choose-resource.png" alt-text="Diagram that shows how to save the selected Azure Communication Service resource to create a new Azure Communication Services user ID." lightbox="./media/bot-choose-resource.png":::
+1. Select **Connect** to see a list of Communication Services resources that are available in your subscription.
-4. Once the provided resource details are verified, you will see the **bot's Azure Communication Services ID** assigned. With this ID, you can add the bot to the conversation whenever appropriate using Chat's AddParticipant API. Once the bot is added as participant to a chat, it will start receiving chat related activities, and can respond back in the chat thread.
+ :::image type="content" source="./media/smaller-bot-connect-acs-chat-channel.png" alt-text="Screenshot that shows how to connect a Communication Service resource to the bot." lightbox="./media/bot-connect-acs-chat-channel.png":::
- :::image type="content" source="./media/smaller-acs-chat-channel-saved.png" alt-text="Screenshot of new Azure Communication Services user ID assigned to the bot." lightbox="./media/acs-chat-channel-saved.png":::
+1. In the **New Connection** pane, select the Communication Services chat resource, and then select **Apply**.
+ :::image type="content" source="./media/smaller-bot-choose-resource.png" alt-text="Screenshot that shows how to save the selected Communication Service resource to create a new Communication Services user ID." lightbox="./media/bot-choose-resource.png":::
-## Step 4 - Create a chat app and add bot as a participant
-Now that you have the bot's Azure Communication Services ID, you can create a chat thread with the bot as a participant.
+1. When the resource details are verified, a bot ID is shown in the **Bot ACS Id** column. You can use the bot ID to represent the bot in a chat thread by using the Communication Services Chat AddParticipant API. After you add the bot to a chat as a participant, the bot starts to receive chat-related activities, and it can respond in the chat thread.
+
+ :::image type="content" source="./media/smaller-acs-chat-channel-saved.png" alt-text="Screenshot that shows the new Communication Services user ID assigned to the bot." lightbox="./media/acs-chat-channel-saved.png":::
+
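As a minimal sketch of using that bot ID (assuming the `Azure.Communication.Chat` package and an existing `ChatThreadClient`; the ID and display name are placeholders), adding the bot as a participant might look like:

```csharp
using Azure.Communication;
using Azure.Communication.Chat;

// Placeholder: use the Bot ACS Id value shown on the channel page.
var botId = "<BOT_ACS_ID>";

// chatThreadClient is an existing Azure.Communication.Chat ChatThreadClient
// for the thread the bot should join.
var botParticipant = new ChatParticipant(new CommunicationUserIdentifier(botId))
{
    DisplayName = "MyBot" // assumed display name
};
await chatThreadClient.AddParticipantAsync(botParticipant);
```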
+## Create a chat app and add the bot as a participant
+
+Now that you have the bot's Communication Services ID, you can create a chat thread with the bot as a participant.
### Create a new C# application
-```console
-dotnet new console -o ChatQuickstart
-```
+1. Run the following command to create a C# application:
-Change your directory to the newly created app folder and use the `dotnet build` command to compile your application.
+ ```console
+ dotnet new console -o ChatQuickstart
+ ```
-```console
-cd ChatQuickstart
-dotnet build
-```
+1. Change your directory to the new app folder and use the `dotnet build` command to compile your application:
+
+ ```console
+ cd ChatQuickstart
+ dotnet build
+ ```
### Install the package
-Install the Azure Communication Chat SDK for .NET
+Install the Communication Services Chat SDK for .NET:
-```PowerShell
+```powershell
dotnet add package Azure.Communication.Chat
```

### Create a chat client
-To create a chat client, you will use your Azure Communication Services endpoint and the access token that was generated as part of Step 2. You need to use the `CommunicationIdentityClient` class from the Identity SDK to create a user and issue a token to pass to your chat client.
+To create a chat client, use your Communication Services endpoint and the user access token you generated earlier. Use the `CommunicationIdentityClient` class from the Identity SDK to create a user and issue a token to pass to your chat client.
+Copy the following code and paste it in the *Program.cs* source file:
-Copy the following code snippets and paste into source file: **Program.cs**
```csharp
using Azure;
using Azure.Communication;
namespace ChatQuickstart
{
    static async System.Threading.Tasks.Task Main(string[] args)
    {
- // Your unique Azure Communication service endpoint
+ // Your unique Communication Services endpoint
        Uri endpoint = new Uri("https://<RESOURCE_NAME>.communication.azure.com");
        CommunicationTokenCredential communicationTokenCredential = new CommunicationTokenCredential(<Access_Token>);
namespace ChatQuickstart
### Start a chat thread with the bot
-Use the `createChatThread` method on the chatClient to create a chat thread, replace with the bot's Azure Communication Services ID you obtained.
+Use the `createChatThread` method on `chatClient` to create a chat thread. Replace the ID with the bot's Communication Services ID.
+
```csharp
var chatParticipant = new ChatParticipant(identifier: new CommunicationUserIdentifier(id: "<BOT_ID>"))
{
string threadId = chatThreadClient.Id;
```

### Get a chat thread client
-The `GetChatThreadClient` method returns a thread client for a thread that already exists.
+
+The `GetChatThreadClient` method returns a thread client for a thread that already exists:
```csharp string threadId = "<THREAD_ID>";
ChatThreadClient chatThreadClient = chatClient.GetChatThreadClient(threadId: thr
### Send a message to a chat thread
-Use `SendMessage` to send a message to a thread.
+To use `SendMessage` to send a message to a thread:
+
```csharp
SendChatMessageOptions sendChatMessageOptions = new SendChatMessageOptions()
{
string messageId = sendChatMessageResult.Id;
### Receive chat messages from a chat thread
-You can retrieve chat messages by polling the `GetMessages` method on the chat thread client at specified intervals.
+You can get chat messages by polling the `GetMessages` method on the chat thread client at set intervals:
```csharp
AsyncPageable<ChatMessage> allMessages = chatThreadClient.GetMessagesAsync();
await foreach (ChatMessage message in allMessages)
    Console.WriteLine($"{message.Id}:{message.Content.Message}");
}
```
-You should see bot's echo reply to "Hello World" in the list of messages.
-When creating the chat applications, you can also receive real-time notifications by subscribing to listen for new incoming messages using our JavaScript or mobile SDKs. An example using JavaScript SDK would be:
-```js
-// open notifications channel
+
+Check the list of messages for the bot's echo reply to "Hello World".
+
+You can use JavaScript or the Azure mobile SDKs to subscribe to incoming message notifications:
+
+```javascript
+// Open notifications channel
await chatClient.startRealtimeNotifications();
-// subscribe to new notification
+// Subscribe to new notifications
chatClient.on("chatMessageReceived", (e) => {
  console.log("Notification chatMessageReceived!");
- // your code here
+ // Your code here
});
```

### Clean up the chat thread
-Delete the thread when finished.
+When you're finished using the chat thread, delete the thread:
```csharp
chatClient.DeleteChatThread(threadId);
```

### Deploy the C# chat application
-Follow these steps to deploy the chat application:
-1. Open the chat project in Visual Studio.
-2. Select the ChatQuickstart project and from the right-click menu, select Publish
- :::image type="content" source="./media/deploy-chat-application.png" alt-text="Screenshot of deploying chat application to Azure from Visual Studio.":::
+To deploy the chat application:
+1. In Visual Studio, open the chat project.
+
+1. Right-click the **ChatQuickstart** project and select **Publish**:
+
+ :::image type="content" source="./media/deploy-chat-application.png" alt-text="Screenshot that shows deploying the chat application to Azure from Visual Studio.":::
## More things you can do with a bot
-In addition to sending a plain text message, a bot is also able to receive many other activities from the user through Azure Communications Services Chat channel including
+
+A bot can receive more than a plain-text message from a user in a Communications Services Chat channel. Some of the activities a bot can receive from a user include:
+
- Conversation update
- Message update
-- Message delete
+- Message delete
- Typing indicator
- Event activity
-- Various attachments including Adaptive cards
+- Various attachments, including adaptive cards
- Bot channel data
-Below are some samples to illustrate these features:
+The next sections show some samples to illustrate these features.
### Send a welcome message when a new user is added to the thread
- The current Echo Bot logic accepts input from the user and echoes it back. If you would like to add extra logic such as responding to a participant added Azure Communication Services event, copy the following code snippets and paste into the source file: [EchoBot.cs](https://github.com/microsoft/BotBuilder-Samples/blob/main/samples/csharp_dotnetcore/02.echo-bot/Bots/EchoBot.cs)
+
+The current Echo Bot logic accepts input from the user and echoes it back. If you want to add more logic, such as responding to a participant-added Communication Services event, copy the following code and paste it in the [EchoBot.cs](https://github.com/microsoft/BotBuilder-Samples/blob/main/samples/csharp_dotnetcore/02.echo-bot/Bots/EchoBot.cs) source file:
```csharp
using System.Threading;
namespace Microsoft.BotBuilderSamples.Bots
    }
}
```
+
### Send an adaptive card
-Sending adaptive cards to the chat thread can help you increase engagement and efficiency and communicate with users in a variety of ways. You can send adaptive cards from a bot by adding them as bot activity attachments.
+You can send an adaptive card to the chat thread to increase engagement and efficiency. An adaptive card also helps you communicate with users in various ways. You can send an adaptive card from a bot by adding the card as a bot activity attachment.
+Here's an example of how to send an adaptive card:
```csharp
var reply = Activity.CreateMessageActivity();
var adaptiveCard = new Attachment()
reply.Attachments.Add(adaptiveCard);
await turnContext.SendActivityAsync(reply, cancellationToken);
```
-You can find sample payloads for adaptive cards at [Samples and Templates](https://adaptivecards.io/samples)
-On the Azure Communication Services User side, the Azure Communication Services Chat channel will add a field to the message's metadata that will indicate that this message has an attachment. The key in the metadata is `microsoft.azure.communication.chat.bot.contenttype`, which is set to the value `azurebotservice.adaptivecard`. Here is an example of the chat message that will be received:
+Get sample payloads for adaptive cards at [Samples and templates](https://adaptivecards.io/samples).
+
+For a chat user, the Communication Services Chat channel adds a field to the message metadata that indicates the message has an attachment. In the metadata, the `microsoft.azure.communication.chat.bot.contenttype` property is set to `azurebotservice.adaptivecard`.
+
+Here's an example of a chat message that has an adaptive card attached:
```json
{
- "content": "{\"attachments\":[{\"contentType\":\"application/vnd.microsoft.card.adaptive\",\"content\":{/* the adaptive card */}}]}",
- "senderDisplayName": "BotDisplayName",
- "metadata": {
- "microsoft.azure.communication.chat.bot.contenttype": "azurebotservice.adaptivecard"
- },
+ "content": "{\"attachments\":[{\"contentType\":\"application/vnd.microsoft.card.adaptive\",\"content\":{/* the adaptive card */}}]}",
+ "senderDisplayName": "BotDisplayName",
+ "metadata": {
+ "microsoft.azure.communication.chat.bot.contenttype": "azurebotservice.adaptivecard"
+ },
    "messageType": "Text"
}
```
-* ### Send a message from user to bot
+#### Send a message from user to bot
+
+You can send a basic text message from a user to the bot the same way you send a text message to another user.
+
+However, when you send a message that has an attachment from a user to a bot, add this flag to the Communication Services Chat metadata:
+
+`"microsoft.azure.communication.chat.bot.contenttype": "azurebotservice.adaptivecard"`
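As a hedged sketch of a user-side send that carries this flag (assuming the `Azure.Communication.Chat` SDK with metadata support and an existing `ChatThreadClient`; the content string is illustrative):

```csharp
using Azure.Communication.Chat;

// chatThreadClient is an existing ChatThreadClient for the thread with the bot.
var options = new SendChatMessageOptions
{
    Content = "{\"text\":\"sample text\",\"attachments\":[/* adaptive card attachment */]}",
    MessageType = ChatMessageType.Text
};

// Flag the message so the Chat channel forwards the attachment to the bot.
options.Metadata.Add("microsoft.azure.communication.chat.bot.contenttype", "azurebotservice.adaptivecard");

SendChatMessageResult result = await chatThreadClient.SendMessageAsync(options);
```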
-You can send a simple text message from user to bot just the same way you send a text message to another user.
-However, when sending a message carrying an attachment from a user to the bot, you will need to add this flag to the Communication Services Chat metadata `"microsoft.azure.communication.chat.bot.contenttype": "azurebotservice.adaptivecard"`. For sending an event activity from user to bot, you will need to add to Communication Services Chat metadata `"microsoft.azure.communication.chat.bot.contenttype": "azurebotservice.event"`. Below are sample formats for user to bot Chat messages.
+To send an event activity from a user to a bot, add this flag to the Communication Services Chat metadata:
- * #### Simple text message
+`"microsoft.azure.communication.chat.bot.contenttype": "azurebotservice.event"`
+
+The following sections show sample formats for chat messages from a user to a bot.
+
+#### Simple text message
```json
{
However, when sending a message carrying an attachment from a user to the bot, y
    "text":"random text",
    "key1":"value1",
    "key2":"{\r\n \"subkey1\": \"subValue1\"\r\n
- "},
+ "},
    "messageType": "Text"
}
-```
+```
- * #### Message with an attachment
+#### Message with an attachment
```json {
- "content": "{
+ "content": "{
\"text\":\"sample text\", \"attachments\": [{ \"contentType\":\"application/vnd.microsoft.card.adaptive\", \"content\": { \"*adaptive card payload*\" } }] }",
- "senderDisplayName": "Acs-Dev-Bot",
- "metadata": {
- "microsoft.azure.communication.chat.bot.contenttype": "azurebotservice.adaptivecard",
- "text": "random text",
- "key1": "value1",
- "key2": "{\r\n \"subkey1\": \"subValue1\"\r\n}"
- },
+ "senderDisplayName": "Acs-Dev-Bot",
+ "metadata": {
+ "microsoft.azure.communication.chat.bot.contenttype": "azurebotservice.adaptivecard",
+ "text": "random text",
+ "key1": "value1",
+ "key2": "{\r\n \"subkey1\": \"subValue1\"\r\n}"
+ },
    "messageType": "Text"
}
```
- * #### Message with an event activity
+#### Message with an event activity
+
+An event payload includes all JSON fields in the message content except `Name`. The `Name` field contains the name of the event.
+
+In the following example, the event name `endOfConversation` with the payload `"{field1":"value1", "field2": { "nestedField":"nestedValue" }}` is sent to the bot:
-Event payload comprises all json fields in the message content except name field, which should contain the name of the event. Below event name `endOfConversation` with the payload `"{field1":"value1", "field2": { "nestedField":"nestedValue" }}` is sent to the bot.
```json
{
    "content":"{
Event payload comprises all json fields in the message content except name field
}
```
-> The metadata field `"microsoft.azure.communication.chat.bot.contenttype"` is only needed in user to bot direction. It is not needed in bot to user direction.
+The metadata field `microsoft.azure.communication.chat.bot.contenttype` is required only in a message that's sent from a user to a bot.
## Supported bot activity fields
-### Bot to user flow
+The following sections describe supported bot activity fields for bot-to-user flows and user-to-bot flows.
+
+### Bot-to-user flow
+
+The following bot activity fields are supported for bot-to-user flows.
#### Activities

-- Message activity
-- Typing activity
+- Message
+- Typing
#### Message activity fields
+
- `Text`
- `Attachments`
- `AttachmentLayout`
- `SuggestedActions`
-- `From.Name` (Converted to Azure Communication Services SenderDisplayName)
-- `ChannelData` (Converted to Azure Communication Services Chat Metadata. If any `ChannelData` mapping values are objects, then they'll be serialized in JSON format and sent as a string)
+- `From.Name` (Converted to Communication Services `SenderDisplayName`.)
+- `ChannelData` (Converted to Communication Services `Chat Metadata`. If any `ChannelData` mapping values are objects, they're serialized in JSON format and sent as a string.)
-### User to bot flow
+### User-to-bot flow
+
+These bot activity fields are supported for user-to-bot flows.
#### Activities and fields

-- Message activity
- - `Id` (Azure Communication Services Chat message ID)
+- Message
+
+ - `Id` (Communication Services Chat message ID)
  - `TimeStamp`
  - `Text`
  - `Attachments`
-- Conversation update activity
- - `MembersAdded`
- - `MembersRemoved`
- - `TopicName`
-- Message update activity
- - `Id` (Updated Azure Communication Services Chat message ID)
- - `Text`
- - `Attachments`
-- Message delete activity
- - `Id` (Deleted Azure Communication Services Chat message ID)
-- Event activity
- - `Name`
- - `Value`
-- Typing activity
+
+- Conversation update
+
+ - `MembersAdded`
+ - `MembersRemoved`
+ - `TopicName`
+
+- Message update
+
+ - `Id` (Updated Communication Services Chat message ID)
+ - `Text`
+ - `Attachments`
+
+- Message delete
+
+ - `Id` (Deleted Communication Services Chat message ID)
+
+- Event
+
+ - `Name`
+ - `Value`
+
+- Typing
#### Other common fields

-- `Recipient.Id` and `Recipeint.Name` (Azure Communication Services Chat user ID and display name)
-- `From.Id` and `From.Name` (Azure Communication Services Chat user ID and display name)
-- `Conversation.Id` (Azure Communication Services Chat thread ID)
-- `ChannelId` (AcsChat if empty)
-- `ChannelData` (Azure Communication Services Chat message metadata)
+- `Recipient.Id` and `Recipient.Name` (Communication Services Chat user ID and display name)
+- `From.Id` and `From.Name` (Communication Services Chat user ID and display name)
+- `Conversation.Id` (Communication Services Chat thread ID)
+- `ChannelId` (Communication Services Chat if empty)
+- `ChannelData` (Communication Services Chat message metadata)
-## Support for single tenant and managed identity bots
+## Support for single-tenant and managed identity bots
-Azure Communication Services Chat channel supports single tenant and managed identity bots as well. Refer to [bot identity information](/azure/bot-service/bot-builder-authentication?tabs=userassigned%2Caadv2%2Ccsharp#bot-identity-information) to set up your bot web app.
+Communication Services Chat channel supports single-tenant bots, managed identity bots, and multi-tenant bots. To set up a single-tenant or managed identity bot, review [Bot identity information](/azure/bot-service/bot-builder-authentication?tabs=userassigned%2Caadv2%2Ccsharp#bot-identity-information).
-For managed identity bots, additionally, you might have to [update bot service identity](/azure/bot-service/bot-builder-authentication?tabs=userassigned%2Caadv2%2Ccsharp#to-update-your-app-service).
+For a managed identity bot, you might have to [update the bot service identity](/azure/bot-service/bot-builder-authentication?tabs=userassigned%2Caadv2%2Ccsharp#to-update-your-app-service).
## Bot handoff patterns
-Sometimes the bot wouldn't be able to understand or answer a question or a customer can request to be connected to a human agent. Then it will be necessary to handoff the chat thread from a bot to a human agent. In such cases, you can design your application to [transition conversation from bot to human](/azure/bot-service/bot-service-design-pattern-handoff-human).
+Sometimes, a bot doesn't understand a question, or it can't answer a question. A customer might ask in the chat to be connected to a human agent. In these scenarios, the chat thread must be handed off from the bot to a human agent. You can design your application to [transition a conversation from a bot to a human](/azure/bot-service/bot-service-design-pattern-handoff-human).
+
+## Handling bot-to-bot communication
+
+In some use cases, two bots need to be added to the same chat thread to provide different services. In this scenario, you might need to ensure that a bot doesn't send automated replies to another bot's messages. If not handled properly, the bots' automated interaction between themselves might result in an infinite loop of messages.
+
+You can verify the Communication Services user identity of a message sender in the activity's `From.Id` property. Check to see whether it belongs to another bot. Then, take the required action to prevent a bot-to-bot communication flow. If this type of scenario results in high call volumes, the Communication Services Chat channel throttles the requests and a bot can't send and receive messages.
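One hedged way to sketch that guard inside a Bot Framework message handler (the set of other bot IDs is an assumed configuration value, not part of the channel itself):

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Schema;

public class GuardedEchoBot : ActivityHandler
{
    // Assumed configuration: Communication Services user IDs of other bots in the thread.
    private static readonly HashSet<string> OtherBotIds = new HashSet<string> { "<OTHER_BOT_ACS_ID>" };

    protected override async Task OnMessageActivityAsync(
        ITurnContext<IMessageActivity> turnContext, CancellationToken cancellationToken)
    {
        // Ignore messages whose sender identity belongs to another bot,
        // so two bots never echo each other in an infinite loop.
        if (OtherBotIds.Contains(turnContext.Activity.From.Id))
        {
            return;
        }

        await turnContext.SendActivityAsync(
            MessageFactory.Text($"Echo: {turnContext.Activity.Text}"), cancellationToken);
    }
}
```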
-## Handling bot to bot communication
+Learn more about [throttle limits](/azure/communication-services/concepts/service-limits#chat).
- There may be certain use cases where two bots need to be added to the same chat thread to provide different services. In such use cases, you may need to ensure that bots don't start sending automated replies to each other's messages. If not handled properly, the bots' automated interaction between themselves may result in an infinite loop of messages. You can verify the Azure Communication Services user identity of the sender of a message from the activity's `From.Id` field to see if it belongs to another bot and take required action to prevent such a communication flow. If such a scenario results in high call volumes, then Azure Communication Services Chat channel will start throttling the requests, which will result in the bot not being able to send and receive the messages. You can learn more about the [throttle limits](../../concepts/service-limits.md#chat).
+## Troubleshoot
-## Troubleshooting
+The following sections describe ways to troubleshoot common scenarios.
-### Chat channel cannot be added
+### Chat channel can't be added
-- Verify that in the Azure Bot Framework (ABS) portal, Configuration -> Bot Messaging endpoint has been set correctly.
+In the [Microsoft Bot Framework developer portal](https://dev.botframework.com/bots), go to **Configuration** > **Bot Messaging** to verify that the endpoint has been set correctly.
### Bot gets a forbidden exception while replying to a message

-- Verify that bot's Microsoft App ID and secret are saved correctly in the bot configuration file uploaded to the webapp.
+Verify that the bot's Microsoft app ID and password are saved correctly in the bot configuration file you upload to the web app.
-### Bot is not able to be added as a participant
+### Bot can't be added as a participant
-- Verify that bot's Azure Communication Services ID is being used correctly while sending a request to add bot to a chat thread.
+Verify that the bot's Communication Services ID is used correctly when a request is sent to add a bot to a chat thread.
## Next steps
-Try the [Sample App](https://github.com/Azure/communication-preview/tree/master/samples/AzureBotService-Sample-App), which showcases a 1:1 chat between the end user and chat bot, and uses BotFramework's WebChat UI component.
+Try the [chat bot demo app](https://github.com/Azure/communication-preview/tree/master/samples/AzureBotService-Sample-App) for a 1:1 chat between a chat user and a bot via the BotFramework WebChat UI component.
communication-services Chat Hero Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/samples/chat-hero-sample.md
Below you'll find more information on prerequisites and steps to set up the samp
1. Set your connection string in `Server/appsettings.json`
2. Set your endpoint URL string in `Server/appsettings.json`
+3. Set your adminUserId string in `Server/appsettings.json`
3. `npm run setup` from the root directory
4. `npm run start` from the root directory
container-instances Container Instances Resource And Quota Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-resource-and-quota-limits.md
+
+ Title: Resource availability & quota limits for ACI
+description: Availability and quota limits of compute and memory resources for the Azure Container Instances service in different Azure regions.
+Last updated : 1/19/2023
+# Resource availability & quota limits for ACI
+
+This article details the availability and quota limits of Azure Container Instances compute, memory, and storage resources in Azure regions and by target operating system. For a general list of available regions for Azure Container Instances, see [available regions](https://azure.microsoft.com/regions/services/).
+
+Values presented are the maximum resources available per deployment of a [container group](container-instances-container-groups.md). Values are current at time of publication.
+
+> [!NOTE]
+> Container groups created within these resource limits are subject to availability within the deployment region. When a region is under heavy load, you may experience a failure when deploying instances. To mitigate such a deployment failure, try deploying instances with lower resource settings, or try your deployment at a later time or in a different region with available resources.
+
+## Default Quota Limits
+
+All Azure services include certain default limits and quotas for resources and features. This section details the default quotas and limits for Azure Container Instances.
+
+Use the [List Usage](/rest/api/container-instances/location/listusage) API to review current quota usage in a region for a subscription.
+
+Certain default limits and quotas can be increased. To request an increase of one or more resources that support such an increase, please submit an [Azure support request][azure-support] (select "Quota" for **Issue type**).
+
+> [!IMPORTANT]
+> Not all limit increase requests are guaranteed to be approved.
+> Deployments with GPU resources are not supported in an Azure virtual network deployment and are only available on Linux container groups.
+> Using GPU resources (preview) is not fully supported yet and any support is provided on a best-effort basis.
+
+### Unchangeable (Hard) Limits
+
+The following limits are default limits that can't be increased through a quota request. Any quota increase requests for these limits will not be approved.
+
+| Resource | Actual Limit |
+| | : |
+| Number of containers per container group | 60 |
+| Number of volumes per container group | 20 |
+| Ports per IP | 5 |
+| Container instance log size - running instance | 4 MB |
+| Container instance log size - stopped instance | 16 KB or 1,000 lines |
++
+### Changeable Limits (Eligible for Quota Increases)
+
+| Resource | Actual Limit |
+| | : |
+| Standard sku container groups per region per subscription | 100 |
+| Standard sku cores (CPUs) per region per subscription | 100 |
+| Standard sku cores (CPUs) for K80 GPU per region per subscription | 0 |
+| Standard sku cores (CPUs) for V100 GPU per region per subscription | 0 |
+| Container group creates per hour |300<sup>1</sup> |
+| Container group creates per 5 minutes | 100<sup>1</sup> |
+| Container group deletes per hour | 300<sup>1</sup> |
+| Container group deletes per 5 minutes | 100<sup>1</sup> |
+
+## Standard Core Resources
+
+### Linux Container Groups
+
+By default, the following resources are available to general purpose (standard core SKU) containers in general deployments and [Azure virtual network](container-instances-vnet.md) deployments for Linux and Windows containers.
+
+| Max CPU | Max Memory (GB) | VNET Max CPU | VNET Max Memory (GB) | Storage (GB) |
+| :: | :: | :-: | :--: | :-: |
+| 4 | 16 | 4 | 16 | 50 |
+
+For a general list of available regions for Azure Container Instances, see [available regions](https://azure.microsoft.com/regions/services/).
+
+### Windows Containers
+
+The following regions and maximum resources are available to container groups with [supported and preview](./container-instances-faq.yml) Windows Server containers.
+
+#### Windows Server 2019 LTSC
+
+> [!NOTE]
+> 1B and 2B hosts have been deprecated for Windows Server 2019 LSTC. See [Host and container version compatibility](/virtualization/windowscontainers/deploy-containers/update-containers#host-and-container-version-compatibility) for more information on 1B, 2B, and 3B hosts.
+
+The following resources are available in all Azure Regions supported by Azure Container Instances. For a general list of available regions for Azure Container Instances, see [available regions](https://azure.microsoft.com/regions/services/).
+
+| 3B Max CPU | 3B Max Memory (GB) | Storage (GB) | Availability Zone support |
+| :-: | :--: | :-: |
+| 4 | 16 | 20 | Y |
+
+## GPU Resources (Preview)
+
+> [!IMPORTANT]
+> Not all limit increase requests are guaranteed to be approved.
+> Deployments with GPU resources are not supported in an Azure virtual network deployment and are only available on Linux container groups.
+> Using GPU resources (preview) is not fully supported yet and any support is provided on a best-effort basis.
+
+The following maximum resources are available to a container group deployed with [GPU resources](container-instances-gpu.md) (preview).
+
+| GPU SKUs | GPU count | Max CPU | Max Memory (GB) | Storage (GB) |
+| | | | | |
+| K80 | 1 | 6 | 56 | 50 |
+| K80 | 2 | 12 | 112 | 50 |
+| K80 | 4 | 24 | 224 | 50 |
+| P100, V100 | 1 | 6 | 112 | 50 |
+| P100, V100 | 2 | 12 | 224 | 50 |
+| P100, V100 | 4 | 24 | 448 | 50 |
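The GPU rows above can be expressed as a simple lookup table. The structure and function name here are hypothetical, for illustration only:

```python
# Hypothetical lookup of the preview GPU limits from the table above
# (not an Azure SDK API). Keys are (GPU SKU, GPU count); values are
# (max CPU, max memory in GB, storage in GB).
GPU_LIMITS = {
    ("K80", 1): (6, 56, 50),
    ("K80", 2): (12, 112, 50),
    ("K80", 4): (24, 224, 50),
    ("P100", 1): (6, 112, 50), ("V100", 1): (6, 112, 50),
    ("P100", 2): (12, 224, 50), ("V100", 2): (12, 224, 50),
    ("P100", 4): (24, 448, 50), ("V100", 4): (24, 448, 50),
}

def max_resources(sku, gpu_count):
    """Return (max_cpu, max_memory_gb, max_storage_gb) for a GPU container group."""
    return GPU_LIMITS[(sku, gpu_count)]
```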
+
+## Next steps
+
+Certain default limits and quotas can be increased. To request an increase of one or more resources that support such an increase, please submit an [Azure support request][azure-support] (select "Quota" for **Issue type**).
+
+Let the team know if you'd like to see additional regions or increased resource availability at [aka.ms/aci/feedback](https://aka.ms/aci/feedback).
+
+For information on troubleshooting container instance deployment, see [Troubleshoot deployment issues with Azure Container Instances](container-instances-troubleshooting.md).
+
+<!-- LINKS - External -->
+
+[az-region-support]: ../availability-zones/az-overview.md#regions
+
+[azure-support]: https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest
+
+
+
+
container-instances Container Instances Virtual Network Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-virtual-network-concepts.md
Last updated 06/17/2022
This article provides background about virtual network scenarios, limitations, and resources. For deployment examples using the Azure CLI, see [Deploy container instances into an Azure virtual network](container-instances-vnet.md).

> [!IMPORTANT]
-> Container group deployment to a virtual network is generally available for Linux containers, in most regions where Azure Container Instances is available. For details, see [Regions and resource availability](container-instances-region-availability.md).
+> Container group deployment to a virtual network is generally available for Linux and Windows containers, in most regions where Azure Container Instances is available. For details, see [Regions and resource availability](container-instances-region-availability.md).
## Scenarios
Container groups deployed into an Azure virtual network enable scenarios like:
## Other limitations
-* Currently, only Linux containers are supported in a container group deployed to a virtual network.
* To deploy container groups to a subnet, the subnet can't contain other resource types. Remove all existing resources from an existing subnet prior to deploying container groups to it, or create a new subnet.
* To deploy container groups to a subnet, the subnet and the container group must be on the same Azure subscription.
* You can't enable a [liveness probe](container-instances-liveness-probe.md) or [readiness probe](container-instances-readiness-probe.md) in a container group deployed to a virtual network.
container-instances Container Instances Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-vnet.md
This article shows how to use the [az container create][az-container-create] com
For networking scenarios and limitations, see [Virtual network scenarios and resources for Azure Container Instances](container-instances-virtual-network-concepts.md).

> [!IMPORTANT]
-> Container group deployment to a virtual network is generally available for Linux containers, in most regions where Azure Container Instances is available. For details, see [Regions and resource availability][container-regions].
+> Container group deployment to a virtual network is generally available for Linux and Windows containers, in most regions where Azure Container Instances is available. For details, see [Regions and resource availability][container-regions].
[!INCLUDE [network profile callout](./includes/network-profile/network-profile-callout.md)]
cosmos-db Feature Support 32 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-32.md
Unique indexes are available for all Azure Cosmos DB accounts using Azure Cosmos
## Time-to-live (TTL)
-Azure Cosmos DB only supports a time-to-live (TTL) at the collection level (_ts) in version 3.2. Upgrade to versions 3.6+ to take advantage of other forms of [TTL](https://learn.microsoft.com/azure/cosmos-db/mongodb/time-to-live).
+Azure Cosmos DB only supports a time-to-live (TTL) at the collection level (_ts) in version 3.2. Upgrade to versions 3.6+ to take advantage of other forms of [TTL](time-to-live.md).
## User and role management
cost-management-billing Add Change Subscription Administrator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/add-change-subscription-administrator.md
tags: billing
Previously updated : 01/23/2023 Last updated : 01/26/2023
To manage access to Azure resources, you must have the appropriate administrator
This article describes how to add or change the administrator role for a user by using Azure RBAC at the subscription scope.
-This article applies to a Microsoft Online Service Program (pay-as-you-go) account or a Visual Studio account. If you have a Microsoft Customer Agreement (Azure plan) account, see [Understand Microsoft Customer Agreement administrative roles in Azure](understand-mca-roles.md).
+This article applies to a Microsoft Online Service Program (pay-as-you-go) account or a Visual Studio account. If you have a Microsoft Customer Agreement (Azure plan) account, see [Understand Microsoft Customer Agreement administrative roles in Azure](understand-mca-roles.md). If you have an Azure Enterprise Agreement, see [Manage Azure Enterprise Agreement roles](understand-ea-roles.md).
Microsoft recommends that you manage access to resources using Azure RBAC. However, if you are still using the classic deployment model and managing the classic resources by using [Azure Service Management PowerShell Module](/powershell/module/servicemanagement/azure.service), you'll need to use a classic administrator.
To identify accounts for which you're a billing administrator, visit the [Cost M
If you're not sure who the account administrator is for a subscription, visit the [Subscriptions page in Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade). Then select the subscription you want to check, and then look under **Settings**. Select **Properties** and the account administrator of the subscription is shown in the **Account Admin** box.
-If you don't see **Account Admin**, you have a Microsoft Customer Agreement account. Instead, [check your access to a Microsoft Customer Agreement](understand-mca-roles.md#check-access-to-a-microsoft-customer-agreement).
-
+If you don't see **Account Admin**, you might have a Microsoft Customer Agreement or Enterprise Agreement account. Instead, [check your access to a Microsoft Customer Agreement](understand-mca-roles.md#check-access-to-a-microsoft-customer-agreement) or see [Manage Azure Enterprise Agreement roles](understand-ea-roles.md).
## Assign a subscription administrator
data-factory Create Self Hosted Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-self-hosted-integration-runtime.md
Installation of the self-hosted integration runtime on a domain controller isn't
- Copy-activity runs happen with a specific frequency. Processor and RAM usage on the machine follows the same pattern with peak and idle times. Resource usage also depends heavily on the amount of data that is moved. When multiple copy jobs are in progress, you see resource usage go up during peak times.
- Tasks might fail during extraction of data in Parquet, ORC, or Avro formats. For more on Parquet, see [Parquet format in Azure Data Factory](./format-parquet.md#using-self-hosted-integration-runtime). File creation runs on the self-hosted integration machine. To work as expected, file creation requires the following prerequisites:
  - [Visual C++ 2010 Redistributable](https://download.microsoft.com/download/3/2/2/3224B87F-CFA0-4E70-BDA3-3DE650EFEBA5/vcredist_x64.exe) Package (x64)
- - Java Runtime (JRE) version 8 from a JRE provider such as [Adopt OpenJDK](https://adoptopenjdk.net/). Ensure that the JAVA_HOME environment variable is set to the JDK folder (and not just the JRE folder).
+ - Java Runtime (JRE) version 8 from a JRE provider such as [Eclipse Temurin](https://adoptium.net/temurin/releases/?version=8). Ensure that the JAVA_HOME environment variable is set to the JDK folder (and not just the JRE folder).
>[!NOTE]
>It might be necessary to adjust the Java settings if memory errors occur, as described in the [Parquet format](./format-parquet.md#using-self-hosted-integration-runtime) documentation.
data-factory Industry Sap Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/industry-sap-connectors.md
The following table shows the SAP connectors and in which activity scenarios the
| :-- | :-- | :-- | :-- | :-- |
|[SAP Business Warehouse Open Hub](connector-sap-business-warehouse-open-hub.md) | ✓/− | | ✓ | SAP Business Warehouse version 7.01 or higher. SAP BW/4HANA isn't supported by this connector. |
|[SAP Business Warehouse via MDX](connector-sap-business-warehouse.md)| ✓/− | | ✓ | SAP Business Warehouse version 7.x. |
-| [SAP CDC](connector-sap-change-data-capture.md) | | ✓/− | | Can connect to all SAP releases supporting SAP Operational Data Provisioning (ODP). This includes most SAP ECC and SAP BW releases, as well as SAP S/4HANA, SAP BW/4HANA and SAP Landscape Transformation Replication Server (SLT). For details, follow [Overview and architecture of the SAP CDC capabilities](sap-change-data-capture-introduction-architecture.md) |
+| [SAP CDC](connector-sap-change-data-capture.md) | | ✓/− | | Can connect to all SAP releases supporting SAP Operational Data Provisioning (ODP). This includes most SAP ECC and SAP BW releases, as well as SAP S/4HANA, SAP BW/4HANA and SAP Landscape Transformation Replication Server (SLT). Regarding prerequisites for the SAP source system, follow [SAP system requirements](sap-change-data-capture-prerequisites-configuration.md#sap-system-requirements). For details on the connector, follow [Overview and architecture of the SAP CDC capabilities](sap-change-data-capture-introduction-architecture.md). |
| [SAP Cloud for Customer (C4C)](connector-sap-cloud-for-customer.md) | ✓/✓ | | ✓ | SAP Cloud for Customer including the SAP Cloud for Sales, SAP Cloud for Service, and SAP Cloud for Social Engagement solutions. |
| [SAP ECC](connector-sap-ecc.md) | ✓/− | | ✓ | SAP ECC on SAP NetWeaver version 7.0 and later. |
| [SAP HANA](connector-sap-hana.md) | ✓/✓ | | ✓ | Any version of SAP HANA database |
data-factory Sap Change Data Capture Prerequisites Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-prerequisites-configuration.md
ODP offers various data extraction contexts or *source object types*. Although m
- Ensure that DataSources are activated on your SAP source systems. This requirement applies only to DataSources that are delivered by SAP or its partners. DataSources that are created by customers are automatically activated. If DataSources have been or are being extracted by SAP BW or BW/4HANA, the DataSources have already been activated. For more information about DataSources and their activations, see [Installing BW Content DataSources](https://help.sap.com/saphelp_nw73/helpdata/en/4a/1be8b7aece044fe10000000a421937/frameset.htm).
-- Make sure that DataSources are released for extractions via ODP. This requirement applies only to DataSources that customers create. DataSources that are delivered by SAP or its partners are automatically released. For more information, see the following SAP support notes:
-
- - [1560241 - To release DataSources for ODP API](https://launchpad.support.sap.com/#/notes/1560241)
-
- Combine this task with running the following programs:
-
- - RODPS_OS_EXPOSE to release DataSources for external use
-
- - BS_ANLY_DS_RELEASE_ODP to release BW extractors for the ODP API
-
- - [2232584 - To release SAP extractors for ODP API](https://launchpad.support.sap.com/#/notes/2232584) for a list of all SAP-delivered DataSources (more than 7,400) that have been released
+- Make sure that DataSources are released for extraction via ODP. This requirement applies to DataSources that customers create, as well as DataSources created by SAP in older releases of SAP ECC. For more information, see SAP support note [2232584 - To release SAP extractors for ODP API](https://launchpad.support.sap.com/#/notes/2232584).
### Set up the SAP Landscape Transformation Replication Server
data-factory Source Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/source-control.md
For more info about connecting Azure Repos to your organization's Active Directo
Visual authoring with GitHub integration supports source control and collaboration for work on your data factory pipelines. You can associate a data factory with a GitHub account repository for source control, collaboration, versioning. A single GitHub account can have multiple repositories, but a GitHub repository can be associated with only one data factory. If you don't have a GitHub account or repository, follow [these instructions](https://github.com/join) to create your resources.
-The GitHub integration with Data Factory supports both public GitHub (that is, [https://github.com](https://github.com)), GitHub Enterprise Cloud and GitHub Enterprise Server. You can use both public and private GitHub repositories with Data Factory as long you have read and write permission to the repository in GitHub. ADF's GitHub enterprise server integration only works with [officially supported versions of GitHub enterprise server.](https://docs.github.com/en/enterprise-server@3.1/admin/all-releases)
+The GitHub integration with Data Factory supports public GitHub (that is, [https://github.com](https://github.com)), GitHub Enterprise Cloud, and GitHub Enterprise Server. You can use both public and private GitHub repositories with Data Factory as long as you have read and write permission to the repository in GitHub. To connect with a public repository, select the **Use Link Repository** option, because public repositories aren't visible in the **Repository name** dropdown menu. ADF's GitHub Enterprise Server integration only works with [officially supported versions of GitHub Enterprise Server](https://docs.github.com/en/enterprise-server@3.1/admin/all-releases).
> [!NOTE]
> If you are using Microsoft Edge, GitHub Enterprise version less than 2.1.4 does not work with it. GitHub officially supports >=3.0 and these all should be fine for ADF. As GitHub changes its minimum version, ADF supported versions will also change.
defender-for-cloud Defender For Dns Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-dns-alerts.md
Now that you know how to respond to DNS alerts, find out more about how to manag
For related material, see the following articles:
- To [export Defender for Cloud alerts](export-to-siem.md) to your centralized security information and event management (SIEM) system, such as Microsoft Sentinel, any third-party SIEM, or any other external tool.
-- To [send alerts to in real-time](continuous-export.md) to Log Analytics or Event Hubs to create automated processes to analyze and respond to security alerts.
+- To [send alerts in real-time](continuous-export.md) to Log Analytics or Event Hubs to create automated processes to analyze and respond to security alerts.
defender-for-cloud Devops Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/devops-faq.md
+
+ Title: Defender for DevOps FAQ
+description: If you're having issues with Defender for DevOps, these frequently asked questions may help you solve them.
+ Last updated : 01/26/2023++
+# Defender for DevOps frequently asked questions (FAQ)
+
+If you're having issues with Defender for DevOps, these frequently asked questions may assist you.
+
+## FAQ
+
+- [I don't see Recommendations for findings](#i-dont-see-recommendations-for-findings)
+- [Why can't I find my repository](#why-cant-i-find-my-repository)
+- [Secret scan didn't run on my code](#secret-scan-didnt-run-on-my-code)
+- [I don't see generated SARIF file in the path I chose to drop it](#i-dont-see-generated-sarif-file-in-the-path-i-chose-to-drop-it)
+- [I don't see the results for my ADO projects in Microsoft Defender for Cloud](#i-dont-see-the-results-for-my-ado-projects-in-microsoft-defender-for-cloud)
+- [What information does Defender for DevOps store about me and my enterprise, and where is the data stored?](#what-information-does-defender-for-devops-store-about-me-and-my-enterprise-and-where-is-the-data-stored)
+
+### I don't see Recommendations for findings
+
+Ensure that you've onboarded the project with the connector and that the repository the build is for is onboarded to Microsoft Defender for Cloud. You can learn how to [onboard your DevOps repository](/azure/defender-for-cloud/quickstart-onboard-devops?branch=main) to Defender for Cloud.
+
+You must have more than a [stakeholder license](https://azure.microsoft.com/pricing/details/devops/azure-devops-services/) to the repos to onboard them. You can confirm that you've onboarded the repositories by checking that they appear in the inventory list in Microsoft Defender for Cloud.
+
+### Why can't I find my repository
+
+Only TfsGit is supported on Azure DevOps service.
+
+Ensure that you've [onboarded your repositories](/azure/defender-for-cloud/quickstart-onboard-devops?branch=main) to Microsoft Defender for Cloud. If you still can't see your repository, ensure that you're signed in with the correct Azure DevOps organization user account. If the user for the connector is wrong, you need to delete the connector that was created, sign in with the correct user account and re-create the connector.
+
+### Secret scan didn't run on my code
+
+To ensure your code is scanned for secrets, make sure you've [onboarded your repositories](/azure/defender-for-cloud/quickstart-onboard-devops?branch=main) to Defender for Cloud.
+
+In addition to onboarding resources, you must have the [Microsoft Security DevOps (MSDO) Azure DevOps extension](/azure/defender-for-cloud/azure-devops-extension?branch=main) configured for your pipelines. The extension runs secret scan along with other scanners.
+
+If no secrets are identified through scans, the total exposed secret for the resource shows `Healthy` in Microsoft Defender for Cloud. If secret scan isn't enabled (meaning MSDO isn't configured for your pipeline), the resource shows as `N/A` in Defender for Cloud.
+
+### I don't see generated SARIF file in the path I chose to drop it
+
+If you don't see the SARIF file in the expected path, you may have chosen a drop path other than `CodeAnalysisLogs/msdo.sarif`. Currently you should drop your SARIF files to `CodeAnalysisLogs/msdo.sarif`.
+
+### I don't see the results for my ADO projects in Microsoft Defender for Cloud
+
+Currently, OSS vulnerabilities, IaC scanning vulnerabilities, and Total code scanning vulnerabilities are only available for GitHub repositories.
+
+Azure DevOps repositories only have the total exposed secrets available and will show `N/A` for all other fields. You can learn more about how to [Review your findings](defender-for-devops-introduction.md).
+
+### What information does Defender for DevOps store about me and my enterprise, and where is the data stored?
+
+Defender for DevOps connects to your source code management system (for example, Azure DevOps or GitHub) to provide a central console for your DevOps resources and security posture. Defender for DevOps processes and stores the following information:
+
+- Metadata on your connected source code management systems and associated repositories. This data includes user, organizational, and authentication information.
+
+- Scan results for recommendations, and assessment results and details.
+
+Data is stored within the region your connector is created in. As you design and create your DevOps connector, consider any data residency requirements when choosing that region.
+
+Defender for DevOps currently doesn't process or store your code, build, and audit logs.
+
+## Next steps
+
+- [Overview of Defender for DevOps](defender-for-devops-introduction.md)
defender-for-cloud Enhanced Security Features Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enhanced-security-features-overview.md
You can use any of the following ways to enable enhanced security for your subsc
### Can I enable Microsoft Defender for Servers on a subset of servers?
-No. When you enable [Microsoft Defender for Servers](defender-for-servers-introduction.md) on an Azure subscription or a connected AWS account, all of the connected machines will be protected by Defender for Servers.
-
-Another alternative is to enable Microsoft Defender for Servers at the Log Analytics workspace level. If you do this, only servers reporting to that workspace will be protected and billed. However, several capabilities will be unavailable. These include Microsoft Defender for Endpoint, VA solution (TVM/Qualys), just-in-time VM access, and more.
+No. When you enable [Microsoft Defender for Servers](defender-for-servers-introduction.md) on an Azure subscription or a connected AWS account, all of the connected machines will be protected by Defender for Servers. This includes servers that don't have the Log Analytics agent or Azure Monitor agent installed.
### If I already have a license for Microsoft Defender for Endpoint, can I get a discount for Defender for Servers?
dms Migration Using Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-using-azure-data-studio.md
To monitor database migrations in the Azure portal:
- Server roles - Server audit
- For a complete list of metadata and server objects that you need to move, refer to the detailed information available in [Manage Metadata When Making a Database Available on Another Server](https://learn.microsoft.com/sql/relational-databases/databases/manage-metadata-when-making-a-database-available-on-another-server).
+ For a complete list of metadata and server objects that you need to move, refer to the detailed information available in [Manage Metadata When Making a Database Available on Another Server](/sql/relational-databases/databases/manage-metadata-when-making-a-database-available-on-another-server).
- SQL Server 2008 and earlier as target versions aren't supported for migrations to SQL Server on Azure Virtual Machines.
firewall Easy Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/easy-upgrade.md
+
+ Title: Azure Firewall easy upgrade/downgrade (preview)
+description: Learn about Azure Firewall easy upgrade/downgrade (preview)
++++ Last updated : 01/26/2023+++
+# Azure Firewall easy upgrade/downgrade (preview)
++
+> [!IMPORTANT]
+> This feature is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+You can now easily upgrade your existing Firewall Standard SKU to Premium SKU and downgrade from Premium to Standard SKU. The process is fully automated and has no service impact (zero service downtime).
+
+## Policies
+
+In the upgrade process, you can select the policy to be attached to the upgraded Premium SKU. You can select an existing Premium Policy or an existing Standard Policy. You can use your existing Standard policy and let the system automatically duplicate, upgrade to Premium Policy, and then attach it to the newly created Premium Firewall.
+
+## Availability
+
+This new capability is available through the Azure portal as shown here. It's also available via PowerShell and Terraform by changing the sku_tier attribute.
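For example, with the Terraform AzureRM provider the tier change is a single-attribute edit on the firewall resource. This is a minimal sketch that assumes an existing resource group, subnet, and public IP; all names are placeholders:

```terraform
resource "azurerm_firewall" "example" {
  name                = "example-firewall"            # placeholder name
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  sku_name            = "AZFW_VNet"
  sku_tier            = "Premium"                     # was "Standard"; changing it triggers the upgrade

  ip_configuration {
    name                 = "configuration"
    subnet_id            = azurerm_subnet.example.id
    public_ip_address_id = azurerm_public_ip.example.id
  }
}
```

Running `terraform apply` after editing `sku_tier` performs the in-place SKU change.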
++
+> [!NOTE]
+> This new upgrade/downgrade capability will also support the Basic SKU for GA.
+
+## Next steps
++
+- To learn more about Azure Firewall, see [What is Azure Firewall?](overview.md).
firewall Firewall Network Rule Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-network-rule-logging.md
Title: Azure network rule name logging (preview)
-description: Learn about Azure network rule name logging (preview)
+ Title: Azure Firewall network rule name logging (preview)
+description: Learn about Azure Firewall network rule name logging (preview)
Last updated 01/25/2023
-# Azure network rule name logging (preview)
+# Azure Firewall network rule name logging (preview)
> [!IMPORTANT]
firewall Firewall Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-preview.md
With this new feature, the event logs for network rules adds the following attri
- Rule collection - Rule name
-For more information, see [Azure network rule name logging (preview)](firewall-network-rule-logging.md).
+For more information, see [Azure Firewall network rule name logging (preview)](firewall-network-rule-logging.md).
### Structured Firewall Logs (preview)
For more information, see [Azure Structured Firewall Logs (preview)](firewall-st
### Policy Analytics (preview)
-Policy Analytics provides insights, centralized visibility, and control to Azure Firewall. IT teams today are challenged to keep Firewall rules up to date, manage existing rules, and remove unused rules. Any accidental rule updates can lead to a significant downtime for IT teams.
+Policy Analytics provides insights, centralized visibility, and control to Azure Firewall. IT teams today are challenged to keep Firewall rules up to date, manage existing rules, and remove unused rules. Any accidental rule updates can lead to a significant downtime for IT teams.
-For large, geographically dispersed organizations, manually managing Firewall rules and policies is a complex and sometimes error-prone process. The new Policy Analytics feature is the answer to this common challenge faced by IT teams.
+For more information, see [Azure Firewall Policy Analytics (preview)](policy-analytics.md).
-You can now refine and update Firewall rules and policies with confidence in just a few steps in the Azure portal. You have granular control to define your own custom rules for an enhanced security and compliance posture. You can automate rule and policy management to reduce the risks associated with a manual process.<br><br>
-
-> [!VIDEO https://www.microsoft.com/videoplayer/embed/RE57NCC]
-
-#### Pricing
-
-Enabling Policy Analytics on a Firewall Policy associated with a single firewall is billed per policy as described on the [Azure Firewall Manager pricing](https://azure.microsoft.com/pricing/details/firewall-manager/) page. Enabling Policy Analytics on a Firewall Policy associated with more than one firewall is offered at no added cost.
-
-#### Key Policy Analytics features
-- **Policy insight panel**: Aggregates insights and highlights relevant policy information.
-- **Rule analytics**: Analyzes existing DNAT, Network, and Application rules to identify rules with low utilization or rules with low usage in a specific time window.
-- **Traffic flow analysis**: Maps traffic flow to rules by identifying top traffic flows and enabling an integrated experience.
-- **Single Rule analysis**: Analyzes a single rule to learn what traffic hits that rule to refine the access it provides and improve the overall security posture.
-
-### Prerequisites
-- An Azure Firewall Standard or Premium
-- An Azure Firewall Standard or Premium policy attached to the Firewall
-- The [network rule name logging preview feature](#network-rule-name-logging-preview) must be enabled to view network rules analysis
-- The [structured firewall logs feature](#structured-firewall-logs-preview) must be enabled on Firewall Standard or Premium
-
-### Enable Policy Analytics
-
-Policy analytics starts monitoring the flows in the DNAT, Network, and Application rule analysis only after you enable the feature. It can't analyze rules hit before the feature is enabled.
-
-#### Firewall with no Diagnostics settings configured
--
-1. Once all prerequisites are met, select **Policy analytics (preview)** in the table of contents.
-2. Next, select **Configure Workspaces**.
-3. In the pane that opens, select the **Enable Policy Analytics** checkbox.
-4. Next, choose a log analytics workspace. The log analytics workspace should be the same as the Firewall attached to the policy.
-5. Select **Save** after you choose the log analytics workspace.
-6. Go to the Firewall attached to the policy and enter the **Diagnostic settings** page. You'll see the **FirewallPolicySetting** added there as part of the policy analytics feature.
-7. Select **Edit Setting**, and ensure the **Resource specific** toggle is checked, and the highlighted tables are checked. In the previous example, all logs are written to the log analytics workspace.
-
-#### Firewall with Diagnostics settings already configured
-
-1. Ensure that the Firewall attached to the policy is logging to **Resource Specific** tables, and that the following three tables are also selected:
- - AZFWApplicationRuleAggregation
- - AZFWNetworkRuleAggregation
- - AZFWNatRuleAggregation
-2. Next, select **Policy Analytics (preview)** in the table of contents. Once inside the feature, select **Configure Workspaces**.
-3. Now, select **Enable Policy Analytics**.
-4. Next, choose a log analytics workspace. The log analytics workspace should be the same as the Firewall attached to the policy.
-5. Select **Save** after you choose the log analytics workspace.
-
- During the save process, you might see the following error message: **Failed to update Diagnostic Settings**
-
- You can disregard this error message if the policy was successfully updated.
-
-> [!TIP]
-> Policy Analytics has a dependency on both Log Analytics and Azure Firewall resource specific logging. Verify the Firewall is configured appropriately or follow the previous instructions. Be aware that logs take 60 minutes to appear after enabling them for the first time. This is because logs are aggregated in the backend every hour. You can check logs are configured appropriately by running a log analytics query on the resource specific tables such as **AZFWNetworkRuleAggregation**, **AZFWApplicationRuleAggregation**, and **AZFWNatRuleAggregation**.
-
-### Single click upgrade/downgrade (preview)
+### Easy upgrade/downgrade (preview)
You can now easily upgrade your existing Firewall Standard SKU to Premium SKU and downgrade from Premium to Standard SKU. The process is fully automated and has no service impact (zero service downtime).
-In the upgrade process, you can select the policy to be attached to the upgraded Premium SKU. You can select an existing Premium Policy or an existing Standard Policy. You can use your existing Standard policy and let the system automatically duplicate, upgrade to Premium Policy, and then attach it to the newly created Premium Firewall.
-
-This new capability is available through the Azure portal as shown here, and via PowerShell and Terraform simply by changing the sku_tier attribute.
--
-> [!NOTE]
-> This new upgrade/downgrade capability will also support the Basic SKU for GA.
+For more information, see [Azure Firewall easy upgrade/downgrade (preview)](easy-upgrade.md).
## Next steps
firewall Policy Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/policy-analytics.md
+
+ Title: Azure Firewall Policy Analytics (preview)
+description: Learn about Azure Firewall Policy Analytics (preview)
++++ Last updated : 01/26/2023+++
+# Azure Firewall Policy Analytics (preview)
++
+> [!IMPORTANT]
+> This feature is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Policy Analytics provides insights, centralized visibility, and control over Azure Firewall. IT teams today are challenged to keep Firewall rules up to date, manage existing rules, and remove unused rules. Accidental rule updates can lead to significant downtime for IT teams.
+
+For large, geographically dispersed organizations, manually managing Firewall rules and policies is a complex and sometimes error-prone process. The new Policy Analytics feature is the answer to this common challenge faced by IT teams.
+
+You can now refine and update Firewall rules and policies with confidence in just a few steps in the Azure portal. You have granular control to define your own custom rules for an enhanced security and compliance posture. You can automate rule and policy management to reduce the risks associated with a manual process.<br><br>
+
+> [!VIDEO https://www.microsoft.com/videoplayer/embed/RE57NCC]
+
+## Pricing
+
+Enabling Policy Analytics on a Firewall Policy associated with a single firewall is billed per policy as described on the [Azure Firewall Manager pricing](https://azure.microsoft.com/pricing/details/firewall-manager/) page. Enabling Policy Analytics on a Firewall Policy associated with more than one firewall is offered at no added cost.
+
+## Key Policy Analytics features
+
+- **Policy insight panel**: Aggregates insights and highlights relevant policy information.
+- **Rule analytics**: Analyzes existing DNAT, Network, and Application rules to identify rules with low utilization within a specific time window.
+- **Traffic flow analysis**: Maps traffic flow to rules by identifying top traffic flows and enabling an integrated experience.
+- **Single Rule analysis**: Analyzes a single rule to show which traffic hits it, so you can refine the access it grants and improve your overall security posture.
+
+## Prerequisites
+
+- An Azure Firewall Standard or Premium deployment
+- An Azure Firewall Standard or Premium policy attached to the Firewall
+- The [Azure Firewall network rule name logging (preview)](firewall-network-rule-logging.md) must be enabled to view network rules analysis.
+- The [Azure Structured Firewall Logs (preview)](firewall-structured-logs.md) must be enabled on Firewall Standard or Premium.
++
+## Enable Policy Analytics
+
+Policy Analytics starts monitoring flows for the DNAT, Network, and Application rule analysis only after you enable the feature. It can't analyze rule hits that occurred before the feature was enabled.
+
+### Firewall with no diagnostics settings configured
++
+1. Once all prerequisites are met, select **Policy analytics (preview)** in the table of contents.
+2. Next, select **Configure Workspaces**.
+3. In the pane that opens, select the **Enable Policy Analytics** checkbox.
+4. Next, choose a Log Analytics workspace. It should be the same workspace used by the Firewall attached to the policy.
+5. Select **Save** after you choose the log analytics workspace.
+6. Go to the Firewall attached to the policy and enter the **Diagnostic settings** page. You'll see the **FirewallPolicySetting** added there as part of the policy analytics feature.
+7. Select **Edit Setting**, and ensure that the **Resource specific** toggle and the highlighted tables are selected. In the previous example, all logs are written to the Log Analytics workspace.
+
+### Firewall with Diagnostics settings already configured
+
+1. Ensure that the Firewall attached to the policy is logging to **Resource Specific** tables, and that the following three tables are also selected:
+ - AZFWApplicationRuleAggregation
+ - AZFWNetworkRuleAggregation
+ - AZFWNatRuleAggregation
+2. Next, select **Policy Analytics (preview)** in the table of contents. Once inside the feature, select **Configure Workspaces**.
+3. Now, select **Enable Policy Analytics**.
+4. Next, choose a Log Analytics workspace. It should be the same workspace used by the Firewall attached to the policy.
+5. Select **Save** after you choose the log analytics workspace.
+
+ During the save process, you might see the following error message: **Failed to update Diagnostic Settings**
+
+ You can disregard this error message if the policy was successfully updated.
+
+> [!TIP]
+> Policy Analytics has a dependency on both Log Analytics and Azure Firewall resource specific logging. Verify the Firewall is configured appropriately or follow the previous instructions. Be aware that logs take 60 minutes to appear after enabling them for the first time. This is because logs are aggregated in the backend every hour. You can check logs are configured appropriately by running a log analytics query on the resource specific tables such as **AZFWNetworkRuleAggregation**, **AZFWApplicationRuleAggregation**, and **AZFWNatRuleAggregation**.
+
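As a quick spot check, the query the tip above describes can be assembled programmatically. This is an illustrative sketch only: the table names come from this article, while the `union withsource`/`summarize` pattern is ordinary KQL, not an official health check.

```python
# Build a KQL query that counts recent rows in the three resource-specific
# aggregation tables Policy Analytics depends on. Table names are from this
# article; the query shape is an assumption, not an official check.
AGGREGATION_TABLES = [
    "AZFWNetworkRuleAggregation",
    "AZFWApplicationRuleAggregation",
    "AZFWNatRuleAggregation",
]

def build_table_check_query(tables=AGGREGATION_TABLES, hours=2):
    """Return a KQL query counting rows per table over the last N hours."""
    return (
        "union withsource=SourceTable " + ", ".join(tables)
        + f"\n| where TimeGenerated > ago({hours}h)"
        + "\n| summarize Rows = count() by SourceTable"
    )

query = build_table_check_query()
```

Paste the resulting query into the workspace's **Logs** blade. Remember that the aggregation tables can take up to 60 minutes to receive their first rows after logging is enabled.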
+## Next steps
++
+- To learn more about Azure Firewall logs and metrics, see [Azure Firewall logs and metrics](logs-and-metrics.md).
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/release-notes.md
Title: Azure API for FHIR monthly releases description: This article provides details about the Azure API for FHIR monthly features and enhancements. -+
Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Server for Azure. The server is an implementation of the [FHIR](https://hl7.org/fhir) standard. This document provides details about the features and enhancements made to Azure API for FHIR.
+## **November 2022**
+
+**Fixed the error generated when a resource is updated using an if-match header and PATCH**
+
+This bug is now fixed. The resource is now updated if it matches the ETag header. For details, see [#2877](https://github.com/microsoft/fhir-server/issues/2877).
+## May 2022
+
+### **Enhancement**
healthcare-apis Data Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/data-flow.md
- Title: The MedTech service data flow - Azure Health Data Services
-description: Understand the MedTech service's data flow. The MedTech service ingests, normalizes, groups, transforms, and persists IoMT data to FHIR service.
----- Previously updated : 01/18/2023---
-# The MedTech service data flow
-
-This article provides an overview of the MedTech service data flow. You'll learn about the different data processing stages within the MedTech service that transforms device data into Fast Healthcare Interoperability Resources (FHIR&#174;)-based [Observation](https://www.hl7.org/fhir/observation.html) resources.
-
-Data from devices flows through a path in which the MedTech service transforms data into FHIR, and then data is stored on and accessed from the FHIR service. The data path follows these steps in this order: ingest, normalize, group, transform, and persist. Data is received from the device in the first step, ingestion. After the data is received, it's normalized per a user-selected or user-created schema template called the device mapping. Normalized data is simpler to process and can be grouped. In the next step, data is grouped using three parameters: device identity, measurement type, and time period. After the data is normalized and grouped, it can be transformed through a FHIR destination mapping, and then saved or persisted on the FHIR service.
-
-This article goes into more depth about each step in the data flow. The next steps are [Choose a deployment method for the MedTech service](deploy-new-choose.md) by using a device mapping (the normalization step) and a FHIR destination mapping (the transformation step).
-
-This next section of the article describes the stages that IoT (Internet of Things) device data goes through as the MedTech service processes it.
--
-## Ingest
-Ingest is the first stage where device data is received into the MedTech service. The ingestion endpoint for device data is hosted on an [Azure Event Hubs](../../event-hubs/index.yml) event hub. The Azure Event Hubs platform supports high scale and throughput, with the ability to receive and process millions of messages per second. It also enables the MedTech service to consume messages asynchronously, removing the need for devices to wait while device data gets processed.
-
-> [!NOTE]
-> JSON is the only supported format at this time for device data.
-
-## Normalize
-Normalize is the next stage where device data is retrieved from the above event hub and processed using the device mapping. This mapping process results in transforming device data into a normalized schema.
-
-The normalization process not only simplifies data processing at later stages but also provides the capability to project one input message into multiple normalized messages. For instance, a device could send multiple vital signs for body temperature, pulse rate, blood pressure, and respiration rate in a single message. This input message would create four separate FHIR resources. Each resource would represent a different vital sign, with the input message projected into four different normalized messages.
-
-## Group
-Group is the next stage where the normalized messages available from the previous stage are grouped using three different parameters:
-
-* Device identity
-* Measurement type
-* Time period
-
-Device identity and measurement type grouping enable use of the [SampledData](https://www.hl7.org/fhir/datatypes.html#SampledData) measurement type. This type provides a concise way to represent a time-based series of measurements from a device in FHIR. The time period controls the latency at which Observation resources generated by the MedTech service are written to the FHIR service.
-
-> [!NOTE]
-> The time period value is defaulted to 15 minutes and cannot be configured for preview.
-
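The grouping described above can be sketched as a small in-memory model. This is a conceptual illustration only, not the MedTech service's implementation; the field names are hypothetical, and the window size mirrors the 15-minute default noted above.

```python
from collections import defaultdict

WINDOW_SECONDS = 15 * 60  # mirrors the 15-minute default time period

def group_key(msg):
    # Bucket by device identity, measurement type, and time-window index.
    return (msg["device_id"], msg["type"], msg["timestamp"] // WINDOW_SECONDS)

def group_messages(messages):
    groups = defaultdict(list)
    for msg in messages:
        groups[group_key(msg)].append(msg)
    return dict(groups)

normalized = [
    {"device_id": "d1", "type": "heartrate", "timestamp": 0, "value": 70},
    {"device_id": "d1", "type": "heartrate", "timestamp": 60, "value": 72},
    {"device_id": "d1", "type": "bodytemperature", "timestamp": 30, "value": 36.6},
]
groups = group_messages(normalized)
# The two heart-rate readings share one group; the temperature gets its own.
```

Grouping by these three keys is what lets a time-based series of readings be collapsed into a single SampledData value later in the pipeline.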
-## Transform
-In the Transform stage, grouped-normalized messages are processed through FHIR destination mapping templates. Messages matching a template type get transformed into FHIR-based Observation resources as specified through the mapping.
-
-At this point, [Device](https://www.hl7.org/fhir/device.html) resource, along with its associated [Patient](https://www.hl7.org/fhir/patient.html) resource, is also retrieved from the FHIR service using the device identifier present in the message. These resources are added as a reference to the Observation resource being created.
-
-> [!NOTE]
-> All identity lookups are cached once resolved to decrease load on the FHIR service. If you plan on reusing devices with multiple patients, it's advised that you create a virtual device resource that is specific to the patient and send the virtual device identifier in the message payload. The virtual device can be linked to the actual device resource as a parent.
-
-If no Device resource for a given device identifier exists in the FHIR service, the outcome depends on the value of `Resolution Type` set at the time of creation. When set to `Lookup`, the specific message is ignored, and the pipeline continues to process other incoming messages. If set to `Create`, the MedTech service creates bare-bones Device and Consumer resources on the FHIR service.
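The `Resolution Type` behavior described above amounts to a simple branch. The sketch below is illustrative only; the function name and resource shapes are hypothetical, not the service's API.

```python
def resolve_device(device_id, fhir_devices, resolution_type):
    """Hypothetical sketch of the Resolution Type branch; not the real API."""
    if device_id in fhir_devices:
        return fhir_devices[device_id]
    if resolution_type == "Lookup":
        return None  # message is ignored; the pipeline keeps processing
    if resolution_type == "Create":
        # Bare-bones resource, standing in for the real FHIR payload.
        fhir_devices[device_id] = {"resourceType": "Device", "identifier": device_id}
        return fhir_devices[device_id]
    raise ValueError(f"unknown resolution type: {resolution_type}")

registry = {}
skipped = resolve_device("dev-01", registry, "Lookup")  # None: message ignored
created = resolve_device("dev-01", registry, "Create")  # bare-bones Device created
```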
-
-## Persist
-Once the Observation FHIR resource is generated in the Transform stage, the resource is saved into the FHIR service. If the Observation FHIR resource is new, it will be created on the FHIR service. If the Observation FHIR resource already existed, it will get updated.
-
-## Next steps
-
-In this article, you learned about the MedTech service data flow.
-
-To learn how to configure the MedTech service device and FHIR destination mappings, see
-
-> [!div class="nextstepaction"]
-> [Device mappings](how-to-configure-device-mappings.md)
-
-> [!div class="nextstepaction"]
-> [FHIR destination mappings](how-to-configure-fhir-mappings.md)
-
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Deploy New Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-new-arm.md
Previously updated : 1/5/2023 Last updated : 1/20/2023
To begin deployment in the Azure portal, select the **Deploy to Azure** button:
:::image type="content" source="media\deploy-new-arm\iot-deployment-complete-banner.png" alt-text="Screenshot that shows a green checkmark and the message Your deployment is complete."::: > [!IMPORTANT]
- > If you're going to allow access from multiple services to the device message event hub, it is highly recommended that each service has its own event hub consumer group.
+ > If you're going to allow access from multiple services to the device message event hub, it's required that each service has its own event hub consumer group.
> > Consumer groups enable multiple consuming applications to have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups). >
healthcare-apis Device Messages Through Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/device-messages-through-iot-hub.md
Previously updated : 1/18/2023 Last updated : 1/20/2023
For enhanced workflows and ease of use, you can use the MedTech service to recei
:::image type="content" source="media\device-messages-through-iot-hub\data-flow-diagram.png" border="false" alt-text="Diagram of the IoT message data flow through an IoT hub and event hub, and then into the MedTech service." lightbox="media\device-messages-through-iot-hub\data-flow-diagram.png"::: > [!TIP]
-> To learn more about how the MedTech service transforms and persists device messages into the Fast Healthcare Interoperability Resources (FHIR&#174;) service as FHIR Observations, see [The MedTech service data flow](data-flow.md).
+> To learn more about how the MedTech service transforms and persists device message data into the Fast Healthcare Interoperability Resources (FHIR&#174;) service as FHIR Observations, see [Understand the MedTech service device message data transformation](understand-service.md).
In this tutorial, you learn how to:
To begin deployment in the Azure portal, select the **Deploy to Azure** button:
:::image type="content" source="media\device-messages-through-iot-hub\deployment-complete-banner.png" alt-text="Screenshot that shows a green checkmark and the message Your deployment is complete."::: > [!IMPORTANT]
- > If you're going to allow access from multiple services to the device message event hub, it is highly recommended that each service has its own event hub consumer group.
+ > If you're going to allow access from multiple services to the device message event hub, it's required that each service has its own event hub consumer group.
> > Consumer groups enable multiple consuming applications to have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups). >
healthcare-apis Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/frequently-asked-questions.md
Previously updated : 1/5/2023 Last updated : 1/26/2023 # Frequently asked questions about the MedTech service
-Here are some of the frequently asked questions (FAQs) about the MedTech service.
+This article provides answers to frequently asked questions (FAQs) about the MedTech service.
-## The MedTech service: The basics
-
-### Where is the MedTech service available?
+## Where is the MedTech service available?
The MedTech service is available in these Azure regions: [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=health-data-services).
-### Can I use the MedTech service with a different FHIR service other than the Azure Health Data Services FHIR service?
+## Can I use the MedTech service with a different FHIR service other than the Azure Health Data Services FHIR service?
-No. The Azure Health Data Services MedTech service currently only supports the Azure Health Data Services Fast Healthcare Interoperability Resources (FHIR&#174;) service for the persistence of data. The open-source version of the MedTech service supports the use of different FHIR services.
+No. The MedTech service currently only supports the Azure Health Data Services Fast Healthcare Interoperability Resources (FHIR&#174;) service for the persistence of transformed device message data. The open-source version of the MedTech service supports the use of different FHIR services.
To learn more about the MedTech service open-source projects, see [Open-source projects](git-projects.md).
-### What versions of FHIR does the MedTech service support?
+## What versions of FHIR does the MedTech service support?
+
+The MedTech service currently only supports the persistence of [HL7 FHIR&#174; R4](https://www.hl7.org/implement/standards/product_brief.cfm?product_id=491).
+
+## How long does it take for device message data to show up on the FHIR service?
-The MedTech service currently only supports the persistence of [HL7 FHIR&#174; R4](https://www.hl7.org/implement/standards/product_brief.cfm?product_id=491).
+The MedTech service buffers the FHIR Observation resources created during the transformation stage and provides near real-time processing. However, it can potentially take up to five minutes for FHIR Observation resources to be persisted in the FHIR service. To learn how the MedTech service transforms device message data into FHIR Observation resources, see [Understand the MedTech service device message data transformation](understand-service.md).
-### Does the MedTech service perform backups of device messages?
+## Why do I have to provide device and FHIR destination mappings to the MedTech service?
+
+The MedTech service requires the device and FHIR destination mappings to perform the normalization and transformation processes on device message data. To learn how the MedTech service transforms device message data into FHIR Observation resources, see [Understand the MedTech service device message data transformation](understand-service.md).
+
+## Does the MedTech service perform backups of device messages?
No. The MedTech service doesn't back up the device messages that come into the customer's event hub. The customer controls the device message retention period within their event hub, which can be from 1-7 days. If the device message data is successfully processed by the MedTech service, it's persisted in the FHIR service, and the FHIR service backup policy applies. To learn more about event hub message retention, see [What is the maximum retention period for events?](/azure/event-hubs/event-hubs-faq#what-is-the-maximum-retention-period-for-events-)
-### What are the subscription quota limits for the MedTech service?
+## What are the subscription quota limits for the MedTech service?
* 25 MedTech services per Subscription (not adjustable) * 10 MedTech services per workspace (not adjustable)
To learn more about event hub message retention, see [What is the maximum retent
(* - FHIR Destination is a child resource of the MedTech service)
-### Can I use the MedTech service with device messages from Apple&#174;, Google&#174;, or Fitbit&#174; devices?
+## Can I use the MedTech service with device messages from Apple&#174;, Google&#174;, or Fitbit&#174; devices?
Yes. The MedTech service supports device messages from all these vendors through the open-source version of the MedTech service. To learn more about the MedTech service open-source projects, see [Open-source projects](git-projects.md).
-## More frequently asked questions
-
-[FAQs about the Azure Health Data Services](../healthcare-apis-faqs.md)
-
-[FAQs about Azure Health Data Services FHIR service](../fhir/fhir-faq.md)
-
-[FAQs about Azure Health Data Services DICOM service](../dicom/dicom-services-faqs.yml)
-
## Next steps

In this article, you learned the answers to frequently asked questions (FAQs) about the MedTech service.
+To learn about the MedTech service, see
+
+> [!div class="nextstepaction"]
+> [What is the MedTech service?](overview.md)
+ To learn about methods for deploying the MedTech service, see > [!div class="nextstepaction"]
healthcare-apis Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/get-started.md
Previously updated : 1/18/2023 Last updated : 1/20/2023
The MedTech service processes the data in five steps:
If the processing was successful and you didn't get any error messages, your device data is now a FHIR service [Observation](http://hl7.org/fhir/observation.html) resource.
-For more information on the data flow through MedTech, see [The MedTech service data flow](data-flow.md).
+For more information on the MedTech service device message data transformation, see [Understand the MedTech service device message data transformation](understand-service.md).
## Step 6: Verify the processed data
healthcare-apis How To Configure Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-configure-metrics.md
Previously updated : 1/12/2023 Last updated : 1/20/2023
Metric category|Metric name|Metric description|
|--|--|--| |Availability|IotConnector Health Status|The overall health of the MedTech service.| |Errors|Total Error Count|The total number of errors.|
-|Latency|Average Group Stage Latency|The average latency of the group stage. The [group stage](data-flow.md#group) performs buffering, aggregating, and grouping on normalized messages.|
-|Latency|Average Normalize Stage Latency|The average latency of the normalized stage. The [normalized stage](data-flow.md#normalize) performs normalization on raw incoming messages.|
-|Traffic|Number of Fhir resources saved|The total number of Fast Healthcare Interoperability Resources (FHIR&#174;) resources [updated or persisted](data-flow.md#persist) by the MedTech service.|
-|Traffic|Number of Incoming Messages|The number of received raw [incoming messages](data-flow.md#ingest) (for example, the device events) from the configured source event hub.|
-|Traffic|Number of Measurements|The number of normalized value readings received by the FHIR [transformation stage](data-flow.md#transform) of the MedTech service.|
+|Latency|Average Group Stage Latency|The average latency of the group stage. The [group stage](understand-service.md#group) performs buffering, aggregating, and grouping on normalized messages.|
+|Latency|Average Normalize Stage Latency|The average latency of the normalized stage. The [normalized stage](understand-service.md#normalize) performs normalization on raw incoming messages.|
+|Traffic|Number of Fhir resources saved|The total number of Fast Healthcare Interoperability Resources (FHIR&#174;) resources [updated or persisted](understand-service.md#persist) by the MedTech service.|
+|Traffic|Number of Incoming Messages|The number of received raw [incoming messages](understand-service.md#ingest) (for example, the device events) from the configured source event hub.|
+|Traffic|Number of Measurements|The number of normalized value readings received by the FHIR [transformation stage](understand-service.md#transform) of the MedTech service.|
|Traffic|Number of Message Groups|The number of groups that have messages aggregated in the designated time window.| |Traffic|Number of Normalized Messages|The number of normalized messages.|
healthcare-apis How To Enable Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-enable-diagnostic-settings.md
Previously updated : 1/24/2023 Last updated : 1/26/2023
If you choose to include your Log Analytics workspace as a destination option fo
:::image type="content" source="media/how-to-enable-diagnostic-settings/select-logs-button.png" alt-text="Screenshot of logs option." lightbox="media/how-to-enable-diagnostic-settings/select-logs-button.png":::
-2. Copy the below table query string into your Log Analytics workspace query area and select **Run**. Using the *AHDSMedTechDiagnosticLogs* table will provide you with all logs contained in the entire table for the selected **Time range** setting (the default value is **Last 24 hours**). The MedTech service provides five pre-defined queries that will be addressed in the article section titled [Accessing the MedTech service pre-defined Azure Log Analytics queries](how-to-enable-diagnostic-settings.md#accessing-the-medtech-service-pre-defined-azure-log-analytics-queries).
+2. Copy the following query into your Log Analytics workspace query area and select **Run**. Querying the *AHDSMedTechDiagnosticLogs* table returns all logs in the table for the selected **Time range** setting (the default value is **Last 24 hours**). The MedTech service provides five pre-defined queries that are addressed in the article section titled [Accessing the MedTech service pre-defined Azure Log Analytics queries](#accessing-the-medtech-service-pre-defined-azure-log-analytics-queries).
 ```Kusto
 AHDSMedTechDiagnosticLogs
 ```
healthcare-apis How To Use Custom Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-custom-functions.md
Previously updated : 1/12/2023 Last updated : 1/18/2023 # How to use custom functions with device mappings
-Many functions are available when using **JMESPath** as the expression language. Besides the functions available as part of the JMESPath specification, many more custom functions may also be used. This article describes the MedTech service-specific custom functions for use with the MedTech service [device mapping](how-to-configure-device-mappings.md) during the device message [normalization](data-flow.md#normalize) process.
+Many functions are available when using **JMESPath** as the expression language. Besides the functions available as part of the JMESPath specification, many more custom functions may also be used. This article describes the MedTech service-specific custom functions for use with the MedTech service [device mapping](how-to-configure-device-mappings.md) during the device message [normalization](understand-service.md#normalize) process.
> [!TIP] > For more information on JMESPath functions, see the [JMESPath specification](https://jmespath.org/specification.html#built-in-functions).
healthcare-apis How To Use Monitoring Tab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-monitoring-tab.md
Previously updated : 1/12/2023 Last updated : 1/18/2023
Metric category|Metric name|Metric description|
|--|--|--| |Availability|IotConnector Health Status|The overall health of the MedTech service.| |Errors|**Total Error Count**|The total number of errors.|
-|Latency|**Average Group Stage Latency**|The average latency of the group stage. The [group stage](data-flow.md#group) performs buffering, aggregating, and grouping on normalized messages.|
-|Latency|**Average Normalize Stage Latency**|The average latency of the normalized stage. The [normalized stage](data-flow.md#normalize) performs normalization on raw incoming messages.|
-|Traffic|Number of Fhir resources saved|The total number of Fast Healthcare Interoperability Resources (FHIR&#174;) resources [updated or persisted](data-flow.md#persist) by the MedTech service.|
-|Traffic|**Number of Incoming Messages**|The number of received raw [incoming messages](data-flow.md#ingest) (for example, the device events) from the configured source event hub.|
-|Traffic|**Number of Measurements**|The number of normalized value readings received by the FHIR [transformation stage](data-flow.md#transform) of the MedTech service.|
+|Latency|**Average Group Stage Latency**|The average latency of the group stage. The [group stage](understand-service.md#group) performs buffering, aggregating, and grouping on normalized messages.|
+|Latency|**Average Normalize Stage Latency**|The average latency of the normalized stage. The [normalized stage](understand-service.md#normalize) performs normalization on raw incoming messages.|
+|Traffic|Number of Fhir resources saved|The total number of Fast Healthcare Interoperability Resources (FHIR&#174;) resources [updated or persisted](understand-service.md#persist) by the MedTech service.|
+|Traffic|**Number of Incoming Messages**|The number of received raw [incoming messages](understand-service.md#ingest) (for example, the device events) from the configured source event hub.|
+|Traffic|**Number of Measurements**|The number of normalized value readings received by the FHIR [transformation stage](understand-service.md#transform) of the MedTech service.|
|Traffic|**Number of Message Groups**|The number of groups that have messages aggregated in the designated time window.| |Traffic|**Number of Normalized Messages**|The number of normalized messages.|
To learn how to enable the MedTech service diagnostic settings, see
> [!div class="nextstepaction"] > [How to enable diagnostic settings for the MedTech service](how-to-enable-diagnostic-settings.md)
-(FHIR&#174;) is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
+(FHIR&#174;) is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview.md
The following Microsoft solutions can use MedTech service for extra functionalit
In this article, you learned about the MedTech service and its capabilities.
-To learn about the MedTech service data flow, see
+To learn about how the MedTech service processes device messages, see
> [!div class="nextstepaction"]
-> [The MedTech service data flow](data-flow.md)
+> [Understand the MedTech service device message data transformation](understand-service.md)
To learn about the different deployment methods for the MedTech service, see
healthcare-apis Troubleshoot Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/troubleshoot-errors.md
Previously updated : 1/12/2023 Last updated : 1/20/2023
This article provides assistance troubleshooting and fixing MedTech service erro
This property represents the operation being performed by the MedTech service when the error has occurred. An operation generally represents the data flow stage while processing a device message. Here's a list of possible values for this property. > [!NOTE]
-> For information about the different stages of data flow in the MedTech service, see [The MedTech service data flow](data-flow.md).
+> For information about the MedTech service device message data transformation, see [Understand the MedTech service device message data transformation](understand-service.md).
|Data flow stage|Description| ||--|
healthcare-apis Understand Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/understand-service.md
+
+ Title: Understand the MedTech service device message data transformation - Azure Health Data Services
+description: This article provides an understanding of the MedTech service device message data transformation to FHIR Observation resources. The MedTech service ingests, normalizes, groups, transforms, and persists device message data into the FHIR service.
+++++ Last updated : 1/25/2023+++
+# Understand the MedTech service device message data transformation
+
+This article provides an overview of the device message data processing stages within the [MedTech service](overview.md). The MedTech service transforms device message data into Fast Healthcare Interoperability Resources (FHIR&#174;) [Observation](https://www.hl7.org/fhir/observation.html) resources for persistence on the [FHIR service](../fhir/overview.md).
+
+The MedTech service processes device message data in these stages, in this order:
+
+> [!div class="checklist"]
+> - Ingest
+> - Normalize - Device mappings applied.
+> - Group
+> - Transform - FHIR destination mappings applied.
+> - Persist
++
+## Ingest
+Ingest is the first stage where device messages are received from an [Azure Event Hubs](../../event-hubs/index.yml) event hub (`device message event hub`) and immediately pulled into the MedTech service. The Event Hubs service supports high scale and throughput with the ability to receive and process millions of device messages per second. It also enables the MedTech service to consume messages asynchronously, removing the need for devices to wait while device messages are processed.
+
+The MedTech service uses its [system-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/overview#managed-identity-types) and [Azure role-based access control (Azure RBAC)](/azure/role-based-access-control/overview) for secure access to the device message event hub.
+
+> [!NOTE]
+> JSON is the only supported format at this time for device message data.
+
+> [!IMPORTANT]
+> If you're going to allow access from multiple services to the device message event hub, each service must have its own event hub consumer group.
+>
+> Consumer groups enable multiple consuming applications to have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups).
+>
+> Examples:
+>
+> - Two MedTech services accessing the same device message event hub.
+>
+> - A MedTech service and a storage writer application accessing the same device message event hub.
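The independent-offset behavior that consumer groups provide can be sketched in plain Python. This is an illustrative model only, not the Event Hubs SDK; the class and event names are invented for the example:

```python
# Illustrative model of consumer groups (not the Event Hubs SDK): each
# group keeps its own offset into the same event stream, so consumers
# read independently at their own pace.
events = ["msg0", "msg1", "msg2", "msg3"]

class ConsumerGroup:
    def __init__(self, name):
        self.name = name
        self.offset = 0  # position is tracked per group, not per stream

    def read(self, count=1):
        batch = events[self.offset:self.offset + count]
        self.offset += len(batch)
        return batch

# Two services, two consumer groups, one event hub.
medtech = ConsumerGroup("medtech-service")
writer = ConsumerGroup("storage-writer")

medtech.read(3)         # the MedTech service reads ahead
first = writer.read(1)  # the storage writer still starts at msg0
```

Because each group tracks its own position, one consumer racing ahead never affects what the other sees.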
+
+## Normalize
+Normalize is the next stage, where device message data is processed using conforming, valid [device mappings](how-to-configure-device-mappings.md) that you select or create. The mapping process transforms device message data into a normalized schema.
+
+The normalization process not only simplifies data processing at later stages, but also provides the capability to project one device message into multiple normalized messages. For instance, a device could send multiple vital signs for body temperature, pulse rate, blood pressure, and respiration rate in a single device message. This device message would create four separate FHIR Observation resources. Each resource would represent a different vital sign, with the device message projected into four different normalized messages.
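The fan-out described above can be sketched in Python. This is a hypothetical illustration; the field names (`deviceId`, `vitals`, and so on) are invented for the example and aren't the actual MedTech normalized schema:

```python
# Hypothetical sketch of the normalization fan-out: one device message
# carrying four vital signs becomes four normalized messages. The field
# names here are invented and aren't the actual MedTech schema.
def normalize(device_message):
    shared = {
        "deviceId": device_message["deviceId"],
        "occurrenceTime": device_message["timestamp"],
    }
    return [
        {**shared, "type": vital, "value": value}
        for vital, value in device_message["vitals"].items()
    ]

message = {
    "deviceId": "device01",
    "timestamp": "2023-01-25T10:00:00Z",
    "vitals": {
        "bodyTemperature": 36.8,
        "pulseRate": 72,
        "bloodPressure": "120/80",
        "respirationRate": 16,
    },
}

normalized = normalize(message)  # one message in, four normalized messages out
```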
+
+## Group
+Group is the next *optional* stage, where normalized messages from the normalization stage are grouped using three different parameters:
+
+> [!div class="checklist"]
+> - Device identity
+> - Measurement type
+> - Time period
+
+`Device identity` and `measurement type` grouping is optional and enabled by the use of the [SampledData](https://www.hl7.org/fhir/datatypes.html#SampledData) measurement type. The SampledData measurement type provides a concise way to represent a time-based series of measurements from a device message into FHIR Observation resources. When you use the SampledData measurement type, measurements can be grouped into a single FHIR Observation resource that represents a 1-hour period or a 24-hour period.
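The grouping parameters can be illustrated with a small Python sketch that buckets normalized messages by device identity, measurement type, and a one-hour window. The field names are invented for the example; this isn't the MedTech service implementation:

```python
# Illustrative sketch of the Group stage: normalized messages are
# bucketed by device identity, measurement type, and a one-hour window.
# Field names are invented for the example.
from collections import defaultdict
from datetime import datetime

def group_key(msg):
    t = datetime.fromisoformat(msg["time"])
    window = t.replace(minute=0, second=0, microsecond=0)  # 1-hour period
    return (msg["deviceId"], msg["type"], window)

messages = [
    {"deviceId": "d1", "type": "heartRate", "time": "2023-01-25T10:05:00", "value": 70},
    {"deviceId": "d1", "type": "heartRate", "time": "2023-01-25T10:45:00", "value": 75},
    {"deviceId": "d1", "type": "heartRate", "time": "2023-01-25T11:10:00", "value": 72},
]

groups = defaultdict(list)
for m in messages:
    groups[group_key(m)].append(m["value"])
# Two groups: the 10:00 window holds two readings, the 11:00 window one.
```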
+
+## Transform
+Transform is the next stage, where normalized messages are processed using conforming, valid [FHIR destination mappings](how-to-configure-fhir-mappings.md) that you select or create. Normalized messages are transformed into FHIR Observation resources if a matching FHIR destination mapping has been authored.
+
+At this point, the [Device](https://www.hl7.org/fhir/device.html) resource, along with its associated [Patient](https://www.hl7.org/fhir/patient.html) resource, is also retrieved from the FHIR service using the device identifier present in the device message. These resources are added as a reference to the FHIR Observation resource being created.
+
+> [!NOTE]
+> All identity lookups are cached after they're resolved, to decrease load on the FHIR service. If you plan to reuse devices with multiple patients, we recommend that you create a virtual Device resource that is specific to the patient and send the virtual device identifier in the device message payload. The virtual device can be linked to the actual Device resource as a parent.
+
+If no Device resource for a given device identifier exists in the FHIR service, the outcome depends upon the value of [Resolution Type](deploy-new-config.md#configure-the-destination-tab) set at the time of the MedTech service deployment. When set to `Lookup`, the specific message is ignored, and the pipeline will continue to process other incoming device messages. If set to `Create`, the MedTech service will create minimal Device and Patient resources on the FHIR service.
+
+> [!NOTE]
+> The `Resolution Type` can also be adjusted after the MedTech service is deployed, if a different type is later desired.
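The `Lookup`/`Create` branching described above can be sketched in Python. This is an illustrative model of the documented behavior, not the actual MedTech service code; the function and data shapes are invented:

```python
# Illustrative model of the documented Resolution Type behavior; this
# isn't the MedTech service code, and the data shapes are invented.
def resolve_device(device_id, fhir_devices, resolution_type):
    if device_id in fhir_devices:
        return fhir_devices[device_id]
    if resolution_type == "Lookup":
        return None  # the message is ignored; the pipeline continues
    if resolution_type == "Create":
        # a minimal Device resource is created on the FHIR service
        fhir_devices[device_id] = {"resourceType": "Device", "id": device_id}
        return fhir_devices[device_id]
    raise ValueError(f"unknown resolution type: {resolution_type}")

devices = {}  # stands in for the FHIR service's Device resources
ignored = resolve_device("dev-1", devices, "Lookup")  # unknown device: ignored
created = resolve_device("dev-1", devices, "Create")  # unknown device: created
```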
+
+The MedTech service buffers the FHIR Observation resources created during the transformation stage and provides near real-time processing. However, it can potentially take up to five minutes for FHIR Observation resources to be persisted in the FHIR service.
+
+## Persist
+Persist is the final stage, where the FHIR Observation resources from the transform stage are persisted in the [FHIR service](../fhir/overview.md). If the FHIR Observation resource is new, it's created in the FHIR service. If the FHIR Observation resource already exists, it's updated in the FHIR service.
+
+The MedTech service uses its [system-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/overview#managed-identity-types) and [Azure role-based access control (Azure RBAC)](/azure/role-based-access-control/overview) for secure access to the FHIR service.
+
+## Next steps
+
+In this article, you learned about the MedTech service device message processing and persistence in the FHIR service.
+
+To learn how to configure the MedTech service device and FHIR destination mappings, see
+
+> [!div class="nextstepaction"]
+> [How to configure device mappings](how-to-configure-device-mappings.md)
+
+> [!div class="nextstepaction"]
+> [How to configure FHIR destination mappings](how-to-configure-fhir-mappings.md)
+
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/known-issues.md
Title: Azure Health Data Services known issues description: This article provides details about the known issues of Azure Health Data Services. -+
Refer to the table below to find details about resolution dates or possible workarounds.
## FHIR service
-|Issue | Date discovered | Status | Date resolved |
+|Issue | Date discovered | Workaround | Date resolved |
| :- | :- | :- | :- |
|Using [token type](https://www.hl7.org/fhir/search.html#token) fields of length more than 128 characters can result in undesired behavior on `create`, `search`, `update`, and `delete` operations. | August 2022 |No workaround | Not resolved |
-|The SQL provider will cause the `RawResource` column in the database to save incorrectly. This occurs in a small number of cases when a transient exception occurs that causes the provider to use its retry logic. |April 2022 |Resolved [#2571](https://github.com/microsoft/fhir-server/pull/2571)|May 2022 |
-| Queries not providing consistent result counts after appended with `_sort` operator. For more information, see [#2680](https://github.com/microsoft/fhir-server/pull/2680). | July 2022 | No workaround| July 2022| Not resolved |
+|The SQL provider will cause the `RawResource` column in the database to save incorrectly. This occurs in a small number of cases when a transient exception occurs that causes the provider to use its retry logic. |April 2022 |-|May 2022 Resolved [#2571](https://github.com/microsoft/fhir-server/pull/2571) |
+| Queries not providing consistent result counts after appended with `_sort` operator. For more information, see [#2680](https://github.com/microsoft/fhir-server/pull/2680). | July 2022 | No workaround|Not resolved |
## Next steps
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md
Title: Azure Health Data Services monthly releases description: This article provides details about the Azure Health Data Services monthly features and enhancements. -+ Previously updated : 08/09/2022 Last updated : 01/25/2023 # Release notes: Azure Health Data Services
->[!Note]
+>[!Note]
> Azure Health Data Services is Generally Available.
>
> For more information about Azure Health Data Services Service Level Agreements, see [SLA for Azure Health Data Services](https://azure.microsoft.com/support/legal/sla/health-data-services/v1_1/).

Azure Health Data Services is a set of managed API services based on open standards and frameworks for the healthcare industry. They enable you to build scalable and secure healthcare solutions by bringing protected health information (PHI) datasets together and connecting them end-to-end with tools for machine learning, analytics, and AI. This document provides details about the features and enhancements made to Azure Health Data Services, including the different service types (FHIR service, DICOM service, and MedTech service) that seamlessly work with one another.
+## December 2022
+
+### Azure Health Data Services
+
+**Azure Health Data Services generally available (GA) in new regions**
+
+Azure Health Data Services is now generally available (GA) in the France Central, North Central US, and Qatar Central regions.
+
+
+### DICOM service
++
+ **DICOM Events available in public preview**
+
+Azure Health Data Services [Events](events/events-overview.md) now include a public preview of [two new event types](events/events-message-structure.md#dicom-events-message-structure) for the DICOM service. These new event types enable applications that use Event Grid to use event-driven workflows when DICOM images are created or deleted.
++
+## November 2022
+### FHIR service
+
+**Fixed the error generated when a resource is updated using the if-match header and PATCH**
+
+The bug is now fixed, and the resource is updated if it matches the ETag header. For details, see [#2877](https://github.com/microsoft/fhir-server/issues/2877).
++
+### Toolkit and Samples Open Source
++
+**Azure Health Data Services Toolkit is released**
+
+The [Azure Health Data Services Toolkit](https://github.com/microsoft/azure-health-data-services-toolkit), which was previously in a pre-release state, is now in **Public Preview**. The toolkit is an open-source project that allows customers to more easily customize and extend the functionality of their Azure Health Data Services implementations. The NuGet packages of the toolkit are available for download from the NuGet gallery, and you can find links to them in the repo documentation.
+
+## October 2022
+### MedTech service
++
+ **Added Deploy to Azure button**
+
Customers can now deploy the MedTech service fully, including Event Hubs, an Azure Health Data Services (AHDS) workspace, the FHIR service, the MedTech service, and managed identity roles, by selecting the **Deploy to Azure** button. For details, see [Deploy the MedTech service using an Azure Resource Manager template](./iot/deploy-new-arm.md).
+++
+**Added the Dropped Event Metrics**
+
+Customers can now determine if their mappings are working as intended, as they can now see dropped events as a metric to ensure that data is flowing through accurately.
+ ## September 2022
For information about the features and bug fixes in Azure API for FHIR, see
>[!div class="nextstepaction"]
>[Release notes: Azure API for FHIR](./azure-api-for-fhir/release-notes.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
iot-central Tutorial Smart Meter App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/energy/tutorial-smart-meter-app.md
Title: Tutorial - Azure IoT smart meter monitoring | Microsoft Docs
-description: This tutorial shows you how to deploy and use the smart meter monitoring application template for IoT Central.
+ Title: Tutorial - Azure IoT smart-meter monitoring
+description: This tutorial shows you how to deploy and use an application template for monitoring smart meters in Azure IoT Central.
Last updated 06/14/2022
-# Tutorial: Deploy and walk through the smart meter monitoring application template
+# Tutorial: Deploy and walk through an application template for monitoring smart meters
-The smart meters not only enable automated billing, but also advanced metering use cases such as real-time readings and bi-directional communication. The _smart meter monitoring_ application template enables utilities and partners to monitor smart meters status and data, define alarms and notifications. It provides sample commands, such as disconnect meter and update software. The meter data can be set up to egress to other business applications and to develop custom solutions.
+Smart meters enable not only automated billing, but also advanced metering use cases like real-time readings and bidirectional communication.
-App's key functionalities:
+An application template enables utilities and partners to monitor the status and data of smart meters, along with defining alarms and notifications. The template provides sample commands, such as disconnecting a meter and updating software. You can set up the meter data to egress to other business applications, and to develop custom solutions.
-- Meter sample device model
+The application's key functionalities include:
+
+- Sample device model for meters
- Meter info and live status
-- Meter readings such as energy, power, and voltages
+- Meter readings such as energy, power, and voltage
- Meter command samples
- Built-in visualization and dashboards
- Extensibility for custom solution development
+In this tutorial, you learn how to:
+
+- Create an application for monitoring smart meters.
+- Walk through the application.
+- Clean up resources.
+
+## Application architecture
-This architecture consists of the following components. Some solutions may not require every component listed here.
+
+The architecture of the application consists of the following components. Some solutions might not require every component listed here.
### Smart meters and connectivity
-A smart meter is one of the most important devices among all the energy assets. It records and communicates energy consumption data to utilities for monitoring and other use cases, such as billing and demand response. Typically, a meter uses a gateway or bridge to connect to an IoT Central application. To learn more about bridges, see [Use the IoT Central device bridge to connect other IoT clouds to IoT Central](../core/howto-build-iotc-device-bridge.md).
+A smart meter is one of the most important devices among all the energy assets. It records and communicates energy consumption data to utilities for monitoring and other use cases, such as billing and demand response.
-### IoT Central platform
+Typically, a meter uses a gateway or bridge to connect to an Azure IoT Central application. To learn more about bridges, see [Use the Azure IoT Central device bridge to connect other IoT clouds to Azure IoT Central](../core/howto-build-iotc-device-bridge.md).
-When you build an IoT solution, Azure IoT Central simplifies the build process and helps to reduce the burden and costs of IoT management, operations, and development. With IoT Central, you can easily connect, monitor, and manage your Internet of Things (IoT) assets at scale. After you connect your smart meters to IoT Central, the application template uses built-in features such as device models, commands, and dashboards. The application template also uses the IoT Central storage for warm path scenarios such as near real-time meter data monitoring, analytics, rules, and visualization.
+### Azure IoT Central platform
-### Extensibility options to build with IoT Central
+When you build an Internet of Things (IoT) solution, Azure IoT Central simplifies the build process and helps reduce the burden and costs of IoT management, operations, and development. With Azure IoT Central, you can easily connect, monitor, and manage your IoT assets at scale.
-The IoT Central platform provides two extensibility options: Continuous Data Export (CDE) and APIs. The customers and partners can choose between these options based to customize their solutions for specific needs. For example, one of our partners configured CDE with Azure Data Lake Storage (ADLS). They're using ADLS for long-term data retention and other cold path storage scenarios, such batch processing, auditing and reporting purposes.
+After you connect your smart meters to Azure IoT Central, the application template uses built-in features such as device models, commands, and dashboards. The application template also uses the Azure IoT Central storage for warm path scenarios such as near real-time meter data monitoring, analytics, rules, and visualization.
-In this tutorial, you learn how to:
+### Extensibility options to build with Azure IoT Central
-- Create the smart meter app
-- Application walk-through
-- Clean up resources
+The Azure IoT Central platform provides two extensibility options: Continuous Data Export and APIs. Customers and partners can choose between these options to customize their solutions for their specific needs.
+
+For example, a partner might configure Continuous Data Export with Azure Data Lake Storage. That partner can then use Data Lake Storage for long-term data retention and other scenarios for cold path storage, such as batch processing, auditing, and reporting.
## Prerequisites
-An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+To complete this tutorial, you need an active Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Create an application for monitoring smart meters
-## Create a smart meter monitoring application
+1. Go to the [Azure IoT Central build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account.
-1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Energy** tab:
+1. Select **Build** from the left menu, and then select the **Energy** tab.
- :::image type="content" source="media/tutorial-iot-central-smart-meter/smart-meter-build.png" alt-text="Screenshot showing the Azure IoT Central build site with the energy app templates.":::
+ :::image type="content" source="media/tutorial-iot-central-smart-meter/smart-meter-build.png" alt-text="Screenshot that shows the Azure IoT Central build site with energy app templates.":::
-1. Select **Create app** under **Smart meter monitoring**.
+1. Under **Smart meter monitoring**, select **Create app**.
-To learn more, see [Create an IoT Central application](../core/howto-create-iot-central-application.md).
+To learn more, see [Create an Azure IoT Central application](../core/howto-create-iot-central-application.md).
## Walk through the application
-The following sections walk you through the key features of the application:
+The following sections walk you through the key features of the application.
### Dashboard
-After you deploy the application template, it comes with sample smart meter device, device model, and a dashboard.
+After you deploy the application template, it comes with a sample smart meter, a device model, and a dashboard.
-Adatum is a fictitious energy company, who monitors and manages smart meters. On the smart meter monitoring dashboard, you see smart meter properties, data, and sample commands. It enables operators and support teams to proactively perform the following activities before it turns into support incidents:
+Adatum is a fictitious energy company that monitors and manages smart meters. The dashboard for monitoring smart meters shows properties, data, and sample commands for meters. The dashboard enables operators and support teams to proactively perform the following activities before they become support incidents:
* Review the latest meter info and its installed [location](../core/howto-use-location-data.md) on the map.
* Proactively check the meter network and connection status.
-* Monitor Min and Max voltage readings for network health.
+* Monitor minimum and maximum voltage readings for network health.
* Review the energy, power, and voltage trends to catch any anomalous patterns. * Track the total energy consumption for planning and billing purposes.
-* Command and control operations such as reconnect meter and update firmware version. In the template, the command buttons show the possible functionalities and don't send real commands.
+* Perform command and control operations, such as reconnecting a meter and updating a firmware version. In the template, the command buttons show the possible functionalities and don't send real commands.
### Devices
-The app comes with a sample smart meter device. You can see the device details by clicking on the **Devices** tab.
+The application comes with a sample smart-meter device. You can see available devices by selecting **Devices** on the left menu.
-Click on the sample device **SM0123456789** link to see the device details. You can update the writable properties of the device on the **Update Properties** page, and visualize the updated values on the dashboard.
+Select the link for sample device **SM0123456789** to see the device details. You can update the writable properties of the device on the **Update Properties** page, and then visualize the updated values on the dashboard.
-### Device Template
+### Device template
-Click on the **Device templates** tab to see the smart meter device model. The model has pre-define interface for Data, Property, Commands, and Views.
+Select **Device templates** on the left menu to see the model of the smart meter. The model has a predefined interface for data, properties, commands, and views.
## Customize your application
Click on the **Device templates** tab to see the smart meter device model. The m
## Next steps
-> [Tutorial: Deploy and walk through a Solar panel application template](tutorial-solar-panel-app.md)
+> [Tutorial: Deploy and walk through a solar panel application template](tutorial-solar-panel-app.md)
key-vault Tutorial Import Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/tutorial-import-certificate.md
In this case, we will create a certificate called **ExampleCertificate**, or imp
:::image type="content" source="../media/certificates/tutorial-import-cert/cert-import.png" alt-text="Importing a certificate through the Azure portal":::
+When importing a .pem file, check that it uses the following format:
+
+-----BEGIN CERTIFICATE-----<br>
+MIID2TCCAsGg...<br>
+-----END CERTIFICATE-----<br>
+-----BEGIN PRIVATE KEY-----<br>
+MIIEvQIBADAN...<br>
+-----END PRIVATE KEY-----<br>
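A quick way to sanity-check the file before importing is to confirm that both blocks are present. This Python sketch assumes the standard five-hyphen PEM markers; the helper name is invented for the example:

```python
# Sanity check before import: confirm the .pem contains both a certificate
# and a private key block. Assumes standard five-hyphen PEM markers; the
# helper name is invented for this example.
def looks_importable(pem_text: str) -> bool:
    required = [
        "-----BEGIN CERTIFICATE-----",
        "-----END CERTIFICATE-----",
        "-----BEGIN PRIVATE KEY-----",
        "-----END PRIVATE KEY-----",
    ]
    return all(marker in pem_text for marker in required)

pem = """-----BEGIN CERTIFICATE-----
MIID2TCCAsGg...
-----END CERTIFICATE-----
-----BEGIN PRIVATE KEY-----
MIIEvQIBADAN...
-----END PRIVATE KEY-----"""
```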
+ When importing a certificate, Azure Key Vault automatically populates certificate parameters such as the validity period, issuer name, and activation date. After you receive the message that the certificate has been successfully imported, you can select it in the list to view its properties.
key-vault Integrate Databricks Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/integrate-databricks-blob-storage.md
az storage account create --name contosoblobstorage5 --resource-group contosoRes
Before you can create a container to upload the blob to, you'll need to assign the [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor) role to yourself. For this example, the role will be assigned to the storage account you've made earlier. ```azurecli
-az role assignment create --role "Storage Blob Data Contributor" --assignee t-trtr@microsoft.com --scope "/subscriptions/885e24c8-7a36-4217-b8c9-eed31e110504/resourceGroups/contosoResourceGroup5/providers/Microsoft.Storage/storageAccounts/contosoblobstorage5
+az role assignment create --role "Storage Blob Data Contributor" --assignee t-trtr@microsoft.com --scope "/subscriptions/aaaaaaaa-bbbb-bbbb-cccc-dddddddddddd/resourceGroups/contosoResourceGroup5/providers/Microsoft.Storage/storageAccounts/contosoblobstorage5"
```

Now that you've assigned the role to the storage account, you can create a container for your blob.
key-vault Tutorial Javascript Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-javascript-virtual-machine.md
On the virtual machine, install the two npm libraries we'll be using in our Java
return (await client.getSecret(secretName)).value; }
- getSecret("mySecret").then(secretValue => {
- console.log(`The value of secret 'mySecret' in '${keyVaultName}' is: '${secretValue}'`);
+ getSecret(secretName).then(secretValue => {
+ console.log(`The value of secret '${secretName}' in '${keyVaultName}' is: '${secretValue}'`);
}).catch(err => { console.log(err); })
az group delete -g myResourceGroup
## Next steps
-[Azure Key Vault REST API](/rest/api/keyvault/)
+[Azure Key Vault REST API](/rest/api/keyvault/)
key-vault How To Configure Key Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/how-to-configure-key-rotation.md
Automated cryptographic key rotation in [Key Vault](../general/overview.md) allo
Our recommendation is to rotate encryption keys at least every two years to meet cryptographic best practices.
-For more information about objects in Key Vault are versioned, see [Key Vault objects, identifiers, and versioning](../general/about-keys-secrets-certificates.md#objects-identifiers-and-versioning).
+For more information about how objects in Key Vault are versioned, see [Key Vault objects, identifiers, and versioning](../general/about-keys-secrets-certificates.md#objects-identifiers-and-versioning).
## Integration with Azure services This feature enables end-to-end zero-touch rotation for encryption at rest for Azure services with customer-managed key (CMK) stored in Azure Key Vault. Please refer to specific Azure service documentation to see if the service covers end-to-end rotation.
key-vault Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/access-control.md
tags: azure-resource-manager
Previously updated : 01/04/2023 Last updated : 01/26/2023 # Customer intent: As the admin for managed HSMs, I want to set access policies and configure the Managed HSM, so that I can ensure it's secure and auditors can properly monitor all activities for these managed HSMs.
When a managed HSM is created, the requestor also provides a list of data plane
Permission model for both planes uses the same syntax, but they're enforced at different levels and role assignments use different scopes. Management plane Azure RBAC is enforced by Azure Resource Manager while data plane Managed HSM local RBAC is enforced by managed HSM itself. > [!IMPORTANT]
-> Granting a security principal management plane access to an managed HSM does not grant them any access to data plane to access keys or data plane role assignments Managed HSM local RBAC). This isolation is by design to prevent inadvertent expansion of privileges affecting access to keys stored in Managed HSM.
+> Granting a security principal management plane access to a managed HSM does not grant them any access to the data plane to access keys or data plane role assignments (Managed HSM local RBAC). This isolation is by design to prevent inadvertent expansion of privileges affecting access to keys stored in Managed HSM. The one exception is that members of the Azure Active Directory Global Administrator role are implicitly part of the Managed HSM Administrator role for recovery purposes, such as when there are no longer any valid Managed HSM administrator accounts. Please follow [Azure Active Directory best practices for securing the Global Administrator role](../../active-directory/roles/best-practices.md#5-limit-the-number-of-global-administrators-to-less-than-5).
For example, a subscription administrator (since they have "Contributor" permission to all resources in the subscription) can delete a managed HSM in their subscription. But if they don't have data plane access specifically granted through Managed HSM local RBAC, they can't gain access to keys or manage role assignments in the managed HSM to grant themselves or others access to the data plane.
You grant a security principal access to execute specific key operations by assi
- For a getting-started tutorial for an administrator, see [What is Managed HSM?](overview.md) - For a role management tutorial, see [Managed HSM local RBAC](role-management.md)-- For more information about usage logging for Managed HSM logging, see [Managed HSM logging](logging.md)
+- For more information about usage logging for Managed HSM logging, see [Managed HSM logging](logging.md)
lighthouse Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/samples/index.md
Title: Azure Lighthouse samples and templates description: These samples and Azure Resource Manager templates help you onboard customers and support Azure Lighthouse scenarios. Previously updated : 12/21/2022 Last updated : 01/26/2023 # Azure Lighthouse samples
machine-learning Concept Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints.md
In this article, you learn about:
After you train a machine learning model, you need to deploy the model so that others can use it to do inferencing. In Azure Machine Learning, you can use **endpoints** and **deployments** to do so.
-An **endpoint** is an HTTPS endpoint that clients can call to receive the inferencing (scoring) output of a trained model. It provides:
+An **endpoint**, in this context, is an HTTPS path that provides an interface for clients to send requests (input data) and receive the inferencing (scoring) output of a trained model. An endpoint provides:
- Authentication using "key & token" based auth - SSL termination - A stable scoring URI (endpoint-name.region.inference.ml.azure.com)
A **deployment** is a set of resources required for hosting the model that does
A single endpoint can contain multiple deployments. Endpoints and deployments are independent Azure Resource Manager resources that appear in the Azure portal.
-Azure Machine Learning uses the concept of endpoints and deployments to implement different types of endpoints: [online endpoints](#what-are-online-endpoints) and [batch endpoints](#what-are-batch-endpoints).
+Azure Machine Learning allows you to implement both [online endpoints](#what-are-online-endpoints) and [batch endpoints](#what-are-batch-endpoints).
### Multiple developer interfaces
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-management-and-deployment.md
-+ Previously updated : 05/11/2022 Last updated : 01/04/2023 # MLOps: Model management, deployment, and monitoring with Azure Machine Learning
Last updated 05/11/2022
> * [v1](./v1/concept-model-management-and-deployment.md) > * [v2 (current version)](concept-model-management-and-deployment.md)
-In this article, learn about how do Machine Learning Operations (MLOps) in Azure Machine Learning to manage the lifecycle of your models. MLOps improves the quality and consistency of your machine learning solutions.
+In this article, learn how to apply Machine Learning Operations (MLOps) practices in Azure Machine Learning for the purpose of managing the lifecycle of your models. Applying MLOps practices can improve the quality and consistency of your machine learning solutions.
## What is MLOps?
machine-learning Concept Train Machine Learning Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-train-machine-learning-model.md
Machine learning pipelines can use the previously mentioned training methods. Pi
The Azure training lifecycle consists of:
-1. Zipping the files in your project folder, ignoring those specified in _.amlignore_ or _.gitignore_
+1. Zipping the files in your project folder and uploading them to the cloud.
+
+ > [!TIP]
+ > [!INCLUDE [amlinclude-info](../../includes/machine-learning-amlignore-gitignore.md)]
+
1. Scaling up your compute cluster
1. Building or downloading the dockerfile to the compute node
1. The system calculates a hash of:
machine-learning How To Access Resources From Endpoints Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-resources-from-endpoints-managed-identities.md
description: Securely access Azure resources for your machine learning model dep
--++ Last updated 04/07/2022
machine-learning How To Autoscale Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-autoscale-endpoints.md
description: Learn to scale up online endpoints. Get more CPU, memory, disk spac
--++
machine-learning How To Debug Managed Online Endpoints Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-managed-online-endpoints-visual-studio-code.md
description: Learn how to use Visual Studio Code to test and debug online endpoi
--++ Last updated 11/03/2021
machine-learning How To Deploy Automl Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-automl-endpoint.md
description: Learn to deploy your AutoML model as a web service that's automatic
--++ Last updated 05/11/2022
machine-learning How To Deploy Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-custom-container.md
description: Learn how to use a custom container to use open-source servers in A
--++ Last updated 10/13/2022
machine-learning How To Deploy With Triton https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-with-triton.md
Last updated 06/10/2022 --++ ms.devlang: azurecli
machine-learning How To Train Keras https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-keras.md
It's now time to submit the job to run in AzureML. This time, you'll use `create
Once completed, the job will register a model in your workspace (as a result of training) and output a link for viewing the job in AzureML studio. > [!WARNING]
-> Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](how-to-save-write-experiment-files.md#storage-limits-of-experiment-snapshots) or don't include it in the source directory.
+> Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](concept-train-machine-learning-model.md#understand-what-happens-when-you-submit-a-training-job) or don't include it in the source directory.
### What happens during job execution As the job is executed, it goes through the following stages:
machine-learning How To Train Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-pytorch.md
It's now time to submit the job to run in AzureML. This time, you'll use `create
Once completed, the job will register a model in your workspace (as a result of training) and output a link for viewing the job in AzureML studio. > [!WARNING]
-> Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](how-to-save-write-experiment-files.md#storage-limits-of-experiment-snapshots) or don't include it in the source directory.
+> Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](concept-train-machine-learning-model.md#understand-what-happens-when-you-submit-a-training-job) or don't include it in the source directory.
### What happens during job execution As the job is executed, it goes through the following stages:
machine-learning How To Train Scikit Learn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-scikit-learn.md
It's now time to submit the job to run in AzureML. This time you'll use `create_
Once completed, the job will register a model in your workspace (as a result of training) and output a link for viewing the job in AzureML studio. > [!WARNING]
-> Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](how-to-save-write-experiment-files.md#storage-limits-of-experiment-snapshots) or don't include it in the source directory.
+> Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](concept-train-machine-learning-model.md#understand-what-happens-when-you-submit-a-training-job) or don't include it in the source directory.
### What happens during job execution As the job is executed, it goes through the following stages:
machine-learning How To Train Tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-tensorflow.md
It's now time to submit the job to run in AzureML. This time, you'll use `create
Once completed, the job will register a model in your workspace (as a result of training) and output a link for viewing the job in AzureML studio. > [!WARNING]
-> Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](how-to-save-write-experiment-files.md#storage-limits-of-experiment-snapshots) or don't include it in the source directory.
+> Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](concept-train-machine-learning-model.md#understand-what-happens-when-you-submit-a-training-job) or don't include it in the source directory.
### What happens during job execution As the job is executed, it goes through the following stages:
machine-learning How To Train With Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-with-custom-image.md
run.wait_for_completion(show_output=True)
``` > [!WARNING]
-> Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use an [.ignore file](how-to-save-write-experiment-files.md#storage-limits-of-experiment-snapshots) or don't include it in the source directory. Instead, access your data by using a [datastore](/python/api/azureml-core/azureml.data).
+> Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use an [.ignore file](concept-train-machine-learning-model.md#understand-what-happens-when-you-submit-a-training-job) or don't include it in the source directory. Instead, access your data by using a [datastore](/python/api/azureml-core/azureml.data).
## Next steps In this article, you trained a model by using a custom Docker image. See these other articles to learn more about Azure Machine Learning:
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
description: Learn how to troubleshoot some common deployment and scoring errors
--++ Last updated 11/04/2022
machine-learning Migrate To V2 Deploy Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-deploy-endpoints.md
--++ Last updated 09/16/2022
machine-learning Migrate To V2 Managed Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-managed-online-endpoints.md
--++ Last updated 09/28/2022-+
machine-learning Reference Yaml Deployment Managed Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-deployment-managed-online.md
- Previously updated : 04/26/2022- Last updated : 01/24/2023++ # CLI (v2) managed online deployment YAML schema
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| `initial_delay` | integer | The number of seconds after the container has started before the probe is initiated. Minimum value is `1`. | `10` |
| `period` | integer | How often (in seconds) to perform the probe. | `10` |
| `timeout` | integer | The number of seconds after which the probe times out. Minimum value is `1`. | `2` |
-| `success_threshold` | integer | The minimum consecutive successes for the probe to be considered successful after having failed. Minimum value is `1`. | `1` |
+| `success_threshold` | integer | The minimum consecutive successes for the probe to be considered successful after having failed. Minimum value is `1` for the readiness probe. The value is fixed at `1` for the liveness probe. | `1` |
| `failure_threshold` | integer | When a probe fails, the system will try `failure_threshold` times before giving up. Giving up in the case of a liveness probe means the container will be restarted. In the case of a readiness probe the container will be marked Unready. Minimum value is `1`. | `30` |

## Remarks
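A hedged sketch of how the probe settings above might appear in a managed online deployment YAML (the values shown are illustrative, not recommendations):

```yaml
liveness_probe:
  initial_delay: 10
  period: 10
  timeout: 2
  success_threshold: 1   # fixed at 1 for the liveness probe
  failure_threshold: 30
readiness_probe:
  initial_delay: 10
  period: 10
  timeout: 2
  success_threshold: 1
  failure_threshold: 30
```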
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-model-management-and-deployment.md
---+++ Previously updated : 08/18/2022 Last updated : 01/04/2023 # MLOps: Model management, deployment, lineage, and monitoring with Azure Machine Learning v1
Last updated 08/18/2022
> * [v1](concept-model-management-and-deployment.md) > * [v2 (current version)](../concept-model-management-and-deployment.md)
-In this article, learn about how do Machine Learning Operations (MLOps) in Azure Machine Learning to manage the lifecycle of your models. MLOps improves the quality and consistency of your machine learning solutions.
+In this article, learn how to apply Machine Learning Operations (MLOps) practices in Azure Machine Learning for the purpose of managing the lifecycle of your models. Applying MLOps practices can improve the quality and consistency of your machine learning solutions.
## What is MLOps?
machine-learning How To Save Write Experiment Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-save-write-experiment-files.md
+
+ Title: Where to save & write experiment files
+
+description: Learn where to save your input and output files to prevent storage limitation errors and experiment latency.
+++++++ Last updated : 01/25/2023++
+# Where to save and write files for Azure Machine Learning experiments
+
+In this article, you learn where to save input files, and where to write output files from your experiments to prevent storage limit errors and experiment latency.
+
+When you launch training jobs on a [compute target](../concept-compute-target.md), they're isolated from outside environments. The purpose of this design is to ensure reproducibility and portability of the experiment. If you run the same script twice, on the same or another compute target, you receive the same results. With this design, you can treat compute targets as stateless computation resources that have no affinity to jobs after they finish.
+
+## Where to save input files
+
+Before you can initiate an experiment on a compute target or your local machine, you must ensure that the necessary files are available to that compute target, such as dependency files and data files your code needs to run.
+
+Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](how-to-save-write-experiment-files.md#storage-limits-of-experiment-snapshots) or don't include it in the source directory. Instead, access your data using a [datastore](/python/api/azureml-core/azureml.data).
+
+The storage limit for experiment snapshots is 300 MB and/or 2000 files.
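The files in the source directory count toward this limit. One way to check a directory before you submit a job is to count its files and bytes up front; the helper below is a hypothetical illustration, not part of the Azure ML SDK:

```python
# Hypothetical helper: estimate snapshot size and file count for a source directory,
# so you can compare against the 300 MB / 2000 file snapshot limits.
import os

SNAPSHOT_MAX_BYTES = 300 * 1024 * 1024   # 300 MB limit
SNAPSHOT_MAX_FILES = 2000

def snapshot_stats(directory):
    """Return (total_bytes, total_files) under the given directory."""
    total_bytes, total_files = 0, 0
    for root, _dirs, files in os.walk(directory):
        for name in files:
            total_files += 1
            total_bytes += os.path.getsize(os.path.join(root, name))
    return total_bytes, total_files

size, count = snapshot_stats(".")
print(f"{count} files, {size / 1e6:.1f} MB "
      f"(limits: {SNAPSHOT_MAX_FILES} files, {SNAPSHOT_MAX_BYTES / 1e6:.0f} MB)")
```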
+
+For this reason, we recommend:
+
+* **Storing your files in an Azure Machine Learning [dataset](/python/api/azureml-core/azureml.data).** This prevents experiment latency issues, and has the advantages of accessing data from a remote compute target, which means authentication and mounting are managed by Azure Machine Learning. Learn more about how to specify a dataset as your input data source in your training script with [Train with datasets](how-to-train-with-datasets.md).
+
+* **If you only need a couple data files and dependency scripts and can't use a datastore,** place the files in the same folder directory as your training script. Specify this folder as your `source_directory` directly in your training script, or in the code that calls your training script.
+
+<a name="limits"></a>
+
+### Storage limits of experiment snapshots
+
+For experiments, Azure Machine Learning automatically makes an experiment snapshot of your code based on the directory you specify when you configure the job. This snapshot has a total limit of 300 MB and/or 2000 files. If you exceed this limit, you'll see the following error:
+
+```Python
+While attempting to take snapshot of .
+Your total snapshot size exceeds the limit of 300.0 MB
+```
+
+To resolve this error, store your experiment files on a datastore. If you can't use a datastore, the following table offers possible alternate solutions.
+
+Experiment&nbsp;description|Storage limit solution
+|---|---|
+Less than 2000 files & can't use a datastore| Override snapshot size limit with <br> `azureml._restclient.snapshots_client.SNAPSHOT_MAX_SIZE_BYTES = 'insert_desired_size'`<br> This may take several minutes depending on the number and size of files.
+Must use specific script directory| [!INCLUDE [amlinclude-info](../../../includes/machine-learning-amlignore-gitignore.md)]
+Pipeline|Use a different subdirectory for each step
+Jupyter notebooks| Create a `.amlignore` file or move your notebook into a new, empty, subdirectory and run your code again.
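The `.amlignore` file referenced above uses the same syntax and patterns as `.gitignore`. A minimal sketch, with illustrative entries, looks like this:

```text
# .amlignore -- placed at the root of the snapshot source directory
data/
*.csv
.ipynb_checkpoints/
logs/
```

Files matching these patterns are excluded from the experiment snapshot, keeping it under the size and file-count limits.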
+
+## Where to write files
+
+Due to the isolation of training experiments, the changes to files that happen during jobs are not necessarily persisted outside of your environment. If your script modifies the files local to compute, the changes are not persisted for your next experiment job, and they're not propagated back to the client machine automatically. Therefore, the changes made during the first experiment job don't and shouldn't affect those in the second.
+
+When writing changes, we recommend writing files to storage via an Azure Machine Learning dataset with an [OutputFileDatasetConfig object](/python/api/azureml-core/azureml.data.output_dataset_config.outputfiledatasetconfig). See [how to create an OutputFileDatasetConfig](how-to-train-with-datasets.md#where-to-write-training-output).
+
+Otherwise, write files to the `./outputs` and/or `./logs` folder.
+
+>[!Important]
+> Two folders, *outputs* and *logs*, receive special treatment by Azure Machine Learning. During training, when you write files to the `./outputs` and `./logs` folders, the files will automatically upload to your job history, so that you have access to them once your job is finished.
+
+* **For output such as status messages or scoring results,** write files to the `./outputs` folder, so they are persisted as artifacts in job history. Be mindful of the number and size of files written to this folder, as latency may occur when the contents are uploaded to job history. If latency is a concern, writing files to a datastore is recommended.
+
+* **To save written files as logs in job history,** write files to the `./logs` folder. The logs are uploaded in real time, so this method is suitable for streaming live updates from a remote job.
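As a minimal sketch of the first pattern (the file name and metric values are illustrative), a training script can persist small results like this:

```python
# Write a small results file to ./outputs so it's uploaded to job history.
# The metric values and file name here are illustrative placeholders.
import json
import os

os.makedirs("outputs", exist_ok=True)
metrics = {"accuracy": 0.93, "loss": 0.21}
with open(os.path.join("outputs", "metrics.json"), "w") as f:
    json.dump(metrics, f)
```

Keep such files small; large artifacts are better written to a datastore to avoid upload latency at the end of the job.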
+
+## Next steps
+
+* Learn more about [accessing data from storage](how-to-access-data.md).
+
+* Learn more about [Create compute targets for model training and deployment](../how-to-create-attach-compute-studio.md)
machine-learning How To Train Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-pytorch.md
src = ScriptRunConfig(source_directory=project_folder,
``` > [!WARNING]
-> Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](../how-to-save-write-experiment-files.md#storage-limits-of-experiment-snapshots) or don't include it in the source directory . Instead, access your data using an Azure ML [dataset](how-to-train-with-datasets.md).
+> Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](how-to-save-write-experiment-files.md#storage-limits-of-experiment-snapshots) or don't include it in the source directory. Instead, access your data using an Azure ML [dataset](how-to-train-with-datasets.md).
For more information on configuring jobs with ScriptRunConfig, see [Configure and submit training runs](how-to-set-up-training-targets.md).
machine-learning How To Train Scikit Learn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-scikit-learn.md
run.wait_for_completion(show_output=True)
``` > [!WARNING]
-> Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](../how-to-save-write-experiment-files.md#storage-limits-of-experiment-snapshots) or don't include it in the source directory . Instead, access your data using an Azure ML [dataset](how-to-train-with-datasets.md).
+> Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](how-to-save-write-experiment-files.md#storage-limits-of-experiment-snapshots) or don't include it in the source directory. Instead, access your data using an Azure ML [dataset](how-to-train-with-datasets.md).
### What happens during run execution As the run is executed, it goes through the following stages:
machine-learning How To Train Tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-tensorflow.md
src = ScriptRunConfig(source_directory=script_folder,
``` > [!WARNING]
-> Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](../how-to-save-write-experiment-files.md#storage-limits-of-experiment-snapshots) or don't include it in the source directory . Instead, access your data using an Azure ML [dataset](how-to-train-with-datasets.md).
+> Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](how-to-save-write-experiment-files.md#storage-limits-of-experiment-snapshots) or don't include it in the source directory. Instead, access your data using an Azure ML [dataset](how-to-train-with-datasets.md).
For more information on configuring jobs with ScriptRunConfig, see [Configure and submit training runs](how-to-set-up-training-targets.md).
marketplace Azure Vm Plan Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-vm-plan-manage.md
Last updated 02/11/2022
-# Update core sizes for an Azure virtual machine offer
+# Update vCPU sizes for an Azure virtual machine offer
-Complete these steps when you are notified that new core sizes are now supported.
+Complete these steps when you are notified that new vCPU sizes are now supported.
> [!NOTE]
-> These steps apply regardless of the selected price entry option (free, flate rate, per core, per core size, per market and core size) and must be completed within the timeframe specified in the notification from Microsoft. Otherwise, weΓÇÖll publish the new core sizes at the price that we calculate for you.
+> These steps apply regardless of the selected price entry option (free, flat rate, per vCPU, per vCPU size, per market and vCPU size) and must be completed within the period specified in the notification from Microsoft. Otherwise, we'll publish the new vCPU sizes at the price that we calculate for you.
-1. Sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2166002).
+1. Sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2166002).
1. On the Home page, select the **Marketplace offers** tile. [![Screenshot shows the marketplace offers tile on the Partner Center home page.](./media/workspaces/partner-center-home.png)](./media/workspaces/partner-center-home.png#lightbox)
Complete these steps when you are notified that new core sizes are now supported
1. On the **Offer overview** page, under **Plan overview**, select a plan within your offer. 1. In the left-nav menu, select **Pricing and availability**. 1. Do one of the following:
- - If either the _Per core size_ or _Per market and core size_ price entry options are used, under **Pricing**, verify the price and make any necessary adjustments for the new core sizes that have been added.
- - If your price entry option is set to _Free_, _Flat rate_, or _Per core_, go to step 7.
-1. Select **Save draft** and then **Review and publish**. After the offer is republished, the new core sizes will be available to your customers at the prices that you have set.
   - If either the _Per vCPU size_ or _Per market and vCPU size_ price entry options are used, under **Pricing**, verify the price and make any necessary adjustments for the new vCPU sizes that have been added.
   - If your price entry option is set to _Free_, _Flat rate_, or _Per vCPU_, go to step 7.
+
+1. Select **Save draft** and then **Review and publish**. After the offer is republished, the new vCPU sizes will be available to your customers at the prices that you have set.
+
+
marketplace Azure Vm Plan Pricing And Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-vm-plan-pricing-and-availability.md
You can design each plan to be visible to everyone or only to a preselected priv
> [!IMPORTANT] > Private plans are still visible to everyone in the CLI, but only deployable to customers configured in the private audience.
+> [!NOTE]
+> Private plans can only be deployed by the customers configured in the private audience. We recommend creating a private offer instead of using private plans. However, if you decide to create a private plan, keep in mind that the Plan ID, URN, and Offer Name are publicly visible via the Azure CLI, so be sure to name your private plans appropriately with this in mind.
+ Private offers aren't supported with Azure subscriptions established through a reseller of the Cloud Solution Provider program (CSP). ## Hide plan
mysql Tutorial Wordpress App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-wordpress-app-service.md
- Title: 'Tutorial: Create a WordPress site on Azure App Service integrating with Azure Database for MySQL - Flexible Server'
-description: Create your first and fully managed WordPress site on Azure App Service and integrate with Azure Database for MySQL - Flexible Server in minutes.
----- Previously updated : 8/11/2022---
-# Tutorial: Create a WordPress site on Azure App Service integrating with Azure Database for MySQL - Flexible Server
-
-[WordPress](https://www.wordpress.org) is an open source content management system (CMS) that can be used to create websites, blogs, and other applications. Over 40% of the web uses WordPress from blogs to major news websites.
-
-In this tutorial, you'll learn how to create and deploy your first [WordPress](https://www.wordpress.org) site to [Azure App Service on Linux](../../app-service/overview.md#app-service-on-linux) integrating with [Azure Database for MySQL - Flexible Server]() in the backend. You'll use the [WordPress on Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/WordPress.WordPress?tab=Overview) to set up your site along with the database integration within minutes.
-
-## Prerequisites
--- An Azure subscription [!INCLUDE [flexible-server-free-trial-note](../includes/flexible-server-free-trial-note.md)]--
-## Create WordPress site using Azure portal
-
-1. Browse to [https://portal.azure.com/#create/WordPress.WordPress](https://portal.azure.com/#create/WordPress.WordPress), or search for "WordPress" in the Azure Marketplace.
-
- :::image type="content" source="./media/tutorial-wordpress-app-service/01-portal-create-wordpress-on-app-service.png?text=WordPress from Azure Marketplace" alt-text="Screenshot of Create a WordPress site.":::
-
-1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and then choose to **Create new** resource group. Type `myResourceGroup` for the name and select a **Region** you want to serve your app from.
-
- :::image type="content" source="./media/tutorial-wordpress-app-service/04-wordpress-basics-project-details.png?text=Azure portal WordPress Project Details" alt-text="Screenshot of WordPress project details.":::
-
-1. Under **Instance details**, type a globally unique name for your web app and choose **Linux** for **Operating System**. For the purposes of this tutorial, select **Basic** for **Hosting plan**.
-
- :::image type="content" source="./media/tutorial-wordpress-app-service/05-wordpress-basics-instance-details.png?text=WordPress basics instance details" alt-text="Screenshot of WordPress instance details.":::
-
- For app and database SKUs for each hosting plans, see the below table.
-
- | **Hosting Plan** | **Web App** | **Database (MySQL Flexible Server)** |
- ||||
- |Basic (Hobby or Research purposes) | B1 (1 vCores, 1.75 GB RAM, 10 GB Storage) | Burstable, B1ms (1 vCores, 2 GB RAM, 32 GB Storage, 400 IOPs) |
- |Standard (General Purpose production apps)| P1V2 (1 vCores, 3.5 GB RAM, 250 GB Storage)| General Purpose D2ds_v4 (2 vCores, 8 GB RAM, 128 GB Storage, 700 IOPs)|
- |Premium (Heavy Workload production apps) | P1V3 (2 Cores, 8 GB RAM, 250 GB storage) | Business Critical, Standard_E4ds_v4 (2 vCores, 16 GB RAM, 256 GB storage, 1100 IOPS) |
-
- For pricing, visit [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/) and [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/flexible-server/).
-
-1. <a name="wordpress-settings"></a>Under **WordPress Settings**, type an **Admin Email**, **Admin Username**, and **Admin Password**. The **Admin Email** here is used for WordPress administrative sign-in only.
-
- :::image type="content" source="./media/tutorial-wordpress-app-service/06-wordpress-basics-wordpress-settings.png?text=Azure portal WordPress settings" alt-text="Screenshot of WordPress settings.":::
-
-1. Select the **Advanced** tab. Under **Additional Settings** choose your preferred **Site Language** and **Content Distribution**. If you're unfamiliar with a [Content Delivery Network](../../cdn/cdn-overview.md) or [Blob Storage](../../storage/blobs/storage-blobs-overview.md), select **Disabled**. For more details on the Content Distribution options, see [WordPress on App Service](https://azure.github.io/AppService/2022/02/23/WordPress-on-App-Service-Public-Preview.html).
-
- :::image type="content" source="./media/tutorial-wordpress-app-service/08-wordpress-advanced-settings.png" alt-text="Screenshot of WordPress Advanced Settings.":::
-
-1. Select the **Review + create** tab. After validation runs, select the **Create** button at the bottom of the page to create the WordPress site.
-
- :::image type="content" source="./media/tutorial-wordpress-app-service/09-wordpress-create.png?text=WordPress create button" alt-text="Screenshot of WordPress create button.":::
-
-1. Browse to your site URL and verify the app is running properly. The site may take a few minutes to load. If you receive an error, allow a few more minutes then refresh the browser.
-
- :::image type="content" source="./media/tutorial-wordpress-app-service/wordpress-sample-site.png?text=WordPress sample site" alt-text="Screenshot of WordPress site.":::
-
-1. To access the WordPress Admin page, browse to `/wp-admin` and use the credentials you created in the [WordPress settings step](#wordpress-settings).
-
- :::image type="content" source="./media/tutorial-wordpress-app-service/wordpress-admin-login.png?text=WordPress admin login" alt-text="Screenshot of WordPress admin login.":::
-
-> [!NOTE]
-> - [After November 28, 2022, PHP will only be supported on App Service on Linux.](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/php_support.md#end-of-life-for-php-74).
-> - The WordPress installation comes with pre-installed plugins for performance improvements, [W3TC](https://wordpress.org/plugins/w3-total-cache/) for caching and [Smush](https://wordpress.org/plugins/wp-smushit/) for image compression.
->
-> If you have feedback to improve this WordPress offering on App Service, submit your ideas at [Web Apps Community](https://feedback.azure.com/d365community/forum/b09330d1-c625-ec11-b6e6-000d3a4f0f1c).
--
-## MySQL flexible server username and password
--- Database username and password of the MySQL Flexible Server are generated automatically. To retrieve these values after the deployment go to Application Settings section of the Configuration page in Azure App Service. The WordPress configuration is modified to use these [Application Settings](../../app-service/reference-app-settings.md#wordpress) to connect to the MySQL database.--- To change the MySQL database password, see [Reset admin password](how-to-manage-server-portal.md#reset-admin-password). Whenever the MySQL database credentials are changed, the [Application Settings](../../app-service/reference-app-settings.md#wordpress) also need to be updated. The [Application Settings for MySQL database](../../app-service/reference-app-settings.md#wordpress) begin with the **`DATABASE_`** prefix. For more information on updating MySQL passwords, see [WordPress on App Service](https://azure.github.io/AppService/2022/02/23/WordPress-on-App-Service-Public-Preview.html#known-limitations).-
-## Manage the MySQL database
-
-The MySQL Flexible Server is created behind a private [Virtual Network](../../virtual-network/virtual-networks-overview.md) and can't be accessed directly. To access and manage the database, use phpMyAdmin that's deployed with the WordPress site.
-- Navigate to the URL : https://`<sitename>`.azurewebsites.net/phpmyadmin-- Login with the flexible server's username and password-
-## Change WordPress admin password
-
-The [Application Settings](../../app-service/reference-app-settings.md#wordpress) for WordPress admin credentials are only for deployment purposes. Modifying these values has no effect on the WordPress installation. To change the WordPress admin password, see [resetting your password](https://wordpress.org/support/article/resetting-your-password/#to-change-your-password). The [Application Settings for WordPress admin credentials](../../app-service/reference-app-settings.md#wordpress) begin with the **`WORDPRESS_ADMIN_`** prefix. For more information on updating the WordPress admin password, see [WordPress on App Service](https://azure.github.io/AppService/2022/02/23/WordPress-on-App-Service-Public-Preview.html#known-limitations).
-
-## Clean up resources
-
-When no longer needed, you can delete the resource group, App service, and all related resources.
-
-1. From your App Service *overview* page, click the *resource group* you created in the [Create WordPress site using Azure portal](#create-wordpress-site-using-azure-portal) step.
-
- :::image type="content" source="./media/tutorial-wordpress-app-service/resource-group.png" alt-text="Resource group in App Service overview page.":::
-
-1. From the *resource group* page, select **Delete resource group**. Confirm the name of the resource group to finish deleting the resources.
-
- :::image type="content" source="./media/tutorial-wordpress-app-service/delete-resource-group.png" alt-text="Delete resource group.":::
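The same clean-up can be done from the Azure CLI instead of the portal; a sketch, with `<resource-group>` standing in for the group you created earlier:

```shell
# Delete the resource group and everything in it (App Service, MySQL, etc.).
# --yes skips the confirmation prompt; --no-wait returns immediately.
az group delete --name <resource-group> --yes --no-wait
```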
-
-## Next steps
-
-Congratulations, you've successfully completed this quickstart!
-
-> [!div class="nextstepaction"]
-> [Tutorial: Map a custom domain name](../../app-service/app-service-web-tutorial-custom-domain.md)
-
-> [!div class="nextstepaction"]
-> [Tutorial: PHP app with MySQL](tutorial-php-database-app.md)
-
-> [!div class="nextstepaction"]
-> [How to manage your server](how-to-manage-server-cli.md)
network-watcher Connection Monitor Virtual Machine Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-virtual-machine-scale-set.md
Title: Tutorial - Monitor network communication between two Virtual Machine Scale Sets by using the Azure portal
-description: In this tutorial, you'll learn how to monitor network communication between two Virtual Machine Scale Sets by using the Azure Network Watcher connection monitor capability.
+ Title: 'Tutorial: Monitor network communication with virtual machine scale set - Azure portal'
+
+description: In this tutorial, you'll learn how to monitor network communication between two virtual machine scale sets with Azure Network Watcher connection monitor using the Azure portal.
tags: azure-resource-manager
-# Customer intent: I need to monitor communication between a virtual machine scale set and another VM. If the communication fails, I need to know why, so that I can resolve the problem.
Previously updated : 10/17/2022
Last updated : 01/25/2023
+# Customer intent: I need to monitor communication between a virtual machine scale set and another VM. If the communication fails, I need to know why, so that I can resolve the problem.
-# Tutorial: Monitor network communication between two Virtual Machine Scale Sets by using the Azure portal
+# Tutorial: Monitor network communication with a virtual machine scale set using the Azure portal
-Successful communication between a virtual machine scale set and an endpoint, such as another virtual machine (VM), can be critical for your organization. Sometimes, the introduction of configuration changes can break communication. In this tutorial, you learn how to:
+Successful communication between a virtual machine scale set and another endpoint, such as virtual machine (VM), can be critical for your organization. Sometimes, the introduction of configuration changes can break communication.
+
+In this tutorial, you learn how to:
> [!div class="checklist"]
> * Create a virtual machine scale set and a VM.
-> * Monitor communication between a scale set and a VM by using Connection Monitor.
-> * Generate alerts on Connection Monitor metrics.
+> * Monitor communication between a scale set and a VM by using Connection monitor.
+> * Generate alerts on Connection monitor metrics.
> * Diagnose a communication problem between two VMs, and learn how to resolve it.

> [!NOTE]
-> This tutorial uses Connection Monitor. To experience enhanced connectivity monitoring, try the updated version of [Connection Monitor](connection-monitor-overview.md).
+> This tutorial uses Connection monitor (classic). To experience enhanced connectivity monitoring, try the updated version of [Connection monitor](connection-monitor-overview.md).
> [!IMPORTANT]
-> As of July 1, 2021, you can't add new connection monitors in Connection Monitor (classic) but you can continue to use earlier versions that were created prior to that date. To minimize service disruption to your current workloads, [migrate from Connection Monitor (classic) to the latest Connection Monitor](migrate-to-connection-monitor-from-connection-monitor-classic.md) in Azure Network Watcher before February 29, 2024.
+> As of July 1, 2021, you can't add new connection monitors in Connection monitor (classic) but you can continue to use earlier versions that were created prior to that date. To minimize service disruption to your current workloads, [migrate from Connection monitor (classic) to the latest Connection monitor](migrate-to-connection-monitor-from-connection-monitor-classic.md) in Azure Network Watcher before February 29, 2024.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-Before you begin, if you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+## Prerequisites
+
+* An Azure subscription
## Sign in to Azure
In the following sections, you create a virtual machine scale set.
[Azure Load Balancer](../load-balancer/load-balancer-overview.md) distributes incoming traffic among healthy virtual machine instances.
-First, create a public standard load balancer by using the Azure portal. The name and public IP address you create are automatically configured as the load balancer's front end.
+First, create a public standard load balancer using the Azure portal. The name and public IP address you create are automatically configured as the load balancer's front end.
-1. In the search box, type **load balancer** and then, under **Marketplace** in the search results, select **Load balancer**.
+1. In the search box, enter **load balancer** and then, under **Marketplace** in the search results, select **Load balancer**.
1. On the **Basics** pane of the **Create load balancer** page, do the following:

   | Setting | Value |
First, create a public standard load balancer by using the Azure portal. The nam
1. Select **Review + create**.
1. After it passes validation, select **Create**.

### Create a virtual machine scale set

You can deploy a scale set with a Windows Server image or Linux images such as RHEL, CentOS, Ubuntu, or SLES.
-1. Type **Scale set** in the search box. In the results, under **Marketplace**, select **Virtual Machine Scale Sets**.
-1. On the **Virtual Machine Scale Sets** pane, select **Create**.
-
- The **Create a virtual machine scale set** page opens.
+1. Type **Scale set** in the search box. In the results, under **Marketplace**, select **Virtual machine scale sets**.
+1. On the **Virtual machine scale sets** pane, select **Create**. The **Create a virtual machine scale set** page opens.
1. On the **Basics** pane, under **Project details**, ensure that the correct subscription is selected, and then select **myVMSSResourceGroup** in the resource group list.
1. For **Name**, type **myScaleSet**.
1. For **Region**, select a region that's close to your area.
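The portal steps above have a rough Azure CLI equivalent; a sketch, where the image, admin user, and instance count are illustrative choices rather than values from this tutorial:

```shell
# Create a scale set named myScaleSet in the tutorial's resource group.
# Image, admin user, and instance count here are illustrative assumptions.
az vmss create \
  --resource-group myVMSSResourceGroup \
  --name myScaleSet \
  --image Ubuntu2204 \
  --instance-count 2 \
  --admin-username azureuser \
  --generate-ssh-keys
```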
The VM takes a few minutes to deploy. Wait for it to finish deploying before you
## Create a connection monitor
-To create a monitor in Connection Monitor by using the Azure portal:
+To create a monitor in Connection monitor by using the Azure portal:
1. On the Azure portal home page, go to **Network Watcher**.
1. On the left pane, in the **Monitoring** section, select **Connection monitor**.
- You'll see a list of the connection monitors that were created in Connection Monitor. To see the connection monitors that were created in the classic Connection Monitor, select the **Connection monitor** tab.
+ You'll see a list of the connection monitors that were created in Connection monitor. To see the connection monitors that were created in the classic Connection monitor, select the **Connection monitor** tab.
- :::image type="content" source="./media/connection-monitor-2-preview/cm-resource-view.png" alt-text="Screenshot that lists the connection monitors that were created in Connection Monitor.":::
+ :::image type="content" source="./media/connection-monitor-2-preview/cm-resource-view.png" alt-text="Screenshot that lists the connection monitors that were created in Connection monitor.":::
-1. On the **Connection Monitor** dashboard, at the upper left, select **Create**.
+1. On the **Connection monitor** dashboard, at the upper left, select **Create**.
1. On the **Basics** pane, enter information for your connection monitor:
To create a monitor in Connection Monitor by using the Azure portal:
* To use the default workspace, select the checkbox.
* To choose a custom workspace, clear the checkbox, and then select the subscription and region for your custom workspace.
- :::image type="content" source="./media/connection-monitor-2-preview/create-cm-basics.png" alt-text="Screenshot that shows the 'Basics' pane in Connection Monitor.":::
+ :::image type="content" source="./media/connection-monitor-2-preview/create-cm-basics.png" alt-text="Screenshot that shows the 'Basics' pane in Connection monitor.":::
1. Select **Next: Test groups**.
-1. Add sources, destinations, and test configurations in your test groups. To learn about setting up test groups, see [Create test groups in Connection Monitor](#create-test-groups-in-a-connection-monitor).
+1. Add sources, destinations, and test configurations in your test groups. To learn about setting up test groups, see [Create test groups in Connection monitor](#create-test-groups-in-a-connection-monitor).
- :::image type="content" source="./media/connection-monitor-2-preview/create-tg.png" alt-text="Screenshot that shows the 'Test groups' pane in Connection Monitor.":::
+ :::image type="content" source="./media/connection-monitor-2-preview/create-tg.png" alt-text="Screenshot that shows the 'Test groups' pane in Connection monitor.":::
-1. At the bottom of the pane, select **Next: Create Alerts**. To learn about creating alerts, see [Create alerts in Connection Monitor](#create-alerts-in-connection-monitor).
+1. At the bottom of the pane, select **Next: Create Alerts**. To learn about creating alerts, see [Create alerts in Connection monitor](#create-alerts-in-connection-monitor).
:::image type="content" source="./media/connection-monitor-2-preview/create-alert.png" alt-text="Screenshot that shows the 'Create alerts' pane.":::
To create a monitor in Connection Monitor by using the Azure portal:
1. On the **Review + create** pane, review the basic information and test groups before you create the connection monitor. If you need to edit the connection monitor, you can do so by going back to the respective panes.
- :::image type="content" source="./media/connection-monitor-2-preview/review-create-cm.png" alt-text="Screenshot that shows the 'Review + create' pane in Connection Monitor.":::
+ :::image type="content" source="./media/connection-monitor-2-preview/review-create-cm.png" alt-text="Screenshot that shows the 'Review + create' pane in Connection monitor.":::
> [!NOTE]
- > The **Review + create** pane shows the cost per month during the Connection Monitor stage. Currently, the **Current Cost/Month** column shows no charge. When Connection Monitor becomes generally available, this column will show a monthly charge.
+ > The **Review + create** pane shows the cost per month during the Connection monitor stage. Currently, the **Current Cost/Month** column shows no charge. When Connection monitor becomes generally available, this column will show a monthly charge.
>
- > Even during the Connection Monitor stage, Log Analytics ingestion charges apply.
+ > Even during the Connection monitor stage, Log Analytics ingestion charges apply.
1. When you're ready to create the connection monitor, at the bottom of the **Review + create** pane, select **Create**.
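Connection monitors can also be created programmatically instead of through the portal; a hedged sketch with the Azure CLI, where the resource ID and all names are placeholders and parameter names should be verified against your CLI version:

```shell
# Create a connection monitor testing TCP port 80 from a source VM to bing.com.
# <source-vm-resource-id> is a placeholder; check parameters with:
#   az network watcher connection-monitor create --help
az network watcher connection-monitor create \
  --name myConnectionMonitor \
  --location eastus \
  --endpoint-source-name mySourceVM \
  --endpoint-source-resource-id <source-vm-resource-id> \
  --endpoint-dest-name myDestination \
  --endpoint-dest-address bing.com \
  --test-config-name myTcpTest \
  --protocol Tcp \
  --tcp-port 80
```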
-Connection Monitor creates the connection monitor resource in the background.
-
## Create test groups in a connection monitor

> [!NOTE]
- > Connection Monitor now supports the auto-enabling of monitoring extensions for Azure and non-Azure endpoints, thus eliminating the need for manual installation of monitoring solutions during the creation of Connection Monitor.
+ > Connection monitor now supports the auto-enabling of monitoring extensions for Azure and non-Azure endpoints, thus eliminating the need for manual installation of monitoring solutions during the creation of Connection monitor.
Each test group in a connection monitor includes sources and destinations that get tested on network parameters. They're tested for the percentage of checks that fail and the RTT over test configurations.
In the Azure portal, to create a test group in a connection monitor, do the foll
1. **Name**: Name your test group.
1. **Sources**: You can specify both Azure VMs and on-premises machines as sources if agents are installed on them. To learn about installing an agent for your source, see [Install monitoring agents](./connection-monitor-overview.md#install-monitoring-agents).
- * To choose Azure agents, select the **Azure endpoints** tab. Here you see only VMs or Virtual Machine Scale Sets that are bound to the region that you specified when you created the connection monitor. By default, VMs and Virtual Machine Scale Sets are grouped into the subscription that they belong to. These groups are collapsed.
+ * To choose Azure agents, select the **Azure endpoints** tab. Here you see only VMs or virtual machine scale sets that are bound to the region that you specified when you created the connection monitor. By default, VMs and virtual machine scale sets are grouped into the subscription that they belong to. These groups are collapsed.
You can drill down from the **Subscription** level to other levels in the hierarchy:
In the Azure portal, to create a test group in a connection monitor, do the foll
When you select a virtual network, subnet, a single VM, or a virtual machine scale set, the corresponding resource ID is set as the endpoint. By default, all VMs in the selected virtual network or subnet participate in monitoring. To reduce the scope, either select specific subnets or agents or change the value of the scope property.
- :::image type="content" source="./media/connection-monitor-2-preview/add-sources-1.png" alt-text="Screenshot that shows the 'Add Sources' pane and the Azure endpoints, including the 'VMSS' pane, in Connection Monitor.":::
+ :::image type="content" source="./media/connection-monitor-2-preview/add-sources-1.png" alt-text="Screenshot that shows the 'Add Sources' pane and the Azure endpoints, including the 'virtual machine scale set' pane, in Connection monitor.":::
* To choose on-premises agents, select the **NonΓÇôAzure endpoints** tab. By default, agents are grouped into workspaces by region. All these workspaces have the Network Performance Monitor configured.
In the Azure portal, to create a test group in a connection monitor, do the foll
1. Under **Create Connection Monitor**, on the **Basics** pane, the default region is selected. If you change the region, you can choose agents from workspaces in the new region. You can select one or more agents or subnets. In the **Subnet** view, you can select specific IPs for monitoring. If you add multiple subnets, a custom on-premises network named **OnPremises_Network_1** will be created. You can also change the **Group by** selector to group by agents.
- :::image type="content" source="./media/connection-monitor-2-preview/add-non-azure-sources.png" alt-text="Screenshot that shows the 'Add Sources' pane and the 'Non-Azure endpoints' pane in Connection Monitor.":::
+ :::image type="content" source="./media/connection-monitor-2-preview/add-non-azure-sources.png" alt-text="Screenshot that shows the 'Add Sources' pane and the 'Non-Azure endpoints' pane in Connection monitor.":::
1. To choose recently used endpoints, you can use the **Recent endpoint** pane.
In the Azure portal, to create a test group in a connection monitor, do the foll
To add an endpoint, in the upper-right corner, select **Add Endpoint**. Then provide an endpoint name and URL, IP, or FQDN.
- :::image type="content" source="./media/connection-monitor-2-preview/add-endpoints.png" alt-text="Screenshot that shows where to add public endpoints as destinations in Connection Monitor.":::
+ :::image type="content" source="./media/connection-monitor-2-preview/add-endpoints.png" alt-text="Screenshot that shows where to add public endpoints as destinations in Connection monitor.":::
* To choose recently used endpoints, go to the **Recent endpoint** pane.

1. When you finish choosing destinations, select **Done**. You can still edit basic properties, such as the endpoint name, by selecting the endpoint in the **Create Test Group** view.
-1. **Test configurations**: You can add one or more test configurations to a test group. Create a new test configuration by using the **New configuration** pane. Or add a test configuration from another test group in the same Connection Monitor from the **Choose existing** pane.
+1. **Test configurations**: You can add one or more test configurations to a test group. Create a new test configuration by using the **New configuration** pane. Or add a test configuration from another test group in the same Connection monitor from the **Choose existing** pane.
a. **Test configuration name**: Name the test configuration.
b. **Protocol**: Select **TCP**, **ICMP**, or **HTTP**. To change HTTP to HTTPS, select **HTTP** as the protocol, and then select **443** as the port.
In the Azure portal, to create a test group in a connection monitor, do the foll
* **Round trip time**: Set the RTT, in milliseconds, for how long sources can take to connect to the destination over the test configuration.
- :::image type="content" source="./media/connection-monitor-2-preview/add-test-config.png" alt-text="Screenshot that shows where to set up a test configuration in Connection Monitor.":::
+ :::image type="content" source="./media/connection-monitor-2-preview/add-test-config.png" alt-text="Screenshot that shows where to set up a test configuration in Connection monitor.":::
-1. **Test Groups**: You can add one or more Test Groups to a Connection Monitor. These test groups can consist of multiple Azure or non-Azure endpoints.
+1. **Test Groups**: You can add one or more Test Groups to a Connection monitor. These test groups can consist of multiple Azure or non-Azure endpoints.
- For selected Azure VMs or Azure Virtual Machine Scale Sets and non-Azure endpoints without monitoring extensions, the extension for Azure VMs and the Network Performance Monitor solution for non-Azure endpoints will be auto-enabled after the creation of Connection Monitor begins.
+ For selected Azure VMs or Azure virtual machine scale sets and non-Azure endpoints without monitoring extensions, the extension for Azure VMs and the Network Performance Monitor solution for non-Azure endpoints will be auto-enabled after the creation of Connection monitor begins.
- If the selected Virtual Machine Scale Set is set for manual upgrade, you'll have to upgrade the scale set after the Network Watcher extension installation. Doing so lets you continue setting up the Connection Monitor with Virtual Machine Scale Sets as endpoints. If the virtual machine scale set is set to auto-upgrade, you don't need to worry about upgrading after the installation of the Network Watcher extension.
+ If the selected virtual machine scale set is set for manual upgrade, you'll have to upgrade the scale set after the Network Watcher extension installation. Doing so lets you continue setting up the Connection monitor with virtual machine scale sets as endpoints. If the virtual machine scale set is set to auto-upgrade, you don't need to worry about upgrading after the installation of the Network Watcher extension.
- In the previously mentioned scenario, you can consent to an auto-upgrade of Virtual Machine Scale sets with auto-enabling of the Network Watcher extension during the creation of Connection Monitor for Virtual Machine Scale sets with manual upgrading. This approach eliminates the need to manually upgrade the virtual machine scale set after you install the Network Watcher extension.
+ In the previously mentioned scenario, you can consent to an auto-upgrade of virtual machine scale sets with auto-enabling of the Network Watcher extension during the creation of Connection monitor for virtual machine scale sets with manual upgrading. This approach eliminates the need to manually upgrade the virtual machine scale set after you install the Network Watcher extension.
- :::image type="content" source="./media/connection-monitor-2-preview/consent-vmss-auto-upgrade.png" alt-text="Screenshot that shows where to set up a test group and consent for an auto-upgrade of the virtual machine scale set in Connection Monitor.":::
+ :::image type="content" source="./media/connection-monitor-2-preview/consent-vmss-auto-upgrade.png" alt-text="Screenshot that shows where to set up a test group and consent for an auto-upgrade of the virtual machine scale set in Connection monitor.":::
-## Create alerts in Connection Monitor
+## Create alerts in Connection monitor
You can set up alerts on tests that are failing based on the thresholds set in test configurations.
In the Azure portal, to create alerts for a connection monitor, specify values f
* **Enable rule upon creation**: Select this checkbox to enable the alert rule based on the condition. Clear this checkbox if you want to create the rule without enabling it.

After you've completed all the steps, the process will proceed with a unified enabling of monitoring extensions for all endpoints without monitoring agents enabled, followed by the creation of the connection monitor.
After the creation process is successful, it takes about 5 minutes for the conne
## Virtual machine scale set coverage
-Currently, Connection Monitor provides default coverage for the scale set instances that are selected as endpoints. This means that only a default percentage of all the added scale set instances would be randomly selected to monitor connectivity from the scale set to the endpoint.
+Currently, Connection monitor provides default coverage for the scale set instances that are selected as endpoints. This means that only a default percentage of all the added scale set instances would be randomly selected to monitor connectivity from the scale set to the endpoint.
-As a best practice, to avoid loss of data because of a downscaling of instances, we recommend that you select *all* instances in a scale set while you're creating a test group, instead of selecting a particular few for monitoring your endpoints.
+As a best practice, to avoid loss of data due to downscaling of instances, we recommend that you select *all* instances in a scale set while you're creating a test group, instead of selecting a particular few for monitoring your endpoints.
## Scale limits
postgresql Concepts Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-data-encryption.md
-Azure PostgreSQL uses [Azure Storage encryption](../../storage/common/storage-service-encryption.md) to encrypt data at-rest by default using Microsoft-managed keys. For Azure PostgreSQL users, it's similar to Transparent Data Encryption (TDE) in other databases such as SQL Server. Many organizations require full control of access to the data using a customer-managed key. Data encryption with customer-managed keys for Azure Database for PostgreSQL Flexible server - Preview enables you to bring your key (BYOK) for data protection at rest. It also allows organizations to implement separation of duties in the management of keys and data. With customer-managed encryption, you're responsible for, and in full control of, a key's lifecycle, key usage permissions, and auditing of operations on keys.
+Azure PostgreSQL uses [Azure Storage encryption](../../storage/common/storage-service-encryption.md) to encrypt data at-rest by default using Microsoft-managed keys. For Azure PostgreSQL users, it's similar to Transparent Data Encryption (TDE) in other databases such as SQL Server. Many organizations require full control of access to the data using a customer-managed key. Data encryption with customer-managed keys for Azure Database for PostgreSQL Flexible server enables you to bring your key (BYOK) for data protection at rest. It also allows organizations to implement separation of duties in the management of keys and data. With customer-managed encryption, you're responsible for, and in full control of, a key's lifecycle, key usage permissions, and auditing of operations on keys.
-Data encryption with customer-managed keys for Azure Database for PostgreSQL Flexible server - Preview is set at the server level. For a given server, a customer-managed key, called the key encryption key (KEK), is used to encrypt the service's data encryption key (DEK). The KEK is an asymmetric key stored in a customer-owned and customer-managed [Azure Key Vault](https://azure.microsoft.com/services/key-vault/)) instance. The Key Encryption Key (KEK) and Data Encryption Key (DEK) are described in more detail later in this article.
+Data encryption with customer-managed keys for Azure Database for PostgreSQL Flexible server is set at the server level. For a given server, a customer-managed key, called the key encryption key (KEK), is used to encrypt the service's data encryption key (DEK). The KEK is an asymmetric key stored in a customer-owned and customer-managed [Azure Key Vault](https://azure.microsoft.com/services/key-vault/) instance. The Key Encryption Key (KEK) and Data Encryption Key (DEK) are described in more detail later in this article.
Key Vault is a cloud-based, external key management system. It's highly available and provides scalable, secure storage for RSA cryptographic keys, optionally backed by FIPS 140-2 Level 2 validated hardware security modules (HSMs). It doesn't allow direct access to a stored key but provides encryption and decryption services to authorized entities. Key Vault can generate the key, import it, or have it transferred from an on-premises HSM device.

## Benefits
-Data encryption with customer-managed keys for Azure Database for PostgreSQL - Flexible Server (Preview) provides the following benefits:
+Data encryption with customer-managed keys for Azure Database for PostgreSQL - Flexible Server provides the following benefits:
- You fully control data-access by the ability to remove the key and make the database inaccessible.
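At the CLI level, the KEK is supplied at server creation time; a sketch assuming the flexible-server data-encryption parameters `--key` and `--identity` (all names below are placeholders, and the parameters should be verified against your CLI version):

```shell
# Create a flexible server whose DEK is wrapped by a Key Vault key (the KEK).
# <key-identifier> is the Key Vault key URI; <identity-id> is a user-assigned
# managed identity with wrap/unwrap permissions on that key. All placeholders.
az postgres flexible-server create \
  --resource-group <resource-group> \
  --name <server-name> \
  --key <key-identifier> \
  --identity <identity-id>
```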
purview How To Metamodel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-metamodel.md
Previously updated : 11/10/2022
Last updated : 01/26/2023
This article will get you started in building a metamodel for your Microsoft Pur
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- Create a new, or use an existing Microsoft Purview account. You can [follow our quick-start guide to create one](create-catalog-portal.md).
- Create a new, or use an existing resource group, and place new data sources under it. [Follow this guide to create a new resource group](../azure-resource-manager/management/manage-resource-groups-portal.md).
-- [Data Curator role](catalog-permissions.md#roles) on the collection where the data asset is housed. See the guide on [managing Microsoft Purview role assignments](catalog-permissions.md#assign-permissions-to-your-users).
+- [Data Curator role](catalog-permissions.md#roles) on the collection where the data asset is housed and/or the root collection, depending on what you need. See the guide on [managing Microsoft Purview role assignments](catalog-permissions.md#assign-permissions-to-your-users).
+ - Create and modify asset types, modify assets - Data Curator on the collection where the data asset is housed. An asset will need to be moved to your collection after creation for you to be able to modify it.
+ - Create and modify assets - Data curator on the root collection.
+
+>[!NOTE]
+> As this feature is in preview, these permissions are not the final permission structure for metamodel. Updates will continue to be made to this structure.
## Current limitations

-- New relationships will always be association relationships.
+>[!NOTE]
+> Since this feature is in preview, available abilities are regularly updated.
+ - When a new asset is created, you have to refresh the asset to see its relationships
+- New assets are created in the root collection, but can be edited afterwards to be moved to a new collection.
- You can't set relationships between two data assets in the Microsoft Purview governance portal
- The related tab only shows a "business lineage" view for business assets, not data assets
An asset type is a template for storing a concept thatΓÇÖs important to your org
## Next steps
-For more information about the metamodel, see the metamodel [concept page](concept-metamodel.md).
+For more information about the metamodel, see the metamodel [concept page](concept-metamodel.md).
purview Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/overview.md
Microsoft Purview's solutions in the governance portal provide a unified data go
## Data Map
-Microsoft Purview automates data discovery by providing data scanning and classification for assets across your data estate. Metadata and descriptions of discovered data assets are integrated into a holistic map of your data estate. Atop this map, there are purpose-built apps that create environments for data discovery, access management, and insights about your data landscape.
+Microsoft Purview automates data discovery by providing data scanning and classification for assets across your data estate. Metadata and descriptions of discovered data assets are integrated into a holistic map of your data estate. Microsoft Purview Data Map provides the foundation for data discovery and data governance. Microsoft Purview Data Map is a cloud native PaaS service that captures metadata about enterprise data present in analytics and operation systems on-premises and cloud. Microsoft Purview Data Map is automatically kept up to date with built-in automated scanning and classification system. Business users can configure and use the data map through an intuitive UI and developers can programmatically interact with the Data Map using open-source Apache Atlas 2.2 APIs.
+Microsoft Purview Data Map powers the Microsoft Purview Data Catalog, the Microsoft Purview Data Estate Insights and the Microsoft Purview Data Policy as unified experiences within the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
+For more information, see our [introduction to Data Map](concept-elastic-data-map.md).
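The Atlas 2.2 surface mentioned above is exposed over REST; a minimal sketch of a basic catalog search, where `<account-name>` is a placeholder and `$TOKEN` is assumed to be a valid Azure AD bearer token you've already acquired:

```shell
# POST an Atlas v2 basic search against a Purview account's catalog endpoint.
# <account-name> and $TOKEN are placeholders; the keywords are illustrative.
curl -X POST "https://<account-name>.purview.azure.com/catalog/api/atlas/v2/search/basic" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"keywords": "customer", "limit": 10}'
```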
+
+Atop the Data Map, there are purpose-built apps that create environments for data discovery, access management, and insights about your data landscape.
|App |Description |
|-|--|
Microsoft Purview automates data discovery by providing data scanning and classi
|[Data Sharing](#data-sharing-app) | Allows you to securely share data internally or cross organizations with business partners and customers. | |[Data Policy](#data-policy-app) | A set of central, cloud-based experiences that help you provision access to data securely and at scale. | -
-Microsoft Purview Data Map provides the foundation for data discovery and data governance. Microsoft Purview Data Map is a cloud native PaaS service that captures metadata about enterprise data present in analytics and operation systems on-premises and cloud. Microsoft Purview Data Map is automatically kept up to date with built-in automated scanning and classification system. Business users can configure and use the data map through an intuitive UI and developers can programmatically interact with the Data Map using open-source Apache Atlas 2.2 APIs.
-Microsoft Purview Data Map powers the Microsoft Purview Data Catalog and Microsoft Purview Data Estate Insights as unified experiences within the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
-
-For more information, see our [introduction to Data Map](concept-elastic-data-map.md).
-
## Data Catalog app

With the Microsoft Purview Data Catalog, business and technical users can quickly and easily find relevant data using a search experience with filters based on lenses such as glossary terms, classifications, sensitivity labels and more. For subject matter experts, data stewards and officers, the Microsoft Purview Data Catalog provides data curation features such as business glossary management and the ability to automate tagging of data assets with glossary terms. Data consumers and producers can also visually trace the lineage of data assets: for example, starting from operational systems on-premises, through movement, transformation & enrichment with various data storage and processing systems in the cloud, to consumption in an analytics system like Power BI.
search Search Api Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-api-versions.md
The following table provides links to more recent SDK versions.
|-|--|| | [Azure.Search.Documents 11](/dotnet/api/overview/azure/search.documents-readme) | Active | New client library from the Azure .NET SDK team, initially released July 2020. See the [Change Log](https://github.com/Azure/azure-sdk-for-net/blob/Azure.Search.Documents_11.3.0/sdk/search/Azure.Search.Documents/CHANGELOG.md) for information about minor releases. | | [Microsoft.Azure.Search 10](https://www.nuget.org/packages/Microsoft.Azure.Search/) | Retired | Released May 2019. This is the last version of the Microsoft.Azure.Search package and it's now deprecated. It's succeeded by Azure.Search.Documents. |
-| [Microsoft.Azure.Management.Search 4.0.0](/dotnet/api/overview/azure/search/management/management-search) | Active | Targets the Management REST api-version=2020-08-01. |
+| [Microsoft.Azure.Management.Search 4.0.0](/dotnet/api/overview/azure/search/management/management-cognitivesearch) | Active | Targets the Management REST api-version=2020-08-01. |
| [Microsoft.Azure.Management.Search 3.0.0](https://www.nuget.org/packages/Microsoft.Azure.Management.Search/3.0.0) | Active | Targets the Management REST api-version=2015-08-19. |

## Azure SDK for Java
search Search Howto Dotnet Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-dotnet-sdk.md
This article explains how to create and manage search objects using C# and the [
## About version 11
-Azure SDK for .NET includes an [**Azure.Search.Documents**](/dotnet/api/overview/azure/search) client library from the Azure SDK team that is functionally equivalent to the previous client library, [Microsoft.Azure.Search](/dotnet/api/overview/azure/search/search). Version 11 is more consistent in terms of Azure programmability. Some examples include [`AzureKeyCredential`](/dotnet/api/azure.azurekeycredential) key authentication, and [System.Text.Json.Serialization](/dotnet/api/system.text.json.serialization) for JSON serialization.
+Azure SDK for .NET includes an [**Azure.Search.Documents**](/dotnet/api/overview/azure/search) client library from the Azure SDK team that is functionally equivalent to the previous client library, [Microsoft.Azure.Search](/dotnet/api/microsoft.azure.search). Version 11 is more consistent in terms of Azure programmability. Some examples include [`AzureKeyCredential`](/dotnet/api/azure.azurekeycredential) key authentication, and [System.Text.Json.Serialization](/dotnet/api/system.text.json.serialization) for JSON serialization.
As with previous versions, you can use this library to:
The client library defines classes like `SearchIndex`, `SearchField`, and `Searc
Azure.Search.Documents (version 11) targets version [`2020-06-30` of the Azure Cognitive Search REST API](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/search/data-plane/Azure.Search/preview/2020-06-30).
-The client library doesn't provide [service management operations](/rest/api/searchmanagement/), such as creating and scaling search services and managing API keys. If you need to manage your search resources from a .NET application, use the [Microsoft.Azure.Management.Search](/dotnet/api/overview/azure/search/management/management-search) library in the Azure SDK for .NET.
+The client library doesn't provide [service management operations](/rest/api/searchmanagement/), such as creating and scaling search services and managing API keys. If you need to manage your search resources from a .NET application, use the [Microsoft.Azure.Management.Search](/dotnet/api/microsoft.azure.management.search) library in the Azure SDK for .NET.
## Upgrade to v11
search Search Howto Index Cosmosdb Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-cosmosdb-mongodb.md
These are the limitations of this feature:
+ The column name `_ts` is a reserved word. If you need this field, consider alternative solutions for populating an index. ++ The MongoDB attribute `$ref` is a reserved word. If you need this in your MongoDB collection, consider alternative solutions for populating an index. ++ As an alternative to this connector, if your scenario has any of those requirements, you could use the [Push API/SDK](search-what-is-data-import.md) or consider [Azure Data Factory](../data-factory/connector-azure-cosmos-db.md) with an [Azure Cognitive Search index](../data-factory/connector-azure-search.md) as the sink. ## Define the data source
security Antimalware Code Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/antimalware-code-samples.md
documentationcenter: na ms.assetid: 265683c8-30d7-4f2b-b66c-5082a18f7a8b
na Previously updated : 09/29/2021 Last updated : 01/25/2023
security Antimalware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/antimalware.md
na Previously updated : 09/29/2021 Last updated : 01/25/2023 # Microsoft Antimalware for Azure Cloud Services and Virtual Machines Microsoft Antimalware for Azure is a free real-time protection that helps identify and remove viruses, spyware, and other malicious software. It generates alerts when known malicious or unwanted software tries to install itself or run on your Azure systems.
-The solution is built on the same antimalware platform as Microsoft Security Essentials \[MSE\], Microsoft Forefront Endpoint Protection, Microsoft System Center Endpoint Protection, Microsoft Intune, and Microsoft Defender for Cloud. Microsoft Antimalware for Azure is a single-agent solution for applications and tenant environments, designed to run in the background without human intervention. Protection may be deployed based on the needs of application workloads, with either basic secure-by-default or advanced custom configuration, including antimalware monitoring.
+The solution is built on the same antimalware platform as Microsoft Security Essentials (MSE), Microsoft Forefront Endpoint Protection, Microsoft System Center Endpoint Protection, Microsoft Intune, and Microsoft Defender for Cloud. Microsoft Antimalware for Azure is a single-agent solution for applications and tenant environments, designed to run in the background without human intervention. Protection may be deployed based on the needs of application workloads, with either basic secure-by-default or advanced custom configuration, including antimalware monitoring.
When you deploy and enable Microsoft Antimalware for Azure for your applications, the following core features are available:
When you deploy and enable Microsoft Antimalware for Azure for your applications
* **Scheduled scanning** - Scans periodically to detect malware, including actively running programs. * **Malware remediation** - automatically takes action on detected malware, such as deleting or quarantining malicious files and cleaning up malicious registry entries. * **Signature updates** - automatically installs the latest protection signatures (virus definitions) to ensure protection is up-to-date on a pre-determined frequency.
-* **Antimalware Engine updates** ΓÇô automatically updates the Microsoft Antimalware engine.
-* **Antimalware Platform updates** ΓÇô automatically updates the Microsoft Antimalware platform.
+* **Antimalware Engine updates** - automatically updates the Microsoft Antimalware engine.
+* **Antimalware Platform updates** - automatically updates the Microsoft Antimalware platform.
* **Active protection** - reports telemetry metadata about detected threats and suspicious resources to Microsoft Azure to ensure rapid response to the evolving threat landscape, as well as enabling real-time synchronous signature delivery through the Microsoft Active Protection System (MAPS). * **Samples reporting** - provides and reports samples to the Microsoft Antimalware service to help refine the service and enable troubleshooting.
-* **Exclusions** ΓÇô allows application and service administrators to configure exclusions for files, processes, and drives.
+* **Exclusions** - allows application and service administrators to configure exclusions for files, processes, and drives.
* **Antimalware event collection** - records the antimalware service health, suspicious activities, and remediation actions taken in the operating system event log and collects them into the customer's Azure Storage account. > [!NOTE]
-> Microsoft Antimalware can also be deployed using Microsoft Defender for Cloud. Read [Install Endpoint Protection in Microsoft Defender for Cloud](../../security-center/security-center-services.md#supported-endpoint-protection-solutions-) for more information.
+> Microsoft Antimalware can also be deployed using Microsoft Defender for Cloud. Read [Install Endpoint Protection in Microsoft Defender for Cloud](../../defender-for-cloud/integration-defender-for-endpoint.md) for more information.
## Architecture
The Microsoft Antimalware Client and Service is installed by default in a disabl
When using Azure App Service on Windows, the underlying service that hosts the web app has Microsoft Antimalware enabled on it. This is used to protect Azure App Service infrastructure and does not run on customer content. > [!NOTE]
-> Microsoft Defender Antivirus is the built-in Antimalware enabled in Windows Server 2016. The Microsoft Defender Antivirus Interface is also enabled by default on some Windows Server 2016 SKU's [see here for more information](/microsoft-365/security/defender-endpoint/microsoft-defender-antivirus-windows).
-> The Azure VM Antimalware extension can still be added to a Windows Server 2016 Azure VM with Microsoft Defender Antivirus, but in this scenario the extension will apply any optional [configuration policies](https://gallery.technet.microsoft.com/Antimalware-For-Azure-5ce70efe) to be used by Microsoft Defender Antivirus, the extension will not deploy any additional antimalware services.
-> You can read more about this update [here](/archive/blogs/azuresecurity/update-to-azure-antimalware-extension-for-cloud-services).
+> Microsoft Defender Antivirus is the built-in Antimalware enabled in Windows Server 2016 and above.
+> The Azure VM Antimalware extension can still be added to a Windows Server 2016 and above Azure VM with Microsoft Defender Antivirus. In this scenario, the extension applies any optional [configuration policies](antimalware.md#default-and-custom-antimalware-configuration) to be used by Microsoft Defender Antivirus. The extension does not deploy any other antimalware services.
+> See the [Samples](antimalware.md#samples) section of this article for more details.
### Microsoft antimalware workflow

The Azure service administrator can enable Antimalware for Azure with a default or custom configuration for your Virtual Machines and Cloud Services using the following options:
-* Virtual Machines ΓÇô In the Azure portal, under **Security Extensions**
-* Virtual Machines ΓÇô Using the Visual Studio virtual machines configuration in Server Explorer
-* Virtual Machines and Cloud Services ΓÇô Using the Antimalware [classic deployment model](/previous-versions/azure/ee460799(v=azure.100))
-* Virtual Machines and Cloud Services ΓÇô Using Antimalware PowerShell cmdlets
+* Virtual Machines - In the Azure portal, under **Security Extensions**
+* Virtual Machines - Using the Visual Studio virtual machines configuration in Server Explorer
+* Virtual Machines and Cloud Services - Using the Antimalware [classic deployment model](/previous-versions/azure/ee460799(v=azure.100))
+* Virtual Machines and Cloud Services - Using Antimalware PowerShell cmdlets
-The Azure portal or PowerShell cmdlets push the Antimalware extension package file to the Azure system at a pre-determined fixed location. The Azure Guest Agent (or the Fabric Agent) launches the Antimalware Extension, applying the Antimalware configuration settings supplied as input. This step enables the Antimalware service with either default or custom configuration settings. If no custom configuration is provided, then the antimalware service is enabled with the default configuration settings. Refer to the *Antimalware configuration* section in the [Microsoft Antimalware for Azure ΓÇô Code Samples](/samples/browse/?redirectedfrom=TechNet-Gallery "Microsoft Antimalware For Azure Cloud Services and VMs Code Samples") for more details.
+The Azure portal or PowerShell cmdlets push the Antimalware extension package file to the Azure system at a pre-determined fixed location. The Azure Guest Agent (or the Fabric Agent) launches the Antimalware Extension, applying the Antimalware configuration settings supplied as input. This step enables the Antimalware service with either default or custom configuration settings. If no custom configuration is provided, then the antimalware service is enabled with the default configuration settings. See the [Samples](antimalware.md#samples) section of this article for more details.
Once running, the Microsoft Antimalware client downloads the latest protection engine and signature definitions from the Internet and loads them on the Azure system. The Microsoft Antimalware service writes service-related events to the system OS events log under the "Microsoft Antimalware" event source. Events include the Antimalware client health state, protection and remediation status, new and old configuration settings, engine updates and signature definitions, and others.
The deployment workflow including configuration steps and options supported for
![Microsoft Antimalware in Azure](./media/antimalware/sec-azantimal-fig1.PNG) > [!NOTE]
-> You can however use PowerShell/APIs and Azure Resource Manager templates to deploy Virtual Machine Scale Sets with the Microsoft Anti-Malware extension. For installing an extension on an already running Virtual Machine, you can use the sample Python script [vmssextn.py](https://github.com/gbowerman/vmsstools). This script gets the existing extension config on the Scale Set and adds an extension to the list of existing extensions on the VM Scale Sets.
+> You can however use PowerShell/APIs and Azure Resource Manager templates to deploy Virtual Machine Scale Sets with the Microsoft Anti-Malware extension. For installing an extension on an already running Virtual Machine, you can use the sample Python script [vmssextn.py](https://github.com/gbowerman/vmsstools#vmssextn). This script gets the existing extension config on the Scale Set and adds an extension to the list of existing extensions on the VM Scale Sets.
> >
To enable and configure the Microsoft Antimalware service using Visual Studio:
![Virtual Machine configuration extension](./media/antimalware/sec-azantimal-fig7.PNG) > [!NOTE]
->The Visual Studio Virtual Machines configuration for Antimalware supports only JSON format configuration. The Antimalware JSON configuration settings template is included in the [Microsoft Antimalware For Azure - Code Samples](/samples/browse/?redirectedfrom=TechNet-Gallery "Microsoft Antimalware For Azure - Code Samples"), showing the supported Antimalware configuration settings.
+>The Visual Studio Virtual Machines configuration for Antimalware supports only JSON format configuration. See the [Samples](antimalware.md#samples) section of this article for more details.
#### Deployment Using PowerShell cmdlets
To enable and configure Microsoft Antimalware using PowerShell cmdlets:
2. Use the [Set-AzureVMMicrosoftAntimalwareExtension](/powershell/module/servicemanagement/azure.service/set-azurevmmicrosoftantimalwareextension) cmdlet to enable and configure Microsoft Antimalware for your Virtual Machine. > [!NOTE]
->The Azure Virtual Machines configuration for Antimalware supports only JSON format configuration. The Antimalware JSON configuration settings template is included in the [Microsoft Antimalware For Azure - Code Samples](/samples/browse/?redirectedfrom=TechNet-Gallery "Microsoft Antimalware For Azure - Code Samples"), showing the supported Antimalware configuration settings.
+>The Azure Virtual Machines configuration for Antimalware supports only JSON format configuration. See the [Samples](antimalware.md#samples) section of this article for more details.
### Enable and Configure Antimalware Using PowerShell cmdlets
To enable and configure Microsoft Antimalware using PowerShell cmdlets:
1. Set up your PowerShell environment - Refer to the documentation at <https://github.com/Azure/azure-powershell> 2. Use the [Set-AzureServiceExtension](/powershell/module/servicemanagement/azure.service/set-azureserviceextension) cmdlet to enable and configure Microsoft Antimalware for your Cloud Service.
-The Antimalware XML configuration settings template is included in the [Microsoft Antimalware For Azure - Code Samples](/samples/browse/?redirectedfrom=TechNet-Gallery "Microsoft Antimalware For Azure - Code Samples"), showing the supported Antimalware configuration settings.
+See the [Samples](antimalware.md#samples) section of this article for more details.
### Cloud Services and Virtual Machines - Configuration Using PowerShell cmdlets
To retrieve the Microsoft Antimalware configuration using PowerShell cmdlets:
2. **For Virtual Machines**: Use the [Get-AzureVMMicrosoftAntimalwareExtension](/powershell/module/servicemanagement/azure.service/get-azurevmmicrosoftantimalwareextension) cmdlet to get the antimalware configuration. 3. **For Cloud Services**: Use the [Get-AzureServiceExtension](/powershell/module/servicemanagement/azure.service/get-azureserviceextension) cmdlet to get the Antimalware configuration.
+## Samples
+
### Remove Antimalware Configuration Using PowerShell cmdlets

An Azure application or service can remove the Antimalware configuration and any associated Antimalware monitoring configuration from the relevant Azure Antimalware and diagnostics service extensions associated with the Cloud Service or Virtual Machine.
The following code sample is available:
### Enable and configure Antimalware using PowerShell cmdlets for Azure Arc-enabled servers To enable and configure Microsoft Antimalware for Azure Arc-enabled servers using PowerShell cmdlets:
-1. Set up your PowerShell environment using this [documentation](https://github.com/Azure/azure-powershell) on GitHub.
-2. Use the [New-AzConnectedMachineExtension](../../azure-arc/servers/manage-vm-extensions-powershell.md) cmdlet to enable and configure Microsoft Antimalware for your Arc-enabled servers.
+1. Set up your PowerShell environment using this [documentation](https://github.com/Azure/azure-powershell) on GitHub.
+2. Use the [New-AzConnectedMachineExtension](../../azure-arc/servers/manage-vm-extensions-powershell.md) cmdlet to enable and configure Microsoft Antimalware for your Arc-enabled servers.
The following code samples are available:
security Customer Lockbox Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/customer-lockbox-overview.md
The following services are generally available for Customer Lockbox:
- Azure Health Bot - Azure Intelligent Recommendations - Azure Kubernetes Service
+- Azure Logic Apps
- Azure Monitor - Azure Spring Apps - Azure SQL Database
The following services are generally available for Customer Lockbox:
- Azure subscription transfers - Azure Synapse Analytics - Azure Unified Vision Service
+- Microsoft Azure Attestation
- Microsoft Energy Data Services
+- OpenAI
- Virtual machines in Azure (covering remote desktop access, access to memory dumps, and managed disks)
service-bus-messaging Service Bus Migrate Azure Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-migrate-azure-credentials.md
+
+ Title: Migrate applications to use passwordless authentication with Azure Service Bus
+
+description: Learn to migrate existing Service Bus applications away from connection strings to use Azure AD and Azure RBAC for enhanced security.
+++ Last updated : 12/07/2022+++
+ - devx-track-csharp
+ - passwordless-dotnet
+ms.devlang: csharp
++
+# Migrate an application to use passwordless connections with Azure Service Bus
+
+Application requests to Azure Service Bus must be authenticated using either account access keys or passwordless connections. However, you should prioritize passwordless connections in your applications when possible. This tutorial explores how to migrate from traditional authentication methods to more secure, passwordless connections.
+
+## Security risks associated with access keys
+
+The following code example demonstrates how to connect to Azure Service Bus using a connection string that includes an access key. When you create a Service Bus namespace, Azure generates these keys and connection strings automatically. Many developers gravitate towards this solution because it feels familiar to options they've worked with in the past. If your application currently uses connection strings, consider migrating to passwordless connections using the steps described in this document.
+
+```csharp
+var serviceBusClient = new ServiceBusClient(
+ "<NAMESPACE-CONNECTION-STRING>",
+ clientOptions);
+```
+
+Connection strings should be used with caution. Developers must be diligent to never expose the keys in an unsecure location. Anyone who gains access to the key is able to authenticate. For example, if an account key is accidentally checked into source control, sent through an unsecure email, pasted into the wrong chat, or viewed by someone who shouldn't have permission, there's risk of a malicious user accessing the application. Instead, consider updating your application to use passwordless connections.
+
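To make the risk concrete, the sketch below splits a made-up connection string into its parts, showing that the shared access key travels in plain text inside it. Every value here is fake; this is illustrative only, not a parser from any Azure SDK.

```python
# Illustrative only: the shared access key is embedded in the connection
# string, so anyone who can read the string can authenticate.

def parse_connection_string(conn: str) -> dict:
    """Split 'Key=Value;Key=Value' pairs into a dict (keys keep their case)."""
    return dict(part.split("=", 1) for part in conn.split(";") if part)

fake = ("Endpoint=sb://contoso.servicebus.windows.net/;"
        "SharedAccessKeyName=RootManageSharedAccessKey;"
        "SharedAccessKey=FAKEKEY123=")
props = parse_connection_string(fake)
print(props["SharedAccessKey"])  # the secret is right there in the string
```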
+## Migrate to passwordless connections
++
+## Steps to migrate an app to use passwordless authentication
+
+The following steps explain how to migrate an existing application to use passwordless connections instead of a key-based solution. You'll first configure a local development environment, and then apply those concepts to an Azure app hosting environment. These same migration steps should apply whether you're using access keys directly, or through connection strings.
+
+### Configure roles and users for local development authentication
++
+### Sign in and migrate the app code to use passwordless connections
+
+For local development, make sure you're authenticated with the same Azure AD account you assigned the role to for the Service Bus namespace. You can authenticate via the Azure CLI, Visual Studio, Azure PowerShell, or other tools such as IntelliJ.
++
+Next you'll need to update your code to use passwordless connections.
+
+1. To use `DefaultAzureCredential` in a .NET application, add the **Azure.Identity** NuGet package to your application.
+
+ ```dotnetcli
+ dotnet add package Azure.Identity
+ ```
+
+1. At the top of your `Program.cs` file, add the following `using` statement:
+
+ ```csharp
+ using Azure.Identity;
+ ```
+
+1. Identify the locations in your code that currently create a `ServiceBusClient` to connect to Azure Service Bus. This task is often handled in `Program.cs`, potentially as part of your service registration with the .NET dependency injection container. Update your code to match the following example:
+
+ ```csharp
+ var clientOptions = new ServiceBusClientOptions
+ {
+ TransportType = ServiceBusTransportType.AmqpWebSockets
+ };
+
+ //TODO: Replace the "<SERVICE-BUS-NAMESPACE-NAME>" placeholder.
+ client = new ServiceBusClient(
+ "<SERVICE-BUS-NAMESPACE-NAME>.servicebus.windows.net",
+ new DefaultAzureCredential(),
+ clientOptions);
+ ```
+
+1. Make sure to update the Service Bus namespace in the URI of your `ServiceBusClient`. You can find the namespace on the overview page of the Azure portal.
+
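Note that the passwordless `ServiceBusClient` takes a fully qualified namespace rather than a connection string. A minimal sketch of that derivation, with "contoso" as a placeholder namespace name:

```python
# Minimal sketch: Service Bus namespaces resolve under the
# servicebus.windows.net domain; "contoso" is a placeholder.

def fully_qualified_namespace(namespace_name: str) -> str:
    """Turn a bare namespace name into the host the client expects."""
    return f"{namespace_name}.servicebus.windows.net"

print(fully_qualified_namespace("contoso"))  # contoso.servicebus.windows.net
```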
+#### Run the app locally
+
+After making these code changes, run your application locally. The new configuration should pick up your local credentials, such as the Azure CLI, Visual Studio, or IntelliJ. The roles you assigned to your local dev user in Azure will allow your app to connect to the Azure service locally.
+
+### Configure the Azure hosting environment
+
+Once your application is configured to use passwordless connections and runs locally, the same code can authenticate to Azure services after it's deployed to Azure. For example, an application deployed to an Azure App Service instance that has a managed identity enabled can connect to Azure Service Bus.
+
+#### Create the managed identity using the Azure portal
++
+Alternatively, you can also enable managed identity on an Azure hosting environment using the Azure CLI.
+
+### [Service Connector](#tab/service-connector-identity)
+
+You can use Service Connector to create a connection between an Azure compute hosting environment and a target service using the Azure CLI. The CLI automatically handles creating a managed identity and assigns the proper role, as explained in the [portal instructions](#create-the-managed-identity-using-the-azure-portal).
+
+If you're using an Azure App Service, use the `az webapp connection` command:
+
+```azurecli
+az webapp connection create servicebus \
+ --resource-group <resource-group-name> \
+ --name <webapp-name> \
+ --target-resource-group <target-resource-group-name> \
+ --namespace <target-service-bus-namespace> \
+ --system-identity
+```
+
+If you're using Azure Spring Apps, use the `az spring connection` command:
+
+```azurecli
+az spring connection create servicebus \
+ --resource-group <resource-group-name> \
+ --service <service-instance-name> \
+ --app <app-name> \
+ --deployment <deployment-name> \
+ --target-resource-group <target-resource-group> \
+ --namespace <target-service-bus-namespace> \
+ --system-identity
+```
+
+If you're using Azure Container Apps, use the `az containerapp connection` command:
+
+```azurecli
+az containerapp connection create servicebus \
+ --resource-group <resource-group-name> \
+ --name <containerapp-name> \
+ --target-resource-group <target-resource-group-name> \
+ --namespace <target-service-bus-namespace> \
+ --system-identity
+```
+
+### [Azure App Service](#tab/app-service-identity)
+
+You can assign a managed identity to an Azure App Service instance with the [az webapp identity assign](/cli/azure/webapp/identity) command.
+
+```azurecli
+az webapp identity assign \
+ --resource-group <resource-group-name> \
+ --name <webapp-name>
+```
+
+### [Azure Spring Apps](#tab/spring-apps-identity)
+
+You can assign a managed identity to an Azure Spring Apps instance with the [az spring app identity assign](/cli/azure/spring/app/identity) command.
+
+```azurecli
+az spring app identity assign \
+ --resource-group <resource-group-name> \
+ --name <app-name> \
+ --service <service-name>
+```
+
+### [Azure Container Apps](#tab/container-apps-identity)
+
You can assign a managed identity to an Azure Container Apps instance with the [az containerapp identity assign](/cli/azure/containerapp/identity) command.
+
+```azurecli
+az containerapp identity assign \
+ --resource-group <resource-group-name> \
+ --name <app-name>
+```
+
+### [Azure Virtual Machines](#tab/virtual-machines-identity)
+
+You can assign a managed identity to a virtual machine with the [az vm identity assign](/cli/azure/vm/identity) command.
+
+```azurecli
+az vm identity assign \
+ --resource-group <resource-group-name> \
+ --name <virtual-machine-name>
+```
+
+### [Azure Kubernetes Service](#tab/aks-identity)
+
+You can assign a managed identity to an Azure Kubernetes Service (AKS) instance with the [az aks update](/cli/azure/aks) command.
+
+```azurecli
+az aks update \
+ --resource-group <resource-group-name> \
+ --name <cluster-name> \
+ --enable-managed-identity
+```
+++
+#### Assign roles to the managed identity
+
+Next, you need to grant permissions to the managed identity you created to access your Service Bus. You can do this by assigning a role to the managed identity, just like you did with your local development user.
+
+### [Service Connector](#tab/assign-role-service-connector)
+
+If you connected your services using the Service Connector you don't need to complete this step. The necessary configurations were handled for you:
+
+* If you selected a managed identity while creating the connection, a system-assigned managed identity was created for your app and assigned the **Azure Service Bus Data Owner** role on the Service Bus.
+
+* If you selected connection string, the connection string was added as an app environment variable.
+
+### [Azure portal](#tab/assign-role-azure-portal)
+
+1. Navigate to your Service Bus overview page and select **Access Control (IAM)** from the left navigation.
+
+1. Choose **Add role assignment**.
+
+ :::image type="content" source="../../includes/passwordless/media/migration-add-role-small.png" alt-text="Screenshot showing how to add a role to a managed identity." lightbox="../../includes/passwordless/media/migration-add-role.png":::
+
+1. In the **Role** search box, search for *Azure Service Bus Data Owner*, which is a common role used to manage data operations for Service Bus namespaces. You can assign whatever role is appropriate for your use case. Select *Azure Service Bus Data Owner* from the list and choose **Next**.
+
+1. On the **Add role assignment** screen, for the **Assign access to** option, select **Managed identity**. Then choose **+Select members**.
+
+1. In the flyout, search for the managed identity you created by entering the name of your app service. Select the system assigned identity, and then choose **Select** to close the flyout menu.
+
+ :::image type="content" source="../../includes/passwordless/media/migration-select-identity-small.png" alt-text="Screenshot showing how to select the assigned managed identity." lightbox="../../includes/passwordless/media/migration-select-identity.png":::
+
+1. Select **Next** a couple times until you're able to select **Review + assign** to finish the role assignment.
+
+### [Azure CLI](#tab/assign-role-azure-cli)
+
+To assign a role at the resource level using the Azure CLI, you first must retrieve the resource ID using the `az servicebus show` command. You can filter the output properties using the `--query` parameter.
+
+```azurecli
+az servicebus show \
+ --resource-group '<your-resource-group-name>' \
+ --name '<your-service-bus-namespace>' \
+ --query id
+```
+
+Copy the output ID from the preceding command. You can then assign roles using the `az role` command of the Azure CLI.
+
+```azurecli
+az role assignment create \
+ --assignee "<your-username>" \
+ --role "Azure Service Bus Data Owner" \
+ --scope "<your-resource-id>"
+```
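The two commands above can be chained with command substitution so the resource ID never needs to be copied by hand. This is a minimal sketch reusing the article's placeholder values (replace them before running; the `az` calls run only when the CLI is installed):

```shell
# Placeholders from the article; substitute your real values.
RG='<your-resource-group-name>'
NS='<your-service-bus-namespace>'
ASSIGNEE='<your-username>'

if command -v az >/dev/null 2>&1; then
  # -o tsv strips the JSON quotes so the ID can be passed straight to --scope.
  SCOPE=$(az servicebus show --resource-group "$RG" --name "$NS" --query id -o tsv) &&
  az role assignment create \
    --assignee "$ASSIGNEE" \
    --role "Azure Service Bus Data Owner" \
    --scope "$SCOPE" ||
  echo "Role assignment failed; check that the placeholder values were replaced."
else
  echo "Azure CLI not installed; install it and run 'az login' first."
fi
```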
+++
+#### Test the app
+
+After making these code changes, browse to your hosted application in the browser. Your app should be able to connect to the Service Bus successfully. Keep in mind that it may take several minutes for the role assignments to propagate through your Azure environment. Your application is now configured to run both locally and in a production environment without the developers having to manage secrets in the application itself.
+
+## Next steps
+
+In this tutorial, you learned how to migrate an application to passwordless connections.
service-fabric Service Fabric Linux Windows Differences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-linux-windows-differences.md
There are some features that are supported on Windows, but not yet on Linux. Eve
* DNS service for Service Fabric services (DNS service is supported for containers on Linux)
* CLI command equivalents of certain PowerShell commands (list below, most of which apply only to standalone clusters)
* [Differences in log implementation that may affect scalability](service-fabric-concepts-scalability.md#choosing-a-platform)
+* [Difference in Service Fabric Events Channel](service-fabric-diagnostics-overview.md#platform-cluster-monitoring)
+ ## PowerShell cmdlets that do not work against a Linux Service Fabric cluster
service-health Impacted Resources Outage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/impacted-resources-outage.md
Title: Impacted resources support for outages
-description: This article details what is communicated to users and where they can view information about their impacted resources.
+ Title: Resource impact from Azure outages
+description: This article details where to find information from Azure Service Health about how Azure outages might affect your resources.
Last updated 12/16/2022
-# Impacted Resources for Azure Outages
+# Resource impact from Azure outages
-[Azure Service Health](https://azure.microsoft.com/get-started/azure-portal/service-health/) helps customers view any health events that impact their Subscriptions and Tenants. The Service Issues blade on Service Health shows any ongoing problems in Azure services that are impacting your resources. You can understand when the issue began, and what services and regions are impacted. Previously, the Potential Impact tab on the Service Issues blade was within the details of an incident. It showed any resources under a customer's Subscriptions that may be impacted by an outage, and their resource health signal to help customers evaluate impact.
+[Azure Service Health](https://azure.microsoft.com/get-started/azure-portal/service-health/) helps customers view any health events that affect their subscriptions and tenants. In the Azure portal, the **Service Issues** pane in **Service Health** shows any ongoing problems in Azure services that are affecting your resources. You can understand when each problem began, and what services and regions are affected.
-**In support of the impacted resource experience, Service Health has enabled a new feature to:**
+Previously, the **Potential Impact** tab on the **Service Issues** pane appeared in the incident details section. It showed any resources under a subscription that an outage might affect, along with a signal from [Azure Resource Health](../service-health/resource-health-overview.md) to help you evaluate impact.
-- Replace "Potential Impact" tab with "Impacted Resources" tab on Service Issues.
-- Display resources that are confirmed to be impacted by an outage.
-- Display resources that are not confirmed to be impacted by an outage but could be impacted because they fall under a service or region that is confirmed to be impacted by an outage.
-- Resource Health status of both confirmed and potential impacted resources showing the availability of the resource.
+In support of the experience of viewing affected resources, Service Health has enabled a new feature to:
-This article details what is communicated to users and where they can view information about their impacted resources.
+- Replace the **Potential Impact** tab with an **Impacted Resources** tab on the **Service Issues** pane.
+- Display resources that are confirmed to be affected by an outage.
+- Display resources that are not confirmed to be affected by an outage but could be affected because they fall under a service or region that's confirmed to be affected.
+- Provide the status of both confirmed and potential affected resources to show their availability.
+
+This article details what Service Health communicates and where you can view information about your affected resources.
>[!Note]
->This feature will be enabled to users in phases. Only selected subscription-level customers will start seeing the experience initially and gradually expand to 100% of subscription customers. In future this capability will be live for tenant level customers.
+>This feature will be rolled out in phases. Initially, only selected subscription-level customers will get the experience. The rollout will gradually expand to 100 percent of subscription customers. It will go live for tenant-level customers in the future.
-## Impacted Resources for Outages on the Service Health Portal
+## View affected resources
-The impacted resources tab under Azure portal-> Service Health ->Service Issues will display resources that are Confirmed to be impacted by an outage and resources that could Potentially be impacted by an outage. Below is an example of impacted resources tab for an incident on Service Issues with Confirmed and Potential impact resources.
+In the Azure portal, the **Impacted Resources** tab under **Service Health** > **Service Issues** displays resources that are or might be affected by an outage. The following example of the **Impacted Resources** tab shows an incident with confirmed and potentially affected resources.
-##### Service Health provides the below information to users whose resources are impacted by an outage:
+Service Health provides the following information:
|Column |Description |
|||
-|Resource Name|Name of resource|
-|Resource Health|Health status of a resource at that point in time|
-|Impact Type|Tells customers if their resource is confirmed to be impacted or potentially impacted|
-|Resource Type|Type of resource impacted (.ie Virtual Machines)|
-|Resource Group|Resource group which contains the impacted resource|
-|Location|Location which contains the impacted resource|
-|Subscription ID|Unique ID for the subscription that contains the impacted resource|
-|Subscription Name|Subscription name for the subscription which contains the impacted resource|
-|Tenant ID|Unique ID for the tenant that contains the impacted resource|
-
-## Resource Name
-
-This will be the resource name of the resource. The resource name will be a clickable link that links to Resource Health page for this resource.
-
-It will be text only if there is no Resource Health signal available for this resource.
-
-## Impact Type
-
-This column will display values "Confirmed vs Potential"
-
-- *Confirmed*: Resource that was confirmed to be impacted by an outage. Customers should check the Summary section to make sure customer action items (if any) are taken to remediate the issue.
-- *Potential*: Resource that is not confirmed to be impacted by an outage but could potentially be impacted as it is under a service or region which is impacted by an outage. Customers are advised to look at the resource health and make sure everything is working as planned.
-
-## Resource Health
-
-The health status listed under **[Resource Health](../service-health/resource-health-overview.md)** refers to the status of a given resource at that point in time.
-
-- A health status of available means your resource is healthy but it may have been affected by the service event at a previous point in time.
-- A health status of degraded or unavailable (caused by a customer-initiated action or platform-initiated action) means your resource is impacted but could be now healthy and pending a status update.
-
-
+|**Resource Name**|Name of the resource. This name is a clickable link that goes to the Resource Health page for the resource. If no Resource Health signal is available for the resource, this name is text only.|
+|**Resource Health**|Health status of the resource: <br><br>**Available**: Your resource is healthy, but a service event might have affected it at a previous point in time. <br><br>**Degraded** or **Unavailable**: A customer-initiated action or a platform-initiated action might have caused this status. It means your resource was affected but might now be healthy, pending a status update. <br><br>:::image type="content" source="./media/impacted-resource-outage/rh-cropped.PNG" alt-text="Screenshot of health statuses for a resource.":::|
+|**Impact Type**|Indication of whether the resource is or might be affected: <br><br>**Confirmed**: The resource is confirmed to be affected by an outage. Check the **Summary** section for any action items that you can take to remediate the problem. <br><br>**Potential**: The resource is not confirmed to be affected, but it could potentially be affected because it's under a service or region that an outage is affecting. Check the **Resource Health** column to make sure that everything is working as planned.|
+|**Resource Type**|Type of affected resource (for example, virtual machine).|
+|**Resource Group**|Resource group that contains the affected resource.|
+|**Location**|Location that contains the affected resource.|
+|**Subscription ID**|Unique ID for the subscription that contains the affected resource.|
+|**Subscription Name**|Subscription name for the subscription that contains the affected resource.|
+|**Tenant ID**|Unique ID for the tenant that contains the affected resource.|
>[!Note]
->Not all resources will show resource health status. This will be shown on resources for which we have a resource health signal available only. The status of resources where the health signal is not available is shown as **"N/A"** and corresponding Resource name value will be text only not a clickable link.
+>Not all resources show a Resource Health status. The status appears only on resources for which a Resource Health signal is available. The status of resources for which a Resource Health signal is not available appears as **N/A**, and the corresponding **Resource Name** value is text instead of a clickable link.
-## Filters
+## Filter results
-Customers can filter on the results using the below filters:
+You can adjust the results by using these filters:
-- Impact type: Confirmed or Potential
-- Subscription ID: All Subscription IDs the user has access to
-- Status: Resource Health status column that shows Available, Unavailable, Degraded, Unknown, N/A
+- **Impact type**: Select **Confirmed** or **Potential**.
+- **Subscription ID**: Show all subscription IDs that the user can access.
+- **Status**: Focus on Resource Health status by selecting **Available**, **Unavailable**, **Degraded**, **Unknown**, or **N/A**.
## Export to CSV
-The list of impacted resources can be exported to an excel file by clicking on this option.
+To export the list of affected resources to an Excel file, select the **Export to CSV** option.
-## Accessing Impacted Resources programmatically via an API
+## Access affected resources programmatically via an API
-Outage impacted resource information can be retrieved programmatically using the Events API. The API documentation [here](https://learn.microsoft.com/rest/api/resourcehealth/2022-05-01/impacted-resources/list-by-subscription-id-and-event-id?tabs=HTTP) provides the details around how customers can access this data.
+You can get information about outage-affected resources programmatically by using the Events API. For details on how to access this data, see the [API documentation](/rest/api/resourcehealth/2022-05-01/impacted-resources/list-by-subscription-id-and-event-id?tabs=HTTP).
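For example, the Azure CLI's generic `az rest` command can call the endpoint directly. The path below is a sketch based on the linked 2022-05-01 API reference; the subscription ID and event tracking ID are placeholders, so verify the path against the reference before relying on it:

```shell
SUB='<subscription-id>'
EVENT='<event-tracking-id>'

# Impacted-resources list endpoint from the linked API reference (2022-05-01).
URL="https://management.azure.com/subscriptions/${SUB}/providers/Microsoft.ResourceHealth/events/${EVENT}/impactedResources?api-version=2022-05-01"

if command -v az >/dev/null 2>&1; then
  az rest --method get --url "$URL" ||
  echo "Request failed; replace the placeholders with real IDs."
else
  echo "GET $URL"
fi
```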
-## Next Steps
-- See [Introduction to Azure Service Health dashboard](service-health-overview.md) and [Introduction to Azure Resource Health](resource-health-overview.md) to understand more about them.
-- [Frequently asked questions about Azure Resource Health](resource-health-faq.yml)
+## Next steps
+- [Introduction to the Azure Service Health dashboard](service-health-overview.md)
+- [Introduction to Azure Resource Health](resource-health-overview.md)
+- [Frequently asked questions about Azure Resource Health](resource-health-faq.yml)
storage Blobfuse2 How To Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-how-to-deploy.md
Previously updated : 12/02/2022 Last updated : 01/26/2023
To check your version of Linux, run the following command:

```bash
lsb_release -a
```
-If no binaries are available for your distribution, you can [build the binaries from source code](https://github.com/MicrosoftDocs/azure-docs-pr/pull/203174#option-2-build-from-source).
+If no binaries are available for your distribution, you can [build the binaries from source code](#option-2-build-the-binaries-from-source-code).
To install BlobFuse2 from the repositories:
storage Storage Quickstart Blobs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-cli.md
Previously updated : 04/04/2022 Last updated : 01/25/2023
In this example, you upload a blob to the container you created in the last step
az storage blob upload \
    --account-name <storage-account> \
    --container-name <container> \
- --name helloworld \
- --file helloworld \
+ --name myFile.txt \
+ --file myFile.txt \
    --auth-mode login
```
Use the [az storage blob download](/cli/azure/storage/blob) command to download
az storage blob download \
    --account-name <storage-account> \
    --container-name <container> \
- --name helloworld \
- --file ~/destination/path/for/file \
+ --name myFile.txt \
+ --file <~/destination/path/for/file> \
    --auth-mode login
```
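After an upload or download, you can confirm a blob's presence with `az storage blob list`. This is a small sketch reusing the article's placeholders (the call runs only when the CLI is installed):

```shell
ACCOUNT='<storage-account>'
CONTAINER='<container>'

if command -v az >/dev/null 2>&1; then
  # Prints just the blob names, one per line.
  az storage blob list \
    --account-name "$ACCOUNT" \
    --container-name "$CONTAINER" \
    --query "[].name" \
    --output tsv \
    --auth-mode login ||
  echo "Listing failed; replace the placeholders first."
else
  echo "Azure CLI not installed; skipping the listing step."
fi
```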
The following example uses AzCopy to upload a local file to a blob. Remember to
```bash
azcopy login
-azcopy copy 'C:\myDirectory\myTextFile.txt' 'https://mystorageaccount.blob.core.windows.net/mycontainer/myTextFile.txt'
+azcopy copy 'C:\myDirectory\myFile.txt' 'https://mystorageaccount.blob.core.windows.net/mycontainer/myFile.txt'
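# The destination URL always follows the pattern
#   https://<account>.blob.core.windows.net/<container>/<blob>
# so a script can assemble it instead of hard-coding it
# (the account, container, and blob names below are hypothetical):
ACCOUNT='mystorageaccount'
CONTAINER='mycontainer'
BLOB='myFile.txt'
DEST="https://${ACCOUNT}.blob.core.windows.net/${CONTAINER}/${BLOB}"
# azcopy copy 'C:\myDirectory\myFile.txt' "$DEST"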
```

## Clean up resources
storage Migrate Azure Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/migrate-azure-credentials.md
ms.devlang: csharp
-# Migrate an application to use passwordless connections with Azure services
+# Migrate an application to use passwordless connections with Azure Storage
Application requests to Azure Storage must be authenticated using either account access keys or passwordless connections. However, you should prioritize passwordless connections in your applications when possible. This tutorial explores how to migrate from traditional authentication methods to more secure, passwordless connections.
Storage account keys should be used with caution. Developers must be diligent to
## Migrate to passwordless connections
-Many Azure services support passwordless connections through Azure AD and Role Based Access control (RBAC). These techniques provide robust security features and can be implemented using `DefaultAzureCredential` from the Azure Identity client libraries.
-
-> [!IMPORTANT]
-> Some languages must implement `DefaultAzureCredential` explicitly in their code, while others utilize `DefaultAzureCredential` internally through underlying plugins or drivers.
-
-`DefaultAzureCredential` supports multiple authentication methods and automatically determines which should be used at runtime. This approach enables your app to use different authentication methods in different environments (local dev vs. production) without implementing environment-specific code.
-
-The order and locations in which `DefaultAzureCredential` searches for credentials can be found in the [Azure Identity library overview](/dotnet/api/overview/azure/Identity-readme#defaultazurecredential) and varies between languages. For example, when working locally with .NET, `DefaultAzureCredential` will generally authenticate using the account the developer used to sign-in to Visual Studio. When the app is deployed to Azure, `DefaultAzureCredential` will automatically switch to use a [managed identity](../../active-directory/managed-identities-azure-resources/overview.md). No code changes are required for this transition.
--
-> [!NOTE]
-> A managed identity provides a security identity to represent an app or service. The identity is managed by the Azure platform and does not require you to provision or rotate any secrets. You can read more about managed identities in the [overview](../../active-directory/managed-identities-azure-resources/overview.md) documentation.
-
-The following code example demonstrates how to connect to an Azure Storage account using passwordless connections. The next section describes how to migrate to this setup in more detail.
-
-A .NET Core application can pass an instance of `DefaultAzureCredential` into the constructor of a service client class. `DefaultAzureCredential` will automatically discover the credentials that are available in that environment.
-
-```csharp
-var blobServiceClient = new BlobServiceClient(
- new Uri("https://<your-storage-account>.blob.core.windows.net"),
- new DefaultAzureCredential());
-```
## Steps to migrate an app to use passwordless authentication
Once your application is configured to use passwordless connections and runs loc
#### Create the managed identity using the Azure portal
-The following steps demonstrate how to create a system-assigned managed identity for various web hosting services. The managed identity can securely connect to other Azure Services using the app configurations you set up previously.
-
-### [Service Connector](#tab/service-connector)
-
-Some app hosting environments support Service Connector, which helps you connect Azure compute services to other backing services. Service Connector automatically configures network settings and connection information. You can learn more about Service Connector and which scenarios are supported on the [overview page](../../service-connector/overview.md).
-
-The following compute services are currently supported:
-
-* Azure App Service
-* Azure Spring Cloud
-* Azure Container Apps (preview)
-
-For this migration guide you will use App Service, but the steps are similar on Azure Spring Apps and Azure Container Apps.
-
-> [!NOTE]
-> Azure Spring Apps currently only supports Service Connector using connection strings.
-
-1. On the main overview page of your App Service, select **Service Connector** from the left navigation.
-
-1. Select **+ Create** from the top menu and the **Create connection** panel will open. Enter the following values:
-
- * **Service type**: Choose **Storage blob**.
- * **Subscription**: Select the subscription you would like to use.
- * **Connection Name**: Enter a name for your connection, such as *connector_appservice_blob*.
- * **Client type**: Leave the default value selected or choose the specific client you'd like to use.
-
- Select **Next: Authentication**.
-
- :::image type="content" source="media/migration-create-identity-small.png" alt-text="Screenshot showing how to create a system assigned managed identity." lightbox="media/migration-create-identity.png":::
-
-1. Make sure **System assigned managed identity (Recommended)** is selected, and then choose **Next: Networking**.
-1. Leave the default values selected, and then choose **Next: Review + Create**.
-1. After Azure validates your settings, select **Create**.
-
-The Service Connector will automatically create a system-assigned managed identity for the app service. The connector will also assign the managed identity a **Storage Blob Data Contributor** role for the storage account you selected.
-
-### [Azure App Service](#tab/app-service)
-
-1. On the main overview page of your Azure App Service instance, select **Identity** from the left navigation.
-
-1. Under the **System assigned** tab, make sure to set the **Status** field to **on**. A system assigned identity is managed by Azure internally and handles administrative tasks for you. The details and IDs of the identity are never exposed in your code.
-
- :::image type="content" source="media/migration-create-identity-small.png" alt-text="Screenshot showing how to create a system assigned managed identity." lightbox="media/migration-create-identity.png":::
-
-### [Azure Spring Apps](#tab/spring-apps)
-
-1. On the main overview page of your Azure Spring Apps instance, select **Identity** from the left navigation.
-
-1. Under the **System assigned** tab, make sure to set the **Status** field to **on**. A system assigned identity is managed by Azure internally and handles administrative tasks for you. The details and IDs of the identity are never exposed in your code.
-
- :::image type="content" source="media/storage-migrate-credentials/spring-apps-identity.png" alt-text="Screenshot showing how to enable managed identity for Azure Spring Apps.":::
-
-### [Azure Container Apps](#tab/container-apps)
-
-1. On the main overview page of your Azure Container Apps instance, select **Identity** from the left navigation.
-
-1. Under the **System assigned** tab, make sure to set the **Status** field to **on**. A system assigned identity is managed by Azure internally and handles administrative tasks for you. The details and IDs of the identity are never exposed in your code.
-
- :::image type="content" source="media/storage-migrate-credentials/container-apps-identity.png" alt-text="Screenshot showing how to enable managed identity for Azure Container Apps.":::
-
-### [Azure virtual machines](#tab/virtual-machines)
-
-1. On the main overview page of your virtual machine, select **Identity** from the left navigation.
-
-1. Under the **System assigned** tab, make sure to set the **Status** field to **on**. A system assigned identity is managed by Azure internally and handles administrative tasks for you. The details and IDs of the identity are never exposed in your code.
-
- :::image type="content" source="media/storage-migrate-credentials/virtual-machine-identity.png" alt-text="Screenshot showing how to enable managed identity for virtual machines.":::
--
-You can also enable managed identity on an Azure hosting environment using the Azure CLI.
+Alternatively, you can enable managed identity on an Azure hosting environment by using the Azure CLI.
### [Service Connector](#tab/service-connector-identity)
storage File Sync Firewall And Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-firewall-and-proxy.md
The following table describes the required domains for communication:
| Service | Public cloud endpoint | Azure Government endpoint | Usage |
||-|||
-| **Azure Resource Manager** | `https://management.azure.com` | https://management.usgovcloudapi.net | Any user call (like PowerShell) goes to/through this URL, including the initial server registration call. |
-| **Azure Active Directory** | https://login.windows.net<br>`https://login.microsoftonline.com` | https://login.microsoftonline.us | Azure Resource Manager calls must be made by an authenticated user. To succeed, this URL is used for user authentication. |
-| **Azure Active Directory** | https://graph.microsoft.com/ | https://graph.microsoft.com/ | As part of deploying Azure File Sync, a service principal in the subscription's Azure Active Directory will be created. This URL is used for that. This principal is used for delegating a minimal set of rights to the Azure File Sync service. The user performing the initial setup of Azure File Sync must be an authenticated user with subscription owner privileges. |
-| **Azure Active Directory** | https://secure.aadcdn.microsoftonline-p.com | Use the public endpoint URL. | This URL is accessed by the Active Directory authentication library that the Azure File Sync server registration UI uses to log in the administrator. |
+| **Azure Resource Manager** | `https://management.azure.com` | `https://management.usgovcloudapi.net` | Any user call (like PowerShell) goes to/through this URL, including the initial server registration call. |
+| **Azure Active Directory** | `https://login.windows.net`<br>`https://login.microsoftonline.com` | `https://login.microsoftonline.us` | Azure Resource Manager calls must be made by an authenticated user. To succeed, this URL is used for user authentication. |
+| **Azure Active Directory** | `https://graph.microsoft.com/` | `https://graph.microsoft.com/` | As part of deploying Azure File Sync, a service principal in the subscription's Azure Active Directory will be created. This URL is used for that. This principal is used for delegating a minimal set of rights to the Azure File Sync service. The user performing the initial setup of Azure File Sync must be an authenticated user with subscription owner privileges. |
+| **Azure Active Directory** | `https://secure.aadcdn.microsoftonline-p.com` | Use the public endpoint URL. | This URL is accessed by the Active Directory authentication library that the Azure File Sync server registration UI uses to log in the administrator. |
| **Azure Storage** | &ast;.core.windows.net | &ast;.core.usgovcloudapi.net | When the server downloads a file, it performs that data movement more efficiently by talking directly to the Azure file share in the storage account. The server has a SAS key that only allows for targeted file share access. |
| **Azure File Sync** | &ast;.one.microsoft.com<br>&ast;.afs.azure.net | &ast;.afs.azure.us | After initial server registration, the server receives a regional URL for the Azure File Sync service instance in that region. The server can use the URL to communicate directly and efficiently with the instance handling its sync. |
-| **Microsoft PKI** | https://www.microsoft.com/pki/mscorp/cps<br>http://crl.microsoft.com/pki/mscorp/crl/<br>http://mscrl.microsoft.com/pki/mscorp/crl/<br>http://ocsp.msocsp.com<br>http://ocsp.digicert.com/<br>http://crl3.digicert.com/ | https://www.microsoft.com/pki/mscorp/cps<br>http://crl.microsoft.com/pki/mscorp/crl/<br>http://mscrl.microsoft.com/pki/mscorp/crl/<br>http://ocsp.msocsp.com<br>http://ocsp.digicert.com/<br>http://crl3.digicert.com/ | Once the Azure File Sync agent is installed, the PKI URL is used to download intermediate certificates required to communicate with the Azure File Sync service and Azure file share. The OCSP URL is used to check the status of a certificate. |
+| **Microsoft PKI** | `https://www.microsoft.com/pki/mscorp/cps`<br>`http://crl.microsoft.com/pki/mscorp/crl/`<br>`http://mscrl.microsoft.com/pki/mscorp/crl/`<br>`http://ocsp.msocsp.com`<br>`http://ocsp.digicert.com/`<br>`http://crl3.digicert.com/` | `https://www.microsoft.com/pki/mscorp/cps`<br>`http://crl.microsoft.com/pki/mscorp/crl/`<br>`http://mscrl.microsoft.com/pki/mscorp/crl/`<br>`http://ocsp.msocsp.com`<br>`http://ocsp.digicert.com/`<br>`http://crl3.digicert.com/` | Once the Azure File Sync agent is installed, the PKI URL is used to download intermediate certificates required to communicate with the Azure File Sync service and Azure file share. The OCSP URL is used to check the status of a certificate. |
| **Microsoft Update** | &ast;.update.microsoft.com<br>&ast;.download.windowsupdate.com<br>&ast;.ctldl.windowsupdate.com<br>&ast;.dl.delivery.mp.microsoft.com<br>&ast;.emdl.ws.microsoft.com | &ast;.update.microsoft.com<br>&ast;.download.windowsupdate.com<br>&ast;.ctldl.windowsupdate.com<br>&ast;.dl.delivery.mp.microsoft.com<br>&ast;.emdl.ws.microsoft.com | Once the Azure File Sync agent is installed, the Microsoft Update URLs are used to download Azure File Sync agent updates. |

> [!IMPORTANT]
synapse-analytics Implementation Success Assess Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-assess-environment.md
Title: "Synapse implementation success methodology: Assess environment" description: "Learn how to assess your environment to help evaluate the solution design and make informed technology decisions to implement Azure Synapse Analytics."--++
synapse-analytics Implementation Success Evaluate Data Integration Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-data-integration-design.md
Title: "Synapse implementation success methodology: Evaluate data integration design" description: "Learn how to evaluate the data integration design and validate that it meets guidelines and requirements."--++
synapse-analytics Implementation Success Evaluate Dedicated Sql Pool Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-dedicated-sql-pool-design.md
Title: "Synapse implementation success methodology: Evaluate dedicated SQL pool design" description: "Learn how to evaluate your dedicated SQL pool design to identify issues and validate that it meets guidelines and requirements."--++
synapse-analytics Implementation Success Evaluate Project Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-project-plan.md
Title: "Synapse implementation success methodology: Evaluate project plan" description: "Learn how to evaluate your modern data warehouse project plan before the project starts."--++
synapse-analytics Implementation Success Evaluate Serverless Sql Pool Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-serverless-sql-pool-design.md
Title: "Synapse implementation success methodology: Evaluate serverless SQL pool design" description: "Learn how to evaluate your serverless SQL pool design to identify issues and validate that it meets guidelines and requirements."--++
synapse-analytics Implementation Success Evaluate Solution Development Environment Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-solution-development-environment-design.md
Title: "Synapse implementation success methodology: Evaluate solution development environment design" description: "Learn how to set up multiple environments for your modern data warehouse project to support development, testing, and production."--++
synapse-analytics Implementation Success Evaluate Spark Pool Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-spark-pool-design.md
Title: "Synapse implementation success methodology: Evaluate Spark pool design" description: "Learn how to evaluate your Spark pool design to identify issues and validate that it meets guidelines and requirements."--++
synapse-analytics Implementation Success Evaluate Team Skill Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-team-skill-sets.md
Title: "Synapse implementation success methodology: Evaluate team skill sets" description: "Learn how to evaluate your team of skilled resources that will implement your Azure Synapse solution."--++
synapse-analytics Implementation Success Evaluate Workspace Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-workspace-design.md
Title: "Synapse implementation success methodology: Evaluate workspace design" description: "Learn how to evaluate the Synapse workspace design and validate that it meets guidelines and requirements."--++
synapse-analytics Implementation Success Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-overview.md
Title: Azure Synapse implementation success by design description: "Learn about the Azure Synapse success series of articles that's designed to help you deliver a successful implementation of Azure Synapse Analytics."--++
synapse-analytics Implementation Success Perform Monitoring Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-perform-monitoring-review.md
Title: "Synapse implementation success methodology: Perform monitoring review" description: "Learn how to perform monitoring of your Azure Synapse solution."--++
synapse-analytics Implementation Success Perform Operational Readiness Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-perform-operational-readiness-review.md
Title: "Synapse implementation success methodology: Perform operational readiness review" description: "Learn how to perform an operational readiness review to evaluate your solution for its preparedness to provide optimal services to users."--++
synapse-analytics Implementation Success Perform User Readiness And Onboarding Plan Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-perform-user-readiness-and-onboarding-plan-review.md
Title: "Synapse implementation success methodology: Perform user readiness and onboarding plan review" description: "Learn how to perform user readiness and onboarding of new users to ensure successful adoption of your data warehouse."--++
synapse-analytics Proof Of Concept Playbook Dedicated Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/proof-of-concept-playbook-dedicated-sql-pool.md
Title: "Synapse POC playbook: Data warehousing with dedicated SQL pool in Azure Synapse Analytics" description: "A high-level methodology for preparing and running an effective Azure Synapse Analytics proof of concept (POC) project for dedicated SQL pool."--++
synapse-analytics Proof Of Concept Playbook Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/proof-of-concept-playbook-overview.md
Title: Azure Synapse proof of concept playbook description: "Introduction to a series of articles that provide a high-level methodology for planning, preparing, and running an effective Azure Synapse Analytics proof of concept project."--++
synapse-analytics Proof Of Concept Playbook Serverless Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/proof-of-concept-playbook-serverless-sql-pool.md
Title: "Synapse POC playbook: Data lake exploration with serverless SQL pool in Azure Synapse Analytics" description: "A high-level methodology for preparing and running an effective Azure Synapse Analytics proof of concept (POC) project for serverless SQL pool."--++
synapse-analytics Proof Of Concept Playbook Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/proof-of-concept-playbook-spark-pool.md
Title: "Synapse POC playbook: Big data analytics with Apache Spark pool in Azure Synapse Analytics" description: "A high-level methodology for preparing and running an effective Azure Synapse Analytics proof of concept (POC) project for Apache Spark pool."--++
synapse-analytics Security White Paper Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/security-white-paper-access-control.md
Title: "Azure Synapse Analytics security white paper: Access control" description: Use different approaches or a combination of techniques to control access to data with Azure Synapse Analytics.--++
synapse-analytics Security White Paper Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/security-white-paper-authentication.md
Title: "Azure Synapse Analytics security white paper: Authentication" description: Implement authentication mechanisms with Azure Synapse Analytics.--++
synapse-analytics Security White Paper Data Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/security-white-paper-data-protection.md
Title: "Azure Synapse Analytics security white paper: Data protection" description: Protect data to comply with federal, local, and company guidelines with Azure Synapse Analytics.--++
synapse-analytics Security White Paper Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/security-white-paper-introduction.md
Title: Azure Synapse Analytics security white paper description: Overview of the Azure Synapse Analytics security white paper series of articles.--++
synapse-analytics Security White Paper Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/security-white-paper-network-security.md
Title: "Azure Synapse Analytics security white paper: Network security" description: Manage secure network access with Azure Synapse Analytics.--++
synapse-analytics Security White Paper Threat Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/security-white-paper-threat-protection.md
Title: "Azure Synapse Analytics security white paper: Threat detection" description: Audit, protect, and monitor Azure Synapse Analytics.--++
synapse-analytics Success By Design Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/success-by-design-introduction.md
Title: Success by design description: "TODO: Success by design"--++
synapse-analytics Overview Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/overview-features.md
Synapse SQL pools enable you to use built-in security features to secure your data.
Dedicated SQL pool and serverless SQL pool use standard Transact-SQL language to query data. For detailed differences, look at the [Transact-SQL language reference](/sql/t-sql/language-reference).
+## Platform features
+
+| Feature | Dedicated | Serverless |
+| --- | --- | --- |
+| Scaling | [Yes](../sql-data-warehouse/sql-data-warehouse-manage-compute-overview.md) | Serverless SQL pool automatically scales depending on the workload. |
+| Pause/resume | [Yes](../sql-data-warehouse/sql-data-warehouse-manage-compute-overview.md) | Serverless SQL pool is automatically deactivated when it isn't used and activated when needed. No user action is required. |
+| Database backups | [Yes](../sql-data-warehouse/backup-and-restore.md) | No. Data is stored in external systems (ADLS, Cosmos DB), so make sure that you back up the data at the source. Make sure that you store SQL metadata (table, view, and procedure definitions, and user permissions) in source control. Table definitions in the Lake database are stored in Spark metadata, so make sure that you also keep the Spark table definitions in source control. |
+| Database restore | [Yes](../sql-data-warehouse/backup-and-restore.md) | No. Data is stored in external systems (ADLS, Cosmos DB), so you need to recover the source systems to bring your data back. Make sure that your SQL metadata (table, view, and procedure definitions, and user permissions) is in source control so you can re-create the SQL objects. Table definitions in the Lake database are stored in Spark metadata, so make sure that you also keep the Spark table definitions in source control. |
+ ## Tools
+
+You can use various tools to connect to Synapse SQL to query data.
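As a rough illustration of connecting from a tool of your own, here's a minimal Python sketch. This is not from the article: it assumes the `pyodbc` package and the "ODBC Driver 18 for SQL Server" are installed, and the server name is a placeholder for your workspace's SQL endpoint.

```python
# Build an ODBC connection string for a Synapse SQL endpoint (sketch).
# The server name and database below are placeholders, not real values.
def synapse_connection_string(server: str, database: str = "master") -> str:
    """Assemble an ODBC connection string for a Synapse SQL endpoint."""
    return (
        "Driver={ODBC Driver 18 for SQL Server};"
        f"Server=tcp:{server},1433;"
        f"Database={database};"
        "Encrypt=yes;"
        "Authentication=ActiveDirectoryInteractive;"
    )

conn_str = synapse_connection_string(
    "yourworkspace-ondemand.sql.azuresynapse.net", "yourDatabase"
)

# In a real environment you would then connect with pyodbc:
# import pyodbc
# with pyodbc.connect(conn_str) as conn:
#     rows = conn.execute("SELECT @@VERSION").fetchall()
```

The actual connection call is commented out because it requires a live endpoint and interactive Azure AD sign-in.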
virtual-desktop Client Features Web https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/client-features-web.md
Title: Use features of the Remote Desktop Web client - Azure Virtual Desktop
description: Learn how to use features of the Remote Desktop Web client when connecting to Azure Virtual Desktop. Previously updated : 12/14/2022 Last updated : 01/25/2023
Native resolution is set to off by default. To turn on native resolution:
1. Set **Enable native display resolution** to **On**.
+### Preview user interface (preview)
+
+A new user interface is available in preview for you to try. To enable the new user interface:
+
+1. Sign in to the Remote Desktop Web client.
+
+1. Toggle **Try the new client (Preview)** to **On**. To revert to the original user interface, toggle this to **Off**.
+
+### Grid view and list view (preview)
+
+You can change the view of remote resources assigned to you between grid view (default) and list view. To change between grid view and list view:
+
+1. Sign in to the Remote Desktop Web client and make sure you have toggled **Try the new client (Preview)** to **On**.
+
+1. In the top-right corner, select the **Grid View** icon or the **List View** icon. The change will take effect immediately.
+
+### Light mode and dark mode (preview)
+
+You can change between light mode (default) and dark mode. To change between light mode and dark mode:
+
+1. Sign in to the Remote Desktop Web client and make sure you have toggled **Try the new client (Preview)** to **On**, then select **Settings** on the taskbar.
+
+1. Toggle **Dark Mode** to **On** to use dark mode, or **Off** to use light mode. The change will take effect immediately.
+ ## Input methods
+
+You can use a built-in or external PC keyboard, trackpad, and mouse to control desktops or apps.
If you have another Remote Desktop client installed, you can download an RDP fil
1. Open the downloaded RDP file in your Remote Desktop client to launch a remote session.
+## Reset user settings (preview)
+
+If you want to reset your user settings back to the default, you can do this in the web client for the current browser. To reset user settings:
+
+1. Sign in to the Remote Desktop Web client and make sure you have toggled **Try the new client (Preview)** to **On**, then select **Settings** on the taskbar.
+
+1. Select **Reset user settings**. You'll need to confirm that you want to reset the web client settings to their defaults.
+ ## Provide feedback
+
+If you want to provide feedback to us on the Remote Desktop Web client, you can do so in the Web client:
virtual-desktop Connect Web https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-web.md
When you sign in to the Remote Desktop Web client, you'll see your workspaces. A
>[!TIP] >If you've already signed in to the web browser with a different Azure Active Directory account than the one you want to use for Azure Virtual Desktop, you should either sign out or use a private browser window.
+## Preview features
+
+If you want to help us test new features, you should enable the preview. A new user interface is available in preview; to learn how to try the new user interface, see [Preview user interface](client-features-web.md#preview-user-interface-preview), and for more information about what's new, see [What's new in the Remote Desktop Web client for Azure Virtual Desktop](../whats-new-client-web.md?toc=%2Fazure%2Fvirtual-desktop%2Fusers%2Ftoc.json).
+ ## Next steps
+
+To learn more about the features of the Remote Desktop Web client, check out [Use features of the Remote Desktop Web client when connecting to Azure Virtual Desktop](client-features-web.md).
virtual-desktop Whats New Client Web https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-web.md
+
+ Title: What's new in the Remote Desktop Web client for Azure Virtual Desktop
+description: Learn about recent changes to the Remote Desktop Web client for Azure Virtual Desktop
+++ Last updated : 01/25/2023+
+# What's new in the Remote Desktop Web client for Azure Virtual Desktop
+
+We regularly update the Remote Desktop Web client for Azure Virtual Desktop, adding new features and fixing issues. Here's where you'll find the latest updates.
+
+You can find more detailed information about the Remote Desktop Web client at [Connect to Azure Virtual Desktop with the Remote Desktop Web client](users/connect-web.md) and [Use features of the Remote Desktop Web client when connecting to Azure Virtual Desktop](users/client-features-web.md).
+
+> [!NOTE]
+> What's new information used to be combined for the Remote Desktop Web client for Azure Virtual Desktop and Remote Desktop Services. You can find information for versions earlier than 2.0.0.3 at [What's new in the web client](/windows-server/remote/remote-desktop-services/clients/web-client-whatsnew).
+
+## Updates for version 2.0.0.3 (preview)
+
+*Date published: January 26th 2023*
+
+A new user interface is available in preview, which has the following new functionality:
+
+- An updated design.
+- [Switch between grid view and list view](users/client-features-web.md#grid-view-and-list-view-preview).
+- [Switch between light mode and dark mode](users/client-features-web.md#light-mode-and-dark-mode-preview).
+- [Reset user settings](users/client-features-web.md#reset-user-settings-preview).
+
+For more information and how to try the new user interface, see [Preview user interface](users/client-features-web.md#preview-user-interface-preview).
+
+## Next steps
+
+- [Connect to Azure Virtual Desktop with the Remote Desktop Web client](users/connect-web.md)
+- [Use features of the Remote Desktop Web client when connecting to Azure Virtual Desktop](users/client-features-web.md)
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233), [Wi
- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
- Updates to Teams for Azure Virtual Desktop, including the following:
  - Bug fix for Background Effects persistence between Teams sessions.
- - Various bug fixes for multimedia redirection (MMR) video playback redirection.
+- Various bug fixes for multimedia redirection (MMR) video playback redirection.
>[!IMPORTANT] >This is the final version of the Remote Desktop client with Windows 7 support. After this version, if you try to use the Remote Desktop client with Windows 7, it may not work as expected. For more information about which versions of Windows the Remote Desktop client currently supports, see [Prerequisites](./users/connect-windows.md?toc=%2Fazure%2Fvirtual-desktop%2Ftoc.json&tabs=subscribe#prerequisites).
virtual-machines Compute Gallery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/compute-gallery-whats-new.md
+
+ Title: What's new for Azure Compute Gallery
+description: Learn about what's new for Azure Compute Gallery in Azure.
++++ Last updated : 01/25/2023++++
+# What's new for Azure Compute Gallery
+
+This article is a list of updates to Compute Gallery features in Azure.
+
+## January 2023 updates:
+
+- [Launched public preview of Direct shared gallery on 07/25](/azure/virtual-machines/share-gallery-direct?tabs=portaldirect)
+
+- [Best Practices document now available](/azure/virtual-machines/azure-compute-gallery#best-practices)
+
+- [Replica count for 'Image Versions' increased from 50 to 100](/azure/virtual-machines/azure-compute-gallery#limits)
+
+### Supported features:
+
+- ['ARM64' image support](/cli/azure/sig/image-definition?view=azure-cli-latest#az-sig-image-definition-create&preserve-view=true)
+
+- ['TrustedLaunch' support](/cli/azure/sig/image-definition?view=azure-cli-latest#az-sig-image-definition-create&preserve-view=true)
+
+- ['ConfidentialVM' support](/cli/azure/sig/image-definition?view=azure-cli-latest#az-sig-image-definition-create&preserve-view=true)
+
+- ['IsAcceleratedNetworkSupported' support](/cli/azure/sig/image-definition?view=azure-cli-latest#az-sig-image-definition-create&preserve-view=true)
+
+- [Replication Mode: Full and Shallow](/cli/azure/sig/image-version?view=azure-cli-latest#commands&preserve-view=true)
+ - Shallow replication supports image sizes up to 32 TB
+ - Shallow replication is only for test images
+
+## Next steps
+
+For updates and announcements about Azure, see the [Microsoft Azure Blog](https://azure.microsoft.com/blog/).
virtual-machines Disks Incremental Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-incremental-snapshots.md
You can use either the [CLI](#cli) or [PowerShell](#powershell) sections to chec
### CLI
-First, get a list of all snapshots associated with a particular disk. Replace `yourResourceGroupNameHere` with your value and then you can use the following script to list your existing incremental snapshots of Ultra Disks:
+You have two options for getting the status of snapshots. You can either get a [list of all incremental snapshots associated with a specific disk](#clilist-incremental-snapshots) and their respective status, or you can get the [status of an individual snapshot](#cliindividual-snapshot).
+
+#### CLI - List incremental snapshots
+
+The following script returns a list of all snapshots associated with a particular disk. The value of the `CompletionPercent` property of any snapshot must be 100 before it can be used. Replace `yourResourceGroupNameHere`, `yourSubscriptionId`, and `yourDiskNameHere` with your values then run the script:
```azurecli
# Declare variables and create snapshot list
subscriptionId="yourSubscriptionId"
resourceGroupName="yourResourceGroupNameHere"
diskName="yourDiskNameHere"
az account set --subscription $subscriptionId
diskId=$(az disk show -n $diskName -g $resourceGroupName --query [id] -o tsv)
az snapshot list --query "[?creationData.sourceResourceId=='$diskId' && incremental]" -g $resourceGroupName --output table
```
-Now that you have a list of snapshots, you can check the `CompletionPercent` property of an individual snapshot to get its status. Replace `$sourceSnapshotName` with the name of your snapshot. The value of the property must be 100 before you can use the snapshot for restoring disk or generate a SAS URI for downloading the underlying data.
+#### CLI - Individual snapshot
+
+You can also check the status of an individual snapshot by checking its `CompletionPercent` property. Replace `$sourceSnapshotName` with the name of your snapshot, then run the following command. The value of the property must be 100 before you can use the snapshot to restore a disk or generate a SAS URI for downloading the underlying data.
```azurecli
az snapshot show -n $sourceSnapshotName -g $resourceGroupName --query [completionPercent] -o tsv
```
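In automation, the check above is typically wrapped in a polling loop that waits until `CompletionPercent` reaches 100. Here's a minimal Python sketch of that pattern; the checker function is a stand-in (an assumption, not part of the article) for shelling out to `az snapshot show ... --query completionPercent -o tsv`.

```python
import time

def wait_for_snapshot(check_fn, interval_seconds=10, timeout_seconds=3600):
    """Poll check_fn until it reports 100 (background copy complete).

    check_fn is any callable returning the current CompletionPercent;
    in a real script it would invoke the az CLI command shown above.
    """
    waited = 0.0
    while waited <= timeout_seconds:
        percent = float(check_fn())
        if percent >= 100.0:
            return True
        time.sleep(interval_seconds)
        waited += interval_seconds
    return False

# Fake checker simulating a copy that finishes on the third poll.
progress = iter([35.0, 80.0, 100.0])
done = wait_for_snapshot(lambda: next(progress), interval_seconds=0)
```

The same loop works unchanged for the PowerShell check below; only the checker callable differs.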
### PowerShell
-The following script creates a list of all incremental snapshots associated with a particular disk that haven't completed their background copy. Replace `yourResourceGroupNameHere` and `yourDiskNameHere`, then run the script.
+You have two options for getting the status of snapshots. You can either get a [list of all incremental snapshots associated with a particular disk](#powershelllist-incremental-snapshots) and their respective status, or you can get the [status of an individual snapshot](#powershellindividual-snapshots).
+
+#### PowerShell - List incremental snapshots
+
+The following script returns a list of all incremental snapshots associated with a particular disk that haven't completed their background copy. Replace `yourResourceGroupNameHere` and `yourDiskNameHere`, then run the script.
```azurepowershell
$resourceGroupName = "yourResourceGroupNameHere"
foreach ($snapshot in $snapshots)
$incrementalSnapshots
```
-Now that you have a list of snapshots, you can check the `CompletionPercent` property of an individual snapshot to get its status. Replace `yourResourceGroupNameHere` and `yourSnapshotName` then run the script. The value of the property must be 100 before you can use the snapshot for restoring disk or generate a SAS URI for downloading the underlying data.
+#### PowerShell - Individual snapshots
+
+You can check the `CompletionPercent` property of an individual snapshot to get its status. Replace `yourResourceGroupNameHere` and `yourSnapshotName` then run the script. The value of the property must be 100 before you can use the snapshot for restoring disk or generate a SAS URI for downloading the underlying data.
```azurepowershell
$resourceGroupName = "yourResourceGroupNameHere"
virtual-machines Disks Shared Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-shared-enable.md
Update-AzDisk -ResourceGroupName 'myResourceGroup' -DiskName 'mySharedDisk' -Dis
```azurecli #Modifying a disk to enable or modify sharing configuration
-az disk update --name mySharedDisk --max-shares 5
+az disk update --name mySharedDisk --max-shares 5 --resource-group myResourceGroup
``` ## Using Azure shared disks with your VMs
virtual-machines Iaas Antimalware Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/iaas-antimalware-windows.md
vm-windows Previously updated : 01/19/2023 Last updated : 01/25/2023
## Overview
-The modern threat landscape for cloud environments is extremely dynamic, increasing the pressure on business IT cloud subscribers to maintain effective protection in order to meet compliance and security requirements. Microsoft Antimalware for Azure is free real-time protection capability that helps identify and remove viruses, spyware, and other malicious software, with configurable alerts when known malicious or unwanted software attempts to install itself or run on your Azure systems. The solution is built on the same antimalware platform as Microsoft Security Essentials (MSE), Microsoft Forefront Endpoint Protection, Microsoft System Center Endpoint Protection, Windows Intune, and Windows Defender for Windows 8.0 and higher.
+The modern threat landscape for cloud environments is dynamic, increasing the pressure on business IT cloud subscribers to maintain effective protection in order to meet compliance and security requirements. Microsoft Antimalware for Azure is a free, real-time protection capability. Microsoft Antimalware helps identify and remove viruses, spyware, and other malicious software, with configurable alerts when known malicious or unwanted software attempts to install itself or run on your Azure systems. The solution is built on the same antimalware platform as Microsoft Security Essentials (MSE), Microsoft Forefront Endpoint Protection, Microsoft System Center Endpoint Protection, Windows Intune, and Windows Defender for Windows 8.0 and higher.
Microsoft Antimalware for Azure is a single-agent solution for applications and tenant environments, designed to run in the background without human intervention. You can deploy protection based on the needs of your application workloads, with either basic secure-by-default or advanced custom configuration, including antimalware monitoring. ## Prerequisites ### Operating system
-The Microsoft Antimalware for Azure solution includes the Microsoft Antimalware Client, and Service, Antimalware classic deployment model, Antimalware PowerShell cmdlets, and Azure Diagnostics Extension. The Microsoft Antimalware solution is supported on Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2 operating system families.
-It is not supported on the Windows Server 2008 operating system, and also is not supported in Linux.
+The Microsoft Antimalware for Azure solution includes the Microsoft Antimalware Client and Service, the Antimalware classic deployment model, Antimalware PowerShell cmdlets, and the Azure Diagnostics Extension. The Microsoft Antimalware solution is supported on the Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2 operating system families.
+It isn't supported on the Windows Server 2008 operating system, and also isn't supported in Linux.
-Windows Defender is the built-in Antimalware enabled in Windows Server 2016. The Windows Defender Interface is also enabled by default on some Windows Server 2016 SKU's.
-The Azure VM Antimalware extension can still be added to a Windows Server 2016 Azure VM with Windows Defender, but in this scenario the extension will apply any optional configuration policies to be used by Windows Defender, the extension will not deploy any additional antimalware service.
-For more information, see [Update to Azure Antimalware Extension for Cloud Services](/archive/blogs/azuresecurity/update-to-azure-antimalware-extension-for-cloud-services).
+Windows Defender is the built-in Antimalware enabled in Windows Server 2016. The Windows Defender Interface is also enabled by default on some Windows Server 2016 SKUs. The Azure VM Antimalware extension can still be added to a Windows Server 2016 and above Azure VM with Windows Defender. In this scenario the extension applies any optional [configuration policies](../../security/fundamentals/antimalware.md#default-and-custom-antimalware-configuration) to be used by Windows Defender. The extension does not deploy any other antimalware service. See the [Samples](../../security/fundamentals/antimalware.md#samples) section of the Microsoft Antimalware article for more details.
### Internet connectivity
-The Microsoft Antimalware for Windows requires that the target virtual machine is connected to the internet to receive regular engine and signature updates.
+Microsoft Antimalware for Windows requires that the target virtual machine is connected to the internet to receive regular engine and signature updates.
## Template deployment Azure VM extensions can be deployed with Azure Resource Manager templates. Templates are ideal when deploying one or more virtual machines that require post deployment configuration such as onboarding to Azure Antimalware.
-The JSON configuration for a virtual machine extension can be nested inside the virtual machine resource, or placed at the root or top level of a Resource Manager JSON template.
+The JSON configuration for a virtual machine extension can be nested inside the virtual machine resource, or placed at the root or top level of a Resource Manager JSON template.
The placement of the JSON configuration affects the value of the resource name and type.
-For more information, see [Set name and type for child resources](../../azure-resource-manager/templates/child-resource-name-type.md).
+For more information, see [Set name and type for child resources](../../azure-resource-manager/templates/child-resource-name-type.md).
The following example assumes the VM extension is nested inside the virtual machine resource. When nesting the extension resource, the JSON is placed in the `"resources": []` object of the virtual machine.
AntimalwareEnabled
- Values: true/false - true = Enable
- - false = Error out, as false is not a supported value
+ - false = Error out, as false isn't a supported value
RealtimeProtectionEnabled
Microsoft Antimalware extension logs are available at - %Systemdrive%\WindowsAzu
| -2147156121 | Setup tried to remove competitor product. But competitor product uninstall failed | Try to remove the competitor product manually, reboot, and retry installation | | -2147156116 | Policy file validation failed | Make sure you pass a valid policy XML file to setup | | -2147156095 | Setup couldn't start the Antimalware service | Verify all binaries are correctly signed, and right licensing file is installed |
-| -2147023293 | A fatal error occurred during installation. In most cases, it will. Epp.msi, canΓÇÖt register\start\stop AM service or mini filter driver | MSI logs from EPP.msi are required here for future investigation |
-| -2147023277 | Installation package could not be opened | Verify that the package exists, and is accessible, or contact the application vendor to verify that this is a valid Windows Installer package |
+| -2147023293 | A fatal error occurred during installation. In most cases, this means Epp.msi can't register, start, or stop the AM service or mini filter driver | MSI logs from EPP.msi are required here for further investigation |
+| -2147023277 | Installation package couldn't be opened | Verify that the package exists, and is accessible, or contact the application vendor to verify that this is a valid Windows Installer package |
| -2147156109 | Windows Defender is required as a prerequisite | |
-| -2147205073 | The websso issuer is not supported | |
-| -2147024893 | The system cannot find the path specified | |
-| -2146885619 | Not a cryptographic message or the cryptographic message is not formatted correctly | |
-| -1073741819 | The instruction at 0x%p referenced memory at 0x%p. The memory could not be %s | |
+| -2147205073 | The websso issuer isn't supported | |
+| -2147024893 | The system can't find the path specified | |
+| -2146885619 | Not a cryptographic message or the cryptographic message isn't formatted correctly | |
+| -1073741819 | The instruction at 0x%p referenced memory at 0x%p. The memory couldn't be %s | |
| 1 | Incorrect Function | | ### Support
-If you need more help at any point in this article, you can contact the Azure experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/forums/). Alternatively, you can file an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/), and select Get support. For information about using Azure Support, read the [Microsoft Azure support FAQ](https://azure.microsoft.com/support/faq/).
+If you need more help at any point in this article, you can contact the Azure experts on the [Azure and Stack Overflow forums](https://azure.microsoft.com/support/forums/). Alternatively, you can file an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/), and select Get support. For information about using Azure Support, read the [Microsoft Azure support FAQ](https://azure.microsoft.com/support/faq/).
virtual-machines Scheduled Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/scheduled-events.md
Previously updated : 06/01/2020 Last updated : 01/25/2023
Scheduled Events provides events in the following use cases:
Metadata Service exposes information about running VMs by using a REST endpoint that's accessible from within the VM. The information is available via a nonroutable IP so that it's not exposed outside the VM.

### Scope
-Scheduled events are delivered to and can be acknowleged by:
+Scheduled events are delivered to and can be acknowledged by:
- Standalone Virtual Machines. - All the VMs in an [Azure cloud service (classic)](../../cloud-services/index.yml).
Scheduled events are delivered to and can be acknowleged by:
> [!NOTE] > Scheduled Events for all virtual machines (VMs) in a Fabric Controller (FC) tenant are delivered to all VMs in a FC tenant. FC tenant equates to a standalone VM, an entire Cloud Service, an entire Availability Set, and a Placement Group for a VM Scale Set (VMSS) regardless of Availability Zone usage.
-> For example, if you have 100 VMs in a availability set and there is an update to one of them, the scheduled event will go to all 100, whereas if there are 100 single VMs in a zone, then event will only go to the VM which is getting impacted.
+> For example, if you have 100 VMs in an availability set and there's an update to one of them, the scheduled event will go to all 100, whereas if there are 100 single VMs in a zone, the event will only go to the VM that is affected.
As a result, check the `Resources` field in the event to identify which VMs are affected.
For VNET enabled VMs, Metadata Service is available from a static nonroutable IP
> `http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01`
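As a sketch of how a client might query this endpoint (Python standard library assumed; the nonroutable IP is reachable only from inside an Azure VM, so the helper below only builds the request and the actual GET is separate):

```python
import json
import urllib.request

# Scheduled Events endpoint from the docs above; reachable only from inside an Azure VM.
SCHEDULED_EVENTS_URL = (
    "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01"
)

def build_events_request() -> urllib.request.Request:
    # The Metadata: true header is mandatory on every request.
    return urllib.request.Request(SCHEDULED_EVENTS_URL, headers={"Metadata": "true"})

def get_scheduled_events() -> dict:
    # Performs the GET and decodes the JSON document (works only on an Azure VM).
    with urllib.request.urlopen(build_events_request()) as resp:
        return json.loads(resp.read().decode("utf-8"))
```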
-If the VM is not created within a Virtual Network, the default cases for cloud services and classic VMs, additional logic is required to discover the IP address to use.
+If the VM isn't created within a Virtual Network, which is the default case for cloud services and classic VMs, extra logic is required to discover the IP address to use.
To learn how to [discover the host endpoint](https://github.com/azure-samples/virtual-machines-python-scheduled-events-discover-endpoint-for-non-vnet-vm), see this sample.

### Version and Region Availability
The Scheduled Events service is versioned. Versions are mandatory; the current v
| 2020-07-01 | General Availability | All | <li> Added support for Event Duration |
| 2019-08-01 | General Availability | All | <li> Added support for EventSource |
| 2019-04-01 | General Availability | All | <li> Added support for Event Description |
-| 2019-01-01 | General Availability | All | <li> Added support for virtual machine scale sets EventType 'Terminate' |
+| 2019-01-01 | General Availability | All | <li> Added support for Virtual Machine Scale Sets EventType 'Terminate' |
| 2017-11-01 | General Availability | All | <li> Added support for Spot VM eviction EventType 'Preempt'<br> |
| 2017-08-01 | General Availability | All | <li> Removed prepended underscore from resource names for IaaS VMs<br><li>Metadata header requirement enforced for all requests |
| 2017-03-01 | Preview | All | <li>Initial release |
The Scheduled Events service is versioned. Versions are mandatory; the current v
### Enabling and Disabling Scheduled Events

Scheduled Events is enabled for your service the first time you make a request for events. You should expect a delayed response of up to two minutes on your first call.
-Scheduled Events is disabled for your service if it does not make a request for 24 hours.
+Scheduled Events is disabled for your service if it doesn't make a request for 24 hours.
### User-initiated Maintenance

User-initiated VM maintenance via the Azure portal, API, CLI, or PowerShell results in a scheduled event. You then can test the maintenance preparation logic in your application, and your application can prepare for user-initiated maintenance.
-If you restart a VM, an event with the type `Reboot` is scheduled. If you redeploy a VM, an event with the type `Redeploy` is scheduled. Typically events with a user event source can be immediately approved to avoid a delay on user-initiated actions. We advise having a primary and secondary VM communicating and approving user generated scheduled events in case the primary VM becomes unresponsive. This will prevent delays in recovering your application back to a good state.
+If you restart a VM, an event with the type `Reboot` is scheduled. If you redeploy a VM, an event with the type `Redeploy` is scheduled. Typically, events with a user event source can be immediately approved to avoid a delay on user-initiated actions. We advise having a primary and secondary VM communicating and approving user-generated scheduled events in case the primary VM becomes unresponsive. This arrangement prevents delays in recovering your application back to a good state.
## Use the API
In the case where there are scheduled events, the response contains an array of
| - | - |
| Document Incarnation | Integer that increases when the events array changes. Documents with the same incarnation contain the same event information, and the incarnation will be incremented when an event changes. |
| EventId | Globally unique identifier for this event. <br><br> Example: <br><ul><li>602d9444-d2cd-49c7-8624-8643e7171297 |
-| EventType | Impact this event causes. <br><br> Values: <br><ul><li> `Freeze`: The Virtual Machine is scheduled to pause for a few seconds. CPU and network connectivity may be suspended, but there is no impact on memory or open files.<li>`Reboot`: The Virtual Machine is scheduled for reboot (non-persistent memory is lost). This event is made available on a best effort basis <li>`Redeploy`: The Virtual Machine is scheduled to move to another node (ephemeral disks are lost). <li>`Preempt`: The Spot Virtual Machine is being deleted (ephemeral disks are lost). <li> `Terminate`: The virtual machine is scheduled to be deleted. |
+| EventType | Impact this event causes. <br><br> Values: <br><ul><li> `Freeze`: The Virtual Machine is scheduled to pause for a few seconds. CPU and network connectivity may be suspended, but there's no impact on memory or open files.<li>`Reboot`: The Virtual Machine is scheduled for reboot (non-persistent memory is lost). This event is made available on a best-effort basis.<li>`Redeploy`: The Virtual Machine is scheduled to move to another node (ephemeral disks are lost). <li>`Preempt`: The Spot Virtual Machine is being deleted (ephemeral disks are lost). <li> `Terminate`: The virtual machine is scheduled to be deleted. |
| ResourceType | Type of resource this event affects. <br><br> Values: <ul><li>`VirtualMachine` |
| Resources | List of resources this event affects. <br><br> Example: <br><ul><li> ["FrontEnd_IN_0", "BackEnd_IN_0"] |
| EventStatus | Status of this event. <br><br> Values: <ul><li>`Scheduled`: This event is scheduled to start after the time specified in the `NotBefore` property.<li>`Started`: This event has started.</ul> No `Completed` or similar status is ever provided. The event is no longer returned when the event is finished. |
| NotBefore | Time after which this event can start. The event is guaranteed to not start before this time. Will be blank if the event has already started. <br><br> Example: <br><ul><li> Mon, 19 Sep 2016 18:29:47 GMT |
| Description | Description of this event. <br><br> Example: <br><ul><li> Host server is undergoing maintenance. |
| EventSource | Initiator of the event. <br><br> Example: <br><ul><li> `Platform`: This event is initiated by platform. <li>`User`: This event is initiated by user. |
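To make the schema concrete, here is a hypothetical event document shaped like the fields described above, with two small helpers for things applications commonly need (a Python sketch; the payload values are illustrative only):

```python
from email.utils import parsedate_to_datetime

# Hypothetical document shaped like the schema described above.
sample_doc = {
    "DocumentIncarnation": 1,
    "Events": [
        {
            "EventId": "602d9444-d2cd-49c7-8624-8643e7171297",
            "EventType": "Reboot",
            "ResourceType": "VirtualMachine",
            "Resources": ["FrontEnd_IN_0", "BackEnd_IN_0"],
            "EventStatus": "Scheduled",
            "NotBefore": "Mon, 19 Sep 2016 18:29:47 GMT",
            "Description": "Host server is undergoing maintenance.",
            "EventSource": "Platform",
            "DurationInSeconds": -1,
        }
    ],
}

def affects_me(event, my_resource_name):
    # The Resources list names every VM the event touches.
    return my_resource_name in event.get("Resources", [])

def earliest_start(event):
    # NotBefore is an RFC 1123 timestamp; blank once the event has started.
    value = event.get("NotBefore")
    return parsedate_to_datetime(value) if value else None
```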
-| DurationInSeconds | The expected duration of the interruption caused by the event. <br><br> Example: <br><ul><li> `9`: The interruption caused by the event will last for 9 seconds. <li> `0`: The event will not interrupt the VM or impact its availability (eg. update to the network) <li>`-1`: The default value used if the impact duration is either unknown or not applicable. |
+| DurationInSeconds | The expected duration of the interruption caused by the event. <br><br> Example: <br><ul><li> `9`: The interruption caused by the event will last for 9 seconds. <li> `0`: The event won't interrupt the VM or impact its availability (for example, an update to the network). <li>`-1`: The default value used if the impact duration is either unknown or not applicable. |
### Event Scheduling

Each event is scheduled a minimum amount of time in the future based on the event type. This time is reflected in an event's `NotBefore` property.
Each event is scheduled a minimum amount of time in the future based on the even
| Terminate | [User Configurable](../../virtual-machine-scale-sets/virtual-machine-scale-sets-terminate-notification.md#enable-terminate-notifications): 5 to 15 minutes |

> [!NOTE]
-> In some cases, Azure is able to predict host failure due to degraded hardware and will attempt to mitigate disruption to your service by scheduling a migration. Affected virtual machines will receive a scheduled event with a `NotBefore` that is typically a few days in the future. The actual time varies depending on the predicted failure risk assessment. Azure tries to give 7 days' advance notice when possible, but the actual time varies and might be smaller if the prediction is that there is a high chance of the hardware failing imminently. To minimize risk to your service in case the hardware fails before the system-initiated migration, we recommend that you self-redeploy your virtual machine as soon as possible.
+> In some cases, Azure is able to predict host failure due to degraded hardware and will attempt to mitigate disruption to your service by scheduling a migration. Affected virtual machines will receive a scheduled event with a `NotBefore` that is typically a few days in the future. The actual time varies depending on the predicted failure risk assessment. Azure tries to give 7 days' advance notice when possible, but the actual time varies and might be smaller if the prediction is that there's a high chance of the hardware failing imminently. To minimize risk to your service in case the hardware fails before the system-initiated migration, we recommend that you self-redeploy your virtual machine as soon as possible.
> [!NOTE]
> In the case that the host node experiences a hardware failure, Azure will bypass the minimum notice period and immediately begin the recovery process for affected virtual machines. This reduces recovery time in the case that the affected VMs are unable to respond. During the recovery process, an event will be created for all impacted VMs with `EventType = Reboot` and `EventStatus = Started`.
The following JSON sample is expected in the `POST` request body. The request sh
} ```
-The service will always return a 200 success code in the case of a valid event ID, even if it was already approved by a different VM. A 400 error code indicates that the request header or payload was malformed.
+The service will always return a 200 success code for a valid event ID, even if it was already approved by a different VM. A 400 error code indicates that the request header or payload was malformed.
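A minimal sketch of building that approval `POST` (Python standard library; the event ID is hypothetical, and only the request object is constructed here because the endpoint exists only inside an Azure VM):

```python
import json
import urllib.request

SCHEDULED_EVENTS_URL = (
    "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01"
)

def build_approval_request(event_id: str) -> urllib.request.Request:
    # Acknowledges the event so the platform may start it before NotBefore.
    body = json.dumps({"StartRequests": [{"EventId": event_id}]}).encode("utf-8")
    return urllib.request.Request(
        SCHEDULED_EVENTS_URL,
        data=body,
        headers={"Metadata": "true", "Content-Type": "application/json"},
    )
```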
#### Bash sample
def confirm_scheduled_event(event_id):
> Acknowledging an event allows the event to proceed for all `Resources` in the event, not just the VM that acknowledges the event. Therefore, you can choose to elect a leader to coordinate the acknowledgement, which might be as simple as the first machine in the `Resources` field.

## Example Responses
-The following is an example of a series of events that were seen by two VMs that were live migrated to another node.
+The following response is an example of a series of events that were seen by two VMs that were live migrated to another node.
-The `DocumentIncarnation` is changing every time there is new information in `Events`. An approval of the event would allow the freeze to proceed for both WestNO_0 and WestNO_1. The `DurationInSeconds` of -1 indicates that the platform does not know how long the operation will take.
+The `DocumentIncarnation` is changing every time there's new information in `Events`. An approval of the event would allow the freeze to proceed for both WestNO_0 and WestNO_1. The `DurationInSeconds` of -1 indicates that the platform doesn't know how long the operation will take.
```JSON {
def advanced_sample(last_document_incarnation):
int(event["DurationInSeconds"]) < 9): confirm_scheduled_event(event["EventId"])
- # Events that may be impactful (eg. Reboot or redeploy) may need custom
+ # Events that may be impactful (for example, Reboot or redeploy) may need custom
# handling for your application else: #TODO Custom handling for impactful events
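The incarnation check that the advanced sample relies on can be sketched like this (hypothetical poll payloads; only `DocumentIncarnation` matters for deciding whether to reprocess the events array):

```python
def events_changed(last_incarnation, document):
    # Documents with the same incarnation carry the same event information,
    # so Events only needs reprocessing when the incarnation moves.
    return document.get("DocumentIncarnation") != last_incarnation

# Hypothetical successive polls of the endpoint:
first_poll = {"DocumentIncarnation": 7, "Events": []}
second_poll = {"DocumentIncarnation": 8, "Events": [{"EventId": "e1"}]}
```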
virtual-machines Tutorial Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-disaster-recovery.md
Previously updated : 11/05/2020 Last updated : 01/25/2023 #Customer intent: As an Azure admin, I want to prepare for disaster recovery by replicating my Linux VMs to another Azure region.
This tutorial shows you how to set up disaster recovery for Azure VMs running Li
> * Run a disaster recovery drill to check it works as expected
> * Stop replicating the VM after the drill
-When you enable replication for a VM, the Site Recovery Mobility service extension installs on the VM, and registers it with [Azure Site Recovery](../../site-recovery/site-recovery-overview.md). During replication, VM disk writes are send to a cache storage account in the source VM region. Data is sent from there to the target region, and recovery points are generated from the data. When you fail a VM over to another region during disaster recovery, a recovery point is used to create a VM in the target region.
+When you enable replication for a VM, the Site Recovery Mobility service extension installs on the VM, and registers it with [Azure Site Recovery](../../site-recovery/site-recovery-overview.md). During replication, VM disk writes are sent to a cache storage account in the source VM region. Data is sent from there to the target region, and recovery points are generated from the data. When you fail a VM over to another region during disaster recovery, a recovery point is used to create a VM in the target region.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin.
If you don't have an Azure subscription, create a [free account](https://azure.m
| Storage tag | Allows data to be written from the VM to the cache storage account.
| Azure AD tag | Allows access to all IP addresses that correspond to Azure AD.
- EventsHub tag | Allows access to Site Recovery monitoring.
- AzureSiteRecovery tag | Allows access to the Site Recovery service in any region.
+ Events Hub tag | Allows access to Site Recovery monitoring.
+ Azure Site Recovery tag | Allows access to the Site Recovery service in any region.
GuestAndHybridManagement | Use if you want to automatically upgrade the Site Recovery Mobility agent that's running on VMs enabled for replication.

5. Make sure VMs have the latest root certificates. On Linux VMs, follow the guidance provided by your Linux distributor to get the latest trusted root certificates and certificate revocation list on the VM.
You can optionally enable disaster recovery when you create a VM.
- An app-consistent snapshot is taken every 4 hours.
- By default, Site Recovery stores recovery points for 24 hours.
-7. In **Availability options**, specify whether the VM is deploy as standalone, in an availability zone, or in an availability set.
+7. In **Availability options**, specify whether the VM is deployed as a standalone VM, in an availability zone, or in an availability set.
:::image type="content" source="./media/tutorial-disaster-recovery/create-vm.png" alt-text="Enable replication on the VM management properties page.":::
If you want to enable disaster recovery on an existing VM, use this procedure.
- You can customize the storage type as needed.
- **Replication settings**. Shows the vault in which the VM is located, and the replication policy used for the VM. By default, recovery points created by Site Recovery for the VM are kept for 24 hours.
- **Extension settings**. Indicates that Site Recovery manages updates to the Site Recovery Mobility Service extension that's installed on VMs you replicate.
- - The indicated Azure automation account manages the update process.
+ - The indicated Azure Automation account manages the update process.
- You can customize the automation account.

:::image type="content" source="./media/tutorial-disaster-recovery/settings-summary.png" alt-text="Page showing summary of target and replication settings.":::
The VM is automatically cleaned up by Site Recovery after the drill.
### Stop replicating the VM
-After completing a disaster recovery drill, we suggest you continue to try out a full failover. If you don't want to do a full failover, you can disable replication. This does the following:
+After completing a disaster recovery drill, we suggest you continue to try out a full failover. If you don't want to do a full failover, you can disable replication. Disabling replication will:
-- Removes the VM from the Site Recovery list of replicated machines.-- Stops Site Recovery billing for the VM.-- Automatically cleans up source replication settings.
+- Remove the VM from the Site Recovery list of replicated machines.
+- Stop Site Recovery billing for the VM.
+- Automatically clean up source replication settings.
Stop replication as follows:
virtual-machines Managed Disks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/managed-disks-overview.md
To protect against regional disasters, [Azure Backup](../backup/backup-overview.
#### Azure Disk Backup
-Azure Backup offers Azure Disk Backup (preview) as a native, cloud-based backup solution that protects your data in managed disks. It's a simple, secure, and cost-effective solution that enables you to configure protection for managed disks in a few steps. Azure Disk Backup offers a turnkey solution that provides snapshot lifecycle management for managed disks by automating periodic creation of snapshots and retaining it for configured duration using backup policy. For details on Azure Disk Backup, see [Overview of Azure Disk Backup (in preview)](../backup/disk-backup-overview.md).
+Azure Backup offers Azure Disk Backup (preview) as a native, cloud-based backup solution that protects your data in managed disks. It's a simple, secure, and cost-effective solution that enables you to configure protection for managed disks in a few steps. Azure Disk Backup offers a turnkey solution that provides snapshot lifecycle management for managed disks by automating periodic creation of snapshots and retaining it for configured duration using backup policy. For details on Azure Disk Backup, see [Overview of Azure Disk Backup](../backup/disk-backup-overview.md).
### Granular access control
virtual-machines Share Gallery Direct https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/share-gallery-direct.md
This article covers how to share an Azure Compute Gallery with specific subscrip
> [!IMPORTANT]
> Azure Compute Gallery - direct shared gallery is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
>
-> To publish images to a direct shared gallery during the preview, you need to register at [https://aka.ms/directsharedgallery-preview](https://aka.ms/directsharedgallery-preview). We will follow up within 5 business days after submitting the form. No additional access required to consume images, Creating VMs from a direct shared gallery is open to all Azure users in the target subscription or tenant the gallery is shared with.
+> To publish images to a direct shared gallery during the preview, you need to register at [https://aka.ms/directsharedgallery-preview](https://aka.ms/directsharedgallery-preview). Please submit the form and share your use case; we'll evaluate the request and follow up within 10 business days. No additional access is required to consume images: creating VMs from a direct shared gallery is open to all Azure users in the target subscription or tenant the gallery is shared with. In most scenarios, RBAC or cross-tenant sharing using a service principal is sufficient; request access to this feature only if you wish to share images widely with all users in the subscription or tenant.
> > During the preview, you need to create a new gallery, with the property `sharingProfile.permissions` set to `Groups`. When using the CLI to create a gallery, use the `--permissions groups` parameter. You can't use an existing gallery; the property can't currently be updated. - There are three main ways to share images in an Azure Compute Gallery, depending on who you want to share with: | Share with\: | Option |
vpn-gateway Point To Site Vpn Client Cert Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-cert-windows.md
description: Learn how to configure VPN clients for P2S configurations that use
Previously updated : 01/10/2023 Last updated : 01/25/2023
When you open the zip file, you'll see the **AzureVPN** folder. Locate the **azu
1. In the window, navigate to the **azurevpnconfig.xml** file, select it, then click **Open**.
-1. From the **Certificate Information** dropdown, select the name of the child certificate (the client certificate). For example, **P2SChildCert**.
+1. From the **Certificate Information** dropdown, select the name of the child certificate (the client certificate). For example, **P2SChildCert**. You can also (optionally) select a [Secondary Profile](#secondary-profile).
:::image type="content" source="./media/point-to-site-vpn-client-cert-windows/configure-certificate.png" alt-text="Screenshot showing Azure VPN client profile configuration page." lightbox="./media/point-to-site-vpn-client-cert-windows/configure-certificate.png":::
When you open the zip file, you'll see the **AzureVPN** folder. Locate the **azu
The following sections discuss additional optional configuration settings that are available for the Azure VPN Client.
-#### Secondary VPN client profile
+#### Secondary Profile
[!INCLUDE [Secondary profile](../../includes/vpn-gateway-azure-vpn-client-secondary-profile.md)]
vpn-gateway Vpn Gateway About Vpngateways https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-vpngateways.md
Azure VPN Gateway is a service that uses a specific type of virtual network gate
## <a name="vpn"></a>About VPN gateways
-A VPN gateway is a type of virtual network gateway. A virtual network gateway is composed of two or more Azure-manged VMs that are automatically configured and deployed to a specific subnet you create called the *gateway subnet*. The gateway VMs contain routing tables and run specific gateway services.
+A VPN gateway is a type of virtual network gateway. A virtual network gateway is composed of two or more Azure-managed VMs that are automatically configured and deployed to a specific subnet you create called the *GatewaySubnet*. The gateway VMs contain routing tables and run specific gateway services.
One of the settings that you specify when creating a virtual network gateway is the "gateway type". The gateway type determines how the virtual network gateway will be used and the actions that the gateway takes. A virtual network can have two virtual network gateways; one VPN gateway and one ExpressRoute gateway. The gateway type 'Vpn' specifies that the type of virtual network gateway created is a **VPN gateway**. This distinguishes it from an ExpressRoute gateway, which uses a different gateway type. For more information, see [Gateway types](vpn-gateway-about-vpn-gateway-settings.md#gwtype).