Updates from: 02/22/2023 02:12:24
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Integrate With App Code Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/integrate-with-app-code-samples.md
Title: Azure Active Directory B2C integrate with app samples description: Code samples for integrating Azure AD B2C to mobile, desktop, web, and single-page applications. Previously updated : 06/21/2022 Last updated : 02/21/2023
The following tables provide links to samples for applications including iOS, Android, .NET, and Node.js.
-## Mobile and desktop apps
-
-| Sample | Description |
-|--| -- |
-| [ios-swift-native-msal](https://github.com/Azure-Samples/active-directory-b2c-ios-swift-native-msal) | An iOS sample in Swift that authenticates Azure AD B2C users and calls an API using OAuth 2.0 |
-| [android-native-msal](https://github.com/Azure-Samples/ms-identity-android-java#b2cmodefragment-class) | A simple Android app showcasing how to use MSAL to authenticate users via Azure Active Directory B2C, and access a Web API with the resulting tokens. |
-| [ios-native-appauth](https://github.com/Azure-Samples/active-directory-b2c-ios-native-appauth) | A sample that shows how you can use a third-party library to build an iOS application in Objective-C that authenticates Microsoft identity users to our Azure AD B2C identity service. |
-| [android-native-appauth](https://github.com/Azure-Samples/active-directory-b2c-android-native-appauth) | A sample that shows how you can use a third-party library to build an Android application that authenticates Microsoft identity users to our B2C identity service and calls a web API using OAuth 2.0 access tokens. |
-| [dotnet-desktop](https://github.com/Azure-Samples/active-directory-b2c-dotnet-desktop) | A sample that shows how a Windows Desktop .NET (WPF) application can sign in a user using Azure AD B2C, get an access token using MSAL.NET and call an API. |
-| [xamarin-native](https://github.com/Azure-Samples/active-directory-b2c-xamarin-native) | A simple Xamarin Forms app showcasing how to use MSAL to authenticate users via Azure Active Directory B2C, and access a Web API with the resulting tokens. |
- ## Web apps and APIs | Sample | Description |
The following tables provide links to samples for applications including iOS, An
| [ms-identity-b2c-javascript-spa](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa) | A VanillaJS single page application (SPA) calling a web API. Authentication is done with Azure AD B2C by using MSAL.js. This sample uses the authorization code flow with PKCE. | | [javascript-nodejs-management](https://github.com/Azure-Samples/ms-identity-b2c-javascript-nodejs-management/tree/main/Chapter1) | A VanillaJS single page application (SPA) calling Microsoft Graph to manage users in a B2C directory. Authentication is done with Azure AD B2C by using MSAL.js. This sample uses the authorization code flow with PKCE.|
+## Mobile and desktop apps
+
+| Sample | Description |
+|--| -- |
+| [ios-swift-native-msal](https://github.com/Azure-Samples/active-directory-b2c-ios-swift-native-msal) | An iOS sample in Swift that authenticates Azure AD B2C users and calls an API using OAuth 2.0 |
+| [android-native-msal](https://github.com/Azure-Samples/ms-identity-android-java#b2cmodefragment-class) | A simple Android app showcasing how to use MSAL to authenticate users via Azure Active Directory B2C, and access a Web API with the resulting tokens. |
+| [ios-native-appauth](https://github.com/Azure-Samples/active-directory-b2c-ios-native-appauth) | A sample that shows how you can use a third-party library to build an iOS application in Objective-C that authenticates Microsoft identity users to our Azure AD B2C identity service. |
+| [android-native-appauth](https://github.com/Azure-Samples/active-directory-b2c-android-native-appauth) | A sample that shows how you can use a third-party library to build an Android application that authenticates Microsoft identity users to our B2C identity service and calls a web API using OAuth 2.0 access tokens. |
+| [dotnet-desktop](https://github.com/Azure-Samples/active-directory-b2c-dotnet-desktop) | A sample that shows how a Windows Desktop .NET (WPF) application can sign in a user using Azure AD B2C, get an access token using MSAL.NET and call an API. |
+| [xamarin-native](https://github.com/Azure-Samples/active-directory-b2c-xamarin-native) | A simple Xamarin Forms app showcasing how to use MSAL to authenticate users via Azure Active Directory B2C, and access a Web API with the resulting tokens. |
+ ## Console/Daemon apps | Sample | Description |
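The SPA samples listed above all follow the same MSAL pattern. As a minimal sketch of that pattern (not a replacement for the samples), the snippet below signs in an Azure AD B2C user with the authorization code flow with PKCE using `@azure/msal-browser` v3; the tenant name, user flow, client ID, redirect URI, API scope, and API URL are placeholder assumptions.

```typescript
import { PublicClientApplication } from "@azure/msal-browser";

const msalInstance = new PublicClientApplication({
  auth: {
    clientId: "11111111-2222-3333-4444-555555555555", // placeholder app registration
    // Placeholder B2C authority: tenant + sign-up/sign-in user flow.
    authority: "https://contoso.b2clogin.com/contoso.onmicrosoft.com/B2C_1_signupsignin",
    knownAuthorities: ["contoso.b2clogin.com"], // B2C authorities must be allow-listed
    redirectUri: "http://localhost:3000",
  },
});

export async function signInAndCallApi(): Promise<void> {
  await msalInstance.initialize(); // required in msal-browser v3
  // loginPopup drives the authorization code flow; MSAL generates the
  // PKCE code verifier and challenge automatically.
  const apiScope = "https://contoso.onmicrosoft.com/api/tasks.read"; // placeholder scope
  const login = await msalInstance.loginPopup({ scopes: [apiScope] });
  const result = await msalInstance.acquireTokenSilent({
    account: login.account!,
    scopes: [apiScope],
  });
  // Send the resulting access token to the web API as a bearer token.
  await fetch("https://localhost:5000/api/tasks", { // placeholder API URL
    headers: { Authorization: `Bearer ${result.accessToken}` },
  });
}
```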
active-directory-domain-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/policy-reference.md
Title: Built-in policy definitions for Azure Active Directory Domain Services description: Lists Azure Policy built-in policy definitions for Azure Active Directory Domain Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/29/2023 Last updated : 02/21/2023
active-directory User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/user-provisioning.md
Previously updated : 02/17/2023 Last updated : 02/21/2023
The provisioning mode supported by an application is also visible on the **Provi
## Benefits of automatic provisioning
-The number of applications used in modern organizations continues to grow. IT admins must manage access management at scale. Admins use standards such as SAML or OIDC for single sign-on (SSO), but access also requires users to be provisioned into the app. To many admins, provisioning means manually creating every user account or uploading CSV files each week. These processes are time-consuming, expensive, and error prone. Solutions such as SAML just-in-time (JIT) have been adopted to automate provisioning. Enterprises also need a solution to deprovision users when they leave the organization or no longer require access to certain apps based on role change.
+The number of applications used in modern organizations continues to grow. As an IT admin, you must manage access at scale. You use standards such as SAML or OIDC for single sign-on (SSO), but access also requires you to provision users into an app. You might think provisioning means manually creating every user account or uploading CSV files each week. These processes are time-consuming, expensive, and error prone. To streamline the process, use SAML just-in-time (JIT) to automate provisioning. Use the same process to deprovision users when they leave the organization or no longer require access to certain apps based on role change.
Some common motivations for using automatic provisioning include:
Some common motivations for using automatic provisioning include:
- Saving on costs associated with hosting and maintaining custom-developed provisioning solutions and scripts. - Securing your organization by instantly removing users' identities from key SaaS apps when they leave the organization. - Easily importing a large number of users into a particular SaaS application or system.-- Having a single set of policies to determine who is provisioned and who can sign in to an app.
+- A single set of policies to determine which users are provisioned and can sign in to an app.
Azure AD user provisioning can help address these challenges. To learn more about how customers have been using Azure AD user provisioning, read the [ASOS case study](https://aka.ms/asoscasestudy). The following video provides an overview of user provisioning in Azure AD.
Azure AD features pre-integrated support for many popular SaaS apps and human re
![Image that shows logos for DropBox, Salesforce, and others.](./media/user-provisioning/gallery-app-logos.png)
- If you want to request a new application for provisioning, you can [request that your application be integrated with our app gallery](../manage-apps/v2-howto-app-gallery-listing.md). For a user provisioning request, we require the application to have a SCIM-compliant endpoint. Request that the application vendor follows the SCIM standard so we can onboard the app to our platform quickly.
+ To request a new application for provisioning, see [Submit a request to publish your application in Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md). For a user provisioning request, we require the application to have a SCIM-compliant endpoint; see the sketch after this section. Request that the application vendor follows the SCIM standard so we can onboard the app to our platform quickly.
* **Applications that support SCIM 2.0**: For information on how to generically connect applications that implement SCIM 2.0-based user management APIs, see [Build a SCIM endpoint and configure user provisioning](use-scim-to-provision-users-and-groups.md).
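To make "SCIM-compliant endpoint" concrete, here's a minimal TypeScript/Express sketch of the two `/Users` operations Azure AD provisioning exercises first: matching an existing user by filter, and creating one. It's illustrative only; a real endpoint also needs PATCH/DELETE, authentication, and the full schema, and the in-memory store is an assumption.

```typescript
import express from "express";
import { randomUUID } from "node:crypto";

const app = express();
app.use(express.json({ type: ["application/json", "application/scim+json"] }));

type ScimUser = { schemas: string[]; id: string; userName: string; active: boolean };
const users = new Map<string, ScimUser>(); // illustrative in-memory store

// Azure AD first checks for an existing user: GET /Users?filter=userName eq "..."
app.get("/scim/v2/Users", (req, res) => {
  const match = /userName eq "(.+)"/.exec(String(req.query.filter ?? ""));
  const found = [...users.values()].filter(u => !match || u.userName === match[1]);
  res.json({
    schemas: ["urn:ietf:params:scim:api:messages:2.0:ListResponse"],
    totalResults: found.length,
    Resources: found,
  });
});

// Create: Azure AD POSTs a core User resource and expects 201 with the stored user.
app.post("/scim/v2/Users", (req, res) => {
  const user: ScimUser = {
    schemas: ["urn:ietf:params:scim:schemas:core:2.0:User"],
    id: randomUUID(),
    userName: req.body.userName,
    active: req.body.active ?? true,
  };
  users.set(user.id, user);
  res.status(201).json(user);
});

app.listen(3000);
```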
active-directory Concept Authentication Default Enablement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-default-enablement.md
Number matching is a good example of protection for an authentication method tha
As MFA fatigue attacks rise, number matching becomes more critical to sign-in security. As a result, Microsoft will change the default behavior for push notifications in Microsoft Authenticator. >[!NOTE]
->Number matching will begin to be enabled for all users of Microsoft Authenticator starting February 27, 2023.
+>Number matching will begin to be enabled for all users of Microsoft Authenticator starting May 08, 2023.
active-directory Concept Certificate Based Authentication Technical Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-technical-deep-dive.md
Now we'll walk through each step:
1. Azure AD completes the sign-in process by sending a primary refresh token back to indicate successful sign-in. 1. If the user sign-in is successful, the user can access the application.
+## Certificate-based authentication is MFA capable
+
+Azure AD CBA is a multifactor authentication (MFA) capable method; that is, Azure AD CBA can be either single-factor (SF) or multifactor (MF) depending on the tenant configuration. Enabling CBA for a user indicates the user is potentially capable of MFA. A user may therefore need additional configuration to get MFA, and to proof up to register other authentication methods, when the user is in scope for CBA.
+
+If a CBA-enabled user has only a single-factor (SF) certificate and needs MFA:
+ 1. Use Password + SF certificate.
+ 1. Issue a Temporary Access Pass (TAP).
+ 1. Admin adds a phone number to the user account and allows the Voice/SMS method for the user.
+
+If a CBA-enabled user hasn't yet been issued a certificate and needs MFA:
+ 1. Issue a Temporary Access Pass (TAP).
+ 1. Admin adds a phone number to the user account and allows the Voice/SMS method for the user.
+
+If a CBA-enabled user can't use an MF certificate (such as on a mobile device without smart card support) and needs MFA:
+ 1. Issue a Temporary Access Pass (TAP).
+ 1. User registers another MFA method (when the user can use an MF certificate).
+ 1. Use Password + MF certificate (when the user can use an MF certificate).
+ 1. Admin adds a phone number to the user account and allows the Voice/SMS method for the user.
## MFA with Single-factor certificate-based authentication
-Azure AD CBA supports second factors to meet MFA requirements with single-factor certificates. Users can use either passwordless sign-in or FIDO2 security keys as second factors when the first factor is single-factor CBA. Users need to have another way to get MFA and register passwordless sign-in or FIDO2 in advance to signing in with Azure AD CBA.
+Azure AD CBA can be used as a second factor to meet MFA requirements with single-factor certificates. The supported combinations are:
+
+- CBA (first factor) + passwordless phone sign-in (PSI) as second factor
+- CBA (first factor) + FIDO2 security keys as second factor
+- Password (first factor) + CBA (second factor)
+
+Users need to have another way to get MFA and must register passwordless sign-in or FIDO2 before signing in with Azure AD CBA.
>[!IMPORTANT] >A user is considered MFA capable when the user is in scope for the certificate-based authentication method. This means the user won't be able to use proof up as part of their authentication to register other available methods. More info on [Azure AD MFA](../authentication/concept-mfa-howitworks.md)
active-directory Concept Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication.md
The following scenarios are supported:
The following scenarios aren't supported: -- Certificate Authority hints aren't supported, so the list of certificates that appears for users in the certificate picket UI isn't scoped.
+- Certificate Authority hints aren't supported, so the list of certificates that appears for users in the certificate picker UI isn't scoped.
- Only one CRL Distribution Point (CDP) for a trusted CA is supported. - The CDP can be only HTTP URLs. We don't support Online Certificate Status Protocol (OCSP), or Lightweight Directory Access Protocol (LDAP) URLs. - Configuring other certificate-to-user account bindings, such as using the **Subject**, **Subject + Issuer** or **Issuer + Serial Number**, isn't available in this release.
active-directory Howto Authentication Methods Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-methods-activity.md
The registration details report shows the following information for each user:
- SSPR Registered (Registered, Not Registered) - SSPR Enabled (Enabled, Not Enabled) - SSPR Capable (Capable, Not Capable) -- Methods registered (Email, Mobile Phone, Alternative Mobile Phone, Office Phone, Microsoft Authenticator Push, Software One Time Passcode, FIDO2, Security Key, Security questions, Hardware OATH token)
+- Methods registered (Alternate Mobile Phone, Email, FIDO2 Security Key, Hardware OATH token, Microsoft Authenticator app, Microsoft Passwordless phone sign-in, Mobile Phone, Office Phone, Security questions, Software OATH token, Temporary Access Pass, Windows Hello for Business)
![Screenshot of user registration details](media/how-to-authentication-methods-usage-insights/registration-details.png)
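The same registration details are exposed programmatically. A hedged sketch follows that queries the Microsoft Graph `userRegistrationDetails` report; `getToken()` is a placeholder for your own token acquisition, and a reporting permission such as `AuditLog.Read.All` is an assumption.

```typescript
async function listRegistrationDetails(getToken: () => Promise<string>): Promise<void> {
  const token = await getToken();
  const url =
    "https://graph.microsoft.com/v1.0/reports/authenticationMethods/userRegistrationDetails" +
    "?$filter=isMfaRegistered eq true";
  const res = await fetch(url, { headers: { Authorization: `Bearer ${token}` } });
  const body = await res.json();
  // methodsRegistered mirrors the report column,
  // e.g. ["microsoftAuthenticatorPush", "fido2SecurityKey"].
  for (const u of body.value) {
    console.log(u.userPrincipalName, u.methodsRegistered);
  }
}
```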
active-directory Howto Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-registration-mfa-sspr-combined.md
To enable combined registration, complete these steps:
![Enable the combined security info experience for users](media/howto-registration-mfa-sspr-combined/enable-the-combined-security-info.png)
+ > [!IMPORTANT]
+ > If your Azure tenant has already been enabled for combined registration, you might not see the configuration option for **Users can use the combined security information registration experience**, or you might see it grayed out.
+ > [!NOTE] > After you enable combined registration, users who register or confirm their phone number or mobile app through the new experience can use them for Azure AD Multi-Factor Authentication and SSPR, if those methods are enabled in the Azure AD Multi-Factor Authentication and SSPR policies. >
active-directory What Is Cloud Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/what-is-cloud-sync.md
The following table provides a comparison between Azure AD Connect and Azure AD
| Allow basic customization for attribute flows |● |● | | Synchronize Exchange online attributes |● |● | | Synchronize extension attributes 1-15 |● |● |
-| Synchronize customer defined AD attributes (directory extensions) |● |●|
+| Synchronize customer defined AD attributes (directory extensions) |●|●|
| Support for Password Hash Sync |●|●| | Support for Pass-Through Authentication |●|| | Support for federation |●|●|
active-directory Concept Conditional Access Grant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-grant.md
Previously updated : 09/26/2022 Last updated : 02/16/2023
Applications must have the Intune SDK with policy assurance implemented and must
The following client apps are confirmed to support this setting. This list isn't exhaustive and is subject to change:
+- Adobe Acrobat Reader mobile app
- iAnnotate for Office 365 - Microsoft Cortana - Microsoft Edge
The following client apps are confirmed to support this setting, this list isn't
- MultiLine for Intune - Nine Mail - Email and Calendar - Notate for Intune
+- Provectus - Secure Contacts
- Yammer (Android, iOS, and iPadOS) This list isn't all encompassing. If your app isn't in this list, please check with the application vendor to confirm support.
active-directory Quickstart V2 Nodejs Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-nodejs-console.md
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
-> [!div renderon="portal" class="sxs-lookup"]
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
> In this quickstart, you download and run a code sample that demonstrates how a Node.js console application can get an access token using the app's identity to call the Microsoft Graph API and display a [list of users](/graph/api/user-list) in the directory. The code sample demonstrates how an unattended job or Windows service can run with an application identity, instead of a user's identity. > > This quickstart uses the [Microsoft Authentication Library for Node.js (MSAL Node)](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) with the [client credentials grant](v2-oauth2-client-creds-grant-flow.md).
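For orientation, here's a minimal sketch of what the downloaded sample does with MSAL Node and the client credentials grant; the tenant ID, client ID, and secret are placeholders, and the `User.Read.All` application permission with admin consent is an assumption.

```typescript
import { ConfidentialClientApplication } from "@azure/msal-node";

const cca = new ConfidentialClientApplication({
  auth: {
    clientId: "11111111-2222-3333-4444-555555555555",           // placeholder
    authority: "https://login.microsoftonline.com/<tenant-id>", // placeholder tenant
    clientSecret: process.env.CLIENT_SECRET!,                   // placeholder secret
  },
});

async function listUsers(): Promise<void> {
  // ".default" requests the application permissions consented on the registration.
  const result = await cca.acquireTokenByClientCredential({
    scopes: ["https://graph.microsoft.com/.default"],
  });
  const res = await fetch("https://graph.microsoft.com/v1.0/users", {
    headers: { Authorization: `Bearer ${result!.accessToken}` },
  });
  console.log(await res.json());
}

listUsers().catch(console.error);
```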
active-directory Workload Identities Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identities-faqs.md
+
+ Title: Workload identities license plans FAQ
+description: Learn about workload identities license plans, features and capabilities.
+Last updated : 2/21/2023
+
+#Customer intent: I want to know about workload identities licensing plans
+
+# Frequently asked questions about workload identities license plans
+
+[Workload identities](workload-identities-overview.md) is now available in two editions: **Free** and **Workload Identities Premium**. The free edition of workload identities is included with a subscription of a commercial online service such as [Azure](https://azure.microsoft.com/) and [Power Platform](https://powerplatform.microsoft.com/). The Workload
+Identities Premium offering is available through a Microsoft representative, the [Open Volume License
+Program](https://www.microsoft.com/licensing/how-to-buy/how-to-buy), and the [Cloud Solution Providers program](/azure/lighthouse/concepts/cloud-solution-provider). Azure and Microsoft 365 subscribers can also purchase Workload
+Identities Premium online.
+
+For more information, see [what are workload identities?](workload-identities-overview.md)
+
+>[!NOTE]
+>Workload Identities Premium is a standalone product and isn't included in other premium product plans. All subscribers require a license to use Workload Identities Premium features.
+
+Learn more about [workload identities
+pricing](https://www.microsoft.com/security/business/identity-access/microsoft-entra-workload-identities#office-StandaloneSKU-k3hubfz).
+
+## What features are included in Workload Identities Premium plan and which features are free?
+
+|Capabilities | Description | Free | Premium |
+|:--|:--|:--|:--|
+| **Authentication and authorization**| | | |
+| Create, read, update, delete workload identities | Create and update identities for securing service to service access | Yes | Yes |
+| Authenticate workload identities and tokens to access resources | Use Azure Active Directory (Azure AD) to protect resource access | Yes| Yes |
+| Workload identities sign-in activity and audit trail | Monitor and track workload identity behavior | Yes | Yes |
+| **Managed identities**| Use Azure AD identities in Azure without handling credentials | Yes| Yes |
+| Workload identity federation | Use workloads tested by external Identity Providers (IdPs) to access Azure AD protected resources | Yes | Yes |
+| **Conditional Access (CA)** | | | |
+| CA policies for workload identities |Define the condition in which a workload can access a resource, such as an IP range | | Yes |
+|**Lifecycle Management**| | | |
+|Access reviews for service provider-assigned privileged roles | Closely monitor workload identities with impactful permissions | | Yes |
+|**Identity Protection** | | | |
+|Identity Protection for workload identities | Detect and remediate compromised workload identities | | Yes |
+
+## What is the cost of Workload Identities Premium plan?
+
+Check the pricing for the [Microsoft Entra Workload Identities
+Premium](https://www.microsoft.com/security/business/identity-access/microsoft-entra-workload-identities#office-StandaloneSKU-k3hubfz)
+plan.
+
+## How do I purchase a Workload Identities Premium plan?
+
+You need an Azure or Microsoft 365 subscription. You can use a
+current subscription or set up a new one. Then, sign in to the [Microsoft
+Entra admin
+center](https://entra.microsoft.com/)
+with your credentials to buy Workload Identities licenses.
+
+## Through what channels can I purchase Workload Identities Premium plan?
+
+You can purchase the plan through Enterprise Agreement (EA)/Enterprise Subscription (EAS), Cloud Solution Providers (CSPs), or Web Direct.
+
+## Where can I find more feature details to determine if I need a license?
+
+Entra workload identities has three premium features that require a license.
+
+- [Conditional Access](../conditional-access/workload-identity.md):
+Supports location or risk-based policies for workload identities.
+
+- [Identity Protection](../identity-protection/concept-workload-identity-risk.md):
+Provides reports of compromised credentials, anomalous sign-ins, and
+suspicious changes to accounts.
+
+- [Access Reviews](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/introducing-azure-ad-access-reviews-for-service-principals/ba-p/1942488):
+Enables delegation of reviews to the right people, focused on the most
+important privileged roles.
+
+## What do the numbers in each category on the [Workload identities - Microsoft Entra admin center](https://entra.microsoft.com/#view/Microsoft_Azure_ManagedServiceIdentity/WorkloadIdentitiesBlade) mean?
+
+Category definitions:
+
+- **Enterprise apps/Service Principals**: This category includes multi-tenant apps, gallery apps, non-gallery apps and service principals.
+
+- **Microsoft apps**: Apps such as Outlook and Microsoft Teams.
+
+- [**Managed Identities**](https://entra.microsoft.com/#home): An identity for
+applications to use when connecting to resources that support Azure AD authentication.
+
+## How many licenses do I need to purchase? Do I need to license all workload identities including Microsoft and Managed Service Identities?
+
+All workload identities - service principals, apps, and managed identities - configured in your directory for a Microsoft Entra
+Workload Identities Premium feature require a license. Select and prioritize the identities based on the available licenses. Remove
+workload identities that are no longer required from the directory.
+
+The following identity functionalities are currently available to view
+in a directory:
+
+- Identity Protection: All single-tenant and multi-tenant service
+ principals excluding managed identities and Microsoft apps.
+
+- Conditional Access: Single-tenant service principals (excluding
+ managed identities) capable of acting as a subject/client, having a
+ defined credential.
+
+- Access reviews: All single-tenant and multi-tenant service
+ principals assigned to privileged roles.
+
+>[!NOTE]
+>Functionality is subject to change, and feature coverage is
+intended to expand.
+
+## Do these licenses require individual workload identities assignment?
+
+No, license assignment isn't required. One license in the tenant unlocks features for workload identities.
+
+## Can I get a free trial of Workload Identities Premium?
+
+Yes. You can get a [90-day free trial](https://entra.microsoft.com/#view/Microsoft_Azure_ManagedServiceIdentity/WorkloadIdentitiesBlade).
+In the Modern channel, only a 30-day trial is available. The free trial is
+unavailable in Government clouds.
+
+## Is the Workload Identities Premium edition available on Government clouds?
+
+Yes, it's available.
+
+## Is it possible to have a mix of Azure AD Premium P1, Azure AD Premium P2 and Workload Identities Premium licenses in one tenant?
+
+Yes, customers can have a mixture of license plans in one tenant.
active-directory Workload Identities Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identities-overview.md
Here are some ways you can use workload identities:
## Next steps
-Learn how to [secure access of workload identities](../conditional-access/workload-identity.md) with adaptive policies.
+- Learn how to [secure access of workload identities](../conditional-access/workload-identity.md) with adaptive policies.
+- Get answers to [frequently asked questions about workload identities](workload-identities-faqs.md).
active-directory Workload Identity Federation Create Trust User Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust-user-assigned-managed-identity.md
For a workflow triggered by a pull request event, specify an **Entity type** of
Fill in the **Cluster issuer URL**, **Namespace**, **Service account name**, and **Name** fields: -- **Cluster issuer URL** is the [OIDC issuer URL](../../aks/cluster-configuration.md#oidc-issuer) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster.
+- **Cluster issuer URL** is the [OIDC issuer URL](../../aks/use-oidc-issuer.md) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster.
- **Service account name** is the name of the Kubernetes service account, which provides an identity for processes that run in a Pod. - **Namespace** is the service account namespace. - **Name** is the name of the federated credential, which can't be changed later.
active-directory Workload Identity Federation Create Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust.md
To add a federated identity for GitHub actions, follow these steps:
:::image type="content" source="media/workload-identity-federation-create-trust/add-credential.png" alt-text="Screenshot of the Add a credential window, showing sample values." ::: - Use the following values from your Azure AD application registration for your GitHub workflow: - `AZURE_CLIENT_ID` the **Application (client) ID**
Select the **Kubernetes accessing Azure resources** scenario from the dropdown m
Fill in the **Cluster issuer URL**, **Namespace**, **Service account name**, and **Name** fields: -- **Cluster issuer URL** is the [OIDC issuer URL](../../aks/cluster-configuration.md#oidc-issuer) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster.
+- **Cluster issuer URL** is the [OIDC issuer URL](../../aks/use-oidc-issuer.md) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster.
- **Service account name** is the name of the Kubernetes service account, which provides an identity for processes that run in a Pod. - **Namespace** is the service account namespace. - **Name** is the name of the federated credential, which can't be changed later.
az ad app federated-credential create --id f6475511-fd81-4965-a00e-41e7792b7b9c
### Kubernetes example
-*issuer* is your service account issuer URL (the [OIDC issuer URL](../../aks/cluster-configuration.md#oidc-issuer) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster).
+*issuer* is your service account issuer URL (the [OIDC issuer URL](../../aks/use-oidc-issuer.md) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster).
*subject* is the subject name in the tokens issued to the service account. Kubernetes uses the following format for subject names: `system:serviceaccount:<SERVICE_ACCOUNT_NAMESPACE>:<SERVICE_ACCOUNT_NAME>`.
az ad app federated-credential delete --id f6475511-fd81-4965-a00e-41e7792b7b9c
::: zone pivot="identity-wif-apps-methods-powershell" ## Prerequisites - To run the example scripts, you have two options: - Use [Azure Cloud Shell](../../cloud-shell/overview.md), which you can open by using the **Try It** button in the upper-right corner of code blocks. - Run scripts locally with Azure PowerShell, as described in the next section.
New-AzADAppFederatedCredential -ApplicationObjectId $appObjectId -Audience api:/
### Kubernetes example - *ApplicationObjectId*: the object ID of the app (not the application (client) ID) you previously registered in Azure AD.-- *Issuer* is your service account issuer URL (the [OIDC issuer URL](../../aks/cluster-configuration.md#oidc-issuer) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster).
+- *Issuer* is your service account issuer URL (the [OIDC issuer URL](../../aks/use-oidc-issuer.md) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster).
- *Subject* is the subject name in the tokens issued to the service account. Kubernetes uses the following format for subject names: `system:serviceaccount:<SERVICE_ACCOUNT_NAMESPACE>:<SERVICE_ACCOUNT_NAME>`. - *Name* is the name of the federated credential, which can't be changed later. - *Audience* lists the audiences that can appear in the `aud` claim of the external token.
And you get the response:
Run the following method to configure a federated identity credential on an app and create a trust relationship with a Kubernetes service account. Specify the following parameters: -- *issuer* is your service account issuer URL (the [OIDC issuer URL](../../aks/cluster-configuration.md#oidc-issuer) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster).
+- *issuer* is your service account issuer URL (the [OIDC issuer URL](../../aks/use-oidc-issuer.md) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster).
- *subject* is the subject name in the tokens issued to the service account. Kubernetes uses the following format for subject names: `system:serviceaccount:<SERVICE_ACCOUNT_NAMESPACE>:<SERVICE_ACCOUNT_NAME>`. - *name* is the name of the federated credential, which can't be changed later. - *audiences* lists the audiences that can appear in the external token. This field is mandatory. The recommended value is "api://AzureADTokenExchange".
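All three method variants (Azure CLI, PowerShell, Microsoft Graph) wrap the same Graph call. Here's a hedged TypeScript sketch of the Kubernetes case, using the subject format and recommended audience described above; the token is assumed to carry `Application.ReadWrite.All`, and the object ID, issuer URL, namespace, and service account name are placeholders.

```typescript
async function addKubernetesFederatedCredential(token: string): Promise<void> {
  const appObjectId = "<app-object-id>"; // the app OBJECT id, not the client id
  const res = await fetch(
    `https://graph.microsoft.com/v1.0/applications/${appObjectId}/federatedIdentityCredentials`,
    {
      method: "POST",
      headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
      body: JSON.stringify({
        name: "kubernetes-federated-credential",     // can't be changed later
        issuer: "https://<cluster-oidc-issuer-url>", // cluster issuer URL
        subject: "system:serviceaccount:<SERVICE_ACCOUNT_NAMESPACE>:<SERVICE_ACCOUNT_NAME>",
        audiences: ["api://AzureADTokenExchange"],   // recommended audience value
      }),
    }
  );
  if (!res.ok) throw new Error(`Graph returned ${res.status}`);
}
```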
active-directory Howto Hybrid Azure Ad Join https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-hybrid-azure-ad-join.md
Organizations can test hybrid Azure AD join on a subset of their environment bef
Some organizations may not be able to use Azure AD Connect to configure AD FS. The steps to configure the claims manually can be found in the article [Configure hybrid Azure Active Directory join manually](hybrid-azuread-join-manual.md).
-### Government cloud
+### US Government cloud (inclusive of GCCHigh and DoD)
For organizations in [Azure Government](https://azure.microsoft.com/global-infrastructure/government/), hybrid Azure AD join requires devices to have access to the following Microsoft resources from inside your organization's network: -- `https://enterpriseregistration.microsoftonline.us`
+- `https://enterpriseregistration.windows.net` **and** `https://enterpriseregistration.microsoftonline.us`
- `https://login.microsoftonline.us` - `https://device.login.microsoftonline.us` - `https://autologon.microsoft.us` (If you use or plan to use seamless SSO)
active-directory 2 Secure Access Current State https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/2-secure-access-current-state.md
Title: Discover the current state of external collaboration with Azure Active Directory
-description: Learn methods to discover the current state of your collaboration
+ Title: Discover the current state of external collaboration in your organization
+description: Discover the current state of an organization's collaboration with audit logs, reporting, allowlist, blocklist, and more.
Previously updated : 12/15/2022 Last updated : 02/21/2023
Before you learn about the current state of your external collaboration, determine a security posture. Consider centralized vs. delegated control, as well as governance, regulatory, and compliance targets.
-Learn more: [Determine your security posture for external users](1-secure-access-posture.md)
+Learn more: [Determine your security posture for external access with Azure Active Directory](1-secure-access-posture.md)
-Users in your organization likely collaborate with users from other organizations. Collaboration can occur with productivity applications like Microsoft 365, by email, or sharing resources with external users. The foundation of your governance plan can include:
+Users in your organization likely collaborate with users from other organizations. Collaboration occurs with productivity applications like Microsoft 365, by email, or by sharing resources with external users. These scenarios include users:
-* Users initiating external collaboration
-* Collaboration with external users and organizations
-* Access granted to external users
+* Initiating external collaboration
+* Collaborating with external users and organizations
+* Granting access to external users
-## Users initiating external collaboration
+## Determine who initiates external collaboration
-Users seeking external collaboration know the applications needed for their work, and when access ends. Therefore, determine users with delegated permission to invite external users, create access packages, and complete access reviews.
+Generally, users seeking external collaboration know the applications to use, and when access ends. Therefore, determine users with delegated permissions to invite external users, create access packages, and complete access reviews.
To find collaborating users:
-* [Microsoft 365, audit log activities](/microsoft-365/compliance/audit-log-activities?view=o365-worldwide&preserve-view=true)
-* [Auditing and reporting a B2B collaboration user](../external-identities/auditing-and-reporting.md)
+* Microsoft 365 [Audit log activities](/microsoft-365/compliance/audit-log-activities?view=o365-worldwide&preserve-view=true) - search for events and discover activities audited in Microsoft 365
+* [Auditing and reporting a B2B collaboration user](../external-identities/auditing-and-reporting.md) - verify guest user access, and see records of system and user activities
-## Collaboration with external users and organizations
+## Enumerate guest users and organizations
-External users might be Azure AD B2B users with partner-managed credentials, or external users with locally provisioned credentials. Typically, these users are a UserType of Guest. See, [B2B collaboration overview](../external-identities/what-is-b2b.md).
+External users might be Azure AD B2B users with partner-managed credentials, or external users with locally provisioned credentials. Typically, these users are the Guest UserType. To learn about inviting guest users and sharing resources, see [B2B collaboration overview](../external-identities/what-is-b2b.md).
You can enumerate guest users with:
You can enumerate guest users with:
* [PowerShell](/graph/api/user-list?tabs=http) * [Azure portal](../enterprise-users/users-bulk-download.md)
-There are tools to identify Azure AD B2B collaboration, external Azure AD tenants and users accessing applications:
+Use the following tools to identify Azure AD B2B collaboration, external Azure AD tenants, and users accessing applications:
-* [PowerShell module](https://github.com/AzureAD/MSIdentityTools/wiki/Get-MSIDCrossTenantAccessActivity)
-* [Azure Monitor workbook](../reports-monitoring/workbook-cross-tenant-access-activity.md)
+* PowerShell module, [Get MsIdCrossTenantAccessActivity](https://github.com/AzureAD/MSIdentityTools/wiki/Get-MSIDCrossTenantAccessActivity)
+* [Cross-tenant access activity workbook](../reports-monitoring/workbook-cross-tenant-access-activity.md)
-### Email domains and companyName property
+### Discover email domains and companyName property
-Determine external organizations with the domain names of external user email addresses. This discovery might not be possible with consumer identity providers such as Google. We recommend you write the companyName attribute to identify external organizations.
+You can determine external organizations with the domain names of external user email addresses. This discovery might not be possible with consumer identity providers. We recommend you write the companyName attribute to identify external organizations.
-### Allowlist, blocklist, and entitlement management
+### Use allowlist, blocklist, and entitlement management
-For your organization to collaborate with, or block, specific organizations, at the tenant level, there is allowlist or blocklist. Use this feature to control B2B invitations and redemptions regardless of source (such as Microsoft Teams, SharePoint, or the Azure portal). See, [Allow or block invitations to B2B users from specific organizations](../external-identities/allow-deny-list.md).
+Use the allowlist or blocklist to enable your organization to collaborate with, or block, organizations at the tenant level. Control B2B invitations and redemptions regardless of source (such as Microsoft Teams, SharePoint, or the Azure portal).
+
+See, [Allow or block invitations to B2B users from specific organizations](../external-identities/allow-deny-list.md)
If you use entitlement management, you can confine access packages to a subset of partners with the **Specific connected organizations** option, under New access packages, in Identity Governance.
- ![Screenshot of the Specific connected organizations option, under New access packages.](media/secure-external-access/2-new-access-package.png)
+ ![Screenshot of settings and options under Identity Governance, New access package.](media/secure-external-access/2-new-access-package.png)
-## External user access
+## Determine external user access
-After you have an inventory of external users and organizations, determine the access to grant to these users. You can use the Microsoft Graph API to determine Azure AD group membership or application assignment.
+With an inventory of external users and organizations, determine the access to grant to the users. You can use the Microsoft Graph API to determine Azure AD group membership or application assignment.
* [Working with groups in Microsoft Graph](/graph/api/resources/groups-overview?context=graph%2Fcontext&view=graph-rest-1.0&preserve-view=true) * [Applications API overview](/graph/applications-concept-overview?view=graph-rest-1.0&preserve-view=true)
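As a hedged sketch of that inventory step, the following queries Microsoft Graph for a guest user's group memberships and direct app role assignments; token acquisition and the user ID are placeholders, and `Directory.Read.All` is an assumed permission.

```typescript
async function inventoryGuestAccess(token: string, userId: string) {
  const headers = { Authorization: `Bearer ${token}` };
  // Groups and directory roles the guest belongs to.
  const groupsRes = await fetch(
    `https://graph.microsoft.com/v1.0/users/${userId}/memberOf`, { headers });
  // Applications the guest is directly assigned to.
  const appsRes = await fetch(
    `https://graph.microsoft.com/v1.0/users/${userId}/appRoleAssignments`, { headers });
  return { groups: await groupsRes.json(), apps: await appsRes.json() };
}
```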
active-directory Active Directory Whatis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-whatis.md
After you choose your Azure AD license, you'll get access to some or all of the
|Managed identities for Azure resources|Provide your Azure services with an automatically managed identity in Azure AD that can authenticate any Azure AD-supported authentication service, including Key Vault. For more information, see [What is managed identities for Azure resources?](../managed-identities-azure-resources/overview.md).| |Privileged identity management (PIM)|Manage, control, and monitor access within your organization. This feature includes access to resources in Azure AD and Azure, and other Microsoft Online Services, like Microsoft 365 or Intune. For more information, see [Azure AD Privileged Identity Management](../privileged-identity-management/index.yml).| |Reports and monitoring|Gain insights into the security and usage patterns in your environment. For more information, see [Azure Active Directory reports and monitoring](../reports-monitoring/index.yml).|
+| Workload identities| Give an identity to your software workload (such as an application, service, script, or container) to authenticate and access other services and resources. For more information, see [workload identities FAQs](../develop/workload-identities-faqs.md).|
## Terminology
active-directory Whats Deprecated Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-deprecated-azure-ad.md
Use the following table to learn about changes including deprecations, retiremen
|Functionality, feature, or service|Change|Change date | |||:|
-|Microsoft Authenticator app [Number matching](../authentication/how-to-mfa-number-match.md)|Feature change|Feb 27, 2023|
+|Microsoft Authenticator app [Number matching](../authentication/how-to-mfa-number-match.md)|Feature change|May 8, 2023|
|Azure AD DS [virtual network deployments](../../active-directory-domain-services/migrate-from-classic-vnet.md)|Retirement|Mar 1, 2023| |[License management API, PowerShell](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/migrate-your-apps-to-access-the-license-managements-apis-from/ba-p/2464366)|Retirement|*Mar 31, 2023| |[Azure AD Authentication Library (ADAL)](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-september-2022-train/ba-p/2967454)|Retirement|Jun 30, 2023|
active-directory Whats New Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-sovereign-clouds.md
Azure AD receives improvements on an ongoing basis. To stay up to date with the
This page is updated monthly, so revisit it regularly.
+## January 2023
+
+### General Availability - Azure AD Domain
+
+**Type:** New feature
+**Service category:** Azure AD Domain Services
+**Product capability:** Azure AD Domain Services
+
+Within the Azure portal, you can now view key data for your Azure AD DS domain controllers, such as: LDAP Searches/sec, Total Query Received/sec, DNS Total Response Sent/sec, LDAP Successful Binds/sec, memory usage, processor time, Kerberos Authentications, and NTLM Authentications. For more information, see: [Check fleet metrics of Azure Active Directory Domain Services](/azure/active-directory-domain-services/fleet-metrics).
+### General Availability - Add multiple domains to the same SAML/Ws-Fed based identity provider configuration for your external users
+
+**Type:** New feature
+**Service category:** B2B
+**Product capability:** B2B/B2C
+
+An IT admin can now add multiple domains to a single SAML/WS-Fed identity provider configuration to invite users from multiple domains to authenticate from the same identity provider endpoint. For more information, see: [Federation with SAML/WS-Fed identity providers for guest users](../external-identities/direct-federation.md).
+### General Availability - New risk in Identity Protection: Anomalous user activity
+
+**Type:** New feature
+**Service category:** Conditional Access
+**Product capability:** Identity Security & Protection
+
+This risk detection baselines normal administrative user behavior in Azure AD, and spots anomalous patterns of behavior like suspicious changes to the directory. The detection is triggered against the administrator making the change or the object that was changed. For more information, see: [User-linked detections](../identity-protection/concept-identity-protection-risks.md#user-linked-detections).
+### General Availability - Administrative unit support for devices
+
+**Type:** New feature
+**Service category:** Directory Management
+**Product capability:** AuthZ/Access Delegation
+
+You can now use administrative units to delegate management of specified devices in your tenant by adding devices to an administrative unit, and assigning built-in and custom device management roles scoped to that administrative unit. For more information, see: [Device management](../roles/administrative-units.md#device-management).
+### General Availability - Azure AD Terms of Use (ToU) API
+
+**Type:** New feature
+**Service category:** Conditional Access
+**Product capability:** Identity Security & Protection
+
+This resource represents a tenant's customizable terms of use agreement that is created and managed with Azure Active Directory (Azure AD). You can use the following methods to create and manage the [Azure Active Directory Terms of Use feature](/graph/api/resources/agreement?#json-representation) according to your scenario. For more information, see: [agreement resource type](/graph/api/resources/agreement).
## December 2022 ### General Availability - Risk-based Conditional Access for workload identities
Customers can now bring one of the most powerful forms of access control in the
**Service category:** Enterprise Apps **Product capability:** Identity Lifecycle Management
-Restore a recently deleted application, group, servicePrincipal, administrative unit, or user object from deleted items. If an item was accidentally deleted, you can fully restore the item. This isn't applicable to security groups, which are deleted permanently. A recently deleted item will remain available for up to 30 days. After 30 days, the item is permanently deleted. For more information, see: [servicePrincipal resource type](/graph/api/resources/serviceprincipal).
+Restore a recently deleted application, group, servicePrincipal, administrative unit, or user object from deleted items. If an item was accidentally deleted, you can fully restore the item. This isn't applicable to security groups, which are deleted permanently. A recently deleted item remains available for up to 30 days. After 30 days, the item is permanently deleted. For more information, see: [servicePrincipal resource type](/graph/api/resources/serviceprincipal).
Restore a recently deleted application, group, servicePrincipal, administrative
**Service category:** Authentications (Logins) **Product capability:** Identity Security & Protection
-We're excited to announce the general availability of hybrid cloud Kerberos trust, a new Windows Hello for Business deployment model to enable a password-less sign-in experience. With this new model, we've made Windows Hello for Business much easier to deploy than the existing key trust and certificate trust deployment models by removing the need for maintaining complicated public key infrastructure (PKI), and Azure Active Directory (AD) Connect synchronization wait times. For more information, see: [Migrate to cloud authentication using Staged Rollout](../hybrid/how-to-connect-staged-rollout.md).
+We're excited to announce the general availability of hybrid cloud Kerberos trust, a new Windows Hello for Business deployment model to enable a password-less sign-in experience. With this new model, we've made Windows Hello for Business easier to deploy than the existing key trust and certificate trust deployment models by removing the need for maintaining complicated public key infrastructure (PKI), and Azure Active Directory (AD) Connect synchronization wait times. For more information, see: [Migrate to cloud authentication using Staged Rollout](../hybrid/how-to-connect-staged-rollout.md).
We're excited to announce the general availability of hybrid cloud Kerberos trus
**Service category:** Authentications (Logins) **Product capability:** User Authentication
-We're excited to announce the general availability of hybrid cloud Kerberos trust, a new Windows Hello for Business deployment model to enable a password-less sign-in experience. With this new model, we've made Windows Hello for Business much easier to deploy than the existing key trust and certificate trust deployment models by removing the need for maintaining complicated public key infrastructure (PKI), and Azure Active Directory (AD) Connect synchronization wait times. For more information, see: [Hybrid Cloud Kerberos Trust Deployment](/windows/security/identity-protection/hello-for-business/hello-hybrid-cloud-kerberos-trust).
+We're excited to announce the general availability of hybrid cloud Kerberos trust, a new Windows Hello for Business deployment model to enable a password-less sign-in experience. With this new model, we've made Windows Hello for Business easier to deploy than the existing key trust and certificate trust deployment models by removing the need for maintaining complicated public key infrastructure (PKI), and Azure Active Directory (AD) Connect synchronization wait times. For more information, see: [Hybrid Cloud Kerberos Trust Deployment](/windows/security/identity-protection/hello-for-business/hello-hybrid-cloud-kerberos-trust).
active-directory How To Lifecycle Workflow Sync Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/how-to-lifecycle-workflow-sync-attributes.md
For more attributes, see the [Workday attribute reference](../app-provisioning/w
## Importance of time
-To ensure timing accuracy of scheduled workflows it's curial to consider:
+To ensure timing accuracy of scheduled workflows it's crucial to consider:
- The time portion of the attribute must be set accordingly. For example, the `employeeHireDate` should have a time at the beginning of the day, like 1AM or 5AM, and the `employeeLeaveDateTime` should have a time at the end of the day, like 9PM or 11PM - The workflows won't run earlier than the time specified in the attribute; however, the [tenant schedule (default 3h)](customize-workflow-schedule.md) may delay the workflow run. For instance, if you set the `employeeHireDate` to 8AM but the tenant schedule doesn't run until 9AM, the workflow won't be processed until then. If a new hire is starting at 8AM, you would want to set the time to something like (start time - tenant schedule) to ensure it has run before the employee arrives.
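A hedged sketch of applying that guidance when writing the attribute through Microsoft Graph; the token and user ID are placeholders, and sufficient write permission on the attribute is assumed.

```typescript
async function setHireDate(token: string, userId: string): Promise<void> {
  // Early-morning time so the workflow can run before the new hire arrives;
  // avoid a bare date or an end-of-day timestamp for hire dates.
  const hireDate = "2023-03-01T05:00:00Z";
  const res = await fetch(`https://graph.microsoft.com/v1.0/users/${userId}`, {
    method: "PATCH",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: JSON.stringify({ employeeHireDate: hireDate }),
  });
  if (!res.ok) throw new Error(`PATCH failed: ${res.status}`);
}
```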
active-directory Howto Configure Prerequisites For Reporting Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-configure-prerequisites-for-reporting-api.md
Once you have the app registration configured, you can run activity log queries
1. Use one of the following queries to start using Microsoft Graph for accessing activity logs: - GET `https://graph.microsoft.com/v1.0/auditLogs/directoryAudits` - GET `https://graph.microsoft.com/v1.0/auditLogs/signIns`
- - For more information on Microsoft Graph queries for activity logs, see [Activity reports API overview](/graph/api/resources/azuread-auditlog-overview)
+ - For more information on Microsoft Graph queries for activity logs, see [Activity reports API overview](/graph/api/resources/azure-ad-auditlog-overview)
![Screenshot of an activity log GET query in Microsoft Graph.](./media/howto-configure-prerequisites-for-reporting-api/graph-sample-get-query.png)
Programmatic access APIs:
* [Get started with Azure Active Directory Identity Protection and Microsoft Graph](../identity-protection/howto-identity-protection-graph-api.md) * [Audit API reference](/graph/api/resources/directoryaudit)
-* [Sign-in API reference](/graph/api/resources/signin)
+* [Sign-in API reference](/graph/api/resources/signin)
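Once the prerequisites are in place, the two GET queries above can be scripted. A minimal sketch follows; the token is a placeholder and the filter date is illustrative.

```typescript
async function recentSignIns(token: string): Promise<void> {
  const url =
    "https://graph.microsoft.com/v1.0/auditLogs/signIns" +
    "?$top=10&$filter=createdDateTime ge 2023-02-01T00:00:00Z";
  const res = await fetch(url, { headers: { Authorization: `Bearer ${token}` } });
  const { value } = await res.json();
  for (const s of value) {
    console.log(s.createdDateTime, s.userPrincipalName, s.appDisplayName);
  }
}
```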
active-directory Hpesaas Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hpesaas-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://<SUBDOMAIN>.saas.hpe.com` > [!NOTE]
- > The Identifier value is not real. Update this value with the actual Identifier. Contact [HPE SaaS Client support team](https://www.sas.com/en_us/contact.html) to get this value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > The Identifier value is not real. Update this value with the actual Identifier. Contact [HPE SaaS Client support team](https://support.hpe.com/connect/s/?language=en_US) to get this value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
aks Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-configuration.md
Title: Cluster configuration in Azure Kubernetes Services (AKS) description: Learn how to configure a cluster in Azure Kubernetes Service (AKS) Previously updated : 12/09/2022 Last updated : 02/16/2023 # Configure an AKS cluster
AKS supports Ubuntu 18.04 as the default node operating system (OS) in general a
## Container runtime configuration
-A container runtime is software that executes containers and manages container images on a node. The runtime helps abstract away sys-calls or operating system (OS) specific functionality to run containers on Linux or Windows. For Linux node pools, `containerd` is used for node pools using Kubernetes version 1.19 and greater. For Windows Server 2019 node pools, `containerd` is generally available and will be the only container runtime option in Kubernetes 1.21 and greater. Docker is no longer supported as of September 2022. For more information about this deprecation, see the [AKS release notes][aks-release-notes].
+A container runtime is software that executes containers and manages container images on a node. The runtime helps abstract away sys-calls or operating system (OS) specific functionality to run containers on Linux or Windows. Linux node pools use `containerd` on Kubernetes version 1.19 and greater. On Windows Server 2019 node pools, `containerd` is generally available and the only container runtime option on Kubernetes 1.21 and greater. Docker is no longer supported as of September 2022. For more information about this deprecation, see the [AKS release notes][aks-release-notes].
-[`Containerd`](https://containerd.io/) is an [OCI](https://opencontainers.org/) (Open Container Initiative) compliant core container runtime that provides the minimum set of required functionality to execute containers and manage images on a node. It was [donated](https://www.cncf.io/announcement/2017/03/29/containerd-joins-cloud-native-computing-foundation/) to the Cloud Native Compute Foundation (CNCF) in March of 2017. The current Moby (upstream Docker) version that AKS uses already uses and is built on top of `containerd`, as shown above.
+[`Containerd`](https://containerd.io/) is an [OCI](https://opencontainers.org/) (Open Container Initiative) compliant core container runtime that provides the minimum set of required functionality to execute containers and manage images on a node. `Containerd` was [donated](https://www.cncf.io/announcement/2017/03/29/containerd-joins-cloud-native-computing-foundation/) to the Cloud Native Compute Foundation (CNCF) in March of 2017. The current Moby (upstream Docker) version that AKS uses is built on top of `containerd`.
-With a `containerd`-based node and node pools, instead of talking to the `dockershim`, the kubelet will talk directly to `containerd` via the CRI (container runtime interface) plugin, removing extra hops on the flow when compared to the Docker CRI implementation. As such, you'll see better pod startup latency and less resource (CPU and memory) usage.
+With a `containerd`-based node and node pools, instead of talking to the `dockershim`, the kubelet talks directly to `containerd` using the CRI (container runtime interface) plugin, removing extra hops in the data flow when compared to the Docker CRI implementation. As such, you see better pod startup latency and less resource (CPU and memory) usage.
-By using `containerd` for AKS nodes, pod startup latency improves and node resource consumption by the container runtime decreases. These improvements are enabled by this new architecture where kubelet talks directly to `containerd` through the CRI plugin while in Moby/docker architecture kubelet would talk to the `dockershim` and docker engine before reaching `containerd`, thus having extra hops on the flow.
+By using `containerd` for AKS nodes, pod startup latency improves and node resource consumption by the container runtime decreases. The new architecture enables these improvements: the kubelet talks directly to `containerd` through the CRI plugin, while in the Moby/docker architecture, the kubelet talks to the `dockershim` and docker engine before reaching `containerd`, adding extra hops to the data flow.
![Docker CRI 2](media/cluster-configuration/containerd-cri.png)
-`Containerd` works on every GA version of Kubernetes in AKS, and in every upstream kubernetes version above v1.19, and supports all Kubernetes and AKS features.
+`Containerd` works on every GA version of Kubernetes in AKS, in every upstream Kubernetes version v1.19 and newer, and supports all Kubernetes and AKS features.
> [!IMPORTANT]
> Clusters with Linux node pools created on Kubernetes v1.19 or greater default to `containerd` for their container runtime. Clusters with node pools on earlier supported Kubernetes versions receive Docker for their container runtime. Linux node pools will be updated to `containerd` once the node pool Kubernetes version is updated to a version that supports `containerd`.
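A quick way to confirm which runtime your nodes are using is to check the `CONTAINER-RUNTIME` column in the node list; a minimal sketch, assuming `kubectl` is already connected to the cluster:

```bash
# Assumes kubectl credentials for the cluster (for example, via az aks get-credentials).
# The CONTAINER-RUNTIME column reports containerd:// or docker:// for each node.
kubectl get nodes -o wide
```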
By using `containerd` for AKS nodes, pod startup latency improves and node resou
* For `containerd`, we recommend using [`crictl`](https://kubernetes.io/docs/tasks/debug-application-cluster/crictl) as a replacement CLI instead of the Docker CLI for **troubleshooting** pods, containers, and container images on Kubernetes nodes (for example, `crictl ps`).
- * It doesn't provide the complete functionality of the docker CLI. It's intended for troubleshooting only.
+ * `crictl` doesn't provide the complete functionality of the docker CLI. It's intended for troubleshooting only.
 * `crictl` offers a more kubernetes-friendly view of containers, with concepts like pods present.
* `Containerd` sets up logging using the standardized `cri` logging format (which is different from what you currently get from docker's json driver). Your logging solution needs to support the `cri` logging format (like [Azure Monitor for Containers](../azure-monitor/containers/container-insights-enable-new-cluster.md)).
* You can no longer access the docker engine, `/var/run/docker.sock`, or use Docker-in-Docker (DinD).
  * If you currently extract application logs or monitoring data from Docker engine, use [Container insights](../azure-monitor/containers/container-insights-enable-new-cluster.md) instead. AKS doesn't support running any out of band commands on the agent nodes that could cause instability.
- * Building images and directly using the Docker engine using the methods above isn't recommended. Kubernetes isn't fully aware of those consumed resources, and those methods present numerous issues as described [here](https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/) and [here](https://securityboulevard.com/2018/05/escaping-the-whale-things-you-probably-shouldnt-do-with-docker-part-1/).
+ * Building images and directly using the Docker engine via the methods mentioned earlier isn't recommended. Kubernetes isn't fully aware of those consumed resources, and those methods present numerous issues as described [here](https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/) and [here](https://securityboulevard.com/2018/05/escaping-the-whale-things-you-probably-shouldnt-do-with-docker-part-1/).
 * Building images - You can continue to use your current Docker build workflow as normal, unless you're building images inside your AKS cluster. In this case, consider switching to the recommended approach for building images using [ACR Tasks](../container-registry/container-registry-quickstart-task-cli.md), or a more secure in-cluster option like [Docker Buildx](https://github.com/docker/buildx).

## Generation 2 virtual machines
-Azure supports [Generation 2 (Gen2) virtual machines (VMs)](../virtual-machines/generation-2.md). Generation 2 VMs support key features that aren't supported in generation 1 VMs (Gen1). These features include increased memory, Intel Software Guard Extensions (Intel SGX), and virtualized persistent memory (vPMEM).
+Azure supports [Generation 2 (Gen2) virtual machines (VMs)](../virtual-machines/generation-2.md). Generation 2 VMs support key features not supported in generation 1 VMs (Gen1). These features include increased memory, Intel Software Guard Extensions (Intel SGX), and virtualized persistent memory (vPMEM).
Generation 2 VMs use the new UEFI-based boot architecture rather than the BIOS-based architecture used by generation 1 VMs. Only specific SKUs and sizes support Gen2 VMs. Check the [list of supported sizes](../virtual-machines/generation-2.md#generation-2-vm-sizes) to see if your SKU supports or requires Gen2.
-Additionally not all VM images support Gen2, on AKS Gen2 VMs will use the new [AKS Ubuntu 18.04 image](#os-configuration). This image supports all Gen2 SKUs and sizes.
+Additionally, not all VM images support Gen2. On AKS, Gen2 VMs use the new [AKS Ubuntu 18.04 image](#os-configuration). This image supports all Gen2 SKUs and sizes.
## Default OS disk sizing
-By default, when creating a new cluster or adding a new node pool to an existing cluster, the OS disk size is determined by the number for vCPUs. The number of vCPUs is based on the VM SKU and the default values are shown in the following table:
+When you create a new cluster or add a new node pool to an existing cluster, by default the OS disk size is determined by the number of vCPUs. The number of vCPUs is based on the VM SKU. The following table lists the default values:
-|VM SKU Cores (vCPUs)| Default OS Disk Tier | Provisioned IOPS | Provisioned Throughput (Mpbs) |
+|VM SKU Cores (vCPUs)| Default OS Disk Tier | Provisioned IOPS | Provisioned Throughput (Mbps) |
|--|--|--|--|
| 1 - 7 | P10/128G | 500 | 100 |
| 8 - 15 | P15/256G | 1100 | 125 |
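If the defaults don't fit your workload, you can set an explicit OS disk size at creation time with the `--node-osdisk-size` flag. A minimal sketch (the 128 GB value is illustrative):

```azurecli
az aks create --name myAKSCluster --resource-group myResourceGroup --node-osdisk-size 128
```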
By default, when creating a new cluster or adding a new node pool to an existing
## Ephemeral OS
-By default, Azure automatically replicates the operating system disk for a virtual machine to Azure storage to avoid data loss if the VM needs to be relocated to another host. However, since containers aren't designed to have local state persisted, this behavior offers limited value while providing some drawbacks, including slower node provisioning and higher read/write latency.
+By default, Azure automatically replicates the operating system disk for a virtual machine to Azure storage to avoid data loss when the VM is relocated to another host. However, since containers aren't designed to have local state persisted, this behavior offers limited value while providing some drawbacks. These drawbacks include, but aren't limited to, slower node provisioning and higher read/write latency.
By contrast, ephemeral OS disks are stored only on the host machine, just like a temporary disk. This configuration provides lower read/write latency, along with faster node scaling and cluster upgrades.
-Like the temporary disk, an ephemeral OS disk is included in the price of the virtual machine, so you don't incur more storage costs.
+Like the temporary disk, an ephemeral OS disk is included in the price of the VM, so you don't incur additional storage costs.
> [!IMPORTANT]
-> When you don't explicitly request managed disks for the OS, AKS will default to ephemeral OS if possible for a given node pool configuration.
+> When you don't explicitly request managed disks for the OS, AKS defaults to ephemeral OS if possible for a given node pool configuration.
If you choose to use an ephemeral OS, the OS disk must fit in the VM cache. Size requirements and recommendations for VM cache are available in the [Azure VM documentation](../virtual-machines/ephemeral-os-disks.md).
-If you chose to use the AKS default VM size [Standard_DS2_v2](../virtual-machines/dv2-dsv2-series.md#dsv2-series) SKU with the default OS disk size of 100 GB. The default VM size supports ephemeral OS, but only has 86 GB of cache size. This configuration would default to managed disks if you don't explicitly specify it. If you do request an ephemeral OS, you'll receive a validation error.
+For example, suppose you use the AKS default VM size [Standard_DS2_v2](../virtual-machines/dv2-dsv2-series.md#dsv2-series) SKU with the default OS disk size of 100 GB. This VM size supports ephemeral OS, but only has 86 GiB of cache size. This configuration would default to managed disks if you don't explicitly specify it. If you do request an ephemeral OS, you receive a validation error.
-If you request the same [Standard_DS2_v2](../virtual-machines/dv2-dsv2-series.md#dsv2-series) SKU with a 60GB OS disk, this configuration would default to ephemeral OS. The requested size of 60GB is smaller than the maximum cache size of 86 GB.
+If you request the same [Standard_DS2_v2](../virtual-machines/dv2-dsv2-series.md#dsv2-series) SKU with a 60 GiB OS disk, this configuration would default to ephemeral OS. The requested size of 60 GiB is smaller than the maximum cache size of 86 GiB.
-If you select the [Standard_D8s_v3](../virtual-machines/dv3-dsv3-series.md#dsv3-series) SKU with 100 GB OS disk, this VM size supports ephemeral OS and has 200 GB of cache space. If you don't specify the OS disk type, the node pool would receive ephemeral OS by default.
+If you select the [Standard_D8s_v3](../virtual-machines/dv3-dsv3-series.md#dsv3-series) SKU with a 100 GB OS disk, this VM size supports ephemeral OS and has 200 GiB of cache space. If you don't specify the OS disk type, the node pool would receive ephemeral OS by default.
-The latest generation of VM series doesn't have a dedicated cache, but only temporary storage. Let's assume to use the [Standard_E2bds_v5](../virtual-machines/ebdsv5-ebsv5-series.md#ebdsv5-series) VM size with the default OS disk size of 100 GiB as an example. This VM size supports ephemeral OS disks, but only has 75 GiB of temporary storage. This configuration would default to managed OS disks if you don't explicitly specify it. If you do request an ephemeral OS disk, you'll receive a validation error.
+The latest generation of VM series doesn't have a dedicated cache, but only temporary storage. As an example, let's use the [Standard_E2bds_v5](../virtual-machines/ebdsv5-ebsv5-series.md#ebdsv5-series) VM size with the default OS disk size of 100 GiB. This VM size supports ephemeral OS disks, but only has 75 GiB of temporary storage. This configuration would default to managed OS disks if you don't explicitly specify it. If you do request an ephemeral OS disk, you receive a validation error.
-If you request the same [Standard_E2bds_v5](../virtual-machines/ebdsv5-ebsv5-series.md#ebdsv5-series) VM size with a 60 GiB OS disk, this configuration would default to ephemeral OS disks. The requested size of 60 GiB is smaller than the maximum temporary storage of 75 GiB.
+If you request the same [Standard_E2bds_v5](../virtual-machines/ebdsv5-ebsv5-series.md#ebdsv5-series) VM size with a 60 GiB OS disk, this configuration defaults to ephemeral OS disks. The requested size of 60 GiB is smaller than the maximum temporary storage of 75 GiB.
-If you chose to use [Standard_E4bds_v5](../virtual-machines/ebdsv5-ebsv5-series.md#ebdsv5-series) SKU with 100 GiB OS disk, this VM size supports ephemeral OS and has 150 GiB of temporary storage. If you don't specify the OS disk type, the node pool is provisioned with an ephemeral OS by default.
+If you choose the [Standard_E4bds_v5](../virtual-machines/ebdsv5-ebsv5-series.md#ebdsv5-series) SKU with a 100 GiB OS disk, this VM size supports ephemeral OS and has 150 GiB of temporary storage. If you don't specify the OS disk type, by default Azure provisions an ephemeral OS disk for the node pool.
Ephemeral OS requires at least version 2.15.0 of the Azure CLI.

### Use Ephemeral OS on new clusters
-Configure the cluster to use Ephemeral OS disks when the cluster is created. Use the `--node-osdisk-type` flag to set Ephemeral OS as the OS disk type for the new cluster.
+Configure the cluster to use ephemeral OS disks when the cluster is created. Use the `--node-osdisk-type` flag to set `Ephemeral` as the OS disk type for the new cluster.
```azurecli
az aks create --name myAKSCluster --resource-group myResourceGroup -s Standard_DS3_v2 --node-osdisk-type Ephemeral
```
-If you want to create a regular cluster using network-attached OS disks, you can do so by specifying `--node-osdisk-type=Managed`. You can also choose to add more ephemeral OS node pools as described below.
+If you want to create a regular cluster using network-attached OS disks, you can do so by specifying `--node-osdisk-type=Managed`. You can also choose to add other ephemeral OS node pools as described below.
### Use Ephemeral OS on existing clusters
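A node pool with ephemeral OS disks can be added to an existing cluster with the same `--node-osdisk-type` flag; a minimal sketch (the pool name is illustrative):

```azurecli
az aks nodepool add --name ephpool --cluster-name myAKSCluster --resource-group myResourceGroup -s Standard_DS3_v2 --node-osdisk-type Ephemeral
```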
Similarly, you can specify the Mariner `os_sku` in [`azurerm_kubernetes_cluster_
## Custom resource group name
-When you deploy an Azure Kubernetes Service cluster in Azure, a second resource group is created for the worker nodes. By default, AKS names the node resource group `MC_resourcegroupname_clustername_location`, but you can also specify a custom name.
+When you deploy an Azure Kubernetes Service cluster in Azure, a second resource group is also created for the worker nodes. By default, AKS names the node resource group `MC_resourcegroupname_clustername_location`, but you can specify a custom name.
-To specify a custom resource group name, install the `aks-preview` Azure CLI extension version 0.3.2 or later. When using the Azure CLI, include the `--node-resource-group` parameter of the `az aks create` command to specify a custom name for the resource group. If you use an Azure Resource Manager template to deploy an AKS cluster, you can define the resource group name by using the `nodeResourceGroup` property.
+To specify a custom resource group name, install the `aks-preview` Azure CLI extension version 0.3.2 or later. When using the Azure CLI, include the `--node-resource-group` parameter with the `az aks create` command to specify a custom name for the resource group. To deploy an AKS cluster with an Azure Resource Manager template, you can define the resource group name by using the `nodeResourceGroup` property.
```azurecli
az aks create --name myAKSCluster --resource-group myResourceGroup --node-resource-group myNodeResourceGroup
```
-The secondary resource group is automatically created by the Azure resource provider in your own subscription. You can only specify the custom resource group name when the cluster is created.
+The secondary resource group is automatically created by the Azure resource provider in your own subscription. You can only specify the custom resource group name during cluster creation.
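If you keep the default naming convention, you can list the node resource groups in your subscription by filtering on the `MC_` prefix; a hedged sketch using a JMESPath query:

```azurecli
# Lists resource groups that follow the default MC_* node resource group naming.
az group list --query "[?starts_with(name, 'MC_')].name" --output tsv
```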
As you work with the node resource group, keep in mind that you can't:

- Specify an existing resource group for the node resource group.
- Specify a different subscription for the node resource group.
-- Change the node resource group name after the cluster has been created.
+- Change the node resource group name after creating the cluster.
- Specify names for the managed resources within the node resource group.
- Modify or delete Azure-created tags of managed resources within the node resource group.

## Node Restriction (Preview)
-The [Node Restriction](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#noderestriction) admission controller limits the Node and Pod objects a kubelet can modify. Node Restriction is on by default in AKS 1.24+ clusters. If you're using an older version, use the below commands to create a cluster with Node Restriction, or update an existing cluster to add Node Restriction.
+The [Node Restriction](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#noderestriction) admission controller limits the Node and Pod objects a kubelet can modify. Node Restriction is on by default in AKS 1.24+ clusters. If you're using an older version, use the following commands to create a cluster with Node Restriction, or update an existing cluster to add Node Restriction.
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
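For example, the following sketch creates a cluster with Node Restriction enabled, or enables it on an existing cluster. It assumes the `aks-preview` Azure CLI extension is installed, and reuses the cluster and resource group names from the removal example below.

```azurecli-interactive
# Create a new cluster with Node Restriction enabled.
az aks create -n aks -g myResourceGroup --enable-node-restriction

# Or enable Node Restriction on an existing cluster.
az aks update -n aks -g myResourceGroup --enable-node-restriction
```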
To remove Node Restriction from a cluster:

```azurecli-interactive
az aks update -n aks -g myResourceGroup --disable-node-restriction
```
-## OIDC Issuer
-
-You can enable an OIDC Issuer URL of the provider, which allows the API server to discover public signing keys. The maximum lifetime of the token issued by the OIDC provider is 1 day.
-
-> [!WARNING]
-> Enable or disable OIDC Issuer changes the current service account token issuer to a new value, which can cause down time and restarts the API server. If the application pods using a service token remain in a failed state after you enable or disable the OIDC Issuer, we recommend you manually restart the pods.
-
-### Prerequisites
-
-* The Azure CLI version 2.42.0 or higher. Run `az --version` to find your version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
-* AKS version 1.22 and higher. If your cluster is running version 1.21 and the OIDC Issuer preview is enabled, we recommend you upgrade the cluster to the minimum required version supported.
-
-### Create an AKS cluster with OIDC Issuer
-
-Create an AKS cluster using the [az aks create][az-aks-create] command with the `--enable-oidc-issuer` parameter to use the OIDC Issuer (preview). The following example creates a cluster named *myAKSCluster* with one node in the *myResourceGroup*:
-
-```azurecli-interactive
-az aks create -g myResourceGroup -n myAKSCluster --node-count 1 --enable-oidc-issuer
-```
-
-### Update an AKS cluster with OIDC Issuer
-
-Update an AKS cluster using the [az aks update][az-aks-update] command with the `--enable-oidc-issuer` parameter to use the OIDC Issuer (preview). The following example updates a cluster named *myAKSCluster*:
-
-```azurecli-interactive
-az aks update -g myResourceGroup -n myAKSCluster --enable-oidc-issuer
-```
-
-### Show the OIDC Issuer URL
-
-To get the OIDC Issuer URL, run the following command. Replace the default values for the cluster name and the resource group name.
-
-```azurecli-interactive
-az aks show -n myAKScluster -g myResourceGroup --query "oidcIssuerProfile.issuerUrl" -otsv
-```
-
-### Rotate the OIDC key
-
-To rotate the OIDC key, perform the following command. Replace the default values for the cluster name and the resource group name.
-
-```azurecli-interactive
-az aks oidc-issuer rotate-signing-keys -n myAKSCluster -g myResourceGroup
-```
-
-> [!IMPORTANT]
-> Once you rotate the key, the old key (key1) expires after 24 hours. This means that both the old key (key1) and the new key (key2) are valid within the 24-hour period. If you want to invalidate the old key (key1) immediately, you need to rotate the OIDC key twice. Then key2 and key3 are valid, and key1 is invalid.
## Next steps
aks Csi Secrets Store Identity Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-identity-access.md
To validate that the secrets are mounted at the volume path that's specified in
[az-aks-show]: /cli/azure/aks#az-aks-show [az-rest]: /cli/azure/reference-index#az-rest [az-identity-federated-credential-create]: /cli/azure/identity/federated-credential#az-identity-federated-credential-create
-[enable-oidc-issuer]: cluster-configuration.md#oidc-issuer
+[enable-oidc-issuer]: use-oidc-issuer.md
[workload-identity]: ./workload-identity-overview.md <!-- LINKS EXTERNAL -->
aks Http Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/http-proxy.md
The HTTP proxy with the Monitoring add-on supports the following configurations:
The following configurations aren't supported:

- The Custom Metrics and Recommended Alerts features aren't supported when you use a proxy with trusted certificates
- - Outbound proxy isn't supported with Azure Monitor Private Link Scope (AMPLS)
## Next steps
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
aks Use Oidc Issuer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-oidc-issuer.md
+
+ Title: Create an OpenID Connect provider for your Azure Kubernetes Service (AKS) cluster
+description: Learn how to configure the OpenID Connect (OIDC) provider for a cluster in Azure Kubernetes Service (AKS)
+ Last updated : 02/21/2023++
+# Create an OpenID Connect provider on Azure Kubernetes Service (AKS)
+
+[OpenID Connect][open-id-connect-overview] (OIDC) extends the OAuth 2.0 authorization protocol for use as an additional authentication protocol issued by Azure Active Directory (Azure AD). You can use OIDC to enable single sign-on (SSO) between your OAuth-enabled applications on your Azure Kubernetes Service (AKS) cluster by using a security token called an ID token. With your AKS cluster, you can enable the OpenID Connect (OIDC) Issuer, which allows Azure Active Directory (Azure AD) or another cloud provider's identity and access management platform to discover the API server's public signing keys.
+
+AKS rotates the key automatically and periodically. If you don't want to wait, you can rotate the key manually and immediately. The maximum lifetime of the token issued by the OIDC provider is one day.
+
+> [!WARNING]
+> Enabling or disabling the OIDC Issuer changes the current service account token issuer to a new value, which can cause downtime and restarts the API server. If your application pods that use a service token remain in a failed state after you enable or disable the OIDC Issuer, we recommend you manually restart the pods.
+
+In this article, you learn how to create, update, and manage the OIDC Issuer for your cluster.
+
+## Prerequisites
+
+* The Azure CLI version 2.42.0 or higher. Run `az --version` to find your version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+* AKS supports OIDC Issuer on version 1.22 and higher.
+
+## Create an AKS cluster with OIDC Issuer
+
+You can create an AKS cluster using the [az aks create][az-aks-create] command with the `--enable-oidc-issuer` parameter to use the OIDC Issuer. The following example creates a cluster named *myAKSCluster* with one node in the *myResourceGroup*:
+
+```azurecli-interactive
+az aks create -g myResourceGroup -n myAKSCluster --node-count 1 --enable-oidc-issuer
+```
+
+## Update an AKS cluster with OIDC Issuer
+
+You can update an AKS cluster using the [az aks update][az-aks-update] command with the `--enable-oidc-issuer` parameter to use the OIDC Issuer. The following example updates a cluster named *myAKSCluster*:
+
+```azurecli-interactive
+az aks update -g myResourceGroup -n myAKSCluster --enable-oidc-issuer
+```
+
+## Show the OIDC Issuer URL
+
+To get the OIDC Issuer URL, run the [az aks show][az-aks-show] command. Replace the default values for the cluster name and the resource group name.
+
+```azurecli-interactive
+az aks show -n myAKScluster -g myResourceGroup --query "oidcIssuerProfile.issuerUrl" -otsv
+```
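The command returns a single URL that is unique to your cluster. A hypothetical example of the shape of the output (the region and GUIDs vary):

```output
https://eastus.oic.prod-aks.azure.com/00000000-0000-0000-0000-000000000000/11111111-1111-1111-1111-111111111111/
```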
+
+## Rotate the OIDC key
+
+To rotate the OIDC key, run the [az aks oidc-issuer][az-aks-oidc-issuer] command. Replace the default values for the cluster name and the resource group name.
+
+```azurecli-interactive
+az aks oidc-issuer rotate-signing-keys -n myAKSCluster -g myResourceGroup
+```
+
+> [!IMPORTANT]
+> Once you rotate the key, the old key (key1) expires after 24 hours. This means that both the old key (key1) and the new key (key2) are valid within the 24-hour period. If you want to invalidate the old key (key1) immediately, you need to rotate the OIDC key twice. Then key2 and key3 are valid, and key1 is invalid.
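To verify which signing keys are being served after a rotation, you can query the standard OIDC discovery document published at the issuer URL; a minimal sketch (assumes `curl` is available and that the issuer URL returned by `az aks show` ends with a trailing slash):

```azurecli-interactive
# Fetch the issuer URL, then read the discovery document it publishes.
ISSUER_URL=$(az aks show -n myAKSCluster -g myResourceGroup --query "oidcIssuerProfile.issuerUrl" -otsv)
curl -s "${ISSUER_URL}.well-known/openid-configuration"
```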
+
+## Next steps
+
+* See [how to create a trust relationship between an app and an external identity provider](../active-directory/develop/workload-identity-federation-create-trust.md) to understand how a federated identity credential creates a trust relationship between an application on your cluster and an external identity provider.
+* Review [Azure AD workload identity][azure-ad-workload-identity-overview] (preview). This authentication method integrates with the Kubernetes native capabilities to federate with any external identity providers on behalf of the application.
+* See [Secure pod network traffic][secure-pod-network-traffic] to understand how to use the Network Policy engine and create Kubernetes network policies to control the flow of traffic between pods in AKS.
+
+<!-- LINKS - external -->
+
+<!-- LINKS - internal -->
+[open-id-connect-overview]: ../active-directory/fundamentals/auth-oidc.md
+[azure-cli-install]: /cli/azure/install-azure-cli
+[az-aks-create]: /cli/azure/aks#az-aks-create
+[az-aks-update]: /cli/azure/aks#az-aks-update
+[az-aks-show]: /cli/azure/aks#az-aks-show
+[az-aks-oidc-issuer]: /cli/azure/aks/oidc-issuer
+[azure-ad-workload-identity-overview]: workload-identity-overview.md
+[secure-pod-network-traffic]: use-network-policies.md
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
app-service Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-github-actions.md
The environment variable `AZURE_WEBAPP_PACKAGE_PATH` sets the path to your web a
      run: |
        dotnet restore
        dotnet build --configuration Release
- dotnet publish -c Release -o '${{ env.AZURE_WEBAPP_PACKAGE_PATH }}/myapp'
+ dotnet publish -c Release --property:PublishDir='${{ env.AZURE_WEBAPP_PACKAGE_PATH }}/myapp'
```

**ASP.NET**
jobs:
      run: |
        dotnet restore
        dotnet build --configuration Release
- dotnet publish -c Release -o '${{ env.AZURE_WEBAPP_PACKAGE_PATH }}/myapp'
+ dotnet publish -c Release --property:PublishDir='${{ env.AZURE_WEBAPP_PACKAGE_PATH }}/myapp'
      # Deploy to Azure Web apps
      - name: 'Run Azure webapp deploy action using publish profile credentials'
jobs:
      run: |
        dotnet restore
        dotnet build --configuration Release
- dotnet publish -c Release -o '${{ env.AZURE_WEBAPP_PACKAGE_PATH }}/myapp'
+ dotnet publish -c Release --property:PublishDir='${{ env.AZURE_WEBAPP_PACKAGE_PATH }}/myapp'
      # Deploy to Azure Web apps
      - name: 'Run Azure webapp deploy action using publish profile credentials'
jobs:
      run: |
        dotnet restore
        dotnet build --configuration Release
- dotnet publish -c Release -o '${{ env.AZURE_WEBAPP_PACKAGE_PATH }}/myapp'
+ dotnet publish -c Release --property:PublishDir='${{ env.AZURE_WEBAPP_PACKAGE_PATH }}/myapp'
      # Deploy to Azure Web apps
      - name: 'Run Azure webapp deploy action using publish profile credentials'
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
app-service Quickstart Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-php.md
You can create the web app using the [Azure CLI](/cli/azure/get-started-with-azu
### [Azure CLI](#tab/cli)
-Azure CLI has a command [`az webapp up`](/cli/azure/webapp#az_webapp_up) that will create the necessary resources and deploy your application in a single step.
+Azure CLI has a command [`az webapp up`](/cli/azure/webapp#az-webapp-up) that will create the necessary resources and deploy your application in a single step.
-In the terminal, deploy the code in your local folder using the [`az webapp up`](/cli/azure/webapp#az_webapp_up) command:
+In the terminal, deploy the code in your local folder using the [`az webapp up`](/cli/azure/webapp#az-webapp-up) command:
```azurecli
az webapp up --runtime "PHP:8.0" --os-type=linux
```
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
The [Django sample application](https://github.com/Azure-Samples/msdocs-django-p
- Django validates the HTTP_HOST header in incoming requests. The sample code uses the [`WEBSITE_HOSTNAME` environment variable in App Service](reference-app-settings.md#app-environment) to add the app's domain name to Django's [ALLOWED_HOSTS](https://docs.djangoproject.com/en/4.1/ref/settings/#allowed-hosts) setting.
- :::code language="python" source="~/msdocs-django-postgresql-sample-app/azureproject/production.py" range="6" highlight="3":::
+ :::code language="python" source="~/msdocs-django-postgresql-sample-app/azureproject/production.py" range="6-8" highlight="3":::
- Django doesn't support [serving static files in production](https://docs.djangoproject.com/en/4.1/howto/static-files/deployment/). For this tutorial, you use [WhiteNoise](https://whitenoise.evans.io/) to enable serving the files. The WhiteNoise package was already installed with requirements.txt, and its middleware is added to the list.
- :::code language="python" source="~/msdocs-django-postgresql-sample-app/azureproject/production.py" range="11-14" highlight="14":::
+ :::code language="python" source="~/msdocs-django-postgresql-sample-app/azureproject/production.py" range="11-16" highlight="14":::
Then the static file settings are configured according to the Django documentation.
- :::code language="python" source="~/msdocs-django-postgresql-sample-app/azureproject/production.py" range="23-24":::
+ :::code language="python" source="~/msdocs-django-postgresql-sample-app/azureproject/production.py" range="25-26":::
For more information, see [Production settings for Django apps](configure-language-python.md#production-settings-for-django-apps).
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
automation Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/runbooks.md
Title: Troubleshoot Azure Automation runbook issues description: This article tells how to troubleshoot and resolve issues with Azure Automation runbooks. Previously updated : 02/06/2022 Last updated : 02/21/2023
Follow [Step 5 - Add authentication to manage Azure resources](../learn/powershe
[Add permissions to Key Vault](../manage-runas-account.md#add-permissions-to-key-vault) to ensure that your Run As account has sufficient permissions to access Key Vault.
+## Scenario: Runbook fails with "Parameter length exceeded" error
+
+### Issue
+Your runbook uses parameters and fails with the following error:
+
+```error
+Total Length of Runbook Parameter names and values exceeds the limit of 30,000 characters. To avoid this issue, use Automation Variables to pass values to runbook.
+```
+
+### Cause
+There is a limit to the total length of characters for all parameters that can be provided in Python 2.7, Python 3.8, and PowerShell 7.1 runbooks. The total length of all parameter names and parameter values must not exceed 30,000 characters.
+
+### Resolution
+To overcome this issue, you can use Azure Automation [variables](../shared-resources/variables.md) to pass values to the runbook. Alternatively, you can reduce the number of characters in parameter names and parameter values to ensure that the total length doesn't exceed 30,000 characters.
## Recommended documents

* [Runbook execution in Azure Automation](../automation-runbook-execution.md)
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
azure-arc Extensions Release https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions-release.md
Title: "Available extensions for Azure Arc-enabled Kubernetes clusters" Previously updated : 02/03/2023 Last updated : 02/21/2023 description: "See which extensions are currently available for Azure Arc-enabled Kubernetes clusters and view release notes."
The following extensions are currently available for use with Arc-enabled Kubern
## Azure Monitor Container Insights
+- **Supported distributions**: All Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters
+ Azure Monitor Container Insights provides visibility into the performance of workloads deployed on the Kubernetes cluster. Use this extension to collect memory and CPU utilization metrics from controllers, nodes, and containers. For more information, see [Azure Monitor Container Insights for Azure Arc-enabled Kubernetes clusters](../../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md?toc=/azure/azure-arc/kubernetes/toc.json&bc=/azure/azure-arc/kubernetes/breadcrumb/toc.json).
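You can typically install this extension with the `az k8s-extension create` command; a hedged sketch (cluster and resource group names are illustrative, and the `k8s-extension` CLI extension is assumed to be installed):

```azurecli
az k8s-extension create --name azuremonitor-containers --cluster-name myArcCluster --resource-group myResourceGroup --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers
```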
For more information, see [Understand Azure Policy for Kubernetes clusters](../.
## Azure Key Vault Secrets Provider
+- **Supported distributions**: AKS on Azure Stack HCI, AKS hybrid clusters provisioned from Azure, Cluster API Azure, Google Kubernetes Engine, Canonical Kubernetes Distribution, OpenShift Kubernetes Distribution, Amazon Elastic Kubernetes Service, VMWare Tanzu Kubernetes Grid
+ The Azure Key Vault Provider for Secrets Store CSI Driver allows for the integration of Azure Key Vault as a secrets store with a Kubernetes cluster via a CSI volume. For Azure Arc-enabled Kubernetes clusters, you can install the Azure Key Vault Secrets Provider extension to fetch secrets. For more information, see [Use the Azure Key Vault Secrets Provider extension to fetch secrets into Azure Arc-enabled Kubernetes clusters](tutorial-akv-secrets-provider.md).

## Microsoft Defender for Containers
+- **Supported distributions**: AKS hybrid clusters provisioned from Azure, Cluster API Azure, Azure Red Hat OpenShift, Red Hat OpenShift (version 4.6 or newer), Google Kubernetes Engine Standard, Amazon Elastic Kubernetes Service, VMWare Tanzu Kubernetes Grid, Rancher Kubernetes Engine, Canonical Kubernetes Distribution
+ Microsoft Defender for Containers is the cloud-native solution that is used to secure your containers so you can improve, monitor, and maintain the security of your clusters, containers, and their applications. It gathers information related to security like audit log data from the Kubernetes cluster, and provides recommendations and threat alerts based on gathered data. For more information, see [Enable Microsoft Defender for Containers](../../defender-for-cloud/defender-for-kubernetes-azure-arc.md?toc=/azure/azure-arc/kubernetes/toc.json&bc=/azure/azure-arc/kubernetes/breadcrumb/toc.json).
For more information, see [Enable Microsoft Defender for Containers](../../defen
## Azure Arc-enabled Open Service Mesh
+- **Supported distributions**: AKS, AKS on Azure Stack HCI, AKS hybrid clusters provisioned from Azure, Cluster API Azure, Google Kubernetes Engine, Canonical Kubernetes Distribution, Rancher Kubernetes Engine, OpenShift Kubernetes Distribution, Amazon Elastic Kubernetes Service, VMWare Tanzu Kubernetes Grid
+ [Open Service Mesh (OSM)](https://docs.openservicemesh.io/) is a lightweight, extensible, Cloud Native service mesh that allows users to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments. For more information, see [Azure Arc-enabled Open Service Mesh](tutorial-arc-enabled-open-service-mesh.md).

## Azure Arc-enabled Data Services
+- **Supported distributions**: AKS, AKS on Azure Stack HCI, Azure Red Hat OpenShift, Google Kubernetes Engine, Canonical Kubernetes Distribution, OpenShift Container Platform, Amazon Elastic Kubernetes Service
+ Makes it possible for you to run Azure data services on-premises, at the edge, and in public clouds using Kubernetes and the infrastructure of your choice. This extension enables the *custom locations* feature, providing a way to configure Azure Arc-enabled Kubernetes clusters as target locations for deploying instances of Azure offerings. For more information, see [Azure Arc-enabled Data Services](../dat#create-custom-location).

## Azure App Service on Azure Arc
+- **Supported distributions**: AKS, AKS on Azure Stack HCI, Azure Red Hat OpenShift, Google Kubernetes Engine, OpenShift Container Platform
+ Allows you to provision an App Service Kubernetes environment on top of Azure Arc-enabled Kubernetes clusters. For more information, see [App Service, Functions, and Logic Apps on Azure Arc (Preview)](../../app-service/overview-arc-integration.md).
For more information, see [App Service, Functions, and Logic Apps on Azure Arc (
## Azure Event Grid on Kubernetes
+- **Supported distributions**: AKS, Red Hat OpenShift
+ Event Grid is an event broker used to integrate workloads that use event-driven architectures. This extension lets you create and manage Event Grid resources such as topics and event subscriptions on top of Azure Arc-enabled Kubernetes clusters. For more information, see [Event Grid on Kubernetes with Azure Arc (Preview)](../../event-grid/kubernetes/overview.md).
For more information, see [Event Grid on Kubernetes with Azure Arc (Preview)](..
## Azure API Management on Azure Arc
+- **Supported distributions**: All Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters.
+ With the integration between Azure API Management and Azure Arc on Kubernetes, you can deploy the API Management gateway component as an extension in an Azure Arc-enabled Kubernetes cluster. This extension is [namespace-scoped](conceptual-extensions.md#extension-scope), not cluster-scoped. For more information, see [Deploy an Azure API Management gateway on Azure Arc (preview)](../../api-management/how-to-deploy-self-hosted-gateway-azure-arc.md).
For more information, see [Deploy an Azure API Management gateway on Azure Arc (
## Azure Arc-enabled Machine Learning
+- **Supported distributions**: All Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters. Not currently supported for ARM 64.
+ The AzureML extension lets you deploy and run Azure Machine Learning on Azure Arc-enabled Kubernetes clusters. For more information, see [Introduction to Kubernetes compute target in AzureML](../../machine-learning/how-to-attach-kubernetes-anywhere.md) and [Deploy AzureML extension on AKS or Arc Kubernetes cluster](../../machine-learning/how-to-deploy-kubernetes-extension.md).

## Flux (GitOps)
+- **Supported distributions**: All Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters. Not currently supported for ARM 64.
+ [GitOps on AKS and Azure Arc-enabled Kubernetes](conceptual-gitops-flux2.md) uses [Flux v2](https://fluxcd.io/docs/), a popular open-source tool set, to help manage cluster configuration and application deployment. GitOps is enabled in the cluster as a `Microsoft.KubernetesConfiguration/extensions/microsoft.flux` cluster extension resource. For more information, see [Tutorial: Deploy applications using GitOps with Flux v2](tutorial-use-gitops-flux2.md).
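For instance, a minimal sketch that creates the Flux extension on a connected cluster (names are illustrative):

```azurecli
az k8s-extension create --name flux --cluster-name myArcCluster --resource-group myResourceGroup --cluster-type connectedClusters --extension-type microsoft.flux
```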
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023 #
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
azure-arc Quickstart Connect System Center Virtual Machine Manager To Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc.md
ms. Previously updated : 02/01/2023 Last updated : 02/17/2023
This QuickStart shows you how to connect your SCVMM management server to Azure A
## Prerequisites

>[!Note]
->If VMM server is running on Windows Server 2016 machine, ensure that [Open SSH package](https://github.com/PowerShell/Win32-OpenSSH/releases) is installed.
+>- If VMM server is running on Windows Server 2016 machine, ensure that [Open SSH package](https://github.com/PowerShell/Win32-OpenSSH/releases) is installed.
+>- If you deploy an older version of the appliance (version earlier than 0.2.25), the Arc operation fails with the error *Appliance cluster is not deployed with AAD authentication*. To fix this issue, download the latest version of the onboarding script and deploy the resource bridge again.
| **Requirement** | **Details** |
| --- | --- |
| **Azure** | An Azure subscription <br/><br/> A resource group in the above subscription where you have the *Owner/Contributor* role. |
-| **SCVMM** | You need an SCVMM management server running version 2016 or later.<br/><br/> A private cloud that has at least one cluster with minimum free capacity of 16 GB of RAM, 4 vCPUs with 100 GB of free disk space. <br/><br/> A VM network with internet access, directly or through proxy. Appliance VM will be deployed using this VM network.<br/><br/> For dynamic IP allocation to appliance VM, DHCP server is required. For static IP allocation, VMM static IP pool is required. |
+| **SCVMM** | You need an SCVMM management server running version 2016 or later.<br/><br/> A private cloud with minimum free capacity of 16 GB of RAM, 4 vCPUs with 100 GB of free disk space. <br/><br/> A VM network with internet access, directly or through proxy. Appliance VM will be deployed using this VM network.<br/><br/> For dynamic IP allocation to appliance VM, DHCP server is required. For static IP allocation, VMM static IP pool is required. |
| **SCVMM accounts** | An SCVMM admin account that can perform all administrative actions on all objects that VMM manages. <br/><br/> The user should be part of the local administrator account on the SCVMM server. <br/><br/> This account will be used for the ongoing operation of Azure Arc-enabled SCVMM as well as the deployment of the Arc Resource bridge VM. |
| **Workstation** | The workstation will be used to run the helper script.<br/><br/> A Windows/Linux machine that can access both your SCVMM management server and the internet, directly or through a proxy.<br/><br/> The helper script can be run directly from the VMM server machine as well.<br/><br/> To avoid network latency issues, we recommend running the helper script directly on the VMM server machine.<br/><br/> Note that when you execute the script from a Linux machine, the deployment takes a bit longer and you may experience performance issues. |
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
Each action is made up of the following properties:
For information about how to use Azure Resource Manager templates to configure action groups, see [Action group Resource Manager templates](./action-groups-create-resource-manager-template.md).
-An action group is a **global** service, so there's no dependency on a specific Azure region. Requests from clients can be processed by action group services in any region. For instance, if one region of the action group service is down, the traffic is automatically routed and processed by other regions. As a global service, an action group helps provide a **disaster recovery** solution.
+An action group is a *global* service, so there's no dependency on a specific Azure region. Requests from clients can be processed by action group services in any region. For instance, if one region of the action group service is down, the traffic is automatically routed and processed by other regions. As a global service, an action group helps provide a disaster recovery solution.
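Besides the portal and Resource Manager templates, you can also script action group creation with the Azure CLI; a minimal sketch (the group name, short name, and email receiver are illustrative):

```azurecli
az monitor action-group create --name MyActionGroup --resource-group myResourceGroup --short-name myag --action email admin alice@contoso.com
```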
## Create an action group by using the Azure portal
An action group is a **global** service, so there's no dependency on a specific
1. Select **Alerts**, and then select **Action groups**.
- :::image type="content" source="./media/action-groups/manage-action-groups.png" alt-text="Screenshot of the Alerts page in the Azure portal. The Action groups button is called out.":::
+ :::image type="content" source="./media/action-groups/manage-action-groups.png" alt-text="Screenshot that shows the Alerts page in the Azure portal. The Action groups button is called out.":::
1. Select **Create**.
- :::image type="content" source="./media/action-groups/create-action-group.png" alt-text="Screenshot of the Action groups page in the Azure portal. The Create button is called out.":::
+ :::image type="content" source="./media/action-groups/create-action-group.png" alt-text="Screenshot that shows the Action groups page in the Azure portal. The Create button is called out.":::
1. Enter information as explained in the following sections.

### Configure basic action group settings
-1. Under **Project details**
- - Select values for **Subscription** and **Resource group**.
- - Select the region
+1. Under **Project details**, select:
+ - Values for **Subscription** and **Resource group**.
+ - The region.
| Option | Behavior |
| --- | --- |
- | Global | The action groups service decides where to store the action group. The action group is persisted in at least two regions to ensure regional resiliency. Processing of actions may be done in any [geographic region](https://azure.microsoft.com/explore/global-infrastructure/geographies/#overview).<br></br>Voice, SMS and email actions performed as the result of [service health alerts](../../service-health/alerts-activity-log-service-notifications-portal.md) are resilient to Azure live-site-incidents. |
+ | Global | The action groups service decides where to store the action group. The action group is persisted in at least two regions to ensure regional resiliency. Processing of actions may be done in any [geographic region](https://azure.microsoft.com/explore/global-infrastructure/geographies/#overview).<br></br>Voice, SMS, and email actions performed as the result of [service health alerts](../../service-health/alerts-activity-log-service-notifications-portal.md) are resilient to Azure live-site incidents. |
| Regional | The action group is stored within the selected region. The action group is [zone-redundant](../../availability-zones/az-region.md#highly-available-services). Processing of actions is performed within the region.</br></br>Use this option if you want to ensure that the processing of your action group is performed within a specific [geographic boundary](https://azure.microsoft.com/explore/global-infrastructure/geographies/#overview). |
- The action group is saved in the subscription, region and resource group that you select.
+ The action group is saved in the subscription, region, and resource group that you select.
1. Under **Instance details**, enter values for **Action group name** and **Display name**. The display name is used in place of a full action group name when the group is used to send notifications.
- :::image type="content" source="./media/action-groups/action-group-1-basics.png" alt-text="Screenshot of the Create action group dialog box. Values are visible in the Subscription, Resource group, Action group name, and Display name boxes.":::
+ :::image type="content" source="./media/action-groups/action-group-1-basics.png" alt-text="Screenshot that shows the Create action group dialog. Values are visible in the Subscription, Resource group, Action group name, and Display name boxes.":::
### Configure notifications
An action group is a **global** service, so there's no dependency on a specific
- **Email/SMS message/Push/Voice**: Send various notification types to specific recipients.
  - **Name**: Enter a unique name for the notification.
  - **Details**: Based on the selected notification type, enter an email address, phone number, or other information.
+ - **Common alert schema**: You can choose to turn on the common alert schema, which provides the advantage of having a single extensible and unified alert payload across all the alert services in Azure Monitor. For more information about this schema, see [Common alert schema](./alerts-common-schema.md).
- - **Common alert schema**: You can choose to turn on the common alert schema, which provides the advantage of having a single extensible and unified alert payload across all the alert services in Monitor. For more information about this schema, see [Common alert schema](./alerts-common-schema.md).
-
- :::image type="content" source="./media/action-groups/action-group-2-notifications.png" alt-text="Screenshot of the Notifications tab of the Create action group dialog box. Configuration information for an email notification is visible.":::
+ :::image type="content" source="./media/action-groups/action-group-2-notifications.png" alt-text="Screenshot that shows the Notifications tab of the Create action group dialog. Configuration information for an email notification is visible.":::
-1. Select OK.
+1. Select **OK**.
### Configure actions
An action group is a **global** service, so there's no dependency on a specific
- A webhook

- **Name**: Enter a unique name for the action.
- **Details**: Enter appropriate information for your selected action type. For instance, you might enter a webhook URI, the name of an Azure app, an ITSM connection, or an Automation runbook. For an ITSM action, also enter values for **Work item** and other fields that your ITSM tool requires.
+ - **Common alert schema**: You can choose to turn on the common alert schema, which provides the advantage of having a single extensible and unified alert payload across all the alert services in Azure Monitor. For more information about this schema, see [Common alert schema](./alerts-common-schema.md).
- - **Common alert schema**: You can choose to turn on the common alert schema, which provides the advantage of having a single extensible and unified alert payload across all the alert services in Monitor. For more information about this schema, see [Common alert schema](./alerts-common-schema.md).
-
- :::image type="content" source="./media/action-groups/action-group-3-actions.png" alt-text="Screenshot of the Actions tab of the Create action group dialog box. Several options are visible in the Action type list.":::
+ :::image type="content" source="./media/action-groups/action-group-3-actions.png" alt-text="Screenshot that shows the Actions tab of the Create action group dialog. Several options are visible in the Action type list.":::
### Create the action group
-1. If you'd like to assign a key-value pair to the action group, select **Next: Tags** or the **Tags** tab. Otherwise, skip this step. By using tags, you can categorize your Azure resources. Tags are available for all Azure resources, resource groups, and subscriptions.
+1. To assign a key-value pair to the action group, select **Next: Tags**. Alternately, at the top of the page, select the **Tags** tab. Otherwise, skip this step. By using tags, you can categorize your Azure resources. Tags are available for all Azure resources, resource groups, and subscriptions.
- :::image type="content" source="./media/action-groups/action-group-4-tags.png" alt-text="Screenshot of the Tags tab of the Create action group dialog box. Values are visible in the Name and Value boxes.":::
+ :::image type="content" source="./media/action-groups/action-group-4-tags.png" alt-text="Screenshot that shows the Tags tab of the Create action group dialog. Values are visible in the Name and Value boxes.":::
1. To review your settings, select **Review + create**. This step quickly checks your inputs to make sure you've entered all required information. If there are issues, they're reported here. After you've reviewed the settings, select **Create** to create the action group.
- :::image type="content" source="./media/action-groups/action-group-5-review.png" alt-text="Screenshot of the Review + create tab of the Create action group dialog box. All configured values are visible.":::
+ :::image type="content" source="./media/action-groups/action-group-5-review.png" alt-text="Screenshot that shows the Review + create tab of the Create action group dialog. All configured values are visible.":::
> [!NOTE] >
-> When you configure an action to notify a person by email or SMS, they receive a confirmation indicating that they have been added to the action group.
+> When you configure an action to notify a person by email or SMS, they receive a confirmation that indicates they were added to the action group.
### Test an action group in the Azure portal (preview)
-When you create or update an action group in the Azure portal, you can **test** the action group.
+When you create or update an action group in the Azure portal, you can test the action group.
-1. Define an action, as described in the previous few sections. Then select **Review + create**.
+1. Define an action, as described in the previous few sections. Then select **Review + create**.
-> [!NOTE]
->
-> If you are editing an already exisitng action group, you must save changes to the action group before testing.
+ > [!NOTE]
+ >
+ > If you're editing an already existing action group, you must save changes to the action group before you begin testing.
-2. On the page that lists the information that you entered, select **Test action group**.
+1. On the page that lists the information you entered, select **Test action group**.
- :::image type="content" source="./media/action-groups/test-action-group.png" alt-text="Screenshot of test action group start page. A Test action group button is visible.":::
+ :::image type="content" source="./media/action-groups/test-action-group.png" alt-text="Screenshot that shows the test action group start page with the Test option.":::
-3. Select a sample type and the notification and action types that you want to test. Then select **Test**.
+1. Select a sample type and the notification and action types that you want to test. Then select **Test**.
- :::image type="content" source="./media/action-groups/test-sample-action-group.png" alt-text="Screenshot of the Test sample action group page. An email notification type and a webhook action type are visible.":::
+ :::image type="content" source="./media/action-groups/test-sample-action-group.png" alt-text="Screenshot that shows the Test sample action group page with an email notification type and a webhook action type.":::
-4. If you close the window or select **Back to test setup** while the test is running, the test is stopped, and you don't get test results.
+1. If you close the window or select **Back to test setup** while the test is running, the test is stopped, and you don't get test results.
- :::image type="content" source="./media/action-groups/stop-running-test.png" alt-text="Screenshot of the Test sample action group page. A dialog box contains a Stop button and asks the user about stopping the test.":::
+ :::image type="content" source="./media/action-groups/stop-running-test.png" alt-text="Screenshot that shows the Test Sample action group page. A dialog contains a Stop button and asks the user about stopping the test.":::
-5. When the test is complete, a test status of either **Success** or **Failed** appears. If the test failed and you'd like to get more information, select **View details**.
+1. When the test is finished, a test status of either **Success** or **Failed** appears. If the test failed and you want to get more information, select **View details**.
- :::image type="content" source="./media/action-groups/test-sample-failed.png" alt-text="Screenshot of the Test sample action group page. Error details are visible, and a white X on a red background indicates that a test failed.":::
+ :::image type="content" source="./media/action-groups/test-sample-failed.png" alt-text="Screenshot that shows the Test sample action group page showing a test that failed.":::
You can use the information in the **Error details** section to understand the issue. Then you can edit, save changes, and test the action group again.
The following table describes the role membership requirements that are needed f
> [!NOTE]
>
-> You can run a limited number of tests per time period. To check which limits apply to your situation, see [Rate limiting for voice, SMS, emails, Azure App push notifications, and webhook posts](./alerts-rate-limiting.md).
+> You can run a limited number of tests per time period. To check which limits apply to your situation, see [Rate limiting for voice, SMS, emails, Azure App Service push notifications, and webhook posts](./alerts-rate-limiting.md).
>
-> When you configure an action group in the portal, you can opt in or out of the common alert schema.
+> When you configure an action group in the portal, you can opt in or out of the common alert schema:
> - To find common schema samples for all sample types, see [Common alert schema definitions for Test Action Group](./alerts-common-schema-test-action-definitions.md).
> - To find non-common schema alert definitions, see [Non-common alert schema definitions for Test Action Group](./alerts-non-common-schema-definitions.md).

## Create an action group with a Resource Manager template

You can use an [Azure Resource Manager template](../../azure-resource-manager/templates/syntax.md) to configure action groups. Using templates, you can automatically set up action groups that can be reused in certain types of alerts. These action groups ensure that all the correct parties are notified when an alert is triggered.
The basic steps are:
### Resource Manager templates for an action group
-To create an action group using a Resource Manager template, you create a resource of the type `Microsoft.Insights/actionGroups`. Then you fill in all related properties. Here are two sample templates that create an action group.
+To create an action group by using a Resource Manager template, you create a resource of the type `Microsoft.Insights/actionGroups`. Then you fill in all related properties. Here are two sample templates that create an action group.
-First template, describes how to create a Resource Manager template for an action group where the action definitions are hard-coded in the template. Second template, describes how to create a template that takes the webhook configuration information as input parameters when the template is deployed.
+The first template describes how to create a Resource Manager template for an action group where the action definitions are hard-coded in the template. The second template describes how to create a template that takes the webhook configuration information as input parameters when the template is deployed.
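As orientation before the full samples, here's a minimal sketch of the first approach, with a single hard-coded email receiver. The resource name, short name, email address, and API version are illustrative assumptions; check the `Microsoft.Insights/actionGroups` template reference for the current schema.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      // Action groups are a global resource.
      "type": "Microsoft.Insights/actionGroups",
      "apiVersion": "2021-09-01",
      "name": "SampleActionGroup",
      "location": "Global",
      "properties": {
        "groupShortName": "sampleAG",
        "enabled": true,
        // One hard-coded email receiver; other receiver types go in sibling arrays.
        "emailReceivers": [
          {
            "name": "NotifyOps",
            "emailAddress": "ops@contoso.com",
            "useCommonAlertSchema": true
          }
        ]
      }
    }
  ]
}
```

A template like this can be deployed with `New-AzResourceGroupDeployment` or `az deployment group create`.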
```json
{
First template, describes how to create a Resource Manager template for an actio
After you create an action group, you can view it in the portal:
-1. From the **Monitor** page, select **Alerts**.
-1. Select **Manage actions**.
+1. On the **Monitor** page, select **Alerts**.
+1. Select **Manage actions**.
1. Select the action group that you want to manage. You can:
    - Add, edit, or remove actions.
The following sections provide information about the various actions and notific
To check limits on Automation runbook payloads, see [Automation limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#automation-limits).
-You may have a limited number of runbook actions per action group.
+You might have a limited number of runbook actions per action group.
-### Azure app push notifications
+### Azure App Service push notifications
To enable push notifications to the Azure mobile app, provide the email address that you use as your account ID when you configure the Azure mobile app. For more information about the Azure mobile app, see [Get the Azure mobile app](https://azure.microsoft.com/features/azure-portal/mobile-app/).
You might have a limited number of Azure app actions per action group.
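If you define the action group in a Resource Manager template instead, this notification type maps to the `azureAppPushReceivers` property. A hedged fragment, where the receiver name and account email are placeholders:

```json
"azureAppPushReceivers": [
  {
    "name": "MobileAppPush",
    "emailAddress": "ops@contoso.com"
  }
]
```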
### Email

Ensure that your email filtering and any malware/spam prevention services are configured appropriately. Emails are sent from the following email addresses:
-
- azure-noreply@microsoft.com
- azureemail-noreply@microsoft.com
- alerts-noreply@mail.windowsazure.com
-You may have a limited number of email actions per action group. For information about rate limits, see [Rate limiting for voice, SMS, emails, Azure App push notifications, and webhook posts](./alerts-rate-limiting.md).
+You might have a limited number of email actions per action group. For information about rate limits, see [Rate limiting for voice, SMS, emails, Azure App Service push notifications, and webhook posts](./alerts-rate-limiting.md).
### Email Azure Resource Manager role
When you use this type of notification, you can send email to the members of a s
A notification email is sent only to the *primary email* address.
-If your *primary email* doesn't receive notifications, take the following steps:
+If your primary email doesn't receive notifications:
1. In the Azure portal, go to **Active Directory**.
1. On the left, select **All users**. On the right, a list of users appears.
-1. Select the user whose *primary email* you'd like to review.
+1. Select the user whose *primary email* you want to review.
- :::image type="content" source="media/action-groups/active-directory-user-profile.png" alt-text="Screenshot of the Azure portal All users page. Information about one user is visible but is indecipherable." border="true":::
+ :::image type="content" source="media/action-groups/active-directory-user-profile.png" alt-text="Screenshot that shows the Azure portal All users page. Information about one user is visible but is indecipherable." border="true":::
1. In the user profile, look under **Contact info** for an **Email** value. If it's blank:
If your *primary email* doesn't receive notifications, take the following steps:
    1. Enter an email address.
    1. At the top of the page, select **Save**.
- :::image type="content" source="media/action-groups/active-directory-add-primary-email.png" alt-text="Screenshot of a user profile page in the Azure portal. The Edit button and the Email box are called out." border="true":::
+ :::image type="content" source="media/action-groups/active-directory-add-primary-email.png" alt-text="Screenshot that shows a user profile page in the Azure portal. The Edit button and the Email box are called out." border="true":::
-You may have a limited number of email actions per action group. To check which limits apply to your situation, see [Rate limiting for voice, SMS, emails, Azure App push notifications, and webhook posts](./alerts-rate-limiting.md).
+You might have a limited number of email actions per action group. To check which limits apply to your situation, see [Rate limiting for voice, SMS, emails, Azure App Service push notifications, and webhook posts](./alerts-rate-limiting.md).
-When you set up the Azure Resource Manager role:
+When you set up the Resource Manager role:
-1. Assign an entity of type **"User"** to the role.
+1. Assign an entity of type **User** to the role.
1. Make the assignment at the **subscription** level.
1. Make sure an email address is configured for the user in their **Azure AD profile**.

> [!NOTE]
>
-> It can take up to **24 hours** for a customer to start receiving notifications after they add a new Azure Resource Manager role to their subscription.
+> It can take up to 24 hours for a customer to start receiving notifications after they add a new Azure Resource Manager role to their subscription.
### Event Hubs
An Event Hubs action publishes notifications to Event Hubs. For more information
An action that uses Functions calls an existing HTTP trigger endpoint in Functions. For more information about Functions, see [Azure Functions](../../azure-functions/functions-get-started.md). To handle a request, your endpoint must handle the HTTP POST verb.
-When you define the function action, the function's HTTP trigger endpoint and access key are saved in the action definition, for example, `https://azfunctionurl.azurewebsites.net/api/httptrigger?code=<access_key>`. If you change the access key for the function, you need to remove and recreate the function action in the action group.
+When you define the function action, the function's HTTP trigger endpoint and access key are saved in the action definition, for example, `https://azfunctionurl.azurewebsites.net/api/httptrigger?code=<access_key>`. If you change the access key for the function, you must remove and re-create the function action in the action group.
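In an action group's Resource Manager definition, a function action corresponds to the `azureFunctionReceivers` property. A hedged sketch that reuses the example endpoint above; the subscription and resource group segments are placeholders:

```json
"azureFunctionReceivers": [
  {
    "name": "FunctionAction",
    "functionAppResourceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Web/sites/azfunctionurl",
    "functionName": "httptrigger",
    "httpTriggerUrl": "https://azfunctionurl.azurewebsites.net/api/httptrigger?code=<access_key>",
    "useCommonAlertSchema": true
  }
]
```

Because the access key is embedded in `httpTriggerUrl`, rotating the key is exactly what forces the remove-and-re-create step described above.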
-You may have a limited number of function actions per action group.
+You might have a limited number of function actions per action group.
> [!NOTE]
>
- > The function must have access to the storage account. If not, no keys will be available and the function URI will not be accessible.
+ > The function must have access to the storage account. If not, no keys will be available and the function URI won't be accessible.
> [Learn about restoring access to the storage account](../../azure-functions/functions-recover-storage-account.md)

### ITSM
You might have a limited number of ITSM actions per action group.
### Logic Apps
-You may have a limited number of Logic Apps actions per action group.
+You might have a limited number of Logic Apps actions per action group.
### Secure webhook
-When you use a secure webhook action, you must use Azure AD to secure the connection between your action group and your protected web API, which is your webhook endpoint.
-
-The secure webhook Action authenticates to the protected API using a Service Principal instance in the AD tenant of the "AZNS AAD Webhook" Azure AD Application. To make the action group work, this Azure AD Webhook Service Principal needs to be added as member of a role on the target Azure AD application that grants access to the target endpoint.
-
+When you use a secure webhook action, you must use Azure AD to secure the connection between your action group and your protected web API, which is your webhook endpoint.
+
+The secure webhook action authenticates to the protected API by using a Service Principal instance in the Azure AD tenant of the "AZNS AAD Webhook" Azure AD application. To make the action group work, this Azure AD Webhook Service Principal must be added as a member of a role on the target Azure AD application that grants access to the target endpoint.
For an overview of Azure AD applications and service principals, see [Microsoft identity platform (v2.0) overview](../../active-directory/develop/v2-overview.md).

Follow these steps to take advantage of the secure webhook functionality.

> [!NOTE]
>
-> Basic authentication is not supported for SecureWebhook. To use basic authentication you must use Webhook.
+> Basic authentication isn't supported for `SecureWebhook`. To use basic authentication, you must use `Webhook`.
-> [!NOTE]
->
-> If you use the webhook action, your target webhook endpoint needs to be able to process the various JSON payloads that different alert sources emit. If the webhook endpoint expects a specific schema, for example, the Microsoft Teams schema, use the Logic Apps action to transform the alert schema to meet the target webhook's expectations.
+If you use the webhook action, your target webhook endpoint must be able to process the various JSON payloads that different alert sources emit. If the webhook endpoint expects a specific schema, for example, the Microsoft Teams schema, use the Logic Apps action to transform the alert schema to meet the target webhook's expectations.
-1. Create an Azure AD application for your protected web API. For detailed information, see [Protected web API: App registration](../../active-directory/develop/scenario-protected-web-api-app-registration.md). Configure your protected API to be called by a daemon app, and expose application permissions, not delegated permissions. For more information about these permissions, see [If your web API is called by a service or daemon app](../../active-directory/develop/scenario-protected-web-api-app-registration.md#if-your-web-api-is-called-by-a-service-or-daemon-app).
+1. Create an Azure AD application for your protected web API. For more information, see [Protected web API: App registration](../../active-directory/develop/scenario-protected-web-api-app-registration.md). Configure your protected API to be called by a daemon app and expose application permissions, not delegated permissions. For more information about these permissions, see [If your web API is called by a service or daemon app](../../active-directory/develop/scenario-protected-web-api-app-registration.md#if-your-web-api-is-called-by-a-service-or-daemon-app).
> [!NOTE]
>
- > Configure your protected web API to accept V2.0 access tokens. For detailed information about this setting, see [Azure Active Directory app manifest](../../active-directory/develop/reference-app-manifest.md#accesstokenacceptedversion-attribute).
+ > Configure your protected web API to accept V2.0 access tokens. For more information about this setting, see [Azure Active Directory app manifest](../../active-directory/develop/reference-app-manifest.md#accesstokenacceptedversion-attribute).
1. To enable the action group to use your Azure AD application, use the PowerShell script that follows this procedure.
For an overview of Azure AD applications and service principals, see [Microsoft
> You must be assigned the [Azure AD Application Administrator role](../../active-directory/roles/permissions-reference.md#all-roles) to run this script.

    1. Modify the PowerShell script's `Connect-AzureAD` call to use your Azure AD tenant ID.
- 1. Modify the PowerShell script's `$myAzureADApplicationObjectId` variable to use the Object ID of your Azure AD application.
+ 1. Modify the PowerShell script's `$myAzureADApplicationObjectId` variable to use the object ID of your Azure AD application.
    1. Run the modified script.

> [!NOTE]
>
- > The service principle needs to be assigned an **owner role** of the Azure AD application to be able to create or modify the secure webhook action in the action group.
+ > The service principal must be assigned an **owner role** of the Azure AD application to be able to create or modify the secure webhook action in the action group.
1. Configure the secure webhook action.
    1. Copy the `$myApp.ObjectId` value that's in the script.
    1. In the webhook action definition, in the **Object Id** box, enter the value that you copied.
- :::image type="content" source="./media/action-groups/action-groups-secure-webhook.png" alt-text="Screenshot of the Secured Webhook dialog box in the Azure portal. The Object ID box is visible." border="true":::
+ :::image type="content" source="./media/action-groups/action-groups-secure-webhook.png" alt-text="Screenshot that shows the Secured Webhook dialog in the Azure portal with the Object Id box." border="true":::
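For reference, in a Resource Manager definition a secure webhook action is a `webhookReceivers` entry with Azure AD authentication turned on. A hedged sketch; the service URI, object ID, identifier URI, and tenant ID are placeholders to substitute from your own app registration:

```json
"webhookReceivers": [
  {
    "name": "SecureWebhookAction",
    "serviceUri": "https://example.com/alert-hook",
    "useCommonAlertSchema": true,
    "useAadAuth": true,
    "objectId": "<myApp.ObjectId-value>",
    "identifierUri": "<identifier-uri-of-your-protected-api>",
    "tenantId": "<azure-ad-tenant-id>"
  }
]
```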
#### Secure webhook PowerShell script
Write-Host $myApp.AppRoles
### SMS
-For information about rate limits, see [Rate limiting for voice, SMS, emails, Azure App push notifications, and webhook posts](./alerts-rate-limiting.md).
+For information about rate limits, see [Rate limiting for voice, SMS, emails, Azure App Service push notifications, and webhook posts](./alerts-rate-limiting.md).
For important information about using SMS notifications in action groups, see [SMS alert behavior in action groups](./alerts-sms-behavior.md).
For information about pricing for supported countries/regions, see [Azure Monito
### Voice
-For important information about rate limits, see [Rate limiting for voice, SMS, emails, Azure App push notifications, and webhook posts](./alerts-rate-limiting.md).
+For important information about rate limits, see [Rate limiting for voice, SMS, emails, Azure App Service push notifications, and webhook posts](./alerts-rate-limiting.md).
You might have a limited number of voice actions per action group.
For information about pricing for supported countries/regions, see [Azure Monito
> [!NOTE]
>
-> If you use the webhook action, your target webhook endpoint needs to be able to process the various JSON payloads that different alert sources emit. You can not pass security ceritifcate through a webhook action. If the webhook endpoint expects a specific schema, for example, the Microsoft Teams schema, use the Logic Apps action to transform the alert schema to meet the target webhook's expectations.
+> If you use the webhook action, your target webhook endpoint must be able to process the various JSON payloads that different alert sources emit. You can't pass security certificates through a webhook action. If the webhook endpoint expects a specific schema, for example, the Microsoft Teams schema, use the Logic Apps action to transform the alert schema to meet the target webhook's expectations.
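For a sense of what the endpoint has to parse, here's an abridged, hedged sketch of a common-schema payload; the values are illustrative, and a real payload carries more fields in both `essentials` and `alertContext`:

```json
{
  "schemaId": "azureMonitorCommonAlertSchema",
  "data": {
    "essentials": {
      "alertRule": "Sample alert rule",
      "severity": "Sev3",
      "signalType": "Metric",
      "monitorCondition": "Fired",
      "firedDateTime": "2023-02-21T19:00:00.000Z",
      "alertTargetIDs": [
        "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/microsoft.compute/virtualmachines/<vm-name>"
      ]
    },
    "alertContext": {}
  }
}
```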
Webhook action groups use the following rules:

- A webhook call is attempted at most three times.
- The first call waits 10 seconds for a response.
-- Between the first and second call it waits 20 seconds for a response.
-- Between the second and third call it waits 40 seconds for a response.
+- Between the first and second call, it waits 20 seconds for a response.
+- Between the second and third call, it waits 40 seconds for a response.
- The call is retried if any of the following conditions are met:
  - A response isn't received within the timeout period.
- - One of the following HTTP status codes is returned: 408, 429, 503, 504 or TaskCancellationException.
- - If any one of the above errors is encountered an additional 5 seconds wait for the response.
+ - One of the following HTTP status codes is returned: 408, 429, 503, 504, or `TaskCancellationException`.
+ - If any one of the preceding errors is encountered, wait an additional 5 seconds for the response.
- If three attempts to call the webhook fail, no action group calls the endpoint for 15 minutes.
For source IP address ranges, see [Action group IP addresses](../app/ip-addresse
- Learn more about [SMS alert behavior](./alerts-sms-behavior.md).
- Gain an [understanding of the activity log alert webhook schema](./activity-log-alerts-webhook.md).
-- Learn more about [ITSM Connector](./itsmc-overview.md).
+- Learn more about the [ITSM Connector](./itsmc-overview.md).
- Learn more about [rate limiting](./alerts-rate-limiting.md) on alerts.
-- Get an [overview of activity log alerts](./alerts-overview.md), and learn how to receive alerts.
+- Get an [overview of activity log alerts](./alerts-overview.md) and learn how to receive alerts.
- Learn how to [configure alerts whenever a Service Health notification is posted](../../service-health/alerts-activity-log-service-notifications-portal.md).
azure-monitor Alerts Classic.Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-classic.overview.md
Title: Overview of classic alerts in Azure Monitor
-description: Classic alerts are being deprecated. Alerts enable you to monitor Azure resource metrics, events, or logs and be notified when a condition you specify is met.
+description: Classic alerts will be deprecated. Alerts enable you to monitor Azure resource metrics, events, or logs, and they notify you when a condition you specify is met.
Last updated 2/23/2022
-# What are classic alerts in Microsoft Azure?
+# What are classic alerts in Azure?
> [!NOTE]
-> This article describes how to create older classic metric alerts. Azure Monitor now supports [newer near-real time metric alerts and a new alerts experience](./alerts-overview.md). Classic alerts are [retired](./monitoring-classic-retirement.md) for public cloud users. Classic alerts for Azure Government cloud and Azure China 21Vianet will retire on **29 February 2024**.
+> This article describes how to create older classic metric alerts. Azure Monitor now supports [near real time metric alerts and a new alerts experience](./alerts-overview.md). Classic alerts are [retired](./monitoring-classic-retirement.md) for public cloud users. Classic alerts for Azure Government cloud and Azure China 21Vianet will retire on **February 29, 2024**.
>
-Alerts allow you to configure conditions over data and become notified when the conditions match the latest monitoring data.
+Alerts allow you to configure conditions over data, and they notify you when the conditions match the latest monitoring data.
## Old and new alerting capabilities
-In the past Azure Monitor, Application Insights, Log Analytics, and Service Health had separate alerting capabilities. Overtime, Azure improved and combined both the user interface and different methods of alerting. The consolidation is still in process.
+In the past, Azure Monitor, Application Insights, Log Analytics, and Service Health had separate alerting capabilities. Over time, Azure improved and combined both the user interface and different methods of alerting. The consolidation is still in process.
-You can view classic alerts only in the classic alerts user screen in the Azure portal. You get this screen from the **View classic alerts** button on the alerts screen.
+You can view classic alerts only on the classic alerts user screen in the Azure portal. To see this screen, select **View classic alerts** on the **Alerts** screen.
- ![Alert choices in Azure portal](media/alerts-classic.overview/monitor-alert-screen2.png)
+ ![Screenshot that shows alert choices in the Azure portal.](media/alerts-classic.overview/monitor-alert-screen2.png)
The new alerts user experience has the following benefits over the classic alerts experience:
-- **Better notification system** - All newer alerts use action groups, which are named groups of notifications and actions that can be reused in multiple alerts. Classic metric alerts and older Log Analytics alerts do not use action groups.
-- **A unified authoring experience** - All alert creation for metrics, logs and activity log across Azure Monitor, Log Analytics, and Application Insights is in one place.
-- **View fired Log Analytics alerts in Azure portal** - You can now also see fired Log Analytics alerts in your subscription. Previously these were in a separate portal.
-- **Separation of fired alerts and alert rules** - Alert rules (the definition of condition that triggers an alert), and Fired Alerts (an instance of the alert rule firing) are differentiated, so the operational and configuration views are separated.
-- **Better workflow** - The new alerts authoring experience guides the user along the process of configuring an alert rule, which makes it simpler to discover the right things to get alerted on.
-- **Smart Alerts consolidation** and **setting alert state** - Newer alerts include auto grouping functionality showing similar alerts together to reduce overload in the user interface.
+- **Better notification system:** All newer alerts use action groups. You can reuse these named groups of notifications and actions in multiple alerts. Classic metric alerts and older Log Analytics alerts don't use action groups.
+- **A unified authoring experience:** All alert creation for metrics, logs, and activity logs across Azure Monitor, Log Analytics, and Application Insights is in one place.
+- **View fired Log Analytics alerts in the Azure portal:** You can now also see fired Log Analytics alerts in your subscription. Previously, these alerts were in a separate portal.
+- **Separation of fired alerts and alert rules:** Alert rules (the definition of condition that triggers an alert) and fired alerts (an instance of the alert rule firing) are differentiated. Now the operational and configuration views are separated.
+- **Better workflow:** The new alerts authoring experience guides the user along the process of configuring an alert rule. This change makes it simpler to discover the right things to get alerted on.
+- **Smart alerts consolidation and setting alert state:** Newer alerts include auto grouping functionality that shows similar alerts together to reduce overload in the user interface.
The newer metric alerts have the following benefits over the classic metric alerts:
-- **Improved latency**: Newer metric alerts can run as frequently as every one minute. Older metric alerts always run at a frequency of 5 minutes. Newer alerts have increasing smaller delay from issue occurrence to notification or action (3 to 5 minutes). Older alerts are 5 to 15 minutes depending on the type. Log alerts typically have 10 to 15-minute delay due to the time it takes to ingest the logs, but newer processing methods are reducing that time.
-- **Support for multi-dimensional metrics**: You can alert on dimensional metrics allowing you to monitor an interesting segment of the metric.
-- **More control over metric conditions**: You can define richer alert rules. The newer alerts support monitoring the maximum, minimum, average, and total values of metrics.
-- **Combined monitoring of multiple metrics**: You can monitor multiple metrics (currently, up to two metrics) with a single rule. An alert is triggered if both metrics breach their respective thresholds for the specified time-period.
-- **Better notification system**: All newer alerts use [action groups](./action-groups.md), which are named groups of notifications and actions that can be reused in multiple alerts. Classic metric alerts and older Log Analytics alerts do not use action groups.
-- **Metrics from Logs** (public preview): Log data going into Log Analytics can now be extracted and converted into Azure Monitor metrics and then alerted on just like other metrics.
-See [Alerts (classic)]() for the terminology specific to classic alerts.
-
+- **Improved latency:** Newer metric alerts can run as frequently as every minute. Older metric alerts always run at a frequency of 5 minutes. Newer alerts have increasing smaller delay from issue occurrence to notification or action (3 to 5 minutes). Older alerts are 5 to 15 minutes depending on the type. Log alerts typically have a delay of 10 minutes to 15 minutes because of the time it takes to ingest the logs. Newer processing methods are reducing that time.
+- **Support for multidimensional metrics:** You can alert on dimensional metrics. Now you can monitor an interesting segment of the metric.
+- **More control over metric conditions:** You can define richer alert rules. The newer alerts support monitoring the maximum, minimum, average, and total values of metrics.
+- **Combined monitoring of multiple metrics:** You can monitor multiple metrics (currently, up to two metrics) with a single rule. An alert triggers if both metrics breach their respective thresholds for the specified time period.
+- **Better notification system:** All newer alerts use [action groups](./action-groups.md). You can reuse these named groups of notifications and actions in multiple alerts. Classic metric alerts and older Log Analytics alerts don't use action groups.
+- **Metrics from logs (preview):** You can now extract and convert log data that goes into Log Analytics into Azure Monitor metrics and then alert on it like other metrics. For the terminology specific to classic alerts, see [Alerts (classic)]().
## Classic alerts on Azure Monitor data
-There are two types of classic alerts available - metric alerts and activity log alerts.
-
-* **Classic metric alerts** - This alert triggers when the value of a specified metric crosses a threshold that you assign. The alert generates a notification when that threshold is crossed and the alert condition is met. At that point, the alert is considered "Activated". It generates another notification when it is "Resolved" - that is, when the threshold is crossed again and the condition is no longer met.
+Two types of classic alerts are available:
-* **Classic activity log alerts** - A streaming log alert that triggers on an Activity Log event entry that matches your filter criteria. These alerts have only one state, "Activated". The alert engine simply applies the filter criteria to any new event. It does not search to find older entries. These alerts can notify you when a new Service Health incident occurs or when a user or application performs an operation in your subscription, for example, "Delete virtual machine."
+* **Classic metric alerts**: This alert triggers when the value of a specified metric crosses a threshold that you assign. The alert generates a notification when that threshold is crossed and the alert condition is met. At that point, the alert is considered "Activated." It generates another notification when it's "Resolved," that is, when the threshold is crossed again and the condition is no longer met.
+* **Classic activity log alerts**: A streaming log alert that triggers on an activity log event entry that matches your filter criteria. These alerts have only one state: "Activated." The alert engine applies the filter criteria to any new event. It doesn't search to find older entries. These alerts can notify you when a new Service Health incident occurs or when a user or application performs an operation in your subscription. An example of an operation might be "Delete virtual machine."
-For resource log data available through Azure Monitor, route the data into Log Analytics and use a log query alert. Log Analytics now uses the [new alerting method](./alerts-overview.md)
+For resource log data available through Azure Monitor, route the data into Log Analytics and use a log query alert. Log Analytics now uses the [new alerting method](./alerts-overview.md).
The following diagram summarizes sources of data in Azure Monitor and, conceptually, how you can alert off of that data.
-![Alerts explained](media/alerts-classic.overview/Alerts_Overview_Resource_v5.png)
+![Diagram that explains alerts.](media/alerts-classic.overview/Alerts_Overview_Resource_v5.png)
## Taxonomy of alerts (classic)

Azure uses the following terms to describe classic alerts and their functions:
-* **Alert** - a definition of criteria (one or more rules or conditions) that becomes activated when met.
-* **Active** - the state when the criteria defined by a classic alert is met.
-* **Resolved** - the state when the criteria defined by a classic alert is no longer met after previously having been met.
-* **Notification** - the action taken based off of a classic alert becoming active.
-* **Action** - a specific call sent to a receiver of a notification (for example, emailing an address or posting to a webhook URL). Notifications can usually trigger multiple actions.
+* **Alert**: A definition of criteria (one or more rules or conditions) that becomes activated when met.
+* **Active**: The state when the criteria defined by a classic alert are met.
+* **Resolved**: The state when the criteria defined by a classic alert are no longer met after they were previously met.
+* **Notification**: The action taken based off of a classic alert becoming active.
+* **Action**: A specific call sent to a receiver of a notification (for example, emailing an address or posting to a webhook URL). Notifications can usually trigger multiple actions.
## How do I receive a notification from an Azure Monitor classic alert?
-Historically, Azure alerts from different services used their own built-in notification methods.
+Historically, Azure alerts from different services used their own built-in notification methods.
+
+Azure Monitor created a reusable notification grouping called *action groups*. Action groups specify a set of receivers for a notification. Any time an alert is activated that references the action group, all receivers receive that notification. With action groups, you can reuse a grouping of receivers (for example, your on-call engineer list) across many alert objects.
-Azure Monitor created a reusable notification grouping called *action groups*. Action groups specify a set of receivers for a notification. Any time an alert is activated that references the Action Group, all receivers receive that notification. Action groups allow you to reuse a grouping of receivers (for example, your on-call engineer list) across many alert objects. Action groups support notification by posting to a webhook URL in addition to email addresses, SMS numbers, and a number of other actions. For more information, see [action groups](./action-groups.md).
+Action groups support notification by posting to a webhook URL and to email addresses, SMS numbers, and several other actions. For more information, see [Action groups](./action-groups.md).
-Older classic Activity Log alerts use action groups.
+Older classic activity log alerts use action groups. But the older metric alerts don't use action groups. Instead, you can configure the following actions:
-However, the older metric alerts do not use action groups. Instead, you can configure the following actions:
-- Send email notifications to the service administrator, to coadministrators, or to additional email addresses that you specify.
-- Call a webhook, which enables you to launch additional automation actions.
+- Send email notifications to the service administrator, co-administrators, or other email addresses that you specify.
+- Call a webhook, which enables you to launch other automation actions.
-Webhooks enables automation and remediation, for example, using:
-- Azure Automation Runbook
-- Azure Function
-- Azure Logic App
-- a third-party service
+Webhooks enable automation and remediation, for example, by using:
+- Azure Automation runbooks
+- Azure Functions
+- Azure Logic Apps
+- A third-party service
## Next steps
-Get information about alert rules and configuring them by using:
-
-* Learn more about [Metrics](../data-platform.md)
-* Configure [classic Metric Alerts via Azure portal](alerts-classic-portal.md)
-* Configure [classic Metric Alerts PowerShell](alerts-classic-portal.md)
-* Configure [classic Metric Alerts Command-line interface (CLI)](alerts-classic-portal.md)
-* Configure [classic Metric Alerts Azure Monitor REST API](/rest/api/monitor/alertrules)
-* Learn more about [Activity Log](../essentials/platform-logs-overview.md)
-* Configure [Activity Log Alerts via Azure portal](./activity-log-alerts.md)
-* Configure [Activity Log Alerts via Resource Manager](./alerts-activity-log.md)
-* Review the [activity log alert webhook schema](activity-log-alerts-webhook.md)
-* Learn more about [Action groups](./action-groups.md)
-* Configure [newer Alerts](alerts-metric.md)
+Get information about alert rules and how to configure them:
+
+* Learn more about [metrics](../data-platform.md).
+* Configure [classic metric alerts via the Azure portal](alerts-classic-portal.md).
+* Configure [classic metric alerts via PowerShell](alerts-classic-portal.md).
+* Configure [classic metric alerts via the command-line interface (CLI)](alerts-classic-portal.md).
+* Configure [classic metric alerts via the Azure Monitor REST API](/rest/api/monitor/alertrules).
+* Learn more about [activity logs](../essentials/platform-logs-overview.md).
+* Configure [activity log alerts via the Azure portal](./activity-log-alerts.md).
+* Configure [activity log alerts via Azure Resource Manager](./alerts-activity-log.md).
+* Review the [activity log alert webhook schema](activity-log-alerts-webhook.md).
+* Learn more about [action groups](./action-groups.md).
+* Configure [newer alerts](alerts-metric.md).
azure-monitor Alerts Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-logic-apps.md
Title: Customize alert notifications using Logic Apps
+ Title: Customize alert notifications by using Logic Apps
description: Learn how to create a logic app to process Azure Monitor alerts.
Last updated 02/09/2023
-# Customer intent: As an administrator I want to create a logic app that is triggered by an alert so that I can send emails or Teams messages when an alert is fired.
+# Customer intent: As an administrator, I want to create a logic app that's triggered by an alert so that I can send emails or Teams messages when an alert is fired.
-# Customize alert notifications using Logic Apps
+# Customize alert notifications by using Logic Apps
-This article shows you how to create a Logic App and integrate it with an Azure Monitor Alert.
+This article shows you how to create a logic app and integrate it with an Azure Monitor alert.
-[Azure Logic Apps](../../logic-apps/logic-apps-overview.md) allows you to build and customize workflows for integration. Use Logic Apps to customize your alert notifications.
-
-- Customize the alerts email, using your own email subject and body format.
-- Customize the alert metadata by looking up tags for affected resources or fetching a log query search result. For information on how to access the search result rows containing alerts data, see:
+You can use [Azure Logic Apps](../../logic-apps/logic-apps-overview.md) to build and customize workflows for integration. Use Logic Apps to customize your alert notifications. You can:
+
+- Customize the alerts email by using your own email subject and body format.
+- Customize the alert metadata by looking up tags for affected resources or fetching a log query search result. For information on how to access the search result rows that contain alerts data, see:
- [Azure Monitor Log Analytics API response format](../logs/api/response-format.md) - [Query/management HTTP response](/azure/data-explorer/kusto/api/rest/response)-- Integrate with external services using existing connectors like Outlook, Microsoft Teams, Slack and PagerDuty, or by configuring the Logic App for your own services.
+- Integrate with external services by using existing connectors like Outlook, Microsoft Teams, Slack, and PagerDuty. You can also configure the logic app for your own services.
+
+This example creates a logic app that uses the [common alerts schema](./alerts-common-schema.md) to send details from the alert.
-In this example, the following steps create a Logic App that uses the [common alerts schema](./alerts-common-schema.md) to send details from the alert. The example uses the following steps:
+## Create a logic app
-1. [Create a Logic App](#create-a-logic-app) for sending an email or a Teams post.
-1. [Create an alert action group](#create-an-action-group) that triggers the logic app.
-1. [Create a rule](#create-a-rule-using-your-action-group) the uses the action group.
+1. In the [Azure portal](https://portal.azure.com/), create a new logic app. In the **Search** bar at the top of the page, enter **Logic App**.
+1. On the **Logic App** page, select **Add**.
+1. Select the **Subscription** and **Resource group** for your logic app.
+1. Set **Logic App name**. For **Plan type**, select **Consumption**.
+1. Select **Review + create** > **Create**.
+1. Select **Go to resource** after the deployment is finished.
-## Create a Logic App
+ :::image type="content" source="./media/alerts-logic-apps/create-logic-app.png" alt-text="Screenshot that shows the Create Logic App page.":::
+1. On the **Logic Apps Designer** page, select **When a HTTP request is received**.
-1. In the [portal](https://portal.azure.com/), create a new Logic app. In the **Search** bar at the top of the page, enter "Logic App".
-1. On the **Logic App** page, select **+Add**.
-1. Select the **Subscription** and **Resource group** for your Logic App.
-1. Set **Logic App name**, and select **Consumption Plan type**.
-1. Select **Review + create**, then select **Create**.
-1. Select **Go to resource** when the deployment is complete.
-1. On the **Logic Apps Designer** page, select **When a HTTP request is received**.
+ :::image type="content" source="./media/alerts-logic-apps/logic-apps-designer.png" alt-text="Screenshot that shows the Logic Apps Designer start page.":::
1. Paste the common alert schema into the **Request Body JSON Schema** field from the following JSON:

    ```json
    {
In this example, the following steps create a Logic App that uses the [common al
    }
    ```
- :::image type="content" source="./media/alerts-logic-apps/configure-http-request-received.png" alt-text="A screenshot showing the parameters for the http request received step.":::
+ :::image type="content" source="./media/alerts-logic-apps/configure-http-request-received.png" alt-text="Screenshot that shows the Parameters tab for the When a HTTP request is received pane.":::
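If you switch the designer to code view, the trigger you just configured is stored as a Request trigger whose `schema` property holds the JSON you pasted. A heavily abridged, hedged sketch of that representation:

```json
"triggers": {
  "manual": {
    "type": "Request",
    "kind": "Http",
    "inputs": {
      "schema": {
        "type": "object",
        "properties": {
          "schemaId": { "type": "string" },
          "data": { "type": "object" }
        }
      }
    }
  }
}
```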
-1. (Optional). You can customize the alert notification by extracting information about the affected resource on which the alert fired, e.g. the resource's tags. You can then include those resource tags in the alert payload and use the information in your logical expressions for sending the notifications. To do this, we will:
+1. (Optional). You can customize the alert notification by extracting information about the affected resource on which the alert fired, for example, the resource's tags. You can then include those resource tags in the alert payload and use the information in your logical expressions for sending the notifications. To do this step, we will:
- Create a variable for the affected resource IDs.
- - Split the resource ID into in an array so we can use its various elements (e.g. subscription, resource group).
- - Use the Azure Resource Manager connector to read the resource's metadata.
- - Fetch the resource's tags which can then be used in subsequent steps of the Logic App.
+ - Split the resource ID into an array so that we can use its various elements (for example, subscription and resource group).
+ - Use the Azure Resource Manager connector to read the resource's metadata.
+ - Fetch the resource's tags, which can then be used in subsequent steps of the logic app.
- 1. Select **+** and **Add an action** to insert a new step.
+ 1. Select **+** > **Add an action** to insert a new step.
1. In the **Search** field, search for and select **Initialize variable**.
- 1. In the **Name** field, enter the name of the variable, such as 'AffectedResources'.
+ 1. In the **Name** field, enter the name of the variable, such as **AffectedResources**.
1. In the **Type** field, select **Array**.
- 1. In the **Value** field, select **Add dynamic Content**. Select the **Expression** tab, and enter this string: `split(triggerBody()?['data']?['essentials']?['alertTargetIDs'][0], '/')`.
+ 1. In the **Value** field, select **Add dynamic Content**. Select the **Expression** tab and enter the string `split(triggerBody()?['data']?['essentials']?['alertTargetIDs'][0], '/')`.
- :::image type="content" source="./media/alerts-logic-apps/initialize-variable.png" alt-text="A screenshot showing the parameters for the initializing a variable in Logic Apps.":::
+ :::image type="content" source="./media/alerts-logic-apps/initialize-variable.png" alt-text="Screenshot that shows the Parameters tab for the Initialize variable pane.":::
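In code view, this step is stored as an `InitializeVariable` action. A hedged, abridged sketch of how the split expression lands in the workflow definition (the `runAfter` ordering is omitted):

```json
"Initialize_variable": {
  "type": "InitializeVariable",
  "inputs": {
    "variables": [
      {
        "name": "AffectedResources",
        "type": "array",
        "value": "@split(triggerBody()?['data']?['essentials']?['alertTargetIDs'][0], '/')"
      }
    ]
  }
}
```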
- 1. Select **+** and **Add an action** to insert another step.
- 1. In the **Search** field, search for and select **Azure Resource Manager**, and then **Read a resource**.
- 1. Populate the fields of the **Read a resource** action with the array values from the `AffectedResources` variable. In each of the fields, click inside the field, and scroll down to **Enter a custom value**. Select **Add dynamic content**, and then select the **Expression** tab. Enter the strings from this table:
+ 1. Select **+** > **Add an action** to insert another step.
+ 1. In the **Search** field, search for and select **Azure Resource Manager** > **Read a resource**.
+ 1. Populate the fields of the **Read a resource** action with the array values from the `AffectedResources` variable. In each of the fields, select the field and scroll down to **Enter a custom value**. Select **Add dynamic content**, and then select the **Expression** tab. Enter the strings from this table:
|Field|String value|
|---|---|
In this example, the following steps create a Logic App that uses the [common al
The dynamic content now includes tags from the affected resource. You can use those tags when you configure your notifications as described in the following steps.

1. Send an email or post a Teams message.
-1. Select **+** and **Add an action** to insert a new step.
+1. Select **+** > **Add an action** to insert a new step.
- :::image type="content" source="./media/alerts-logic-apps/configure-http-request-received.png" alt-text="A screenshot showing the parameters for the when http request received step.":::
+ :::image type="content" source="./media/alerts-logic-apps/configure-http-request-received.png" alt-text="Screenshot that shows the parameters for When a HTTP request is received.":::
## [Send an email](#tab/send-email)
-1. In the search field, search for *outlook*.
-1. Select **Office 365 Outlook**.
- :::image type="content" source="./media/alerts-logic-apps/choose-operation-outlook.png" alt-text="A screenshot showing add action page of the logic apps designer with Office 365 Outlook selected.":::
+1. In the search field, search for **Outlook**.
+1. Select **Office 365 Outlook**.
+
+ :::image type="content" source="./media/alerts-logic-apps/choose-operation-outlook.png" alt-text="Screenshot that shows the Add an action page of the Logic Apps Designer with Office 365 Outlook selected.":::
1. Select **Send an email (V2)** from the list of actions.
-1. Sign into Office 365 when prompted to create a connection.
-1. Create the email **Body** by entering static text and including content taken from the alert payload by choosing fields from the **Dynamic content** list.
+1. Sign in to Office 365 when you're prompted to create a connection.
+1. Create the email **Body** by entering static text and including content taken from the alert payload by choosing fields from the **Dynamic content** list.
For example:
- - Enter the text: `An alert has been triggered with this monitoring condition:`. Then, select **monitorCondition** from the **Dynamic content** list.
- - Enter the text: `Date fired:`. Then, select **firedDateTime** from the **Dynamic content** list.
- - Enter the text: `Affected resources:`. Then, select **alertTargetIDs** from the **Dynamic content** list.
+ - **An alert has monitoring condition:** Select **monitorCondition** from the **Dynamic content** list.
+ - **Date fired:** Select **firedDateTime** from the **Dynamic content** list.
+ - **Affected resources:** Select **alertTargetIDs** from the **Dynamic content** list.
-1. In the **Subject** field, create the subject text by entering static text and including content taken from the alert payload by choosing fields from the **Dynamic content** list.
-For example:
- - Enter the text: `Alert:`. Then, select **alertRule** from the **Dynamic content** list.
- - Enter the text: `with severity:`. Then, select **severity** from the **Dynamic content** list.
- - Enter the text: `has condition:`. Then, select **monitorCondition** from the **Dynamic content** list.
+1. In the **Subject** field, create the subject text by entering static text and including content taken from the alert payload by choosing fields from the **Dynamic content** list. For example:
+ - **Alert:** Select **alertRule** from the **Dynamic content** list.
+ - **with severity:** Select **severity** from the **Dynamic content** list.
+ - **has condition:** Select **monitorCondition** from the **Dynamic content** list.
-1. Enter the email address to send the alert to in the **To** field.
+1. In the **To** field, enter the email address to send the alert to.
1. Select **Save**.
- :::image type="content" source="./media/alerts-logic-apps/configure-email.png" alt-text="A screenshot showing the parameters tab for the send email action.":::
+ :::image type="content" source="./media/alerts-logic-apps/configure-email.png" alt-text="Screenshot that shows the Parameters tab on the Send an email pane.":::
-You've created a Logic App that sends an email to the specified address, with details from the alert that triggered it.
+You've created a logic app that sends an email to the specified address, with details from the alert that triggered it.
-The next step is to create an action group to trigger your Logic App.
+The next step is to create an action group to trigger your logic app.
## [Post a Teams message](#tab/send-teams-message)
-1. In the search field, search for *Microsoft Teams*.
-1. Select **Microsoft Teams**
- :::image type="content" source="./media/alerts-logic-apps/choose-operation-teams.png" alt-text="A screenshot showing add action page of the logic apps designer with Microsoft Teams selected.":::
-1. Select **Post a message in a chat or channel** from the list of actions.
-1. Sign into Teams when prompted to create a connection.
-1. Select **User** from the **Post as** dropdown.
+1. In the search field, search for **Microsoft Teams**.
+1. Select **Microsoft Teams**.
+
+ :::image type="content" source="./media/alerts-logic-apps/choose-operation-teams.png" alt-text="Screenshot that shows the Add an action page of the Logic Apps Designer with Microsoft Teams selected.":::
+1. Select **Post message in a chat or channel** from the list of actions.
+1. Sign in to Teams when you're prompted to create a connection.
+1. Select **User** from the **Post as** dropdown.
1. Select **Group chat** from the **Post in** dropdown. 1. Select your group from the **Group chat** dropdown.
-1. Create the message text in the **Message** field by entering static text and including content taken from the alert payload by choosing fields from the **Dynamic content** list.
- For example:
- 1. Enter `Alert:` then select **alertRule** from the **Dynamic content** list.
- 1. Enter `with severity:` and select **severity** from the **Dynamic content** list.
- 1. Enter `was fired at:` and select **firedDateTime** from the **Dynamic content** list.
+1. Create the message text in the **Message** field by entering static text and including content taken from the alert payload by choosing fields from the **Dynamic content** list. For example:
+ 1. **Alert:** Select **alertRule** from the **Dynamic content** list.
+ 1. **with severity:** Select **severity** from the **Dynamic content** list.
+ 1. **was fired at:** Select **firedDateTime** from the **Dynamic content** list.
1. Add more fields according to your requirements.
-1. Select **Save**
- :::image type="content" source="./media/alerts-logic-apps/configure-teams-message.png" alt-text="A screenshot showing the parameters tab for the post a message in a chat or channel action.":::
+1. Select **Save**.
+
+ :::image type="content" source="./media/alerts-logic-apps/configure-teams-message.png" alt-text="Screenshot that shows the Parameters tab on the Post message in a chat or channel pane.":::
-You've created a Logic App that sends a Teams message to the specified group, with details from the alert that triggered it.
+You've created a logic app that sends a Teams message to the specified group, with details from the alert that triggered it.
-The next step is to create an action group to trigger your Logic App.
+The next step is to create an action group to trigger your logic app.
## Create an action group
-To trigger your Logic app, create an action group, then create an alert that uses that action group.
+To trigger your logic app, create an action group. Then create an alert that uses that action group.
-1. Go to the Azure Monitor page and select **Alerts** from the sidebar.
-
-1. Select **Action groups**, then select **Create**.
-1. Select a **Subscription**, **Resource group** and **Region**.
-1. Enter an **Actions group name** and **Display name**.
+1. Go to the **Azure Monitor** page and select **Alerts** from the pane on the left.
+1. Select **Action groups** > **Create**.
+1. Select values for **Subscription**, **Resource group**, and **Region**.
+1. Enter a name for **Action group name** and **Display name**.
1. Select the **Actions** tab.
-1. In the **Actions** tab under **Action type**, select **Logic App**.
+
+ :::image type="content" source="./media/alerts-logic-apps/create-action-group.png" alt-text="Screenshot that shows the Actions tab on the Create an action group page.":::
+
+1. On the **Actions** tab under **Action type**, select **Logic App**.
1. In the **Logic App** section, select your logic app from the dropdown.
-1. Set **Enable common alert schema** to *Yes*. If you select *No*, the alert type determines which alert schema is used. For more information about alert schemas, see [Context specific alert schemas](./alerts-non-common-schema-definitions.md).
+1. Set **Enable common alert schema** to **Yes**. If you select **No**, the alert type determines which alert schema is used. For more information about alert schemas, see [Context-specific alert schemas](./alerts-non-common-schema-definitions.md).
1. Select **OK**. 1. Enter a name in the **Name** field.
-1. Select **Review + create**, the **Create**.
+1. Select **Review + create** > **Create**.
+
+ :::image type="content" source="./media/alerts-logic-apps/create-action-group-actions.png" alt-text="Screenshot that shows the Actions tab on the Create an action group page and the Logic App pane.":::
## Test your action group

1. Select your action group.
-1. In the **Logic App** section, select **Test action group(preview)**.
-1. Select a **Sample alert type** from the dropdown.
-1. Select **Test**.
-
+1. In the **Logic App** section, select **Test action group (preview)**.
+
+ :::image type="content" source="./media/alerts-logic-apps/test-action-group1.png" alt-text="Screenshot that shows an action group details page with the Test action group option.":::
+1. Select a sample alert type from the **Select sample type** dropdown.
+1. Select **Test**.
-The following email is sent to the specified account:
+ :::image type="content" source="./media/alerts-logic-apps/test-action-group2.png" alt-text="Screenshot that shows an action group details Test page.":::
+ The following email is sent to the specified account:
+ :::image type="content" source="./media/alerts-logic-apps/sample-output-email.png" alt-text="Screenshot that shows a sample email sent by the Test page.":::
-## Create a rule using your action group
+## Create a rule by using your action group
-1. [Create a rule](./alerts-create-new-alert-rule.md) for one of your resources.
-
-1. In the actions section of your rule, select **Select action groups**.
+1. [Create a rule](./alerts-create-new-alert-rule.md) for one of your resources.
+1. On the **Actions** tab of your rule, choose **Select action groups**.
1. Select your action group from the list.
-1. Select **Select**.
+1. Choose **Select**.
1. Finish the creation of your rule.
- :::image type="content" source="./media/alerts-logic-apps/select-action-groups.png" alt-text="A screenshot showing the actions tab of the create rules page and the select action groups pane.":::
+
+ :::image type="content" source="./media/alerts-logic-apps/select-action-groups.png" alt-text="Screenshot that shows the Actions tab on the Create an alert rule pane and the Select action groups pane.":::
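
For reference, the link between the rule and the action group is a single property on the rule resource. The following is a minimal sketch of a metric alert rule as an ARM resource; the names, scope, and criterion are illustrative, not part of the steps above.

```json
{
  "type": "Microsoft.Insights/metricAlerts",
  "apiVersion": "2018-03-01",
  "name": "my-alert-rule",
  "location": "global",
  "properties": {
    "severity": 3,
    "enabled": true,
    "scopes": [ "<resource-id-to-monitor>" ],
    "evaluationFrequency": "PT5M",
    "windowSize": "PT5M",
    "criteria": {
      "odata.type": "Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria",
      "allOf": [
        {
          "criterionType": "StaticThresholdCriterion",
          "name": "HighCpu",
          "metricName": "Percentage CPU",
          "operator": "GreaterThan",
          "threshold": 80,
          "timeAggregation": "Average"
        }
      ]
    },
    "actions": [
      {
        "actionGroupId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Insights/actionGroups/<action-group-name>"
      }
    ]
  }
}
```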
## Next steps
-* [Learn more about action groups](./action-groups.md).
-* [Learn more about the common alert schema](./alerts-common-schema.md).
+* [Learn more about action groups](./action-groups.md)
+* [Learn more about the common alert schema](./alerts-common-schema.md)
azure-monitor Alerts Manage Alert Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alert-rules.md
description: Manage your alert rules in the Azure portal, or using the CLI or Po
Previously updated : 08/03/2022 Last updated : 02/20/2023 # Manage your alert rules
Manage your alert rules in the Azure portal, or using the CLI or PowerShell.
## Manage alert rules in the Azure portal

1. In the [portal](https://portal.azure.com/), select **Monitor**, then **Alerts**.
-1. From the top command bar, select **Alert rules**. You'll see all of your alert rules across subscriptions. You can filter the list of rules using the available filters: **Resource group**, **Resource type**, **Resource** and **Signal type**.
+1. From the top command bar, select **Alert rules**. The page shows all your alert rules across all subscriptions.
+
+ :::image type="content" source="media/alerts-managing-alert-instances/alerts-rules-page.png" alt-text="Screenshot of alerts rules page.":::
+
+1. You can filter the list of rules using the available filters:
+ - Subscription
+ - Alert condition
+ - Severity
+ - User response
+ - Monitor service
+ - Signal type
+ - Resource group
+ - Target resource type
+ - Resource name
+ - Suppression status
+
+ > [!NOTE]
+ > If you filter on a `target resource type` scope, the alert rules list doesn't include resource health alert rules. To see the resource health alert rules, remove the `Target resource type` filter, or filter the rules based on the `Resource group` or `Subscription`.
+
1. Select the alert rule that you want to edit. You can select multiple alert rules and enable or disable them. Multi-selecting rules can be useful when you want to perform maintenance on specific resources.
1. Edit any of the fields in the following sections. You can't edit the **Alert Rule Name** or the **Signal type** of an existing alert rule.
   - **Scope**. You can edit the scope for all alert rules **other than**:
To enable recommended alert rules:
1. In the **Notify me by** section, select the way you want to be notified if an alert is fired.
1. Select **Enable**.
+ :::image type="content" source="media/alerts-managing-alert-instances/alerts-enable-recommended-alert-rule-pane.png" alt-text="Screenshot of recommended alert rules pane.":::
## Manage metric alert rules with the Azure CLI
azure-monitor Alerts Rate Limiting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-rate-limiting.md
Title: Rate limiting for SMS, emails, push notifications
-description: Understand how Azure limits the number of possible SMS, email, Azure App push or webhook notifications from an action group.
+description: Understand how Azure limits the number of possible SMS, email, Azure App Service push, or webhook notifications from an action group.
Last updated 2/23/2022
-# Rate limiting for Voice, SMS, emails, Azure App push notifications and webhook posts
-Rate limiting is a suspension of notifications that occurs when too many are sent to a particular phone number, email address or device. Rate limiting ensures that alerts are manageable and actionable.
+# Rate limiting for voice, SMS, emails, Azure App Service push notifications, and webhook posts
+Rate limiting is a suspension of notifications that occurs when too many notifications are sent to a particular phone number, email address, or device. Rate limiting ensures that alerts are manageable and actionable.
The rate limit thresholds in **production** are:

-- **SMS**: No more than 1 SMS every 5 minutes.
-- **Voice**: No more than 1 Voice call every 5 minutes.
+- **SMS**: No more than one SMS every 5 minutes.
+- **Voice**: No more than one voice call every 5 minutes.
- **Email**: No more than 100 emails in an hour.
-
- Other actions are not rate limited.
+
+ Other actions aren't rate limited.
The rate limit thresholds for **test action group** are:

-- **SMS**: No more than 1 SMS every 1 minute.
-- **Voice**: No more than 1 Voice call every 1 minute.
-- **Email**: No more than 2 emails in every 1 minute.
-
- Other actions are not rate limited.
+- **SMS**: No more than one SMS every 1 minute.
+- **Voice**: No more than one voice call every 1 minute.
+- **Email**: No more than two emails in every 1 minute.
+
+ Other actions aren't rate limited.
## Rate limit rules

- A particular phone number or email is rate limited when it receives more messages than the threshold allows.
- A phone number or email can be part of action groups across many subscriptions. Rate limiting applies across all subscriptions. It applies as soon as the threshold is reached, even if messages are sent from multiple subscriptions.
-- When an email address is rate limited, an additional notification is sent to communicate the rate limiting. The email states when the rate limiting expires.
+- When an email address is rate limited, another notification is sent to communicate the rate limiting. The email states when the rate limiting expires.
## Next steps

* Learn more about [SMS alert behavior](alerts-sms-behavior.md).
-* Get an [overview of activity log alerts](./alerts-overview.md), and learn how to receive alerts.
+* Get an [overview of activity log alerts](./alerts-overview.md) and learn how to receive alerts.
* Learn how to [configure alerts whenever a service health notification is posted](../../service-health/alerts-activity-log-service-notifications-portal.md).
azure-monitor Alerts Sms Behavior https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-sms-behavior.md
Title: SMS Alert behavior in Action Groups
-description: SMS message format and responding to SMS messages to unsubscribe, resubscribe or request help.
+ Title: SMS alert behavior in action groups
+description: SMS message format and responding to SMS messages to unsubscribe, resubscribe, or request help.
Last updated 2/23/2022
-# SMS Alert Behavior in Action Groups
+# SMS alert behavior in action groups
-## Overview
-Action groups enable you to configure a list of actions. These groups are used when defining alerts; ensuring that a particular action group is notified when the alert is triggered. One of the actions supported is SMS; SMS notifications support bi-directional communication. A user may respond to an SMS to:
+Action groups enable you to configure a list of actions. These groups are used when you define alerts. They ensure that a particular action group is notified when the alert is triggered. One of the actions supported is SMS. SMS notifications support bidirectional communication. A user can respond to an SMS to:
-- **Unsubscribe from alerts:** A user may unsubscribe from all SMS alerts for all action groups, or a single action group.
-- **Resubscribe to alerts:** A user may resubscribe to all SMS alerts for all action groups, or a single action group.
-- **Request help:** A user may ask for more information on the SMS. They are redirected to this article.
+- **Unsubscribe from alerts:** A user can unsubscribe from all SMS alerts for all action groups or a single action group.
+- **Resubscribe to alerts:** A user can resubscribe to all SMS alerts for all action groups or a single action group.
+- **Request help:** A user can ask for more information on the SMS. Users are redirected to this article.
-This article covers the behavior of the SMS alerts and the response actions the user can take based on the locale of the user:
+This article covers the behavior of SMS alerts and the response actions the user can take based on the locale of the user.
-## Receiving an SMS Alert
+## Receive an SMS alert
An SMS receiver configured as part of an action group receives an SMS when an alert is triggered. The SMS contains the following information:
-* Shortname of the action group this alert was sent to
+
+* Short name of the action group where this alert was sent
* Title of the alert

| REPLY | Description |
| -- | -- |
-| DISABLE `<Action Group Short name>` | Disables further SMS from the Action Group |
-| ENABLE `<Action Group Short name>` | Re-enables SMS from the Action Group |
-| STOP | Disables further SMS from all Action Groups |
-| START | Re-enables SMS from ALL Action Groups |
+| DISABLE `<Action Group Short name>` | Disables further SMS from the action group. |
+| ENABLE `<Action Group Short name>` | Re-enables SMS from the action group. |
+| STOP | Disables further SMS from all action groups. |
+| START | Re-enables SMS from all action groups. |
| HELP | A response is sent to the user with a link to this article. |

>[!NOTE]
->If a user has unsubscribed from SMS alerts, but is then added to a new action group; they WILL receive SMS alerts for that new action group, but remain unsubscribed from all previous action groups.
+>If a user has unsubscribed from SMS alerts but is then added to a new action group, they *will* receive SMS alerts for that new action group but remain unsubscribed from all previous action groups.
-## Next Steps
-Get an [overview of activity log alerts](./alerts-overview.md) and learn how to get alerted
-Learn more about [SMS rate limiting](alerts-rate-limiting.md)
-Learn more about [action groups](./action-groups.md)
+## Next steps
+* Get an [overview of activity log alerts](./alerts-overview.md) and learn how to get alerted.
+* Learn more about [SMS rate limiting](alerts-rate-limiting.md).
+* Learn more about [action groups](./action-groups.md).
azure-monitor Opencensus Python Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python-request.md
First, instrument your Python application with latest [OpenCensus Python SDK](./
}
```
-You can find a Django sample application in the sample Azure Monitor OpenCensus Python samples repository located [here](https://github.com/givenscj/azure-monitor-opencensus-python/tree/master/azure_monitor/django_sample).
+You can find a Django sample application in the sample Azure Monitor OpenCensus Python samples repository located [here](https://github.com/Azure-Samples/azure-monitor-opencensus-python/tree/master/azure_monitor/django_sample).
## Tracking Flask applications

1. Download and install `opencensus-ext-flask` from [PyPI](https://pypi.org/project/opencensus-ext-flask/) and instrument your application with the `flask` middleware. Incoming requests sent to your `flask` application will be tracked.
azure-monitor Opencensus Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python.md
LOGGING = {
"azure": { "level": "DEBUG", "class": "opencensus.ext.azure.log_exporter.AzureLogHandler",
- "instrumentation_key": "<your-ikey-here>",
+ "connection_string": "<your-application-insights-connection-string>",
}, "console": { "level": "DEBUG",
azure-monitor Change Analysis Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-enable.md
The Change Analysis service:
- Easily navigate through all resource changes.
- Identify relevant changes in the troubleshooting or monitoring context.
-Register the `Microsoft.ChangeAnalysis` resource provider with an Azure Resource Manager subscription to make the resource properties and configuration change data available. The `Microsoft.ChangeAnalysis` resource is automatically registered as you either:
+Register the `Microsoft.ChangeAnalysis` resource provider with an Azure Resource Manager subscription to make the resource properties and configuration change data available. The `Microsoft.ChangeAnalysis` resource provider is automatically registered as you either:
- Enter any UI entry point, like the Web App **Diagnose and Solve Problems** tool, or
- Bring up the Change Analysis standalone tab.
azure-monitor Container Insights Agent Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-agent-config.md
The following table describes the settings you can configure to control metric c
ConfigMap is a global list and there can be only one ConfigMap applied to the agent. You can't have another ConfigMap overruling the collections.
+### Agent settings for outbound proxy with Azure Monitor Private Link Scope (AMPLS)
+
+| Key | Data type | Value | Description |
+|--|--|--|--|
+| `[agent_settings.proxy_config] ignore_proxy_settings =` | Boolean | True or false | Set this value to true to ignore proxy settings. On both AKS and Arc-enabled Kubernetes environments, if your cluster is configured with a forward proxy, the proxy settings are automatically applied and used for the agent. For certain configurations, such as AMPLS with a proxy, you may want the proxy configuration to be ignored. By default, this setting is set to `false`. |
+
## Configure and deploy ConfigMaps

To configure and deploy your ConfigMap configuration file to your cluster:
To configure and deploy your ConfigMap configuration file to your cluster:
The configuration change can take a few minutes to finish before taking effect. Then all Azure Monitor Agent pods in the cluster will restart. The restart is a rolling restart for all Azure Monitor Agent pods, so not all of them restart at the same time. When the restarts are finished, a message similar to this example includes the following result: `configmap "container-azm-ms-agentconfig" created`.
+
## Verify configuration

To verify the configuration was successfully applied to a cluster, use the following command to review the logs from an agent pod: `kubectl logs ama-logs-fdf58 -n kube-system`. If there are configuration errors from the Azure Monitor Agent pods, the output will show errors similar to the following example:
azure-monitor Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/availability-zones.md
Azure Monitor currently supports the following regions:

- East US 2
- West US 2
+- Canada Central
+- France Central
+- Japan East
## Dedicated clusters

Azure Monitor support for availability zones requires a Log Analytics workspace linked to an [Azure Monitor dedicated cluster](logs-dedicated-clusters.md). Dedicated clusters are a deployment option that enables advanced capabilities for Azure Monitor Logs, including availability zones.
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
Configure a table for Basic logs if:
These tables currently support Basic logs:
- | Table | Details|
+ | Service | Table |
|:|:|
| Custom tables | All custom tables created with or migrated to the [data collection rule (DCR)-based logs ingestion API](logs-ingestion-api-overview.md). |
- | [ACSCallAutomationIncomingOperations](/azure/azure-monitor/reference/tables/ACSCallAutomationIncomingOperations) | Communication Services incoming requests Calls. |
- | [ACSCallRecordingSummary](/azure/azure-monitor/reference/tables/acscallrecordingsummary) | Communication Services recording summary logs. |
- | [ACSRoomsIncomingOperations](/azure/azure-monitor/reference/tables/acsroomsincomingoperations) | Communication Services Rooms incoming requests operations. |
- | [AHDSMedTechDiagnosticLogs](/azure/azure-monitor/reference/tables/AHDSMedTechDiagnosticLogs) | Health Data Services operational logs. |
- | [AppTraces](/azure/azure-monitor/reference/tables/apptraces) | Application Insights Freeform traces. |
- | [AMSLiveEventOperations](/azure/azure-monitor/reference/tables/AMSLiveEventOperations) | Azure Media Services encoder connects, disconnects, or discontinues. |
- | [AMSKeyDeliveryRequests](/azure/azure-monitor/reference/tables/AMSKeyDeliveryRequests) | Azure Media Services HTTP request details for key, or license acquisition. |
- | [AMSMediaAccountHealth](/azure/azure-monitor/reference/tables/AMSMediaAccountHealth) | Azure Media Services account health status. |
- | [AMSStreamingEndpointRequests](/azure/azure-monitor/reference/tables/AMSStreamingEndpointRequests) | Azure Media Services information about requests to streaming endpoints. |
- | [ASCAuditLogs](/azure/azure-monitor/reference/tables/ASCAuditLogs) | Azure Sphere audit logs generated by Azure Sphere service and devices. |
- | [ASCDeviceEvents](/azure/azure-monitor/reference/tables/ASCDeviceEvents) | Azure Sphere devices operations, with information about event types, event categories, event classes, event descriptions etc. |
- | [AVNMNetworkGroupMembershipChange](/azure/azure-monitor/reference/tables/AVNMNetworkGroupMembershipChange) | Azure Virtual Network Manager changes to network group membership of network resources. |
- | [AZFWNetworkRule](/azure/azure-monitor/reference/tables/AZFWNetworkRule) | Azure Firewalls network rules logs including data plane packet and rule's attributes. |
- | [ContainerAppConsoleLogs](/azure/azure-monitor/reference/tables/containerappconsoleLogs) | Azure Container Apps logs, generated within a Container Apps environment. |
- | [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2) | Used in [Container insights](../containers/container-insights-overview.md) and includes verbose text-based log records. |
- | [DevCenterDiagnosticLogs](/azure/azure-monitor/reference/tables/DevCenterDiagnosticLogs) | Dev Center resources data plane audit logs. For example, dev boxes and environment stop, start, delete. |
- | [StorageBlobLogs](/azure/azure-monitor/reference/tables/StorageBlobLogs) | Azure Storage blob service logs. |
- | [StorageFileLogs](/azure/azure-monitor/reference/tables/StorageFileLogs) | Azure Storage file service logs. |
- | [StorageQueueLogs](/azure/azure-monitor/reference/tables/StorageQueueLogs) | Azure Storage queue service logs. |
- | [StorageTableLogs](/azure/azure-monitor/reference/tables/StorageTableLogs) | Azure Storage table service logs. |
+ | Application Insights | [AppTraces](/azure/azure-monitor/reference/tables/apptraces) |
+ | Container Apps | [ContainerAppConsoleLogs](/azure/azure-monitor/reference/tables/containerappconsoleLogs) |
+ | Container Insights | [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2) |
+ | Communication Services | [ACSCallAutomationIncomingOperations](/azure/azure-monitor/reference/tables/ACSCallAutomationIncomingOperations)<br>[ACSCallRecordingSummary](/azure/azure-monitor/reference/tables/acscallrecordingsummary)<br>[ACSRoomsIncomingOperations](/azure/azure-monitor/reference/tables/acsroomsincomingoperations) |
+ | Confidential Ledgers | [CCFApplicationLogs](/azure/azure-monitor/reference/tables/CCFApplicationLogs) |
+ | Dev Center | [DevCenterDiagnosticLogs](/azure/azure-monitor/reference/tables/DevCenterDiagnosticLogs) |
+ | Firewalls | [AZFWNetworkRule](/azure/azure-monitor/reference/tables/AZFWNetworkRule) |
+ | Health Data | [AHDSMedTechDiagnosticLogs](/azure/azure-monitor/reference/tables/AHDSMedTechDiagnosticLogs) |
+ | Media Services | [AMSLiveEventOperations](/azure/azure-monitor/reference/tables/AMSLiveEventOperations)<br>[AMSKeyDeliveryRequests](/azure/azure-monitor/reference/tables/AMSKeyDeliveryRequests)<br>[AMSMediaAccountHealth](/azure/azure-monitor/reference/tables/AMSMediaAccountHealth)<br>[AMSStreamingEndpointRequests](/azure/azure-monitor/reference/tables/AMSStreamingEndpointRequests) |
+ | Sphere | [ASCAuditLogs](/azure/azure-monitor/reference/tables/ASCAuditLogs)<br>[ASCDeviceEvents](/azure/azure-monitor/reference/tables/ASCDeviceEvents) |
+ | Storage | [StorageBlobLogs](/azure/azure-monitor/reference/tables/StorageBlobLogs)<br>[StorageFileLogs](/azure/azure-monitor/reference/tables/StorageFileLogs)<br>[StorageQueueLogs](/azure/azure-monitor/reference/tables/StorageQueueLogs)<br>[StorageTableLogs](/azure/azure-monitor/reference/tables/StorageTableLogs) |
+ | Storage Mover | [StorageMoverJobRunLogs](/azure/azure-monitor/reference/tables/StorageMoverJobRunLogs) |
+ | Virtual Network Manager | [AVNMNetworkGroupMembershipChange](/azure/azure-monitor/reference/tables/AVNMNetworkGroupMembershipChange) |
> [!NOTE]
> Tables created with the [Data Collector API](data-collector-api.md) don't support Basic logs.
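
For orientation, switching an individual table to Basic logs outside the portal is a PATCH against the workspace's Tables API; the request body carries only the plan (a sketch; confirm the current API version in the Tables REST reference):

```json
{
  "properties": {
    "plan": "Basic"
  }
}
```

Setting `"plan": "Analytics"` in the same call switches the table back.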
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md
Azure Monitor collects and aggregates the data from every layer and component of
Azure Monitor also includes Azure Monitor SCOM Managed Instance, which allows you to move your on-premises System Center Operation Manager (Operations Manager) installation to the cloud in Azure.
-Use Azure Monitor to monitor these types resources in Azure, other clouds, or on-premises:
+Use Azure Monitor to monitor these types of resources in Azure, other clouds, or on-premises:
- Applications
- Virtual machines
- Guest operating systems
You may need to integrate Azure Monitor with other systems or to build custom so
## Next steps

- [Getting started with Azure Monitor](getting-started.md)
- [Sources of monitoring data for Azure Monitor](data-sources.md)
-- [Data collection in Azure Monitor](essentials/data-collection.md)
+- [Data collection in Azure Monitor](essentials/data-collection.md)
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 12/23/2022 Last updated : 2/17/2022
Azure NetApp Files Standard network features are supported for the following reg
* North Central US * North Europe * Norway East
+* Norway West
* South Africa North * South Central US * South India
azure-netapp-files Azure Netapp Files Quickstart Set Up Account Create Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-quickstart-set-up-account-create-volumes.md
Previously updated : 12/13/2022 Last updated : 02/21/2023 #Customer intent: As an IT admin new to Azure NetApp Files, I want to quickly set up Azure NetApp Files and create a volume.
Use the Azure portal, PowerShell, or the Azure CLI to [register for NetApp Resou
![Select Azure NetApp Files](../media/azure-netapp-files/azure-netapp-files-select-azure-netapp-files.png)
-2. Click **+ Create** to create a new NetApp account.
+2. Select **+ Create** to create a new NetApp account.
3. In the New NetApp Account window, provide the following information:
   1. Enter **myaccount1** for the account name.
   2. Select your subscription.
- 3. Select **Create new** to create new resource group. Enter **myRG1** for the resource group name. Click **OK**.
+ 3. Select **Create new** to create a new resource group. Enter **myRG1** for the resource group name. Select **OK**.
   4. Select your account location.

   ![New NetApp Account window](../media/azure-netapp-files/azure-netapp-files-new-account-window.png)

   ![Resource group window](../media/azure-netapp-files/azure-netapp-files-resource-group-window.png)
-4. Click **Create** to create your new NetApp account.
+4. Select **Create** to create your new NetApp account.
# [PowerShell](#tab/azure-powershell)
The following code snippet shows how to create a NetApp account in an Azure Reso
1. From the Azure NetApp Files management blade, select your NetApp account (**myaccount1**).
- ![Select NetApp account](../media/azure-netapp-files/azure-netapp-files-select-netapp-account.png)
+ ![Screenshot of selecting NetApp account menu.](../media/azure-netapp-files/azure-netapp-files-select-netapp-account.png)
-2. From the Azure NetApp Files management blade of your NetApp account, click **Capacity pools**.
+2. From the Azure NetApp Files management blade of your NetApp account, select **Capacity pools**.
- ![Click Capacity pools](../media/azure-netapp-files/azure-netapp-files-click-capacity-pools.png)
+ ![Screenshot of Capacity pool selection interface.](../media/azure-netapp-files/azure-netapp-files-click-capacity-pools.png)
-3. Click **+ Add pools**.
+3. Select **+ Add pools**.
- ![Click Add pools](../media/azure-netapp-files/azure-netapp-files-new-capacity-pool.png)
+ :::image type="content" source="../media/azure-netapp-files/azure-netapp-files-new-capacity-pool.png" alt-text="Screenshot of new capacity pool options.":::
4. Provide information for the capacity pool:
   * Enter **mypool1** as the pool name.
The following code snippet shows how to create a NetApp account in an Azure Reso
   * Specify **4 (TiB)** as the pool size.
   * Use the **Auto** QoS type.
-5. Click **Create**.
+5. Select **Create**.
# [PowerShell](#tab/azure-powershell)
The following code snippet shows how to create a capacity pool in an Azure Resou
# [Portal](#tab/azure-portal)
-1. From the Azure NetApp Files management blade of your NetApp account, click **Volumes**.
+1. From the Azure NetApp Files management blade of your NetApp account, select **Volumes**.
- ![Click Volumes](../media/azure-netapp-files/azure-netapp-files-click-volumes.png)
+ ![Screenshot of select volumes interface.](../media/azure-netapp-files/azure-netapp-files-click-volumes.png)
-2. Click **+ Add volume**.
+2. Select **+ Add volume**.
- ![Click Add volumes](../media/azure-netapp-files/azure-netapp-files-click-add-volumes.png)
+ ![Screenshot of add volumes interface.](../media/azure-netapp-files/azure-netapp-files-click-add-volumes.png)
3. In the Create a Volume window, provide information for the volume:
   1. Enter **myvol1** as the volume name.
   2. Select your capacity pool (**mypool1**).
   3. Use the default value for quota.
- 4. Under virtual network, click **Create new** to create a new Azure virtual network (Vnet). Then fill in the following information:
+ 4. Under virtual network, select **Create new** to create a new Azure virtual network (VNet). Then fill in the following information:
      * Enter **myvnet1** as the VNet name.
      * Specify an address space for your setting, for example, 10.7.0.0/16.
      * Enter **myANFsubnet** as the subnet name.
      * Specify the subnet address range, for example, 10.7.0.0/24. You cannot share the dedicated subnet with other resources.
      * Select **Microsoft.NetApp/volumes** for subnet delegation.
- * Click **OK** to create the Vnet.
+ * Select **OK** to create the VNet.
5. In subnet, select the newly created VNet (**myvnet1**) as the delegate subnet.
- ![Create a volume window](../media/azure-netapp-files/azure-netapp-files-create-volume-window.png)
+ ![Screenshot of create a volume window.](../media/azure-netapp-files/azure-netapp-files-create-volume-window.png)
- ![Create virtual network window](../media/azure-netapp-files/azure-netapp-files-create-virtual-network-window.png)
+ ![Screenshot of create a virtual network window.](../media/azure-netapp-files/azure-netapp-files-create-virtual-network-window.png)
-4. Click **Protocol**, and then complete the following actions:
+4. Select **Protocol**, and then complete the following actions:
   * Select **NFS** as the protocol type for the volume.
   * Enter **myfilepath1** as the file path that will be used to create the export path for the volume.
   * Select the NFS version (**NFSv3** or **NFSv4.1**) for the volume. See [considerations](azure-netapp-files-create-volumes.md#considerations) and [best practice](azure-netapp-files-create-volumes.md#best-practice) about NFS versions.
- ![Specify NFS protocol for quickstart](../media/azure-netapp-files/azure-netapp-files-quickstart-protocol-nfs.png)
+ ![Screenshot of NFS protocol for selection.](../media/azure-netapp-files/azure-netapp-files-quickstart-protocol-nfs.png)
-5. Click **Review + create** to display information for the volume you are creating.
+5. Select **Review + create** to display information for the volume you are creating.
-6. Click **Create** to create the volume.
+6. Select **Create** to create the volume.
The created volume appears in the Volumes blade.
- ![Volume created](../media/azure-netapp-files/azure-netapp-files-create-volume-created.png)
+ ![Screenshot of volume creation confirmation.](../media/azure-netapp-files/azure-netapp-files-create-volume-created.png)
# [PowerShell](#tab/azure-powershell)
When you are done and if you want to, you can delete the resource group. The act
1. In the Azure portal's search box, enter **Azure NetApp Files** and then select **Azure NetApp Files** from the list that appears.
-2. In the list of subscriptions, click the resource group (myRG1) you want to delete.
+2. In the list of subscriptions, select the resource group (myRG1) you want to delete.
- ![Navigate to resource groups](../media/azure-netapp-files/azure-netapp-files-azure-navigate-to-resource-groups.png)
+ ![Screenshot of the resource groups menu.](../media/azure-netapp-files/azure-netapp-files-azure-navigate-to-resource-groups.png)
-3. In the resource group page, click **Delete resource group**.
+3. In the resource group page, select **Delete resource group**.
   ![Screenshot that highlights the Delete resource group button.](../media/azure-netapp-files/azure-netapp-files-azure-delete-resource-group.png)

   A window opens and displays a warning about the resources that will be deleted with the resource group.
-4. Enter the name of the resource group (myRG1) to confirm that you want to permanently delete the resource group and all resources in it, and then click **Delete**.
+4. Enter the name of the resource group (myRG1) to confirm that you want to permanently delete the resource group and all resources in it, and then select **Delete**.
- ![Confirm deleting resource group](../media/azure-netapp-files/azure-netapp-files-azure-confirm-resource-group-deletion.png )
+ ![Screenshot showing confirmation of deleting resource group.](../media/azure-netapp-files/azure-netapp-files-azure-confirm-resource-group-deletion.png)
# [PowerShell](#tab/azure-powershell)
azure-netapp-files Azure Netapp Files Resize Capacity Pools Or Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resize-capacity-pools-or-volumes.md
Title: Resize the capacity pool or a volume for Azure NetApp Files | Microsoft Docs
+ Title: Resize the capacity pool or a volume for Azure NetApp Files | Microsoft Docs
description: Learn how to change the size of a capacity pool or a volume. Resizing the capacity pool changes the purchased Azure NetApp Files capacity. documentationcenter: ''
na Previously updated : 12/19/2022 Last updated : 02/21/2023 # Resize a capacity pool or a volume
For information about monitoring a volume's capacity, see [Monitor the capacit
## Considerations

* Volume quotas are indexed against `maxfiles` limits. Once a volume has surpassed a `maxfiles` limit, you cannot reduce the volume size below the quota that corresponds to that `maxfiles` limit. For more information and specific limits, see [`maxfiles` limits](azure-netapp-files-resource-limits.md#maxfiles-limits-).
+* Capacity pools with Basic network features have a minimum size of 4 TiB. For capacity pools with Standard network features, the minimum size is 2 TiB. For more information, see [Resource limits](azure-netapp-files-resource-limits.md).
## Resize the capacity pool using the Azure portal
-You can change the capacity pool size in 1-TiB increments or decrements. However, the capacity pool size cannot be smaller than the sum of the capacity of the volumes hosted in the pool, with a minimum of 4TiB. Resizing the capacity pool changes the purchased Azure NetApp Files capacity.
+You can change the capacity pool size in 1-TiB increments or decrements. However, the capacity pool size cannot be smaller than the sum of the capacity of the volumes hosted in the pool.
+
+Resizing the capacity pool changes the purchased Azure NetApp Files capacity.
1. From the NetApp Account view, go to **Capacity pools**, and select the capacity pool that you want to resize.
2. Right-click the capacity pool name or select the "…" icon at the end of the capacity pool row to display the context menu. Select **Resize**.
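
If you resize through the REST API instead of the portal, the request is a PATCH on the capacity pool resource, and the body needs only the new size in bytes. A sketch, with 6 TiB shown as an assumed target size:

```json
{
  "properties": {
    "size": 6597069766656
  }
}
```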
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
na Previously updated : 02/02/2023 Last updated : 02/21/2023 # Resource limits for Azure NetApp Files
The following table describes resource limits for Azure NetApp Files:
| Number of volumes per capacity pool | 500 | Yes |
| Number of snapshots per volume | 255 | No |
| Number of IPs in a VNet (including immediately peered VNets) accessing volumes in an Azure NetApp Files hosting VNet | <ul><li>**Basic**: 1000</li><li>**Standard**: [Same standard limits as VMs](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits)</li></ul> | No |
-| Minimum size of a single capacity pool | 4 TiB | No |
+| Minimum size of a single capacity pool | 2 TiB* | No |
| Maximum size of a single capacity pool | 500 TiB | No |
| Minimum size of a single volume | 100 GiB | No |
| Maximum size of a single volume | 100 TiB | No |
The following table describes resource limits for Azure NetApp Files:
| Maximum number of volumes that can be backed up per subscription | 5 | Y |
| Maximum number of manual backups per volume per day | 5 | Y |
+\* [!INCLUDE [Limitations for capacity pool minimum of 2 TiB](includes/2-tib-capacity-pool.md)]
+ For more information, see [Capacity management FAQs](faq-capacity-management.md). For limits and constraints related to Azure NetApp Files network features, see [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md#considerations).
azure-netapp-files Azure Netapp Files Set Up Capacity Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-set-up-capacity-pool.md
na Previously updated : 11/4/2021 Last updated : 02/21/2023 # Create a capacity pool for Azure NetApp Files
-Creating a capacity pool enables you to create volumes within it.
+Creating a capacity pool enables you to create volumes within it.
## Before you begin
-You must have already created a NetApp account.
-
-[Create a NetApp account](azure-netapp-files-create-netapp-account.md)
+You must have already [created a NetApp account](azure-netapp-files-create-netapp-account.md).
## Steps
You must have already created a NetApp account.
![Navigate to capacity pool](../media/azure-netapp-files/azure-netapp-files-navigate-to-capacity-pool.png)
-2. Click **+ Add pools** to create a new capacity pool.
+2. Select **+ Add pools** to create a new capacity pool.
   The New Capacity Pool window appears.

3. Provide the following information for the new capacity pool:
You must have already created a NetApp account.
* **Size** Specify the size of the capacity pool that you are purchasing.
- The minimum capacity pool size is 4 TiB. You can change the size of a capacity pool in 1-TiB increments.
+ The minimum capacity pool size is 2 TiB. You can change the size of a capacity pool in 1-TiB increments.
+
+ >[!NOTE]
+ >[!INCLUDE [Limitations for capacity pool minimum of 2 TiB](includes/2-tib-capacity-pool.md)]
* **QoS** Specify whether the capacity pool should use the **Manual** or **Auto** QoS type.
You must have already created a NetApp account.
   > [!IMPORTANT]
   > Setting **QoS type** to **Manual** is permanent. You cannot convert a manual QoS capacity pool to use auto QoS. However, you can convert an auto QoS capacity pool to use manual QoS. See [Change a capacity pool to use manual QoS](manage-manual-qos-capacity-pool.md#change-to-qos).
- ![New capacity pool](../media/azure-netapp-files/azure-netapp-files-new-capacity-pool.png)
+ :::image type="content" source="../media/azure-netapp-files/azure-netapp-files-new-capacity-pool.png" alt-text="Screenshot of new capacity pool options.":::
-4. Click **Create**.
+4. Select **Create**.
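
For reference, the pool created in these steps corresponds to an ARM template resource roughly like the following sketch; the API version is an assumption, and `size` is expressed in bytes (4 TiB here):

```json
{
  "type": "Microsoft.NetApp/netAppAccounts/capacityPools",
  "apiVersion": "2022-05-01",
  "name": "myaccount1/mypool1",
  "location": "eastus",
  "properties": {
    "serviceLevel": "Premium",
    "size": 4398046511104,
    "qosType": "Auto"
  }
}
```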
## Next steps
azure-netapp-files Azure Netapp Files Understand Storage Hierarchy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-understand-storage-hierarchy.md
na Previously updated : 06/14/2021 Last updated : 02/21/2023 # Storage hierarchy of Azure NetApp Files
Before creating a volume in Azure NetApp Files, you must purchase and set up a pool for provisioned capacity. To set up a capacity pool, you must have a NetApp account. Understanding the storage hierarchy helps you set up and manage your Azure NetApp Files resources.

> [!IMPORTANT]
-> Azure NetApp Files currently does not support resource migration between subscriptions.
+> Azure NetApp Files currently doesn't support resource migration between subscriptions.
+
+## <a name="conceptual_diagram_of_storage_hierarchy"></a>Conceptual diagram of storage hierarchy
+The following example shows the relationships of the Azure subscription, NetApp accounts, capacity pools, and volumes.
+
## <a name="azure_netapp_files_account"></a>NetApp accounts

- A NetApp account serves as an administrative grouping of the constituent capacity pools.
-- A NetApp account is not the same as your general Azure storage account.
+- A NetApp account isn't the same as your general Azure storage account.
- A NetApp account is regional in scope.
- You can have multiple NetApp accounts in a region, but each NetApp account is tied to only a single region.
Understanding how capacity pools work helps you select the right capacity pool t
### General rules of capacity pools

- A capacity pool is measured by its provisioned capacity.
- See [QoS types](#qos_types) for additional information.
+ For more information, see [QoS types](#qos_types).
- The capacity is provisioned by the fixed SKUs that you purchased (for example, a 4-TiB capacity).
- A capacity pool can have only one service level.
- Each capacity pool can belong to only one NetApp account. However, you can have multiple capacity pools within a NetApp account.
-- A capacity pool cannot be moved across NetApp accounts.
- For example, in the [Conceptual diagram of storage hierarchy](#conceptual_diagram_of_storage_hierarchy) below, Capacity Pool 1 cannot be moved from US East NetApp account to US West 2 NetApp account.
-- A capacity pool cannot be deleted until all volumes within the capacity pool have been deleted.
+- You can't move a capacity pool across NetApp accounts.
+ For example, in the [Conceptual diagram of storage hierarchy](#conceptual_diagram_of_storage_hierarchy), you can't move Capacity Pool 1 from the US East NetApp account to the US West 2 NetApp account.
+- You can't delete a capacity pool until you delete all volumes within the capacity pool.
### <a name="qos_types"></a>Quality of Service (QoS) types for capacity pools
-The QoS type is an attribute of a capacity pool. Azure NetApp Files provides two QoS types of capacity pools -- *auto (default)* and *manual*.
+The QoS type is an attribute of a capacity pool. Azure NetApp Files provides two QoS types of capacity pools: *auto (default)* and *manual*.
#### *Automatic (or auto)* QoS type
When you create a capacity pool, the default QoS type is auto.
In an auto QoS capacity pool, throughput is assigned automatically to the volumes in the pool, proportional to the size quota assigned to the volumes.
-The maximum throughput allocated to a volume depends on the service level of the capacity pool and the size quota of the volume. See [Service levels for Azure NetApp Files](azure-netapp-files-service-levels.md) for example calculation.
+The maximum throughput allocated to a volume depends on the service level of the capacity pool and the size quota of the volume. See [Service levels for Azure NetApp Files](azure-netapp-files-service-levels.md) for an example calculation.
For performance considerations about QoS types, see [Performance considerations for Azure NetApp Files](azure-netapp-files-performance-considerations.md). #### *Manual* QoS type
-When you [create a capacity pool](azure-netapp-files-set-up-capacity-pool.md), you can specify for the capacity pool to use the manual QoS type. You can also [change an existing capacity pool](manage-manual-qos-capacity-pool.md#change-to-qos) to use the manual QoS type. *Setting the capacity type to manual QoS is a permanent change.* You cannot convert a manual QoS type capacity tool to an auto QoS capacity pool.
+When you [create a capacity pool](azure-netapp-files-set-up-capacity-pool.md), you can specify for the capacity pool to use the manual QoS type. You can also [change an existing capacity pool](manage-manual-qos-capacity-pool.md#change-to-qos) to use the manual QoS type. *Setting the capacity type to manual QoS is a permanent change.* You can't convert a manual QoS type capacity pool to an auto QoS capacity pool.
-In a manual QoS capacity pool, you can assign the capacity and throughput for a volume independently. For minimum and maximum throughput levels, see [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md#resource-limits). The total throughput of all volumes created with a manual QoS capacity pool is limited by the total throughput of the pool. It is determined by the combination of the pool size and the service-level throughput. For instance, a 4-TiB capacity pool with the Ultra service level has a total throughput capacity of 512 MiB/s (4 TiB x 128 MiB/s/TiB) available for the volumes.
+In a manual QoS capacity pool, you can assign the capacity and throughput for a volume independently. For minimum and maximum throughput levels, see [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md#resource-limits). The total throughput of all volumes created with a manual QoS capacity pool is limited by the total throughput of the pool. It's determined by the combination of the pool size and the service-level throughput. For instance, a 4-TiB capacity pool with the Ultra service level has a total throughput capacity of 512 MiB/s (4 TiB x 128 MiB/s/TiB) available for the volumes.
##### Example of using manual QoS
When you use a manual QoS capacity pool with, for example, an SAP HANA system, a
- A volume's throughput consumption counts against its pool's available throughput. See [Manual QoS type](#manual-qos-type).
- Each volume belongs to only one pool, but a pool can contain multiple volumes.
-## <a name="conceptual_diagram_of_storage_hierarchy"></a>Conceptual diagram of storage hierarchy
-The following example shows the relationships of the Azure subscription, NetApp accounts, capacity pools, and volumes.
-
-![Conceptual diagram of storage hierarchy](../media/azure-netapp-files/azure-netapp-files-storage-hierarchy.png)
## Next steps

- [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md)
azure-netapp-files Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md
+
+ Title: Configure customer-managed keys for Azure NetApp Files volume encryption | Microsoft Docs
+description: Describes how to configure customer-managed keys for Azure NetApp Files volume encryption.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
++ Last updated : 02/21/2023+++
+# Configure customer-managed keys for Azure NetApp Files volume encryption
+
+Customer-managed keys in Azure NetApp Files volume encryption enable you to use your own keys rather than a Microsoft-managed key when creating a new volume. With customer-managed keys, you can fully manage the relationship between a key's life cycle, key usage permissions, and auditing operations on keys.
+
+## Considerations
+
+> [!IMPORTANT]
+> Customer-managed keys for Azure NetApp Files volume encryption is currently in preview. You need to submit a waitlist request for accessing the feature through the **[Customer-managed keys for Azure NetApp Files volume encryption](https://aka.ms/anfcmkpreviewsignup)** page. Wait for an official confirmation email from the Azure NetApp Files team before using customer-managed keys.
+
+* Customer-managed keys can only be configured on new volumes. You can't migrate existing volumes to customer-managed key encryption.
+* To create a volume using customer-managed keys, you must select the *Standard* network features. You can't use customer-managed keys with volumes configured using Basic network features. Follow the instructions in [Set the Network Features option](configure-network-features.md#set-the-network-features-option) on the volume creation page.
+* Switching from user-assigned identity to the system-assigned identity isn't currently supported.
+* MSI Automatic certificate renewal isn't currently supported.
+* The MSI certificate has a lifetime of 90 days. It becomes eligible for renewal after 46 days. **After 90 days, the certificate is no longer valid and the customer-managed key volumes under the NetApp account will go offline.**
+ * To renew, you need to call the NetApp account operation `renewCredentials` if eligible for renewal. If it's not eligible, an error message will communicate the date of eligibility.
+ * Version 2.42 or later of the Azure CLI supports running the `renewCredentials` operation with the [az netappfiles account command](/cli/azure/netappfiles/account#az-netappfiles-account-renew-credentials). For example:
+
+ `az netappfiles account renew-credentials --account-name myaccount --resource-group myresourcegroup`
+
+ * If the account isn't eligible for MSI certificate renewal, an error will communicate the date and time when the account is eligible. It's recommended that you run this operation periodically (for example, daily) to prevent the certificate from expiring and the customer-managed key volumes from going offline.
+
+<!--
+ * You will need to call the operation via ARM REST API. Submit a POST request to `/subscriptions/<16 digit subscription ID>/resourceGroups/<resource_group_name>/providers/Microsoft.NetApp/netAppAccounts/<account name>/renewCredentials?api-version=2022-04`.
+ This operation is available with the Azure CLI, PowerShell, and SDK beginning with the `2022-05` versions.
+ * If the certificate is more than 46 days old, you can call proxy Azure Resource Manager (ARM) operation via REST API to renew the certificate. For example:
+ ```rest
+ /{accountResourceId}/renewCredentials?api-version=2022-01 ΓÇô example /subscriptions/<16 digit subscription ID>/resourceGroups/<resource group name>/providers/Microsoft.NetApp/netAppAccounts/<account name>/renewCredentials?api-version=2022-01
+ ``` -->
+
+* Applying Azure network security groups on the private link subnet to Azure Key Vault isn't supported for Azure NetApp Files customer-managed keys. Network security groups don't affect connectivity to Private Link unless `Private endpoint network policy` is enabled on the subnet. It's recommended to keep this option disabled.
+* If Azure NetApp Files fails to create a customer-managed key volume, error messages are displayed. Refer to the [Error messages and troubleshooting](#error-messages-and-troubleshooting) section for more information.
+
+## Supported regions
+
+Azure NetApp Files customer-managed keys is supported for the following regions:
+
+* East Asia
+* East US 2
+* West Europe
+
+## Requirements
+
+Before creating your first customer-managed key volume, you must have set up:
+* An [Azure Key Vault](../key-vault/general/overview.md), containing at least one key.
+ * The key vault must have soft delete and purge protection enabled.
+ * The key must be of type RSA.
+* The key vault must have an [Azure Private Endpoint](../private-link/private-endpoint-overview.md).
+ * The private endpoint must reside in a different subnet than the one delegated to Azure NetApp Files. The subnet must be in the same VNet as the one delegated to Azure NetApp Files.
+
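The key vault requirements above map to concrete resource properties. A minimal sketch of the relevant part of a `Microsoft.KeyVault/vaults` ARM resource, with the API version and names as assumptions:

```json
{
  "type": "Microsoft.KeyVault/vaults",
  "apiVersion": "2022-07-01",
  "name": "my-anf-keyvault",
  "location": "eastus",
  "properties": {
    "tenantId": "<tenant-id>",
    "sku": { "family": "A", "name": "standard" },
    "enableSoftDelete": true,
    "enablePurgeProtection": true
  }
}
```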
+For more information about Azure Key Vault and Azure Private Endpoint, refer to:
+* [Quickstart: Create a key vault ](../key-vault/general/quick-create-portal.md)
+* [Create or import a key into the vault](../key-vault/keys/quick-create-portal.md)
+* [Create a private endpoint](../private-link/create-private-endpoint-portal.md)
+* [More about keys and supported key types](../key-vault/keys/about-keys.md)
+* [Network security groups](../virtual-network/network-security-groups-overview.md)
+* [Manage network policies for private endpoints](../private-link/disable-private-endpoint-network-policy.md)
+
+## Configure a NetApp account to use customer-managed keys
+
+1. In the Azure portal, under Azure NetApp Files, select **Encryption**.
+
+ The **Encryption** page enables you to manage encryption settings for your NetApp account. It includes an option to let you set your NetApp account to use your own encryption key, which is stored in [Azure Key Vault](../key-vault/general/basic-concepts.md). This setting provides a system-assigned identity to the NetApp account, and it adds an access policy for the identity with the required key permissions.
+
+ :::image type="content" source="../media/azure-netapp-files/encryption-menu.png" alt-text="Screenshot of the encryption menu." lightbox="../media/azure-netapp-files/encryption-menu.png":::
+
+1. When you set your NetApp account to use a customer-managed key, you have two ways to specify the Key URI:
+ * The **Select from key vault** option allows you to select a key vault and a key.
+ :::image type="content" source="../media/azure-netapp-files/select-key.png" alt-text="Screenshot of the select a key interface." lightbox="../media/azure-netapp-files/select-key.png":::
+
+ * The **Enter key URI** option allows you to enter the key URI manually.
+ :::image type="content" source="../media/azure-netapp-files/key-enter-uri.png" alt-text="Screenshot of the encryption menu showing key URI field." lightbox="../media/azure-netapp-files/key-enter-uri.png":::
+
+1. Select the identity type that you want to use for authentication to the Azure Key Vault. If your Azure Key Vault is configured to use Vault access policy as its permission model, then both options are available. Otherwise, only the user-assigned option is available.
+ * If you choose **System-assigned**, select the **Save** button. The Azure portal configures the NetApp account automatically with the following process: A system-assigned identity is added to your NetApp account, and an access policy is created on your Azure Key Vault with the key permissions Get, Encrypt, and Decrypt.
+ * If you choose **User-assigned**, you must select an identity to use. Choosing **Select an identity** opens a context pane prompting you to select a user-assigned managed identity.
+
+ :::image type="content" source="../media/azure-netapp-files/encryption-user-assigned.png" alt-text="Screenshot of user-assigned submenu." lightbox="../media/azure-netapp-files/encryption-user-assigned.png":::
+
+ If you've configured your Azure Key Vault to use Vault access policy, the Azure portal configures the NetApp account automatically with the following process: The user-assigned identity you select is added to your NetApp account. An access policy is created on your Azure Key Vault with the key permissions Get, Encrypt, and Decrypt.
+
+ If you've configured your Azure Key Vault to use Azure role-based access control, then you need to make sure the selected user-assigned identity has a role assignment on the key vault with permissions for data actions:
+ * `Microsoft.KeyVault/vaults/keys/read`
+ * `Microsoft.KeyVault/vaults/keys/encrypt/action`
+ * `Microsoft.KeyVault/vaults/keys/decrypt/action`
+ The user-assigned identity you select is added to your NetApp account. Due to the customizable nature of role-based access control (RBAC), the Azure portal doesn't configure access to the key vault. See [Provide access to Key Vault keys, certificates, and secrets with an Azure role-based access control](../key-vault/general/rbac-guide.md) for details on configuring Azure Key Vault.
+
+1. After selecting the **Save** button, you'll receive a notification communicating the status of the operation. If the operation wasn't successful, an error message displays. Refer to [error messages and troubleshooting](#error-messages-and-troubleshooting) for assistance in resolving the error.
+
+## Use role-based access control
+
+You can use an Azure Key Vault that is configured to use Azure role-based access control. To configure customer-managed keys through Azure portal, you need to provide a user-assigned identity.
+
+1. In your Azure account, navigate to the **Access policies** menu.
+1. To create an access policy, under **Permission model**, select **Azure role-based access-control**.
+ :::image type="content" source="../media/azure-netapp-files/rbac-permission.png" alt-text="Screenshot of access configuration menu." lightbox="../media/azure-netapp-files/rbac-permission.png":::
+1. When creating the user-assigned role, there are three permissions required for customer-managed keys:
+ 1. `Microsoft.KeyVault/vaults/keys/read`
+ 1. `Microsoft.KeyVault/vaults/keys/encrypt/action`
+ 1. `Microsoft.KeyVault/vaults/keys/decrypt/action`
+
+ Although there are pre-defined roles with these privileges, it is recommended that you create a custom role with the required permissions. See [Azure custom roles](../role-based-access-control/custom-roles.md) for details.
+
+ ```json
+ {
+ "id": "/subscriptions/<subscription>/Microsoft.Authorization/roleDefinitions/<roleDefinitionsID>",
+ "properties": {
+ "roleName": "NetApp account",
+ "description": "Has the necessary permissions for customer-managed key encryption: get key, encrypt and decrypt",
+ "assignableScopes": [
+ "/subscriptions/<subscriptionID>/resourceGroups/<resourceGroup>"
+ ],
+ "permissions": [
+ {
+ "actions": [],
+ "notActions": [],
+ "dataActions": [
+ "Microsoft.KeyVault/vaults/keys/read",
+ "Microsoft.KeyVault/vaults/keys/encrypt/action",
+ "Microsoft.KeyVault/vaults/keys/decrypt/action"
+ ],
+ "notDataActions": []
+ }
+ ]
+ }
+ }
+ ```
+
+1. Once the custom role is created and available to use with the key vault, you can add a role assignment for your user-assigned identity.
+
+ :::image type="content" source="../media/azure-netapp-files/rbac-review-assign.png" alt-text="Screenshot of RBAC review and assign menu." lightbox="../media/azure-netapp-files/rbac-review-assign.png":::
+
+## Create an Azure NetApp Files volume using customer-managed keys
+
+1. From Azure NetApp Files, select **Volumes** and then **+ Add volume**.
+1. Follow the instructions in [Configure network features for an Azure NetApp Files volume](configure-network-features.md):
+ * [Set the Network Features option in volume creation page](configure-network-features.md#set-the-network-features-option).
+ * The network security group for the volume's delegated subnet must allow incoming traffic from NetApp's storage VM.
+1. For a NetApp account configured to use a customer-managed key, the Create Volume page includes an **Encryption Key Source** option.
+
+ To encrypt the volume with your key, select **Customer-Managed Key** in the **Encryption Key Source** dropdown menu.
+
+ When you create a volume using a customer-managed key, you must also select **Standard** for the **Network features** option. Basic network features are not supported.
+
+ You must select a key vault private endpoint as well. The dropdown menu displays private endpoints in the selected virtual network. If there's no private endpoint for your key vault in the selected virtual network, the dropdown is empty, and you won't be able to proceed. If so, see [Azure Private Endpoint](../private-link/private-endpoint-overview.md).
+
+ :::image type="content" source="../media/azure-netapp-files/keys-create-volume.png" alt-text="Screenshot of create volume menu." lightbox="../media/azure-netapp-files/keys-create-volume.png":::
+
+1. Continue to complete the volume creation process. Refer to:
+ * [Create an NFS volume](azure-netapp-files-create-volumes.md)
+ * [Create an SMB volume](azure-netapp-files-create-volumes-smb.md)
+ * [Create a dual-protocol volume](create-volumes-dual-protocol.md)
+
+## Rekey all volumes under a NetApp account
+
+If you've already configured your NetApp account for customer-managed keys and have one or more volumes encrypted with customer-managed keys, you can change the key that's used to encrypt all volumes under the NetApp account. You can select any key that's in the same key vault; changing key vaults isn't supported.
+
+1. Under your NetApp account, navigate to the **Encryption** menu. Under the **Current key** input field, select the **Rekey** link.
+
+1. In the **Rekey** menu, select one of the available keys from the dropdown menu. The chosen key must be different from the current key.
+
+1. Select **OK** to save. The rekey operation may take several minutes.
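+
+As a rough sketch only, the rekey can also be scripted with `Invoke-AzRestMethod` by patching the NetApp account's encryption properties. The API version and the `keyVaultProperties` payload shape below are assumptions based on the Microsoft.NetApp REST API; verify them against the current API reference before relying on this.
+
+```azurepowershell
+# Assumed payload shape; check the Microsoft.NetApp REST reference first.
+$body = @{
+    properties = @{
+        encryption = @{
+            keySource = "Microsoft.KeyVault"
+            keyVaultProperties = @{
+                keyVaultUri = "https://mykeyvault.vault.azure.net/"
+                keyName     = "myNewKey"  # The replacement key, in the same vault.
+            }
+        }
+    }
+} | ConvertTo-Json -Depth 10
+
+Invoke-AzRestMethod -Method PATCH `
+    -Path "/subscriptions/<subscriptionID>/resourceGroups/<resourceGroup>/providers/Microsoft.NetApp/netAppAccounts/<accountName>?api-version=2022-11-01" `
+    -Payload $body
+```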
+
+## Error messages and troubleshooting
+
+This section lists error messages and possible resolutions when Azure NetApp Files fails to configure customer-managed key encryption or create a volume using a customer-managed key.
+
+### Errors configuring customer-managed key encryption on a NetApp account
+
+| Error Condition | Resolution |
+| -- | -- |
+| `The operation failed because the specified key vault key was not found` | When entering key URI manually, ensure that the URI is correct. |
+| `Azure Key Vault key is not a valid RSA key` | Ensure that the selected key is of type RSA. |
+| `Azure Key Vault key is not enabled` | Ensure that the selected key is enabled. |
+| `Azure Key Vault key is expired` | Ensure that the selected key isn't expired. |
+| `Azure Key Vault key has not been activated` | Ensure that the selected key is active. |
+| `Key Vault URI is invalid` | When entering key URI manually, ensure that the URI is correct. |
+| `Azure Key Vault is not recoverable. Make sure that Soft-delete and Purge protection are both enabled on the Azure Key Vault` | Update the key vault recovery level to: <br> `"Recoverable/Recoverable+ProtectedSubscription/CustomizedRecoverable/CustomizedRecoverable+ProtectedSubscription"` |
+| `Account must be in the same region as the Vault` | Ensure the key vault is in the same region as the NetApp account. |
+
+### Errors creating a volume encrypted with customer-managed keys
+
+| Error Condition | Resolution |
+| -- | -- |
+| `Volume cannot be encrypted with Microsoft.KeyVault, NetAppAccount has not been configured with KeyVault encryption` | Your NetApp account doesn't have customer-managed key encryption enabled. Configure the NetApp account to use customer-managed key. |
+| `EncryptionKeySource cannot be changed` | No resolution. The `EncryptionKeySource` property of a volume can't be changed. |
+| `Unable to use the configured encryption key, please check if key is active` | Check that: <br> - All access policies on the key vault are correct: Get, Encrypt, Decrypt. <br> - A private endpoint for the key vault exists. <br> - There's a Virtual Network NAT in the VNet, with the delegated Azure NetApp Files subnet enabled. |
+
+## Next steps
+
+* [Azure NetApp Files API](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/netapp/resource-manager/Microsoft.NetApp/stable/2019-11-01)
azure-netapp-files Cross Zone Replication Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-zone-replication-introduction.md
The preview of cross-zone replication is available in the following regions:
* Korea Central * North Europe * Norway East
-* Norway West
* South Africa North * Southeast Asia * South Central US
azure-netapp-files Faq Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-security.md
Previously updated : 04/08/2021 Last updated : 02/21/2023 # Security FAQs for Azure NetApp Files
This article answers frequently asked questions (FAQs) about Azure NetApp Files
## Can the network traffic between the Azure VM and the storage be encrypted?
-Azure NetApp Files data traffic is inherently secure by design, as it does not provide a public endpoint and data traffic stays within customer-owned VNet. Data-in-flight is not encrypted by default. However, data traffic from an Azure VM (running an NFS or SMB client) to Azure NetApp Files is as secure as any other Azure-VM-to-VM traffic.
+Azure NetApp Files data traffic is inherently secure by design, as it doesn't provide a public endpoint, and data traffic stays within customer-owned VNet. Data-in-flight isn't encrypted by default. However, data traffic from an Azure VM (running an NFS or SMB client) to Azure NetApp Files is as secure as any other Azure-VM-to-VM traffic.
-NFSv3 protocol does not provide support for encryption, so this data-in-flight cannot be encrypted. However, NFSv4.1 and SMB3 data-in-flight encryption can optionally be enabled. Data traffic between NFSv4.1 clients and Azure NetApp Files volumes can be encrypted using Kerberos with AES-256 encryption. See [Configure NFSv4.1 Kerberos encryption for Azure NetApp Files](configure-kerberos-encryption.md) for details. Data traffic between SMB3 clients and Azure NetApp Files volumes can be encrypted using the AES-CCM algorithm on SMB 3.0, and the AES-GCM algorithm on SMB 3.1.1 connections. See [Create an SMB volume for Azure NetApp Files](azure-netapp-files-create-volumes-smb.md) for details.
+NFSv3 protocol doesn't provide support for encryption, so this data-in-flight can't be encrypted. However, NFSv4.1 and SMB3 data-in-flight encryption can optionally be enabled. Data traffic between NFSv4.1 clients and Azure NetApp Files volumes can be encrypted using Kerberos with AES-256 encryption. See [Configure NFSv4.1 Kerberos encryption for Azure NetApp Files](configure-kerberos-encryption.md) for details. Data traffic between SMB3 clients and Azure NetApp Files volumes can be encrypted using the AES-CCM algorithm on SMB 3.0, and the AES-GCM algorithm on SMB 3.1.1 connections. See [Create an SMB volume for Azure NetApp Files](azure-netapp-files-create-volumes-smb.md) for details.
## Can the storage be encrypted at rest?
Azure NetApp Files cross-region replication uses TLS 1.2 AES-256 GCM encryption
## How are encryption keys managed?
-Key management for Azure NetApp Files is handled by the service. A unique XTS-AES-256 data encryption key is generated for each volume. An encryption key hierarchy is used to encrypt and protect all volume keys. These encryption keys are never displayed or reported in an unencrypted format. Encryption keys are deleted immediately when a volume is deleted.
+Key management for Azure NetApp Files is handled by the service. A unique XTS-AES-256 data encryption key is generated for each volume. An encryption key hierarchy is used to encrypt and protect all volume keys. These encryption keys are never displayed or reported in an unencrypted format. When you delete a volume, Azure NetApp Files immediately deletes the volume's encryption keys.
-Support for customer-managed keys (Bring Your Own Key) using Azure Dedicated HSM is available on a controlled basis in the East US, South Central US, West US 2, and US Gov Virginia regions. You can request access at [anffeedback@microsoft.com](mailto:anffeedback@microsoft.com). As capacity becomes available, requests will be approved.
+Customer-managed keys (Bring Your Own Key) using Azure Dedicated HSM are supported on a controlled basis. Support is currently available in the East US, South Central US, West US 2, and US Gov Virginia regions. You can request access at [anffeedback@microsoft.com](mailto:anffeedback@microsoft.com). As capacity becomes available, requests will be approved.
+
+[Customer-managed keys](configure-customer-managed-keys.md) are available with limited regional support.
## Can I configure the NFS export policy rules to control access to the Azure NetApp Files service mount target? Yes, you can configure up to five rules in a single NFS export policy.
-## Can I use Azure RBAC with Azure NetApp Files?
+## Can I use Azure role-based access control (RBAC) with Azure NetApp Files?
Yes, Azure NetApp Files supports Azure RBAC features. Along with the built-in Azure roles, you can [create custom roles](../role-based-access-control/custom-roles.md) for Azure NetApp Files.
For the complete list of API operations, see [Azure NetApp Files REST API](/rest
Yes, you can create [custom Azure policies](../governance/policy/tutorials/create-custom-policy-definition.md).
-However, you cannot create Azure policies (custom naming policies) on the Azure NetApp Files interface. See [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md#considerations).
+However, you can't create Azure policies (custom naming policies) on the Azure NetApp Files interface. See [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md#considerations).
## When I delete an Azure NetApp Files volume, is the data deleted safely?
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 12/21/2022 Last updated : 02/21/2023 # What's new in Azure NetApp Files Azure NetApp Files is updated regularly. This article provides a summary about the latest new features and enhancements.
+## February 2023
+
+* [Customer-managed keys](configure-customer-managed-keys.md) (Preview)
+
+ Azure NetApp Files volumes now support encryption with customer-managed keys and Azure Key Vault to enable an extra layer of security for data at rest.
+
+ Data encryption with customer-managed keys for Azure NetApp Files allows you to bring your own key for data encryption at rest. You can use this feature to implement separation of duties for managing keys and data. Additionally, you can centrally manage and organize keys using Azure Key Vault. With customer-managed encryption, you are in full control of, and responsible for, a key's lifecycle, key usage permissions, and auditing operations on keys.
+
+* [Capacity pool enhancement](azure-netapp-files-set-up-capacity-pool.md) (Preview)
+
+ Azure NetApp Files now supports a lower limit of 2 TiB for capacity pool sizing with Standard network features.
+
+    You can now choose a minimum size of 2 TiB when creating a capacity pool. Capacity pools smaller than 4 TiB in size can only be used with volumes using standard network features. This enhancement provides a more cost-effective solution for running workloads such as SAP shared files and VDI that require lower capacity pool sizes for their capacity and performance needs. When you need less than 2-4 TiB of capacity with proportional performance, this enhancement allows you to start with a 2-TiB minimum pool size and grow in 1-TiB increments. For capacities less than 3 TiB, this enhancement saves cost by allowing you to re-evaluate volume planning to take advantage of the savings of smaller capacity pools. This feature is supported in all [regions with Standard network features](azure-netapp-files-network-topologies.md#supported-regions).
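+
+    As an illustration, here's a hedged Azure PowerShell sketch of creating a 2-TiB pool; `-PoolSize` is specified in bytes, and the resource names are placeholders:
+
+    ```azurepowershell
+    # 2 TiB = 2 * 1024^4 = 2199023255552 bytes.
+    New-AzNetAppFilesPool -ResourceGroupName "myRG" `
+        -AccountName "myNetAppAccount" `
+        -Location "eastus" `
+        -Name "smallpool" `
+        -PoolSize 2199023255552 `
+        -ServiceLevel "Standard"
+    ```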
+ ## December 2022 * [Azure Application Consistent Snapshot tool (AzAcSnap) 7](azacsnap-introduction.md)
azure-portal Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/policy-reference.md
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/policy-reference.md
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/policy-reference.md
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
azure-resource-manager Common Deployment Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/common-deployment-errors.md
Title: Troubleshoot common Azure deployment errors
description: Troubleshoot common Azure deployment errors for resources that are deployed with Bicep files or Azure Resource Manager templates (ARM templates). tags: top-support-issue Previously updated : 01/03/2023 Last updated : 02/21/2023
If your error code isn't listed, submit a GitHub issue. On the right side of the
| SubnetIsFull | There aren't enough available addresses in the subnet to deploy resources. You can release addresses from the subnet, use a different subnet, or create a new subnet. | [Manage subnets](../../virtual-network/virtual-network-manage-subnet.md) and [Virtual network FAQ](../../virtual-network/virtual-networks-faq.md#are-there-any-restrictions-on-using-ip-addresses-within-these-subnets) <br><br> [Private IP addresses](../../virtual-network/ip-services/private-ip-addresses.md) | | SubscriptionNotFound | A specified subscription for deployment can't be accessed. It could be the subscription ID is wrong, the user deploying the template doesn't have adequate permissions to deploy to the subscription, or the subscription ID is in the wrong format. When using ARM template nested deployments to deploy across scopes, provide the subscription's GUID. | [ARM template deploy across scopes](../templates/deploy-to-resource-group.md) <br><br> [Bicep file deploy across scopes](../bicep/deploy-to-resource-group.md) | | SubscriptionNotRegistered | When a resource is deployed, the resource provider must be registered for your subscription. When you use an Azure Resource Manager template for deployment, the resource provider is automatically registered in the subscription. Sometimes, the automatic registration doesn't complete in time. To avoid this intermittent error, register the resource provider before deployment. | [Resolve registration](error-register-resource-provider.md) |
+| SubscriptionRequestsThrottled | Azure Resource Manager throttles requests at the subscription level or tenant level. Resource providers like `Microsoft.Compute` also throttle requests specific to its operations. <br><br> When a limit is reached, you get a message and a value with the amount of time you should wait before sending a new request. For example: `Number of requests for subscription '<subscription-id-guid>' and operation '<resource provider>' exceeded the backend storage limit. Please try again after '6' seconds.` <br><br> An HTTP response returns a message like `HTTP status code 429 Too Many Requests` with a `Retry-After` value that specifies the number of seconds to wait before you send another request. | [Throttling Resource Manager requests](../management/request-limits-and-throttling.md) <br><br> [Troubleshooting API throttling errors - virtual machines](/troubleshoot/azure/virtual-machines/troubleshooting-throttling-errors) <br><br> [Azure Kubernetes Service throttling](/troubleshoot/azure/azure-kubernetes/error-code-subscriptionrequeststhrottled) |
| TemplateResourceCircularDependency | Remove unnecessary dependencies. | [Resolve circular dependencies](error-invalid-template.md#circular-dependency) | | TooManyTargetResourceGroups | Reduce number of resource groups for a single deployment. | [ARM template deploy across scopes](../templates/deploy-to-resource-group.md) <br><br> [Bicep file deploy across scopes](../bicep/deploy-to-resource-group.md) |
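For scripted deployments, here's a minimal PowerShell retry sketch that honors the throttling response described above; `$uri` is a placeholder for the Azure Resource Manager path being called, and the fallback backoff policy is an assumption, not guidance from the service:

```azurepowershell
# Retry up to 5 times, honoring Retry-After on HTTP 429 responses.
$maxAttempts = 5
for ($attempt = 1; $attempt -le $maxAttempts; $attempt++) {
    $response = Invoke-AzRestMethod -Method GET -Path $uri
    if ($response.StatusCode -ne 429) { break }

    # Prefer the server-provided delay; fall back to exponential backoff.
    $retryAfter = $response.Headers.RetryAfter.Delta.TotalSeconds
    if (-not $retryAfter) { $retryAfter = [math]::Pow(2, $attempt) }
    Start-Sleep -Seconds $retryAfter
}
```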
azure-signalr Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/policy-reference.md
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
azure-signalr Server Graceful Shutdown https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/server-graceful-shutdown.md
In general, there will be four stages in a graceful shutdown process:
2. **Trigger `OnShutdown` hooks** You can register shutdown hooks for each hub you own in your server.
- They will be called with respect to the registered order right after we got an **FINACK** response from our Azure SignalR Service, which means this server has been set offline in the Azure SignalR Service.
+   They'll be called in the registered order right after we get a **FINACK** response from the Azure SignalR Service, which means this server has been set offline in the Azure SignalR Service.
- You can broadcast messages or do some cleaning jobs in this stage, once all shutdown hooks has been executed, we will proceed to the next stage.
+   You can broadcast messages or do cleanup jobs in this stage. Once all shutdown hooks have been executed, we'll proceed to the next stage.
3. **Wait until all client connections are finished**. Depending on the mode you choose, it could be:
In general, there will be four stages in a graceful shutdown process:
Azure SignalR Service will hold existing clients.
- You may have to design a way, like broadcast a closing message to all clients, and then let your clients to decide when to close/reconnect itself.
+   You may have to design a way, like broadcasting a closing message to all clients, and then let your clients decide when to close or reconnect.
Read [ChatSample](https://github.com/Azure/azure-signalr/tree/dev/samples/ChatSample) for sample usage, in which we broadcast an 'exit' message in the shutdown hook to trigger client close.
In general, there will be four stages in a graceful shutdown process:
Azure SignalR Service will try to reroute the client connection on this server to another valid server.
- In this scenario, `OnConnectedAsync` and `OnDisconnectedAsync` will be triggered on the new server and the old server respectively with an `IConnectionMigrationFeature` set in the `Context`, which can be used to identify if the client connection was being migrated-in or migrated-out. It could be useful especially for stateful scenarios.
+   In this scenario, `OnConnectedAsync` and `OnDisconnectedAsync` will be triggered on the new server and the old server respectively, with an `IConnectionMigrationFeature` set in the `Context`, which can be used to identify whether the client connection was being migrated in or migrated out. This feature can be especially useful for stateful scenarios.
The client connection will be immediately migrated after the current message has been delivered, which means the next message will be routed to the new server.
services.AddSignalR().AddAzureSignalR(option =>
### Configure `OnConnected` and `OnDisconnected` while setting graceful shutdown mode to `MigrateClients`
-We have introduced an "IConnectionMigrationFeature" to indicate if a connection was being migrated-in/out.
+We've introduced an `IConnectionMigrationFeature` to indicate whether a connection is being migrated in or out.
```csharp public class Chat : Hub {
azure-video-indexer Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/language-support.md
Previously updated : 01/06/2023 Last updated : 02/21/2023 # Language support in Azure Video Indexer
-This article provides a comprehensive list of language support by service features in Azure Video Indexer. For the list and definitions of all the features, see [Overview](video-indexer-overview.md).
-
-Some languages are supported only through the API (see [Get Supported Languages](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Supported-Languages)) and not through the Video Indexer website or widgets. To make sure a language is supported for search, transcription, or translation by the Azure Video Indexer website and widgets, see the [front end language
-support table](#language-support-in-front-end-experiences) further below.
-
-## API language support
+This article explains Video Indexer's language options and lists the supported languages for each one. It covers the language support for Video Indexer features, translation, language identification, customization, and the language settings of the Video Indexer website.
+
+## Supported languages per scenario
+
+This section explains the Video Indexer language options and has a table of the supported languages for each one.
+
+> [!IMPORTANT]
+> All of the languages listed support translation when indexing through the API.
+
+### Column explanations
+
+- **Supported source language** - The language spoken in the media file that's supported for transcription, translation, and search.
+- **Language identification** - Whether the language can be automatically detected by Video Indexer when language identification is used for indexing. To learn more, see [Use Azure Video Indexer to auto identify spoken languages](language-identification-model.md) and the **Language Identification** section.
+- **Customization (language model)** - Whether the language can be used when customizing language models in Video Indexer. To learn more, see [Customize a Language model in Azure Video Indexer](customize-language-model-overview.md).
+- **Website Translation** - Whether the language is supported for translation when using the [Azure Video Indexer website](https://aka.ms/vi-portal-link). Select the translated language in the language drop-down menu.
+
+    :::image type="content" source="media/language-support/website-translation.png" alt-text="Screenshot showing a menu with download, English, and views as menu items. A tooltip shown on mouseover of the English item says Translation is set to English." lightbox="media/language-support/website-translation.png":::
+
+ The following insights are translated:
+
+ - Transcript
+ - Keywords
+ - Topics
+ - Labels
+ - Frame patterns (Only to Hebrew as of now)
+
+ All other insights appear in English when using translation.
+
+- **Website Language** - Whether the language can be selected for use on the [Azure Video Indexer website](https://aka.ms/vi-portal-link). Select the **Settings icon** then select the language in the **Language settings** dropdown.
+
+    :::image type="content" source="media/language-support/website-language.jpg" alt-text="Screenshot showing a menu with user settings, all toggled on." lightbox="media/language-support/website-language.jpg":::
+
+| **Language** | **Code** | **Supported source language** | **Language identification** | **Customization (language model)** | **Website Translation** | **Website Language** |
+|:--|:--:|:--:|:--:|:--:|:--:|:--:|
+| Afrikaans | af-ZA | | | | ✔ | |
+| Arabic (Israel) | ar-IL | ✔ | | ✔ | | |
+| Arabic (Iraq) | ar-IQ | ✔ | ✔ | | | |
+| Arabic (Jordan) | ar-JO | ✔ | ✔ | ✔ | | |
+| Arabic (Kuwait) | ar-KW | ✔ | ✔ | ✔ | | |
+| Arabic (Lebanon) | ar-LB | ✔ | | ✔ | | |
+| Arabic (Oman) | ar-OM | ✔ | ✔ | ✔ | | |
+| Arabic (Palestinian Authority) | ar-PS | ✔ | | ✔ | | |
+| Arabic (Qatar) | ar-QA | ✔ | ✔ | ✔ | | |
+| Arabic (Saudi Arabia) | ar-SA | ✔ | ✔ | ✔ | | |
+| Arabic (United Arab Emirates) | ar-AE | ✔ | ✔ | ✔ | | |
+| Arabic Egypt | ar-EG | ✔ | ✔ | ✔ | ✔ | |
+| Arabic Modern Standard (Bahrain) | ar-BH | ✔ | ✔ | ✔ | | |
+| Arabic Syrian Arab Republic | ar-SY | ✔ | ✔ | ✔ | | |
+| Armenian | hy-AM | ✔ | | | | |
+| Bangla | bn-BD | | | | ✔ | |
+| Bosnian | bs-Latn | | | | ✔ | |
+| Bulgarian | bg-BG | ✔ | ✔ | | ✔ | |
+| Catalan | ca-ES | ✔ | ✔ | | ✔ | |
+| Chinese (Cantonese Traditional) | zh-HK | ✔ | ✔ | ✔ | ✔ | |
+| Chinese (Simplified) | zh-Hans | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Chinese (Simplified) | zh-CK | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Chinese (Traditional) | zh-Hant | | | | ✔ | |
+| Croatian | hr-HR | ✔ | ✔ | | ✔ | |
+| Czech | cs-CZ | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Danish | da-DK | ✔ | ✔ | ✔ | ✔ | |
+| Dutch | nl-NL | ✔ | ✔ | ✔ | ✔ | ✔ |
+| English Australia | en-AU | ✔ | ✔ | ✔ | ✔ | |
+| English United Kingdom | en-GB | ✔ | ✔ | ✔ | ✔ | |
+| English United States | en-US | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Estonian | et-EE | ✔ | ✔ | | ✔ | |
+| Fijian | en-FJ | | | | ✔ | |
+| Filipino | fil-PH | | | | ✔ | |
+| Finnish | fi-FI | ✔ | ✔ | ✔ | ✔ | |
+| French | fr-FR | ✔ | ✔ | ✔ | ✔ | |
+| French (Canada) | fr-CA | ✔ | ✔ | ✔ | ✔ | ✔ |
+| German | de-DE | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Greek | el-GR | ✔ | ✔ | | ✔ | |
+| Gujarati | gu-IN | ✔ | ✔ | | ✔ | |
+| Haitian | fr-HT | | | | ✔ | |
+| Hebrew | he-IL | ✔ | ✔ | ✔ | ✔ | |
+| Hindi | hi-IN | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Hungarian | hu-HU | | ✔ | | ✔ | ✔ |
+| Icelandic | is-IS | ✔ | | | | |
+| Indonesian | id-ID | | | | ✔ | |
+| Irish | ga-IE | ✔ | ✔ | | | |
+| Italian | it-IT | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Japanese | ja-JP | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Kannada | kn-IN | ✔ | ✔ | | | |
+| Kiswahili | sw-KE | | | | ✔ | |
+| Korean | ko-KR | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Latvian | lv-LV | ✔ | ✔ | | ✔ | |
+| Lithuanian | lt-LT | | | | ✔ | |
+| Malagasy | mg-MG | | | | ✔ | |
+| Malay | ms-MY | ✔ | | | ✔ | |
+| Malayalam | ml-IN | ✔ | ✔ | | | |
+| Maltese | mt-MT | | | | ✔ | |
+| Norwegian | nb-NO | ✔ | ✔ | ✔ | ✔ | |
+| Persian | fa-IR | ✔ | | ✔ | ✔ | |
+| Polish | pl-PL | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Portuguese | pt-BR | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Portuguese (Portugal) | pt-PT | ✔ | ✔ | ✔ | ✔ | |
+| Romanian | ro-RO | ✔ | ✔ | | ✔ | |
+| Russian | ru-RU | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Samoan | en-WS | | | | | |
+| Serbian (Cyrillic) | sr-Cyrl-RS | | | | ✔ | |
+| Serbian (Latin) | sr-Latn-RS | | | | ✔ | |
+| Slovak | sk-SK | ✔ | ✔ | | ✔ | |
+| Slovenian | sl-SI | ✔ | ✔ | | ✔ | |
+| Spanish | es-ES | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Spanish (Mexico) | es-MX | ✔ | ✔ | ✔ | ✔ | |
+| Swedish | sv-SE | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Tamil | ta-IN | ✔ | ✔ | | ✔ | |
+| Telugu | te-IN | ✔ | ✔ | | | |
+| Thai | th-TH | ✔ | ✔ | ✔ | ✔ | |
+| Tongan | to-TO | | | | ✔ | |
+| Turkish | tr-TR | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Ukrainian | uk-UA | ✔ | ✔ | | ✔ | |
+| Urdu | ur-PK | | | | ✔ | |
+| Vietnamese | vi-VN | ✔ | ✔ | | ✔ | |
+
+## Get supported languages through the API
+
+Use the Get Supported Languages API call to pull a full list of supported languages per area. For more information, see [Get Supported Languages](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Supported-Languages).
The API returns a list of supported languages with the following values:

```json
-"name": "Language",
-"languageCode": "Code",
-"isRightToLeft": true/false,
-"isSourceLanguage": true/false,
-"isAutoDetect": true/false
+{
+ "name": "Language",
+ "languageCode": "Code",
+ "isRightToLeft": true/false,
+ "isSourceLanguage": true/false,
+ "isAutoDetect": true/false
+}
```
-Some notes for the above values are:
- - Supported source language:
- If `isSourceLanguage` is `false`, the language is supported for translation only.
- If `isSourceLanguage` is `true`, the language is supported as source for transcription, translation, and search.
-- Language identification (auto detection):
-
- If `isAutoDetect` set to `true`, the language is supported for language identification (LID) and multi-language identification (MLID).
-- The following insights are translated, otherwise will remain in English:
-
- - Transcript
- - Keywords
- - Topics
- - Labels
- - Frame patterns (Only to Hebrew as of now)
-
-| **Language** | **Code** | **Supported source language** | **Language identification** | **Customization** (language model) |
-|:--:|:--:|:--:|:-:|:--:|
-| Afrikaans | `af-ZA` | | | |
-| Arabic (Israel) | `ar-IL` | ✔ | | ✔ |
-| Arabic (Iraq) | `ar-IQ` | ✔ | ✔ | |
-| Arabic (Jordan) | `ar-JO` | ✔ | ✔ | ✔ |
-| Arabic (Kuwait) | `ar-KW` | ✔ | ✔ | ✔ |
-| Arabic (Lebanon) | `ar-LB` | ✔ | | ✔ |
-| Arabic (Oman) | `ar-OM` | ✔ | ✔ | ✔ |
-| Arabic (Paestinian Authority) | `ar-PS` | ✔ | | ✔ |
-| Arabic (Qatar) | `ar-QA` | ✔ | ✔ | ✔ |
-| Arabic (Saudi Arabia) | `ar-SA` | ✔ | ✔ | ✔ |
-| Arabic (United Arab Emirates) | `ar-AE` | ✔ | ✔ | ✔ |
-| Arabic Egypt | `ar-EG` | ✔ | ✔ | ✔ |
-| Arabic Modern Standard (Bahrain) | `ar-BH` | ✔ | ✔ | ✔ |
-| Arabic Syrian Arab Republic | `ar-SY` | ✔ | ✔ | ✔ |
-| Armenian | `hy-AM` | ✔ | | |
-| Bangla | `bn-BD` | | | |
-| Bosnian | `bs-Latn` | | | |
-| Bulgarian | `bg-BG` | ✔ | ✔ | |
-| Catalan | `ca-ES` | ✔ | ✔ | |
-| Chinese (Cantonese Traditional) | `zh-HK` | ✔ | ✔ | ✔ |
-| Chinese (Simplified) | `zh-Hans` | ✔ | ✔ | ✔ |
-| Chinese (Simplified) | `zh-CK` | ✔ | ✔ | ✔ |
-| Chinese (Traditional) | `zh-Hant` | | | |
-| Croatian | `hr-HR` | ✔ | ✔ | |
-| Czech | `cs-CZ` | ✔ | ✔ | ✔ |
-| Danish | `da-DK` | ✔ | ✔ | ✔ |
-| Dutch | `nl-NL` | ✔ | ✔ | ✔ |
-| English Australia | `en-AU` | ✔ | ✔ | ✔ |
-| English United Kingdom | `en-GB` | ✔ | ✔ | ✔ |
-| English United States | `en-US` | ✔ | ✔ | ✔ |
-| Estonian | `et-EE` | ✔ | ✔ | |
-| Fijian | `en-FJ` | | | |
-| Filipino | `fil-PH` | | | |
-| Finnish | `fi-FI` | ✔ | ✔ | ✔ |
-| French | `fr-FR` | ✔ | ✔ | ✔ |
-| French (Canada) | `fr-CA` | ✔ | ✔ | ✔ |
-| German | `de-DE` | ✔ | ✔ | ✔ |
-| Greek | `el-GR` | ✔ | ✔ | |
-| Gujarati | `gu-IN` | ✔ | ✔ | |
-| Haitian | `fr-HT` | | | |
-| Hebrew | `he-IL` | ✔ | ✔ | ✔ |
-| Hindi | `hi-IN` | ✔ | ✔ | ✔ |
-| Hungarian | `hu-HU` | | ✔ | |
-| Icelandic | `is-IS` | ✔ | | |
-| Indonesian | `id-ID` | | | |
-| Irish | `ga-IE` | ✔ | ✔ | |
-| Italian | `it-IT` | ✔ | ✔ | ✔ |
-| Japanese | `ja-JP` | ✔ | ✔ | ✔ |
-| Kannada | `kn-IN` | ✔ | ✔ | |
-| Kiswahili | `sw-KE` | | | |
-| Korean | `ko-KR` | ✔ | ✔ | ✔ |
-| Latvian | `lv-LV` | ✔ | ✔ | |
-| Lithuanian | `lt-LT` | | | |
-| Malagasy | `mg-MG` | | | |
-| Malay | `ms-MY` | ✔ | | |
-| Malayalam | `ml-IN` | ✔ | ✔ | |
-| Maltese | `mt-MT` | | | |
-| Norwegian | `nb-NO` | ✔ | ✔ | ✔ |
-| Persian | `fa-IR` | ✔ | | ✔ |
-| Polish | `pl-PL` | ✔ | ✔ | ✔ |
-| Portuguese | `pt-BR` | ✔ | ✔ | ✔ |
-| Portuguese (Portugal) | `pt-PT` | ✔ | ✔ | ✔ |
-| Romanian | `ro-RO` | ✔ | ✔ | |
-| Russian | `ru-RU` | ✔ | ✔ | ✔ |
-| Samoan | `en-WS` | | | |
-| Serbian (Cyrillic) | `sr-Cyrl-RS` | | | |
-| Serbian (Latin) | `sr-Latn-RS` | | | |
-| Slovak | `sk-SK` | ✔ | ✔ | |
-| Slovenian | `sl-SI` | ✔ | ✔ | |
-| Spanish | `es-ES` | ✔ | ✔ | ✔ |
-| Spanish (Mexico) | `es-MX` | ✔ | ✔ | ✔ |
-| Swedish | `sv-SE` | ✔ | ✔ | ✔ |
-| Tamil | `ta-IN` | ✔ | ✔ | |
-| Telugu | `te-IN` | ✔ | ✔ | |
-| Thai | `th-TH` | ✔ | ✔ | ✔ |
-| Tongan | `to-TO` | | | |
-| Turkish | `tr-TR` | ✔ | ✔ | ✔ |
-| Ukrainian | `uk-UA` | ✔ | ✔ | |
-| Urdu | `ur-PK` | | | |
-| Vietnamese | `vi-VN` | ✔ | ✔ | |
-
-**Default languages supported by Language identification (LID)**: German (de-DE) , English United States (en-US) , Spanish (es-ES) , French (fr-FR), Italian (it-IT) , Japanese (ja-JP), Portuguese (pt-BR), Russian (ru-RU), Chinese (Simplified) (zh-Hans).
-
-**Default languages supported by Multi-language identification (MLID)**: German (de-DE) , English United States (en-US) , Spanish (es-ES) , French (fr-FR).
-
-### Change default languages supported by LID and MLID
-
-When [uploading a video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) through an API, you can specify to use other supported languages (listed in the table above) for LID and MLID by passing the `customLanguages` parameter. The `customLanguages` parameter allows up to 10 languages to be identified by LID or MLID.
-
-> [!NOTE]
-> Language identification (LID) and Multi-language identification (MLID) compares speech at the language level, such as English and German.
-> Do not include multiple locales of the same language in the custom languages list.
-
-## Language support in front end experiences
-
-The following table describes language support in the Azure Video Indexer front end experiences.
-
-* website - the website column lists supported languages for the [Azure Video Indexer website](https://aka.ms/vi-portal-link). For for more information, see [Get started](video-indexer-get-started.md).
-* widgets - the [widgets](video-indexer-embed-widgets.md) column lists supported languages for translating the index file. For for more information, see [Get started](video-indexer-embed-widgets.md).
-
-| **Language** | **Code** | **Website** | **Widgets** |
-|:--:|:--:|:--:|:--:|
-| Afrikaans | `af-ZA` | | ✔ |
-| Arabic (Iraq) | `ar-IQ` | | |
-| Arabic (Israel) | `ar-IL` | | |
-| Arabic (Jordan) | `ar-JO` | | |
-| Arabic (Kuwait) | `ar-KW` | | |
-| Arabic (Lebanon) | `ar-LB` | | |
-| Arabic (Oman) | `ar-OM` | | |
-| Arabic (Palestinian Authority) | `ar-PS` | | |
-| Arabic (Qatar) | `ar-QA` | | |
-| Arabic (Saudi Arabia) | `ar-SA` | | |
-| Arabic (United Arab Emirates) | `ar-AE` | | |
-| Arabic Egypt | `ar-EG` | | ✔ |
-| Arabic Modern Standard (Bahrain) | `ar-BH` | | |
-| Arabic Syrian Arab Republic | `ar-SY` | | |
-| Bangla | `bn-BD` | | ✔ |
-| Bosnian | `bs-Latn` | | ✔ |
-| Bulgarian | `bg-BG` | | ✔ |
-| Catalan | `ca-ES` | | ✔ |
-| Chinese (Cantonese Traditional) | `zh-HK` | | ✔ |
-| Chinese (Simplified) | `zh-Hans` | ✔ | ✔ |
-| Chinese (Simplified) | `zh-CK` | ✔ | ✔ |
-| Chinese (Traditional) | `zh-Hant` | | ✔ |
-| Croatian | `hr-HR` | | |
-| Czech | `cs-CZ` | ✔ | ✔ |
-| Danish | `da-DK` | | ✔ |
-| Dutch | `nl-NL` | ✔ | ✔ |
-| English Australia | `en-AU` | | ✔ |
-| English United Kingdom | `en-GB` | | ✔ |
-| English United States | `en-US` | ✔ | ✔ |
-| Estonian | `et-EE` | | ✔ |
-| Fijian | `en-FJ` | | ✔ |
-| Filipino | `fil-PH` | | ✔ |
-| Finnish | `fi-FI` | | ✔ |
-| French | `fr-FR` | | ✔ |
-| French (Canada) | `fr-CA` | ✔ | ✔ |
-| German | `de-DE` | ✔ | |
-| Greek | `el-GR` | | ✔ |
-| Haitian | `fr-HT` | | ✔ |
-| Hebrew | `he-IL` | | ✔ |
-| Hindi | `hi-IN` | ✔ | ✔ |
-| Hungarian | `hu-HU` | ✔ | ✔ |
-| Indonesian | `id-ID` | | |
-| Italian | `it-IT` | | ✔ |
-| Japanese | `ja-JP` | ✔ | ✔ |
-| Kiswahili | `sw-KE` | ✔ | ✔ |
-| Korean | `ko-KR` | ✔ | ✔ |
-| Latvian | `lv-LV` | | ✔ |
-| Lithuanian | `lt-LT` | | ✔ |
-| Malagasy | `mg-MG` | | ✔ |
-| Malay | `ms-MY` | | ✔ |
-| Maltese | `mt-MT` | | |
-| Norwegian | `nb-NO` | | ✔ |
-| Persian | `fa-IR` | | |
-| Polish | `pl-PL` | ✔ | ✔ |
-| Portuguese | `pt-BR` | ✔ | ✔ |
-| Portuguese (Portugal) | `pt-PT` | | ✔ |
-| Romanian | `ro-RO` | | ✔ |
-| Russian | `ru-RU` | ✔ | ✔ |
-| Samoan | `en-WS` | | |
-| Serbian (Cyrillic) | `sr-Cyrl-RS` | | ✔ |
-| Serbian (Latin) | `sr-Latn-RS` | | |
-| Slovak | `sk-SK` | | ✔ |
-| Slovenian | `sl-SI` | | ✔ |
-| Spanish | `es-ES` | ✔ | ✔ |
-| Spanish (Mexico) | `es-MX` | | ✔ |
-| Swedish | `sv-SE` | ✔ | ✔ |
-| Tamil | `ta-IN` | | ✔ |
-| Thai | `th-TH` | | ✔ |
-| Tongan | `to-TO` | | ✔ |
-| Turkish | `tr-TR` | ✔ | ✔ |
-| Ukrainian | `uk-UA` | ✔ | ✔ |
-| Urdu | `ur-PK` | | ✔ |
-| Vietnamese | `vi-VN` | ✔ | ✔ |
+  If `isSourceLanguage` is `false`, the language is supported for translation only.
+  If `isSourceLanguage` is `true`, the language is supported as source for transcription, translation, and search.
+
+- Language identification (auto detection):
+
+  If `isAutoDetect` is `true`, the language is supported for language identification (LID) and multi-language identification (MLID).
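+
+For illustration, a hedged PowerShell sketch of calling the operation and filtering the result; the request URL and API key come from the API portal page linked above, so `$uri` and the key value are placeholders:
+
+```azurepowershell
+# $uri: the Get Supported Languages request URL copied from the API portal.
+$headers = @{ "Ocp-Apim-Subscription-Key" = "<your-api-key>" }
+$languages = Invoke-RestMethod -Uri $uri -Headers $headers -Method Get
+
+# For example, list only the languages usable as a transcription source.
+$languages | Where-Object { $_.isSourceLanguage } | Select-Object name, languageCode
+```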
+
+## Language Identification
+
+When uploading a media file to Video Indexer, you can specify the media file's source language. If indexing a file through the Video Indexer website, this can be done by selecting a language during the file upload. If you're submitting the indexing job through the API, it's done by using the language parameter. The selected language is then used to generate the transcription of the file.
+
+If you aren't sure of the source language of the media file, or it may contain multiple languages, Video Indexer can detect the spoken languages. If you select either auto-detect single language (LID) or multi-language (MLID) for the media file's source language, the detected language or languages are used to transcribe the media file. To learn more about LID and MLID, see [Automatically identify the spoken language with language identification model](language-identification-model.md) and [Automatically identify and transcribe multi-language content](multi-language-identification-transcription.md).
+
+There's a limit of 10 languages allowed for identification during the indexing of a media file for both LID and MLID. The following are the 9 *default* languages of language identification (LID) and multi-language identification (MLID):
+
+- German (de-DE)
+- English United States (en-US)
+- Spanish (es-ES)
+- French (fr-FR)
+- Italian (it-IT)
+- Japanese (ja-JP)
+- Portuguese (pt-BR)
+- Russian (ru-RU)
+- Chinese (Simplified) (zh-Hans)
+
+## How to change the list of default languages
+
+If you need to use languages for identification that aren't used by default, you can customize the list to any 10 languages that support customization, through either the website or the API:
+
+### Use the website to change the list
+
+1. Select the **Language ID** tab under Model customization. The list of languages is specific to the Video Indexer account you're using and to the signed-in user. The default list of languages is saved per user, per device, and per browser. As a result, each user can configure their own default identified-language list.
+1. Use **Add language** to search and add more languages. If 10 languages are already selected, you first must remove one of the existing detected languages before adding a new one.
+
+    :::image type="content" source="media/language-support/default-language.png" alt-text="Screenshot showing a table of all the selected languages." lightbox="media/language-support/default-language.png":::
+
+### Use the API to change the list
+
+When you upload a file, the Video Indexer language model cross-references 9 languages by default. If there's a match, the model generates the transcription for the file with the detected language.
+
+Use the `language` parameter to specify `multi` (MLID) or `auto` (LID). Use the `customLanguages` parameter to specify up to 10 languages. (The `customLanguages` parameter is used only when the `language` parameter is set to `multi` or `auto`.) To learn more about using the API, see [Use the Azure Video Indexer API](video-indexer-use-apis.md).
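+
+As an illustration, here's a hedged PowerShell sketch of an upload request that sets these parameters. The URL shape follows the Upload Video operation, and the location, account ID, access token, and video URL are placeholders:
+
+```azurepowershell
+# Placeholder values; see the Upload Video operation for details.
+$location  = "<location>"
+$accountId = "<account-id>"
+$token     = "<access-token>"
+
+# Auto-detect (LID) among a custom list of up to 10 languages.
+$uri = "https://api.videoindexer.ai/$location/Accounts/$accountId/Videos" +
+       "?name=myVideo&language=auto&customLanguages=en-US,de-DE,fr-FR,hi-IN" +
+       "&accessToken=$token&videoUrl=<url-encoded-video-url>"
+
+Invoke-RestMethod -Uri $uri -Method Post
+```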
## Next steps
backup Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/policy-reference.md
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
baremetal-infrastructure Nc2 Baremetal Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/nc2-baremetal-overview.md
Alternatively, you can purchase licenses from Nutanix or from Azure Marketplace.
NC2 runs Nutanix Acropolis Operating System (AOS) and Nutanix Acropolis Hypervisor (AHV). -- Servers are pre-loaded with [AOS 6.1](https://www.nutanixbible.com/4-book-of-aos.html).-- AHV 6.1 is built into this product as the default hypervisor at no extra cost. - AHV hypervisor is based on open source Kernel-based Virtual Machine (KVM). - AHV will determine the lowest processor generation in the cluster and constrain all Quick Emulator (QEMU) domains to that level.
baremetal-infrastructure Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/skus.md
The following table presents component options for each available SKU.
|Core|Intel 6140, 36 Core, 2.3 GHz|Intel 6240, 36 Core, 2.6 GHz| |vCPUs|72|72| |RAM|576 GB|768 GB|
-|Storage|18.56 TB (8 x 1.92 TB SATA SSD, 2x1.6TB NVMe)|19.95 TB (2x375G Optane, 6x3.2TB NVMe)|
+|Storage|18.56 TB (8 x 1.92 TB SATA SSD, 2x1.6TB NVMe)|20.7 TB (2x750 GB Optane, 6x3.2 TB NVMe)|
|Network (available bandwidth between nodes)|25 Gbps|25 Gbps| ## Next steps
batch Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/policy-reference.md
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
cdn Cdn Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-features.md
Title: Compare Azure Content Delivery Network (CDN) product features description: Learn about the features that each Azure Content Delivery Network (CDN) product supports. Previously updated : 11/15/2019 Last updated : 02/21/2023
The following table compares the features available with each product.
| Change optimization type | |**&#x2713;** | | | | Origin Port |All TCP ports |[Allowed origin ports](/previous-versions/azure/mt757337(v%3Dazure.100)#allowed-origin-ports) |All TCP ports |All TCP ports | | [Global server load balancing (GSLB)](../traffic-manager/traffic-manager-load-balancing-azure.md) | **&#x2713;** |**&#x2713;** |**&#x2713;** |**&#x2713;** |
-| [Fast purge](cdn-purge-endpoint.md) | **&#x2713;** |**&#x2713;**, Purge all and Wildcard purge are not supported by Azure CDN from Akamai currently |**&#x2713;** |**&#x2713;** |
+| [Fast purge](cdn-purge-endpoint.md) | **&#x2713;** |**&#x2713;**, Purge all and Wildcard purge aren't supported by Azure CDN from Akamai currently |**&#x2713;** |**&#x2713;** |
| [Asset pre-loading](cdn-preload-endpoint.md) | | |**&#x2713;** |**&#x2713;** | | Cache/header settings (using [caching rules](cdn-caching-rules.md)) |**&#x2713;** using [Standard rules engine](cdn-standard-rules-engine.md) |**&#x2713;** |**&#x2713;** | | | Customizable, rules based content delivery engine |**&#x2713;** using [Standard rules engine](cdn-standard-rules-engine.md) | | |**&#x2713;** using [rules engine](./cdn-verizon-premium-rules-engine.md) |
cdn Cdn Map Content To Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-map-content-to-custom-domain.md
Title: 'Tutorial: Add a custom domain to your endpoint'
description: Use this tutorial to add a custom domain to an Azure Content Delivery Network endpoint so that your domain name is visible in your URL. -+ Previously updated : 04/12/2021- Last updated : 02/21/2023+ #Customer intent: As a website owner, I want to add a custom domain to my CDN endpoint so that my users can use my custom domain to access my content.
This tutorial shows how to add a custom domain to an Azure Content Delivery Network (CDN) endpoint.
-The endpoint name in your CDN profile is a subdomain of azureedge.net. By default when delivering content, the CDN profile domain is included within the URL.
+The endpoint name in your CDN profile is a subdomain of azureedge.net. By default when delivering content, the CDN profile domain gets included in the URL.
For example, `https://contoso.azureedge.net/photo.png`.
For Azure CDN, the source domain name is your custom domain name and the destina
Azure CDN routes traffic addressed to the source custom domain to the destination CDN endpoint hostname after it verifies the CNAME record.
-A custom domain and its subdomain can be added to only a single endpoint at a time.
+A custom domain and its subdomain can only get added to a single endpoint at a time.
Use multiple CNAME records for different subdomains from the same custom domain for different Azure services.
You can map a custom domain with different subdomains to the same CDN endpoint.
> [!NOTE] > - This tutorial uses the CNAME record type for multiple purposes:
-> - *Traffic routing* can be accomplished with a CNAME record as well as A or AAAA record types in Azure DNS. To apply, follow the steps below and replace the CNAME record with the record type of your choice.
-> - A CNAME record is **required** for custom domain *ownership validation* and must be available when adding the custom domain to a CDN Endpoint. More details below.
+> - *Traffic routing* can be accomplished with a CNAME record as well as A or AAAA record types in Azure DNS. To apply, use the following steps to replace the CNAME record with the record type of your choice.
+> - A CNAME record is **required** for custom domain *ownership validation* and must be available when adding the custom domain to a CDN Endpoint. More details in the following section.
# [**Azure DNS**](#tab/azure-dns)
To create a CNAME record for your custom domain:
3. Save your changes.
-4. If you're previously created a temporary cdnverify subdomain CNAME record, delete it.
+4. If you previously created a temporary cdnverify subdomain CNAME record, delete it.
5. If you're using this custom domain in production for the first time, follow the steps for [Add the custom domain with your CDN endpoint](#add-a-custom-domain-to-your-cdn-endpoint) and [Verify the custom domain](#verify-the-custom-domain).
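If your domain is hosted in Azure DNS, you can also create the CNAME record with Azure PowerShell instead of the portal. A minimal sketch, assuming the `contoso.com` zone already exists; all names are placeholders:

```azurepowershell
# Map cdn.contoso.com to the CDN endpoint hostname.
$record = New-AzDnsRecordConfig -Cname "contoso.azureedge.net"
New-AzDnsRecordSet -Name "cdn" `
    -RecordType CNAME `
    -ZoneName "contoso.com" `
    -ResourceGroupName "myResourceGroup" `
    -Ttl 3600 `
    -DnsRecords $record
```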
After you've registered your custom domain, you can then add it to your CDN endp
:::image type="content" source="media/cdn-map-content-to-custom-domain/cdn-custom-domain-button.png" alt-text="Add custom domain button" border="true":::
-4. In **Add a custom domain**, **Endpoint hostname**, is pre-filled and is derived from your CDN endpoint URL: **\<endpoint-hostname>**.azureedge.net. It cannot be changed.
+4. In **Add a custom domain**, **Endpoint hostname** gets generated and pre-filled from your CDN endpoint URL: **\<endpoint-hostname>**.azureedge.net. You can't change this value.
5. For **Custom hostname**, enter your custom domain, including the subdomain, to use as the source domain of your CNAME record. 1. For example, **www.contoso.com** or **cdn.contoso.com**. **Don't use the cdnverify subdomain name**.
After you've registered your custom domain, you can then add it to your CDN endp
6. Select **Add**.
- Azure verifies that the CNAME record exists for the custom domain name you entered. If the CNAME is correct, your custom domain will be validated.
+ Azure verifies that the CNAME record exists for the custom domain name you entered. If the CNAME is correct, your custom domain gets validated.
It can take some time for the new custom domain settings to propagate to all CDN edge nodes: - For **Azure CDN Standard from Microsoft** profiles, propagation usually completes in 10 minutes.
After you've registered your custom domain, you can then add it to your CDN endp
New-AzCdnCustomDomain @parameters ```
-Azure verifies that the CNAME record exists for the custom domain name you entered. If the CNAME is correct, your custom domain will be validated.
+Azure verifies that the CNAME record exists for the custom domain name you entered. If the CNAME is correct, your custom domain gets validated.
It can take some time for the new custom domain settings to propagate to all CDN edge nodes:
Azure verifies that the CNAME record exists for the custom domain name you enter
After you've completed the registration of your custom domain, verify that the custom domain references your CDN endpoint.
-1. Ensure that you have public content that is cached at the endpoint. For example, if your CDN endpoint is associated with a storage account, Azure CDN will cache the content in a public container. Set your container to allow public access and it contains at least one file to test the custom domain.
+1. Ensure that you have public content that you want cached at the endpoint. For example, if your CDN endpoint is associated with a storage account, Azure CDN caches the content in a public container. Set your container to allow public access and make sure it contains at least one file for testing the custom domain.
2. In your browser, navigate to the address of the file by using the custom domain. For example, if your custom domain is `www.contoso.com`, the URL to the cached file should be similar to the following URL: `http://www.contoso.com/my-public-container/my-file.jpg`. Verify that the result is the same as when you access the CDN endpoint directly at **\<endpoint-hostname>**.azureedge.net.
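   Before testing in a browser, you can confirm the CNAME mapping from a client. For example, on Windows PowerShell (DnsClient module):

   ```azurepowershell
   # The answer should chain through <endpoint-hostname>.azureedge.net.
   Resolve-DnsName -Name "www.contoso.com" -Type CNAME
   ```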
If you no longer want to associate your endpoint with a custom domain, remove th
3. From the **Endpoint** page, under Custom domains, right-click the custom domain that you want to remove, then select **Delete** from the context menu. Select **Yes**.
- The custom domain is removed from your endpoint.
+ The custom domain gets removed from your endpoint.
# [**PowerShell**](#tab/azure-powershell-cleanup)
cdn Cdn Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-overview.md
Title: What is a content delivery network (CDN)? - Azure | Microsoft Docs description: Learn what Azure Content Delivery Network (CDN) is and how to use it to deliver high-bandwidth content. --+ ms.assetid: 866e0c30-1f33-43a5-91f0-d22f033b16c6 - Previously updated : 05/09/2018 Last updated : 02/21/2023 # What is a content delivery network on Azure?
-A content delivery network (CDN) is a distributed network of servers that can efficiently deliver web content to users. CDNs store cached content on edge servers in point-of-presence (POP) locations that are close to end users, to minimize latency.
-Azure Content Delivery Network (CDN) offers developers a global solution for rapidly delivering high-bandwidth content to users by caching their content at strategically placed physical nodes across the world. Azure CDN can also accelerate dynamic content, which cannot be cached, by leveraging various network optimizations using CDN POPs. For example, route optimization to bypass Border Gateway Protocol (BGP).
+A content delivery network (CDN) is a distributed network of servers that can efficiently deliver web content to users. A CDN stores cached content on edge servers in point-of-presence (POP) locations that are close to end users, to minimize latency.
+
+Azure CDN offers developers a global solution for rapidly delivering high-bandwidth content to users by caching their content at strategically placed physical nodes across the world. Azure CDN can also accelerate dynamic content, which can't get cached, by applying various network optimizations through CDN POPs. For example, route optimization to bypass Border Gateway Protocol (BGP).
The benefits of using Azure CDN to deliver web site assets include:
-* Better performance and improved user experience for end users, especially when using applications in which multiple round-trips are required to load content.
+* Better performance and improved user experience for end users, especially when using applications that require multiple round trips to load content.
* Large scaling to better handle instantaneous high loads, such as the start of a product launch event.
-* Distribution of user requests and serving of content directly from edge servers so that less traffic is sent to the origin server.
+* Distribution of user requests and serving of content directly from edge servers so that less traffic gets sent to the origin server.
For a list of current CDN node locations, see [Azure CDN POP locations](cdn-pop-locations.md). ## How it works+ ![CDN Overview](./media/cdn-overview/cdn-overview.png) 1. A user (Alice) requests a file (also called an asset) by using a URL with a special domain name, such as _&lt;endpoint name&gt;_.azureedge.net. This name can be an endpoint hostname or a custom domain. The DNS routes the request to the best performing POP location, which is usually the POP that is geographically closest to the user.
For a list of current CDN node locations, see [Azure CDN POP locations](cdn-pop-
4. An edge server in the POP caches the file and returns the file to the original requestor (Alice). The file remains cached on the edge server in the POP until the time-to-live (TTL) specified by its HTTP headers expires. If the origin server didn't specify a TTL, the default TTL is seven days.
-5. Additional users can then request the same file by using the same URL that Alice used, and can also be directed to the same POP.
+5. More users can then request the same file by using the same URL that Alice used, and get directed to the same POP.
6. If the TTL for the file hasn't expired, the POP edge server returns the file directly from the cache. This process results in a faster, more responsive user experience. ## Requirements * To use Azure CDN, you must own at least one Azure subscription. * You also need to create a CDN profile, which is a collection of CDN endpoints. Every CDN endpoint is a specific configuration which users can customize with required content delivery behavior and access. To organize your CDN endpoints by internet domain, web application, or some other criteria, you can use multiple profiles.
-* Since [Azure CDN pricing](https://azure.microsoft.com/pricing/details/cdn/) is applied at the CDN profile level, you must create multiple CDN profiles if you want to use a mix of pricing tiers. For information about the Azure CDN billing structure, see [Understanding Azure CDN billing](cdn-billing.md).
+* Since [Azure CDN pricing](https://azure.microsoft.com/pricing/details/cdn/) gets applied at the CDN profile level, you must create multiple CDN profiles if you want to use a mix of pricing tiers. For information about the Azure CDN billing structure, see [Understanding Azure CDN billing](cdn-billing.md).
### Limitations+ Each Azure subscription has default limits for the following resources:
+ - The number of CDN profiles created.
+ - The number of endpoints created in a CDN profile.
+ - The number of custom domains mapped to an endpoint.
For more information about CDN subscription limits, see [CDN limits](../azure-resource-manager/management/azure-subscription-service-limits.md).
cloud-services Applications Dont Support Tls 1 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/applications-dont-support-tls-1-2.md
description: Troubleshooting issues caused by applications that don't support TL
-tags: top-support-issue
+
+tag: top-support-issue
Previously updated : 03/16/2020 Last updated : 02/21/2023 # Troubleshooting applications that don't support TLS 1.2
cloud-services Automation Manage Cloud Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/automation-manage-cloud-services.md
Title: Manage Azure Cloud Services (classic) using Azure Automation | Microsoft
description: Learn about how the Azure Automation service can be used to manage Azure cloud services at scale. Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Managing Azure Cloud Services (classic) using Azure Automation
cloud-services Cloud Services Allocation Failures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-allocation-failures.md
Title: Troubleshooting Cloud Service (classic) allocation failures | Microsoft D
description: Troubleshoot an allocation failure when you deploy Azure Cloud Services. Learn how allocation works and why allocation can fail. Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Troubleshooting allocation failure when you deploy Cloud Services (classic) in Azure
cloud-services Cloud Services Certs Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-certs-create.md
Title: Cloud Services (classic) and management certificates | Microsoft Docs
description: Learn about how to create and deploy certificates for cloud services and for authenticating with the management API in Azure. Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Certificates overview for Azure Cloud Services (classic)
cloud-services Cloud Services Choose Me https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-choose-me.md
Title: What is Azure Cloud Services (classic) | Microsoft Docs
description: Learn about what Azure Cloud Services is, specifically that it's designed to support applications that are scalable, reliable, and inexpensive to operate. Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Overview of Azure Cloud Services (classic)
cloud-services Cloud Services Configure Ssl Certificate Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-configure-ssl-certificate-portal.md
Title: Configure TLS for a cloud service | Microsoft Docs
description: Learn how to specify an HTTPS endpoint for a web role and how to upload a TLS/SSL certificate to secure your application. These examples use the Azure portal. Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Configuring TLS for an application in Azure
cloud-services Cloud Services Connect To Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-connect-to-custom-domain.md
Title: Connect a Cloud Service (classic) to a custom Domain Controller | Microso
description: Learn how to connect your web/worker roles to a custom AD Domain using PowerShell and AD Domain Extension Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Connecting Azure Cloud Services (classic) Roles to a custom AD Domain Controller hosted in Azure
cloud-services Cloud Services Custom Domain Name Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-custom-domain-name-portal.md
Title: Configure a custom domain name in Cloud Services (classic) | Microsoft Do
description: Learn how to expose your Azure application or data to the internet on a custom domain by configuring DNS settings. These examples use the Azure portal. Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Configuring a custom domain name for an Azure cloud service (classic)
cloud-services Cloud Services Diagnostics Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-diagnostics-powershell.md
Title: Enable diagnostics in Azure Cloud Services (classic) using PowerShell | M
description: Learn how to use PowerShell to enable collecting diagnostic data from an Azure Cloud Service with the Azure Diagnostics extension. Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Enable diagnostics in Azure Cloud Services (classic) using PowerShell
cloud-services Cloud Services Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-disaster-recovery-guidance.md
Title: Handling an Azure service disruption that impacts Azure Cloud Services (c
description: Learn what to do in the event of an Azure service disruption that impacts Azure Cloud Services. Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # What to do in the event of an Azure service disruption that impacts Azure Cloud Services (classic)
cloud-services Cloud Services Dotnet Diagnostics Trace Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-dotnet-diagnostics-trace-flow.md
Title: Trace the flow in Cloud Services (classic) Application with Azure Diagnos
description: Add tracing messages to an Azure application to help debugging, measuring performance, monitoring, traffic analysis, and more. Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Trace the flow of a Cloud Services (classic) application with Azure Diagnostics
cloud-services Cloud Services Dotnet Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-dotnet-diagnostics.md
Title: How to use Azure diagnostics (.NET) with Cloud Services (classic) | Micro
description: Using Azure diagnostics to gather data from Azure cloud Services for debugging, measuring performance, monitoring, traffic analysis, and more. Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Enabling Azure Diagnostics in Azure Cloud Services (classic)
cloud-services Cloud Services Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-dotnet-get-started.md
Title: Get started with Azure Cloud Services (classic) and ASP.NET | Microsoft D
description: Learn how to create a multi-tier app using ASP.NET MVC and Azure. The app runs in a cloud service, with web role and worker role. It uses Entity Framework, SQL Database, and Azure Storage queues and blobs. Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Get started with Azure Cloud Services (classic) and ASP.NET
cloud-services Cloud Services Dotnet Install Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-dotnet-install-dotnet.md
Title: Install .NET on Azure Cloud Services (classic) roles
description: This article describes how to manually install the .NET Framework on your cloud service web and worker roles. Previously updated : 10/14/2020 Last updated : 02/21/2023 + # Install .NET on Azure Cloud Services (classic) roles
cloud-services Cloud Services Enable Communication Role Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-enable-communication-role-instances.md
Title: Communication for Roles in Cloud Services (classic) | Microsoft Docs
description: Role instances in Cloud Services can have endpoints (http, https, tcp, udp) defined for them that communicate with the outside or between other role instances. Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Enable communication for role instances in Azure Cloud Services (classic)
cloud-services Cloud Services Guestos Family1 Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-family1-retirement.md
Previously updated : 5/21/2017 Last updated : 02/21/2023 -+ # Guest OS Family 1 retirement notice
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 2/14/2023 Last updated : 02/21/2023 + # Azure Guest OS
cloud-services Cloud Services Guestos Retirement Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-retirement-policy.md
na Previously updated : 9/20/2017 Last updated : 02/21/2023 + # Azure Guest OS supportability and retirement policy The information on this page relates to the Azure Guest operating system ([Guest OS](cloud-services-guestos-update-matrix.md)) for Cloud Services worker and web roles (PaaS). It does not apply to Virtual Machines (IaaS).
cloud-services Cloud Services Guestos Update Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-update-matrix.md
na Previously updated : 1/31/2023 Last updated : 02/21/2023 + # Azure Guest OS releases and SDK compatibility matrix Provides you with up-to-date information about the latest Azure Guest OS releases for Cloud Services. This information helps you plan your upgrade path before a Guest OS is disabled. If you configure your roles to use *automatic* Guest OS updates as described in [Azure Guest OS Update Settings][Azure Guest OS Update Settings], it is not vital that you read this page.
cloud-services Cloud Services How To Configure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-how-to-configure-portal.md
description: Learn how to configure cloud services in Azure. Learn to update the
Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # How to Configure an Azure Cloud Service (classic)
cloud-services Cloud Services How To Create Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-how-to-create-deploy-portal.md
Title: How to create and deploy a cloud service (classic) | Microsoft Docs
description: Learn how to use the Quick Create method to create a cloud service and use Upload to upload and deploy a cloud service package in Azure. Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # How to create and deploy an Azure Cloud Service (classic)
cloud-services Cloud Services How To Manage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-how-to-manage-portal.md
Title: Common cloud service management tasks | Microsoft Docs
description: Learn how to manage Cloud Services in the Azure portal. These examples use the Azure portal. Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Manage Cloud Services (classic) in the Azure portal
cloud-services Cloud Services How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-how-to-monitor.md
Title: Monitor an Azure Cloud Service (classic) | Microsoft Docs
description: Describes what monitoring an Azure Cloud Service involves and what some of your options are. Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Introduction to Cloud Service (classic) Monitoring
cloud-services Cloud Services How To Scale Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-how-to-scale-portal.md
description: Learn how to use the portal to configure auto scale rules for a clo
Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # How to configure auto scaling for a Cloud Service (classic) in the portal
cloud-services Cloud Services How To Scale Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-how-to-scale-powershell.md
description: (classic) Learn how to use PowerShell to scale a web role or worker
Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # How to scale an Azure Cloud Service (classic) in PowerShell
cloud-services Cloud Services Model And Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-model-and-package.md
description: Describes the cloud service model (.csdef, .cscfg) and package (.cs
Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # What is the Cloud Service (classic) model and how do I package it?
cloud-services Cloud Services Nodejs Chat App Socketio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-nodejs-chat-app-socketio.md
Title: Node.js application using Socket.io - Azure
description: Use this tutorial to learn how to host a Socket.IO-based chat application on Azure. Socket.IO provides real-time communication for a Node.js server and clients. Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Build a Node.js chat application with Socket.IO on an Azure Cloud Service (classic)
cloud-services Cloud Services Nodejs Develop Deploy App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-nodejs-develop-deploy-app.md
Title: Node.js Getting Started Guide
description: Learn how to create a simple Node.js web application and deploy it to an Azure cloud service. Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Build and deploy a Node.js application to an Azure Cloud Service (classic)
cloud-services Cloud Services Nodejs Develop Deploy Express App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-nodejs-develop-deploy-express-app.md
Title: Build and deploy a Node.js Express app to Azure Cloud Services (classic)
description: Use this tutorial to create a new application using the Express module, which provides an MVC framework for creating Node.js web applications. Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Build and deploy a Node.js web application using Express on an Azure Cloud Service (classic)
cloud-services Cloud Services Performance Testing Visual Studio Profiler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-performance-testing-visual-studio-profiler.md
Title: Profiling a Cloud Service (classic) Locally in the Compute Emulator | Mic
description: Investigate performance issues in cloud services with the Visual Studio profiler Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Testing the Performance of a Cloud Service (classic) Locally in the Azure Compute Emulator Using the Visual Studio Profiler
cloud-services Cloud Services Powershell Create Cloud Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-powershell-create-cloud-container.md
Title: Create a cloud service (classic) container with PowerShell | Microsoft Do
description: This article explains how to create a cloud service container with PowerShell. The container hosts web and worker roles. Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Use an Azure PowerShell command to create an empty cloud service (classic) container
cloud-services Cloud Services Python How To Use Service Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-python-how-to-use-service-management.md
Title: Use the Service Management API (Python) - feature guide
description: Learn how to programmatically perform common service management tasks from Python. Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Use service management from Python
cloud-services Cloud Services Python Ptvs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-python-ptvs.md
Title: Get started with Python and Azure Cloud Services (classic)| Microsoft Doc
description: Overview of using Python Tools for Visual Studio to create Azure cloud services including web roles and worker roles. Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Python web and worker roles with Python Tools for Visual Studio
cloud-services Cloud Services Role Config Xpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-role-config-xpath.md
description: The various XPath settings you can use in the cloud service role co
Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Expose role configuration settings as an environment variable with XPath
cloud-services Cloud Services Role Enable Remote Desktop New Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-role-enable-remote-desktop-new-portal.md
Title: Use the portal to enable Remote Desktop for a Role
description: How to configure your Azure cloud service application to allow remote desktop connections Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Enable Remote Desktop Connection for a Role in Azure Cloud Services (classic)
cloud-services Cloud Services Role Enable Remote Desktop Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-role-enable-remote-desktop-powershell.md
Title: Use PowerShell to enable Remote Desktop for a Role
description: How to configure your Azure cloud service application using PowerShell to allow remote desktop connections Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Enable Remote Desktop Connection for a Role in Azure Cloud Services (classic) using PowerShell
cloud-services Cloud Services Role Enable Remote Desktop Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-role-enable-remote-desktop-visual-studio.md
Title: Using Visual Studio, enable Remote Desktop for a Role (Azure Cloud Servic
description: How to configure your Azure cloud service application to allow remote desktop connections Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Enable Remote Desktop Connection for a Role in Azure Cloud Services (classic) using Visual Studio
cloud-services Cloud Services Role Lifecycle Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-role-lifecycle-dotnet.md
Title: Handle Cloud Service (classic) lifecycle events | Microsoft Docs
description: Learn how to use the lifecycle methods of a Cloud Service role in .NET, including RoleEntryPoint, which provides methods to respond to lifecycle events. Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Customize the Lifecycle of a Web or Worker role in .NET
cloud-services Cloud Services Sizes Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-sizes-specs.md
description: Lists the different virtual machine sizes (and IDs) for Azure cloud
Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Sizes for Cloud Services (classic)
cloud-services Cloud Services Startup Tasks Common https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-startup-tasks-common.md
Title: Common startup tasks for Cloud Services (classic) | Microsoft Docs
description: Provides some examples of common startup tasks you may want to perform in your cloud services web role or worker role. Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Common Cloud Service (classic) startup tasks
cloud-services Cloud Services Startup Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-startup-tasks.md
Title: Run Startup Tasks in Azure Cloud Services (classic) | Microsoft Docs
description: Startup tasks help prepare your cloud service environment for your app. This teaches you how startup tasks work and how to make them Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # How to configure and run startup tasks for an Azure Cloud Service (classic)
cloud-services Cloud Services Troubleshoot Common Issues Which Cause Roles Recycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-common-issues-which-cause-roles-recycle.md
Title: Common causes of Cloud Service (classic) roles recycling | Microsoft Docs
description: A cloud service role that suddenly recycles can cause significant downtime. Here are some common issues that cause roles to be recycled, which may help you reduce downtime. Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Common issues that cause Azure Cloud Service (classic) roles to recycle
cloud-services Cloud Services Troubleshoot Constrained Allocation Failed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-constrained-allocation-failed.md
Previously updated : 02/22/2021 Last updated : 02/21/2023+
cloud-services Cloud Services Troubleshoot Default Temp Folder Size Too Small Web Worker Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-default-temp-folder-size-too-small-web-worker-role.md
Title: Default TEMP folder size is too small for a role | Microsoft Docs
description: A cloud service role has a limited amount of space for the TEMP folder. This article provides some suggestions on how to avoid running out of space. Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Default TEMP folder size is too small on a cloud service (classic) web/worker role
cloud-services Cloud Services Troubleshoot Deployment Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-deployment-problems.md
Title: Troubleshoot cloud service (classic) deployment problems | Microsoft Docs
description: There are a few common problems you may run into when deploying a cloud service to Azure. This article provides solutions to some of them. Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Troubleshoot Azure Cloud Services (Classic) deployment problems
cloud-services Cloud Services Troubleshoot Fabric Internal Server Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-fabric-internal-server-error.md
Previously updated : 02/22/2021 Last updated : 02/21/2023+ # Troubleshoot FabricInternalServerError or ServiceAllocationFailure when deploying a Cloud service (classic) to Azure
cloud-services Cloud Services Troubleshoot Location Not Found For Role Size https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-location-not-found-for-role-size.md
Previously updated : 06/06/2022 --- devx-track-azurepowershell-- kr2b-contr-experiment Last updated : 02/21/2023+ # Troubleshoot LocationNotFoundForRoleSize when deploying a Cloud service to Azure
cloud-services Cloud Services Troubleshoot Overconstrained Allocation Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-overconstrained-allocation-request.md
Previously updated : 02/22/2021 Last updated : 02/21/2023+ # Troubleshoot OverconstrainedAllocationRequest when deploying Cloud services (classic) to Azure
cloud-services Cloud Services Troubleshoot Roles That Fail Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-roles-that-fail-start.md
Title: Troubleshoot roles that fail to start | Microsoft Docs
description: Here are some common reasons why a Cloud Service role may fail to start. Solutions to these problems are also provided. Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Troubleshoot Azure Cloud Service (classic) roles that fail to start
cloud-services Cloud Services Update Azure Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-update-azure-service.md
Title: How to update a cloud service (classic) | Microsoft Docs
description: Learn how to update cloud services in Azure. Learn how an update on a cloud service proceeds to ensure availability. Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # How to update an Azure Cloud Service (classic)
cloud-services Cloud Services Workflow Process https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-workflow-process.md
Title: Workflow of Windows Azure VM Architecture | Microsoft Docs
description: This article provides an overview of the workflow processes when you deploy a service. Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Workflow of Windows Azure classic VM Architecture
cloud-services Diagnostics Extension To Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/diagnostics-extension-to-storage.md
Previously updated : 08/01/2016 Last updated : 02/21/2023+ # Store and view diagnostic data in Azure Storage
cloud-services Diagnostics Performance Counters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/diagnostics-performance-counters.md
Title: Collect on Performance Counters in Azure Cloud Services (classic) | Micro
description: Learn how to discover, use, and create performance counters in Cloud Services with Azure Diagnostics and Application Insights. Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Collect performance counters for your Azure Cloud Service (classic)
cloud-services Mitigate Se https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/mitigate-se.md
vm-windows Previously updated : 07/12/2022 Last updated : 02/21/2023 +
cloud-services Resource Health For Cloud Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/resource-health-for-cloud-services.md
description: This article talks about Resource Health Check (RHC) Support for Mi
Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Resource Health Check (RHC) Support for Azure Cloud Services (Classic)
cloud-services Schema Cscfg File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/schema-cscfg-file.md
description: A service configuration (.cscfg) file specifies how many role insta
Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Azure Cloud Services (classic) Config Schema (.cscfg File)
cloud-services Schema Cscfg Networkconfiguration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/schema-cscfg-networkconfiguration.md
description: Learn about the child elements of the NetworkConfiguration element
Previously updated : 10/14/2020 Last updated : 02/21/2023 -
-author: tagore
++ # Azure Cloud Services (classic) Config NetworkConfiguration Schema
cloud-services Schema Cscfg Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/schema-cscfg-role.md
description: The Role element of a service configuration file specifies how many
Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Azure Cloud Services (classic) Config Role Schema
cloud-services Schema Csdef File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/schema-csdef-file.md
description: A service definition (.csdef) file defines a service model for an a
Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Azure Cloud Services (classic) Definition Schema (.csdef File)
cloud-services Schema Csdef Loadbalancerprobe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/schema-csdef-loadbalancerprobe.md
description: The customer defined LoadBalancerProbe is a health probe of endpoin
Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Azure Cloud Services (classic) Definition LoadBalancerProbe Schema
cloud-services Schema Csdef Networktrafficrules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/schema-csdef-networktrafficrules.md
description: Learn about NetworkTrafficRules, which limits the roles that can ac
Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Azure Cloud Services (classic) Definition NetworkTrafficRules Schema
cloud-services Schema Csdef Webrole https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/schema-csdef-webrole.md
description: Azure web role is customized for web application programming suppor
Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Azure Cloud Services (classic) Definition WebRole Schema
cloud-services Schema Csdef Workerrole https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/schema-csdef-workerrole.md
description: The Azure worker role is used for generalized development and may p
Previously updated : 10/14/2020 Last updated : 02/21/2023 -+ # Azure Cloud Services (classic) Definition WorkerRole Schema
cognitive-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Services description: Lists Azure Policy built-in policy definitions for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
communication-services Calling Chat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/calling-chat.md
# Teams Interoperability: Calling and chat > [!IMPORTANT]
-> Calling and chat interoperability is in private preview, and restricted to a limited number of Azure Communication Services early adopters. You can [submit this form to request participation in the preview](https://forms.office.com/r/F3WLqPjw0D), and we'll review your scenario(s) and evaluate your participation in the preview.
+> Calling and chat interoperability is in private preview and restricted to a limited number of Azure Communication Services early adopters. You can [submit this form to request participation in the preview](https://forms.office.com/r/F3WLqPjw0D), and we'll review your scenario(s) and evaluate your participation in the preview.
>
-> Private Preview APIs and SDKs are provided without a service-level agreement, aren't appropriate for production workloads, and should only be used with test users and test data. Certain features may not be supported or have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> Private Preview APIs and SDKs are provided without a service-level agreement, aren't appropriate for production workloads, and should only be used with test users and data. Certain features may not be supported or have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
> > For support, questions, or to provide feedback or report issues, please use the [Teams interop ad hoc calling and chat channel](https://teams.microsoft.com/l/channel/19%3abfc7d5e0b883455e80c9509e60f908fb%40thread.tacv2/Teams%2520Interop%2520ad%2520hoc%2520calling%2520and%2520chat?groupId=d78f76f3-4229-4262-abfb-172587b7a6bb&tenantId=72f988bf-86f1-41af-91ab-2d7cd011db47). You must be a member of the Azure Communication Service TAP team.
-As part of this preview, the Azure Communication Services SDKs can be used to build applications that enable bring your own identity (BYOI) users to start 1:1 calls or 1:n chats with Teams users. [Standard Azure Communication Services pricing](https://azure.microsoft.com/pricing/details/communication-services/) applies to these users, but there's no extra fee for the interoperability capability itself. Custom applications built with Azure Communication Services to connect and communicate with Teams users or Teams voice applications can be used by end users or by bots, and there's no differentiation in how they appear to Teams users in Teams applications unless explicitly indicated by the developer of the application with a display name.
+As part of this preview, the Azure Communication Services SDKs can be used to build applications that enable bring your own identity (BYOI) users to start 1:1 calls or 1:n chats with Teams users. [Standard Azure Communication Services pricing](https://azure.microsoft.com/pricing/details/communication-services/) applies to these users, but there's no extra fee for the interoperability capability. Custom applications built with Azure Communication Services to connect and communicate with Teams users or Teams voice applications can be used by end users or by bots, and there's no differentiation in how they appear to Teams users in Teams applications unless explicitly indicated by the developer of the application with a display name.
-To enable calling and chat between your Communication Services users and your Teams tenant, allow your tenant via the [form](https://forms.office.com/r/F3WLqPjw0D) and enable the connection between the tenant and Communication Services resource.
+To enable calling and chat between your Communication Services users and Teams tenant, allowlist your tenant via the [form](https://forms.office.com/r/F3WLqPjw0D) and enable the connection between the tenant and Communication Services resource.
-## Enabling calling and chat interoperability in your Teams tenant
-Azure AD user with [Teams administrator role](../../../active-directory/roles/permissions-reference.md#teams-administrator) can run PowerShell cmdlet with MicrosoftTeams module to enable the Communication Services resource in the tenant. First, open the PowerShell and validate the existence of the Teams module with the following command:
+## Enable interoperability in your Teams tenant
+An Azure AD user with the [Teams administrator role](../../../active-directory/roles/permissions-reference.md#teams-administrator) can run PowerShell cmdlets from the MicrosoftTeams module to enable the Communication Services resource in the tenant.
+
+### 1. Prepare the Microsoft Teams module
+
+First, open PowerShell and validate the existence of the Teams module with the following command:
```script Get-module *teams* ```
-If you don't see the MicrosoftTeams module, you need to install it first. To install the module, you need to run PowerShell as an administrator. Then run the following command:
+If you don't see the `MicrosoftTeams` module, install it first. To install the module, you need to run PowerShell as an administrator. Then run the following command:
```script Install-Module -Name MicrosoftTeams ```
-You'll be informed about the modules that are going to be installed, which you can confirm with a `Y` or `A` answer. If the module is installed but is outdated, you can run the following command to update the module:
+You'll be informed about the modules that will be installed, which you can confirm with a `Y` or `A` answer. If the module is installed but is outdated, you can run the following command to update the module:
```script Update-Module MicrosoftTeams ```
+### 2. Connect to the Microsoft Teams module
+ When the module is installed and ready, you can connect to the MicrosoftTeams module with the following command. You'll be prompted with an interactive window to log in. The user account that you're going to use needs to have Teams administrator permissions. Otherwise, you might get an `access denied` response in the next steps. ```script Connect-MicrosoftTeams ```
+### 3. Enable tenant configuration
+
+Interoperability with Communication Services resources is controlled via the tenant configuration and the assigned policy. A Teams tenant has a single tenant configuration, and Teams users have either the global policy or a custom policy assigned. The following table shows possible scenarios and their impact on interoperability (a quick way to inspect the current values is sketched after the table).
+
+| Tenant configuration | Global policy | Custom policy | Assigned policy | Interoperability |
+| | | | | |
+| True | True | True | Global | **Enabled** |
+| True | True | True | Custom | **Enabled** |
+| True | True | False | Global | **Enabled** |
+| True | True | False | Custom | Disabled |
+| True | False | True | Global | Disabled |
+| True | False | True | Custom | **Enabled** |
+| True | False | False | Global | Disabled |
+| True | False | False | Custom | Disabled |
+| False | True | True | Global | Disabled |
+| False | True | True | Custom | Disabled |
+| False | True | False | Global | Disabled |
+| False | True | False | Custom | Disabled |
+| False | False | True | Global | Disabled |
+| False | False | True | Custom | Disabled |
+| False | False | False | Global | Disabled |
+| False | False | False | Custom | Disabled |
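Before changing anything, it can help to check the current state of the tenant configuration and policies. A minimal sketch, assuming the `Get-` counterparts of the cmdlets used in the next steps are available in the MicrosoftTeams module:

```script
Get-CsTeamsAcsFederationConfiguration          # current tenant configuration
Get-CsExternalAccessPolicy -Identity Global    # current global external access policy
```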
After successful login, you can run the cmdlet [Set-CsTeamsAcsFederationConfiguration](/powershell/module/teams/set-csteamsacsfederationconfiguration) to enable the Communication Services resource in your tenant. Replace the text `IMMUTABLE_RESOURCE_ID` with the immutable resource ID of your communication resource. You can find more details on how to get this information [here](../troubleshooting-info.md#getting-immutable-resource-id). ```script
$allowlist = @('IMMUTABLE_RESOURCE_ID')
Set-CsTeamsAcsFederationConfiguration -EnableAcsUsers $True -AllowedAcsResources $allowlist ```
+### 4. Enable tenant policy
+
+Each Teams user is assigned an `External Access Policy` that determines whether Communication Services users can call that Teams user. Use the cmdlet
+[Set-CsExternalAccessPolicy](/powershell/module/skype/set-csexternalaccesspolicy) to ensure that the policy assigned to the Teams user sets `EnableAcsFederationAccess` to `$true`.
+
+```script
+Set-CsExternalAccessPolicy -Identity Global -EnableAcsFederationAccess $true
+```
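If only specific Teams users should be reachable from Communication Services, a custom policy can be created and granted per user instead of changing the global policy. A hedged sketch (the policy name and user principal name are hypothetical, and this assumes `New-CsExternalAccessPolicy` and `Grant-CsExternalAccessPolicy` accept the same federation parameter):

```script
# Create a custom external access policy with ACS federation enabled,
# then assign it to a single Teams user.
New-CsExternalAccessPolicy -Identity AcsFederationAllowed -EnableAcsFederationAccess $true
Grant-CsExternalAccessPolicy -Identity "user@contoso.com" -PolicyName AcsFederationAllowed
```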
+ ## Get Teams user ID
const call = callAgent.startCall([teamsCallee]);
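The snippet above assumes a `teamsCallee` identifier has already been built. A minimal sketch of what that might look like (the object ID is a placeholder; it's the Teams user's Azure AD object ID, which your application must obtain separately, for example via Microsoft Graph):

```js
// Hypothetical: a Teams callee is addressed by their Azure AD object ID.
const teamsCallee = { microsoftTeamsUserId: "<AAD-OBJECT-ID>" };
const call = callAgent.startCall([teamsCallee]);
```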
- Communication Services call recording isn't available for 1:1 calls. - Advanced call routing capabilities such as call forwarding, group call pickup, simultaneous ringing, and voice mail aren't supported. - Teams users can't set Communication Services users as forwarding/transfer targets.-- There are many features in the Teams client that don't work as expected during 1:1 calls with Communication Services users.
+- Many features in the Teams client don't work as expected during 1:1 calls with Communication Services users.
- Third-party [devices for Teams](/MicrosoftTeams/devices/teams-ip-phones) and [Skype IP phones](/skypeforbusiness/certification/devices-ip-phones) aren't supported. ## Chat
-With the Chat SDK, Communication Services users or endpoints can have group chats with Teams users, identified by their Azure Active Directory (AAD) object ID. You can easily modify an existing application that creates chats with other Communication Services users to create chats with Teams users instead. Here is an example of how to use the Chat SDK to add Teams users as participants. To learn how to use Chat SDK to send a message, manage participants, and more, see our [quickstart](../../quickstarts/chat/get-started.md?pivots=programming-language-javascript).
+With the Chat SDK, Communication Services users or endpoints can have group chats with Teams users, identified by their Azure Active Directory (Azure AD) object ID. You can easily modify an existing application that creates chats with other Communication Services users to create chats with Teams users instead. Here is an example of how to use the Chat SDK to add Teams users as participants. To learn how to use Chat SDK to send a message, manage participants, and more, see our [quickstart](../../quickstarts/chat/get-started.md?pivots=programming-language-javascript).
Creating a chat with a Teams user: ```js
createChatThreadRequest, createChatThreadOptions );
const threadId = createChatThreadResult.chatThread.id; return threadId; } ```
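The fragment above is the tail of a longer sample. A hedged reconstruction of the full call with the JavaScript Chat SDK might look like the following (names such as `chatClient` and the placeholder object ID are assumptions, not the published sample verbatim):

```js
async function createChatThreadWithTeamsUser(chatClient) {
  const createChatThreadRequest = { topic: "Interop chat" };
  const createChatThreadOptions = {
    participants: [
      // Teams participants are identified by their Azure AD object ID.
      { id: { microsoftTeamsUserId: "<AAD-OBJECT-ID>" }, displayName: "Teams user" },
    ],
  };
  const createChatThreadResult = await chatClient.createChatThread(
    createChatThreadRequest,
    createChatThreadOptions
  );
  const threadId = createChatThreadResult.chatThread.id;
  return threadId;
}
```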
-To make testing easier, we've published a sample app [here](https://github.com/Azure-Samples/communication-services-web-chat-hero/tree/teams-interop-chat-adhoc). Update the app with your Communication Services resource and interop enabled Teams tenant to get started.
+To make testing easier, we've published a sample app [here](https://github.com/Azure-Samples/communication-services-web-chat-hero/tree/teams-interop-chat-adhoc). Update the app with your Communication Services resource and interop-enabled Teams tenant to get started.
**Limitations and known issues** </br> While in private preview, a Communication Services user can do various actions using the Communication Services Chat SDK, including sending and receiving plain and rich text messages, typing indicators, read receipts, real-time notifications, and more. However, most of the Teams chat features aren't supported. Here are some key behaviors and known issues: - Communication Services users can only initiate chats. - Communication Services users can't send or receive GIFs, images, or files. Links to files and images can be shared.-- Communication Services users can delete the chat. This removes the Teams user from the chat thread and hides the message history from the Teams client.-- Known issue: Communication Services users aren't displayed correctly in the participant list. They're currently displayed as External, but their people cards might need to be more consistent.
+- Communication Services users can delete the chat. This action removes the Teams user from the chat thread and hides the message history from the Teams client.
+- Known issue: Communication Services users aren't displayed correctly in the participant list. They're currently displayed as External, but their people cards show inconsistent data.
- Known issue: A chat can't be escalated to a call from within the Teams app. - Known issue: Editing of messages by the Teams user isn't supported. ## Privacy Interoperability between Azure Communication Services and Microsoft Teams enables your applications and users to participate in Teams calls, meetings, and chats. It is your responsibility to ensure that the users of your application are notified when recording or transcription are enabled in a Teams call or meeting.
-Microsoft will indicate to you via the Azure Communication Services API that recording or transcription has commenced. You must communicate this fact in real time to your users within your application's user interface. You agree to indemnify Microsoft for all costs and damages incurred due to your failure to comply with this obligation.
+Microsoft will indicate via the Azure Communication Services API that recording or transcription has commenced. You must communicate this fact in real time to your users within your application's user interface. You agree to indemnify Microsoft for all costs and damages incurred due to your failure to comply with this obligation.
communication-services Emergency Calling Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/emergency-calling-concept.md
The Azure Communication Calling SDK can be used to add Enhanced Emergency dialing and Public Safety Answering Point (PSAP) call-back support to your applications in the United States (US), Puerto Rico (PR), the United Kingdom (GB), and Canada (CA). The capability to dial 911 (in the US, PR, and CA) and 999 or 112 (in GB) and receive a call-back may be a requirement for your application. Verify the Emergency Calling requirements with your legal counsel.
-Calls to an emergency number are routed over the Microsoft network. Microsoft assigns a temporary phone number as the Call Line Identity (CLI) when an emergency call from the US, PR, GB, or CA are placed. Microsoft temporarily maintains a mapping of the phone number to the caller's identity. If there is a call-back from the PSAP, we route the call directly to the originating caller. The caller can accept incoming PSAP call even if inbound calling is disabled.
+Calls to an emergency number are routed over the Microsoft network. Microsoft assigns a temporary phone number as the Call Line Identity (CLI) when an emergency call from the US, PR, GB, or CA is placed. Microsoft temporarily maintains a mapping of the phone number to the caller's identity. If there's a call-back from the PSAP, we route the call directly to the originating caller. The caller can accept an incoming PSAP call even if inbound calling is disabled.
The service is available for Microsoft phone numbers. It requires that the Azure resource from which the emergency call originates has a Microsoft-issued phone number enabled with outbound dialing (also referred to as 'make calls').
Azure Communication Services direct routing is currently in public preview and n
1. Microsoft validates the Azure resource has a Microsoft phone number enabled for outbound dialing 1. Microsoft Azure Communication Services emergency service replaces the user's phone number `alternateCallerId` with a temporary unique phone number. This number allocation remains in place for at least 60 minutes from the time the emergency number is first dialed 1. Microsoft maintains a temporary record (for approximately 60 minutes) mapping the user's identity to the unique phone number
-1. The emergency call will be first routed to a call center where an agent will request the caller's address
-1. The call center will then route the call to the appropriate PSAP in a proper region
+1. In the US, PR, and CA, the emergency call is first routed to a call center where an agent requests the caller's address. The call center then routes the call to the appropriate PSAP in the proper region
1. If the emergency call is unexpectedly dropped, the PSAP then makes a call-back to the user 1. On receiving the call-back within 60 minutes, Microsoft will route the inbound call directly to the user identity that initiated the emergency call
-## Enabling Emergency calling
+## Enabling Emergency Calling
Emergency dialing is automatically enabled for all users of the Azure Communication Client Calling SDK with an acquired Microsoft telephone number that is enabled for outbound dialing in the Azure resource. To use emergency calling with Microsoft phone numbers, follow these steps:
Emergency dialing is automatically enabled for all users of the Azure Communicat
1. If the caller is outside of the supported countries, the call to 911 won't be permitted
-1. When testing your application in the US, dial 933 instead of 911. 933 is enabled for testing purposes; the recorded message will confirm the phone number the emergency call originates from. You should hear a temporary number assigned by Microsoft, which isn't the `alternateCallerId` provided by the application
+1. When testing your application in the US, dial 933 instead of 911. 933 is enabled for testing purposes; the recorded message confirms the phone number the emergency call originates from. You should hear a temporary number assigned by Microsoft, which isn't the `alternateCallerId` provided by the application (see the sketch after this list)
1. Ensure your application supports [receiving an incoming call](../../how-tos/calling-sdk/manage-calls.md#receive-an-incoming-call) so call-backs from the PSAP are appropriately routed to the originator of the emergency call. To test that inbound calling works correctly, place inbound VoIP calls to the user of the Calling SDK
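A hedged sketch of such a 933 test call with the JavaScript Calling SDK (the alternate caller ID is a placeholder, and this assumes a `callAgent` already created for a user whose resource has an outbound-enabled Microsoft number):

```js
// Hypothetical: dial the US test number 933 and pass an alternateCallerId.
const call = callAgent.startCall(
  [{ phoneNumber: "933" }],
  { alternateCallerId: { phoneNumber: "+14255550123" } }
);
```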
The Emergency service is temporarily free to use for Azure Communication Service
## Emergency calling with Azure Communication Services direct routing An emergency call is a regular call from a direct routing perspective. If you want to implement emergency calling with Azure Communication Services direct routing, you need to make sure that there's a routing rule for your emergency number (911, 112, etc.). You also need to make sure that your carrier processes emergency calls properly.
-There's also an option to use purchased number as a caller ID for direct routing calls. In such case, if there's no voice routing rule for emergency number, the call will fall back to Microsoft network, and we'll treat it as a regular emergency call. Learn more about [voice routing fall back](./direct-routing-provisioning.md#outbound-voice-routing-considerations).
+There's also an option to use a purchased number as a caller ID for direct routing calls. In that case, if there's no voice routing rule for the emergency number, the call falls back to the Microsoft network, and we treat it as a regular emergency call. Learn more about [voice routing fall back](./direct-routing-provisioning.md#outbound-voice-routing-considerations).
## Next steps ### Quickstarts - [Outbound call to a phone number](../../quickstarts/telephony/pstn-call.md)-- [Add Emergency Calling to your app](../../quickstarts/telephony/pstn-call.md)
+- [Add Emergency Calling to your app](../../quickstarts/telephony/emergency-calling.md)
connectors Connectors Create Api Onedrive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-onedrive.md
- Title: Access and manage files in Microsoft OneDrive
-description: Upload and manage files in OneDrive by creating automated workflows in Azure Logic Apps.
--- Previously updated : 10/18/2016
-tags: connectors
--
-# Access and manage files in OneDrive connector by using Azure Logic Apps
-
-By using [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and the [OneDrive connector](/connectors/onedriveconnector/), you can create automated tasks and workflows to manage your files, including upload, get, delete files, and more. With OneDrive, you can perform these tasks:
-
-* Build your workflow by storing files in OneDrive, or update existing files in OneDrive.
-* Use triggers to start your workflow when a file is created or updated within your OneDrive.
-* Use actions to create a file, delete a file, and more. For example, when a new Office 365 email is received with an attachment (a trigger), create a new file in OneDrive (an action).
-
-This article shows you how to use the OneDrive connector in a logic app, and also lists the triggers and actions.
-
-To learn more about Logic Apps, see [What are logic apps](../logic-apps/logic-apps-overview.md) and [create a logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md).
-
-## Connect to OneDrive
-
-Before your logic app can access any service, you first create a *connection* to the service. A connection provides connectivity between a logic app and another service. For example, to connect to OneDrive, you first need a OneDrive *connection*. To create a connection, enter the credentials you normally use to access the service you wish to connect to. So, with OneDrive, enter the credentials to your OneDrive account to create the connection.
-
-### Create the connection
--
-## Use a trigger
-
-A trigger is an event that can be used to start the workflow defined in a logic app. Triggers "poll" the service at an interval and frequency that you want. [Learn more about triggers](../logic-apps/logic-apps-overview.md#logic-app-concepts).
-
-1. In the Logic App Designer, type `onedrive` to get a list of the triggers:
-
- ![A dialog box titled "Show Microsoft managed A P I's" has a box that contains "onedrive". Below that is a list of four triggers. The first of these is "OneDrive - When a file is created". The second, "OneDrive - When a file is modified", has been selected.](./media/connectors-create-api-onedrive/onedrive-1.png)
-
-2. Select **When a file is modified**. If a connection already exists, then select the Show Picker button to select a folder.
-
- ![A dialog box titled "When a file is modified" has a box titled "FOLDER" with an associated browse button.](./media/connectors-create-api-onedrive/sample-folder.png)
-
- If you are prompted to sign in, then enter the sign in details to create the connection. [Create the connection](connectors-create-api-onedrive.md#create-the-connection) in this article lists the steps.
-
- In this example, the logic app runs when a file in the folder you choose is updated. To see the results of this trigger, add another action that sends you an email. For example, add the Office 365 Outlook *Send an email* action that emails you when a file is updated.
-
-3. Select the **Edit** button and set the **Frequency** and **Interval** values. For example, if you want the trigger to poll every 15 minutes, then set the **Frequency** to **Minute**, and set the **Interval** to **15**.
-
- ![A dialog box titled "When a file is modified" shows five boxes labeled: "FOLDER", "FREQUENCY", "INTERVAL", "TIMEZONE", and "START TIME". There are drop-down lists for the "FREQUENCY" and "TIME ZONE" fields.](./media/connectors-create-api-onedrive/trigger-properties.png)
-
-4. **Save** your changes (top left corner of the toolbar). Your logic app is saved and may be automatically enabled.
-
-## Use an action
-
-An action is an operation carried out by the workflow defined in a logic app. [Learn more about actions](../logic-apps/logic-apps-overview.md#logic-app-concepts).
-
-1. Select the plus sign. You see several choices: **Add an action**, **Add a condition**, or one of the **More** options.
-
- ![A screenshot shows four buttons: "+ New Step", "Add an action", "Add a condition", and "...More".](./media/connectors-create-api-onedrive/add-action.png)
-
-2. Choose **Add an action**.
-
-3. In the search box, type `onedrive` to get a list of all the available actions.
-
- ![A dialog box titled "Show Microsoft managed A P I's" has a box that contains "onedrive". Below that is a list of eight actions. The first is "OneDrive - Create file", and it is selected.](./media/connectors-create-api-onedrive/onedrive-actions.png)
-
-4. In our example, choose **OneDrive - Create file**. If a connection already exists, then select the **Folder Path** to put the file, enter the **File Name**, and choose the **File Content** you want:
-
- ![A dialog box titled "Create file" shows three boxes labeled "FOLDER PATH", "FILE NAME", and "FOLDER CONTENT". There is a directory browse button next to the "FOLDER PATH" box.](./media/connectors-create-api-onedrive/sample-action.png)
-
- If you are prompted for the connection information, enter the details to [create the connection as described](#create-the-connection) in this topic.
-
- In this example, you create a new file in a OneDrive folder. You can use output from another trigger to create the OneDrive file. For example, add the Office 365 Outlook *When a new email arrives* trigger. Then add the OneDrive *Create file* action that uses the Attachments and Content-Type fields within a ForEach to create the new file in OneDrive.
-
- ![A dialog box titled "For each" has a box labeled "SELECT AN OUTPUT FROM PREVIOUS STEPS" which contains "Attachments". There is a "Create file" dialog box covering the remainder of the "For each" box, with boxes labeled "FOLDER PATH", "FILE NAME", and "FILE CONTENT". ](./media/connectors-create-api-onedrive/foreach-action.png)
-
-5. **Save** your changes (top left corner of the toolbar). Your logic app is saved and may be automatically enabled.
-
-## Connector-specific details
-
-View any triggers and actions defined in the swagger, and also see any limits in the [connector details](/connectors/onedriveconnector/).
-
-## Next steps
-
-* [Connectors for Azure Logic Apps](apis-list.md)
connectors Connectors Create Api Onedriveforbusiness https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-onedriveforbusiness.md
- Title: Connect to OneDrive for Business
-description: Upload and manage files in OneDrive for Business using Azure Logic Apps.
--- Previously updated : 08/18/2016
-tags: connectors
--
-# Connect to OneDrive for Business using Azure Logic Apps
-
-Connect to OneDrive for Business to manage your files. You can perform various actions such as upload, update, get, and delete on files.
-
-You can get started by creating a logic app now, see [Create a logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md).
-
-## Create a connection to OneDrive for Business
-To create Logic apps with OneDrive for Business, you must first create a **connection** then provide the details for the following properties:
-
-| Property | Required | Description |
-| | | |
-| Token |Yes |Provide OneDrive for Business Credentials |
-
-After you create the connection, you can use it to execute the actions and listen for the triggers described in this article.
-
-> [!INCLUDE [Steps to create a connection to OneDrive for Business](../../includes/connectors-create-api-onedriveforbusiness.md)]
->
-
-## Connector-specific details
-
-View any triggers and actions defined in the swagger, and also see any limits in the [connector details](/connectors/onedriveforbusinessconnector/).
-
-## More connectors
-Go back to the [APIs list](apis-list.md).
connectors Connectors Create Api Outlook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-outlook.md
- Title: Connect to Outlook.com
-description: Automate tasks and workflows that manage email, calendars, and contacts in Outlook.com using Azure Logic Apps.
--- Previously updated : 08/18/2016
-tags: connectors
--
-# Connect to Outlook.com from Azure Logic Apps
-
-With [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and the [Outlook.com connector](/connectors/outlook/), you can create automated tasks and workflows that manage your @outlook.com or @hotmail.com account by building logic apps. For example, you automate these tasks:
-
-* Get, send, and reply to email.
-* Schedule meetings on your calendar.
-* Add and edit contacts.
-
-You can use any trigger to start your workflow, for example, when a new email arrives, when a calendar item is updated, or when an event happens in a difference service. You can use actions that respond to the trigger event, for example, send an email or create a new calendar event.
-
-> [!NOTE]
-> To automate tasks for a Microsoft work account such as @fabrikam.onmicrosoft.com, use the
-> [Office 365 Outlook connector](../connectors/connectors-create-api-office365-outlook.md).
-
-## Prerequisites
-
-* An [Outlook.com account](https://outlook.live.com/owa/)
-
-* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-
-* The logic app where you want to access your Outlook.com account. To start your workflow with an Outlook.com trigger, you need to have a [blank logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md). To add an Outlook.com action to your workflow, your logic app needs to already have a trigger.
-
-## Add a trigger
-
-A [trigger](../logic-apps/logic-apps-overview.md#logic-app-concepts) is an event that starts the workflow in your logic app. This example logic app uses a "polling" trigger that checks for any new email in your email account, based on the specified interval and frequency.
-
-1. In the [Azure portal](https://portal.azure.com), open your blank logic app in the Logic App Designer.
-
-1. In the search box, enter "outlook.com" as your filter. For this example, select **When a new email arrives**.
-
-1. If you're prompted to sign in, provide your Outlook.com credentials so that your logic app can connect to your account. Otherwise, if your connection already exists, provide the information for the trigger properties.
-
-1. In the trigger, set the **Frequency** and **Interval** values.
-
- For example, if you want the trigger to poll every 15 minutes, set the **Frequency** to **Minute**, and set the **Interval** to **15**.
-
-1. On the designer toolbar, select **Save** to save your logic app.
-
-To respond to the trigger, add another action. For example, you can add the Twilio **Send message** action, which sends a text message when an email arrives.
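-For reference, here's a minimal sketch of how a polling trigger like this might appear in the workflow definition's code view, assuming a connection named "outlook". The operation path shown is an illustrative assumption, not a value confirmed by this article:
-
-```json
-{
-  "triggers": {
-    "When_a_new_email_arrives": {
-      "type": "ApiConnection",
-      "recurrence": {
-        "frequency": "Minute",
-        "interval": 15
-      },
-      "inputs": {
-        "host": {
-          "connection": {
-            "name": "@parameters('$connections')['outlook']['connectionId']"
-          }
-        },
-        "method": "get",
-        "path": "/Mail/OnNewEmail"
-      }
-    }
-  }
-}
-```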
-
-## Add an action
-
-An [action](../logic-apps/logic-apps-overview.md#logic-app-concepts) is an operation that's run by the workflow in your logic app. This example logic app sends an email from your Outlook.com account. You can use the output from another trigger to populate the action. For example, suppose your logic app uses the Salesforce **When an object is created** trigger. You can add the Outlook.com **Send an email** action and use the outputs from the Salesforce trigger in the email.
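-As a rough sketch of that pattern, the following action definition references the trigger's output through `triggerBody()`. The connection name, operation path, and field names are hypothetical:
-
-```json
-{
-  "Send_an_email": {
-    "type": "ApiConnection",
-    "runAfter": {},
-    "inputs": {
-      "host": {
-        "connection": {
-          "name": "@parameters('$connections')['outlook']['connectionId']"
-        }
-      },
-      "method": "post",
-      "path": "/Mail",
-      "body": {
-        "To": "admin@contoso.com",
-        "Subject": "New object created: @{triggerBody()?['Name']}",
-        "Body": "A new record was created in Salesforce."
-      }
-    }
-  }
-}
-```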
-
-1. In the [Azure portal](https://portal.azure.com), open your logic app in the Logic App Designer.
-
-1. To add an action as the last step in your workflow, select **New step**.
-
- To add an action between steps, move your pointer over the arrow between those steps. Select the plus sign (**+**) that appears, and then select **Add an action**.
-
-1. In the search box, enter "outlook.com" as your filter. For this example, select **Send an email**.
-
-1. If you're prompted to sign in, provide your Outlook.com credentials so that your logic app can connect to your account. Otherwise, if your connection already exists, provide the information for the action properties.
-
-1. On the designer toolbar, select **Save** to save your logic app.
-
-## Connector reference
-
-For technical details, such as triggers, actions, and limits, as described by the connector's Swagger file, see the [connector's reference page](/connectors/outlook/).
-
-## Next steps
-
-* Learn about other [Logic Apps connectors](../connectors/apis-list.md)
connectors Connectors Create Api Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-salesforce.md
- Title: Connect to Salesforce from Azure Logic Apps
-description: Automate tasks and workflows that monitor, create, and manage Salesforce records and jobs using Azure Logic Apps.
--- Previously updated : 08/24/2018
-tags: connectors
--
-# Connect to Salesforce from Azure Logic Apps
-
-With Azure Logic Apps and the Salesforce connector,
-you can create automated tasks and workflows for your
-Salesforce resources, such as records, jobs, and objects.
-For example, you can:
-
-* Monitor when records are created or changed.
-* Create, get, and manage jobs and records,
-including insert, update, and delete actions.
-
-You can use Salesforce triggers that get responses from Salesforce
-and make the output available to other actions. You can use actions
-in your logic apps to perform tasks with Salesforce resources.
-If you're new to logic apps, review
-[What is Azure Logic Apps?](../logic-apps/logic-apps-overview.md)
-
-## Prerequisites
-
-* An Azure account and subscription. If you don't have an Azure subscription,
-[sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-
-* A [Salesforce account](https://salesforce.com/)
-
-* Basic knowledge about
-[how to create logic apps](../logic-apps/quickstart-create-first-logic-app-workflow.md)
-
-* The logic app where you want to access your Salesforce account.
-To start with a Salesforce trigger, [create a blank logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md).
-To use a Salesforce action, start your logic app with another trigger,
-for example, the **Recurrence** trigger.
-
-## Connect to Salesforce
--
-1. Sign in to the [Azure portal](https://portal.azure.com),
-and open your logic app in Logic App Designer, if not open already.
-
-1. Choose a path:
-
- * For blank logic apps, in the search box,
- enter "salesforce" as your filter.
- Under the triggers list, select the trigger you want.
-
- -or-
-
- * For existing logic apps, under the step where you want
- to add an action, choose **New step**. In the search box,
- enter "salesforce" as your filter. Under the actions list,
- select the action you want.
-
-1. If you're prompted to sign in to Salesforce, sign in now
-and allow access.
-
- Your credentials authorize your logic app to create
- a connection to Salesforce and access your data.
-
-1. Provide the necessary details for your selected trigger or
-action and continue building your logic app's workflow.
-
-## Connector reference
-
-For technical details about triggers, actions, and limits, which are
-described by the connector's OpenAPI (formerly Swagger) description,
-review the connector's [reference page](/connectors/salesforce/).
-
-## Get support
-
-* For questions, visit the [Microsoft Q&A question page for Azure Logic Apps](/answers/topics/azure-logic-apps.html).
-* To submit or vote on feature ideas, visit the [Logic Apps user feedback site](https://aka.ms/logicapps-wish).
-
-## Next steps
-
-* Learn about other [Logic Apps connectors](../connectors/apis-list.md)
connectors Connectors Create Api Sftp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-sftp.md
- Title: Connect to SFTP (Deprecated)
-description: Connect to an SFTP server from workflows in Azure Logic Apps.
---- Previously updated : 10/01/2022
-tags: connectors
---
-# Connect to SFTP from workflows in Azure Logic Apps (Deprecated)
-
-The SFTP connector is deprecated, so this connector's operations no longer appear in the workflow designer. However, you can use the [SFTP-SSH connector](/connectors/sftpwithssh/) instead. For more information, see [Connect to an SFTP file server using SSH in Azure Logic Apps](connectors-sftp-ssh.md).
-
-Next steps:
-
-* [Managed connector reference for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors)
connectors Connectors Create Api Sharepoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-sharepoint.md
- Title: Connect to SharePoint
-description: Monitor and manage resources in SharePoint Online or SharePoint Server on premises using Azure Logic Apps.
--- Previously updated : 08/11/2021
-tags: connectors
--
-# Connect to SharePoint from Azure Logic Apps
-
-To automate tasks that monitor and manage resources, such as files, folders, lists, and items, in SharePoint Online or in on-premises SharePoint Server, you can create automated integration workflows by using Azure Logic Apps and the SharePoint connector.
-
-The following list describes example tasks that you can automate:
-
-* Monitor when files or items are created, changed, or deleted.
-* Create, get, update, or delete items.
-* Add, get, or delete attachments. Get the content from attachments.
-* Create, copy, update, or delete files.
-* Update file properties. Get the content, metadata, or properties for a file.
-* List or extract folders.
-* Get lists or list views.
-* Set content approval status.
-* Resolve persons.
-* Send HTTP requests to SharePoint.
-* Get entity values.
-
-In your logic app workflow, you can use a trigger that monitors events in SharePoint and makes the output available to other actions. You can then use actions to perform various tasks in SharePoint. You can also include other actions that use the output from SharePoint actions. For example, if you regularly retrieve files from SharePoint, you can send email alerts about those files and their content by using the Office 365 Outlook connector or Outlook.com connector. If you're new to logic apps, review [What is Azure Logic Apps?](../logic-apps/logic-apps-overview.md). Or, try this [quickstart to create your first example logic app workflow](../logic-apps/quickstart-create-first-logic-app-workflow.md).
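-For example, here's a rough sketch of what the **Send an HTTP request to SharePoint** action might look like in the workflow definition's code view. The connection name, site URL, and REST endpoint are illustrative assumptions:
-
-```json
-{
-  "Send_an_HTTP_request_to_SharePoint": {
-    "type": "ApiConnection",
-    "runAfter": {},
-    "inputs": {
-      "host": {
-        "connection": {
-          "name": "@parameters('$connections')['sharepointonline']['connectionId']"
-        }
-      },
-      "method": "post",
-      "path": "/datasets/@{encodeURIComponent('https://contoso.sharepoint.com/sites/Marketing')}/httprequest",
-      "body": {
-        "method": "GET",
-        "uri": "_api/web/lists/getbytitle('Documents')/items"
-      }
-    }
-  }
-}
-```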
-
-## Prerequisites
-
-* Your Microsoft Office 365 account credentials that you use with SharePoint, where you sign in with a [work or school account](https://support.microsoft.com/office/what-account-to-use-with-office-and-you-need-one-914e6610-2763-47ac-ab36-602a81068235#bkmk_msavsworkschool).
-
- You need these credentials so that you can authorize your workflow to access your SharePoint account.
-
- > [!NOTE]
- > If you're using [Microsoft Azure operated by 21Vianet](https://portal.azure.cn), Azure Active Directory (Azure AD) authentication
- > works only with an account for Microsoft Office 365 operated by 21Vianet (.cn), not .com accounts.
-
-* Your SharePoint site address
-
-* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-
-* For connections to an on-premises SharePoint server, you need to [install and set up the on-premises data gateway](../logic-apps/logic-apps-gateway-install.md) on a local computer and a [data gateway resource that's already created in Azure](../logic-apps/logic-apps-gateway-connection.md).
-
- You can then select the gateway resource to use when you create the SharePoint Server connection from your workflow.
-
-* The logic app workflow where you need access to your SharePoint site or server.
-
- * To start the workflow with a SharePoint trigger, you need a blank logic app workflow.
- * To add a SharePoint action, your workflow needs to already have a trigger.
-
-## Connector reference
-
-For more technical details about this connector, such as triggers, actions, and limits as described by the connector's Swagger file, review the [connector's reference page](/connectors/sharepoint/).
-
-## Connect to SharePoint
--
-## Add a trigger
-
-1. From the Azure portal, Visual Studio Code, or Visual Studio, open your logic app workflow in the visual designer, if not open already.
-
-1. On the designer, in the search box, enter `sharepoint` as the search term. Select the **SharePoint** connector.
-
-1. From the **Triggers** list, select the trigger that you want to use.
-
-1. When you are prompted to sign in and create a connection, choose one of the following options:
-
- * For SharePoint Online, select **Sign in** and authenticate your user credentials.
- * For SharePoint Server, select **Connect via on-premises data gateway**. Provide the requested information about the gateway resource to use, the authentication type, and other necessary details.
-
-1. When you're done, select **Create**.
-
- After your workflow successfully creates the connection, your selected trigger appears.
-
-1. Provide the information to set up the trigger and continue building your workflow.
-
-## Add an action
-
-1. From the Azure portal, Visual Studio Code, or Visual Studio, open your logic app workflow in the visual designer, if not open already.
-
-1. Choose one of the following options:
-
- * To add an action as the last step in your workflow, select **New step**.
- * To add an action between steps, move your pointer over the arrow between those steps. Select the plus sign (**+**), and then select **Add an action**.
-
-1. Under **Choose an operation**, in the search box, enter `sharepoint` as the search term. Select the **SharePoint** connector.
-
-1. From the **Actions** list, select the action that you want to use.
-
-1. When you are prompted to sign in and create a connection, choose one of the following options:
-
- * For SharePoint Online, select **Sign in** and authenticate your user credentials.
- * For SharePoint Server, select **Connect via on-premises data gateway**. Provide the requested information about the gateway resource to use, the authentication type, and other necessary details.
-
-1. When you're done, select **Create**.
-
- After your workflow successfully creates the connection, your selected action appears.
-
-1. Provide the information to set up the action and continue building your workflow.
-
-## Next steps
-
-Learn about other [Logic Apps connectors](../connectors/apis-list.md)
connectors Connectors Create Api Slack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-slack.md
- Title: Connect to Slack from Azure Logic Apps
-description: Automate tasks and workflows that monitor files and manage channels, groups, and messages in your Slack account using Azure Logic Apps.
--- Previously updated : 08/25/2018
-tags: connectors
--
-# Connect to Slack from Azure Logic Apps
-
-With Azure Logic Apps and the Slack connector,
-you can create automated tasks and workflows that monitor
-your Slack files and manage your Slack channels, messages,
-groups, and so on. For example, you can:
-
-* Monitor when new files are created.
-* Create, list, and join channels.
-* Post messages.
-* Create groups and set do-not-disturb status.
-
-You can use triggers that get responses from your Slack account
-and make the output available to other actions. You can use actions
-that perform tasks with your Slack account. You can also have
-other actions use the output from Slack actions. For example,
-when a new file is created, you can send an email with the
-Office 365 Outlook connector. If you're new to logic apps,
-review [What is Azure Logic Apps?](../logic-apps/logic-apps-overview.md)
-
-## Prerequisites
-
-* An Azure subscription. If you don't have an Azure subscription,
-[sign up for a free Azure account](https://azure.microsoft.com/free/).
-
-* Your [Slack](https://slack.com/) account and user credentials
-
- Your credentials authorize your logic app to create
- a connection and access your Slack account.
-
-* Basic knowledge about
-[how to create logic apps](../logic-apps/quickstart-create-first-logic-app-workflow.md)
-
-* The logic app where you want to access your Slack account.
-To start with a Slack trigger, [create a blank logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md).
-To use a Slack action, start your logic app with a trigger,
-such as a Slack trigger or the **Recurrence** trigger.
-
-## Connect to Slack
--
-1. Sign in to the [Azure portal](https://portal.azure.com),
-and open your logic app in Logic App Designer, if not open already.
-
-1. For blank logic apps, in the search box,
-enter "slack" as your filter. Under the triggers list,
-select the trigger you want.
-
- -or-
-
- For existing logic apps, under the last step where
- you want to add an action, choose **New step**.
- In the search box, enter "slack" as your filter.
- Under the actions list, select the action you want.
-
- To add an action between steps,
- move your pointer over the arrow between steps.
- Choose the plus sign (**+**) that appears,
- and then select **Add an action**.
-
-1. If you're prompted to sign in to Slack,
-sign in to your Slack workspace.
-
- ![Sign in to Slack workspace](./media/connectors-create-api-slack/slack-sign-in-workspace.png)
-
-1. Authorize access for your logic app.
-
- ![Authorize access to Slack](./media/connectors-create-api-slack/slack-authorize-access.png)
-
-1. Provide the necessary details for your selected trigger
-or action. To continue building your logic app's workflow,
-add more actions.
-
-## Connector reference
-
-For technical details about triggers, actions, and limits, which are
-described by the connector's OpenAPI (formerly Swagger) description,
-review the connector's [reference page](/connectors/slack/).
-
-## Get support
-
-* For questions, visit the [Microsoft Q&A question page for Azure Logic Apps](/answers/topics/azure-logic-apps.html).
-* To submit or vote on feature ideas, visit the [Logic Apps user feedback site](https://aka.ms/logicapps-wish).
-
-## Next steps
-
-* Learn about other [Logic Apps connectors](../connectors/apis-list.md)
-
connectors Connectors Sftp Ssh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-sftp-ssh.md
Title: Connect to an SFTP server from workflows
description: Connect to your SFTP file server from workflows using Azure Logic Apps. ms.suite: integration- Previously updated : 01/12/2023 Last updated : 02/21/2023 tags: connectors
This how-to guide shows how to access your [SSH File Transfer Protocol (SFTP)](h
In Consumption logic app workflows, you can use the **SFTP-SSH** *managed* connector, while in Standard logic app workflows, you can use the **SFTP** built-in connector or the **SFTP-SSH** managed connector. You can use these connector operations to create automated workflows that run when triggered by events in your SFTP server or in other systems and run actions to manage files on your SFTP server. Both the managed and built-in connectors use the SSH protocol.
+> [!NOTE]
+>
+> The [**SFTP** *managed* connector](/connectors/sftp/) has been deprecated, so this connector's operations no longer appear in the workflow designer.
+ For example, your workflow can start with an SFTP trigger that monitors and responds to events on your SFTP server. The trigger makes the outputs available to subsequent actions in your workflow. Your workflow can run SFTP actions that get, create, and manage files through your SFTP server account. The following list includes more example tasks: * Monitor when files are added or changed.
container-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/policy-reference.md
Title: Built-in policy definitions for Azure Container Apps
description: Lists Azure Policy built-in policy definitions for Azure Container Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
container-instances Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/policy-reference.md
Previously updated : 01/05/2023 Last updated : 02/21/2023 # Azure Policy built-in definitions for Azure Container Instances
container-registry Container Registry Java Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-java-quickstart.md
For more information, see the following resources:
* [Azure for Java Developers](/azure/java) * [Working with Azure DevOps and Java](/azure/devops/pipelines/ecosystems/java)
-* [Spring Boot on Docker Getting Started](https://spring.io/guides/gs/spring-boot-docker)
+* [Spring Boot on Docker Getting Started](https://spring.io/guides/topicals/spring-boot-docker/)
* [Spring Initializr](https://start.spring.io) * [Deploy a Spring Boot Application to the Azure App Service](/azure/developer/java/spring-framework/deploy-spring-boot-java-app-on-linux#configure-maven-to-build-image-to-your-azure-container-registry) * [Using a custom Docker image for Azure Web App on Linux](../app-service/tutorial-custom-container.md)
container-registry Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/policy-reference.md
Title: Built-in policy definitions for Azure Container Registry
description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
cosmos-db Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy-reference.md
Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
cosmos-db Howto Scale Grow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-scale-grow.md
Previously updated : 01/30/2023 Last updated : 02/20/2023 # Scale a cluster in Azure Cosmos DB for PostgreSQL
queries.
You can increase the capabilities of existing nodes. Adjusting compute capacity up and down can be useful for performance experiments, and short- or long-term changes to traffic demands.
-To change the vCores for all worker nodes, on the **Scale** screen, select a new value under **Compute per node**. To adjust the coordinator node's vCores, expand **Coordinator** and select a new value under **Coordinator computer**.
+To change the vCores for all worker nodes, on the **Scale** screen, select a new value under **Compute per node**. To adjust the coordinator node's vCores, expand **Coordinator** and select a new value under **Coordinator compute**.
> [!NOTE]
-> You can scale compute on [cluster read replicas](concepts-read-replicas.md) independent of its primary cluster's compute.
+> You can scale compute on [cluster read replicas](concepts-read-replicas.md) independent of their primary cluster's compute.
> [!NOTE] > There is a vCore quota per Azure subscription per region. The default quota
nodes before needing to add more worker nodes.
To change the storage amount for all worker nodes, on the **Scale** screen, select a new value under **Storage per node**. To adjust the coordinator node's storage, expand **Coordinator** and select a new value under **Coordinator storage**. > [!NOTE]
-> Once you increase storage and save, you can't decrease the amount of storage by using this form.
+> Once you increase storage and save, you can't decrease the amount of storage.
## Next steps
cost-management-billing Enable Tag Inheritance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/enable-tag-inheritance.md
description: This article explains how to group costs using tag inheritance. Previously updated : 02/16/2023 Last updated : 02/21/2023
You can enable the tag inheritance setting in the Azure portal. You apply the se
To enable tag inheritance in the Azure portal:
-1. In the Azure portal, navigate to Cost Management.
+1. In the Azure portal, search for **Cost Management** and select it (the green hexagon-shaped symbol, *not* Cost Management + Billing).
2. Select a scope. 3. In the left menu under **Settings**, select either **Manage billing account** or **Manage subscription**, depending on your scope. 4. Under **Tag inheritance**, select **Edit**.
data-factory Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/policy-reference.md
Previously updated : 02/14/2023 Last updated : 02/21/2023 # Azure Policy built-in definitions for Data Factory
data-lake-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
data-lake-store Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
databox-online Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/policy-reference.md
Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
databox Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/policy-reference.md
Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
ddos-protection Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/policy-reference.md
Previously updated : 01/05/2023 Last updated : 02/21/2023
defender-for-cloud Azure Devops Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/azure-devops-extension.md
If you don't have access to install the extension, you must request access from
displayName: 'Microsoft Security DevOps' ```
+> [!Note]
+> The MicrosoftSecurityDevOps build task depends on .NET 6. The CredScan analyzer depends on .NET 3.1. For more information, see the [Microsoft Security DevOps extension page](https://marketplace.visualstudio.com/items?itemName=ms-securitydevops.microsoft-security-devops-azdevops).
+ 1. Select **Save and run**. 1. To commit the pipeline, select **Save and run**.
defender-for-cloud Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/github-action.md
Security DevOps uses the following Open Source tools:
# Upload alerts to the Security tab - name: Upload alerts to Security tab
- uses: github/codeql-action/upload-sarif@v1
+ uses: github/codeql-action/upload-sarif@v2
with: sarif_file: ${{ steps.msdo.outputs.sarifFile }} ```
defender-for-cloud Onboard Management Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/onboard-management-group.md
Title: Onboard a management group to Microsoft Defender for Cloud description: Learn how to use a supplied Azure Policy definition to enable Microsoft Defender for Cloud for all the subscriptions in a management group. Previously updated : 01/24/2023 Last updated : 02/21/2023 # Enable Defender for Cloud on all subscriptions in a management group You can use Azure Policy to enable Microsoft Defender for Cloud on all the Azure subscriptions within the same management group (MG). This is more convenient than accessing them individually from the portal, and works even if the subscriptions belong to different owners.
-To onboard a management group and all its subscriptions:
+## Prerequisites
+
+Enable the resource provider `Microsoft.Security` for the management group by using the following Azure CLI command:
+
+```azurecli
+az provider register --namespace Microsoft.Security --management-group-id …
+```
+
+## Onboard a management group and all its subscriptions
+
+**To onboard a management group and all its subscriptions**:
1. As a user with **Security Admin** permissions, open Azure Policy and search for the definition `Enable Microsoft Defender for Cloud on your subscription`.
defender-for-cloud Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/policy-reference.md
Title: Built-in policy definitions for Microsoft Defender for Cloud description: Lists Azure Policy built-in policy definitions for Microsoft Defender for Cloud. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/24/2023 Last updated : 02/21/2023
digital-twins Concepts Apis Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-apis-sdks.md
The data plane APIs are the Azure Digital Twins APIs used to manage the elements
* `DigitalTwins` - The DigitalTwins category contains the APIs that let developers create, modify, and delete [digital twins](concepts-twins-graph.md) and their relationships in an Azure Digital Twins instance. * `Query` - The Query category lets developers [find sets of digital twins in the twin graph](how-to-query-graph.md) across relationships. * `Event Routes` - The Event Routes category contains APIs to [route data](concepts-route-events.md), through the system and to downstream services.
-* `Import Jobs` - The Import Jobs API lets you manage a long running, asynchronous action to [import models, twins, and relationships in bulk](#bulk-import-with-the-import-jobs-api).
+* `Import Jobs` - The Jobs API lets you manage a long-running, asynchronous action to [import models, twins, and relationships in bulk](#bulk-import-with-the-jobs-api).
To call the APIs directly, reference the latest Swagger folder in the [data plane Swagger repo](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/digitaltwins/data-plane/Microsoft.DigitalTwins). This folder also includes a folder of examples that show the usage. You can also view the [data plane API reference documentation](/rest/api/azure-digitaltwins/).
The available helper classes are:
* `DigitalTwinsJsonPropertyName`: Contains the string constants for use in JSON serialization and deserialization for custom digital twin types
-## Bulk import with the Import Jobs API
+## Bulk import with the Jobs API
-The [Import Jobs API](/rest/api/digital-twins/dataplane/import-jobs) is a data plane API that allows you to import a set of models, twins, and/or relationships in a single API call. Import Jobs API operations are also included with the [CLI commands](/cli/azure/dt/job) and [data plane SDKs](#data-plane-apis). Using the Import Jobs API requires use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md).
+The [Jobs API](/rest/api/digital-twins/dataplane/import-jobs) is a data plane API that allows you to import a set of models, twins, and/or relationships in a single API call. Jobs API operations are also included with the [CLI commands](/cli/azure/dt/job) and [data plane SDKs](#data-plane-apis). Using the Jobs API requires use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md).
### Check permissions
-To use the Import Jobs API, you'll need to have write permission in your Azure Digital Twins instance for the following data action categories:
+To use the Jobs API, you'll need to have write permission in your Azure Digital Twins instance for the following data action categories:
* `Microsoft.DigitalTwins/jobs/*` * Any graph elements that you want to include in the Jobs call. This might include `Microsoft.DigitalTwins/models/*`, `Microsoft.DigitalTwins/digitaltwins/*`, and/or `Microsoft.DigitalTwins/digitaltwins/relationships/*`. The built-in role that provides all of these permissions is *Azure Digital Twins Data Owner*. You can also use a custom role to grant granular access to only the data types that you need. For more information about roles in Azure Digital Twins, see [Security for Azure Digital Twins solutions](concepts-security.md#authorization-azure-roles-for-azure-digital-twins). >[!NOTE]
-> If you attempt an Import Jobs API call and you're missing write permissions to one of the graph element types you're trying to import, the job will skip that type and import the others. For example, if you have write access to models and twins, but not relationships, an attempt to bulk import all three types of element will only succeed in importing the models and twins. The job status will reflect a failure and the message will indicate which permissions are missing.
+> If you attempt a Jobs API call and you're missing write permissions to one of the graph element types you're trying to import, the job will skip that type and import the others. For example, if you have write access to models and twins, but not relationships, an attempt to bulk import all three element types will only succeed in importing the models and twins. The job status will reflect a failure and the message will indicate which permissions are missing.
### Format data
Here's a sample input data file for the import API:
>[!TIP] >For a sample project that converts models, twins, and relationships into the NDJSON supported by the import API, see [Azure Digital Twins Bulk Import NDJSON Generator](https://github.com/Azure-Samples/azure-digital-twins-getting-started/tree/main/bulk-import/ndjson-generator). The sample project is written for .NET and can be downloaded or adapted to help you create your own import files.
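As a rough sketch of the NDJSON layout, the following shows the four sections in order. The model, twin, and relationship values are illustrative only, not a complete, validated model set; check the example import file linked above for the authoritative format:

```json
{"Section": "Header"}
{"fileVersion": "1.0.0", "author": "contoso", "organization": "contoso"}
{"Section": "Models"}
{"@id": "dtmi:example:Floor;1", "@type": "Interface", "@context": "dtmi:dtdl:context;2", "displayName": "Floor"}
{"Section": "Twins"}
{"$dtId": "Floor1", "$metadata": {"$model": "dtmi:example:Floor;1"}}
{"Section": "Relationships"}
{"$sourceId": "Floor1", "$relationshipId": "Floor1_contains_Room1", "$targetId": "Room1", "$relationshipName": "contains"}
```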
-Once the file has been created, upload it to a block blob in Azure Blob Storage using your preferred upload method (some options are the [AzCopy command](../storage/common/storage-use-azcopy-blobs-upload.md), the [Azure CLI](../storage/blobs/storage-quickstart-blobs-cli.md#upload-a-blob), or the [Azure portal](https://portal.azure.com)). You'll use the blob storage URL of the NDJSON file in the body of the Import Jobs API call.
+Once the file has been created, upload it to a block blob in Azure Blob Storage using your preferred upload method (some options are the [AzCopy command](../storage/common/storage-use-azcopy-blobs-upload.md), the [Azure CLI](../storage/blobs/storage-quickstart-blobs-cli.md#upload-a-blob), or the [Azure portal](https://portal.azure.com)). You'll use the blob storage URL of the NDJSON file in the body of the Jobs API call.
### Run the import job
-Now you can proceed with calling the [Import Jobs API](/rest/api/digital-twins/dataplane/import-jobs). For detailed instructions on importing a full graph in one API call, see [Upload models, twins, and relationships in bulk with the Import Jobs API](how-to-manage-graph.md#upload-models-twins-and-relationships-in-bulk-with-the-import-jobs-api). You can also use the Import Jobs API to import each resource type independently. For more information on using the Import Jobs API with individual resource types, see Import Jobs API instructions for [models](how-to-manage-model.md#upload-large-model-sets-with-the-import-jobs-api), [twins](how-to-manage-twin.md#create-twins-in-bulk-with-the-import-jobs-api), and [relationships](how-to-manage-graph.md#create-relationships-in-bulk-with-the-import-jobs-api).
+Now you can proceed with calling the [Jobs API](/rest/api/digital-twins/dataplane/import-jobs). For detailed instructions on importing a full graph in one API call, see [Upload models, twins, and relationships in bulk with the Jobs API](how-to-manage-graph.md#upload-models-twins-and-relationships-in-bulk-with-the-jobs-api). You can also use the Jobs API to import each resource type independently. For more information on using the Jobs API with individual resource types, see Jobs API instructions for [models](how-to-manage-model.md#upload-large-model-sets-with-the-jobs-api), [twins](how-to-manage-twin.md#create-twins-in-bulk-with-the-jobs-api), and [relationships](how-to-manage-graph.md#create-relationships-in-bulk-with-the-jobs-api).
-In the body of the API call, you'll provide the blob storage URL of the NDJSON input file, as well as another blob storage URL for where you'd like the output log to be stored.
-As the import job executes, a structured output log is generated by the service and stored as a new append blob in your blob container, according to the output blob URL and name you provided. Here's an example output log for a successful job importing models, twins, and relationships:
+In the body of the API call, you'll provide the blob storage URL of the NDJSON input file. You'll also provide a new blob storage URL to indicate where you'd like the output log to be stored once the service creates it.
+
+As the import job executes, a structured output log is generated by the service and stored as a new append blob in your blob container, at the URL location you specified for the output blob in the request. Here's an example output log for a successful job importing models, twins, and relationships:
```json {"timestamp":"2022-12-30T19:50:34.5540455Z","jobId":"test1","jobType":"Import","logType":"Info","details":{"status":"Started"}}
As the import job executes, a structured output log is generated by the service
{"timestamp":"2022-12-30T19:50:41.3043264Z","jobId":"test1","jobType":"Import","logType":"Info","details":{"status":"Succeeded"}} ```
-When the job is complete, you can see the total number of ingested entities using the [BulkOperationEntityCount metric](how-to-monitor.md#bulk-operation-metrics-from-the-import-jobs-api).
+When the job is complete, you can see the total number of ingested entities using the [BulkOperationEntityCount metric](how-to-monitor.md#bulk-operation-metrics-from-the-jobs-api).
-It's also possible to cancel a running import job with the [Cancel operation](/rest/api/digital-twins/dataplane/import-jobs/cancel?tabs=HTTP) from the Import Jobs API. Once the job has been canceled and is no longer running, you can delete it.
+It's also possible to cancel a running import job with the [Cancel operation](/rest/api/digital-twins/dataplane/import-jobs/cancel?tabs=HTTP) from the Jobs API. Once the job has been canceled and is no longer running, you can delete it.
### Limits and considerations
-Keep the following considerations in mind while working with the Import Jobs API:
-* Currently, the Import Jobs API only supports "create" operations.
+Keep the following considerations in mind while working with the Jobs API:
+* Currently, the Jobs API only supports "create" operations.
* Import Jobs are not atomic operations. There is no rollback in the case of failure, partial job completion, or usage of the [Cancel operation](/rest/api/digital-twins/dataplane/import-jobs/cancel?tabs=HTTP).
-* Only one bulk import job is supported at a time within an Azure Digital Twins instance. You can view this information and other numerical limits of the Import Jobs API in [Azure Digital Twins limits](reference-service-limits.md).
+* Only one bulk import job is supported at a time within an Azure Digital Twins instance. You can view this information and other numerical limits of the Jobs API in [Azure Digital Twins limits](reference-service-limits.md).
## Monitor API metrics
digital-twins Concepts Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-models.md
While designing models to reflect the entities in your environment, it can be us
Once you're finished creating, extending, or selecting your models, you need to upload them to your Azure Digital Twins instance to make them available for use in your solution.
-You can upload many models in a single API call using the [Import Jobs API](concepts-apis-sdks.md#bulk-import-with-the-import-jobs-api). The API can simultaneously accept up to the [Azure Digital Twins limit for number of models in an instance](reference-service-limits.md), and it automatically reorders models if needed to resolve dependencies between them. For detailed instructions and examples that use this API, see [bulk import instructions for models](how-to-manage-model.md#upload-large-model-sets-with-the-import-jobs-api).
+You can upload many models in a single API call using the [Jobs API](concepts-apis-sdks.md#bulk-import-with-the-jobs-api). The API can simultaneously accept up to the [Azure Digital Twins limit for number of models in an instance](reference-service-limits.md), and it automatically reorders models if needed to resolve dependencies between them. For detailed instructions and examples that use this API, see [bulk import instructions for models](how-to-manage-model.md#upload-large-model-sets-with-the-jobs-api).
-An alternative to the Import Jobs API is the [Model uploader sample](https://github.com/Azure/opendigitaltwins-tools/tree/main/ADTTools#uploadmodels), which uses the individual model APIs to upload multiple model files at once. The sample also implements automatic reordering to resolve model dependencies.
+An alternative to the Jobs API is the [Model uploader sample](https://github.com/Azure/opendigitaltwins-tools/tree/main/ADTTools#uploadmodels), which uses the individual model APIs to upload multiple model files at once. The sample also implements automatic reordering to resolve model dependencies.
If you need to delete all models in an Azure Digital Twins instance at once, you can use the [Model deleter sample](https://github.com/Azure/opendigitaltwins-tools/tree/main/ADTTools#deletemodels). This is a project that contains recursive logic to handle model dependencies through the deletion process.
digital-twins Concepts Twins Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-twins-graph.md
Here's some example client code that uses the [DigitalTwins APIs](/rest/api/digi
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_other.cs" id="CreateRelationship_short":::
-### Create twins and relationships in bulk with the Import Jobs API
+### Create twins and relationships in bulk with the Jobs API
-You can upload many twins and relationships in a single API call using the [Import Jobs API](concepts-apis-sdks.md#bulk-import-with-the-import-jobs-api). Twins and relationships created with this API can optionally include initialization of their properties. For detailed instructions and examples that use this API, see [bulk import instructions for twins](how-to-manage-twin.md#create-twins-in-bulk-with-the-import-jobs-api) and [relationships](how-to-manage-graph.md#create-relationships-in-bulk-with-the-import-jobs-api).
+You can upload many twins and relationships in a single API call using the [Jobs API](concepts-apis-sdks.md#bulk-import-with-the-jobs-api). Twins and relationships created with this API can optionally include initialization of their properties. For detailed instructions and examples that use this API, see [bulk import instructions for twins](how-to-manage-twin.md#create-twins-in-bulk-with-the-jobs-api) and [relationships](how-to-manage-graph.md#create-relationships-in-bulk-with-the-jobs-api).
## JSON representations of graph elements
digital-twins How To Manage Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-graph.md
You can even create multiple instances of the same type of relationship between
> [!NOTE] > The DTDL attributes of `minMultiplicity` and `maxMultiplicity` for relationships aren't currently supported in Azure Digital TwinsΓÇöeven if they're defined as part of a model, they won't be enforced by the service. For more information, see [Service-specific DTDL notes](concepts-models.md#service-specific-dtdl-notes).
-### Create relationships in bulk with the Import Jobs API
+### Create relationships in bulk with the Jobs API
-You can use the [Import Jobs API](concepts-apis-sdks.md#bulk-import-with-the-import-jobs-api) to create many relationships at once in a single API call. This method requires the use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md), as well as [write permissions](concepts-apis-sdks.md#check-permissions) in your Azure Digital Twins instance for relationships and bulk jobs.
+You can use the [Jobs API](concepts-apis-sdks.md#bulk-import-with-the-jobs-api) to create many relationships at once in a single API call. This method requires the use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md), as well as [write permissions](concepts-apis-sdks.md#check-permissions) in your Azure Digital Twins instance for relationships and bulk jobs.
>[!TIP]
->The Import Jobs API also allows models and twins to be imported in the same call, to create all parts of a graph at once. For more about this process, see [Upload models, twins, and relationships in bulk with the Import Jobs API](#upload-models-twins-and-relationships-in-bulk-with-the-import-jobs-api).
+>The Jobs API also allows models and twins to be imported in the same call, to create all parts of a graph at once. For more about this process, see [Upload models, twins, and relationships in bulk with the Jobs API](#upload-models-twins-and-relationships-in-bulk-with-the-jobs-api).
To import relationships in bulk, you'll need to structure your relationships (and any other resources included in the bulk import job) as an *NDJSON* file. The `Relationships` section comes after the `Twins` section, making it the last graph data section in the file. Relationships defined in the file can reference twins that are either defined in this file or already present in the instance, and they can optionally include initialization of any properties that the relationships have.
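For example, a `Relationships` section might look like the following sketch. The IDs and relationship name are illustrative, and the property names should be checked against the example import file linked below:

```json
{"Section": "Relationships"}
{"$sourceId": "FactoryA", "$relationshipId": "FactoryA_contains_FloorA", "$targetId": "FloorA", "$relationshipName": "contains"}
```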
-You can view an example import file and a sample project for creating these files in the [Import Jobs API introduction](concepts-apis-sdks.md#bulk-import-with-the-import-jobs-api).
+You can view an example import file and a sample project for creating these files in the [Jobs API introduction](concepts-apis-sdks.md#bulk-import-with-the-jobs-api).
[!INCLUDE [digital-twins-bulk-blob.md](../../includes/digital-twins-bulk-blob.md)]
-Then, the file can be used in an [Import Jobs API](/rest/api/digital-twins/dataplane/import-jobs) call.
+Then, the file can be used in a [Jobs API](/rest/api/digital-twins/dataplane/import-jobs) call. You'll provide the blob storage URL of the input file, as well as a new blob storage URL to indicate where you'd like the output log to be stored when it's created by the service.
## List relationships
You can now call this custom method to delete a relationship like this:
This section describes strategies for creating a graph with multiple elements at the same time, rather than using individual API calls to upload models, twins, and relationships one by one.
-### Upload models, twins, and relationships in bulk with the Import Jobs API
+### Upload models, twins, and relationships in bulk with the Jobs API
-You can use the [Import Jobs API](concepts-apis-sdks.md#bulk-import-with-the-import-jobs-api) to upload multiple models, twins, and relationships to your instance in a single API call, effectively creating the graph all at once. This method requires the use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md), as well as [write permissions](concepts-apis-sdks.md#check-permissions) in your Azure Digital Twins instance for graph elements (models, twins, and relationships) and bulk jobs.
+You can use the [Jobs API](concepts-apis-sdks.md#bulk-import-with-the-jobs-api) to upload multiple models, twins, and relationships to your instance in a single API call, effectively creating the graph all at once. This method requires the use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md), as well as [write permissions](concepts-apis-sdks.md#check-permissions) in your Azure Digital Twins instance for graph elements (models, twins, and relationships) and bulk jobs.
To import resources in bulk, start by creating an *NDJSON* file containing the details of your resources. The file starts with a `Header` section, followed by the optional sections `Models`, `Twins`, and `Relationships`. You don't have to include all three types of graph data in the file, but any sections that are present must follow that order. Twins defined in the file can reference models that are either defined in this file or already present in the instance, and they can optionally include initialization of the twin's properties. Relationships defined in the file can reference twins that are either defined in this file or already present in the instance, and they can optionally include initialization of relationship properties.
-You can view an example import file and a sample project for creating these files in the [Import Jobs API introduction](concepts-apis-sdks.md#bulk-import-with-the-import-jobs-api).
+You can view an example import file and a sample project for creating these files in the [Jobs API introduction](concepts-apis-sdks.md#bulk-import-with-the-jobs-api).
[!INCLUDE [digital-twins-bulk-blob.md](../../includes/digital-twins-bulk-blob.md)]
-Then, the file can be used in an [Import Jobs API](/rest/api/digital-twins/dataplane/import-jobs) call.
+Then, the file can be used in a [Jobs API](/rest/api/digital-twins/dataplane/import-jobs) call. You'll provide the blob storage URL of the input file, as well as a new blob storage URL to indicate where you'd like the output log to be stored when it's created by the service.
### Import graph with Azure Digital Twins Explorer
digital-twins How To Manage Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-model.md
If you're using the [REST APIs](/rest/api/azure-digitaltwins/) or [Azure CLI](/c
:::code language="json" source="~/digital-twins-docs-samples/models/Planet-Moon.json":::
-### Upload large model sets with the Import Jobs API
+### Upload large model sets with the Jobs API
-For large model sets, you can use the [Import Jobs API](concepts-apis-sdks.md#bulk-import-with-the-import-jobs-api) to upload many models at once in a single API call. The API can simultaneously accept up to the [Azure Digital Twins limit for number of models in an instance](reference-service-limits.md), and it automatically reorders models if needed to resolve dependencies between them. This method requires the use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md), as well as [write permissions](concepts-apis-sdks.md#check-permissions) in your Azure Digital Twins instance for models and bulk jobs.
+For large model sets, you can use the [Jobs API](concepts-apis-sdks.md#bulk-import-with-the-jobs-api) to upload many models at once in a single API call. The API can simultaneously accept up to the [Azure Digital Twins limit for number of models in an instance](reference-service-limits.md), and it automatically reorders models if needed to resolve dependencies between them. This method requires the use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md), as well as [write permissions](concepts-apis-sdks.md#check-permissions) in your Azure Digital Twins instance for models and bulk jobs.
>[!TIP]
->The Import Jobs API also allows twins and relationships to be imported in the same call, to create all parts of a graph at once. For more about this process, see [Upload models, twins, and relationships in bulk with the Import Jobs API](how-to-manage-graph.md#upload-models-twins-and-relationships-in-bulk-with-the-import-jobs-api).
+>The Jobs API also allows twins and relationships to be imported in the same call, to create all parts of a graph at once. For more about this process, see [Upload models, twins, and relationships in bulk with the Jobs API](how-to-manage-graph.md#upload-models-twins-and-relationships-in-bulk-with-the-jobs-api).
-To import models in bulk, you'll need to structure your models (and any other resources included in the bulk import job) as an *NDJSON* file. The `Models` section comes immediately after `Header` section, making it the first graph data section in the file. You can view an example import file and a sample project for creating these files in the [Import Jobs API introduction](concepts-apis-sdks.md#bulk-import-with-the-import-jobs-api).
+To import models in bulk, you'll need to structure your models (and any other resources included in the bulk import job) as an *NDJSON* file. The `Models` section comes immediately after the `Header` section, making it the first graph data section in the file. You can view an example import file and a sample project for creating these files in the [Jobs API introduction](concepts-apis-sdks.md#bulk-import-with-the-jobs-api).
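+For example, the start of an import file might look like the following sketch. The header values and the interface shown are illustrative, not a complete model set; check the example import file linked above for the authoritative format:
+
+```json
+{"Section": "Header"}
+{"fileVersion": "1.0.0", "author": "contoso", "organization": "contoso"}
+{"Section": "Models"}
+{"@id": "dtmi:example:Floor;1", "@type": "Interface", "@context": "dtmi:dtdl:context;2", "displayName": "Floor"}
+```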
[!INCLUDE [digital-twins-bulk-blob.md](../../includes/digital-twins-bulk-blob.md)]
-Then, the file can be used in an [Import Jobs API](/rest/api/digital-twins/dataplane/import-jobs) call.
+Then, the file can be used in a [Jobs API](/rest/api/digital-twins/dataplane/import-jobs) call. You'll provide the blob storage URL of the input file, as well as a new blob storage URL to indicate where you'd like the output log to be stored when it's created by the service.
## Retrieve models
digital-twins How To Manage Twin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-twin.md
The helper class of `BasicDigitalTwin` allows you to store property fields in a
>twin.Id = "myRoomId"; >```
-### Create twins in bulk with the Import Jobs API
+### Create twins in bulk with the Jobs API
-You can use the [Import Jobs API](concepts-apis-sdks.md#bulk-import-with-the-import-jobs-api) to create many twins at once in a single API call. This method requires the use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md), as well as [write permissions](concepts-apis-sdks.md#check-permissions) in your Azure Digital Twins instance for twins and bulk jobs.
+You can use the [Jobs API](concepts-apis-sdks.md#bulk-import-with-the-jobs-api) to create many twins at once in a single API call. This method requires the use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md), as well as [write permissions](concepts-apis-sdks.md#check-permissions) in your Azure Digital Twins instance for twins and bulk jobs.
>[!TIP]
->The Import Jobs API also allows models and relationships to be imported in the same call, to create all parts of a graph at once. For more about this process, see [Upload models, twins, and relationships in bulk with the Import Jobs API](how-to-manage-graph.md#upload-models-twins-and-relationships-in-bulk-with-the-import-jobs-api).
+>The Jobs API also allows models and relationships to be imported in the same call, to create all parts of a graph at once. For more about this process, see [Upload models, twins, and relationships in bulk with the Jobs API](how-to-manage-graph.md#upload-models-twins-and-relationships-in-bulk-with-the-jobs-api).
To import twins in bulk, you'll need to structure your twins (and any other resources included in the bulk import job) as an *NDJSON* file. The `Twins` section comes after the `Models` section (and before the `Relationships` section). Twins defined in the file can reference models that are either defined in this file or already present in the instance, and they can optionally include initialization of the twin's properties.
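For example, a `Twins` section might look like the following sketch. The twin ID, model ID, and property are illustrative; check the example import file linked below for the authoritative format:

```json
{"Section": "Twins"}
{"$dtId": "Floor1", "$metadata": {"$model": "dtmi:example:Floor;1"}, "name": "Ground floor"}
```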
-You can view an example import file and a sample project for creating these files in the [Import Jobs API introduction](concepts-apis-sdks.md#bulk-import-with-the-import-jobs-api).
+You can view an example import file and a sample project for creating these files in the [Jobs API introduction](concepts-apis-sdks.md#bulk-import-with-the-jobs-api).
[!INCLUDE [digital-twins-bulk-blob.md](../../includes/digital-twins-bulk-blob.md)]
-Then, the file can be used in an [Import Jobs API](/rest/api/digital-twins/dataplane/import-jobs) call.
+Then, the file can be used in a [Jobs API](/rest/api/digital-twins/dataplane/import-jobs) call. You'll provide the blob storage URL of the input file, as well as a new blob storage URL to indicate where you'd like the output log to be stored when it's created by the service.
## Get data for a digital twin
digital-twins How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-monitor.md
Metrics having to do with data ingress:
| IngressEventsFailureRate | Ingress Events Failure Rate | Percent | Average | The percentage of incoming telemetry events for which the service returns an internal error (500) response code. | Result | | IngressEventsLatency | Ingress Events Latency | Milliseconds | Average | The time from when an event arrives to when it's ready to be egressed by Azure Digital Twins, at which point the service sends a success/fail result. | Result |
-### Bulk operation metrics (from the Import Jobs API)
+### Bulk operation metrics (from the Jobs API)
-Metrics having to do with bulk operations from the [Import Jobs API](/rest/api/digital-twins/dataplane/import-jobs):
+Metrics having to do with bulk operations from the [Jobs API](/rest/api/digital-twins/dataplane/import-jobs):
| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions | | | | | | | |
digital-twins Tutorial End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/tutorial-end-to-end.md
The first setting gives the function app the **Azure Digital Twins Data Owner**
1. Use the **principalId** value in the following command to assign the function app's identity to the **Azure Digital Twins Data Owner** role for your Azure Digital Twins instance. ```azurecli-interactive
- az dt role-assignment create --dt-name <your-Azure-Digital-Twins-instance> --assignee "<principal-ID>" --role "Azure Digital Twins Data Owner"
+ az dt role-assignment create --resource-group <your-resource-group> --dt-name <your-Azure-Digital-Twins-instance> --assignee "<principal-ID>" --role "Azure Digital Twins Data Owner"
``` This command outputs information about the role assignment you've created. The function app now has permissions to access data in your Azure Digital Twins instance.
event-grid Delivery Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/delivery-properties.md
Title: Azure Event Grid - Set custom headers on delivered events description: Describes how you can set custom headers (or delivery properties) on delivered events. Previously updated : 02/23/2022 Last updated : 02/21/2023 # Custom delivery properties
You can set custom headers on the events that are delivered to the following des
- Webhooks
- Azure Service Bus topics and queues
- Azure Event Hubs
-- Relay Hybrid Connections
+- Azure Functions
+- Azure Relay Hybrid Connections
+ When creating an event subscription in the Azure portal, you can use the **Delivery Properties** tab to set custom HTTP headers. This page lets you set fixed and dynamic header values.
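If you create the subscription with the Azure CLI instead, the `az eventgrid event-subscription create` command accepts `--delivery-attribute-mapping` arguments for the same purpose. The following is a rough sketch only; the resource names, header names, and the exact argument format are assumptions to verify against `az eventgrid event-subscription create --help`:

```azurecli-interactive
# Sketch: one fixed (static) header and one dynamic header sourced from the event data.
# All names and the data source field are illustrative placeholders.
az eventgrid event-subscription create \
    --name mysubscription \
    --source-resource-id /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventGrid/topics/<topic-name> \
    --endpoint <webhook-endpoint-url> \
    --delivery-attribute-mapping mystaticheader static somevalue false \
    --delivery-attribute-mapping mydynamicheader dynamic data.myfield
```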
event-grid Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/policy-reference.md
Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
event-grid Security Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/security-authorization.md
For a list of operation supported by Azure Event Grid, run the following Azure C
az provider operation show --namespace Microsoft.EventGrid ```
-The following operations return potentially secret information, which gets filtered out of normal read operations. It's recommended that you restrict access to these operations.
+The following operations return potentially secret information, which gets filtered out of normal read operations. We recommend that you restrict access to these operations; one way to do so is sketched after the following list.
* Microsoft.EventGrid/eventSubscriptions/getFullUrl/action
* Microsoft.EventGrid/topics/listKeys/action
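One way to restrict these operations is a custom Azure role whose `NotActions` list carves them out of an otherwise broad grant. A minimal sketch, assuming a subscription-scoped custom role; the role name and scope are placeholders, not values from this article:

```json
{
  "Name": "Event Grid Operator Without Secrets",
  "IsCustom": true,
  "Description": "Manage Event Grid resources without access to keys or full endpoint URLs.",
  "Actions": [
    "Microsoft.EventGrid/*"
  ],
  "NotActions": [
    "Microsoft.EventGrid/eventSubscriptions/getFullUrl/action",
    "Microsoft.EventGrid/topics/listKeys/action"
  ],
  "AssignableScopes": [
    "/subscriptions/<subscription-id>"
  ]
}
```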
The Event Grid Contributor role allows you to create and manage Event Grid resou
| Role | Description | | - | -- |
-| [Event Grid Subscription Reader](../role-based-access-control/built-in-roles.md#eventgrid-eventsubscription-reader) | Lets you read Event Grid event subscriptions. |
-| [Event Grid Subscription Contributor](../role-based-access-control/built-in-roles.md#eventgrid-eventsubscription-contributor) | Lets you manage Event Grid event subscription operations. |
-| [Event Grid Contributor](../role-based-access-control/built-in-roles.md#eventgrid-contributor) | Lets you create and manage Event Grid resources. |
-| [Event Grid Data Sender](../role-based-access-control/built-in-roles.md#eventgrid-data-sender) | Lets you send events to Event Grid topics. |
+| [`EventGrid EventSubscription Reader`](../role-based-access-control/built-in-roles.md#eventgrid-eventsubscription-reader) | Lets you read Event Grid event subscriptions. |
+| [`EventGrid EventSubscription Contributor`](../role-based-access-control/built-in-roles.md#eventgrid-eventsubscription-contributor) | Lets you manage Event Grid event subscription operations. |
+| [`EventGrid Contributor`](../role-based-access-control/built-in-roles.md#eventgrid-contributor) | Lets you create and manage Event Grid resources. |
+| [`EventGrid Data Sender`](../role-based-access-control/built-in-roles.md#eventgrid-data-sender) | Lets you send events to Event Grid topics. |
> [!NOTE]
The following are sample Event Grid role definitions that allow users to take di
} ```
-**EventGridContributorRole.json**: Allows all event grid actions.
+**EventGridContributorRole.json**: Allows all Event Grid actions.
```json {
If you're using an event handler that isn't a WebHook (such as an event hub or q
You must have the **Microsoft.EventGrid/EventSubscriptions/Write** permission on the resource that is the event source. You need this permission because you're writing a new subscription at the scope of the resource. The required resource differs based on whether you're subscribing to a system topic or custom topic. Both types are described in this section. ### System topics (Azure service publishers)
-For system topics, if you are not the owner or contributor of the source resource, you need permission to write a new event subscription at the scope of the resource publishing the event. The format of the resource is:
+For system topics, if you aren't the owner or contributor of the source resource, you need permission to write a new event subscription at the scope of the resource publishing the event. The format of the resource is:
`/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/{resource-provider}/{resource-type}/{resource-name}` For example, to subscribe to an event on a storage account named **myacct**, you need the Microsoft.EventGrid/EventSubscriptions/Write permission on: `/subscriptions/####/resourceGroups/testrg/providers/Microsoft.Storage/storageAccounts/myacct` ### Custom topics
-For custom topics, you need permission to write a new event subscription at the scope of the event grid topic. The format of the resource is:
+For custom topics, you need permission to write a new event subscription at the scope of the Event Grid topic. The format of the resource is:
`/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.EventGrid/topics/{topic-name}` For example, to subscribe to a custom topic named **mytopic**, you need the Microsoft.EventGrid/EventSubscriptions/Write permission on:
event-hubs Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/policy-reference.md
Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
expressroute Expressroute Howto Reset Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-reset-peering.md
Title: 'Azure ExpressRoute: Reset circuit peering'
+ Title: 'Azure ExpressRoute: Reset circuit peering using Azure PowerShell'
description: Learn how to enable and disable peerings for an Azure ExpressRoute circuit using Azure PowerShell.
Last updated 12/15/2020
-# Reset ExpressRoute circuit peerings
+# Reset ExpressRoute circuit peerings using Azure PowerShell
This article describes how to enable and disable peerings of an ExpressRoute circuit using PowerShell. Peerings are enabled by default when you create them. When you disable a peering, the BGP session on both the primary and the secondary connection of your ExpressRoute circuit will be shut down. You'll lose connectivity for this peering to Microsoft. When you enable a peering, the BGP session on both the primary and the secondary connection of your ExpressRoute circuit will be established. The connectivity to Microsoft will be restored for this peering. You can enable and disable peering for Microsoft Peering and Azure Private Peering independently on the ExpressRoute circuit.
frontdoor Tier Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/tier-migration.md
Azure Front Door zero-down time migration happens in three stages. The first sta
> * Traffic to your Azure Front Door (classic) will continue to be served until migration has been completed.
> * Each Azure Front Door (classic) profile can create one Azure Front Door Standard or Premium profile.
-Migration is only available can be completed using the Azure portal. Service charges for Azure Front Door Standard or Premium tier will start once migration is completed.
+Migration can only be completed by using the Azure portal. Service charges for the Azure Front Door Standard or Premium tier will start once migration is completed.
## Breaking changes between tiers
frontdoor Troubleshoot Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/troubleshoot-issues.md
This article describes how to troubleshoot common routing problems that you migh
You can request Azure Front Door to return more debugging HTTP response headers. For more information, see [optional response headers](front-door-http-headers-protocol.md#optional-debug-response-headers).
-## 503 response from Azure Front Door after a few seconds
+## 503 or 504 response from Azure Front Door after a few seconds
### Symptom
-* Regular requests sent to your backend without going through Azure Front Door are succeeding. Going via Azure Front Door results in 503 error responses.
+* Regular requests sent to your backend without going through Azure Front Door are succeeding. Going via Azure Front Door results in 503 or 504 error responses.
* The failure from Azure Front Door typically appears after about 30 seconds.
* Intermittent 503 errors appear with "ErrorInfo: OriginInvalidResponse."
governance Built In Initiatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md
Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 01/05/2023 Last updated : 02/21/2023
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md
Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 01/05/2023 Last updated : 02/21/2023
hdinsight Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/policy-reference.md
Title: Built-in policy definitions for Azure HDInsight description: Lists Azure Policy built-in policy definitions for Azure HDInsight. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
healthcare-apis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md
Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
healthcare-apis How To Use Calculatedcontent Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-calculatedcontent-mappings.md
Title: How to use CalculatedContentT mappings with the MedTech service device mappings - Azure Health Data Services
-description: This article describes how to use CalculatedContent mappings with the MedTech service device mappings.
+ Title: How to use CalculatedContent mappings - Azure Health Data Services
+description: Learn how to use CalculatedContent mappings with the MedTech service device mappings.
# How to use CalculatedContent mappings
-This article describes how to use CalculatedContent mappings with MedTech service device mappings.
+This article describes how to use CalculatedContent mappings with MedTech service device mappings in Azure Health Data Services.
-## CalculatedContent mappings
+## Overview of CalculatedContent mappings
-The MedTech service provides an expression-based content template to both match the wanted template and extract values. **Expressions** may be used by either JSONPath or JMESPath. Each expression within the template may choose its own expression language.
+The MedTech service provides an expression-based content template to both match the wanted template and extract values. You can write expressions in either JSONPath or JMESPath, and each expression within the template can use its own expression language.
> [!NOTE]
-> If an expression language isn't defined, the default expression language configured for the template will be used. The default is JSONPath but can be overwritten if needed.
+> If you don't define an expression language, MedTech service device mappings use the default expression language that's configured for the template. The default is JSONPath, but you can overwrite it if necessary.
An expression is defined as:
An expression is defined as:
} ```
-In the example below, *typeMatchExpression* is defined as:
+In the following example, `typeMatchExpression` is defined as:
```json "templateType": "CalculatedContent",
In the example below, *typeMatchExpression* is defined as:
``` > [!TIP]
-> The default expression language to use for a MedTech service device mappings is JsonPath. If you want to use JsonPath, the expression alone may be supplied.
+> If you're using the default JSONPath expression language, you can supply the expression alone.
```json "templateType": "CalculatedContent",
In the example below, *typeMatchExpression* is defined as:
} ```
-The default expression language to use for a MedTech service device mappings can be explicitly set using the `defaultExpressionLanguage` parameter:
+You can explicitly set the default expression language for MedTech service device mappings by using the `defaultExpressionLanguage` parameter:
```json "templateType": "CalculatedContent",
The default expression language to use for a MedTech service device mappings can
} ```
-The CalculatedContent mappings allow matching on and extracting values from an Azure Event Hubs message using **Expressions** as defined below:
+CalculatedContent mappings allow matching on, and extracting values from, an Azure Event Hubs message through the following expressions (a combined sketch follows the table):
|Property|Description|Example|
|--|--|-|
-|TypeName|The type to associate with measurements that match the template|`heartrate`|
-|TypeMatchExpression|The expression that is evaluated against the EventData payload. If a matching JToken is found, the template is considered a match. All later expressions are evaluated against the extracted JToken matched here.|`$..[?(@heartRate)]`|
-|TimestampExpression|The expression to extract the timestamp value for the measurement's OccurrenceTimeUtc.|`$.matchedToken.endDate`|
-|DeviceIdExpression|The expression to extract the device identifier.|`$.matchedToken.deviceId`|
-|PatientIdExpression|*Required* when IdentityResolution is in **Create** mode and *Optional* when IdentityResolution is in **Lookup** mode. The expression to extract the patient identifier.|`$.matchedToken.patientId`|
-|EncounterIdExpression|*Optional*: The expression to extract the encounter identifier.|`$.matchedToken.encounterId`|
-|CorrelationIdExpression|*Optional*: The expression to extract the correlation identifier. This output can be used to group values into a single observation in the FHIR destination mappings.|`$.matchedToken.correlationId`|
-|Values[].ValueName|The name to associate with the value extracted by the next expression. Used to bind the wanted value/component in the FHIR destination mapping template.|`hr`|
-|Values[].ValueExpression|The expression to extract the wanted value.|`$.matchedToken.heartRate`|
-|Values[].Required|Will require the value to be present in the payload. If not found, a measurement won't be generated, and an InvalidOperationException will be created.|`true`|
-
-### Expression Languages
-
-When specifying the language to use for the expression, the below values are valid:
-
-| Expression Language | Value |
+|`TypeName`|The type to associate with measurements that match the template.|`heartrate`|
+|`TypeMatchExpression`|The expression that the MedTech service evaluates against the `EventData` payload. If the service finds a matching `JToken` value, it considers the template a match. The service evaluates all later expressions against the extracted `JToken` value matched here.|`$..[?(@heartRate)]`|
+|`TimestampExpression`|The expression to extract the timestamp value for the measurement's `OccurrenceTimeUtc` value.|`$.matchedToken.endDate`|
+|`DeviceIdExpression`|The expression to extract the device identifier.|`$.matchedToken.deviceId`|
+|`PatientIdExpression`|The expression to extract the patient identifier. *Required* when `IdentityResolution` is in `Create` mode, and *optional* when `IdentityResolution` is in `Lookup` mode.|`$.matchedToken.patientId`|
+|`EncounterIdExpression`|*Optional*: The expression to extract the encounter identifier.|`$.matchedToken.encounterId`|
+|`CorrelationIdExpression`|*Optional*: The expression to extract the correlation identifier. You can use this output to group values into a single observation in the FHIR destination mappings.|`$.matchedToken.correlationId`|
+|`Values[].ValueName`|The name to associate with the value that the next expression extracts. Used to bind the wanted value or component in the FHIR destination-mapping template.|`hr`|
+|`Values[].ValueExpression`|The expression to extract the wanted value.|`$.matchedToken.heartRate`|
+|`Values[].Required`|Requires the value to be present in the payload. If the MedTech service doesn't find the value, it won't generate a measurement, and it will create an `InvalidOperationException` instance.|`true`|
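
Putting these properties together, a CalculatedContent template might look like the following minimal sketch. The type name and expressions are illustrative; the complete examples later in this article show full message-and-template pairs.

```json
{
  "templateType": "CalculatedContent",
  "template": {
    "typeName": "heartrate",
    "typeMatchExpression": "$..[?(@heartRate)]",
    "deviceIdExpression": "$.matchedToken.deviceId",
    "timestampExpression": "$.matchedToken.endDate",
    "values": [
      {
        "required": true,
        "valueName": "hr",
        "valueExpression": "$.matchedToken.heartRate"
      }
    ]
  }
}
```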
+
+## Expression languages
+
+When you're specifying the language to use for the expression, the following values are valid (see the sketch after the table):
+
+| Expression language | Value |
||--|
-| JSONPath | **JsonPath** |
-| JMESPath | **JmesPath** |
+| JSONPath | `JsonPath` |
+| JMESPath | `JmesPath` |
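
For example, an individual expression can select its language explicitly by using the object form rather than a bare string. This is a sketch; the expression value is illustrative:

```json
"deviceIdExpression": {
    "value": "matchedToken.deviceId",
    "language": "JmesPath"
}
```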
>[!TIP]
-> For more information on JSONPath, see [JSONPath](https://goessner.net/articles/JsonPath/). CalculatedContent mappings use the [JSON .NET implementation](https://www.newtonsoft.com/json/help/html/QueryJsonSelectTokenJsonPath.htm) for resolving JSONPath expressions.
+> For more information on JSONPath, see [JSONPath - XPath for JSON](https://goessner.net/articles/JsonPath/). CalculatedContent mappings use the [JSON .NET implementation](https://www.newtonsoft.com/json/help/html/QueryJsonSelectTokenJsonPath.htm) for resolving JSONPath expressions.
>
-> For more information on JMESPath, see [JMESPath](https://jmespath.org/specification.html). CalculatedContent mappings use the [JMESPath .NET implementation](https://github.com/jdevillard/JmesPath.Net) for resolving JMESPath expressions.
+> For more information on JMESPath, see [JMESPath Specification](https://jmespath.org/specification.html). CalculatedContent mappings use the [JMESPath .NET implementation](https://github.com/jdevillard/JmesPath.Net) for resolving JMESPath expressions.
-### Custom functions
+## Custom functions
-A set of MedTech service custom functions are also available. The MedTech service custom functions are outside of the functions provided as part of the JMESPath specification. For more information on the MedTech service custom functions, see [How to use MedTech service custom functions](how-to-use-custom-functions.md).
+A set of custom functions for the MedTech service is also available. These custom functions go beyond the functions provided as part of the JMESPath specification. For more information on the MedTech service custom functions, see [How to use custom functions with device mappings](how-to-use-custom-functions.md).
-### Matched Token
+## Matched token
-The **TypeMatchExpression** is evaluated against the incoming EventData payload. If a matching JToken is found, the template is considered a match.
+The MedTech service evaluates `TypeMatchExpression` against the incoming `EventData` payload. If the service finds a matching `JToken` value, it considers the template a match.
-All later expressions are evaluated against a new JToken. This new JToken contains both the original EventData payload and the extracted JToken matched here.
+The MedTech service evaluates all later expressions against a new `JToken` value. This new `JToken` value contains both the original `EventData` payload and the extracted `JToken` value matched here.
-In this way, the original payload and the matched object are available to each later expression. The extracted JToken will be available as the property **matchedToken**.
+In this way, the original payload and the matched object are available to each later expression. The extracted `JToken` value will be available as the property `matchedToken`.
-Given this example message:
+Here's an example message:
*Message*
Given this example message:
} ```
-Two matches will be extracted using the above expression and used to create JTokens. Later expressions will be evaluated using the following JTokens:
+The MedTech service extracts two matches by using the preceding expression and uses them to create `JToken` values. The MedTech service will evaluate later expressions by using the following `JToken` values:
```json {
And
} ```
-### Examples
+## Examples
-**Heart Rate**
+### Heart rate
*Message*
And
} ```
-**Blood Pressure**
+### Blood pressure
*Message*
And
} ```
-**Project Multiple Measurements from Single Message**
+### Projection of multiple measurements from a single message
*Message*
And
} ```
-**Project Multiple Measurements from Array in Message**
+### Projection of multiple measurements from an array in a message
*Message*
And
} ```
-**Project Data From Matched Token And Original Event**
+### Projection of data from a matched token and an original event
*Message*
And
} ```
-**Select and transform incoming data**
+### Selection and transformation of incoming data
-In the below example, height data arrives in either inches or meters. We want all normalized height data to be in meters. To achieve this outcome, we create a template that targets only height data in inches and transforms it into meters. Another template targets height data in meters and simply stores it as is.
+In the following example, height data arrives in either inches or meters. Assume that you want all normalized height data to be in meters. To achieve this outcome, you create a template that targets only height data in inches and transforms it into meters. Another template targets height data in meters and simply stores it as is.
*Message*
In the below example, height data arrives in either inches or meters. We want al
``` > [!TIP]
-> See the MedTech service article [Troubleshoot MedTech service errors](troubleshoot-errors.md) for assistance fixing MedTech service errors.
+> For assistance in fixing MedTech service errors, see [Troubleshoot MedTech service errors](troubleshoot-errors.md).
## Next steps
-In this article, you learned how to configure the MedTech service device mappings using CalculatedContent mappings.
+In this article, you learned how to configure MedTech service device mappings by using CalculatedContent mappings.
-To learn how to configure FHIR destination mappings, see
+To learn how to configure FHIR destination mappings, see:
> [!div class="nextstepaction"] > [How to configure FHIR destination mappings](how-to-configure-fhir-mappings.md)
-(FHIR&#174;) is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office, and is used with permission.
iot-edge Deploy Confidential Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/deploy-confidential-applications.md
Confidential applications are encrypted in transit and at rest, and only decrypt
The developer creates the confidential application and packages it as an IoT Edge module. The application is encrypted before being pushed to the container registry. The application remains encrypted throughout the IoT Edge deployment process until the module is started on the IoT Edge device. Once the confidential application is within the device's TEE, it is decrypted and can begin executing.
-![Diagram - Confidential applications are encrypted within IoT Edge modules until deployed into the secure enclave](./media/deploy-confidential-applications/confidential-applications-encrypted.png)
Confidential applications on IoT Edge are a logical extension of [Azure confidential computing](../confidential-computing/overview.md). Workloads that run within secure enclaves in the cloud can also be deployed to run within secure enclaves at the edge.
iot-edge Deploy Modbus Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/deploy-modbus-gateway.md
If you want to connect IoT devices that use Modbus TCP or RTU protocols to an Azure IoT hub, you can use an IoT Edge device as a gateway. The gateway device reads data from your Modbus devices, then communicates that data to the cloud using a supported protocol.
-![Modbus devices connect to IoT Hub through IoT Edge gateway](./media/deploy-modbus-gateway/diagram.png)
This article covers how to create your own container image for a Modbus module (or you can use a prebuilt sample) and then deploy it to the IoT Edge device that will act as your gateway.
iot-edge How To Access Dtpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-access-dtpm.md
The following steps show you how to create a sample executable to access a TPM i
1. Choose the **Microsoft.TSS** package from the list then select **Install**.
- ![Visual Studio add NuGet packages](./media/how-to-access-dtpm/vs-nuget-microsoft-tss.png)
+ :::image type="content" source="./media/how-to-access-dtpm/vs-nuget-microsoft-tss.png" alt-text="Screenshot that shows adding NuGet packages in Visual Studio.":::
1. Edit the *Program.cs* file and replace the contents with the [EFLOW TPM sample code - Program.cs](https://raw.githubusercontent.com/Azure/iotedge-eflow/main/samples/tpm-read-nv/Program.cs).
The following steps show you how to create a sample executable to access a TPM i
- Target Runtime: **linux-x64**. - Deployment mode: **Self-contained**.
- ![Publish options](./media/how-to-access-dtpm/sample-publish-options.png)
+ :::image type="content" source="./media/how-to-access-dtpm/sample-publish-options.png" alt-text="Screenshot that shows publish options.":::
1. Select **Publish** then wait for the executable to be created.
Once the executable file and dependency files are created, you need to copy the
``` You should see an output similar to the following.
- ![EFLOW dTPM output](./media/how-to-access-dtpm/tpm-read-output.png)
+ :::image type="content" source="./media/how-to-access-dtpm/tpm-read-output.png" alt-text="Screenshot that shows EFLOW dTPM output.":::
## Next steps
iot-edge How To Access Host Storage From Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-access-host-storage-from-module.md
To set up system modules to use persistent storage:
1. For both IoT Edge hub and IoT Edge agent, add an environment variable called **storageFolder** that points to a directory in the module. 1. For both IoT Edge hub and IoT Edge agent, add binds to connect a local directory on the host machine to a directory in the module. For example:
- ![Screenshot that shows the add create options and environment variables for local storage](./media/how-to-access-host-storage-from-module/offline-storage-1-4.png)
+ :::image type="content" source="./media/how-to-access-host-storage-from-module/offline-storage-1-4.png" alt-text="Screenshot that shows how to add create options and environment variables for local storage.":::
Or, you can configure the local storage directly in the deployment manifest. For example:
iot-edge How To Authenticate Downstream Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-authenticate-downstream-device.md
When you create the new device identity, provide the following information:
* Select **Set a parent device** and select the IoT Edge gateway device that this downstream device will connect through. You can always change the parent later.
- ![Create device ID with symmetric key auth in portal](./media/how-to-authenticate-downstream-device/symmetric-key-portal.png)
+ :::image type="content" source="./media/how-to-authenticate-downstream-device/symmetric-key-portal.png" alt-text="Screenshot of how to create a device ID with symmetric key authorization in the Azure portal.":::
>[!NOTE] >Setting the parent device used to be an optional step for downstream devices that use symmetric key authentication. However, starting with IoT Edge version 1.1.0 every downstream device must be assigned to a parent device.
For X.509 self-signed authentication, sometimes referred to as thumbprint authen
* Paste the hexadecimal strings that you copied from your device's primary and secondary certificates. * Select **Set a parent device** and choose the IoT Edge gateway device that this downstream device will connect through. You can always change the parent later.
- ![Create device ID with X.509 self-signed auth in portal](./media/how-to-authenticate-downstream-device/x509-self-signed-portal.png)
+ :::image type="content" source="./media/how-to-authenticate-downstream-device/x509-self-signed-portal.png" alt-text="Screenshot that shows how to create a device ID with an X.509 self-signed authorization in the Azure portal.":::
4. Copy both the primary and secondary device certificates and their keys to any location on the downstream device. Also move a copy of the shared root CA certificate that generated both the gateway device certificate and the downstream device certificates.
For X.509 self-signed authentication, sometimes referred to as thumbprint authen
* C#: [Set up X.509 security in your Azure IoT hub](../iot-hub/tutorial-x509-test-certificate.md) * C: [iotedge_downstream_device_sample.c](https://github.com/Azure/azure-iot-sdk-c/tree/master/iothub_client/samples/iotedge_downstream_device_sample) * Node.js: [simple_sample_device_x509.js](https://github.com/Azure/azure-iot-sdk-node/blob/main/device/samples/javascript/simple_sample_device_x509.js)
- * Java: [SendEventX509.java](https://github.com/Azure/azure-iot-sdk-java/tree/main/device/iot-device-samples/send-event-x509)
+ * Java: [SendEventX509.java](https://github.com/Azure/azure-iot-sdk-java/tree/main/iothub/device/iot-device-samples/send-event-x509)
* Python: [send_message_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/samples/async-hub-scenarios/send_message_x509.py) You also can use the [IoT extension for Azure CLI](https://github.com/Azure/azure-iot-cli-extension) to complete the same device creation operation. The following example uses the [az iot hub device-identity](/cli/azure/iot/hub/device-identity) command to create a new IoT device with X.509 self-signed authentication and assigns a parent device:
This section is based on the IoT Hub X.509 certificate tutorial series. See [Und
* C#: [Set up X.509 security in your Azure IoT hub](../iot-hub/tutorial-x509-test-certificate.md) * C: [iotedge_downstream_device_sample.c](https://github.com/Azure/azure-iot-sdk-c/tree/master/iothub_client/samples/iotedge_downstream_device_sample) * Node.js: [simple_sample_device_x509.js](https://github.com/Azure/azure-iot-sdk-node/blob/main/device/samples/javascript/simple_sample_device_x509.js)
- * Java: [SendEventX509.java](https://github.com/Azure/azure-iot-sdk-java/tree/main/device/iot-device-samples/send-event-x509)
+ * Java: [SendEventX509.java](https://github.com/Azure/azure-iot-sdk-java/tree/main/iothub/device/iot-device-samples/send-event-x509)
* Python: [send_message_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/samples/async-hub-scenarios/send_message_x509.py) You also can use the [IoT extension for Azure CLI](https://github.com/Azure/azure-iot-cli-extension) to complete the same device creation operation. The following example uses the [az iot hub device-identity](/cli/azure/iot/hub/device-identity) command to create a new IoT device with X.509 CA signed authentication and assigns a parent device:
iot-edge How To Collect And Transport Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-collect-and-transport-metrics.md
To configure monitoring on your IoT Edge device, follow the [Tutorial: Monitor I
# [IoT Hub](#tab/iothub)
-[![Metrics monitoring architecture with IoT Hub](./media/how-to-collect-and-transport-metrics/arch.png)](./media/how-to-collect-and-transport-metrics/arch.png#lightbox)
| Note | Description | |-|-|
To configure monitoring on your IoT Edge device, follow the [Tutorial: Monitor I
# [IoT Central](#tab/iotcentral)
-[![Metrics monitoring architecture with IoT Central](./media/how-to-collect-and-transport-metrics/arch-iot-central.png)](./media/how-to-collect-and-transport-metrics/arch-iot-central.png#lightbox)
| Note | Description | |-|-|
iot-edge How To Configure Api Proxy Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-api-proxy-module.md
To update the proxy configuration dynamically, use the following steps:
1. Copy the text of the configuration file and convert it to base64. A scripted version of this step is sketched after the image that follows.
1. Paste the encoded configuration file as the value of the `proxy_config` desired property in the module twin.
- ![Paste encoded config file as value of proxy_config property](./media/how-to-configure-api-proxy-module/change-config.png)
+ :::image type="content" source="./media/how-to-configure-api-proxy-module/change-config.png" alt-text="Screenshot that shows how to paste encoded config file as value of proxy_config property.":::
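If you'd rather script the encoding step than do it by hand, here's a minimal sketch in Python; the configuration file name is a placeholder:

```python
import base64

# Read the proxy configuration file and print its base64 encoding, ready to be
# pasted as the value of the proxy_config desired property in the module twin.
with open("proxy-config.conf", "rb") as config_file:
    print(base64.b64encode(config_file.read()).decode("ascii"))
```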
## Next steps
iot-edge How To Configure Iot Edge For Linux On Windows Iiot Dmz https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-iot-edge-for-linux-on-windows-iiot-dmz.md
Since the EFLOW host device and the PLC or OPC UA devices are physically connect
For the other network, the EFLOW host device is physically connected to the DMZ (online network) with internet and Azure connectivity. Using an *internal or external switch*, you can connect the EFLOW VM to Azure IoT Hub using IoT Edge modules and upload the information sent by the offline devices through the offline NIC.
-![EFLOW Industrial IoT scenario showing a EFLOW VM connected to offline and online network.](./media/how-to-configure-iot-edge-for-linux-on-windows-iiot-dmz/iiot-multiple-nic.png)
### Scenario summary
For the custom new *external virtual switch* you created, use the following Powe
1. `Add-EflowNetwork -vswitchName "OnlineOPCUA" -vswitchType "External"`
- ![Screenshot of showing successful creation of the external network named OnlineOPCUA.](./media/how-to-configure-iot-edge-for-linux-on-windows-iiot-dmz/add-eflow-network.png)
+ :::image type="content" source="./media/how-to-configure-iot-edge-for-linux-on-windows-iiot-dmz/add-eflow-network.png" alt-text="Screenshot of a successful creation of the external network named OnlineOPCUA.":::
2. `Add-EflowVmEndpoint -vswitchName "OnlineOPCUA" -vEndpointName "OnlineEndpoint" -ip4Address 192.168.0.103 -ip4PrefixLength 24 -ip4GatewayAddress 192.168.0.1`
- ![Screenshot showing the successful configuration of the OnlineOPCUA switch.](./media/how-to-configure-iot-edge-for-linux-on-windows-iiot-dmz/add-eflow-vm-endpoint.png)
+ :::image type="content" source="./media/how-to-configure-iot-edge-for-linux-on-windows-iiot-dmz/add-eflow-vm-endpoint.png" alt-text="Screenshot of a successful configuration of the OnlineOPCUA switch.":::
Once complete, you'll have the *OnlineOPCUA* switch assigned to the EFLOW VM. To check the multiple NIC attachment, use the following steps:
Once complete, you'll have the *OnlineOPCUA* switch assigned to the EFLOW VM. To
1. Review the IP configuration and verify you see the *eth0* interface (connected to the secure network) and the *eth1* interface (connected to the DMZ network).
- ![Screenshot showing IP configuration of multiple NICs connected to two different networks.](./media/how-to-configure-iot-edge-for-linux-on-windows-iiot-dmz/ifconfig-multiple-nic.png)
+ :::image type="content" source="./media/how-to-configure-iot-edge-for-linux-on-windows-iiot-dmz/ifconfig-multiple-nic.png" alt-text="Screenshot showing the IP configuration of multiple NICs connected to two different networks.":::
## Configure VM network routing
EFLOW uses the [route](https://man7.org/linux/man-pages/man8/route.8.html) servi
sudo route ```
- ![Screenshot listing routing table for the EFLOW VM.](./media/how-to-configure-iot-edge-for-linux-on-windows-iiot-dmz/route-output.png)
+ :::image type="content" source="./media/how-to-configure-iot-edge-for-linux-on-windows-iiot-dmz/route-output.png" alt-text="Screenshot showing the routing table for the EFLOW virtual machine.":::
>[!TIP]
>The previous image shows the route command output with the two NICs assigned (*eth0* and *eth1*). The virtual machine creates two different *default* destination rules with different metrics. A lower metric value has a higher priority. This routing table will vary depending on the networking scenario configured in the previous steps.
iot-edge How To Connect Downstream Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-connect-downstream-device.md
On Windows hosts, if you're not using OpenSSL or another TLS library, the SDK de
This section introduces a sample application to connect an Azure IoT Java device client to an IoT Edge gateway.
-1. Get the sample for **Send-event** from the [Azure IoT device SDK for Java samples](https://github.com/Azure/azure-iot-sdk-java/tree/main/device/iot-device-samples).
+1. Get the sample for **Send-event** from the [Azure IoT device SDK for Java samples](https://github.com/Azure/azure-iot-sdk-java/tree/main/iothub/device/iot-device-samples).
2. Make sure that you have all the prerequisites to run the sample by reviewing the **readme.md** file. 3. Refer to the SDK documentation for instructions on how to run the sample on your device.
iot-hub Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/policy-reference.md
Title: Built-in policy definitions for Azure IoT Hub description: Lists Azure Policy built-in policy definitions for Azure IoT Hub. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
key-vault Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/policy-reference.md
Title: Built-in policy definitions for Key Vault description: Lists Azure Policy built-in policy definitions for Key Vault. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
the link in the **Version** column to view the source on the
- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy). - Review the [Azure Policy definition structure](../governance/policy/concepts/definition-structure.md).-- Review [Understanding policy effects](../governance/policy/concepts/effects.md).
+- Review [Understanding policy effects](../governance/policy/concepts/effects.md).
lab-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/policy-reference.md
Title: Built-in policy definitions for Lab Services description: Lists Azure Policy built-in policy definitions for Azure Lab Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
lighthouse Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/samples/policy-reference.md
Title: Built-in policy definitions for Azure Lighthouse description: Lists Azure Policy built-in policy definitions for Azure Lighthouse. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
logic-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/policy-reference.md
Title: Built-in policy definitions for Azure Logic Apps description: Lists Azure Policy built-in policy definitions for Azure Logic Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023 ms.suite: integration
machine-learning How To Deploy Mlflow Models Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md
Use the following steps to deploy an MLflow model with a custom scoring script.
__score.py__
- ```python
- import logging
- import mlflow
- import os
- from io import StringIO
- from mlflow.pyfunc.scoring_server import infer_and_parse_json_input, predictions_to_json
-
- def init():
- global model
- global input_schema
- # The path 'model' corresponds to the path where the MLflow artifacts where stored when
- # registering the model using MLflow format.
- model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'model')
- model = mlflow.pyfunc.load_model(model_path)
- input_schema = model.metadata.get_input_schema()
-
- def run(raw_data):
- json_data = json.loads(raw_data)
- if "input_data" not in json_data.keys():
- raise Exception("Request must contain a top level key named 'input_data'")
-
- serving_input = json.dumps(json_data["input_data"])
- data = infer_and_parse_json_input(serving_input, input_schema)
- result = model.predict(data)
-
- result = StringIO()
- predictions_to_json(raw_predictions, result)
- return result.getvalue()
- ```
+ :::code language="python" source="~/azureml-examples-main/cli/endpoints/online/ncd/sklearn-diabetes/src/score.py" highlight="14":::
> [!TIP]
> The previous scoring script is provided as an example of how to perform inference with an MLflow model. You can adapt this example to your needs or change any of its parts to reflect your scenario.
Use the following steps to deploy an MLflow model with a custom scoring script.
__conda.yml__
- ```yaml
- channels:
- - conda-forge
- dependencies:
- - python=3.7.11
- - pip
- - pip:
- - mlflow
- - scikit-learn==0.24.1
- - cloudpickle==2.0.0
- - psutil==5.8.0
- - pandas==1.3.5
- - azureml-inference-server-http
- name: mlflow-env
- ```
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/ncd/sklearn-diabetes/environment/conda.yml":::
> [!NOTE] > Note how the package `azureml-inference-server-http` has been added to the original conda dependencies file.
Use the following steps to deploy an MLflow model with a custom scoring script.
model: azureml:sklearn-diabetes@latest environment: image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04
- conda_file: mlflow/sklearn-diabetes/environment/conda.yml
+ conda_file: sklearn-diabetes/environment/conda.yml
code_configuration:
- code: mlflow/sklearn-diabetes/src
+ code: sklearn-diabetes/src
scoring_script: score.py instance_type: Standard_F2s_v2 instance_count: 1
machine-learning How To Integrate Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-integrate-azure-policy.md
You can also assign policies by using [Azure PowerShell](../governance/policy/as
## Conditional access policies
-You can't use [Azure AD Conditional Access policies](/azure/active-directory/conditional-access/overview) to control access to Azure Machine Learning studio, as it's a client application. Azure Machine Learning does honor conditional access policies you may have created for other cloud apps or services. For example, when attempting to access approved apps from a Jupyter Notebook running on an Azure Machine Learning compute instance.
+To control who can access your Azure Machine Learning workspace, use Azure Active Directory [Conditional Access](../active-directory/conditional-access/overview.md). To use Conditional Access for Azure Machine Learning workspaces, [assign the Conditional Access policy](../active-directory/conditional-access/concept-conditional-access-cloud-apps.md) to the app named __Azure Machine Learning__. The app ID is __0736f41a-0425-4b46-bdb5-1563eff02385__.
## Enable self-service using landing zones
machine-learning How To Setup Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-authentication.md
Learn how to set up authentication to your Azure Machine Learning workspace from
Regardless of the authentication workflow used, Azure role-based access control (Azure RBAC) is used to scope the level of access (authorization) allowed to the resources. For example, an admin or automation process might have access to create a compute instance, but not use it, while a data scientist could use it, but not delete or create it. For more information, see [Manage access to Azure Machine Learning workspace](how-to-assign-roles.md).
+Azure AD Conditional Access can be used to further control or restrict access to the workspace for each authentication workflow. For example, an admin can allow workspace access from managed devices only.
+ ## Prerequisites * Create an [Azure Machine Learning workspace](how-to-manage-workspace.md).
print(ml_client)
## Use Conditional Access
-You can't use [Azure AD Conditional Access policies](/azure/active-directory/conditional-access/overview) to control access to Azure Machine Learning studio, as it's a client application. Azure Machine Learning does honor conditional access policies you may have created for other cloud apps or services. For example, when attempting to access approved apps from a Jupyter Notebook running on an Azure Machine Learning compute instance.
+As an administrator, you can enforce [Azure AD Conditional Access policies](../active-directory/conditional-access/overview.md) for users signing in to the workspace. For example, you
+can require two-factor authentication, or allow sign-in only from managed devices. To use Conditional Access for Azure Machine Learning workspaces specifically, [assign the Conditional Access policy](../active-directory/conditional-access/concept-conditional-access-cloud-apps.md) to the app named __Azure Machine Learning__. The app ID is __0736f41a-0425-4b46-bdb5-1563eff02385__.
## Next steps
machine-learning How To Setup Mlops Azureml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-mlops-azureml.md
Before you can set up an MLOps project with AzureML, you need to set up authenti
3. In the project under **Project Settings** (at the bottom left of the project page) select **Service Connections**.
-4. Select **New Service Connection**.
+4. Select **Create Service Connection**.
![Screenshot of ADO New Service connection button.](./media/how-to-setup-mlops-azureml/create_first_service_connection.png)
The Azure DevOps setup is successfully finished.
1. Open the **Project settings** at the bottom of the left hand navigation pane
-1. Under the Repos section, select **Repositories**. Select the repository you created in **Step 6.** Select the **Security** tab
+1. Under the Repos section, select **Repositories**. Select the repository you created in the previous step, and then select the **Security** tab.
1. Under the User permissions section, select the **mlopsv2 Build Service** user. Change the permission **Contribute** permission to **Allow** and the **Create branch** permission to **Allow**. ![Screenshot of Azure DevOps permissions.](./media/how-to-setup-mlops-azureml/ado-permissions-repo.png)
This step deploys the training pipeline to the Azure Machine Learning workspace
![Screenshot of ADO Pipelines.](./media/how-to-setup-mlops-azureml/ADO-pipelines.png)
-1. Select **New Pipeline**.
-
- ![Screenshot of ADO New Pipeline button for infra.](./media/how-to-setup-mlops-azureml/ADO-new-pipeline.png)
+1. Select **Create Pipeline**.
1. Select **Azure Repos Git**.
machine-learning How To Use Mlflow Cli Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-cli-runs.md
ms.devlang: azurecli
# Track ML experiments and models with MLflow
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning developer platform you are using:"]
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning developer platform you're using:"]
> * [v1](./v1/how-to-use-mlflow.md) > * [v2 (current version)](how-to-use-mlflow-cli-runs.md)
-Azure Machine Learning workspaces are MLflow-compatible, which means you can use MLflow to track runs, metrics, parameters, and artifacts with your Azure Machine Learning workspaces. By using MLflow for tracking, you don't need to change your training routines to work with Azure Machine Learning or inject any cloud-specific syntax, which is one of the main advantages of the approach.
+__Tracking__ refers to the process of saving all the experiment-related information that you may find relevant for every experiment you run. Such metadata varies based on your project, but it may include:
-See [MLflow and Azure Machine Learning](concept-mlflow.md) for all supported MLflow and Azure Machine Learning functionality including MLflow Project support (preview) and model deployment.
+> [!div class="checklist"]
+> - Code
+> - Environment details (OS version, Python packages)
+> - Input data
+> - Parameter configurations
+> - Models
+> - Evaluation metrics
+> - Evaluation visualizations (confusion matrix, importance plots)
+> - Evaluation results (including some evaluation predictions)
+
+Some of these elements are automatically tracked by Azure Machine Learning when working with jobs (including code, environment, and input and output data). However, others, like models, parameters, and metrics, need to be instrumented by the model builder because they're specific to the particular scenario.
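As a minimal sketch of what that instrumentation looks like (the parameter and metric names here are illustrative):

```python
import mlflow

# Parameters and metrics aren't captured automatically; the model builder
# logs them explicitly with the MLflow SDK inside the training routine.
with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_metric("accuracy", 0.91)
```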
-In this article, you will learn how to use MLflow for tracking your experiments and runs in Azure Machine Learning workspaces.
+In this article, you'll learn how to use MLflow for tracking your experiments and runs in Azure Machine Learning workspaces.
> [!NOTE] > If you want to track experiments running on Azure Databricks or Azure Synapse Analytics, see the dedicated articles [Track Azure Databricks ML experiments with MLflow and Azure Machine Learning](how-to-use-mlflow-azure-databricks.md) or [Track Azure Synapse Analytics ML experiments with MLflow and Azure Machine Learning](how-to-use-mlflow-azure-synapse.md).
+## Benefits of tracking experiments
+
+We highly encourage machine learning practitioners to instrument their experiments by tracking them, regardless of whether they're training with jobs in Azure Machine Learning or interactively in notebooks. Benefits include:
+
+- All of your ML experiments are organized in a single place, allowing you to search and filter experiments, find the information you need, and drill down to see exactly what you tried before.
+- Compare experiments, analyze results, and debug model training with little extra work.
+- Reproduce or re-run experiments to validate results.
+- Improve collaboration by seeing what everyone is doing, sharing experiment results, and accessing experiment data programmatically.
+
+### Why MLflow
+
+Azure Machine Learning workspaces are MLflow-compatible, which means you can use MLflow to track runs, metrics, parameters, and artifacts with your Azure Machine Learning workspaces. By using MLflow for tracking, you don't need to change your training routines to work with Azure Machine Learning or inject any cloud-specific syntax, which is one of the main advantages of the approach.
+
+See [MLflow and Azure Machine Learning](concept-mlflow.md) for all supported MLflow and Azure Machine Learning functionality including MLflow Project support (preview) and model deployment.
+ ## Prerequisites [!INCLUDE [mlflow-prereqs](../../includes/machine-learning-mlflow-prereqs.md)]
When submitting jobs using Azure Machine Learning CLI or SDK, you can set the ex
## Configure the run
-Azure Machine Learning any training job in what MLflow calls a run. Use runs to capture all the processing that your job performs.
+Azure Machine Learning tracks any training job in what MLflow calls a run. Use runs to capture all the processing that your job performs.
# [Working interactively](#tab/interactive)
-When working interactively, MLflow starts tracking your training routine as soon as you try to log information that requires an active run. For instance, when you log a metric, log a parameter, or when you start a training cycle when Mlflow's autologging functionality is enabled. However, it is usually helpful to start the run explicitly, specially if you want to capture the total time of your experiment in the field __Duration__. To start the run explicitly, use `mlflow.start_run()`.
+When working interactively, MLflow starts tracking your training routine as soon as you try to log information that requires an active run, for instance, when you log a metric or a parameter, or when you start a training cycle while MLflow's autologging functionality is enabled. However, it's usually helpful to start the run explicitly, especially if you want to capture the total time of your experiment in the field __Duration__. To start the run explicitly, use `mlflow.start_run()`.
-Regardless if you started the run manually or not, you will eventually need to stop the run to inform MLflow that your experiment run has finished and marks its status as __Completed__. To do that, all `mlflow.end_run()`. We strongly recommend starting runs manually so you don't forget to end them when working on notebooks.
+Regardless of whether you started the run manually, you'll eventually need to stop the run to inform MLflow that your experiment run has finished, which marks its status as __Completed__. To do that, call `mlflow.end_run()`. We strongly recommend starting runs manually so you don't forget to end them when working on notebooks.
```python mlflow.start_run()
mlflow.start_run()
mlflow.end_run() ```
-To help you avoid forgetting to end the run, it is usually helpful to use the context manager paradigm:
+To help you avoid forgetting to end the run, it's usually helpful to use the context manager paradigm:
```python with mlflow.start_run() as run:
When working with jobs, you typically place all your training logic inside of a
:::code language="python" source="~/azureml-examples-main/cli/jobs/basics/src/hello-mlflow.py" highlight="9-10,12":::
-The previous code example doesn't uses `mlflow.start_run()` but if used you can expect MLflow to reuse the current active run so there is no need to remove those lines if migrating to Azure Machine Learning.
+The previous code example doesn't use `mlflow.start_run()`, but if it did, you could expect MLflow to reuse the current active run, so there's no need to remove those lines if you're migrating to Azure Machine Learning.
### Adding tracking to your routine
Use MLflow SDK to track any metric, parameter, artifacts, or models. For detaile
### Ensure your job's environment has MLflow installed
-All Azure Machine Learning environments already have MLflow installed for you, so no action is required if you are using a curated environment. If you want to use a custom environment:
+All Azure Machine Learning environments already have MLflow installed for you, so no action is required if you're using a curated environment. If you want to use a custom environment:
1. Create a `conda.yml` file with the dependencies you need: :::code language="yaml" source="~/azureml-examples-main//sdk/python/using-mlflow/deploy/environment/conda.yml" highlight="7-8" range="1-12":::
-1. Reference the environment in the job you are using.
+1. Reference the environment in the job you're using.
-### Configuring the job's name
+### Configuring the job's name
Use the parameter `display_name` of Azure Machine Learning jobs to configure the name of the run. The following example shows how:
Use the parameter `display_name` of Azure Machine Learning jobs to configure the
) ```
-2. Ensure you are not using `mlflow.start_run(run_name="")` inside of your training routine.
+2. Ensure you're not using `mlflow.start_run(run_name="")` inside your training routine.
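For reference, a minimal sketch of setting the run name with the Azure Machine Learning Python SDK v2; the source path, environment, and compute name are placeholder assumptions:

```python
from azure.ai.ml import command

# Hypothetical job definition; display_name becomes the run's name
job = command(
    code="./src",
    command="python train.py",
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest",
    compute="cpu-cluster",
    display_name="my-experiment-run",
)
```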
### Submitting the job
-1. First, let's connect to Azure Machine Learning workspace where we are going to work on.
+1. First, let's connect to the Azure Machine Learning workspace we're going to work in.
# [Azure CLI](#tab/cli)
The metrics and artifacts from MLflow logging are tracked in your workspace. To
:::image type="content" source="media/how-to-log-view-metrics/metrics.png" alt-text="Screenshot of the metrics view.":::
-Select the logged metrics to render charts on the right side. You can customize the charts by applying smoothing, changing the color, or plotting multiple metrics on a single graph. You can also resize and rearrange the layout as you wish. Once you have created your desired view, you can save it for future use and share it with your teammates using a direct link.
+Select the logged metrics to render charts on the right side. You can customize the charts by applying smoothing, changing the color, or plotting multiple metrics on a single graph. You can also resize and rearrange the layout as you wish. Once you've created your desired view, you can save it for future use and share it with your teammates using a direct link.
-Retrieve run metric using MLflow SDK, use [mlflow.get_run()](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.get_run).
+You can also access or __query metrics, parameters, and artifacts programmatically__ using the MLflow SDK. Use [mlflow.get_run()](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.get_run) as explained below:
-```Python
-from mlflow.tracking import MlflowClient
+```python
+import mlflow
-client = MlflowClient()
-run = MlflowClient().get_run("<RUN_ID>")
+run = mlflow.get_run("<RUN_ID>")
metrics = run.data.metrics
-tags = run.data.tags
params = run.data.params
+tags = run.data.tags
-print(metrics,tags,params)
+print(metrics, params, tags)
```
-To view the artifacts of a run, you can use [MlFlowClient.list_artifacts()](https://mlflow.org/docs/latest/python_api/mlflow.tracking.html#mlflow.tracking.MlflowClient.list_artifacts)
-
-```Python
-client.list_artifacts(run_id)
-```
+> [!TIP]
+> For metrics, the previous example will only return the last value of a given metric. If you want to retrieve all the values of a given metric, use the `mlflow.get_metric_history` method as explained in [Getting params and metrics from a run](how-to-track-experiments-mlflow.md#getting-params-and-metrics-from-a-run).
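As an illustration, a minimal sketch using the MLflow client; the run ID and the metric name `accuracy` are placeholders:

```python
from mlflow.tracking import MlflowClient

client = MlflowClient()
# Returns one entry per logged value of the metric, in chronological order
for measurement in client.get_metric_history("<RUN_ID>", "accuracy"):
    print(measurement.step, measurement.value)
```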
-To download an artifact to the current directory, you can use [MLFlowClient.download_artifacts()](https://www.mlflow.org/docs/latest/python_api/mlflow.tracking.html#mlflow.tracking.MlflowClient.download_artifacts)
+To download artifacts you've logged, like files and models, you can use [mlflow.artifacts.download_artifacts()](https://www.mlflow.org/docs/latest/python_api/mlflow.artifacts.html#mlflow.artifacts.download_artifacts):
-```Python
-client.download_artifacts(run_id, "helloworld.txt", ".")
+```python
+mlflow.artifacts.download_artifacts(run_id="<RUN_ID>", artifact_path="helloworld.txt")
```
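The call returns the local path where the artifact was downloaded, so you can open or load the file right away.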
-For more details about how to retrieve information from experiments and runs in Azure Machine Learning using MLflow view [Query & compare experiments and runs with MLflow](how-to-track-experiments-mlflow.md).
+For more details about how to __retrieve or compare__ information from experiments and runs in Azure Machine Learning using MLflow, see [Query & compare experiments and runs with MLflow](how-to-track-experiments-mlflow.md).
## Example notebooks
-If you are looking for examples about how to use MLflow in Jupyter notebooks, please see our example's repository [Using MLflow (Jupyter Notebooks)](https://github.com/Azure/azureml-examples/tree/main/sdk/python/using-mlflow).
+If you're looking for examples of how to use MLflow in Jupyter notebooks, see our examples repository [Using MLflow (Jupyter Notebooks)](https://github.com/Azure/azureml-examples/tree/main/sdk/python/using-mlflow).
## Limitations
machine-learning Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/policy-reference.md
Title: Built-in policy definitions for Azure Machine Learning description: Lists Azure Policy built-in policy definitions for Azure Machine Learning. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
machine-learning Quickstart Create Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/quickstart-create-resources.md
The studio is your web portal for Azure Machine Learning. This portal combines n
Review the parts of the studio on the left-hand navigation bar:
-* The **Author** section of the studio contains multiple ways to get started in creating machine learning models. You can:
+* The **Authoring** section of the studio contains multiple ways to get started in creating machine learning models. You can:
    * **Notebooks** section allows you to create Jupyter Notebooks, copy sample notebooks, and run notebooks and Python scripts.
    * **Automated ML** steps you through creating a machine learning model without writing code.
machine-learning How To Setup Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-setup-authentication.md
Learn how to set up authentication to your Azure Machine Learning workspace. Aut
Regardless of the authentication workflow used, Azure role-based access control (Azure RBAC) is used to scope the level of access (authorization) allowed to the resources. For example, an admin or automation process might have access to create a compute instance, but not use it, while a data scientist could use it, but not delete or create it. For more information, see [Manage access to Azure Machine Learning workspace](../how-to-assign-roles.md).
+Azure AD Conditional Access can be used to further control or restrict access to the workspace for each authentication workflow. For example, an admin can allow workspace access from managed devices only.
+ ## Prerequisites

* Create an [Azure Machine Learning workspace](../how-to-manage-workspace.md).
ws = Workspace(subscription_id="your-sub-id",
## Use Conditional Access
-You can't use [Azure AD Conditional Access policies](/azure/active-directory/conditional-access/overview) to control access to Azure Machine Learning studio, as it's a client application. Azure Machine Learning does honor conditional access policies you may have created for other cloud apps or services. For example, when attempting to access approved apps from a Jupyter Notebook running on an Azure Machine Learning compute instance.
+As an administrator, you can enforce [Azure AD Conditional Access policies](../../active-directory/conditional-access/overview.md) for users signing in to the workspace. For example, you
+can require two-factor authentication, or allow sign-in only from managed devices. To use Conditional Access for Azure Machine Learning workspaces specifically, [assign the Conditional Access policy](../../active-directory/conditional-access/concept-conditional-access-cloud-apps.md) to the app named __Azure Machine Learning__. The app ID is __0736f41a-0425-4b46-bdb5-1563eff02385__.
## Next steps
mariadb Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/policy-reference.md
Previously updated : 01/05/2023 Last updated : 02/21/2023 # Azure Policy built-in definitions for Azure Database for MariaDB
migrate Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/policy-reference.md
Title: Built-in policy definitions for Azure Migrate description: Lists Azure Policy built-in policy definitions for Azure Migrate. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
mysql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/policy-reference.md
Previously updated : 01/05/2023 Last updated : 02/21/2023 # Azure Policy built-in definitions for Azure Database for MySQL
networking Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/policy-reference.md
Title: Built-in policy definitions for Azure networking services description: Lists Azure Policy built-in policy definitions for Azure networking services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
openshift Howto Add Update Pull Secret https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-add-update-pull-secret.md
This section walks through updating that pull secret with additional values from
Run the following command to update your pull secret.

> [!NOTE]
-> Running this command will cause your cluster nodes to restart one by one as they're updated.
+> In ARO 4.9 or older, running this command will cause your cluster nodes to restart one by one as they're updated.
+> In ARO 4.10 or later, a restart won't be triggered.
```console
oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=./pull-secret.json
```
+### Verify that the pull secret is in place
+
+```console
+oc exec $(oc get pod -n openshift-apiserver -o jsonpath="{.items[0].metadata.name}") -- cat /var/lib/kubelet/config.json
+```
+ After the secret is set, you're ready to enable Red Hat Certified Operators.

### Modify the configuration files
openshift Quickstart Openshift Arm Bicep Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/quickstart-openshift-arm-bicep-template.md
Previously updated : 03/17/2022 Last updated : 02/15/2023 keywords: azure, openshift, aro, red hat, arm, bicep #Customer intent: I need to use ARM templates or Bicep files to deploy my Azure Red Hat OpenShift cluster. zone_pivot_groups: azure-red-hat-openshift
This section provides information on deploying the azuredeploy.json template.
The azuredeploy.json template is used to deploy an Azure Red Hat OpenShift cluster. The following parameters are required.
+> [!NOTE]
+> For the `domain` parameter, specify the domain prefix that will be used as part of the auto-generated DNS name for OpenShift console and API servers. This prefix is also used as part of the name of the resource group that is created to host the cluster VMs.
+ | Property | Description | Valid Options | Default Value |
|-|-|-|-|
| `domain` |The domain prefix for the cluster. | | none |
openshift Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/quickstart-portal.md
Previously updated : 11/30/2021 Last updated : 02/14/2023
Create a service principal, as explained in [Use the portal to create an Azure A
* Select **Master VM Size** and **Worker VM Size**.

![**Basics** tab on Azure portal](./media/Basics.png)
+
+ > [!NOTE]
+ > In the **Domain name** field, you can either specify a domain name (e.g., *example.com*) or a prefix (e.g., *abc*) that will be used as part of the auto-generated DNS name for OpenShift console and API servers. This prefix is also used as part of the name of the resource group (e.g., *aro-abc*) that is created to host the cluster VMs.
4. On the **Authentication** tab of the **Azure Red Hat OpenShift** dialog, complete the following sections.
openshift Tutorial Create Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/tutorial-create-cluster.md
Last updated 10/26/2020
# Tutorial: Create an Azure Red Hat OpenShift 4 cluster
-In this tutorial, part one of three, you'll prepare your environment to create an Azure Red Hat OpenShift cluster running OpenShift 4, and create a cluster. You'll learn how to:
+In this tutorial, part one of three, you prepare your environment to create an Azure Red Hat OpenShift cluster running OpenShift 4, and create a cluster. You learn how to:
> [!div class="checklist"]
> * Set up the prerequisites
> * Create the required virtual network and subnets
In this tutorial, part one of three, you'll prepare your environment to create a
If you choose to install and use the CLI locally, this tutorial requires that you're running the Azure CLI version 2.30.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
-Azure Red Hat OpenShift requires a minimum of 40 cores to create and run an OpenShift cluster. The default Azure resource quota for a new Azure subscription does not meet this requirement. To request an increase in your resource limit, see [Standard quota: Increase limits by VM series](../azure-portal/supportability/per-vm-quota-requests.md).
+Azure Red Hat OpenShift requires a minimum of 40 cores to create and run an OpenShift cluster. The default Azure resource quota for a new Azure subscription doesn't meet this requirement. To request an increase in your resource limit, see [Standard quota: Increase limits by VM series](../azure-portal/supportability/per-vm-quota-requests.md).
-* For example to check the current subscription quota of the smallest supported virtual machine family SKU "Standard DSv3":
+* For example, to check the current subscription quota of the smallest supported virtual machine family SKU "Standard DSv3":
```azurecli-interactive
LOCATION=eastus
```
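A sketch of how the rest of that check might look with the Azure CLI; the exact quota family filter is an assumption:

```azurecli-interactive
# Hypothetical quota query; adjust the family name to the SKU you plan to use
az vm list-usage --location $LOCATION \
    --query "[?contains(name.value, 'standardDSv3Family')]" \
    --output table
```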
Azure Red Hat OpenShift requires a minimum of 40 cores to create and run an Open
### Verify your permissions
-During this tutorial, you will create a resource group, which will contain the virtual network for the cluster. You must have either Contributor and User Access Administrator permissions, or Owner permissions, either directly on the virtual network, or on the resource group or subscription containing it.
+During this tutorial, you'll create a resource group, which contains the virtual network for the cluster. To do this, you'll need Contributor and User Access Administrator permissions or Owner permissions, either directly on the virtual network or on the resource group or subscription containing it.
-You will also need sufficient Azure Active Directory permissions (either a member user of the tenant, or a guest user assigned with role **Application administrator**) for the tooling to create an application and service principal on your behalf for the cluster. See [Member and guest users](../active-directory/fundamentals/users-default-permissions.md#member-and-guest-users) and [Assign administrator and non-administrator roles to users with Azure Active Directory](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md) for more details.
+You'll also need sufficient Azure Active Directory permissions (either a member user of the tenant, or a guest user assigned the **Application administrator** role) for the tooling to create an application and service principal on your behalf for the cluster. For more details, see [Member and guest users](../active-directory/fundamentals/users-default-permissions.md#member-and-guest-users) and [Assign administrator and non-administrator roles to users with Azure Active Directory](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
### Register the resource providers
You will also need sufficient Azure Active Directory permissions (either a membe
### Get a Red Hat pull secret (optional)

> [!NOTE]
- > ARO pull secret does not change the cost of the RH OpenShift license for ARO.
+ > ARO pull secret doesn't change the cost of the RH OpenShift license for ARO.
-A Red Hat pull secret enables your cluster to access Red Hat container registries along with additional content. This step is optional but recommended. Please note that the field `cloud.openshift.com` will be removed from your secret even if your pull-secret contains that field. This field enables an extra monitoring feature which sends data to RedHat and is thus disabled by default. To enable this feature, see https://docs.openshift.com/container-platform/4.11/support/remote_health_monitoring/enabling-remote-health-reporting.html .
+A Red Hat pull secret enables your cluster to access Red Hat container registries along with other content. This step is optional but recommended. The field `cloud.openshift.com` is removed from your secret even if your pull-secret contains that field. This field enables an extra monitoring feature, which sends data to Red Hat and is thus disabled by default. To enable this feature, see https://docs.openshift.com/container-platform/4.11/support/remote_health_monitoring/enabling-remote-health-reporting.html.
-1. [Navigate to your Red Hat OpenShift cluster manager portal](https://console.redhat.com/openshift/install/azure/aro-provisioned) and log in.
+1. [Navigate to your Red Hat OpenShift cluster manager portal](https://console.redhat.com/openshift/install/azure/aro-provisioned) and sign in.
- You will need to log in to your Red Hat account or create a new Red Hat account with your business email and accept the terms and conditions.
+ You'll need to sign in to your Red Hat account or create a new Red Hat account with your business email and accept the terms and conditions.
-1. Click **Download pull secret** and download a pull secret to be used with your ARO cluster.
+1. Select **Download pull secret** and download a pull secret to be used with your ARO cluster.
Keep the saved `pull-secret.txt` file somewhere safe. The file will be used in each cluster creation if you need to create a cluster that includes samples or operators for Red Hat or certified partners. When running the `az aro create` command, you can reference your pull secret using the `--pull-secret @pull-secret.txt` parameter. Execute `az aro create` from the directory where you stored your `pull-secret.txt` file. Otherwise, replace `@pull-secret.txt` with `@/path/to/my/pull-secret.txt`.
- If you are copying your pull secret or referencing it in other scripts, your pull secret should be formatted as a valid JSON string.
+ If you're copying your pull secret or referencing it in other scripts, your pull secret should be formatted as a valid JSON string.
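If you need to produce that single-line JSON form, one hypothetical approach is to validate and compact the file with `jq`:

```console
jq -c . pull-secret.txt > pull-secret.json
```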
### Prepare a custom domain for your cluster (optional)

When running the `az aro create` command, you can specify a custom domain for your cluster by using the `--domain foo.example.com` parameter.
-If you provide a custom domain for your cluster note the following points:
+> [!NOTE]
+> Although adding a domain name is optional when creating a cluster through Azure CLI, a domain name (or a prefix used as part of the auto-generated DNS name for OpenShift console and API servers) is needed when adding a cluster through the portal. See [Quickstart: Deploy an Azure Red Hat OpenShift cluster using the Azure portal](quickstart-portal.md#create-an-azure-red-hat-openshift-cluster) for more information.
+
+If you provide a custom domain for your cluster, note the following points:
-* After creating your cluster, you must create 2 DNS A records in your DNS server for the `--domain` specified:
+* After creating your cluster, you must create two DNS A records in your DNS server for the `--domain` specified:
  * **api** - pointing to the api server IP address
  * **\*.apps** - pointing to the ingress IP address
* Retrieve these values by executing the following command after cluster creation: `az aro show -n -g --query '{api:apiserverProfile.ip, ingress:ingressProfiles[0].ip}'`.
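If the domain's zone is hosted in Azure DNS, creating those records might look like the following sketch; the resource group, zone, and IP addresses are placeholders:

```azurecli-interactive
# Hypothetical records; substitute the IPs returned by 'az aro show'
az network dns record-set a add-record -g my-dns-rg -z example.com -n api -a 1.2.3.4
az network dns record-set a add-record -g my-dns-rg -z example.com -n "*.apps" -a 5.6.7.8
```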
If you provide a custom domain for your cluster note the following points:
### Create a virtual network containing two empty subnets
-Next, you will create a virtual network containing two empty subnets. If you have existing virtual network that meets your needs, you can skip this step.
+Next, you'll create a virtual network containing two empty subnets. If you have an existing virtual network that meets your needs, you can skip this step.
1. **Set the following variables in the shell environment in which you will execute the `az` commands.**
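    For example, a sketch with placeholder values you should change:

    ```azurecli-interactive
    LOCATION=eastus                 # the location of your cluster
    RESOURCEGROUP=aro-rg            # the resource group for your cluster
    CLUSTER=cluster                 # the name of your cluster
    ```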
Next, you will create a virtual network containing two empty subnets. If you hav
2. **Create a resource group.**
- An Azure resource group is a logical group in which Azure resources are deployed and managed. When you create a resource group, you are asked to specify a location. This location is where resource group metadata is stored, and it is also where your resources run in Azure if you don't specify another region during resource creation. Create a resource group using the [az group create](/cli/azure/group#az-group-create) command.
+ An Azure resource group is a logical group in which Azure resources are deployed and managed. When you create a resource group, you're asked to specify a location. This location is where resource group metadata is stored, and it's also where your resources run in Azure if you don't specify another region during resource creation. Create a resource group using the [az group create](/cli/azure/group#az-group-create) command.
> [!NOTE]
> Azure Red Hat OpenShift is not available in all regions where an Azure resource group can be created. See [Available regions](https://azure.microsoft.com/global-infrastructure/services/?products=openshift) for information on where Azure Red Hat OpenShift is supported.
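A minimal sketch using the variables set earlier:

```azurecli-interactive
az group create --name $RESOURCEGROUP --location $LOCATION
```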
Next, you will create a virtual network containing two empty subnets. If you hav
## Create the cluster

Run the following command to create a cluster. If you choose to use either of the following options, modify the command accordingly:
-* Optionally, you can [pass your Red Hat pull secret](#get-a-red-hat-pull-secret-optional) which enables your cluster to access Red Hat container registries along with additional content. Add the `--pull-secret @pull-secret.txt` argument to your command.
+* Optionally, you can [pass your Red Hat pull secret](#get-a-red-hat-pull-secret-optional), which enables your cluster to access Red Hat container registries along with other content. Add the `--pull-secret @pull-secret.txt` argument to your command.
* Optionally, you can [use a custom domain](#prepare-a-custom-domain-for-your-cluster-optional). Add the `--domain foo.example.com` argument to your command, replacing `foo.example.com` with your own custom domain. > [!NOTE]
partner-solutions New Relic Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-create.md
Title: Create an instance of Azure Native New Relic Service Preview
description: Learn how to create a resource by using Azure Native New Relic Service. Previously updated : 02/16/2023 Last updated : 02/21/2023 # Quickstart: Get started with Azure Native New Relic Service Preview
Use the Azure portal to find the Azure Native New Relic Service application:
## Configure metrics and logs
-Your next step is to configure metrics and logs on the **Logs** tab. When you're creating the New Relic resource, you can set up automatic log forwarding for two types of logs:
+Your next step is to configure metrics and logs on the **Metrics and Logs** tab. When you're creating the New Relic resource, you can set up metrics monitoring and automatic log forwarding:
-1. To send subscription-level logs to New Relic, select **Subscription activity logs**. If you leave this option cleared, no subscription-level logs will be sent to New Relic.
+1. To set up monitoring of platform metrics for Azure resources by New Relic, select **Enable metrics collection**. If you leave this option cleared, metrics aren't pulled by New Relic.
+
+1. To send subscription-level logs to New Relic, select **Subscription activity logs**. If you leave this option cleared, no subscription-level logs are sent to New Relic.
These logs provide insight into the operations on your resources at the [control plane](/azure/azure-resource-manager/management/control-plane-and-data-plane). These logs also include updates on service-health events.
Your next step is to configure metrics and logs on the **Logs** tab. When you're
- All Azure resources with tags defined in exclude rules don't send logs to New Relic.
- If there's a conflict between inclusion and exclusion rules, the exclusion rule applies.
- Azure charges for logs sent to New Relic. For more information, see the [pricing of platform logs](https://azure.microsoft.com/pricing/details/monitor/) to Azure Marketplace partners.
+ Azure charges for logs sent to New Relic. For more information, see the [pricing of platform logs](https://azure.microsoft.com/pricing/details/monitor/) sent to Azure Marketplace partners.
> [!NOTE] > You can collect metrics for virtual machines and app services by installing the New Relic agent after you create the New Relic resource.
postgresql Concepts Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-data-encryption.md
The following are current limitations for configuring the customer-managed key i
- CMK encryption can only be configured during creation of a new server, not as an update to the existing Flexible Server. You can [restore PITR backup to new server with CMK encryption](./concepts-backup-restore.md#point-in-time-recovery) instead. -- Once enabled, CMK encryption can't be removed. If customer desires to remove this feature, it can only be done via restore of the server to non-CMK server.
+- Once enabled, CMK encryption can't be removed. If customer desires to remove this feature, it can only be done via [restore of the server to non-CMK server](./concepts-backup-restore.md#point-in-time-recovery).
- No support for Azure HSM Key Vault
postgresql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/policy-reference.md
Previously updated : 01/05/2023 Last updated : 02/21/2023 # Azure Policy built-in definitions for Azure Database for PostgreSQL
private-5g-core Deploy Private Mobile Network With Site Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/deploy-private-mobile-network-with-site-arm-template.md
The following Azure resources are defined in the template.
|**Sim Resources** | If you want to provision SIMs, paste in the contents of the JSON file containing your SIM information. Otherwise, leave this field unchanged. |
| **Azure Stack Edge Device** | Enter the resource ID of the Azure Stack Edge resource in the site. |
|**Control Plane Access Interface Name** | Enter the virtual network name on port 5 on your Azure Stack Edge Pro device corresponding to the control plane interface on the access network. For 5G, this interface is the N2 interface; for 4G, it's the S1-MME interface. |
- |**Control Plane Access Ip Address** | Enter the IP address for the control plane interface on the access network. |
+ |**Control Plane Access Ip Address** | Enter the IP address for the control plane interface on the access network.<br/> Note: Ensure that the N2 IP address specified here matches the N2 address configured in the Azure Stack Edge portal. |
|**User Plane Access Interface Name** | Enter the virtual network name on port 5 on your Azure Stack Edge Pro device corresponding to the user plane interface on the access network. For 5G, this interface is the N3 interface; for 4G, it's the S1-U interface. |
|**User Plane Data Interface Name** | Enter the virtual network name on port 6 on your Azure Stack Edge Pro device corresponding to the user plane interface on the data network. For 5G, this interface is the N6 interface; for 4G, it's the SGi interface. |
|**User Equipment Address Pool Prefix** | Enter the network address of the subnet from which dynamic IP addresses must be allocated to User Equipment (UEs) in CIDR notation. You can omit this if you don't want to support dynamic IP address allocation. |
reliability Reliability Bot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-bot.md
Last updated 01/06/2022
-# What is reliability in Azure Bot Service?
+# Reliability in Azure Bot Service
When you create an application (bot) in Azure, you can choose whether or not your bot resource will have global or local data residency. Local data residency ensures that your bot's personal data is preserved, stored, and processed within certain geographic boundaries (like EU boundaries).
reliability Reliability Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-containers.md
Last updated 11/29/2022
<!--#Customer intent: I want to understand reliability support in Azure Container Instances so that I can respond to and/or avoid failures in order to minimize downtime and data loss. -->
-# What is reliability in Azure Container Instances?
+# Reliability in Azure Container Instances
> [!IMPORTANT]
reliability Reliability Energy Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-energy-data-services.md
Last updated 01/13/2023
-# What is reliability in Microsoft Energy Data Services?
+# Reliability in Microsoft Energy Data Services
This article describes reliability support in Microsoft Energy Data Services, and covers intra-regional resiliency with [availability zones](#availability-zone-support). For a more detailed overview of reliability in Azure, see [Azure reliability](../reliability/overview.md).
reliability Reliability Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-functions.md
Last updated 10/07/2022
<!--#Customer intent: I want to understand reliability support in Azure Functions so that I can respond to and/or avoid failures in order to minimize downtime and data loss. -->
-# What is reliability in Azure Functions?
+# Reliability in Azure Functions
This article describes reliability support in Azure Functions and covers both intra-regional resiliency with [availability zones](#availability-zone-support) and links to information on [cross-region resiliency with disaster recovery](#disaster-recovery-cross-region-failover). For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
role-based-access-control Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/policy-reference.md
Title: Built-in policy definitions for Azure RBAC description: Lists Azure Policy built-in policy definitions for Azure RBAC. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
sap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/get-started.md
ms.assetid: ad8e5c75-0cf6-4564-ae62-ea1246b4e5f2
vm-linux Previously updated : 12/28/2022 Last updated : 02/21/2023
In the SAP workload documentation space, you can find the following areas:
## Change Log
+- February 21, 2023: Corrected link to HANA hardware directory in [SAP HANA infrastructure configurations and operations on Azure](./hana-vm-operations.md) and fixed a bug in [SAP HANA Azure virtual machine Premium SSD v2 storage configurations](./hana-vm-premium-ssd-v2.md)
- February 17, 2023: Add support and Sentinel sections, and a few other minor updates in [RISE with SAP integration](rise-integration.md)
- February 02, 2023: Add new HA provider susChkSrv for [SAP HANA Scale-out HA on SUSE](sap-hana-high-availability-scale-out-hsr-suse.md) and change from SAPHanaSR to SAPHanaSrMultiTarget provider, enabling HANA multi-target replication
- January 27, 2023: Mark Azure Active Directory Domain Services as supported AD solution in [SAP workload on Azure virtual machine supported scenarios](planning-supported-configurations.md) after successful testing
sap Hana Vm Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-operations.md
Another description on how to use Azure NVAs to control and monitor access from
## Configuring Azure infrastructure for SAP HANA scale-out
-In order to find out the Azure VM types that are certified for either OLAP scale-out or S/4HANA scale-out, check the [SAP HANA hardware directory](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/iaas.html#categories=Microsoft%20Azure). A checkmark in the column 'Clustering' indicates scale-out support. Application type indicates whether OLAP scale-out or S/4HANA scale-out is supported. For details on nodes certified in scale-out, review the entry for a specific VM SKU listed in the SAP HANA hardware directory.
+In order to find out the Azure VM types that are certified for either OLAP scale-out or S/4HANA scale-out, check the [SAP HANA hardware directory](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;iaas;ve:24). A checkmark in the column 'Clustering' indicates scale-out support. Application type indicates whether OLAP scale-out or S/4HANA scale-out is supported. For details on nodes certified in scale-out, review the entry for a specific VM SKU listed in the SAP HANA hardware directory.
For the minimum OS releases for deploying scale-out configurations in Azure VMs, check the details of the entries in the particular VM SKU listed in the SAP HANA hardware directory. In an n-node OLAP scale-out configuration, one node functions as the main node. The other nodes up to the limit of the certification act as worker nodes. Additional standby nodes don't count toward the number of certified nodes.
sap Hana Vm Premium Ssd V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-premium-ssd-v2.md
Configuration for SAP **/hana/data** volume:
| | | | | | | |
| E20ds_v4 | 160 GiB | 480 MBps | 32,000 | 192 GB | 425 MBps | 3,000 |
| E20(d)s_v5| 160 GiB | 750 MBps | 32,000 | 192 GB | 425 MBps | 3,000 |
-| E32ds_v4 | 256 GiB | 768 MBps | 51,200| 304 GB | 425 MBps | 3,000 |
+| E32ds_v4 | 256 GiB | 769 MBps | 51,200 | 304 GB | 425 MBps | 3,000 |
| E32ds_v5 | 256 GiB | 865 MBps | 51,200| 304 GB | 425 MBps | 3,000 |
| E48ds_v4 | 384 GiB | 1,152 MBps | 76,800 | 464 GB |425 MBps | 3,000 |
| E48(d)s_v5 | 384 GiB | 1,315 MBps | 76,800 | 464 GB |425 MBps | 3,000 |
For the **/hana/log** volume, the configuration would look like:
| VM SKU | RAM | Max. VM I/O<br /> Throughput | Max VM IOPS | **/hana/log** capacity | **/hana/log** throughput | **/hana/log** IOPS | **/hana/shared** capacity <br />using default IOPS <br /> and throughput |
| | | | | | | | |
| E20ds_v4 | 160 GiB | 480 MBps | 32,000 | 80 GB | 275 MBps | 3,000 | 160 GB |
-| E20(d)s_v5 | 160 GiB | 750 MBps | 2,000 | 80 GB | 275 MBps | 3,000 | 160 GB |
+| E20(d)s_v5 | 160 GiB | 750 MBps | 32,000 | 80 GB | 275 MBps | 3,000 | 160 GB |
| E32ds_v4 | 256 GiB | 768 MBps | 51,200 | 128 GB | 275 MBps | 3,000 | 256 GB |
| E32(d)s_v5 | 256 GiB | 865 MBps | 51,200 | 128 GB | 275 MBps | 3,000 | 256 GB |
| E48ds_v4 | 384 GiB | 1,152 MBps | 76,800 | 192 GB | 275 MBps | 3,000 | 384 GB |
search Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Search description: Lists Azure Policy built-in policy definitions for Azure Cognitive Search. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
sentinel Audit Table Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/audit-table-reference.md
The following table describes the columns and data generated in the SentinelAudi
| ColumnName | ColumnType | Description | | | -- | -- | | **TenantId** | String | The tenant ID for your Microsoft Sentinel workspace. |
-| **TimeGenerated** | Datetime | The time (UTC) at which the audit event occurred. |
+| **TimeGenerated** | Datetime | The time (UTC) at which the audited activity occurred. |
| <a name="operationname_audit"></a>**OperationName** | String | The Azure operation being recorded. For example:<br>- `Microsoft.SecurityInsights/alertRules/Write`<br>- `Microsoft.SecurityInsights/alertRules/Delete` |
-| <a name="sentinelresourceid_audit"></a>**SentinelResourceId** | String | The unique identifier of the Microsoft Sentinel workspace and the associated resource on which the audit event occurred. |
+| <a name="sentinelresourceid_audit"></a>**SentinelResourceId** | String | The unique identifier of the Microsoft Sentinel workspace and the associated resource on which the audited activity occurred. |
| **SentinelResourceName** | String | The resource name. For analytics rules, this is the rule name. |
| <a name="status_audit"></a>**Status** | String | Indicates `Success` or `Failure` for the [OperationName](#operationname_audit). |
| **Description** | String | Describes the operation, including extended data as needed. For example, for failures, this column might indicate the failure reason. |
-| **WorkspaceId** | String | The workspace GUID on which the audit issue occurred. The full Azure Resource Identifier is available in the [SentinelResourceID](#sentinelresourceid_audit) column. |
+| **WorkspaceId** | String | The workspace GUID on which the audited activity occurred. The full Azure Resource Identifier is available in the [SentinelResourceID](#sentinelresourceid_audit) column. |
| **SentinelResourceType** | String | The Microsoft Sentinel resource type being monitored. |
| **SentinelResourceKind** | String | The specific type of resource being monitored. For example, for analytics rules: `NRT`. |
| **CorrelationId** | String | The event correlation ID in GUID format. |
Extended properties for analytics rules reflect certain [rule settings](detect-t
| **CallerName** | String | The user or application that initiated the action. |
| **OriginalResourceState** | Dynamic (json) | A JSON bag that describes the rule before the change. |
| **Reason** | String | The reason why the operation failed. For example: `No permissions`. |
-| **ResourceDiffMemberNames** | Array\[String\] | An array of the properties that changed on the relevant resource. For example: `['custom_details','look_back']`. |
-| **ResourceDisplayName** | String | Name of the analytics rule on which the audit issue occurred. |
-| **ResourceGroupName** | String | Resource group of the workspace on which the audit issue occurred. |
-| **ResourceId** | String | The resource ID of the analytics rule on which the audit issue occurred. |
-| **SubscriptionId** | String | The subscription ID of the workspace on which the audit issue occurred. |
+| **ResourceDiffMemberNames** | Array\[String\] | An array of the properties of the rule that were changed by the audited activity. For example: `['custom_details','look_back']`. |
+| **ResourceDisplayName** | String | Name of the analytics rule on which the audited activity occurred. |
+| **ResourceGroupName** | String | Resource group of the workspace on which the audited activity occurred. |
+| **ResourceId** | String | The resource ID of the analytics rule on which the audited activity occurred. |
+| **SubscriptionId** | String | The subscription ID of the workspace on which the audited activity occurred. |
| **UpdatedResourceState** | Dynamic (json) | A JSON bag that describes the rule after the change. |
| **Uri** | String | The full-path resource ID of the analytics rule. |
-| **WorkspaceId** | String | The resource ID of the workspace on which the audit issue occurred. |
-| **WorkspaceName** | String | The name of the workspace on which the audit issue occurred. |
+| **WorkspaceId** | String | The resource ID of the workspace on which the audited activity occurred. |
+| **WorkspaceName** | String | The name of the workspace on which the audited activity occurred. |
## Next steps
sentinel Data Transformation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-transformation.md
Title: Custom data ingestion and transformation in Microsoft Sentinel (preview)
+ Title: Custom data ingestion and transformation in Microsoft Sentinel
description: Learn about how Azure Monitor's custom log ingestion and data transformation features can help you get any data into Microsoft Sentinel and shape it the way you want.
Last updated 02/27/2022
-# Custom data ingestion and transformation in Microsoft Sentinel (preview)
+# Custom data ingestion and transformation in Microsoft Sentinel
Azure Monitor's Log Analytics serves as the platform behind the Microsoft Sentinel workspace. All logs ingested into Microsoft Sentinel are stored in Log Analytics by default. From Microsoft Sentinel, you can access the stored logs and run Kusto Query Language (KQL) queries to detect threats and monitor your network activity.
Learn more about Microsoft Sentinel data connector types. For more information,
For more in-depth information on ingestion-time transformation, the Custom Logs API, and data collection rules, see the following articles in the Azure Monitor documentation:

-- [Data collection transformations in Azure Monitor Logs (preview)](../azure-monitor/essentials/data-collection-transformations.md)
-- [Logs ingestion API in Azure Monitor Logs (Preview)](../azure-monitor/logs/logs-ingestion-api-overview.md)
+- [Data collection transformations in Azure Monitor Logs](../azure-monitor/essentials/data-collection-transformations.md)
+- [Logs ingestion API in Azure Monitor Logs](../azure-monitor/logs/logs-ingestion-api-overview.md)
- [Data collection rules in Azure Monitor](../azure-monitor/essentials/data-collection-rule-overview.md)
sentinel Health Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/health-audit.md
To start collecting health and audit data, you need to [enable health and audit
- [Automation rules and playbooks](monitor-automation-health.md#get-the-complete-automation-picture) (join query with Azure Logic Apps diagnostics) - [Analytics rules](monitor-analytics-rule-integrity.md#run-queries-to-detect-health-and-integrity-issues) -- Use the health monitoring workbooks provided in Microsoft Sentinel.
+- Use the auditing and health monitoring workbooks provided in Microsoft Sentinel.
  - [Data connectors](monitor-data-connector-health.md#use-the-health-monitoring-workbook)
  - [Automation rules and playbooks](monitor-automation-health.md#use-the-health-monitoring-workbook)
+ - [Analytics rules](monitor-analytics-rule-integrity.md#use-the-auditing-and-health-monitoring-workbook)
- Export the data into various destinations, like your Log Analytics workspace, archiving to a storage account, and more. Learn about the [supported destinations](../azure-monitor/essentials/diagnostic-settings.md) for your logs.
sentinel Migration Splunk Detection Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-splunk-detection-rules.md
Use these samples to compare and map rules from Splunk to Microsoft Sentinel in
|`rename` |Renames a field. Use wildcards to specify multiple fields. |[project-rename](/azure/data-explorer/kusto/query/projectrenameoperator) |`T | project-rename new_column_name = column_name` |
|`rex` |Specifies group names using regular expressions to extract fields. |[matches regex](/azure/data-explorer/kusto/query/re2) |`… | where field matches regex "^addr.*"` |
|`search` |Filters results to results that match the search expression. |[search](/azure/data-explorer/kusto/query/searchoperator?pivots=azuredataexplorer) |`search "X"` |
-|`sort` |Sorts the search results by the specified fields. |[sort](/azure/data-explorer/kusto/query/sortoperator) |`T | sort by strlen(country) asc, price desc` |
+|`sort` |Sorts the search results by the specified fields. |[sort](/azure/data-explorer/kusto/query/sort-operator) |`T | sort by strlen(country) asc, price desc` |
|`stats` |Provides statistics, optionally grouped by fields. Learn more about [common stats commands](https://github.com/Azure/Azure-Sentinel/blob/master/Tools/RuleMigration/SPL%20to%20KQL.md#common-stats-commands). |[summarize](/azure/data-explorer/kusto/query/summarizeoperator) |[KQL example](#stats-command-kql-example) |
|`mstats` |Similar to stats, used on metrics instead of events. |[summarize](/azure/data-explorer/kusto/query/summarizeoperator) |[KQL example](#mstats-command-kql-example) |
|`table` |Specifies which fields to keep in the result set, and retains data in tabular format. |[project](/azure/data-explorer/kusto/query/projectoperator) |`T | project columnA, columnB` |
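To make the mapping concrete, here's a hypothetical `stats`-style aggregation expressed in KQL; the table and column names are placeholders:

```kusto
SecurityEvent
| summarize count() by Account, Computer
```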
sentinel Monitor Analytics Rule Integrity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/monitor-analytics-rule-integrity.md
Title: Monitor the health and audit the integrity of your Microsoft Sentinel ana
description: Use the SentinelHealth data table to keep track of your analytics rules' execution and performance. - Previously updated : 01/17/2023+ Last updated : 02/20/2023 # Monitor the health and audit the integrity of your analytics rules
This article describes how to use Microsoft Sentinel's [auditing and health moni
- **Microsoft Sentinel analytics rule health logs:** - This log captures events that record the running of analytics rules, and the end result of these runnings&mdash;if they succeeded or failed, and if they failed, why.
- - The log also records how many events were captured by the query, whether or not that number passed the threshold and caused an alert to be fired.
- - These logs are collected in the *SentinelHealth* table in Log Analytics.
+ - The log also records, for each running of an analytics rule:
+ - How many events were captured by the rule's query.
+ - Whether the number of events passed the threshold defined in the rule, causing the rule to fire an alert.
+
+ These logs are collected in the *SentinelHealth* table in Log Analytics.
- **Microsoft Sentinel analytics rule audit logs:**
- - This log captures events that record changes made to any analytics rule, including which rule was changed, what the change was, the state of the rule settings before and after the change, the user or identity that made the change, the source IP and date/time of the change, and more.
- - These logs are collected in the *SentinelAudit* table in Log Analytics.
+ - This log captures events that record changes made to any analytics rule, including the following details:
+ - The name of the rule that was changed.
+ - Which properties of the rule were changed.
+ - The state of the rule settings before and after the change.
+ - The user or identity that made the change.
+ - The source IP and date/time of the change.
+ - ...and more.
+
+ These logs are collected in the *SentinelAudit* table in Log Analytics.
## Use the SentinelHealth and SentinelAudit data tables (Preview)
For either **Scheduled analytics rule run** or **NRT analytics rule run**, you m
| \<*number*> entities were dropped in alert \<*name*> due to entity mapping issues. | | | The query resulted in \<*number*> events, which exceeds the maximum of \<*limit*> results allowed for \<*rule type*> rules with alert-per-row event-grouping configuration. Alert-per-row was generated for first \<*limit*-1> events and an additional aggregated alert was generated to account for all events.<br>- \<*number*> = number of events returned by the query<br>- \<*limit*> = currently 150 alerts for scheduled rules, 30 for NRT rules<br>- \<*rule type*> = Scheduled or NRT
+## Use the auditing and health monitoring workbook
+
+1. To make the workbook available in your workspace, you must install the workbook solution from the Microsoft Sentinel content hub:
+ 1. From the Microsoft Sentinel portal, select **Content hub (Preview)** from the **Content management** menu.
+
+ 1. In the **Content hub**, enter *health* in the search bar, and select **Analytics Health & Audit** from among the **Workbook** solutions under **Standalone** in the results.
+
+ :::image type="content" source="media/monitor-analytics-rule-integrity/select-workbook-from-content-hub.png" alt-text="Screenshot of selection of analytics health workbook from content hub.":::
+
+ 1. Select **Install** from the details pane, then select **Save**, which appears in its place.
+
+1. When the solution indicates it's installed, select **Workbooks** from the **Threat management** menu.
+
+ :::image type="content" source="media/monitor-analytics-rule-integrity/installed.png" alt-text="Screenshot of indication that analytics health workbook solution is installed from content hub.":::
+
+1. In the **Workbooks** gallery, select the **Templates** tab, enter *health* in the search bar, and select **Analytics Health & Audit** from among the results.
+
+ :::image type="content" source="media/monitor-analytics-rule-integrity/select-workbook-template.png" alt-text="Screenshot of selecting analytics health workbook from template gallery.":::
+
+1. Select **Save** in the details pane to create an editable and usable copy of the workbook. When the copy is created, select **View saved workbook**.
+
+1. Once in the workbook, first select the **subscription** and **workspace** you wish to view (they may already be selected), then define the **TimeRange** to filter the data according to your needs. Use the **Show help** toggle to display in-place explanation of the workbook.
+
+ :::image type="content" source="media/monitor-analytics-rule-integrity/analytics-health-workbook-overview.png" alt-text="Screenshot of analytics rule health workbook overview tab.":::
+
+There are three tabbed sections in this workbook:
+
+### Overview tab
+
+The **Overview** tab shows health and audit summaries:
+- Health summaries of the status of analytics rule runs in the selected workspace: number of runs, successes and failures, and failure event details.
+- Audit summaries of activities on analytics rules in the selected workspace: number of activities over time, number of activities by type, and number of activities of different types by rule.
+
+### Health tab
+
+The **Health** tab lets you drill down to particular health events.
++
+- Filter the whole page data by **status** (success/failure) and **rule type** (scheduled/NRT).
+- See the trends of successful and/or failed rule runs (depending on the status filter) over the selected time period. You can "time brush" the trend graph to see a subset of the original time range.
+ :::image type="content" source="media/monitor-analytics-rule-integrity/analytics-rule-runs-over-time.png" alt-text="Screenshot of analytics rule runs over time in analytics health workbook.":::
+- Filter the rest of the page by **reason**.
+- See the total number of runs for all the analytics rules, displayed proportionally by status in a pie chart.
+- Following that is a table showing the number of unique analytics rules that ran, broken down by rule type and status.
+ - Select a status to filter the remaining charts for that status.
+ - Clear the filter by selecting the "Clear selection" icon (it looks like an "Undo" icon) in the upper right corner of the chart.
+ :::image type="content" source="media/monitor-analytics-rule-integrity/number-rule-runs-by-status-and-type.png" alt-text="Screenshot of number of rules run by status and type in the analytics health workbook.":::
+- See each status, with the number of possible reasons for that status. (Only reasons represented in the runs in the selected time frame will be shown.)
+ - Select a status to filter the remaining charts for that status.
+ - Clear the filter by selecting the "Clear selection" icon (it looks like an "Undo" icon) in the upper right corner of the chart.
+ :::image type="content" source="media/monitor-analytics-rule-integrity/unique-reasons-by-status.png" alt-text="Screenshot of number of unique reasons by status in analytics health workbook.":::
+- Next, see a list of those reasons, with the number of total rule runs combined and the number of unique rules that were run.
+ - Select a reason to filter the following charts for that reason.
+ - Clear the filter by selecting the "Clear selection" icon (it looks like an "Undo" icon) in the upper right corner of the chart.
+ :::image type="content" source="media/monitor-analytics-rule-integrity/rule-runs-by-reason.png" alt-text="Screenshot of rule runs by unique reason in analytics health workbook.":::
+- After that is a list of the unique analytics rules that ran, with the latest results and trendlines of their success and/or failure (depending on the status selected to filter the list).
+ - Select a rule to drill down and show a new table with all the runnings of that rule (in the selected time frame).
+ - Clear that table by selecting the "Clear selection" icon (it looks like an "Undo" icon) in the upper right corner of the chart.
+ :::image type="content" source="media/monitor-analytics-rule-integrity/unique-rules-by-status-and-trend.png" alt-text="Screenshot of list of unique rules run, with status and trendlines, in analytics health workbook.":::
+- If you selected a rule in the list above, a new table will appear with the health details for the selected rule.
++
+### Audit tab
+
+The **Audit** tab lets you drill down to particular audit events.
++
+- Filter the whole page data by **audit rule type** (scheduled/Fusion).
+- See the trends of audited activity on analytics rules over the selected time period. You can "time brush" the trend graph to see a subset of the original time range.
+ :::image type="content" source="media/monitor-analytics-rule-integrity/audit-trending-by-activity.png" alt-text="Screenshot of trending audit activity in analytics health workbook.":::
+- See the numbers of audited events, broken down by **activity** and **rule type**.
+ - Select an activity to filter the following charts for that activity.
+ - Clear the filter by selecting the "Clear selection" icon (it looks like an "Undo" icon) in the upper right corner of the chart.
+ :::image type="content" source="media/monitor-analytics-rule-integrity/number-audit-events-by-activity-and-type.png" alt-text="Screenshot of counts of audit events by activity and type in analytics health workbook.":::
+- See the number of audited events by **rule name**.
+ - Select a rule name to filter the following table for that rule, and to drill down and show a new table with all the activity on that rule (in the selected time frame). (See after the following screenshot.)
+ - Clear the filter by selecting the "Clear selection" icon (it looks like an "Undo" icon) in the upper right corner of the chart.
+ :::image type="content" source="media/monitor-analytics-rule-integrity/activity-by-rule-name-and-caller.png" alt-text="Screenshot of audited events by rule name and caller in analytics health workbook.":::
+- See the number of audited events by **caller** (the identity that performed the activity).
+- If you selected a rule name in the chart depicted above, another table will appear showing the audited **activities** on that rule. Select the value that appears as a link in the ExtendedProperties column to open a side panel displaying the changes made to the rule.
+ :::image type="content" source="media/monitor-analytics-rule-integrity/audit-activity-for-rule.png" alt-text="Screenshot of audit activity for selected rule in analytics health workbook.":::
## Next steps
sentinel Monitor Key Vault Honeytokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/monitor-key-vault-honeytokens.md
Last updated 01/09/2023
-# Deploy and monitor Azure Key Vault honeytokens with Microsoft Sentinel (Public preview)
+# Deploy and monitor Azure Key Vault honeytokens with Microsoft Sentinel (Community supported)
> [!IMPORTANT]
-> The Microsoft Sentinel Deception (Honey Tokens) solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> The Microsoft Sentinel Deception (Honey Tokens) solution is offered in a community supported model by the [Microsoft SIEM & XDR Community](https://github.com/Azure/Azure-Sentinel/wiki). Any support required can be raised as an [issue](https://github.com/Azure/Azure-Sentinel/issues) on GitHub where the Microsoft Sentinel community can assist.
>
+> For solution documentation, review the [Honeytokens solution GitHub page](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/HoneyTokens).
-This article describes how to use the **Microsoft Sentinel Deception (Honey Tokens)** solution to plant decoy [Azure Key Vault](../key-vault/index.yml) keys and secrets, called *honeytokens*, into existing workloads.
+## Azure Key Vault honeytokens are now a community-supported solution
-Use the [analytics rules](detect-threats-built-in.md), [watchlists](watchlists.md), and [workbooks](monitor-your-data.md) provided by the solution to monitor access to the deployed honeytokens.
-
-When using honeytokens in your system, the detection principles remain the same. Because there is no legitimate reason to access a honeytoken, any activity indicates the presence of a user who is not familiar with the environment and who could potentially be an attacker.
-
-## Before you begin
-
-In order to start using the **Microsoft Sentinel Deception (Honey Tokens)** solution, make sure that you have:
-
-- **Required roles**: You must be a tenant admin to install the **Microsoft Sentinel Deception (Honey Tokens)** solution. Once the solution is installed, you can share the workbook with key vault owners so that they can deploy their own honeytokens.
-
-- **Required data connectors**: Make sure that you've deployed the [Azure Key Vault](data-connectors-reference.md#azure-key-vault) and the [Azure Activity](data-connectors-reference.md#azure-activity) data connectors in your workspace, and that they're connected.
-
- Verify that data routing succeeded and that the **KeyVault** and **AzureActivity** data is flowing into Microsoft Sentinel. For more information, see:
-
- - [Connect Microsoft Sentinel to Azure, Windows, Microsoft, and Amazon services](connect-azure-windows-microsoft-services.md?tabs=AP#diagnostic-settings-based-connections)
- - [Find your Microsoft Sentinel data connector](data-connectors-reference.md)
-
-## Install the solution
-
-Install the **Microsoft Sentinel Deception (Honey Tokens)** solution as you would [other solutions](sentinel-solutions-deploy.md). On the **Azure Sentinel Deception** solution page, select **Start** to get started.
--
-**To install the Deception solution**:
-
-The following steps describe specific actions required for the **Microsoft Sentinel Deception (Honey Tokens)** solution.
-
-1. On the **Basics** tab, select the same resource group where your Microsoft Sentinel workspace is located.
-
-1. On the **Prerequisites** tab, in the **Function app name** field, enter a meaningful name for the Azure function app that will create honeytokens in your key vaults.
-
-    The function app name must be unique, 2 to 22 characters in length, and contain only alphanumeric characters.
-
- A command is displayed below with the name you've defined. For example:
-
- :::image type="content" source="media/monitor-key-vault-honeytokens/prerequisites.png" alt-text="Screenshot of the prerequisites tab showing the updated curl command.":::
-
-1. <a name="secret"></a>Select **Click here to open a cloud shell** to open a Cloud Shell tab. Sign in if prompted, and then run the command displayed.
-
- The script you run creates an Azure AD (AAD) function app, which will deploy your honeytokens. For example:
-
- ```bash
- Requesting a Cloud Shell.Succeeded
- Connecting terminal...
-
- Welcome to Azure Cloud Shell
-
- Type "az" to use Azure CLI
- Type "help" to learn about Cloud Shell
-
- maria@Azure:~$curl -sL https://aka.ms/sentinelhoneytokensappcreate | bash -s HoneyTokenFunctionApp
- ```
-
- The script output includes the AAD app ID and secret. For example:
-
- ```bash
- WARNING: The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli
- function app name: HoneyTokenFunctionApp
- AAD App Id: k4js48k3-97gg-3958=sl8d=48620nne59k4
- AAD App secret: v9kUtSoy3u~K8DKa8DlILsCM_K-s9FR3Kj
- maria@Azure:~$
- ```
-
-1. Back in Microsoft Sentinel, at the bottom of the **Prerequisites** tab, enter the AAD app ID and secret into the relevant fields. For example:
-
- :::image type="content" source="media/monitor-key-vault-honeytokens/client-app-secret-values.png" alt-text="Screenshot of the function app's client app and secret values added.":::
-
-1. Select **Click here to continue in your function app settings** under step 4. A new browser tab opens in the Azure AD application settings.
-
- Sign in if prompted, and then select **Grant admin consent for `<your directory name>`** to continue. For example:
-
- :::image type="content" source="media/monitor-key-vault-honeytokens/grant-admin-access.png" alt-text="Screenshot of the grant admin consent for your directory button.":::
-
- For more information, see [Grant admin consent in App registrations](../active-directory/manage-apps/grant-admin-consent.md).
-
-1. Back in Microsoft Sentinel again, on the **Workbooks**, **Analytics**, **Watchlists**, and **Playbooks** tabs, note the security content that will be created, and modify the names as needed.
-
- > [!NOTE]
- > Other instructions in this article refer to the **HoneyTokensIncidents** and **SOCHTManagement** workbooks. If you change the names of these workbooks, make sure to note the new workbook names for your own reference and use them as needed instead of the default names.
-
-1. On the **Azure Functions** tab, define the following values:
-
-    **Key vault configuration**: The following fields define values for the key vault where you'll store your AAD app's secret. These fields do *not* define the key vault where you'll be deploying honeytokens.
-
- |Field |Description |
- |||
- | **Service plan** | Select whether you want to use a **Premium** or **Consumption** plan for your function app. For more information, see [Azure Functions Consumption plan hosting](../azure-functions/consumption-plan.md) and [Azure Functions Premium plan](../azure-functions/functions-premium-plan.md). |
- | **Should a new KeyVault be created** | Select **new** to create a new key vault for your app's secret, or **existing** to use an already existing key vault. |
- | **KeyVault name** | Displayed only when you've selected to create a new key vault. <br><br>Enter the name of the key vault you want to use to store your app's secret. This name must be globally unique. |
- | **KeyVault resource group** |Displayed only when you've selected to create a new key vault. <br><br> Select the name of the resource group where you want to store the key vault for your application key. |
- | **Existing key vaults** | Displayed only when you've selected to use an existing key vault. Select the key vault you want to use. |
-    | **KeyVault secret name** | Enter a name for the secret where you want to store your AAD app's secret. You created this AAD app back in [step 3](#secret). |
-
- **Honeytoken configuration**: The following fields define settings used for the keys and secrets used in your honeytokens. Use naming conventions that will blend in with your organization's naming requirements so that attackers will not be able to tell the difference.
-
- |Field |Description |
- |||
- |**Keys keywords** | Enter comma-separated lists of values you want to use with your decoy honeytoken names. For example, `key,prod,dev`. Values must be alphanumeric only. |
- |**Secrets** | Enter comma-separated lists of values you want to use with your decoy honeytoken secrets. For example, `secret,secretProd,secretDev`. Values must be alphanumeric only. |
- |**Additional HoneyToken Probability** | Enter a value between `0` and `1`, such as `0.6`. This value defines the probability of more than one honeytoken being added to the Key Vault. |
--
-1. Select **Next: Review + create** to finish installing your solution.
-
- After the solution is installed, the following items are displayed:
-
- - A link to your **SOCHTManagement** workbook. You may have modified this name on the **Workbooks** tab earlier in this procedure.
-
-    - The URL for a custom ARM template. You can use this ARM template to deploy an Azure Policy initiative, connected to a Microsoft Defender for Cloud custom recommendation, which distributes the **SOCHTManagement** workbook to key vault owners in your organization.
-
-1. The **Post-deployment Steps** tab notes that you can use the information displayed in the deployment output to distribute the Microsoft Defender for Cloud custom recommendation to all key vault owners in your organization, recommending that they deploy honeytokens in their key vaults.
-
- Use the custom [ARM template URL](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2faka.ms%2fsentinelhoneytokenspolicy) shown in the installation output to open the linked template's **Custom deployment** page.
-
- For more information, see [Distribute the SOCHTManagement workbook](#distribute-the-sochtmanagement-workbook).
-
-## Deploy your honeytokens
-
-After you've installed the **Microsoft Sentinel Deception (Honey Tokens)** solution, you're ready to start deploying honeytokens in your key vaults using the steps in the **SOCHTManagement** workbook.
-
-We recommend that you share the **SOCHTManagement** workbook with key vault owners in your organization so that they can create their own honeytokens in their key vaults. You may have renamed this workbook when [installing the solution](#install-the-solution). When sharing, make sure to grant Read permissions only.
-
-**Deploy honeytokens in your key vaults**:
-
-1. In Microsoft Sentinel, go to **Workbooks > My Workbooks** and open the **SOCHTManagement** workbook. You may have modified this name when deploying the solution.
-
-1. Select **View saved workbook** > **Add as trusted**. For example:
-
- :::image type="content" source="media/monitor-key-vault-honeytokens/add-as-trusted.png" alt-text="Screenshot of the SOCHTManagement workbook 'Add as trusted' button.":::
-
- Infrastructure is deployed in your key vaults to allow for the honeytoken deployment.
-
-1. In the workbook's **Key Vault** tab, expand your subscription to view the key vaults ready to deploy honeytokens and any key vaults with honeytokens already deployed.
-
- In the **Is Monitored by SOC** column, a green checkmark :::image type="icon" source="media/monitor-key-vault-honeytokens/checkmark.png" border="false"::: indicates that the key vault already has honeytokens. A red x-mark :::image type="icon" source="media/monitor-key-vault-honeytokens/xmark.png" border="false"::: indicates that the key vault does not yet have honeytokens. For example:
-
- :::image type="content" source="media/monitor-key-vault-honeytokens/honeytokens-deployed.png" alt-text="Screenshot of the SOCHTManagement workbooks showing deployed honeytokens.":::
-
-1. Scroll down on the workbook page and use the instructions and links in the **Take an action** section to deploy honeytokens to all key vaults at scale, or deploy them manually one at a time.
-
- # [Deploy at scale](#tab/deploy-at-scale)
-
- **To deploy honeytokens at scale**:
-
-    1. Select the **Enable user** link to deploy an ARM template that deploys a key vault access policy, granting the specified user ID rights to create the honeytokens. (A CLI equivalent is sketched at the end of this tab.)
-
- Sign in if prompted, and enter values for the **Project details** and **Instance details** areas for your ARM template deployment. Find your **Tenant ID** and **User object ID** on the Azure Active Directory home page for your users.
-
- When you're done, select **Review + Create** to deploy the ARM template. For example:
-
- :::image type="content" source="media/monitor-key-vault-honeytokens/deploy-arm-template.png" alt-text="Screenshot of the Custom deployment page.":::
-
- Your settings are validated, and when the validation passes, a confirmation is displayed: **Validation Passed**
-
- At the bottom of the page, select **Create** to deploy your ARM template, and watch for a successful deployment confirmation page.
-
- 1. Back in Microsoft Sentinel, in your **SOCHTManagement** workbook > **Take an action** > **Deploy at scale** area, select the **Click to deploy** link to add honeytokens to all key vaults that you have access to in the selected subscription.
-
- When complete, your honeytoken deployment results are shown in a table on a new tab.
-
- 1. Make sure to select the **Disable your user** link to remove the access policy that you'd created earlier. Sign in again if prompted, enter values for your custom ARM deployment, and then deploy the ARM template. This step deploys a key vault access policy that removes the user rights to create keys and secrets.
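For reference, the effect of the ARM templates in steps 1 and 3 roughly matches the following CLI sketch for vaults that use access policies; the vault name and object ID are placeholders.

```bash
# Step 1 equivalent: let the specified user create decoy keys and secrets.
az keyvault set-policy \
  --name "<key-vault-name>" \
  --object-id "<user-object-id>" \
  --key-permissions create \
  --secret-permissions set

# ... deploy honeytokens while the policy is in place ...

# Step 3 equivalent: remove those rights again when deployment is done.
az keyvault delete-policy \
  --name "<key-vault-name>" \
  --object-id "<user-object-id>"
```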
-
- # [Deploy a single honeytoken](#tab/deploy-a-single-honeytoken)
-
- **To deploy a single honeytoken manually**:
-
- 1. In the table at the top of the page, select the key vault where you want to deploy your honeytoken. The **Deploy on a specific key-vault:** section appears at the bottom of the page.
-
- 1. Scroll down, and in the **Honeytoken type** dropdown, select whether you want to create a key or a secret. In the **New honeytoken name** field, enter a meaningful name for your honeytoken. For example:
-
- :::image type="content" source="media/monitor-key-vault-honeytokens/deploy-manually.png" alt-text="Screenshot of the deploy on a specific key vault area.":::
-
- 1. In the **Operation** table, expand the **Deploy a honeytoken** section, and select each task name to perform the required steps. Sign in if prompted.
-
-    - Select **Click to validate the key-vault is audited**. In Azure Key Vault, verify that your key vault diagnostic settings are set to send audit events to Log Analytics. (A CLI sketch for this wiring appears at the end of this tab.)
- - Select **Enable your user in the key-vault's policy if missing**. In Azure Key Vault, make sure that your user has access to deploy honeytokens to your required locations. Select **Save** to save any changes.
- - Select **Click to add a honeytoken to the key-vault** to open Azure Key Vault. Add a new honeytoken, like a new secret, to the configured key vault.
- - Select **Click to add monitoring in the SOC**. If successful, a confirmation message is displayed on a new tab: `Honey-token was successfully added to monitored list`.
-
- For more information, see the [Azure Key Vault documentation](../key-vault/secrets/about-secrets.md).
-
- > [!NOTE]
-    > Make sure to select the **Disable back your user in the key-vault's policy if needed** link to remove the access policy that was created to grant rights to create the honeytokens.
- >
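As a reference for the validation step above, here's a minimal sketch of sending a key vault's audit events to Log Analytics; resource names are placeholders.

```bash
# Send the vault's AuditEvent logs to the Log Analytics workspace used by
# Microsoft Sentinel, so any honeytoken access is recorded.
kv_id=$(az keyvault show --name "<key-vault-name>" --query id --output tsv)

az monitor diagnostic-settings create \
  --name "kv-audit-to-sentinel" \
  --resource "$kv_id" \
  --workspace "<log-analytics-workspace-resource-id>" \
  --logs '[{"category": "AuditEvent", "enabled": true}]'
```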
-
- # [Remove a honeytoken](#tab/remove-a-honeytoken)
-
- **To remove a specific honeytoken**:
-
- 1. In the table at the top of the page, select the key vault where you want to remove a honeytoken. The **Deploy on a specific key-vault:** section appears at the bottom of the page.
-
- 1. In the **Operation** table, expand the **Remove a honeytoken** section, and select each task name to perform the required steps. Sign in if prompted.
-
- - Select **Click to delete the honeytoken from the key-vault** to open Azure Key Vault to the page where you can remove your honeytoken.
-    - Select **Send an email to update the SOC**. An email to the SOC opens in your default email client, recommending that they remove honeytoken monitoring for the selected key vault.
-
- > [!TIP]
- > We recommend that you clearly communicate with your SOC about honeytokens that you delete.
- >
-
-
-
-You may need to wait a few minutes as the data is populated and permissions are updated. Refresh the page to show any updates in your key vault deployment.
-
-## Test the solution functionality
-
-**To test that you get alerted for any access attempted to your honeytokens**:
-
-1. In the Microsoft Sentinel **Watchlists** page, select the **My watchlists** tab, and then select the **HoneyTokens** watchlist.
-
- Select **View in Log Analytics** to view a list of the current honeytoken values found. In the **Logs** page, the items in your watchlist are automatically extracted for your query. For example:
-
- :::image type="content" source="media/monitor-key-vault-honeytokens/honeytokens-watchlist.png" alt-text="Screenshot of the honeytokens watchlist values in Log Analytics." lightbox="media/monitor-key-vault-honeytokens/honeytokens-watchlist.png":::
-
- For more information, see [Use Microsoft Sentinel watchlists](watchlists.md).
-
-1. From the list in Log Analytics, choose a honeytoken value to test.
-
- Then, go to Azure Key Vault, and download the public key or view the secret for your chosen honeytoken.
-
-    For example, select your honeytoken and then select **Download public key**. This action creates a `KeyGet` or `SecretGet` log that triggers an alert in Microsoft Sentinel. (A query sketch for finding these logs appears after this procedure.)
-
- For more information, see the [Key Vault documentation](../key-vault/index.yml).
-
-1. Back in Microsoft Sentinel, go to the **Incidents** page. You might need to wait five minutes or so, but you should see a new incident, named for example **HoneyTokens: KeyVault HoneyTokens key accessed**.
-
- Select the incident to view its details, such as the key operation performed, the user who accessed the honeytoken key, and the name of the compromised key vault.
-
- > [!TIP]
- > Any access or operation with the honeytoken keys and secrets will generate incidents that you can investigate in Microsoft Sentinel. Since there's no reason to actually use honeytoken keys and secrets, any similar activity in your workspace may be malicious and should be investigated.
-
-1. View honeytoken activity in the **HoneyTokensIncident** workbook. In the Microsoft Sentinel **Workbooks** page, search for and open the **HoneyTokensIncident** workbook.
-
- This workbook displays all honeytoken-related incidents, the related entities, compromised key vaults, key operations performed, and accessed honeytokens.
-
- Select specific incidents and operations to investigate all related activity further.
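To inspect the raw signals behind these incidents yourself, a hedged query sketch follows. It assumes the standard `AzureDiagnostics` columns for Key Vault audit logs and the default `HoneyTokens` watchlist alias; adjust both to your environment.

```bash
# Find KeyGet/SecretGet operations in the Key Vault audit logs; compare the
# results against the values returned by _GetWatchlist("HoneyTokens").
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query '
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.KEYVAULT"
| where OperationName in ("KeyGet", "SecretGet")
| project TimeGenerated, OperationName, Resource, CallerIPAddress'
```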
-
-## Distribute the SOCHTManagement workbook
-
-We recommend that you deploy honeytokens in as many key vaults as possible to ensure optimal detection abilities in your organization.
-
-However, many SOC teams don't have access to key vaults. To help cover this gap, distribute the **SOCHTManagement** workbook to all key vault owners in your tenant, so that your SOC teams can deploy their own honeytokens. You may have modified the name of this workbook when you [installed the solution](#install-the-solution).
-
-You can always share the direct link to the workbook. Alternatively, this procedure describes how to use an ARM template to deploy an Azure Policy initiative, connected to a Microsoft Defender for Cloud custom recommendation, which distributes the **SOCHTManagement** workbook to key vault owners in your organization.
-
-> [!NOTE]
-> Whenever you distribute the workbook, make sure to grant Read access only.
->
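If you share the workbook link directly, one way to keep that access read-only is a scoped role assignment. This is a sketch only: it assumes the built-in **Workbook Reader** role, and the IDs are placeholders.

```bash
# Grant a key vault owner read-only access to the shared workbook.
az role assignment create \
  --assignee "<key-vault-owner-object-id>" \
  --role "Workbook Reader" \
  --scope "<sochtmanagement-workbook-resource-id>"
```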
-
-**To distribute the SOCHTManagement workbook via Azure Policy initiative**
-
-1. From the following table, select a **Deploy to Azure** button to open the ARM template's **Custom deployment** page, depending on how you want to deploy the ARM template. Use the GitHub links to view the details of what's included in the ARM template, or to customize the ARM template for your environment.
-
- The **Deploy to Azure** buttons use the same URLs that are shown on the **Output** tab after the [solution installation](#install-the-solution).
-
- | Deployment option | Description | Deploy to Azure | GitHub link |
- |-|-|-|--|
- | Management group | Recommended for enterprise-wide deployment| [![DTA-Button-MG]][DTA-MG] |[Example in GitHub][GitHub-MG] |
- | Subscription | Recommended for testing in a single subscription | [![DTA-Button-Sub]][DTA-Sub] | [Example in GitHub][GitHub-Sub] |
-
- Sign in when prompted.
-
-1. On the ARM template's **Deception Solution Policy Deployment** > **Basics** tab, select your management group value and region. Then, select **Next: Deployment Target >** to continue.
-
-1. On the **Deployment Target** tab, select your management group again, and then select **Next: Management Workbook >**.
-
-1. On the **Management Workbook** tab, paste the link to your **SOCHTManagement** workbook.
-
- You can find the workbook link from the **SOCHTManagement** workbook in Microsoft Sentinel, and it was also included in the solution deployment's **Output** tab.
-
- For example, to find the link in the workbook, select **Workbooks** > **My workbooks** > **SOCHTManagement**, and then select **Copy link** in the toolbar.
-
-1. After entering your workbook link, select **Next: Review + create >** to continue. Wait for a confirmation message that the validation has passed, and then select **Create**.
-
-1. After the deployment is complete, you'll see that the deployment includes a new **HoneyTokens** initiative and two new policies, named **KeyVault HoneyTokens** and **KVReviewTag**. For example:
-
- :::image type="content" source="media/monitor-key-vault-honeytokens/policy-deployment.png" alt-text="Screenshot of a successfully deployed ARM template policy." lightbox="media/monitor-key-vault-honeytokens/policy-deployment.png":::
-
-1. In Azure **Policy**, assign the new **KVReviewTag** policy with the scope you need. This assignment adds the **KVReview** tag and a value of **ReviewNeeded** to all key vaults in the selected scope.
-
- 1. In Azure Policy, under **Authoring** on the left, select **Definitions**. Locate your **KVReviewTag** policy row, and select the options menu on the right.
-
- 1. On the **Deploy Diagnostic Settings for Activity Log to Log Analytics workspace** page, enter required values to deploy the diagnostic settings for your environment.
-
- On the **Remediation** tab, make sure to select the **Create a remediation task** option to apply the tag to existing key vaults.
-
- For more information, see the [Azure Policy documentation](../governance/policy/assign-policy-portal.md).
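For reference, the assignment and remediation can also be scripted. This is a sketch only: the definition name, scope, and region are placeholders, and it assumes the tag policy uses an effect (such as *modify*) that needs a managed identity for remediation.

```bash
# Assign the KVReviewTag policy with a system-assigned managed identity,
# then create a remediation task to tag existing key vaults.
az policy assignment create \
  --name "kv-review-tag" \
  --policy "<KVReviewTag-definition-name-or-id>" \
  --scope "/subscriptions/<subscription-id>" \
  --mi-system-assigned \
  --location "<region>"

az policy remediation create \
  --name "kv-review-tag-remediation" \
  --policy-assignment "kv-review-tag"
```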
-
-1. In **Microsoft Defender for Cloud**, add an audit recommendation to all key vaults in the selected scope:
-
- 1. Select **Regulatory compliance > Manage compliance policies**, and then select your scope.
-
-    1. In the details page for the selected scope, scroll down and in the **Your custom initiatives** section, select **Add custom initiative**.
-
- 1. In the **HoneyTokens** initiative row, select **Add**.
-
-An audit recommendation, with a link to the **SOCHTManagement** workbook, is added to all key vaults in the selected scope. You may have modified the name of this workbook [when installing the solution](#install-the-solution).
-
-For more information, see the [Microsoft Defender for Cloud documentation](/azure/security-center/security-center-recommendations).
-
-## Watch our end-to-end demo video
--
-> [!VIDEO https://www.microsoft.com/videoplayer/embed/RWPOxX]
+To deploy and monitor Azure Key Vault honeytokens with Microsoft Sentinel, review the [Honeytokens solution GitHub page](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/HoneyTokens).
## Next steps
For more information, see:
[GitHub-Sub]: https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/HoneyTokens/ASCRecommendationPolicySub.json [DTA-MG]: https://portal.azure.com/#blade/Microsoft_Azure_CreateUIDef/CustomDeploymentBlade/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Sentinel%2Fmaster%2FSolutions%2FHoneyTokens%2FASCRecommendationPolicy.json/uiFormDefinitionUri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Sentinel%2Fmaster%2FSolutions%2FHoneyTokens%2FASCRecommendationPolicyUI.json
-[DTA-Sub]: https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Sentinel%2Fmaster%2FSolutions%2FHoneyTokens%2FASCRecommendationPolicySub.json
+[DTA-Sub]: https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Sentinel%2Fmaster%2FSolutions%2FHoneyTokens%2FASCRecommendationPolicySub.json
sentinel Normalization Schema Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-schema-authentication.md
The following list mentions fields that have specific guidelines for authentication events:

| Field | Class | Type | Description |
||-||--|
| **EventType** | Mandatory | Enumerated | Describes the operation reported by the record. <br><br>For Authentication records, supported values include: <br>- `Logon` <br>- `Logoff`<br>- `Elevate`|
| <a name ="eventresultdetails"></a>**EventResultDetails** | Recommended | String | The details associated with the event result. This field is typically populated when the result is a failure.<br><br>Allowed values include: <br> - `No such user or password`. This value should be used also when the original event reports that there is no such user, without reference to a password.<br> - `No such user`<br> - `Incorrect password`<br> - `Incorrect key`<br>- `Account expired`<br>- `Password expired`<br>- `User locked`<br>- `User disabled`<br> - `Logon violates policy`. This value should be used when the original event reports, for example: MFA required, logon outside of working hours, conditional access restrictions, or too frequent attempts.<br>- `Session expired`<br>- `Other`<br><br>The value may be provided in the source record using different terms, which should be normalized to these values. The original value should be stored in the field [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails)|
-| **EventSubType** | Optional | String | The sign-in type. Allowed values include:<br> - `System`<br> - `Interactive`<br> - `Service`<br> - `RemoteService`<br> - `Remote` - Use when the type of remote sign-in is unknown.<br> - `AssumeRole` - Typically used when the event type is `Elevate`. <br><br>The value may be provided in the source record using different terms, which should be normalized to these values. The original value should be stored in the field [EventOriginalSubType](normalization-common-fields.md#eventoriginalsubtype). |
+| **EventSubType** | Optional | String | The sign-in type. Allowed values include:<br> - `System`<br> - `Interactive`<br> - `RemoteInteractive`<br> - `Service`<br> - `RemoteService`<br> - `Remote` - Use when the type of remote sign-in is unknown.<br> - `AssumeRole` - Typically used when the event type is `Elevate`. <br><br>The value may be provided in the source record using different terms, which should be normalized to these values. The original value should be stored in the field [EventOriginalSubType](normalization-common-fields.md#eventoriginalsubtype). |
| **EventSchemaVersion** | Mandatory | String | The version of the schema. The version of the schema documented here is `0.1.3` |
| **EventSchema** | Optional | String | The name of the schema documented here is **Authentication**. |
| **Dvc** fields| - | - | For authentication events, device fields refer to the system reporting the event. |
sentinel Normalization Schema Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-schema-network.md
The following list mentions fields that have specific guidelines for Network Session events:

| Field | Class | Type | Description |
||-||--|
| <a name="eventtype"></a> **EventType** | Mandatory | Enumerated | Describes the operation reported by the record.<br><br> For Network Session records, the allowed values are:<br> - `EndpointNetworkSession`: for sessions reported by endpoint systems, including clients and servers. For such systems, the schema supports the `remote` and `local` alias fields. <br> - `NetworkSession`: for sessions reported by intermediary systems and network taps. <br> - `L2NetworkSession`: for sessions reported by intermediary systems and network taps, but for which only layer 2 information is available. Such events will include MAC addresses but not IP addresses. <br> - `Flow`: for `NetFlow` type aggregated flows, which group multiple similar sessions together. For such records, [EventSubType](#eventsubtype) should be left empty. |
| <a name="eventsubtype"></a>**EventSubType** | Optional | String | Additional description of the event type, if applicable. <br> For Network Session records, supported values include:<br>- `Start`<br>- `End` |
| <a name="eventresult"></a>**EventResult** | Mandatory | Enumerated | If the source device does not provide an event result, **EventResult** should be based on the value of [DvcAction](#dvcaction). If [DvcAction](#dvcaction) is `Deny`, `Drop`, `Drop ICMP`, `Reset`, `Reset Source`, or `Reset Destination`, **EventResult** should be `Failure`. Otherwise, **EventResult** should be `Success`. |
-| **EventResultDetails** | Recommended | Enumerated | Reason or details for the result reported in the [EventResult](#eventresult) field. Supported values are:<br> - Failover <br> - Invalid TCP <br> - Invalid Tunnel <br> - Maximum Retry <br> - Reset <br> - Routing issue <br> - Simulation <br> - Terminated <br> - Timeout <br> - Unknown <br> - NA.<br><br>The original, source specific, value is stored in the [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails) field. |
+| **EventResultDetails** | Recommended | Enumerated | Reason or details for the result reported in the [EventResult](#eventresult) field. Supported values are:<br> - Failover <br> - Invalid TCP <br> - Invalid Tunnel<br> - Maximum Retry<br> - Reset<br> - Routing issue<br> - Simulation<br> - Terminated<br> - Timeout<br> - Transient error<br> - Unknown<br> - NA.<br><br>The original, source specific, value is stored in the [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails) field. |
| **EventSchema** | Mandatory | String | The name of the schema documented here is `NetworkSession`. |
| **EventSchemaVersion** | Mandatory | String | The version of the schema. The version of the schema documented here is `0.2.5`. |
| <a name="dvcaction"></a>**DvcAction** | Recommended | Enumerated | The action taken on the network session. Supported values are:<br>- `Allow`<br>- `Deny`<br>- `Drop`<br>- `Drop ICMP`<br>- `Reset`<br>- `Reset Source`<br>- `Reset Destination`<br>- `Encrypt`<br>- `Decrypt`<br>- `VPNroute`<br><br>**Note**: The value might be provided in the source record by using different terms, which should be normalized to these values. The original value should be stored in the [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction) field.<br><br>Example: `drop` |
sentinel Sentinel Content Centralize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-content-centralize.md
Title: Out-of-the-box (OOTB) content centralization changes
-description: This article describes the centralization changes about to take place for out-of-the-box content in Microsoft Sentinel.
+description: This article describes upcoming centralization changes for out-of-the-box content in Microsoft Sentinel.
Last updated 01/30/2023
-#Customer intent: As a SIEM decision maker or implementer, I want to know about changes to out of the box content, and how to centralize the management, discovery and inventory of content in Microsoft Sentinel.
+#Customer intent: As a SIEM decision maker or implementer, I want to know about changes to out-of-the-box content, and how to centralize the management, discovery, and inventory of content in Microsoft Sentinel.
# Microsoft Sentinel out-of-the-box content centralization changes
-Microsoft Sentinel Content hub enables discovery and on-demand installation of out-of-the-box (OOTB) content and solutions in a single step. Previously, some of this OOTB content only existed in various gallery sections of Sentinel. We're excited to announce *all* of the following gallery content templates are now available in content hub as standalone items or part of packaged solutions.
+The Microsoft Sentinel content hub enables discovery and on-demand installation of out-of-the-box (OOTB) content and solutions in a single step. Previously, some of this OOTB content existed only in various gallery sections of Microsoft Sentinel. Now, *all* of the following gallery content templates are available in the content hub as standalone items or as part of packaged solutions:
-- **Data connectors**
-- **Hunting queries**
-- **Analytics rule templates**
-- **Playbook templates**
-- **Workbook templates**
+- Data connectors
+- Analytics rule templates
+- Hunting queries
+- Playbook templates
+- Workbook templates
## Content hub changes
-In order to centralize all out-of-the-box content, we're planning to retire the gallery-only content templates. The legacy gallery content templates are no longer being updated consistently, and the content hub is where OOTB content is kept up to date. Content hub also provides update workflows for solutions and automatic updates for standalone content. To facilitate this transition, we're going to publish a central tool to reinstate corresponding **IN USE** retired templates from corresponding Content hub solutions.
-## Sentinel GitHub changes
-Microsoft Sentinel has an official [GitHub repository](https://github.com/Azure/Azure-Sentinel) for community contributions vetted by Microsoft and the community. It's the source for most of the content items in Content hub. For consistent discovery of this content, the OOTB content centralization changes have already been extended to the Sentinel GitHub repo.
+To centralize all OOTB content, we're planning to retire the gallery-only content templates. The legacy gallery content templates are no longer being updated consistently, and the content hub is where OOTB content stays up to date. The content hub also provides update workflows for solutions and automatic updates for standalone content.
+To facilitate this transition, we're publishing a central tool to reinstate **IN USE** retired templates from corresponding content hub solutions.
+
+## Microsoft Sentinel GitHub changes
+
+Microsoft Sentinel has an official [GitHub repository](https://github.com/Azure/Azure-Sentinel) for community contributions that are vetted by Microsoft and the community. It's the source for most of the content items in the content hub.
+
+For consistent discovery of this content, the OOTB content centralization changes have already been extended to the Microsoft Sentinel GitHub repo:
+
+- All OOTB content packaged from content hub solutions is now stored in the GitHub repo's [Solutions folder](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions).
+- All standalone OOTB content items will remain in their respective locations.
-Together, these Content hub and Sentinel GitHub repo changes will complete the journey towards centralizing Sentinel content.
+These changes to the content hub and the Microsoft Sentinel GitHub repo will complete the journey toward centralizing Microsoft Sentinel content.
## When is this change coming?
+
> [!IMPORTANT]
> The following timeline is tentative and subject to change.
->
-The centralization change in the Sentinel portal is expected to go live in all Sentinel workspaces Q2 2023. The Microsoft Sentinel GitHub changes have already been done. Standalone content is available in existing GitHub folders and solutions content has been moved to the solutions folder.
+The centralization change in the Microsoft Sentinel portal is expected to go live in all Microsoft Sentinel workspaces in Q2 2023. The Microsoft Sentinel GitHub changes have already happened. Standalone content is available in existing GitHub folders, and solution content has been moved to the *Solutions* folder.
## Scope of change
-This change is only scoped to *gallery content* type templates. All these same templates and more OOTB content are available in *Content hub* as solutions or standalone content.
-For Microsoft Sentinel GitHub, OOTB content packaged in solutions in content hub is now only listed under the GitHub repo [Solutions folder](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions). The other existing GitHub content is scoped to the following folders and only contains standalone content items. Content in the remaining GitHub folders not called out in this list don't have any changes.
+This change is scoped to only the *gallery content* type of templates. All these same templates and more OOTB content are available in the content hub as solutions or standalone content.
+
+For the Microsoft Sentinel GitHub repo, OOTB content packaged in solutions in the content hub is now listed only under the GitHub repo's [Solutions folder](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions). The other existing GitHub content is scoped to the following folders and contains only standalone content items. Content in the remaining GitHub folders not mentioned in this list doesn't have any changes.
- [DataConnectors folder](https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors)
-- [Detections folder](https://github.com/Azure/Azure-Sentinel/tree/master/Detections) (Analytics rules)
-- [Hunting queries folder](https://github.com/Azure/Azure-Sentinel/tree/master/Hunting%20Queries)
+- [Detections folder](https://github.com/Azure/Azure-Sentinel/tree/master/Detections) (analytics rules)
+- [Hunting Queries folder](https://github.com/Azure/Azure-Sentinel/tree/master/Hunting%20Queries)
- [Parsers folder](https://github.com/Azure/Azure-Sentinel/tree/master/Parsers)
- [Playbooks folder](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks)
- [Workbooks folder](https://github.com/Azure/Azure-Sentinel/tree/master/Workbooks)
-
### What's not changing?
-The active or custom items created in any manner (from templates or otherwise) are **NOT** impacted by this change. More specifically, the following are **NOT** affected by this change:
-- Data Connectors with *Status = Connected*.
-- Alert rules or detections (enabled or disabled) in the **'Active rules'** tab in the Analytics gallery.
-- Saved workbooks in the **'My workbooks'** tab in the Workbooks gallery.
-- Cloned content or *Content source = Custom* in the Hunting gallery.
-- Active playbooks (enabled or disabled) in the **'Active playbooks'** tab in the Automation gallery.
+This change does not affect active or custom items (created from templates or otherwise). Specifically, this change doesn't affect the following items:
+
+- Data connectors with **Status** = **Connected**.
+- Alert rules or detections (enabled or disabled) on the **Active rules** tab in the analytics gallery.
+- Saved workbooks on the **My workbooks** tab in the workbooks gallery.
+- Cloned content or **Content source** = **Custom** in the hunting gallery.
+- Active playbooks (enabled or disabled) on the **Active playbooks** tab in the automation gallery.
-Any OOTB content templates installed from content hub (identifiable as *Content source = Content hub*) are NOT affected by this change.
+This change also doesn't affect any OOTB content templates installed from the content hub (identifiable as **Content source** = **Content hub**).
### What's changing?
-All template galleries will display an in-product warning banner. This banner will contain a link to a tool that will run within the Microsoft Sentinel portal. Activating the tool will initiate a guided experience to reinstate the content templates for the **IN USE** retired templates from the Content hub. This tool only needs to be run once per workspace, so be sure to plan with your organization. Once the tool runs successfully, the warning banner will resolve and no longer be visible from the template galleries of that workspace.
-Specific impact to the gallery content templates for each of these galleries are detailed in the following table. Expect these changes when the OOTB content centralization goes live.
+All template galleries will display an in-product warning banner. This banner will contain a link to a tool that will run within the Microsoft Sentinel portal. Activating the tool will start a guided experience to reinstate the content templates for the **IN USE** retired templates from the content hub.
+
+This tool needs to run only once per workspace, so be sure to plan with your organization. After the tool runs successfully, the warning banner will disappear from the template galleries of that workspace.
-| Content Type | Impact |
+The following table lists specific impacts to the content templates for each of these galleries. Expect these changes when the OOTB content centralization goes live.
+
+| Content type | Impact |
| - | - |
-| [Data connectors](connect-data-sources.md) | The templates identifiable as `content source = "Gallery content"` and `Status = "Not connected"` will no longer appear in the data connectors gallery. |
-| [Analytics templates](detect-threats-built-in.md#view-built-in-detections) | The templates identifiable as `source name = "Gallery content"` will no longer appear in the Analytics template gallery. |
-| [Hunting](hunting.md#use-built-in-queries) | The templates with `Content source = "Gallery content"` will no longer appear in the Hunting gallery. |
-| [Workbooks templates](get-visibility.md#use-built-in-workbooks) | The templates with `Content source = "Gallery content"` will no longer appear in the Workbooks template gallery. |
-| [Playbooks templates](use-playbook-templates.md#explore-playbook-templates) | The templates identifiable as `source name = "Gallery content"` will no longer appear in the Automation Playbook templates gallery. |
+| [Data connectors](connect-data-sources.md) | Templates identifiable as **Content source** = **Gallery content** and **Status** = **Not connected** will no longer appear in the data connectors gallery. |
+| [Analytics](detect-threats-built-in.md#view-built-in-detections) | Templates identifiable as **Source name** = **Gallery content** will no longer appear in the analytics gallery. |
+| [Hunting](hunting.md#use-built-in-queries) | Templates with **Content source** = **Gallery content** will no longer appear in the hunting gallery. |
+| [Playbooks](use-playbook-templates.md#explore-playbook-templates) | Templates identifiable as **Source name** = **Gallery content** will no longer appear in the automation playbooks gallery. |
+| [Workbooks](get-visibility.md#use-built-in-workbooks) | Templates with **Content source** = **Gallery content** will no longer appear in the workbooks gallery. |
+
+Here's an example of an analytics rule before and after the centralization changes and the tool has run:
+
+- The active analytics rule won't change at all. It's based on an analytics rule template that will be retired.
+
+ :::image type="content" source="media/sentinel-content-centralize/before-tool-analytic-rule-active-2.png" alt-text="Screenshot that shows an active analytics rule before centralization changes." lightbox="media/sentinel-content-centralize/before-tool-analytic-rule-active-2.png":::
+
+ This screenshot shows an analytics rule template that will be retired.
-Here's an example of an Analytics rule before and after the centralization changes and the tool has run.
-- The active Analytics rule won't change at all. We can see it's based on an Analytics rule template that will be retired.
- :::image type="content" source="media/sentinel-content-centralize/before-tool-analytic-rule-active-2.png" alt-text="This screenshot shows an active Analytics rule before centralization changes." lightbox="media/sentinel-content-centralize/before-tool-analytic-rule-active-2.png":::
+ :::image type="content" source="media/sentinel-content-centralize/before-tool-analytic-rule-templates-2.png" alt-text="Screenshot that shows the analytics rule template that will be retired." lightbox="media/sentinel-content-centralize/before-tool-analytic-rule-templates-2.png":::
-- This screenshot shows an Analytics rule template that will be retired.
- :::image type="content" source="media/sentinel-content-centralize/before-tool-analytic-rule-templates-2.png" alt-text="This screenshot shows the Analytics rule template that will be retired." lightbox="media/sentinel-content-centralize/before-tool-analytic-rule-templates-2.png":::
+- After you run the tool to reinstate the analytics rule template, the source changes to the solution that it's reinstated from.
-- After the tool has been run to reinstate the Analytics rule template, the source changes to the solution it's reinstated from.
- :::image type="content" source="media/sentinel-content-centralize/after-tool-analytic-rule-template-2.png" alt-text="This screenshot shows the Analytics rule template after being reinstated from the Content hub Azure Active Directory solution." lightbox="media/sentinel-content-centralize/after-tool-analytic-rule-template-2.png":::
+ :::image type="content" source="media/sentinel-content-centralize/after-tool-analytic-rule-template-2.png" alt-text="Screenshot that shows the analytics rule template after being reinstated from the content hub Azure Active Directory solution." lightbox="media/sentinel-content-centralize/after-tool-analytic-rule-template-2.png":::
## Action needed
-
-- Starting now, install new OOTB content from Content hub and update solutions as needed to have the latest version of templates.
-- For existing gallery content templates in use, get future updates by installing the respective solutions or standalone content items from Content hub. The gallery content in the feature galleries may be out-of-date.
-- If you have applications or processes that directly get OOTB content from the Microsoft Sentinel GitHub repository, update the locations to include getting OOTB content from the solutions folder in addition to existing content folders.
-- Plan with your organization who and when will run the tool when you see the warning banner and the change goes live in Q2 2023. The tool needs to be run once in a workspace to reinstate all **IN USE** retired templates from the Content hub.
-- Review the FAQs section to learn more details that may be applicable to your environment.
+
+- Starting now, install new OOTB content from the content hub and update solutions as needed to have the latest versions of templates.
+- For existing gallery content templates in use, get future updates by installing the solutions or standalone content items from the content hub. The gallery content in the feature galleries might be out of date.
+- If you have applications or processes that directly get OOTB content from the Microsoft Sentinel GitHub repository, update the locations to include getting OOTB content from the *Solutions* folder in addition to existing content folders.
+- Plan with your organization who will run the tool, and when, after you see the warning banner and the change goes live in Q2 2023. The tool needs to run once in a workspace to reinstate all **IN USE** retired templates from the content hub.
+- Review the following FAQs to learn more details that might apply to your environment.
## Content centralization FAQs
-#### Will my SOC alert generation or incidents generation and management be impacted by this change?
-No, there's no impact to active alert rules or detections, or active playbooks, or cloned hunting queries, or saved workbooks. The OOTB content centralization change won't impact your current incident generation and management processes.
-#### Are there any gallery content exceptions?
-Yes, the following Analytics rule template types are exempt from this change.
+### Will this change affect my SOC alert generation or incident generation and management?
+
+No. There's no impact to active alert rules or detections, active playbooks, cloned hunting queries, or saved workbooks. The OOTB content centralization change won't affect your current incident generation and management processes.
+
+### Are there any exceptions for gallery content?
+
+Yes. The following types of analytics rule templates are exempt from this change:
- Anomalies rule templates
- Fusion rule templates
-- ML (Machine Learning) Behavior Analytics rule templates
-- Microsoft Security (incident creation) rule templates
-- Threat Intelligence rule template
+- ML Behavior Analytics (machine learning) rule templates
+- Microsoft Security (incident creation) rule templates
+- Threat Intelligence rule templates
-#### Will any of the APIs be impacted with this change?
-Yes. Currently the only Sentinel REST API calls that exist for content template management are the `Get` and `List` operations for alert rule templates. These operations only surface gallery content templates and won't be updated. For more information on these operations see the current [Alert Rule Templates REST API reference](/rest/api/securityinsights/stable/alert-rule-templates).
+### Will this change affect any of the APIs?
-New content hub REST API operations will be available soon to enable OOTB content management scenarios more broadly. This API update will include operations for the same content types scoped in the centralization changes (data connectors, playbook templates, workbook templates, analytic rule templates, hunting queries). A mechanism to update Analytics rule templates installed on the workspace is also on the roadmap.
+Yes. Currently, the only Microsoft Sentinel REST API calls that exist for content template management are the `Get` and `List` operations for alert rule templates. These operations only surface gallery content templates and won't be updated. For more information on these operations, see the current [Alert Rule Templates REST API reference](/rest/api/securityinsights/stable/alert-rule-templates).
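As an illustration, a `List` call against that API looks roughly like the following sketch; the `api-version` value shown is an assumption, so confirm the current stable version in the reference above.

```bash
# List the alert rule templates that the workspace currently surfaces.
az rest --method get \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>/providers/Microsoft.SecurityInsights/alertRuleTemplates?api-version=2022-11-01"
```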
-**Action needed:** Plan to update your applications and processes to utilize the new content hub OOTB content management API operations when those are available in Q2 2023.
+New REST API operations on the content hub will be available soon to enable OOTB content management scenarios more broadly. This API update will include operations for the same content types scoped in the centralization changes (data connectors, playbook templates, workbook templates, analytics rule templates, hunting queries). A mechanism to update analytics rule templates installed on the workspace is also on the roadmap.
-#### How will the central tool identify my in-use OOTB content templates?
-The tool builds a list of solutions based on two criteria: data connectors with `Status = "Connected"` and **IN USE** Playbook templates. Once the proposed list of solutions is generated, the tool will present them for approval. If approved, the tool installs all those solutions. Because the OOTB content is reinstated based on solutions you may get more templates than you might actually be using.
+**Action needed:** Plan to update your applications and processes to use the new OOTB content management API operations on the content hub when those are available in Q2 2023.
-Please note that this central tool is a best-effort to get your **IN USE** OOTB content templates reinstated from Content hub. You can install OOTB content omitted directly from Content hub.
+### How will the central tool identify my in-use OOTB content templates?
-#### What if I'm using APIs to connect data sources in my Sentinel workspace?
-Currently, if an API data connection matches the data connector data type, it will show up as `Status = "Connected"` in the Data connectors gallery. After the centralization changes go live, the specific data connector needs to be installed from a respective solution to get the same behavior.
+The tool builds a list of solutions based on two criteria: data connectors with **Status** = **Connected** and **IN USE** playbook templates. After the tool builds the proposed list of solutions, it will present the list for approval. If the list is approved, the tool installs all those solutions. Because the OOTB content is reinstated based on solutions, you might get more templates than you actually use.
-**Action needed:** Plan to update processes or tooling for your data connector deployments to install from Content hub solution(s) before the connecting with data ingestion APIs. The REST API operator for installing a solution will be coming in Q2 2023 with the OOTB content management APIs.
+This central tool is a best effort to get your **IN USE** OOTB content templates reinstated from the content hub. You can install omitted OOTB content directly from the content hub.
-#### What if I'm working with content using Repositories feature in Microsoft Sentinel?
-Repositories specifically deploy custom or active content in Microsoft Sentinel. Content deployed through the Repositories feature won't be impacted by the OOTB content centralization changes.
+### What if I'm using APIs to connect data sources in my Microsoft Sentinel workspace?
+
+Currently, if an API data connection matches the data connector data type, it will appear as **Status** = **Connected** in the data connectors gallery. After the centralization changes go live, the specific data connector needs to be installed from a respective solution to get the same behavior.
+
+**Action needed:** Plan to update processes or tooling for your data connector deployments to install from content hub solutions before connecting with data ingestion APIs. The REST API operator for installing a solution will be coming in Q2 2023 with the OOTB content management APIs.
+
+### What if I'm working with content by using the repositories feature in Microsoft Sentinel?
+
+Repositories specifically deploy custom or active content in Microsoft Sentinel. The OOTB content centralization changes won't affect content that's deployed through the repositories feature.
## Next steps
-Take a look at these other resources for OOTB content and Content hub.
-
-- [About OOTB content and solutions in Microsoft Sentinel](sentinel-solutions.md)
-- [Discover OOTB content and solutions in Content hub](sentinel-solutions-deploy.md)
-- [How to install and update OOTB content and solutions in Content hub](sentinel-solutions-deploy.md#install-or-update-content)
-- [Bulk install and update solutions and standalone content in Content hub](sentinel-solutions-deploy.md#bulk-install-and-update-content)
-- [How to enable OOTB content and solutions in Content hub](sentinel-solutions-deploy.md#enable-content-items-in-a-solution)
-- Video: [Using content hub to manage your SIEM content](https://www.youtube.com/watch?v=OtHs4dnR0yA&list=PL3ZTgFEc7LyvY90VTpKVFf70DXM7--47u&index=10)
+
+Take a look at these other resources for OOTB content and the content hub:
+
+- [About Microsoft Sentinel content and solutions](sentinel-solutions.md)
+- [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md)
+- Video: [Using content hub to manage your SIEM content](https://www.youtube.com/watch?v=OtHs4dnR0yA&list=PL3ZTgFEc7LyvY90VTpKVFf70DXM7--47u&index=10)
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
Microsoft Sentinel's **health monitoring feature is now available for analytics rules**.
Learn more about [auditing and health monitoring in Microsoft Sentinel](health-audit.md):

- [Turn on auditing and health monitoring for Microsoft Sentinel (preview)](enable-monitoring.md)
- [Monitor the health and audit the integrity of your analytics rules](monitor-analytics-rule-integrity.md)
+- Explore the new [Analytics Health & Audit workbook](monitor-analytics-rule-integrity.md#use-the-auditing-and-health-monitoring-workbook).
### Microsoft 365 Defender data connector is now generally available
service-bus-messaging Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/policy-reference.md
Title: Built-in policy definitions for Azure Service Bus Messaging description: Lists Azure Policy built-in policy definitions for Azure Service Bus Messaging. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
service-fabric Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/policy-reference.md
Previously updated : 01/05/2023 Last updated : 02/21/2023 # Azure Policy built-in definitions for Azure Service Fabric
service-fabric Service Fabric Application Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-upgrade.md
Last updated 07/14/2022
# Service Fabric application upgrade
-An Azure Service Fabric application is a collection of services. During an upgrade, Service Fabric compares the new [application manifest](service-fabric-application-and-service-manifests.md) with the previous version and determines which services in the application require updates. Service Fabric compares the version numbers in the service manifests with the version numbers in the previous version. If a service has not changed, that service is not upgraded.
+An Azure Service Fabric application is a collection of services. During an upgrade, Service Fabric compares the new [application manifest](service-fabric-application-and-service-manifests.md) with the previous version and determines which services in the application require updates. Service Fabric compares the versions in the new service manifests with those in the previous version. If a service's version hasn't changed, that service isn't upgraded.
> [!NOTE] > [ApplicationParameter](/dotnet/api/system.fabric.description.applicationdescription.applicationparameters#System_Fabric_Description_ApplicationDescription_ApplicationParameters)s are not preserved across an application upgrade. In order to preserve current application parameters, the user should get the parameters first and pass them into the upgrade API call like below:
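The code sample that follows this note in the original article isn't reproduced in this digest. As a stand-in, here's a hedged sketch of the same pattern using the Service Fabric CLI (`sfctl`); the application ID, version, and parameter names are illustrative, and the exact JSON shape expected by `--parameters` is worth verifying against your cluster.

```bash
# 1. Read the application's current parameters so they aren't lost.
sfctl application info --application-id WordCount --query parameters

# 2. Pass those same parameters back in when starting the upgrade.
sfctl application upgrade \
  --app-id WordCount \
  --app-version 1.0.1 \
  --parameters '{"Instance_Count": "3"}' \
  --mode Monitored
```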
spring-apps How To Dynatrace One Agent Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-dynatrace-one-agent-monitor.md
az spring app deploy \
DT_CONNECTION_POINT=<your-communication-endpoint> ```
-#### Option 2: Portal
+#### Option 2: Azure portal
To add the key/value pairs using the Azure portal, use the following steps:
-1. Navigate to the list of your existing applications.
+1. In your Azure Spring Apps instance, select **Apps** in the navigation pane.
- :::image type="content" source="media/dynatrace-oneagent/existing-applications.png" alt-text="Screenshot of the Azure portal showing the Azure Spring Apps section." lightbox="media/dynatrace-oneagent/existing-applications.png":::
+ :::image type="content" source="media/dynatrace-oneagent/existing-applications.png" alt-text="Screenshot of the Azure portal showing the Apps page for an Azure Spring Apps instance." lightbox="media/dynatrace-oneagent/existing-applications.png":::
-1. Select an application to navigate to the **Overview** page of the application.
+1. Select the application from the list, and then select **Configuration** in the navigation pane.
- :::image type="content" source="media/dynatrace-oneagent/overview-application.png" alt-text="Screenshot of the application's Overview section." lightbox="media/dynatrace-oneagent/overview-application.png":::
+1. Use the **Environmental variables** tab to add or update the variables used by your application.
-1. Select **Configurations** to add, update, or delete values in the **Environment variables** section for the application.
-
- :::image type="content" source="media/dynatrace-oneagent/configuration-application.png" alt-text="Screenshot of the 'Environment variables' tab of the application's Configuration section." lightbox="media/dynatrace-oneagent/configuration-application.png":::
+ :::image type="content" source="media/dynatrace-oneagent/configuration-application.png" alt-text="Screenshot of the Azure portal showing the Configuration page for an app in an Azure Spring Apps instance, with the Environmental variables tab selected." lightbox="media/dynatrace-oneagent/configuration-application.png":::
## Automate provisioning
spring-apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/overview.md
Azure Spring Apps, including Enterprise tier, runs on Azure in a fully managed e
| Monitor end-to-end using any tool and platform. | Application Insights, Azure Log Analytics, Splunk, Elastic, New Relic, Dynatrace, or AppDynamics | | Connect Spring applications and interact with your cloud services. | Spring integration with Azure services for data, messaging, eventing, cache, storage, and directories | | Securely load app secrets and certificates. | Azure Key Vault |
-| Use familiar development tools. | IntelliJ, VS Code, Eclipse, Spring Tool Suite, Maven, or Gradle |
+| Use familiar development tools. | IntelliJ, Visual Studio Code, Eclipse, Spring Tool Suite, Maven, or Gradle |
After you create your Enterprise tier service instance and deploy your applications, you can monitor with Application Insights or any other application performance management tools of your choice.
As a quick reference, the articles listed above and the articles in the followin
> [Quickstart: Deploy your first application to Azure Spring Apps](quickstart.md) Samples are available on GitHub. See [Azure Spring Apps Samples](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/master/).+
+For feature updates about Azure Spring Apps, see [Azure updates](https://azure.microsoft.com/updates/?query=spring).
spring-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/policy-reference.md
Title: Built-in policy definitions for Azure Spring Apps description: Lists Azure Policy built-in policy definitions for Azure Spring Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
storage Access Tiers Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-best-practices.md
Choosing the most optimal tier up front can reduce costs. If you change the tier
- For guidance about how to upload to a specific access tier, see [Set a blob's access tier](access-tiers-online-manage.md). -- For offline data movement to the desired tier, see [Azure Data Box](/products/databox/).
+- For offline data movement to the desired tier, see [Azure Data Box](https://azure.microsoft.com/products/databox/).
## Move data into the most cost-efficient access tiers
storage Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/policy-reference.md
Title: Built-in policy definitions for Azure Storage description: Lists Azure Policy built-in policy definitions for Azure Storage. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
storage Storage Use Azcopy Blobs Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-blobs-copy.md
AzCopy uses [server-to-server](/rest/api/storageservices/put-block-from-url) [AP
See the [Get started with AzCopy](storage-use-azcopy-v10.md) article to download AzCopy and learn about the ways that you can provide authorization credentials to the storage service. > [!NOTE]
-> The examples in this article assume that you've provided authorization credentials by using Azure Active Directory (Azure AD) and that your Azure AD identity has the proper role assignments for both source and destination accounts.
+> The examples in this article assume that you've provided authorization credentials by using Azure Active Directory (Azure AD) and that your Azure AD identity has the proper role assignments for the destination account. If the source account is different from the destination, it must either use a SAS token with the proper read permissions or allow public access. For example: `azcopy copy 'https://<source-storage-account-name>.blob.core.windows.net/<container-name>/<blob-path><SAS-token>' 'https://<destination-storage-account-name>.blob.core.windows.net/<container-name>/<blob-path>'`.
>
-> Alternatively you can append a SAS token to either the source or destination URL in each AzCopy command. For example: `azcopy copy 'https://<source-storage-account-name>.blob.core.windows.net/<container-name>/<blob-path><SAS-token>' 'https://<destination-storage-account-name>.blob.core.windows.net/<container-name>/<blob-path><SAS-token>'`.
+> Alternatively, you can append a SAS token to the destination URL in each AzCopy command. For example: `azcopy copy 'https://<source-storage-account-name>.blob.core.windows.net/<container-name>/<blob-path><SAS-token>' 'https://<destination-storage-account-name>.blob.core.windows.net/<container-name>/<blob-path><SAS-token>'`.
## Guidelines
storage File Sync Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-deployment-guide.md
We strongly recommend that you read [Planning for an Azure Files deployment](../
> [!NOTE] > The only scenario supported by Azure File Sync is Windows Server Failover Cluster with Clustered Disks. See [Failover Clustering](file-sync-planning.md#failover-clustering) for Azure File Sync.
+5. Although cloud management can be done with the Azure portal, advanced registered server functionality is provided through PowerShell cmdlets that are intended to be run locally in either PowerShell 5.1 or PowerShell 6+. PowerShell 5.1 ships by default on Windows Server 2016 and above. On Windows Server 2012 R2, you can verify that you are running at least PowerShell 5.1.\* by looking at the value of the **PSVersion** property of the **$PSVersionTable** object:
+
+ ```powershell
+ $PSVersionTable.PSVersion
+ ```
+
+ If your **PSVersion** value is less than 5.1.\*, as will be the case with most fresh installations of Windows Server 2012 R2, you'll need to upgrade by downloading and installing [Windows Management Framework (WMF) 5.1](https://www.microsoft.com/download/details.aspx?id=54616). The appropriate package to download and install for Windows Server 2012 R2 is **Win8.1AndW2K12R2-KB\*\*\*\*\*\*\*-x64.msu**.
+
+ PowerShell 6+ can be used with any supported system, and can be downloaded via its [GitHub page](https://github.com/PowerShell/PowerShell#get-powershell).
+ # [PowerShell](#tab/azure-powershell) 1. An **Azure file share** in the same region that you want to deploy Azure File Sync. For more information, see:
We strongly recommend that you read [Planning for an Azure Files deployment](../
> [!NOTE] > The only scenario supported by Azure File Sync is Windows Server Failover Cluster with Clustered Disks. See [Failover Clustering](file-sync-planning.md#failover-clustering) for Azure File Sync.
-5. The Az PowerShell module may be used with either PowerShell 5.1 or PowerShell 6+. You may use the Az PowerShell module for Azure File Sync on any supported system, including non-Windows systems, however the server registration cmdlet must always be run on the Windows Server instance you are registering (this can be done directly or via PowerShell remoting). On Windows Server 2012 R2, you can verify that you are running at least PowerShell 5.1.\* by looking at the value of the **PSVersion** property of the **$PSVersionTable** object:
+5. PowerShell 5.1 or PowerShell 6+. You may use the Az PowerShell module for Azure File Sync on any supported system, including non-Windows systems, however the server registration cmdlet must always be run on the Windows Server instance you are registering (this can be done directly or via PowerShell remoting). PowerShell 5.1 ships by default on Windows Server 2016 and above. On Windows Server 2012 R2, you can verify that you are running at least PowerShell 5.1.\* by looking at the value of the **PSVersion** property of the **$PSVersionTable** object:
```powershell $PSVersionTable.PSVersion
We strongly recommend that you read [Planning for an Azure Files deployment](../
The installed extension 'storagesync' is experimental and not covered by customer support. Please use with discretion. ```
+8. Although cloud management can be done with the Azure CLI, advanced registered server functionality is provided through PowerShell cmdlets that are intended to be run locally in either PowerShell 5.1 or PowerShell 6+. PowerShell 5.1 ships by default on Windows Server 2016 and above. On Windows Server 2012 R2, you can verify that you are running at least PowerShell 5.1.\* by looking at the value of the **PSVersion** property of the **$PSVersionTable** object:
+
+ ```powershell
+ $PSVersionTable.PSVersion
+ ```
+
+ If your **PSVersion** value is less than 5.1.\*, as will be the case with most fresh installations of Windows Server 2012 R2, you'll need to upgrade by downloading and installing [Windows Management Framework (WMF) 5.1](https://www.microsoft.com/download/details.aspx?id=54616). The appropriate package to download and install for Windows Server 2012 R2 is **Win8.1AndW2K12R2-KB\*\*\*\*\*\*\*-x64.msu**.
+
+ PowerShell 6+ can be used with any supported system, and can be downloaded via its [GitHub page](https://github.com/PowerShell/PowerShell#get-powershell).
+ ## Prepare Windows Server to use with Azure File Sync
storage Storage Files Identity Auth Azure Active Directory Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-azure-active-directory-enable.md
Before you enable Azure AD Kerberos authentication over SMB for Azure file share
The Azure AD Kerberos functionality for hybrid identities is only available on the following operating systems:
- - Windows 11 Enterprise single or multi-session.
- - Windows 10 Enterprise single or multi-session, versions 2004 or later with the latest cumulative updates installed, especially the [KB5007253 - 2021-11 Cumulative Update Preview for Windows 10](https://support.microsoft.com/topic/november-22-2021-kb5007253-os-builds-19041-1387-19042-1387-19043-1387-and-19044-1387-preview-d1847be9-46c1-49fc-bf56-1d469fc1b3af).
+ - Windows 11 Enterprise/Pro single or multi-session.
+ - Windows 10 Enterprise/Pro single or multi-session, versions 2004 or later with the latest cumulative updates installed, especially the [KB5007253 - 2021-11 Cumulative Update Preview for Windows 10](https://support.microsoft.com/topic/november-22-2021-kb5007253-os-builds-19041-1387-19042-1387-19043-1387-and-19044-1387-preview-d1847be9-46c1-49fc-bf56-1d469fc1b3af).
- Windows Server, version 2022 with the latest cumulative updates installed, especially the [KB5007254 - 2021-11 Cumulative Update Preview for Microsoft server operating system version 21H2](https://support.microsoft.com/topic/november-22-2021-kb5007254-os-build-20348-380-preview-9a960291-d62e-486a-adcc-6babe5ae6fc1). To learn how to create and configure a Windows VM and log in by using Azure AD-based authentication, see [Log in to a Windows virtual machine in Azure by using Azure AD](../../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md).
stream-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Stream Analytics description: Lists Azure Policy built-in policy definitions for Azure Stream Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
synapse-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/policy-reference.md
Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Azure Synapse Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
synapse-analytics Apache Spark 33 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-33-runtime.md
Title: Azure Synapse Runtime for Apache Spark 3.3 description: Supported versions of Spark, Scala, Python, and .NET for Apache Spark 3.3.-+
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This docume
> [!IMPORTANT] > * Azure Synapse Runtime for Apache Spark 3.3 is currently in Public Preview. > * We are actively rolling out the final changes to all production regions with the goal of ensuring a seamless implementation. As we monitor the stability of these updates, we tentatively anticipate a general availability date of February 23rd. Please note that this is subject to change and we will provide updates as they become available.
-> * The .NET Core 3.1 library has reached end of support and therefore has been removed from the Azure Synapse Runtime for Apache Spark 3.3, meaning users will no longer be able to access Spark APIs through C# and F# and execute C# code in notebooks and through jobs. For more information about Azure Synapse Runtime for Apache Spark 3.3 and its components, please refer to the Release Notes. Additionally, for more information about the guidelines for the availability of support throughout the life of a product which applies to Azure Synapse Analytics, please refer to the Lifecycle Policy.
-
+ ## Component versions | Component | Version |
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This docume
| Python | 3.10 | | R (Preview) | 4.2.2 |
+>[!WARNING]
+> * [.NET for Apache Spark](https://github.com/dotnet/spark) is an open-source project under the .NET Foundation that currently depends on the .NET Core 3.1 library, which has reached end of support. For more information, see the [.NET Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core). As a result, the .NET for Apache Spark library has been removed from the Azure Synapse Runtime for Apache Spark version 3.3, which means it's no longer possible to use Apache Spark APIs via C# and F#, or to execute C# code in notebooks within Synapse or through Apache Spark job definitions in Synapse. This change affects only Azure Synapse Runtime for Apache Spark 3.3 and later. We'll continue to support .NET for Apache Spark in all previous versions of the Azure Synapse Runtime according to [their lifecycle stages](./runtime-for-apache-spark-lifecycle-and-supportability.md), but we don't plan to support it in Azure Synapse Runtime for Apache Spark 3.3 and future versions. We recommend that users with existing workloads written in C# or F# migrate to Python or Scala.
++ ## Libraries The following sections present the libraries included in Azure Synapse Runtime for Apache Spark 3.3.
update-center Manage Arc Enabled Servers Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-arc-enabled-servers-programmatically.md
description: This article tells how to use Update management center (preview) us
Previously updated : 04/21/2022 Last updated : 02/20/2023 # How to programmatically manage updates for Azure Arc-enabled servers
-This article walks you through the process of using the Azure REST API to trigger an assessment and an update deployment on your Azure Arc-enabled servers with update management (preview) in Azure. If you are new to update management center (preview) and you want to learn more, see [overview of update management center (preview)](overview.md). To use the Azure REST API to manage Azure virtual machines, see [How to programmatically work with Azure virtual machines](manage-vms-programmatically.md).
+This article walks you through the process of using the Azure REST API to trigger an assessment and an update deployment on your Azure Arc-enabled servers with update management (preview) in Azure. If you're new to update management center (preview) and you want to learn more, see [overview of update management center (preview)](overview.md). To use the Azure REST API to manage Azure virtual machines, see [How to programmatically work with Azure virtual machines](manage-vms-programmatically.md).
Update management center (preview) in Azure enables you to use the [Azure REST API](/rest/api/azure) for programmatic access. Additionally, you can use the appropriate REST commands from [Azure PowerShell](/powershell/azure) and [Azure CLI](/cli/azure).
The following table describes the elements of the request body:
| Property | Description | |-|-| | `maximumDuration` | Maximum amount of time in minutes the OS update operation can take. It must be an ISO 8601-compliant duration string such as `PT100M`. |
-| `rebootSetting` | Flag to state if machine should be rebooted if Guest OS update installation needs it for completion. Acceptable values are: `IfRequired, NeverReboot, AlwaysReboot`. |
+| `rebootSetting` | Flag to state whether the machine should be rebooted if the Guest OS update installation requires a reboot to complete. Acceptable values are: `IfRequired, NeverReboot, AlwaysReboot`. |
| `windowsParameters` | Parameter options for Guest OS update on machine running a supported Microsoft Windows Server operating system. | | `windowsParameters - classificationsToInclude` | List of categories or classifications of OS updates to apply, as supported and provided by Windows Server OS. Acceptable values are: `Critical, Security, UpdateRollUp, FeaturePack, ServicePack, Definition, Tools, Update` |
-| `windowsParameters - kbNumbersToInclude` | List of Windows Update KB IDs that are available to the machine and need to be installed. If you have included any 'classificationsToInclude', the KBs available in the category will be installed. 'kbNumbersToInclude' is an option to provide list of specific KB IDs over and above that you want to get installed. For example: `1234` |
-| `windowsParameters - kbNumbersToExclude` | List of Windows Update KB Ids that are available to the machine and should **not** to be installed. If you have included any 'classificationsToInclude', the KBs available in the category will be installed. 'kbNumbersToExclude' is an option to provide list of specific KB IDs that you want to ensure don't get installed. For example: `5678` |
+| `windowsParameters - kbNumbersToInclude` | List of Windows Update KB IDs that are available to the machine and that need to be installed. If you've included any 'classificationsToInclude', the KBs available in those categories are installed. 'kbNumbersToInclude' is an option to provide a list of specific KB IDs, over and above the classifications, that you want installed. For example: `1234` |
+| `windowsParameters - kbNumbersToExclude` | List of Windows Update KB IDs that are available to the machine and that should **not** be installed. If you've included any 'classificationsToInclude', the KBs available in those categories will be installed. 'kbNumbersToExclude' is an option to provide a list of specific KB IDs that you want to ensure don't get installed. For example: `5678` |
| `linuxParameters` | Parameter options for Guest OS update when machine is running supported Linux distribution | | `linuxParameters - classificationsToInclude` | List of categories or classifications of OS updates to apply, as supported & provided by Linux OS's package manager used. Acceptable values are: `Critical, Security, Others`. For more information, see [Linux package manager and OS support](./support-matrix.md#supported-operating-systems). |
-| `linuxParameters - packageNameMasksToInclude` | List of Linux packages that are available to the machine and need to be installed. If you have included any 'classificationsToInclude', the packages available in the category will be installed. 'packageNameMasksToInclude' is an option to provide list of packages over and above that you want to get installed. For example: `mysql, libc=1.0.1.1, kernel*` |
-| `linuxParameters - packageNameMasksToExclude` | List of Linux packages that are available to the machine and should **not** be installed. If you have included any 'classificationsToInclude', the packages available in the category will be installed. 'packageNameMasksToExclude' is an option to provide list of specific packages that you want to ensure don't get installed. For example: `mysql, libc=1.0.1.1, kernel*` |
+| `linuxParameters - packageNameMasksToInclude` | List of Linux packages that are available to the machine and need to be installed. If you've included any 'classificationsToInclude', the packages available in those categories will be installed. 'packageNameMasksToInclude' is an option to provide a list of packages, over and above the classifications, that you want installed. For example: `mysql, libc=1.0.1.1, kernel*` |
+| `linuxParameters - packageNameMasksToExclude` | List of Linux packages that are available to the machine and should **not** be installed. If you've included any 'classificationsToInclude', the packages available in those categories will be installed. 'packageNameMasksToExclude' is an option to provide a list of specific packages that you want to ensure don't get installed. For example: `mysql, libc=1.0.1.1, kernel*` |
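
As a sketch, you can send such a request body from Azure PowerShell with `Invoke-AzRestMethod`; the subscription, resource group, machine name, and `api-version` below are illustrative placeholders:

```powershell
# Trigger an update deployment on an Arc-enabled server (all IDs are placeholders).
Invoke-AzRestMethod `
    -Path "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.HybridCompute/machines/<machineName>/installPatches?api-version=2020-08-15-preview" `
    -Method POST `
    -Payload @'
{
  "maximumDuration": "PT120M",
  "rebootSetting": "IfRequired",
  "windowsParameters": {
    "classificationsToInclude": ["Critical", "Security"],
    "kbNumbersToExclude": ["5678"]
  }
}
'@
```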
# [Azure REST API](#tab/rest)
The following table describes the elements of the request body:
| `properties.extensionProperties` | Gets or sets extensionProperties of the maintenanceConfiguration | | `properties.maintenanceScope` | Gets or sets maintenanceScope of the configuration | | `properties.maintenanceWindow.duration` | Duration of the maintenance window in HH:mm format. If not provided, default value will be used based on maintenance scope provided. Example: 05:00. |
-| `properties.maintenanceWindow.expirationDateTime` | Effective expiration date of the maintenance window in YYYY-MM-DD hh:MM format. The window will be created in the time zone provided and adjusted to daylight savings according to that time zone. Expiration date must be set to a future date. If not provided, it will be set to the maximum datetime 9999-12-31 23:59:59. |
-| `properties.maintenanceWindow.recurEvery` | Rate at which a Maintenance window is expected to recur. The rate can be expressed as daily, weekly, or monthly schedules. Daily schedule are formatted as recurEvery: [Frequency as integer]['Day(s)']. If no frequency is provided, the default frequency is 1. Daily schedule examples are recurEvery: Day, recurEvery: 3Days. Weekly schedule are formatted as recurEvery: [Frequency as integer]['Week(s)'] [Optional comma separated list of weekdays Monday-Sunday]. Weekly schedule examples are recurEvery: 3Weeks, recurEvery: Week Saturday,Sunday. Monthly schedules are formatted as [Frequency as integer]['Month(s)'] [Comma separated list of month days] or [Frequency as integer]['Month(s)'] [Week of Month (First, Second, Third, Fourth, Last)] [Weekday Monday-Sunday]. Monthly schedule examples are recurEvery: Month, recurEvery: 2Months, recurEvery: Month day23,day24, recurEvery: Month Last Sunday, recurEvery: Month Fourth Monday. |
-| `properties.maintenanceWindow.startDateTime` | Effective start date of the maintenance window in YYYY-MM-DD hh:mm format. The start date can be set to either the current date or future date. The window will be created in the time zone provided and adjusted to daylight savings according to that time zone. |
-| `properties.maintenanceWindow.timeZone` | Name of the timezone. List of timezones can be obtained by executing [System.TimeZoneInfo]::GetSystemTimeZones() in PowerShell. Example: Pacific Standard Time, UTC, W. Europe Standard Time, Korea Standard Time, Cen. Australia Standard Time. |
+| `properties.maintenanceWindow.expirationDateTime` | Effective expiration date of the maintenance window in YYYY-MM-DD hh:MM format. The window is created in the time zone provided and adjusted to daylight savings according to that time zone. You must set the expiration date to a future date. If not provided, it will be set to the maximum datetime 9999-12-31 23:59:59. |
+| `properties.maintenanceWindow.recurEvery` | Rate at which a Maintenance window is expected to recur. The rate can be expressed as daily, weekly, or monthly schedules. You can format daily schedules as recurEvery: [Frequency as integer]['Day(s)']. If no frequency is provided, the default frequency is 1. Daily schedule examples are recurEvery: Day, recurEvery: 3Days. Weekly schedules are formatted as recurEvery: [Frequency as integer]['Week(s)'] [Optional comma separated list of weekdays Monday-Sunday]. Weekly schedule examples are recurEvery: 3Weeks, recurEvery: Week Saturday,Sunday. You can format monthly schedules as [Frequency as integer]['Month(s)'] [Comma separated list of month days] or [Frequency as integer]['Month(s)'] [Week of Month (First, Second, Third, Fourth, Last)] [Weekday Monday-Sunday]. Monthly schedule examples are recurEvery: Month, recurEvery: 2Months, recurEvery: Month day23,day24, recurEvery: Month Last Sunday, recurEvery: Month Fourth Monday. |
+| `properties.maintenanceWindow.startDateTime` | Effective start date of the maintenance window in YYYY-MM-DD hh:mm format. You can set the start date to either the current date or future date. The window will be created in the time zone provided and adjusted to daylight savings according to that time zone. |
+| `properties.maintenanceWindow.timeZone` | Name of the timezone. You can obtain the list of timezones by executing [System.TimeZoneInfo]::GetSystemTimeZones() in PowerShell. Example: Pacific Standard Time, UTC, W. Europe Standard Time, Korea Standard Time, Cen. Australia Standard Time. |
| `properties.namespace` | Gets or sets namespace of the resource | | `properties.visibility` | Gets or sets the visibility of the configuration. The default value is 'Custom' | | `systemData` | Azure Resource Manager metadata containing createdBy and modifiedBy information. |
PUT on '/subscriptions/0f55bb56-6089-4c7e-9306-41fb78fc5844/resourceGroups/atsca
"location": "eastus2euap", "properties": { "namespace": null,
- "extensionProperties": {},
+ "extensionProperties": {
+ "InGuestPatchMode" : "User"
+ },
"maintenanceScope": "InGuestPatch", "maintenanceWindow": { "startDateTime": "2021-08-21 01:18",
The format of the request body is as follows:
"location": "eastus2euap", "properties": { "namespace": null,
- "extensionProperties": {},
+ "extensionProperties": {
+ "InGuestPatchMode": "User"
+ },
"maintenanceScope": "InGuestPatch", "maintenanceWindow": { "startDateTime": "2021-08-21 01:18",
Invoke-AzRestMethod -Path "/subscriptions/<subscriptionId>/resourceGroups/<resou
"location": "eastus2euap", "properties": { "namespace": null,
- "extensionProperties": {},
+ "extensionProperties": {
+ "InGuestPatchMode" : "User"
+ },
"maintenanceScope": "InGuestPatch", "maintenanceWindow": { "startDateTime": "2021-12-21 01:18",
Invoke-AzRestMethod -Path "<ARC or Azure VM resourceId>/providers/Microsoft.Main
}' ```
+## Remove machine from the schedule
+
+To remove a machine from the schedule, get all the configuration assignment names for the machine that you created to associate the machine with the current schedule. Query Azure Resource Graph for them as follows:
+
+```kusto
+maintenanceresources
+| where type =~ "microsoft.maintenance/configurationassignments"
+| where properties.maintenanceConfigurationId =~ "<maintenance configuration Resource ID>"
+| where properties.resourceId =~ "<Machine Resource Id>"
+| project name, id
+```
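+
+If you prefer to run that query from PowerShell, here's a minimal sketch; it assumes the Az.ResourceGraph module (`Install-Module Az.ResourceGraph`), and the resource IDs are placeholders:
+
+```powershell
+# Query Azure Resource Graph for the configuration assignment names (IDs are placeholders).
+$query = @"
+maintenanceresources
+| where type =~ 'microsoft.maintenance/configurationassignments'
+| where properties.maintenanceConfigurationId =~ '<maintenance configuration Resource ID>'
+| where properties.resourceId =~ '<Machine Resource Id>'
+| project name, id
+"@
+Search-AzGraph -Query $query
+```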
+After you obtain the assignment names from the query above, delete the configuration assignment with the following DELETE request:
+```rest
+DELETE on `<ARC or Azure VM resourceId>/providers/Microsoft.Maintenance/configurationAssignments/<configurationAssignment name>?api-version=2021-09-01-preview`
+```
## Next steps
update-center Manage Vms Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-vms-programmatically.md
# How to programmatically manage updates for Azure VMs
-This article walks you through the process of using the Azure REST API to trigger an assessment and an update deployment on your Azure virtual machine with update management center (preview) in Azure. If you are new to update management center (preview) and you want to learn more, see [overview of update management center (preview)](overview.md). To use the Azure REST API to manage Arc-enabled servers, see [How to programmatically work with Arc-enabled servers](manage-arc-enabled-servers-programmatically.md).
+This article walks you through the process of using the Azure REST API to trigger an assessment and an update deployment on your Azure virtual machine with update management center (preview) in Azure. If you're new to update management center (preview) and you want to learn more, see [overview of update management center (preview)](overview.md). To use the Azure REST API to manage Arc-enabled servers, see [How to programmatically work with Arc-enabled servers](manage-arc-enabled-servers-programmatically.md).
-Update management center (private preview) in Azure enables you to use the [Azure REST API](/rest/api/azure/) for access programmatically. Additionally, you can use the appropriate REST commands from [Azure PowerShell](/powershell/azure/) and [Azure CLI](/cli/azure/).
+Update management center (preview) in Azure enables you to use the [Azure REST API](/rest/api/azure/) for programmatic access. Additionally, you can use the appropriate REST commands from [Azure PowerShell](/powershell/azure/) and [Azure CLI](/cli/azure/).
Support for Azure REST API to manage Azure VMs is available through the update management center (preview) virtual machine extension.
The following table describes the elements of the request body:
| Property | Description | |-|-| | `maximumDuration` | Maximum amount of time that the operation runs. It must be an ISO 8601-compliant duration string such as `PT4H` (4 hours). |
-| `rebootSetting` | Flag to state if machine should be rebooted if Guest OS update installation requires it for completion. Acceptable values are: `IfRequired, NeverReboot, AlwaysReboot`. |
+| `rebootSetting` | Flag to state whether the machine should be rebooted if the Guest OS update installation requires a reboot to complete. Acceptable values are: `IfRequired, NeverReboot, AlwaysReboot`. |
| `windowsParameters` | Parameter options for Guest OS update on Azure VMs running a supported Microsoft Windows Server operating system. | | `windowsParameters - classificationsToInclude` | List of categories/classifications to be used for selecting the updates to be installed on the machine. Acceptable values are: `Critical, Security, UpdateRollUp, FeaturePack, ServicePack, Definition, Tools, Updates` | | `windowsParameters - kbNumbersToInclude` | List of Windows Update KB Ids that should be installed. All updates belonging to the classifications provided in `classificationsToInclude` list will be installed. `kbNumbersToInclude` is an optional list of specific KBs to be installed in addition to the classifications. For example: `1234` |
-| `windowsParameters - kbNumbersToExclude` | List of Windows Update KB Ids that should **not** be installed. This parameter overrides `windowsParameters - classificationsToInclude`, meaning a Windows Update KB Id specified here will not be installed even if it belongs to the classification provided under `classificationsToInclude` parameter. |
+| `windowsParameters - kbNumbersToExclude` | List of Windows Update KB Ids that should **not** be installed. This parameter overrides `windowsParameters - classificationsToInclude`, meaning a Windows Update KB ID specified here won't be installed even if it belongs to the classification provided under `classificationsToInclude` parameter. |
| `linuxParameters` | Parameter options for Guest OS update on Azure VMs running a supported Linux server operating system. | | `linuxParameters - classificationsToInclude` | List of categories/classifications to be used for selecting the updates to be installed on the machine. Acceptable values are: `Critical, Security, Other` | | `linuxParameters - packageNameMasksToInclude` | List of Linux packages that should be installed. All updates belonging to the classifications provided in `classificationsToInclude` list will be installed. `packageNameMasksToInclude` is an optional list of package names to be installed in addition to the classifications. For example: `mysql, libc=1.0.1.1, kernel*` |
-| `linuxParameters - packageNameMasksToExclude` | List of updates that should **not** be installed. This parameter overrides `linuxParameters - packageNameMasksToExclude`, meaning a package specified here will not be installed even if it belongs to the classification provided under `classificationsToInclude` parameter. |
+| `linuxParameters - packageNameMasksToExclude` | List of updates that should **not** be installed. This parameter overrides `linuxParameters - classificationsToInclude`, meaning a package specified here won't be installed even if it belongs to the classification provided under `classificationsToInclude` parameter. |
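
As a sketch, an equivalent call for an Azure VM with Linux parameters might look like the following from Azure PowerShell; the resource path and `api-version` are illustrative placeholders:

```powershell
# Trigger an update deployment on an Azure VM (all IDs are placeholders).
Invoke-AzRestMethod `
    -Path "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.Compute/virtualMachines/<vmName>/installPatches?api-version=2021-03-01" `
    -Method POST `
    -Payload @'
{
  "maximumDuration": "PT4H",
  "rebootSetting": "IfRequired",
  "linuxParameters": {
    "classificationsToInclude": ["Critical", "Security"],
    "packageNameMasksToExclude": ["kernel*"]
  }
}
'@
```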
# [Azure REST API](#tab/rest)
The following table describes the elements of the request body:
| `properties.extensionProperties` | Gets or sets extensionProperties of the maintenanceConfiguration | | `properties.maintenanceScope` | Gets or sets maintenanceScope of the configuration | | `properties.maintenanceWindow.duration` | Duration of the maintenance window in HH:MM format. If not provided, default value will be used based on maintenance scope provided. Example: 05:00. |
-| `properties.maintenanceWindow.expirationDateTime` | Effective expiration date of the maintenance window in YYYY-MM-DD hh:mm format. The window will be created in the time zone provided and adjusted to daylight savings according to that time zone. Expiration date must be set to a future date. If not provided, it will be set to the maximum datetime 9999-12-31 23:59:59. |
-| `properties.maintenanceWindow.recurEvery` | Rate at which a maintenance window is expected to recur. The rate can be expressed as daily, weekly, or monthly schedules. Daily schedule are formatted as recurEvery: [Frequency as integer]['Day(s)']. If no frequency is provided, the default frequency is 1. Daily schedule examples are recurEvery: Day, recurEvery: 3Days. Weekly schedule are formatted as recurEvery: [Frequency as integer]['Week(s)'] [Optional comma separated list of weekdays Monday-Sunday]. Weekly schedule examples are recurEvery: 3Weeks, recurEvery: Week Saturday,Sunday. Monthly schedules are formatted as [Frequency as integer]['Month(s)'] [Comma separated list of month days] or [Frequency as integer]['Month(s)'] [Week of Month (First, Second, Third, Fourth, Last)] [Weekday Monday-Sunday]. Monthly schedule examples are recurEvery: Month, recurEvery: 2Months, recurEvery: Month day23,day24, recurEvery: Month Last Sunday, recurEvery: Month Fourth Monday. |
-| `properties.maintenanceWindow.startDateTime` | Effective start date of the maintenance window in YYYY-MM-DD hh:mm format. The start date can be set to either the current date or future date. The window will be created in the time zone provided and adjusted to daylight savings according to that time zone. |
-| `properties.maintenanceWindow.timeZone` | Name of the timezone. List of timezones can be obtained by executing [System.TimeZoneInfo]::GetSystemTimeZones() in PowerShell. Example: Pacific Standard Time, UTC, W. Europe Standard Time, Korea Standard Time, Cen. Australia Standard Time. |
+| `properties.maintenanceWindow.expirationDateTime` | Effective expiration date of the maintenance window in YYYY-MM-DD hh:mm format. The window is created in the time zone provided and adjusted to daylight savings according to that time zone. Expiration date must be set to a future date. If not provided, it will be set to the maximum datetime 9999-12-31 23:59:59. |
+| `properties.maintenanceWindow.recurEvery` | Rate at which a maintenance window is expected to recur. The rate can be expressed as daily, weekly, or monthly schedules. Daily schedules are formatted as recurEvery: [Frequency as integer]['Day(s)']. If no frequency is provided, the default frequency is 1. Daily schedule examples are recurEvery: Day, recurEvery: 3Days. Weekly schedules are formatted as recurEvery: [Frequency as integer]['Week(s)'] [Optional comma separated list of weekdays Monday-Sunday]. Weekly schedule examples are recurEvery: 3Weeks, recurEvery: Week Saturday,Sunday. Monthly schedules are formatted as [Frequency as integer]['Month(s)'] [Comma separated list of month days] or [Frequency as integer]['Month(s)'] [Week of Month (First, Second, Third, Fourth, Last)] [Weekday Monday-Sunday]. Monthly schedule examples are recurEvery: Month, recurEvery: 2Months, recurEvery: Month day23,day24, recurEvery: Month Last Sunday, recurEvery: Month Fourth Monday. |
+| `properties.maintenanceWindow.startDateTime` | Effective start date of the maintenance window in YYYY-MM-DD hh:mm format. You can set the start date to either the current date or future date. The window will be created in the time zone provided and adjusted to daylight savings according to that time zone. |
+| `properties.maintenanceWindow.timeZone` | Name of the timezone. List of timezones can be obtained by executing [System.TimeZoneInfo]::GetSystemTimeZones() in PowerShell. Example: Pacific Standard Time, UTC, W. Europe Standard Time, Korea Standard Time, Cen. Australia Standard Time. |
| `properties.namespace` | Gets or sets namespace of the resource | | `properties.visibility` | Gets or sets the visibility of the configuration. The default value is 'Custom' | | `systemData` | Azure Resource Manager metadata containing createdBy and modifiedBy information. |
Invoke-AzRestMethod -Path "<ARC or Azure VM resourceId>/providers/Microsoft.Main
}' ```
+## Remove machine from the schedule
+
+To remove a machine from the schedule, get all the configuration assignment names that were created to associate the machine with the current schedule. Query Azure Resource Graph for them as follows:
+
+```kusto
+maintenanceresources
+| where type =~ "microsoft.maintenance/configurationassignments"
+| where properties.maintenanceConfigurationId =~ "<maintenance configuration Resource ID>"
+| where properties.resourceId =~ "<Machine Resource Id>"
+| project name, id
+```
+After you obtain the assignment names from the query above, delete the configuration assignment with the following DELETE request:
+```rest
+DELETE on `<ARC or Azure VM resourceId>/providers/Microsoft.Maintenance/configurationAssignments/<configurationAssignment name>?api-version=2021-09-01-preview`
+```
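
You can issue that DELETE from Azure PowerShell as well. A minimal sketch with `Invoke-AzRestMethod` (the resource ID and assignment name are placeholders):

```powershell
# Delete the configuration assignment to detach the machine from the schedule.
Invoke-AzRestMethod `
    -Path "<ARC or Azure VM resourceId>/providers/Microsoft.Maintenance/configurationAssignments/<configurationAssignment name>?api-version=2021-09-01-preview" `
    -Method DELETE
```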
## Next steps
virtual-desktop Publish Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/publish-apps.md
To publish a built-in app:
CommandLineSetting = '<Allow|Require|DoNotAllow>' IconIndex = '0' IconPath = '<IconPath>'
- howInPortal = $true
+ ShowInPortal = $true
} New-AzWvdApplication @parameters
$parameters = @{
CommandLineSetting = '<Allow|Require|DoNotAllow>' IconIndex = '0' IconPath = 'C:\Windows\SystemApps\Microsoft.MicrosoftEdge_8wekyb3d8bbwe\microsoftedge'
- howInPortal = $true
+ ShowInPortal = $true
} New-AzWvdApplication @parameters
virtual-machine-scale-sets Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/policy-reference.md
Previously updated : 01/05/2023 Last updated : 02/21/2023 # Azure Policy built-in definitions for Azure Virtual Machine Scale Sets
virtual-machines Dlsv5 Dldsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dlsv5-dldsv5-series.md
+
+ Title: 'Dlsv5 and Dldsv5 (preview)'
+description: Specifications for the Dlsv5 and Dldsv5-series VMs.
+++++ Last updated : 02/16/2023+++
+# Dlsv5 and Dldsv5-series (preview)
+
+The Dlsv5 and Dldsv5-series Virtual Machines run on the Intel&reg; Xeon&reg; Platinum 8370C (Ice Lake) processor in a [hyper threaded](https://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html) configuration. This new processor features an all-core turbo clock speed of 3.5 GHz with [Intel&reg; Turbo Boost Technology](https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html), [Intel&reg; Advanced-Vector Extensions 512 (Intel&reg; AVX-512)](https://www.intel.com/content/www/us/en/architecture-and-technology/avx-512-overview.html) and [Intel&reg; Deep Learning Boost](https://software.intel.com/content/www/us/en/develop/topics/ai/deep-learning-boost.html). The Dlsv5 and Dldsv5 VM series provide 2 GiB of RAM per vCPU and are optimized for workloads that require less RAM per vCPU than standard VM sizes. Target workloads include web servers, gaming, video encoding, AI/ML, and batch processing.
++
+> [!NOTE]
+> This feature is currently in preview. Previews are made available to you on the condition that you agree to the [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Some aspects of this feature may change prior to general availability (GA).
+
+## Dlsv5-series
+Dlsv5-series virtual machines run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) processor reaching an all-core turbo clock speed of up to 3.5 GHz. These virtual machines offer up to 96 vCPUs and 192 GiB of RAM. These VM sizes can reduce costs when running non-memory-intensive applications.
+
+Dlsv5-series virtual machines don't have any temporary storage, which lowers the price of entry. You can attach Standard SSD, Standard HDD, and Premium SSD disk types. You can also attach Ultra Disk storage based on its regional availability. Disk storage is billed separately from virtual machines. [See pricing for disks](https://azure.microsoft.com/pricing/details/managed-disks/).
++
+[Premium Storage](premium-storage-performance.md): Supported<br>
+[Premium Storage caching](premium-storage-performance.md): Supported<br>
+[Live Migration](maintenance-and-updates.md): Supported<br>
+[Memory Preserving Updates](maintenance-and-updates.md): Supported<br>
+[VM Generation Support](generation-2.md): Generation 1 and 2<br>
+[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md)<sup>1</sup>: Required <br>
+[Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
+<br>
+
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max uncached disk throughput: IOPS/MBps<sup>*</sup> | Max burst uncached disk throughput: IOPS/MBps | Max NICs | Max network bandwidth (Mbps) |
+||||||||| |
+| Standard_D2ls_v5 | 2 | 4 | Remote Storage Only | 4 | 3750/85 | 10000/1200 | 2 | 12500 |
+| Standard_D4ls_v5 | 4 | 8 | Remote Storage Only | 8 | 6400/145 | 20000/1200 | 2 | 12500 |
+| Standard_D8ls_v5 | 8 | 16 | Remote Storage Only | 16 | 12800/290 | 20000/1200 | 4 | 12500 |
+| Standard_D16ls_v5 | 16 | 32 | Remote Storage Only | 32 | 25600/600 | 40000/1200 | 8 | 12500 |
+| Standard_D32ls_v5 | 32 | 64 | Remote Storage Only | 32 | 51200/865 | 80000/2000 | 8 | 16000 |
+| Standard_D48ls_v5 | 48 | 96 | Remote Storage Only | 32 | 76800/1315 | 80000/3000 | 8 | 24000 |
+| Standard_D64ls_v5 | 64 | 128 | Remote Storage Only | 32 | 80000/1735 | 80000/3000 | 8 | 30000 |
+| Standard_D96ls_v5 | 96 | 192 | Remote Storage Only | 32 | 80000/2600 | 80000/4000 |8 | 35000 |
+
+<sup>*</sup> These IOPS values can be guaranteed by using [Gen2 VMs](generation-2.md)<br>
+<sup>1</sup> Accelerated networking is required and turned on by default on all Dlsv5 virtual machines.<br>
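+
+As a quick sketch, you can request one of these sizes when creating a VM with Azure PowerShell; the resource group, VM name, location, image alias, and credentials below are placeholders:
+
+```powershell
+# Create a VM that uses a Dlsv5 size (all names are placeholders).
+$cred = Get-Credential
+New-AzVM -ResourceGroupName 'myResourceGroup' `
+    -Name 'myDlsv5Vm' `
+    -Location 'eastus' `
+    -Size 'Standard_D2ls_v5' `
+    -Image 'Ubuntu2204' `
+    -Credential $cred
+```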
+
+## Dldsv5-series
+
+Dldsv5-series virtual machines run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) processor reaching an all-core turbo clock speed of up to 3.5 GHz. These virtual machines offer up to 96 vCPUs and 192 GiB of RAM, as well as fast local SSD storage up to 3,600 GiB. These VM sizes can reduce costs when running non-memory-intensive applications.
+
+Dldsv5-series virtual machines support Standard SSD, Standard HDD, and Premium SSD disk types. You can also attach Ultra Disk storage based on its regional availability. Disk storage is billed separately from virtual machines. [See pricing for disks](https://azure.microsoft.com/pricing/details/managed-disks/).
++
+[Premium Storage](premium-storage-performance.md): Supported<br>
+[Premium Storage caching](premium-storage-performance.md): Supported<br>
+[Live Migration](maintenance-and-updates.md): Supported<br>
+[Memory Preserving Updates](maintenance-and-updates.md): Supported<br>
+[VM Generation Support](generation-2.md): Generation 1 and 2<br>
+[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md)<sup>1</sup>: Required <br>
+[Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
+<br>
++
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps<sup>*</sup> | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>2</sup> | Max NICs | Max network bandwidth (Mbps) |
+|||||||||||
+| Standard_D2lds_v5 | 2 | 4 | 75 | 4 | 9000/125 | 3750/85 | 10000/1200 | 2 | 12500 |
+| Standard_D4lds_v5 | 4 | 8 | 150 | 8 | 19000/250 | 6400/145 | 20000/1200 | 2 | 12500 |
+| Standard_D8lds_v5 | 8 | 16 | 300 | 16 | 38000/500 | 12800/290 | 20000/1200 | 4 | 12500 |
+| Standard_D16lds_v5 | 16 | 32 | 600 | 32 | 75000/1000 | 25600/600 | 40000/1200 | 8 | 12500 |
+| Standard_D32lds_v5 | 32 | 64 | 1200 | 32 | 150000/2000 | 51200/865 | 80000/2000 | 8 | 16000 |
+| Standard_D48lds_v5 | 48 | 96 | 1800 | 32 | 225000/3000 | 76800/1315 | 80000/3000 | 8 | 24000 |
+| Standard_D64lds_v5 | 64 | 128 | 2400 | 32 | 300000/4000 | 80000/1735 | 80000/3000 | 8 | 30000 |
+| Standard_D96lds_v5 | 96 | 192 | 3600 | 32 | 450000/4000 | 80000/2600 | 80000/4000 | 8 | 35000 |
+
+<sup>*</sup> These IOPS values can be guaranteed by using [Gen2 VMs](generation-2.md)<br>
+<sup>1</sup> Accelerated networking is required and turned on by default on all Dldsv5 virtual machines.<br>
+<sup>2</sup> Dldsv5-series virtual machines can [burst](disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.
++
+## Other sizes and information
+
+- [General purpose](sizes-general.md)
+- [Memory optimized](sizes-memory.md)
+- [Storage optimized](sizes-storage.md)
+- [GPU optimized](sizes-gpu.md)
+- [High performance compute](sizes-hpc.md)
+- [Previous generations](sizes-previous-gen.md)
+
+For pricing details, see the [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/).
+
+For more information on disk types, see [Disk types](./disks-types.md#ultra-disks).
+
+## Next steps
+
+Learn more about how [Azure compute units (ACU)](acu.md) can help you compare compute performance across Azure SKUs.
virtual-machines Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/policy-reference.md
Title: Built-in policy definitions for Azure Virtual Machines description: Lists Azure Policy built-in policy definitions for Azure Virtual Machines. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023
virtual-machines Resize Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/resize-vm.md
Previously updated : 11/14/2022 Last updated : 2/21/2023
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
-This article shows you how to move a VM to a different [VM size](sizes.md).
+This article shows you how to change an existing virtual machine's [VM size](sizes.md).
-After you create a virtual machine (VM), you can scale the VM up or down by changing the VM size. In some cases, you must deallocate the VM first. This can happen if the new size isn't available on the hardware cluster that is currently hosting the VM.
+After you create a virtual machine (VM), you can scale the VM up or down by changing the VM size. In some cases, you must deallocate the VM first. Deallocation may be necessary if the new size isn't available on the same hardware cluster that is currently hosting the VM.
If your VM uses Premium Storage, make sure that you choose an **s** version of the size to get Premium Storage support. For example, choose Standard_E4**s**_v3 instead of Standard_E4_v3.
If your VM uses Premium Storage, make sure that you choose an **s** version of t
1. In the left menu, select **Size**. 1. Pick a new size from the list of available sizes and then select **Resize**. -
-If the virtual machine is currently running, changing its size will cause it to restart.
+> [!Note]
+> If the virtual machine is currently running, changing its size will cause it to restart.
If your VM is still running and you don't see the size you want in the list, stopping the virtual machine may reveal more sizes.
If your VM is still running and you don't see the size you want in the list, sto
To resize a VM, you need the latest [Azure CLI](/cli/azure/install-az-cli2) installed and logged in to an Azure account using [az login](/cli/azure/reference-index).
-1. View the list of available VM sizes on the hardware cluster where the VM is hosted with [az vm list-vm-resize-options](/cli/azure/vm). The following example lists VM sizes for the VM named `myVM` in the resource group `myResourceGroup` region:
+1. View the list of available VM sizes on the current hardware cluster using [az vm list-vm-resize-options](/cli/azure/vm). The following example lists VM sizes for the VM named `myVM` in the resource group `myResourceGroup`:
```azurecli-interactive az vm list-vm-resize-options \
To resize a VM, you need the latest [Azure CLI](/cli/azure/install-az-cli2) inst
--name myVM --output table ```
-2. If the desired VM size is listed, resize the VM with [az vm resize](/cli/azure/vm). The following example resizes the VM named `myVM` to the `Standard_DS3_v2` size:
+2. If you find the desired VM size listed, resize the VM with [az vm resize](/cli/azure/vm). The following example resizes the VM named `myVM` to the `Standard_DS3_v2` size:
```azurecli-interactive az vm resize \
To resize a VM, you need the latest [Azure CLI](/cli/azure/install-az-cli2) inst
--size Standard_DS3_v2 ```
- The VM restarts during this process. After the restart, your existing OS and data disks are kept. Anything on the temporary disk is lost.
+   The VM restarts during this process. After the restart, your VM will keep its existing OS and data disks. Anything on the temporary disk will be lost.
-3. If the desired VM size isn't listed, you need to first deallocate the VM with [az vm deallocate](/cli/azure/vm). This process allows the VM to then be resized to any size available that the region supports and then started. The following steps deallocate, resize, and then start the VM named `myVM` in the resource group named `myResourceGroup`:
+3. If you don't see the desired VM size, deallocate the VM with [az vm deallocate](/cli/azure/vm). Deallocating first allows you to resize the VM to any size the region supports and then start it again. The following steps deallocate, resize, and then start the VM named `myVM` in the resource group named `myResourceGroup`:
```azurecli-interactive # Variables will make this easier. Replace the values with your own.
$resourceGroup = "myResourceGroup"
$vmName = "myVM" ```
-List the VM sizes that are available in the region where the VM is hosted.
+List the VM sizes that are available in the region where the VM is hosted.
```azurepowershell-interactive Get-AzVMSize -ResourceGroupName $resourceGroup -VMName $vmName ```
-If the size you want is listed, run the following commands to resize the VM. If the desired size isn't listed, go on to step 3.
+If you see the size you want listed, run the following commands to resize the VM. If you don't see the desired size, go on to step 3.
```azurepowershell-interactive $vm = Get-AzVM -ResourceGroupName $resourceGroup -VMName $vmName
$vm.HardwareProfile.VmSize = "<newVMsize>"
Update-AzVM -VM $vm -ResourceGroupName $resourceGroup ```
-If the size you want isn't listed, run the following commands to deallocate the VM, resize it, and restart the VM. Replace **\<newVMsize>** with the size you want.
+If you don't see the size you want listed, run the following commands to deallocate the VM, resize it, and restart the VM. Replace **\<newVMsize>** with the size you want.
```azurepowershell-interactive Stop-AzVM -ResourceGroupName $resourceGroup -Name $vmName -Force
Start-AzVM -ResourceGroupName $resourceGroup -Name $vmName
**Use PowerShell to resize a VM in an availability set**
-If the new size for a VM in an availability set isn't available on the hardware cluster currently hosting the VM, then all VMs in the availability set will need to be deallocated to resize the VM. You also might need to update the size of other VMs in the availability set after one VM has been resized. To resize a VM in an availability set, perform the following steps.
+If the new size for a VM in an availability set isn't available on the hardware cluster currently hosting the VM, then you will need to deallocate all VMs in the availability set to resize the VM. You also might need to update the size of other VMs in the availability set after one VM has been resized. To resize a VM in an availability set, perform the following steps.
```azurepowershell-interactive $resourceGroup = "myResourceGroup" $vmName = "myVM" ```
-List the VM sizes that are available on the hardware cluster where the VM is hosted.
+List the VM sizes that are available on the hardware cluster where the VM is hosted.
```azurepowershell-interactive Get-AzVMSize `
Get-AzVMSize `
-VMName $vmName ```
-If the desired size is listed, run the following commands to resize the VM. If it isn't listed, go to the next section.
+If you see the size you want listed, run the following commands to resize the VM. If you don't see it listed, go to the next section.
```azurepowershell-interactive $vm = Get-AzVM `
Update-AzVM `
-ResourceGroupName $resourceGroup ```
-If the size you want isn't listed, continue with the following steps to deallocate all VMs in the availability set, resize VMs, and restart them.
+If you don't see the size you want listed, continue with the following steps to deallocate all VMs in the availability set, resize VMs, and restart them.
Stop all VMs in the availability set.
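As a sketch of that step, the loop below deallocates every VM referenced by the set. It assumes the Az.Compute `Get-AzAvailabilitySet` cmdlet, and the set name `myAvailabilitySet` is a placeholder:

```azurepowershell-interactive
# Deallocate every VM in the availability set before resizing.
$availabilitySet = Get-AzAvailabilitySet -ResourceGroupName $resourceGroup -Name "myAvailabilitySet"
foreach ($vmReference in $availabilitySet.VirtualMachinesReferences) {
    # The VM name is the last segment of the reference ID.
    $vmName = $vmReference.Id.Split("/")[-1]
    Stop-AzVM -ResourceGroupName $resourceGroup -Name $vmName -Force
}
```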
virtual-machines Ssh Keys Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ssh-keys-portal.md
Title: Create SSH keys in the Azure portal description: Learn how to generate and store SSH keys in the Azure portal for connecting to Linux VMs.-+ Previously updated : 08/25/2020- Last updated : 02/21/2023+
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-If you frequently use the portal to deploy Linux VMs, you can make using SSH keys simpler by creating them directly in the portal, or uploading them from your computer.
+If you frequently use the portal to deploy Linux VMs, you can simplify using SSH keys by integrating them into Azure. There are several ways to create SSH keys for use with Azure.
-You can create a SSH keys when you first create a VM, and reuse them for other VMs. Or, you can create SSH keys separately, so that you have a set of keys stored in Azure to fit your organizations needs.
+- You can create SSH keys when you first create a VM. Your keys aren't tied to a specific VM and you can use them in future applications.
-If you have existing keys and you want to simplify using them in the portal, you can upload them and store them in Azure for reuse.
+- You can create SSH keys in the Azure portal separate from a VM. You can use them with both new and old VMs.
+
+- You can create SSH keys externally and upload them for use in Azure.
+
+You can reuse your stored keys in various applications to fit your organization's needs.
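For the upload path, here's a minimal PowerShell sketch. It assumes the Az.Compute module's `New-AzSshKey` cmdlet, and the key path, resource group, and key name are placeholders:

```azurepowershell-interactive
# Upload an existing public key as a stored SSH key resource.
$publicKey = Get-Content -Path "$HOME/.ssh/id_rsa.pub" -Raw
New-AzSshKey -ResourceGroupName "myResourceGroup" -Name "mySSHKey" -PublicKey $publicKey
```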
For more detailed information about creating and using SSH keys with Linux VMs, see [Use SSH keys to connect to Linux VMs](./linux/ssh-from-windows.md).
1. In **Resource group** select **Create new** to create a new resource group to store your keys. Type a name for your resource group and select **OK**.
-1. In **Region** select a region to store your keys. You can use the keys in any region, this is just the region where they will be stored.
+1. In **Region**, select a region to store your keys. You can use the keys in any region; this option only sets where they're stored.
1. Type a name for your key in **Key pair name**.
1. In **SSH public key source**, select **Generate new key pair**.
-1. When you are done, select **Review + create**.
+1. When you're done, select **Review + create**.
1. After it passes validation, select **Create**.
-1. You will then get a pop-up window to, select **Download private key and create resource**. This will download the SSH key as a .pem file.
+1. You'll get a pop-up window; select **Download private key and create resource** to download the SSH key as a .pem file.
:::image type="content" source="./media/ssh-keys/download-key.png" alt-text="Download the private key as a .pem file":::
-1. Once the .pem file is downloaded, you might want to move it somewhere on your computer where it is easy to point to from your SSH client.
+1. Once you've downloaded the .pem file, you might want to move it somewhere on your computer where it's easy to point to from your SSH client.
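For example, a typical connection from an OpenSSH client might look like the following sketch; the key location, admin user name (`azureuser`), and VM address are placeholders:

```bash
# Restrict permissions on the private key, then point the SSH client at it.
chmod 400 ~/.ssh/myKey.pem
ssh -i ~/.ssh/myKey.pem azureuser@<vm-ip-address>
```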
## Connect to the VM
You can also upload a public SSH key to store in Azure. For information about how to create an SSH key pair, see [Use SSH keys to connect to Linux VMs](./linux/ssh-from-windows.md).
1. In **Resource group** select **Create new** to create a new resource group to store your keys. Type a name for your resource group and select **OK**.
-1. In **Region** select a region to store your keys. You can use the keys in any region, this is just the region where they will be stored.
+1. In **Region**, select a region to store your keys. You can use the keys in any region; this option only sets where they're stored.
1. Type a name for your key in **Key pair name**.
1. After validation completes, select **Create**.
-Once the key has been uploaded, you can choose to use it when you create a VM.
+Once you upload the key, you can choose to use it when you create a VM.
## List keys
-SSH keys created in the portal are stored as resources, so you can filter your resources view to see all of them.
+Azure stores your SSH keys created in the portal as resources, so you can filter your resources view to see all of them.
1. In the portal, select **All resources**.
1. In the filters, select **Type**, then unselect the **Select all** option to clear the list.
## Get the public key
-If you need your public key, you can easily copy it from the portal page for the key. Just list your keys (using the process in the last section) then select a key from the list. The page for your key will open and you can click the **Copy to clipboard** icon next to the key to copy it.
+If you need your public key, you can easily copy it from the portal page for the key. Just list your keys (using the process in the last section) then select a key from the list. The page for your key opens and you can click the **Copy to clipboard** icon next to the key to copy it.
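If you'd rather script this, here's a sketch using the Az.Compute module; it assumes the `Get-AzSshKey` cmdlet and its `PublicKey` property, and the resource group and key names are placeholders:

```azurepowershell-interactive
# List stored SSH key resources, then print one key's public value.
Get-AzSshKey | Select-Object Name, ResourceGroupName
(Get-AzSshKey -ResourceGroupName "myResourceGroup" -Name "mySSHKey").PublicKey
```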
## Next steps
virtual-machines Image Builder Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/image-builder-virtual-desktop.md
Multi-session images are intended for pooled usage. Here's an example of the image source configuration:
```json
"publisher": "MicrosoftWindowsDesktop",
"offer": "Windows-10",
-"sku": "20h2-evd",
+"sku": "20h2-avd",
"version": "latest"
```
virtual-network Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/policy-reference.md
Title: Built-in policy definitions for Azure Virtual Network description: Lists Azure Policy built-in policy definitions for Azure Virtual Network. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/05/2023 Last updated : 02/21/2023