Updates from: 02/26/2022 02:08:27
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Identity Provider Azure Ad Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-azure-ad-multi-tenant.md
Previously updated : 10/21/2021 Last updated : 02/25/2022
To obtain the values, look at the OpenID Connect discovery metadata for each of
Perform these steps for each Azure AD tenant that should be used to sign in:
-1. Open your browser and go to the OpenID Connect metadata URL for the tenant. Find the **issuer** object and record its value. It should look similar to `https://login.microsoftonline.com/00000000-0000-0000-0000-000000000000/.well-known/openid-configuration`.
+1. Open your browser and go to the OpenID Connect metadata URL for the tenant. Find the `issuer` object and record its value. It should look similar to `https://login.microsoftonline.com/00000000-0000-0000-0000-000000000000/v2.0`.
1. Copy and paste the value into the **ValidTokenIssuerPrefixes** key. Separate multiple issuers with a comma. An example with two issuers appears in the previous `ClaimsProvider` XML sample. [!INCLUDE [active-directory-b2c-add-identity-provider-to-user-journey](../../includes/active-directory-b2c-add-identity-provider-to-user-journey.md)]
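As a sketch of what the result could look like (the tenant GUIDs are placeholders, and the technical profile ID is hypothetical), the issuer values recorded in the previous steps go into the **ValidTokenIssuerPrefixes** metadata item, comma-separated:

```xml
<TechnicalProfile Id="AADCommon-OpenIdConnect">
  <Metadata>
    <!-- Hypothetical sketch: replace the GUIDs with the issuer values recorded
         from each tenant's OpenID Connect metadata document. -->
    <Item Key="ValidTokenIssuerPrefixes">https://login.microsoftonline.com/11111111-1111-1111-1111-111111111111/v2.0,https://login.microsoftonline.com/22222222-2222-2222-2222-222222222222/v2.0</Item>
  </Metadata>
</TechnicalProfile>
```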
active-directory-b2c Oauth2 Error Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/oauth2-error-technical-profile.md
Previously updated : 01/25/2022 Last updated : 02/25/2022

# Define an OAuth2 custom error technical profile in an Azure Active Directory B2C custom policy
-This article describes how to handle an OAuth2 custom error with Azure Active Directory B2C (Azure AD B2C). Use this technical profile if something goes wrong with the logic in your policy. The technical profile returns an error to your OAuth2 or OpenId Connect relying party application.
+This article describes how to handle an OAuth2 custom error with Azure Active Directory B2C (Azure AD B2C). Use this technical profile if something goes wrong with the logic in your policy. The technical profile returns an error to your OAuth2 or OpenId Connect relying party application. Check out the [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/technical-profiles/oauth2-error) of the OAuth2 custom error technical profile.
To handle a custom OAuth2 error message:
The CryptographicKeys element contains the following key:
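In XML, that key is typically wired up like the following sketch (the storage container name `B2C_1A_TokenSigningKeyContainer` is the conventional default and may differ in your tenant):

```xml
<CryptographicKeys>
  <!-- Hedged sketch: references the token signing key container, conventionally
       named B2C_1A_TokenSigningKeyContainer. -->
  <Key Id="issuer_secret" StorageReferenceId="B2C_1A_TokenSigningKeyContainer" />
</CryptographicKeys>
```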
## Invoke the technical profile
-You can call the OAuth2 error technical profile from a user journey, or sub journey. Set the [orchestration step](userjourneys.md#orchestrationsteps) type to `SendClaims` with a reference to your OAuth2 error technical profile.
+You can call the OAuth2 error technical profile from a [user journey](userjourneys.md), or [sub journey](subjourneys.md) (type of `transfer`). Set the [orchestration step](userjourneys.md#orchestrationsteps) type to `SendClaims` with a reference to your OAuth2 error technical profile.
If your user journey or sub journey already has another `SendClaims` orchestration step, set the `DefaultCpimIssuerTechnicalProfileReferenceId` attribute to the token issuer technical profile.
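The invocation described above can be sketched as an orchestration step (the step order and the technical profile ID `OAuth2ErrorTechnicalProfile` are placeholders, not names from the source):

```xml
<OrchestrationSteps>
  <!-- ...preceding steps of the user journey or sub journey... -->
  <!-- Hypothetical sketch: step order and profile ID are placeholders. -->
  <OrchestrationStep Order="3" Type="SendClaims"
      CpimIssuerTechnicalProfileReferenceId="OAuth2ErrorTechnicalProfile" />
</OrchestrationSteps>
```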
active-directory-b2c Session Behavior https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/session-behavior.md
Previously updated : 11/30/2021 Last updated : 02/25/2022
You can configure the Azure AD B2C session behavior, including:
- **Application** - This setting allows you to maintain a user session exclusively for an application, independent of other applications. For example, you can use this setting if you want the user to sign in to Contoso Pharmacy regardless of whether the user is already signed into Contoso Groceries.
- **Policy** - This setting allows you to maintain a user session exclusively for a user flow, independent of the applications using it. For example, if the user has already signed in and completed a multi-factor authentication (MFA) step, the user can be given access to higher-security parts of multiple applications, as long as the session tied to the user flow doesn't expire.
- **Suppressed** - This setting forces the user to run through the entire user flow upon every execution of the policy.
- **Keep me signed in (KMSI)** - Extends the session lifetime through the use of a persistent cookie. If this feature is enabled and the user selects it, the session remains active even after the user closes and reopens the browser. The session is revoked only when the user signs out. The KMSI feature only applies to sign-in with local accounts. The KMSI feature takes precedence over the session lifetime.

::: zone pivot="b2c-user-flow"
-To configure the session behavior:
+### Configure the user flow
+
+To configure the session behavior in your user flow, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com).
1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
To configure the session behavior:
::: zone pivot="b2c-custom-policy"
-To change your session behavior and SSO configurations, you add a **UserJourneyBehaviors** element inside of the [RelyingParty](relyingparty.md) element. The **UserJourneyBehaviors** element must immediately follow the **DefaultUserJourney**. Your **UserJourneyBehavors** element should look like this example:
+### Configure the custom policy
+
+To configure the session behavior in your custom policy, follow these steps:
+
+1. Open the relying party (RP) file, for example *SignUpOrSignin.xml*.
+1. If it doesn't already exist, add the following `<UserJourneyBehaviors>` element to the `<RelyingParty>` element. It must be located immediately after `<DefaultUserJourney ReferenceId="UserJourney Id"/>`.
+
+ ```xml
+ <UserJourneyBehaviors>
+ <SingleSignOn Scope="Application" />
+ <SessionExpiryType>Absolute</SessionExpiryType>
+ <SessionExpiryInSeconds>86400</SessionExpiryInSeconds>
+ </UserJourneyBehaviors>
+ ```
+
+ After you add the user journey behavior elements, the `RelyingParty` element should look like the following example:
+
+ ```xml
+ <RelyingParty>
+ <DefaultUserJourney ReferenceId="SignUpOrSignIn" />
+ <UserJourneyBehaviors>
+ <SingleSignOn Scope="Application" />
+ <SessionExpiryType>Absolute</SessionExpiryType>
+ <SessionExpiryInSeconds>86400</SessionExpiryInSeconds>
+ </UserJourneyBehaviors>
+ <TechnicalProfile Id="PolicyProfile">
+ <DisplayName>PolicyProfile</DisplayName>
+ <Protocol Name="OpenIdConnect" />
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="displayName" />
+ <OutputClaim ClaimTypeReferenceId="givenName" />
+ ...
+ </OutputClaims>
+ <SubjectNamingInfo ClaimType="sub" />
+ </TechnicalProfile>
+ </RelyingParty>
+ ```
+
+
+1. Change the value of the `Scope` attribute to one of the possible values: `Suppressed`, `Tenant`, `Application`, or `Policy`. For more information, check out the [RelyingParty](relyingparty.md) reference article.
+1. Set the `SessionExpiryType` element to `Rolling` or `Absolute`. For more information, check out the [RelyingParty](relyingparty.md) reference article.
+1. Set the `SessionExpiryInSeconds` element to a numeric value between 900 seconds (15 minutes) and 86,400 seconds (24 hours). For more information, check out the [RelyingParty](relyingparty.md) reference article.
-```xml
-<UserJourneyBehaviors>
- <SingleSignOn Scope="Application" />
- <SessionExpiryType>Absolute</SessionExpiryType>
- <SessionExpiryInSeconds>86400</SessionExpiryInSeconds>
-</UserJourneyBehaviors>
-```
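Combining the values described in the steps above, a per-policy, rolling session could be sketched like this (the 900-second lifetime is an illustrative choice, not a recommendation from the source):

```xml
<UserJourneyBehaviors>
  <!-- Illustrative values only: a per-policy SSO scope with a rolling
       15-minute session window. -->
  <SingleSignOn Scope="Policy" />
  <SessionExpiryType>Rolling</SessionExpiryType>
  <SessionExpiryInSeconds>900</SessionExpiryInSeconds>
</UserJourneyBehaviors>
```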
::: zone-end

## Enable Keep me signed in (KMSI)
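In a custom policy, KMSI is typically configured on the same `UserJourneyBehaviors` element. As a hedged sketch (the `KeepAliveInDays` attribute and the values shown are assumptions, not taken from this article):

```xml
<UserJourneyBehaviors>
  <!-- Hedged sketch: assumes the KeepAliveInDays attribute; values are illustrative. -->
  <SingleSignOn Scope="Tenant" KeepAliveInDays="30" />
  <SessionExpiryType>Absolute</SessionExpiryType>
  <SessionExpiryInSeconds>1200</SessionExpiryInSeconds>
</UserJourneyBehaviors>
```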
active-directory-domain-services Tutorial Create Replica Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-replica-set.md
To delete a replica set, complete the following steps:
1. Choose your managed domain, such as *aaddscontoso.com*.
1. On the left-hand side, select **Replica sets**. From the list of replica sets, select the **...** context menu next to the replica set you want to delete.
1. Select **Delete** from the context menu, then confirm you want to delete the replica set.
+1. On the Azure AD DS management VM, open the DNS console and manually delete the DNS records for the domain controllers from the deleted replica set.
> [!NOTE]
> Replica set deletion may be a time-consuming operation.
active-directory Cloudknox Multi Cloud Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-multi-cloud-glossary.md
This glossary provides a list of some of the commonly used cloud terms in CloudK
|--|--|
| ACL | Access control list. A list of files or resources that contain information about which users or groups have permission to access those resources or modify those files. |
| ARN | Azure Resource Notification |
-| ASIM | Azure Sentinel Information Model |
+| Authorization System | CIEM supports AWS accounts, Azure subscriptions, and GCP projects as authorization systems. |
+| Authorization System Type | Any system that provides authorizations by assigning permissions to identities and resources. CIEM supports AWS, Azure, and GCP as authorization system types. |
| Cloud security | A form of cybersecurity that protects data stored online on cloud computing platforms from theft, leakage, and deletion. Includes firewalls, penetration testing, obfuscation, tokenization, virtual private networks (VPN), and avoiding public internet connections. |
-| CASB | Cloud Access Security Broker. Products and services that address security gaps in an organization's use of cloud services. Designed to protect and control access to data that's stored in someone else's systems. Deliver differentiated, cloud-specific capabilities that may not be available as features in traditional security products. They provide a central location for policy and governance concurrently across multiple cloud services. They also provide granular visibility into and control over user activities and sensitive data from both inside and outside the enterprise perimeter, including cloud-to-cloud access. |
| Cloud storage | A service model in which data is maintained, managed, and backed up remotely. Available to users over a network. |
| CIAM | Cloud Infrastructure Access Management |
| CIEM | Cloud Infrastructure Entitlement Management. The next generation of solutions for enforcing least privilege in the cloud. It addresses cloud-native security challenges of managing identity access management in cloud environments. |
This glossary provides a list of some of the commonly used cloud terms in CloudK
| CNAPP | Cloud-Native Application Protection. The convergence of cloud security posture management (CSPM), cloud workload protection (CWP), cloud infrastructure entitlement management (CIEM), and cloud applications security broker (CASB). An integrated security approach that covers the entire lifecycle of cloud-native applications. |
| CSPM | Cloud Security Posture Management. Addresses risks of compliance violations and misconfigurations in enterprise cloud environments. Also focuses on the resource level to identify deviations from best practice security settings for cloud governance and compliance. |
| CWPP | Cloud Workload Protection Platform |
-| DRI | Data risk index. A comprehensive, integrated representation of data risk. |
-| Data risk management | The process an organization uses when acquiring, storing, transforming, and using its data, from creation to retirement, to eliminate data risk. |
+| Data Collector | Virtual entity that stores the data collection configuration. |
| Delete task | A high-risk task that allows users to permanently delete a resource. |
+| ED | Enterprise directory |
| Entitlement | An abstract attribute that represents different forms of user permissions in a range of infrastructure systems and business applications. |
| Entitlement management | Technology that grants, resolves, enforces, revokes, and administers fine-grained access entitlements (that is, authorizations, privileges, access rights, permissions and rules). Its purpose is to execute IT access policies to structured/unstructured data, devices, and services. It can be delivered by different technologies, and is often different across platforms, applications, network components, and devices. |
| High-risk task | A task in which a user can cause data leakage, service disruption, or service degradation. |
This glossary provides a list of some of the commonly used cloud terms in CloudK
| Identity lifecycle management | Maintain digital identities, their relationships with the organization, and their attributes during the entire process from creation to eventual archiving, using one or more identity life cycle patterns. |
| IGA | Identity governance and administration. Technology solutions that conduct identity management and access governance operations. IGA includes the tools, technologies, reports, and compliance activities required for identity lifecycle management. It includes every operation from account creation and termination to user provisioning, access certification, and enterprise password management. It looks at automated workflow and data from authoritative sources capabilities, self-service user provisioning, IT governance, and password management. |
| ITSM | Information Technology Security Management. Tools that enable IT operations organizations (infrastructure and operations managers) to better support the production environment. Facilitate the tasks and workflows associated with the management and delivery of quality IT services. |
+| JEP | Just Enough Permissions |
| JIT | Just in Time access can be seen as a way to enforce the principle of least privilege to ensure users and non-human identities are given the minimum level of privileges. It also ensures that privileged activities are conducted in accordance with an organization's Identity Access Management (IAM), IT Service Management (ITSM), and Privileged Access Management (PAM) policies, with its entitlements and workflows. JIT access strategy enables organizations to maintain a full audit trail of privileged activities so they can easily identify who or what gained access to which systems, what they did at what time, and for how long. |
| Least privilege | Ensures that users only gain access to the specific tools they need to complete a task. |
| Multi-tenant | A single instance of the software and its supporting infrastructure serves multiple customers. Each customer shares the software application and also shares a single database. |
This glossary provides a list of some of the commonly used cloud terms in CloudK
| Resource | Any entity that uses compute capabilities can be accessed by users and services to perform actions. |
| Role | An IAM identity that has specific permissions. Instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. A role doesn't have standard long-term credentials such as a password or access keys associated with it. |
| SCIM | System for Cross-domain Identity Management |
-| SCIΓÇôM | Security Compliance Identity and Management |
| SIEM | Security Information and Event Management. Technology that supports threat detection, compliance and security incident management through the collection and analysis (both near real time and historical) of security events, as well as a wide variety of other event and contextual data sources. The core capabilities are a broad scope of log event collection and management, the ability to analyze log events and other data across disparate sources, and operational capabilities (such as incident management, dashboards, and reporting). |
| SOAR | Security orchestration, automation and response (SOAR). Technologies that enable organizations to take inputs from various sources (mostly from security information and event management [SIEM] systems) and apply workflows aligned to processes and procedures. These workflows can be orchestrated via integrations with other technologies and automated to achieve the desired outcome and greater visibility. Other capabilities include case and incident management features; the ability to manage threat intelligence, dashboards and reporting; and analytics that can be applied across various functions. SOAR tools significantly enhance security operations activities like threat detection and response by providing machine-powered assistance to human analysts to improve the efficiency and consistency of people and processes. |
| Super user / Super identity | A powerful account used by IT system administrators that can be used to make configurations to a system or application, add or remove users, or delete data. |
This glossary provides a list of some of the commonly used cloud terms in CloudK
## Next steps

-- For an overview of CloudKnox, see [What's CloudKnox Permissions Management?](cloudknox-overview.md).
+- For an overview of CloudKnox, see [What's CloudKnox Permissions Management?](cloudknox-overview.md).
active-directory Cloudknox Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-aws.md
Previously updated : 02/23/2022 Last updated : 02/24/2022
This article describes how to onboard an Amazon Web Services (AWS) account on Cl
> [!NOTE]
> A *global administrator* or *super admin* (an admin for all authorization system types) can perform the tasks in this article after the global administrator has initially completed the steps provided in [Enable CloudKnox on your Azure Active Directory tenant](cloudknox-onboard-enable-tenant.md).
+## Prerequisites
+
+- To enable the CloudKnox **Feature highlights** tile in the Azure AD portal, [select this link to run the script in your browser](https://aka.ms/ciem-prod).
+- To use the CloudKnox public preview, we encourage you to fill out a consent form that provides other terms and conditions for the public preview product. To open the form, select [CloudKnox Permissions Management Public Preview: Terms and Conditions](https://aka.ms/ciem-terms).
+
## Onboard an AWS account

1. If the **Data Collectors** dashboard isn't displayed when CloudKnox launches:
active-directory Cloudknox Onboard Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-azure.md
Previously updated : 02/23/2022 Last updated : 02/24/2022
This article describes how to onboard a Microsoft Azure subscription or subscrip
To add CloudKnox to your Azure AD tenant:

- You must have an Azure AD user account and an Azure command-line interface (Azure CLI) on your system, or an Azure subscription. If you don't already have one, [create a free account](https://azure.microsoft.com/free/).
- You must have **Microsoft.Authorization/roleAssignments/write** permission at the subscription or management group scope to perform these tasks. If you don't have this permission, you can ask someone who has this permission to perform these tasks for you.
+- To enable the CloudKnox **Feature highlights** tile in the Azure AD portal, [select this link to run the script in your browser](https://aka.ms/ciem-prod).
+- To use the CloudKnox public preview, we encourage you to fill out a consent form that provides other terms and conditions for the public preview product. To open the form, select [CloudKnox Permissions Management Public Preview: Terms and Conditions](https://aka.ms/ciem-terms).
-## Onboard an Azure subscription
+## View a training video on enabling CloudKnox
+
+To view a video on how to enable CloudKnox in your Azure AD tenant, select
+[Enable CloudKnox in your Azure AD tenant](https://www.youtube.com/watch?v=-fkfeZyevoo).
+
+## How to onboard an Azure subscription
1. If the **Data Collectors** dashboard isn't displayed when CloudKnox launches:
active-directory Cloudknox Onboard Enable Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-enable-tenant.md
Previously updated : 02/23/2022 Last updated : 02/24/2022
This article describes how to enable CloudKnox Permissions Management (CloudKnox
## Prerequisites
-To enable CloudKnox in your organization, you must:
+To enable CloudKnox in your organization:
-- Have an Azure AD tenant. If you don't already have one, [create a free account](https://azure.microsoft.com/free/).
-- Be eligible for or have an active assignment to the global administrator role as a user in that tenant.
+- You must have an Azure AD tenant. If you don't already have one, [create a free account](https://azure.microsoft.com/free/).
+- You must be eligible for or have an active assignment to the global administrator role as a user in that tenant.
+- To enable the CloudKnox **Feature highlights** tile in the Azure AD portal, [select this link to run the script in your browser](https://aka.ms/ciem-prod).
+- To use the CloudKnox public preview, we encourage you to fill out a consent form that provides other terms and conditions for the public preview product. To open the form, select [CloudKnox Permissions Management Public Preview: Terms and Conditions](https://aka.ms/ciem-terms).
> [!NOTE]
> During public preview, CloudKnox doesn't perform a license check.
-## Enable CloudKnox on your Azure AD tenant
+## View a training video on enabling CloudKnox
+
+To view a video on how to enable CloudKnox in your Azure AD tenant, select
+[Enable CloudKnox in your Azure AD tenant](https://www.youtube.com/watch?v=-fkfeZyevoo).
+
+## How to enable CloudKnox on your Azure AD tenant
1. In your browser:
    1. Go to [Azure services](https://portal.azure.com) and use your credentials to sign in to [Azure Active Directory](https://ms.portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview).
active-directory Cloudknox Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-gcp.md
Previously updated : 02/23/2022 Last updated : 02/24/2022
This article describes how to onboard a Google Cloud Platform (GCP) project on C
> [!NOTE]
> A *global administrator* or *super admin* (an admin for all authorization system types) can perform the tasks in this article after the global administrator has initially completed the steps provided in [Enable CloudKnox on your Azure Active Directory tenant](cloudknox-onboard-enable-tenant.md).
+## Prerequisites
+
+- To enable the CloudKnox **Feature highlights** tile in the Azure AD portal, [select this link to run the script in your browser](https://aka.ms/ciem-prod).
+- To use the CloudKnox public preview, we encourage you to fill out a consent form that provides other terms and conditions for the public preview product. To open the form, select [CloudKnox Permissions Management Public Preview: Terms and Conditions](https://aka.ms/ciem-terms).
+
## Onboard a GCP project

1. If the **Data Collectors** dashboard isn't displayed when CloudKnox launches:
active-directory Cloudknox Training Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-training-videos.md
Title: Microsoft CloudKnox Permissions Management training videos
-description: Microsoft CloudKnox Permissions Management training videos.
+ Title: CloudKnox Permissions Management training videos
+description: CloudKnox Permissions Management training videos.
Previously updated : 12/27/2021 Last updated : 02/24/2022
-# Microsoft CloudKnox Permissions Management training videos
+# CloudKnox Permissions Management training videos
-To view step-by-step training videos on how to use CloudKnox features, select a link below.
+To view step-by-step training videos on how to use CloudKnox Permissions Management (CloudKnox) features, select a link below.
-## Privilege on demand (POD) work flows
+## Enable CloudKnox in your Azure Active Directory (Azure AD) tenant
+
+To view a video on how to enable CloudKnox in your Azure AD tenant, select
+[Enable CloudKnox in your Azure AD tenant](https://www.youtube.com/watch?v=-fkfeZyevoo).
+
+<!## Privilege on demand (POD) work flows
- View a step-by-step video on the [privilege on demand (POD) work flow from the Just Enough Permissions (JEP) Controller](https://vimeo.com/461508166/3d88107f41).
To view step-by-step training videos on how to use CloudKnox features, select a
## Group permissions -- View a step-by-step video on [how to create group-based permissions](https://vimeo.com/462797947/d041de9157).
+- View a step-by-step video on [how to create group-based permissions](https://vimeo.com/462797947/d041de9157).>
<!## Next steps>
active-directory Resilience Defaults https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/resilience-defaults.md
Previously updated : 09/13/2021 Last updated : 02/25/2022
-# Conditional Access: Resilience defaults (Preview)
+# Conditional Access: Resilience defaults
If there's an outage of the primary authentication service, the Azure Active Directory (Azure AD) Backup Authentication Service may automatically issue access tokens to applications for existing sessions. This functionality may significantly increase Azure AD resilience, because reauthentications for existing sessions account for more than 90% of authentications to Azure AD. The Backup Authentication Service doesn't support new sessions or authentications by guest users.
For authentications protected by Conditional Access, policies are reevaluated be
1. Which Conditional Access policies apply?
1. For policies that do apply, were the required controls satisfied?
-During an outage, not all conditions can be evaluated in real time by the Backup Authentication Service to determine whether a Conditional Access policy should apply. Conditional Access resilience defaults are a new session control that lets admins decide whether to block authentications during an outage whenever a policy condition cannot be evaluated in real-time or allow policies to be evaluated using data collected at the beginning of the user's session.
+During an outage, not all conditions can be evaluated in real time by the Backup Authentication Service to determine whether a Conditional Access policy should apply. Conditional Access resilience defaults are a new session control that lets admins decide between:
+
+- Blocking authentications during an outage whenever a policy condition can't be evaluated in real time.
+- Allowing policies to be evaluated using data collected at the beginning of the user's session.
> [!IMPORTANT]
> Resilience defaults are automatically enabled for all new and existing policies, and Microsoft highly recommends leaving the resilience defaults enabled to mitigate the impact of an outage. Admins may disable resilience defaults for individual Conditional Access policies.
When resilience defaults are enabled, the Backup Authentication Service may use
## Resilience defaults disabled
-When resilience defaults are disabled, the Backup Authentication Service won't use data collected at the beginning of the session to evaluate conditions. During an outage, if a policy condition cannot be evaluated in real-time, access will be denied.
+When resilience defaults are disabled, the Backup Authentication Service won't use data collected at the beginning of the session to evaluate conditions. During an outage, if a policy condition can't be evaluated in real-time, access will be denied.
**Example**: A policy with resilience defaults disabled requires all global admins accessing the Azure portal to do MFA. Before an outage, if a user who isn't a global admin accesses the Azure portal, the policy wouldn't apply, and the user would be granted access without being prompted for MFA. During an outage, the Backup Authentication Service would reevaluate the policy to determine whether the user should be prompted for MFA. **Since the Backup Authentication Service cannot evaluate role membership in real-time, it would block the user from accessing the Azure portal.**
When resilience defaults are disabled, the Backup Authentication Service won't u
## Testing resilience defaults
-It isn't possible to conduct a dry run using the Backup Authentication Service or simulate the result of a policy with resilience defaults enabled or disabled at this time. Azure AD will conduct monthly exercises using the Backup Authentication Service and the sign-in logs will display if the Backup Authentication Service was used to issue the access token.
+It isn't possible to conduct a dry run using the Backup Authentication Service or simulate the result of a policy with resilience defaults enabled or disabled at this time. Azure AD will conduct monthly exercises using the Backup Authentication Service. The sign-in logs will display if the Backup Authentication Service was used to issue the access token.
## Configuring resilience defaults
active-directory Mobile App Quickstart Portal Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mobile-app-quickstart-portal-android.md
Applications must be represented by an app object in Azure Active Directory so t
### Step 1: Configure your application in the Azure portal

For the code sample in this quickstart to work, add a **Redirect URI** compatible with the Auth broker.

> [!div id="makechanges" class="nextstepaction" class="configure-app-button"]
-> [Make these changes for me]()
+> <button>Make this change for me</button>
> [!div id="appconfigured" class="alert alert-info"]
> ![Already configured](media/quickstart-v2-android/green-check.png) Your application is configured with these attributes
active-directory Mobile App Quickstart Portal Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mobile-app-quickstart-portal-ios.md
The quickstart applies to both iOS and macOS apps. Some steps are needed only fo
![Shows how the sample app generated by this quickstart works](media/quickstart-v2-ios/ios-intro.svg)

#### Step 1: Configure your application
-For the code sample for this quickstart to work, add a **Redirect URI** compatible with the Auth broker.
+For the code sample in this quickstart to work, add a **Redirect URI** compatible with the Auth broker.
> [!div id="makechanges" class="nextstepaction" class="configure-app-button"]
-> [Make this change for me]()
+> <button>Make this change for me</button>
> [!div id="appconfigured" class="alert alert-info"]
> ![Already configured](media/quickstart-v2-ios/green-check.png) Your application is configured with these attributes
active-directory Scenario Web App Sign User App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-app-configuration.md
In the code above:
You can find more details about how Microsoft.Identity.Web enables you to create web apps at <https://aka.ms/ms-id-web/webapp>.
-> [!WARNING]
-> Currently, Microsoft.Identity.Web does not support the scenario of **Individual User Accounts** (storing user accounts in-app) when using Azure AD as and external login provider. For details, see: [AzureAD/microsoft-identity-web#133](https://github.com/AzureAD/microsoft-identity-web/issues/133)
-
# [ASP.NET](#tab/aspnet)

The code related to authentication in an ASP.NET web app and web APIs is located in the [App_Start/Startup.Auth.cs](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect/blob/a2da310539aa613b77da1f9e1c17585311ab22b7/WebApp/App_Start/Startup.Auth.cs#L17-L61) file.
active-directory Security Best Practices For App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/security-best-practices-for-app-registration.md
It's important to keep Redirect URIs of your application up to date. A lapse in
## Implicit flow token configuration
-Scenarios that require **implicit flow** can now use **Auth code flow** to reduce the risk of compromise associated with implicit grant flow misuse. If you configured your application registration to get Access tokens using implicit flow, but don't actively use it, we recommend you turn off the setting to protect from misuse.
+Scenarios that required **implicit flow** can now use **Auth code flow** to reduce the risk of compromise associated with implicit grant flow misuse. If you configured your application registration to get Access tokens using implicit flow, but don't actively use it, we recommend you turn off the setting to protect from misuse.
![access tokens used for implicit flows](media/active-directory-application-registration-best-practices/implict-grant-flow.png)
active-directory Enterprise State Roaming Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/enterprise-state-roaming-enable.md
Last updated 02/15/2022
-+ # Enable Enterprise State Roaming in Azure Active Directory
-Enterprise State Roaming is available to any organization with an Azure AD Premium or Enterprise Mobility + Security
-(EMS) license. For more information on how to get an Azure AD subscription, see the [Azure AD product page](https://azure.microsoft.com/services/active-directory).
+Enterprise State Roaming is available to any organization with an Azure AD Premium or Enterprise Mobility + Security (EMS) license. For more information on how to get an Azure AD subscription, see the [Azure AD product page](https://azure.microsoft.com/services/active-directory).
When you enable Enterprise State Roaming, your organization is automatically granted a free, limited-use license for Azure Rights Management protection from Azure Information Protection. This free subscription is limited to encrypting and decrypting enterprise settings and application data synced by Enterprise State Roaming. You must have [a paid subscription](https://azure.microsoft.com/services/information-protection/) to use the full capabilities of the Azure Rights Management service.
When you enable Enterprise State Roaming, your organization is automatically gra
## To enable Enterprise State Roaming
-1. Sign in to [Azure AD admin center](https://aad.portal.azure.com/).
-1. Select **Azure Active Directory** > **Devices** > **Enterprise State Roaming**.
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Browse to **Azure Active Directory** > **Devices** > **Enterprise State Roaming**.
1. Select **Users may sync settings and app data across devices**. For more information, see [how to configure device settings](./device-management-azure-portal.md).

   ![image of device setting labeled Users may sync settings and app data across devices](./media/enterprise-state-roaming-enable/device-settings.png)
-For a Windows 10 or newer device to use the Enterprise State Roaming service, the device must authenticate using an Azure AD identity. For devices that are joined to Azure AD, the userΓÇÖs primary sign-in identity is their Azure AD identity, so no additional configuration is required. For devices that use on-premises Active Directory, the IT admin must [Configure hybrid Azure Active Directory joined devices](./hybrid-azuread-join-plan.md).
+For a Windows 10 or newer device to use the Enterprise State Roaming service, the device must authenticate using an Azure AD identity. For devices that are joined to Azure AD, the userΓÇÖs primary sign-in identity is their Azure AD identity, so no other configuration is required. For devices that use on-premises Active Directory, the IT admin must [Configure hybrid Azure Active Directory joined devices](./hybrid-azuread-join-plan.md).
## Data storage
-Enterprise State Roaming data is hosted in one or more [Azure regions](https://azure.microsoft.com/regions/) that best align with the country/region value set in the Azure Active Directory instance. Enterprise State Roaming data is partitioned based on three major geographic regions: North America, EMEA, and APAC. Enterprise State Roaming data for the tenant is locally located with the geographical region, and is not replicated across regions. For example:
+Enterprise State Roaming data is hosted in one or more [Azure regions](https://azure.microsoft.com/regions/) that best align with the country/region value set in the Azure Active Directory instance. Enterprise State Roaming data is partitioned based on three major geographic regions: North America, EMEA, and APAC. Enterprise State Roaming data for the tenant is locally located with the geographical region, and isn't replicated across regions. For example:
| Country/region value | has their data hosted in |
| -- | |
Enterprise State Roaming data is hosted in one or more [Azure regions](https://a
| An APAC country/region such as Australia or New Zealand | One or more of the Azure regions within Asia |
| South American and Antarctica regions | One or more Azure regions within the US |
-The country/region value is set as part of the Azure AD directory creation process and cannot be subsequently modified. If you need more details on your data storage location, file a ticket with [Azure support](https://azure.microsoft.com/support/options/).
+The country/region value is set as part of the Azure AD directory creation process and canΓÇÖt be modified later. If you need more details on your data storage location, file a ticket with [Azure support](https://azure.microsoft.com/support/options/).
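The partitioning rule above can be sketched as a simple lookup. Everything in this sketch (the country lists, the function name, the fallback) is a hypothetical illustration of the behavior described in the table, not the service's actual routing logic:

```python
# Illustrative sketch only: country lists and names are assumptions,
# not the Enterprise State Roaming service's real routing table.
GEO_PARTITIONS = {
    "North America": ["United States", "Canada", "Mexico"],
    "EMEA": ["France", "Germany", "South Africa"],
    "APAC": ["Australia", "New Zealand", "Japan"],
}

def geo_partition(country_region: str) -> str:
    """Return the geographic partition a tenant's roaming data would live in.

    Per the table above, data stays within one partition and is not
    replicated across regions; South America and Antarctica map to
    Azure regions within the US.
    """
    for partition, countries in GEO_PARTITIONS.items():
        if country_region in countries:
            return partition
    # Regions outside the three major partitions (e.g. Brazil) are
    # hosted in US Azure regions, i.e. the North America partition.
    return "North America"

print(geo_partition("Australia"))  # APAC
print(geo_partition("Brazil"))     # North America
```

Because the country/region value is fixed at directory creation, this lookup would be evaluated once per tenant, not per sync.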
## View per-user device sync status
Follow these steps to view a per-user device sync status report.
## Data retention
-Data synced to the Microsoft cloud using Enterprise State Roaming is retained until it is manually deleted or until the data in question is determined to be stale.
+Data synced to the Microsoft cloud using Enterprise State Roaming is retained until it's manually deleted or until the data is determined to be stale.
### Explicit deletion
-Explicit deletion is when an Azure admin deletes a user or a directory or otherwise requests explicitly that data is to be deleted.
+Explicit deletion is when an administrator deletes a user or directory, or otherwise explicitly requests that data be deleted.
* **User deletion**: When a user is deleted in Azure AD, the user account roaming data is deleted after 90 to 180 days.
* **Directory deletion**: Deleting an entire directory in Azure AD is an immediate operation. All the settings data associated with that directory is deleted after 90 to 180 days.
Explicit deletion is when an Azure admin deletes a user or a directory or otherw
### Stale data deletion
-Data that has not been accessed for one year (ΓÇ£the retention periodΓÇ¥) will be treated as stale and may be deleted from the Microsoft cloud. The retention period is subject to change but will not be less than 90 days. The stale data may be a specific set of Windows/application settings or all settings for a user. For example:
+Data that hasn't been accessed for one year (ΓÇ£the retention periodΓÇ¥) will be treated as stale and may be deleted from the Microsoft cloud. The retention period is subject to change but won't be less than 90 days. The stale data may be a specific set of Windows/application settings or all settings for a user. For example:
-* If no devices access a particular settings collection (for example, an application is removed from the device, or a settings group such as ΓÇ£ThemeΓÇ¥ is disabled for all of a userΓÇÖs devices), then that collection becomes stale after the retention period and may be deleted.
-* If a user has turned off settings sync on all their devices, then none of the settings data will be accessed, and all the settings data for that user will become stale and may be deleted after the retention period.
-* If the Azure AD directory admin turns off Enterprise State Roaming for the entire directory, then all users in that directory will stop syncing settings, and all settings data for all users will become stale and may be deleted after the retention period.
+* If no devices access a particular settings collection like language, then that collection becomes stale after the retention period and may be deleted.
+* If a user has turned off settings sync on all their devices, then none of the settings data will be accessed. All the settings data for that user will become stale and may be deleted after the retention period.
+* If the Azure AD directory admin turns off Enterprise State Roaming for the entire directory, then all users in that directory will stop syncing settings. All settings data for all users will become stale and may be deleted after the retention period.
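The stale-data rule above reduces to a date comparison. A minimal sketch, assuming a one-year retention period as stated (the function and constant names are hypothetical, not part of any Microsoft API):

```python
from datetime import datetime, timedelta

# Assumptions made explicit: the current retention period is one year,
# and the docs guarantee it will never drop below 90 days.
RETENTION_PERIOD = timedelta(days=365)
MINIMUM_RETENTION = timedelta(days=90)

def is_stale(last_accessed: datetime, now: datetime) -> bool:
    """A settings collection unread for the full retention period is stale
    and becomes eligible for deletion from the Microsoft cloud."""
    return now - last_accessed >= RETENTION_PERIOD

# Any future change to RETENTION_PERIOD must respect the documented floor.
assert RETENTION_PERIOD >= MINIMUM_RETENTION

print(is_stale(datetime(2021, 1, 1), datetime(2022, 2, 25)))  # True
print(is_stale(datetime(2022, 1, 1), datetime(2022, 2, 25)))  # False
```

Note that staleness is evaluated per settings collection, so one stale collection (for example, a removed application's data) can be deleted while the user's other settings keep syncing.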
### Deleted data recovery
-The data retention policy is not configurable. Once the data is permanently deleted, it is not recoverable. However, The settings data is deleted only from the Microsoft cloud, not from the end-user device. If any device later reconnects to the Enterprise State Roaming service, the settings are again synced and stored in the Microsoft cloud.
+The data retention policy isn't configurable. Once the data is permanently deleted, it isn't recoverable. However, the settings data is deleted only from the Microsoft cloud, not from the end-user device. If any device later reconnects to the Enterprise State Roaming service, the settings are again synced and stored in the Microsoft cloud.
## Next steps
active-directory Enterprise State Roaming Group Policy Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/enterprise-state-roaming-group-policy-settings.md
Last updated 02/15/2022
-+ # Group Policy and MDM settings
-Use these group policy and mobile device management (MDM) settings only on corporate-owned devices because these policies are applied to the userΓÇÖs entire device. Applying an MDM policy to disable settings sync for a personal, user-owned device will negatively impact the use of that device. Additionally, other user accounts on the device will also be affected by the policy.
+Use these Group Policy and mobile device management (MDM) settings only on corporate-owned devices because these policies are applied to the userΓÇÖs entire device. Applying an MDM policy to disable settings sync for a personal, user-owned device will negatively impact the use of that device. Additionally, other user accounts on the device will also be affected by the policy.
Enterprises that want to manage roaming for personal (unmanaged) devices can use the Azure portal to enable or disable roaming, rather than using Group Policy or MDM. The following tables describe the policy settings available.
The MDM policy settings apply to Windows 10 or newer. Refer to [Devices and endp
| Allow Microsoft Account Connection |Allows users to authenticate using a Microsoft account on the device |
| Allow Sync My Settings |Allows users to roam Windows settings and app data; Disabling this policy will disable sync as well as backups on mobile devices |
-## Group policy settings
+## Group Policy settings
-The group policy settings apply to Windows 10 or newer devices that are joined to an Active Directory domain. The table also includes legacy settings that would appear to manage sync settings, but that do not work for Enterprise State Roaming for Windows 10 or newer, which are noted with ΓÇÿDo not useΓÇÖ in the description.
+The Group Policy settings apply to Windows 10 or newer devices that are joined to an Active Directory domain. The table also includes legacy settings that would appear to manage sync settings. Legacy settings that don't work for Enterprise State Roaming for Windows 10 or newer are noted with ΓÇÿDo not useΓÇÖ in the description.
-These settings are located at: `Computer Configuration > Administrative Templates > Windows Components > Sync your settings`
+These settings are located in Group Policy under: **Computer Configuration** > **Administrative Templates** > **Windows Components** > **Sync your settings**.
| Name | Description |
| --- | --- |
active-directory Enterprise State Roaming Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/enterprise-state-roaming-overview.md
Last updated 02/15/2022
-+
With Windows 10 or newer, [Azure Active Directory (Azure AD)](../fundamentals/active-directory-whatis.md) users gain the ability to securely synchronize their user settings and application settings data to the cloud. Enterprise State Roaming provides users with a unified experience across their Windows devices and reduces the time needed for configuring a new device. Enterprise State Roaming operates similarly to the standard [consumer settings sync](https://go.microsoft.com/fwlink/?linkid=2015135) that was first introduced in Windows 8. Additionally, Enterprise State Roaming offers:
-* **Separation of corporate and consumer data** ΓÇô Organizations are in control of their data, and there is no mixing of corporate data in a consumer cloud account or consumer data in an enterprise cloud account.
-* **Enhanced security** ΓÇô Data is automatically encrypted before leaving the userΓÇÖs Windows 10 or newer device by using Azure Rights Management (Azure RMS), and data stays encrypted at rest in the cloud. All content stays encrypted at rest in the cloud, except for the namespaces, like settings names and Windows app names.
-* **Better management and monitoring** ΓÇô Provides control and visibility over who syncs settings in your organization and on which devices through the Azure AD portal integration.
-
-Enterprise State Roaming is available in multiple Azure regions. You can find the updated list of available regions on the [Azure Services by Regions](https://azure.microsoft.com/regions/#services) page under Azure Active Directory.
+- **Separation of corporate and consumer data** ΓÇô Organizations are in control of their data, and there is no mixing of corporate data in a consumer cloud account or consumer data in an enterprise cloud account.
+- **Enhanced security** ΓÇô Data is automatically encrypted before leaving the userΓÇÖs Windows 10 or newer device by using Azure Rights Management (Azure RMS), and data stays encrypted at rest in the cloud. All content stays encrypted at rest in the cloud, except for the namespaces, like settings names and Windows app names.
+- **Better management and monitoring** ΓÇô Provides control and visibility over who syncs settings in your organization and on which devices through the Azure AD portal integration.
| Article | Description |
| --- | --- |
-| [Enable Enterprise State Roaming in Azure Active Directory](enterprise-state-roaming-enable.md) |Enterprise State Roaming is available to any organization with a Premium Azure Active Directory (Azure AD) subscription. For more information on how to get an Azure AD subscription, see the [Azure AD product](https://azure.microsoft.com/services/active-directory) page. |
-| [Settings and data roaming FAQ](enterprise-state-roaming-faqs.yml) |This article answers some questions IT administrators might have about settings and app data sync. |
-| [Group policy and MDM settings for settings sync](enterprise-state-roaming-group-policy-settings.md) |Windows 10 or newer provides Group Policy and mobile device management (MDM) policy settings to limit settings sync. |
-| [Windows 10 roaming settings reference](enterprise-state-roaming-windows-settings-reference.md) |A list of settings that will be roamed and/or backed-up in Windows 10 or newer. |
-| [Troubleshooting](enterprise-state-roaming-troubleshooting.md) |This article goes through some basic steps for troubleshooting, and contains a list of known issues. |
+| [Enable Enterprise State Roaming in Azure Active Directory](enterprise-state-roaming-enable.md) | Enterprise State Roaming is available to any organization with a Premium Azure Active Directory (Azure AD) subscription. |
+| [Settings and data roaming FAQ](enterprise-state-roaming-faqs.yml) | This article answers some questions IT administrators might have about settings and app data sync. |
+| [Group policy and MDM settings for settings sync](enterprise-state-roaming-group-policy-settings.md) | Windows 10 or newer provides Group Policy and mobile device management (MDM) policy settings to limit settings sync. |
+| [Windows 10 roaming settings reference](enterprise-state-roaming-windows-settings-reference.md) | A list of settings that will be roamed and/or backed-up in Windows 10 or newer. |
+| [Troubleshooting](enterprise-state-roaming-troubleshooting.md) | This article goes through some basic steps for troubleshooting, and contains a list of known issues. |
## Next steps
active-directory Enterprise State Roaming Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/enterprise-state-roaming-troubleshooting.md
Previously updated : 02/15/2022 Last updated : 02/25/2022 -+
Enterprise State Roaming requires the device to be registered with Azure AD. Alt
Under certain conditions, Enterprise State Roaming can fail to sync data if Azure AD Multi-Factor Authentication is configured. For more information on these symptoms, see the support document [KB3193683](https://support.microsoft.com/kb/3193683).
-**Potential issue**: If your device is configured to require Multi-Factor Authentication on the Azure Active Directory portal, you may fail to sync settings while signing in to a Windows 10 or newer device using a password. This type of Multi-Factor Authentication configuration is intended to protect an Azure administrator account. Admin users may still be able to sync by signing in to their Windows 10 or newer devices with their Microsoft Passport for Work PIN or by completing Multi-Factor Authentication while accessing other Azure services like Microsoft 365.
+**Potential issue**: If your device is configured to require Multi-Factor Authentication on the Azure Active Directory portal, you may fail to sync settings while signing in to a Windows 10 or newer device using a password. This type of Multi-Factor Authentication configuration is intended to protect an Azure administrator account. Admin users may still be able to sync by signing in to their Windows 10 or newer devices with their Windows Hello for Business PIN or by completing Multi-Factor Authentication while accessing other Azure services like Microsoft 365.
-**Potential issue**: Sync can fail if the admin configures the Active Directory Federation Services Multi-Factor Authentication Conditional Access policy and the access token on the device expires. Ensure that you sign in and sign out using the Microsoft Passport for Work PIN or complete Multi-Factor Authentication while accessing other Azure services like Microsoft 365.
+**Potential issue**: Sync can fail if the admin configures the Active Directory Federation Services Multi-Factor Authentication Conditional Access policy and the access token on the device expires. Ensure that you sign in and sign out using the Windows Hello for Business PIN or complete Multi-Factor Authentication while accessing other Azure services like Microsoft 365.
### Event Viewer
Make sure the Windows 10 v1607 client has the August 23, 2016 Cumulative Update
-### Internet Explorer Favorites do not sync
-
-Affects devices running the Windows 10 November Update (Version 1511).
-
-**Recommended action**
-Make sure the Windows 10 v1511 client has the July 2016 Cumulative Update ([KB3172985](https://support.microsoft.com/kb/3172985) OS Build 10586.494).
---
-### Theme is not syncing, as well as data protected with Windows Information Protection
-
-To prevent data leakage, data that is protected with [Windows Information Protection](/windows/security/information-protection/windows-information-protection/protect-enterprise-data-using-wip) will not sync through Enterprise State Roaming for devices using the Windows 10 Anniversary Update.
-
-**Recommended action**
-None. Future updates to Windows may resolve this issue.
---

### Date, Time, and Region settings do not sync on domain-joined device

Devices that are domain-joined will not experience sync for the setting Date, Time, and Region: automatic time. Using automatic time may override the other Date, Time, and Region settings and cause those settings not to sync.
None.
-### UAC Prompts when syncing passwords
-
-Affects devices running the Windows 10 November Update (Version 1511) with a wireless NIC that is configured to sync passwords.
-
-**Recommended action**
-Make sure the Windows 10 v1511 client has the Cumulative Update ([KB3140743](https://support.microsoft.com/kb/3140743) OS Build 10586.494).
---
-### Sync does not work on devices that use smart card for login
-
-If you attempt to sign in to your Windows device using a smart card or virtual smart card, settings sync will stop working.
-
-**Recommended action**
-None. Future updates to Windows may resolve this issue.
---

### Domain-joined device is not syncing after leaving corporate network

Domain-joined devices registered to Azure AD may experience sync failure if the device is off-site for extended periods of time, and domain authentication can't complete.
active-directory Enterprise State Roaming Windows Settings Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/enterprise-state-roaming-windows-settings-reference.md
Title: Windows 10 roaming settings reference - Azure Active Directory
-description: Settings that will be roamed or backed up in Windows 10 with ESR
+ Title: Windows roaming settings reference - Azure Active Directory
+description: Settings that will be roamed or backed up in Windows with ESR
Previously updated : 02/15/2022 Last updated : 02/25/2022 -+
-# Windows 10 roaming settings reference
+# Windows roaming settings reference
The following is a list of the settings that will be roamed or backed up in Windows 10 or newer.
-## Devices and endpoints
-
-See the following table for a summary of the devices and account types that are supported by the sync, backup, and restore framework in Windows 10 or newer.
-
-| Account type and operation | Desktop | Mobile |
-| | | |
-| Azure Active Directory: sync |Yes |No |
-| Azure Active Directory: backup/restore |No |No |
-| Microsoft account: sync |Yes |Yes |
-| Microsoft account: backup/restore |No |Yes |
-
-## What is backup?
-
-Windows settings generally sync by default, but some settings are only backed up, such as the list of installed applications on a device. Backup is for mobile devices only and currently not available for Enterprise State Roaming users. Backup uses a Microsoft account and stores the settings and application data into OneDrive. If a user disables sync on the device using the Settings app, application data that normally syncs becomes backup only. Backup data can only be accessed through the restore operation during the first run experience of a new device. Backups can be disabled via the device settings, and can be managed and deleted through the userΓÇÖs OneDrive account.
-
-## Windows Settings overview
+## Windows Settings details
-The following settings groups are available for end users to enable/disable settings sync on Windows 10 or newer devices.
+The following is a list of settings that can be configured to sync in recent Windows versions. These settings can be found under **Settings** > **Accounts** > **Sync your settings** on Windows 10, or **Settings** > **Accounts** > **Windows backup** > **Remember my preferences** on Windows 11.
-* Theme: desktop background, user tile, taskbar position, etc.
-* Internet Explorer Settings: browsing history, typed URLs, favorites, etc.
-* Passwords: Windows credential manager, including Wi-Fi profiles
-* Language Preferences: spelling dictionary, system language settings
-* Ease of Access: narrator, on-screen keyboard, magnifier
-* Other Windows Settings: see Windows Settings details
-* Microsoft Edge browser setting: Microsoft Edge favorites, reading list, and other settings
+| Settings | Windows 10 (21H1 or newer) |
+| --- | --- |
+| Keyboard: turn on toggle keys (off by default) | sync |
+| Date, Time, and Region: country/region | sync |
+| Date, Time, and Region: region format (locale) | sync |
+| Language: language profile | sync |
+| Language: list of keyboards | sync |
+| Wi-Fi: Wi-Fi profiles (only WPA) | sync |
-![Sync your settings](./media/enterprise-state-roaming-windows-settings-reference/active-directory-enterprise-state-roaming-syncyoursettings.png)
+## Browser settings
-> [!NOTE]
-> This article applies to the Microsoft Edge Legacy HTML-based browser launched with Windows 10 in July 2015. The article does not apply to the new Microsoft Edge Chromium-based browser released on January 15, 2020. For more information on the Sync behavior for the new Microsoft Edge, see the article [Microsoft Edge Sync](/deployedge/microsoft-edge-enterprise-sync).
+For more information on the Sync behavior for the new Microsoft Edge, see the article [Microsoft Edge Sync](/deployedge/microsoft-edge-enterprise-sync).
Microsoft Edge browser setting group (favorites, reading list) syncing can be enabled or disabled by end users through the Microsoft Edge browser Settings menu.
For Windows 10 version 1803 or later, Internet Explorer setting group (favorites
![Settings](./media/enterprise-state-roaming-windows-settings-reference/active-directory-enterprise-state-roaming-ie.png)
-## Windows Settings details
-
-In the following table, Other entries in the Settings Group column refer to settings that can be disabled by going to Settings > Accounts > Sync your settings > Other Windows settings.
-
-Internal entries in the Settings Group column refer to settings and apps that can only be disabled from syncing within the app itself or by disabling sync for the entire device using mobile device management (MDM) or Group Policy settings.
-Settings that don't roam or sync will not belong to a group.
-
-| Settings | Desktop | Mobile | Group |
-| | | | |
-| **Accounts**: account picture |sync |X |Theme |
-| **Accounts**: other account settings |X |X | |
-| **Advanced mobile broadband**: Internet connection sharing network name (enables autodiscovery of mobile Wi-Fi hotspots via Bluetooth) |X |X |Passwords |
-| **App data**: individual apps can sync data |sync backup |sync backup |internal |
-| **App list**: list of installed apps |X |backup |Other |
-| **Bluetooth**: all Bluetooth settings |X |X | |
-| **Command prompt**: Command prompt "Defaults" settings |sync |X |internal |
-| **Credentials**: Credential Locker |sync |sync |password |
-| **Date, Time, and Region**: automatic time (Internet time sync) |sync |sync |language |
-| **Date, Time, and Region**: 24-hour clock |sync |X |language |
-| **Date, Time, and Region**: date and time |sync |X |language |
-| **Date, Time, and Region**: time zone | |X |language |
-| **Date, Time, and Region**: daylight savings time |sync |X |language |
-| **Date, Time, and Region**: country/region |sync |X |language |
-| **Date, Time, and Region**: first day of week |sync |X |language |
-| **Date, Time, and Region**: region format (locale) |sync |X |language |
-| **Date, Time, and Region**: short date |sync |X |language |
-| **Date, Time, and Region**: long date |sync |X |language |
-| **Date, Time, and Region**: short time |sync |X |language |
-| **Date, Time, and Region**: long time |sync |X |language |
-| **Desktop personalization**: desktop Theme (background, system color, default system sounds, screen saver) |sync |X |Theme |
-| **Desktop personalization**: slideshow wallpaper |sync |X |Theme |
-| **Desktop personalization**: taskbar settings (position, auto-hide, etc.) |sync |X |Theme |
-| **Desktop personalization**: start screen layout |X |backup | |
-| **Devices**: shared printers you've connected to |X |X |other |
-| **Microsoft Edge browser**: reading list |sync |sync |internal |
-| **Microsoft Edge browser**: favorites |sync |sync |internal |
-| **Microsoft Edge browser**: top sites <sup>[[1]](#footnote-1)</sup> |sync |sync |internal |
-| **Microsoft Edge browser**: typed URLs <sup>[[1]](#footnote-1)</sup> |sync |sync |internal |
-| **Microsoft Edge browser**: favorites bar settings <sup>[[1]](#footnote-1)</sup> |sync |sync |internal |
-| **Microsoft Edge browser**: show the home button <sup>[[1]](#footnote-1)</sup> |sync |sync |internal |
-| **Microsoft Edge browser**: block pop-up windows <sup>[[1]](#footnote-1)</sup> |sync |sync |internal |
-| **Microsoft Edge browser**: ask me what to do with each download <sup>[[1]](#footnote-1)</sup> |sync |sync |internal |
-| **Microsoft Edge browser**: offer to save passwords <sup>[[1]](#footnote-1)</sup> |sync |sync |internal |
-| **Microsoft Edge browser**: send do not track requests <sup>[[1]](#footnote-1)</sup> |sync |sync |internal |
-| **Microsoft Edge browser**: save form entries <sup>[[1]](#footnote-1)</sup> |sync |sync |internal |
-| **Microsoft Edge browser**: show search and site suggestions as I type <sup>[[1]](#footnote-1)</sup> |sync |sync |internal |
-| **Microsoft Edge browser**: cookies preference <sup>[[1]](#footnote-1)</sup> |sync |sync |internal |
-| **Microsoft Edge browser**: let sites save protected media licenses on my device <sup>[[1]](#footnote-1)</sup> |sync |sync |internal |
-| **Microsoft Edge browser**: screen reader setting <sup>[[1]](#footnote-1)</sup> |sync |sync |internal |
-| **High Contrast**: On or Off |sync |X |ease of access |
-| **High contrast**: Theme settings |sync |X |ease of access |
-| **Internet Explorer**: open tabs (URL and title) |sync |sync |Internet Explorer |
-| **Internet Explorer**: reading list |sync |sync |Internet Explorer |
-| **Internet Explorer**: typed URLs |sync |sync |Internet Explorer |
-| **Internet Explorer**: browsing history |sync |sync |Internet Explorer |
-| **Internet Explorer**: favorites |sync |sync |Internet Explorer |
-| **Internet Explorer**: excluded URLs |sync |sync |Internet Explorer |
-| **Internet Explorer**: home pages |sync |sync |Internet Explorer |
-| **Internet Explorer**: domain suggestions |sync |sync |Internet Explorer |
-| **Keyboard**: users can turn on/off on-screen keyboard |sync |X |ease of access |
-| **Keyboard**: turn on sticky yes (off by default) |sync |X |ease of access |
-| **Keyboard**: turn on filter keys (off by default) |sync |X |ease of access |
-| **Keyboard**: turn on toggle keys (off by default) |sync |X |ease of access |
-| **Internet Explorer**: domain Language: Chinese (CHS) QWERTY - enable self-learning |sync |X |Language |
-| **Language**: CHS QWERTY - enable dynamic candidate ranking |sync |X |Language |
-| **Language**: CHS QWERTY - char-set Simplified Chinese |sync |X |Language |
-| **Language**: CHS QWERTY - char-set Traditional Chinese |sync |X |Language |
-| **Language**: CHS QWERTY - fuzzy pinyin |sync |backup |Language |
-| **Language**: CHS QWERTY - fuzzy pairs |sync |backup |Language |
-| **Language**: CHS QWERTY - full pinyin |sync |X |Language |
-| **Language**: CHS QWERTY - double pinyin |sync |X |Language |
-| **Language**: CHS QWERTY - reading auto correction |sync |X |Language |
-| **Language**: CHS QWERTY - C/E switch key, shift |sync |X |Language |
-| **Language**: CHS QWERTY - C/E switch key, Ctrl |sync |X |Language |
-| **Language**: CHS WUBI - single character input mode |sync |X |Language |
-| **Language**: CHS WUBI - show the remaining coding of the candidate |sync |X |Language |
-| **Language**: CHS WUBI - beep when 4-coding is invalid |sync |X |Language |
-| **Language**: CHT Bopomofo - include CJK Ext-A |sync |X |Language |
-| **Language**: Japanese IME - predictive typing and custom words |sync |sync |Language |
-| **Language**: Korean (KOR) IME |X |X |Language |
-| **Language**: handwriting recognition |X |X |Language |
-| **Language**: language profile |sync |backup |Language |
-| **Language**: spellcheck - autocorrect and highlight misspellings |sync |backup |Language |
-| **Language**: list of keyboards |sync |backup |Language |
-| **Lock Screen**: all lock screen settings |X |X | |
-| **Magnifier**: on or off (master toggle) |X |X |Ease of access |
-| **Magnifier**: turn inversion color on or off (off by default) |sync |X |Ease of access |
-| **Magnifier**: tracking - follow the keyboard focus |sync |X |Ease of access |
-| **Magnifier**: tracking - follow the mouse cursor |sync |X |Ease of access |
-| **Magnifier**: start when users sign in (off by default) |sync |X |Ease of access |
-| **Mouse**: change the size of mouse cursor |sync |X |other |
-| **Mouse**: change the color of mouse cursor |sync |X |other |
-| **Mouse**: all other settings |X |X | |
-| **Narrator**: quick launch |sync |X |Ease of access |
-| **Narrator**: users can change Narrator speaking pitch |sync |X |Ease of access |
-| **Narrator**: users can turn on or off Narrator reading hints for common items (on by default) |sync |X |Ease of access |
-| **Narrator**: users can turn on or off whether they can hear typed characters (on by default) |sync |X |Ease of access |
-| **Narrator**: users can turn on or off whether they can hear typed words (on by default) |sync |X |Ease of access |
-| **Narrator**: have insert cursor following Narrator (on by default) |sync |X |Ease of access |
-| **Narrator**: enable visual highlighting of Narrator cursor (on by default) |sync |X |Ease of access |
-| **Narrator**: play audio cues (on by default) |sync |X |Ease of access |
-| **Narrator**: activate keys on the touch keyboard when you lift your finger (off by default) |sync |X |Ease of access |
-| **Ease of access**: set the thickness of the blinking cursor |sync |X |Ease of access |
-| **Ease of access**: remove background images (off by default) |sync |X |Ease of access |
-| **Power and Sleep**: all settings |X |X | |
-| **Start screen personalization**: accent color (phone only) |X |sync |Theme |
-| **Typing**: spelling dictionary |sync |backup |Language |
-| **Typing**: autocorrect misspelled word |sync |backup |Language |
-| **Typing**: highlight misspelled words |sync |backup |Language |
-| **Typing**: show text suggestions as I type |sync |backup |Language |
-| **Typing**: add a space after I choose a text suggestion |sync |backup |Language |
-| **Typing**: add a period after I double-tap the spacebar |sync |backup |Language |
-| **Typing**: capitalize the first letter of each sentence |sync |backup |Language |
-| **Typing**: use all uppercase letters when I double-tap shift key |sync |backup |Language |
-| **Typing**: play key sounds as I type |sync |backup |Language |
-| **Typing**: personalization data for touch keyboard |sync |backup |Language |
-| **Wi-Fi**: Wi-Fi profiles (only WPA) |sync |sync |Passwords |
-
-###### Footnote 1
-
-Minimum supported OS version of Windows Creators Update (Build 15063).
- ## Next steps For an overview, see [enterprise state roaming overview](enterprise-state-roaming-overview.md).
active-directory Groups Assign Sensitivity Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-assign-sensitivity-labels.md
The sensitivity label option is only displayed for groups when all the following
1. Labels are published in the Microsoft 365 Compliance Center for this Azure AD organization. 1. The feature is enabled: EnableMIPLabels is set to True from the Azure AD PowerShell module.
-1. Labels are synchronized to Azure AD with the Execute-AzureAdLabelSync cmdlet in the Security & Compliance PowerShell module.
+1. Labels are synchronized to Azure AD with the Execute-AzureAdLabelSync cmdlet in the Security & Compliance PowerShell module. It can take up to 24 hours after synchronization for the label to be available to Azure AD.
1. The group is a Microsoft 365 group. 1. The organization has an active Azure Active Directory Premium P1 license.
-1. The current signed-in user has sufficient privileges to assign labels. The user must be either a Global Administrator, Group Administrator, or the group owner.
+1. The [sensitivity label scope](https://docs.microsoft.com/microsoft-365/compliance/sensitivity-labels?view=o365-worldwide#label-scopes&preserve-view=true) must be configured for Groups & Sites.
+3. The current signed-in user has sufficient privileges to assign labels. The user must be either a Global Administrator, Group Administrator, or the group owner.
+4. The current signed-in user must be within the scope of the [sensitivity label publishing policy](https://docs.microsoft.com/microsoft-365/compliance/sensitivity-labels?view=o365-worldwide#what-label-policies-can-do&preserve-view=true).
Make sure all the conditions are met before you try to assign labels to a group.
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
For more information about this feature, see [Disable and delete external identi
**Service category:** Access Reviews **Product capability:** Identity Governance
-In Azure AD access reviews, administrators creating reviews can now write a custom message to the reviewers. Reviewers will see the message in the email they receive that prompts them to complete the review. To learn more about using this feature, see step 14 of the [Create one or more access reviews](../governance/create-access-review.md#create-one-or-more-access-reviews) section.
+In Azure AD access reviews, administrators creating reviews can now write a custom message to the reviewers. Reviewers will see the message in the email they receive that prompts them to complete the review. To learn more about using this feature, see step 14 of the [Create a single-stage review](../governance/create-access-review.md#create-a-single-stage-access-review) section.
active-directory Complete Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/complete-access-review.md
na Previously updated : 08/20/2021 Last updated : 02/18/2022
You can track the progress of access reviews as they are completed.
1. In the list, click an access review.
-
On the **Overview** page, you can see the progress of the **Current** instance of the review. If there is not an active instance open at the time, you will see information on the previous instance. No access rights are changed in the directory until the review is completed. ![Review of All company group](./media/complete-access-review/all-company-group.png)
You can track the progress of access reviews as they are completed.
1. If you're no longer interested in the access review, you can delete it by clicking the **Delete** button.
+### View status of multi-stage review (preview)
+
+To see the status and stage of a multi-stage access review:
+
+1. Select the multi-stage review whose status or current stage you want to check.
+
+1. Click **Results** on the left nav menu under **Current**.
+
+1. On the results page, the **Status** field shows which stage the multi-stage review is in. The next stage of the review won't become active until the duration specified during the access review setup has passed.
+
+1. If a decision has been made but the review duration for this stage hasn't expired yet, you can select the **Stop current stage** button on the results page. This triggers the next stage of the review.
+
## Retrieve the results To view the results for a review, click the **Results** page. To view just a user's access, in the Search box, type the display name or user principal name of a user whose access was reviewed. ![Retrieve results for an access review](./media/complete-access-review/retrieve-results.png)
-
To view the results of a completed instance of an access review that is recurring, click **Review history**, then select the specific instance from the list of completed access review instances, based on the instance's start and end date. The results of this instance can be obtained from the **Results** page. Recurring access reviews allow you to have a constant picture of access to resources that may need to be updated more often than one-time access reviews. To retrieve the results of an access review, both in-progress or completed, click the **Download** button. The resulting CSV file can be viewed in Excel or in other programs that open UTF-8 encoded CSV files. -
-
- ## Apply the changes If **Auto apply results to resource** was enabled based on your selections in **Upon completion settings**, auto-apply will be executed once a review instance completes, or earlier if you manually stop the review.
Manually or automatically applying results doesn't have an effect on a group tha
> - Reviewing a resource (role, group, application) with nested groups assigned: For users who have membership through a nested group, we will not remove their membership to the nested group and therefore they will retain access to the resource being reviewed. > - User not found / other errors can also result in an apply result not being supported. - ## Actions taken on denied guest users in an access review On review creation, the creator can choose between two options for denied guest users in an access review. - Denied guest users can have their access to the resource removed. This is the default. - The denied guest user can be blocked from signing in for 30 days, then deleted from the tenant. During the 30-day period the guest user is able to be restored access to the tenant by an administrator. After the 30-day period is completed, if the guest user has not had access to the resource granted to them again, they will be removed from the tenant permanently. In addition, using the Azure Active Directory portal, a Global Administrator can explicitly [permanently delete a recently deleted user](../fundamentals/active-directory-users-restore.md) before that time period is reached. Once a user has been permanently deleted, the data about that guest user will be removed from active access reviews. Audit information about deleted users remains in the audit log. - ## Next steps - [Manage access reviews](manage-access-review.md)
active-directory Create Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-access-review.md
na Previously updated : 08/20/2021 Last updated : 02/18/2022
This article describes how to create one or more access reviews for group member
For more information, see [License requirements](access-reviews-overview.md#license-requirements).
-## Create one or more access reviews
+## Create a single-stage access review
+### Scope
1. Sign in to the Azure portal and open the [Identity Governance](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/) page. 1. On the left menu, select **Access reviews**.
For more information, see [License requirements](access-reviews-overview.md#lice
![Screenshot that shows the interface that appears if you selected applications instead of groups.](./media/create-access-review/select-application-detailed.png)
- > [!NOTE]
- > Selecting multiple groups or applications results in the creation of multiple access reviews. For example, if you select five groups to review, the result is five separate access reviews.
+ > [!NOTE]
+ > Selecting multiple groups or applications results in the creation of multiple access reviews. For example, if you select five groups to review, the result is five separate access reviews.
1. Now you can select a scope for the review. Your options are:
For more information, see [License requirements](access-reviews-overview.md#lice
1. Select **Next: Reviews**.
-1. In the **Specify reviewers** section, in the **Select reviewers** box, select either one or more people to do the access reviews. You can choose from:
+### Next: Reviews
+
+1. You can create a single-stage or multi-stage review (preview). For a single-stage review, continue here. To create a multi-stage access review (preview), follow the steps in [Create a multi-stage access review (preview)](#create-a-multi-stage-access-review-preview).
+
+1. In the **Specify reviewers** section, in the **Select reviewers** box, select either one or more people to make decisions in the access reviews. You can choose from:
- **Group owner(s)**: This option is only available when you do a review on a team or group. - **Selected user(s) or groups(s)**
For more information, see [License requirements](access-reviews-overview.md#lice
1. Select **Next: Settings**.
+### Next: Settings
+ 1. In the **Upon completion settings** section, you can specify what happens after the review finishes. ![Screenshot that shows Upon completion settings.](./media/create-access-review/upon-completion-settings-new.png)
For more information, see [License requirements](access-reviews-overview.md#lice
![Screenshot that shows the Review + Create tab.](./media/create-access-review/create-review.png)
+### Next: Review + Create
+ 1. Name the access review. Optionally, give the review a description. The name and description are shown to the reviewers. 1. Review the information and select **Create**.
+## Create a multi-stage access review (preview)
+
+A multi-stage review allows the administrator to define two or three sets of reviewers to complete a review one after another. In a single-stage review, all reviewers make a decision within the same period and the last reviewer to make a decision "wins". In a multi-stage review, two or three independent sets of reviewers make a decision within their own stage, and the next stage doesn't happen until a decision is made in the previous stage. Multi-stage reviews can be used to reduce the burden on later-stage reviewers, allow for escalation of reviewers, or have independent groups of reviewers agree on decisions.
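
As a rough mental model, the staging behavior described above can be sketched in a few lines (an illustrative sketch only, not the service's implementation; decision values are assumptions):

```python
def multi_stage_outcome(stage_decisions):
    """Model of a multi-stage review: each stage closes before the next
    opens, and a later stage's decision overwrites the earlier one.

    stage_decisions: list of dicts (stage 1 first), each mapping a
    reviewee to "Approve" or "Deny".
    """
    outcome = {}
    for decisions in stage_decisions:  # stages run one after another
        outcome.update(decisions)      # later stages overwrite earlier ones
    return outcome
```

So if a first-stage reviewer approves a user and a second-stage reviewer denies the same user, the final outcome for that user is the denial.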
+
+1. After you have selected the resource and scope of your review, move on to the **Reviews** tab.
+
+1. Click the checkbox next to **(Preview) Multi-stage review**.
+
+1. Under **First stage review**, select the reviewers from the dropdown menu next to **Select reviewers**.
+
+1. If you select **Group owner(s)** or **Managers of Users**, you have the option to add a fallback reviewer. To add a fallback, click **Select fallback reviewers** and add the users you want to be fallback reviewers.
+
+ ![Screenshot that shows multi-stage review enabled and multi-stage review settings.](./media/create-access-review/create-multi-stage-review.png)
+
+1. Add the duration for the first stage. To add the duration, enter a number in the field next to **Stage duration (in days)**. This is the number of days the first stage remains open for first-stage reviewers to make decisions.
+
+1. Under **Second stage review**, select the reviewers from the dropdown menu next to **Select reviewers**. These reviewers will be asked to review after the duration of the first stage review ends.
+
+1. Add any fallback reviewers if necessary.
+
+1. Add the duration for the second stage.
+
+1. By default, you will see two stages when you create a multi-stage review. However, you can add up to three stages. If you want to add a third stage, click **+ Add a stage** and complete the required fields.
+
+1. You can decide to allow 2nd and 3rd stage reviewers to see the decisions made in the previous stage(s). If you want to allow them to see the decisions made earlier, check the box next to **Show previous stage(s) decisions to later stage reviewers** under **Reveal review results**. Leave the box unchecked to disable this setting if you'd like your reviewers to review independently.
+
+ ![Screenshot that shows duration and show previous stages setting enabled for multi-stage review.](./media/create-access-review/reveal-multi-stage-results-and-duration.png)
+
+1. The duration of each recurrence will be set to the sum of the duration day(s) you specified in each stage.
+
+1. Specify the **Review recurrence**, the **Start date**, and the **End date** for the review. The recurrence type must be at least as long as the total duration of the recurrence (for example, the maximum duration for a weekly review recurrence is 7 days).
+
+1. To specify which reviewees will continue from stage to stage, select one or more of the following options next to **Specify reviewees to go to next stage**:
+ ![Screenshot that shows specify reviewees setting and options for multi-stage review.](./media/create-access-review/next-stage-reviewees-setting.png)
+
+ 1. **Approved reviewees** - Only reviewees that were approved move on to the next stage(s).
+ 1. **Denied reviewees** - Only reviewees that were denied move on to the next stage(s).
+ 1. **Not reviewed reviewees** - Only reviewees that haven't been reviewed will move on to the next stage(s).
+ 1. **Reviewees marked as "Don't Know"** - Only reviewees marked as "Don't know" move on to the next stage(s).
+ 1. **All** - Everyone moves on to the next stage(s) if you'd like all stages of reviewers to make a decision.
+
+1. Continue to the **Settings** tab to finish the rest of the settings and create the review. Follow the instructions in [Next: Settings](#next-settings).
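
The **Specify reviewees to go to next stage** options above act as a filter between stages. A minimal sketch of that filtering behavior, with assumed decision labels (this is an illustrative model, not the service's implementation):

```python
def reviewees_for_next_stage(decisions, selected_options):
    """Return the set of reviewees that progress to the next stage.

    decisions: dict mapping reviewee -> "Approved", "Denied",
               "NotReviewed", or "DontKnow" (assumed labels).
    selected_options: the options chosen at setup, for example
               {"Approved", "NotReviewed"}; {"All"} passes everyone.
    """
    if "All" in selected_options:
        return set(decisions)
    # Only reviewees whose current decision matches a selected option move on.
    return {user for user, decision in decisions.items()
            if decision in selected_options}
```

For example, selecting only **Denied reviewees** means the second-stage reviewers see just the users the first stage denied.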
+ ## Allow group owners to create and manage access reviews of their groups (preview) The prerequisite role is a Global or User administrator.
active-directory Perform Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/perform-access-review.md
na Previously updated : 12/22/2020 Last updated : 2/18/2022
# Review access to groups and applications in Azure AD access reviews
-Azure Active Directory (Azure AD) simplifies how enterprises manage access to groups and applications in Azure AD and other Microsoft Online Services with a feature called Azure AD access reviews. This article will go over how a designated reviewer performs an access review for members of a group or users with access to an application. If you would like to review, access to a package read [Review access of an access package in Azure AD entitlement management](entitlement-management-access-reviews-review-access.md)
+Azure Active Directory (Azure AD) simplifies how enterprises manage access to groups and applications in Azure AD and other Microsoft Online Services with a feature called Azure AD access reviews. This article will go over how a designated reviewer performs an access review for members of a group or users with access to an application. If you would like to review access to an access package, read [Review access of an access package in Azure AD entitlement management](entitlement-management-access-reviews-review-access.md).
-## Perform access review using My Apps
+## Perform access review using My Access
+You can review access to groups and applications via My Access, an end-user friendly portal for granting, approving, and reviewing access needs.
-You can start the Access Review process from the notification email or by going directly to the site.
--- **Email**:
+### Use email to navigate to My Access
>[!IMPORTANT] > There could be delays in receiving email, and in some cases it could take up to 24 hours. Add azure-noreply@microsoft.com to your safe recipients list to make sure that you receive all emails.
-1. Look for an email from Microsoft asking you to review access. Here is an example email to review the access for a group.
-
- ![Screenshot that shows an example email from Microsoft to review access to a group.](./media/perform-access-review/access-review-email.png)
-
-1. Click the **Start review** link to open the access review.
--- **If you don't have the email**, you can find your pending access reviews by following these steps.-
- 1. Sign in to the My Apps portal at [https://myapps.microsoft.com](https://myapps.microsoft.com).
-
- ![My Apps portal listing apps you have permissions to](./media/perform-access-review/myapps-access-panel.png)
-
- 1. In the upper-right corner of the page, click the user next to your name and default organization. If more than one organization is listed, select the organization that requested an access review.
-
- 1. Click the **Access reviews** tile to see a list of pending access reviews.
-
- > [!NOTE]
- > If the **Access reviews** tile isn't visible, there are no access reviews to perform for that organization and no action is needed at this time.
-
- ![Screenshot that shows the pending access reviews list for apps and groups.](./media/perform-access-review/access-reviews-list.png)
-
- 1. Click the **Begin review** link for the access review you want to perform.
-
-Once you have opened the access review, you see the names of users who need to have their access reviewed.
-
-If the request is to review your own access, the page will look different. For more information, see [Review access for yourself to groups or applications](review-your-access.md).
-
-![Open access review listing the users to review](./media/perform-access-review/perform-access-review.png)
-
-There are two ways that you can approve or deny access:
--- You can approve or deny access for one or more users 'manually' by choosing the appropriate action for each user request.-- You can accept the system recommendations.-
-### Approve or deny access for one or more users
-
-1. Review the list of users and decide whether to approve or deny their continued access.
-
- - To approve or deny access for a single user, click the row to open a window to specify the action to take.
- - To approve or deny access for multiple users, add check marks next to the users and then click the **Review X user(s)** button to open a window to specify the action to take.
-
-1. Click **Approve** or **Deny**.
-
- ![Action window that includes Approve, Deny, and Don't know options](./media/perform-access-review/approve-deny.png)
- >[!NOTE]
- > If you are unsure, you can click **Don't know**. and the user gets to keep their access and your choice is recorded in the audit logs.
-
-1. The administrator of the access review may require that you supply a reason in the **Reason** box for your decision. Even when a reason is not required. You can still provide a reason for your decision and the information that you include will be available to other reviewers.
-
-1. Once you have specified the action to take, click **Save**.
-
- >[!NOTE]
- > You can change your response at any time before the access review has ends. If you want to change your response, select the row and update the response. For example, you can approve a previously denied user or deny a previously approved user.
-
- >[!IMPORTANT]
- > - If a user is denied access, they aren't removed immediately. They are removed when the review period has ended or when an administrator stops the review if [Auto apply](complete-access-review.md#apply-the-changes) is enabled.
- > - If there are multiple reviewers, the last submitted response is recorded. Consider an example where an administrator designates two reviewers ΓÇô Alice and Bob. Alice opens the access review first and approves a user's access request. Before the review period ends, Bob opens the access review and denies access on the same request previously approved by Alice. The last decision denying the access is the response that gets recorded.
-
-### Approve or deny access based on recommendations
-
-To make access reviews easier and faster for you, we also provide recommendations that you can accept with a single click. The recommendations are generated based on the user's sign-in activity.
-
-1. In the blue bar at the bottom of the page, click **Accept recommendations**.
-
- ![Screenshot that shows the open access review listing with the "Accept recommendations" button selected.](./media/perform-access-review/accept-recommendations.png)
-
- You see a summary of the recommended actions.
-
- ![Window that displays a summary of the recommended actions](./media/perform-access-review/accept-recommendations-summary.png)
-
-1. Click **Ok** to accept the recommendations.
-
-## Perform access review using My Access (New)
-
-You can get to the new reviewer experience with the updated user interface in My Access a couple of different ways:
-
-### My Apps portal
-
-1. Sign in to My Apps at [https://myapps.microsoft.com](https://myapps.microsoft.com).
-
- ![My Apps portal listing apps you have permissions to](./media/perform-access-review/myapps-access-panel.png)
-
-2. Click the **Access reviews** tile to see a list of pending access reviews.
-
- > [!NOTE]
- > If the **Access reviews** tile isn't visible, there are no access reviews to perform for that organization and no action is needed at this time.
-
-![Pending access reviews list for apps and groups with the new experience available banner displayed during the preview](./media/perform-access-review/banner.png)
-
-3. Click on **Try it!** in the banner at the top of the page. This will take you to the new My Access experience.
-
-### Email
-
- >[!IMPORTANT]
-> There could be delays in receiving email and it some cases it could take up to 24 hours. Add azure-noreply@microsoft.com to your safe recipients list to make sure that you are receiving all emails.
-
- 1. Look for an email from Microsoft asking you to review access. You can see an example email message below:
+1. Look for an email from Microsoft asking you to review access. You can see an example email message below:
![Example email from Microsoft to review access to a group](./media/perform-access-review/access-review-email-preview.png)
- 2. Click the **Start review** link to open the access review.
+1. Click the **Start review** link to open the access review.
->[!NOTE]
->If clicking start review takes you to **My Apps** follow the steps listed in the section above titled **My Apps Portal**.
-
-### Navigate to My Access directly
+### Navigate directly to My Access
You can also view your pending access reviews by using your browser to open My Access.
-1. Sign in to the My Access at https://myaccess.microsoft.com/
+1. Sign in to My Access at https://myaccess.microsoft.com/
2. Select **Access reviews** from the menu on the left side bar to see a list of pending access reviews assigned to you.
- ![access reviews in the menu](./media/perform-access-review/access-review-menu.png)
-
-### Approve or deny access for one or more users
+## Review access for one or more users
After you open My Access under Groups and Apps you can see:
Click on the name of an access review to get started.
![Pending access reviews list for apps and groups](./media/perform-access-review/access-reviews-list-preview.png)
-Once that it opens, you will see the list of users in scope for the access review. If the request is to review your own access, the page will look different. For more information, see [Review access for yourself to groups or applications](review-your-access.md).
+Once it opens, you will see the list of users in scope for the access review.
+
+> [!NOTE]
+> If the request is to review your own access, the page will look different. For more information, see [Review access for yourself to groups or applications](review-your-access.md).
There are two ways that you can approve or deny access: - You can manually approve or deny access for one or more users. - You can accept the system recommendations.
-#### Manually approve or deny access for one or more users
+### Manually review access for one or more users
1. Review the list of users and decide whether to approve or deny their continued access.
-2. Select one or more users by clicking the circle next to their names.
-3. Select **Approve** or **Deny** on the bar above.
- - If you are unsure, you can click **Don't know**. The user gets to keep their access and your choice is recorded in the audit logs. It is important that you keep in mind that any information you provide will be available to other reviewers. They can read your comments and take them into account when they review the request.
+
+1. Select one or more users by clicking the circle next to their names.
+
+1. Select **Approve** or **Deny** on the bar above.
+ - If you are unsure whether a user should continue to have access, you can click **Don't know**. The user keeps their access, and your choice is recorded in the audit logs. Keep in mind that any information you provide will be available to other reviewers. They can read your comments and take them into account when they review the request.
![Open access review listing the users who need review](./media/perform-access-review/user-list-preview.png)
-4. The administrator of the access review may require that you supply a reason in the **Reason** box for your decision. Even when a reason is not required. You can still provide a reason for your decision and the information that you include will be available to other approvers for review.
+1. The administrator of the access review may require that you supply a reason in the **Reason** box for your decision. Even when a reason is not required, you can still provide one, and the information that you include will be available to other approvers for review.
-5. Click **Submit**.
+1. Click **Submit**.
- You can change your response at any time until the access review has ended. If you want to change your response, select the row and update the response. For example, you can approve a previously denied user or deny a previously approved user.
- >[!IMPORTANT]
+ > [!IMPORTANT]
> - If a user is denied access, they aren't removed immediately. They are removed when the review period has ended or when an administrator stops the review. > - If there are multiple reviewers, the last submitted response is recorded. Consider an example where an administrator designates two reviewers: Alice and Bob. Alice opens the access review first and approves a user's access request. Before the review period ends, Bob opens the access review and denies access on the same request previously approved by Alice. The last decision denying the access is the response that gets recorded.
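
The "last submitted response is recorded" behavior in the note above can be sketched as a small model (illustrative only; responses are assumed to arrive in submission order):

```python
def recorded_decision(responses):
    """Model of decision recording in a single-stage review: every
    reviewer decides within the same window, and for each user the
    last submitted response is the one recorded."""
    outcome = {}
    for reviewer, user, decision in responses:  # in submission order
        outcome[user] = decision                # later responses overwrite
    return outcome
```

With reviewers Alice and Bob, if Alice approves a user's access and Bob later denies it, the recorded decision is the denial.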
-#### Approve or deny access based on recommendations
+### Review access based on recommendations
To make access reviews easier and faster for you, we also provide recommendations that you can accept with a single click. The recommendations are generated based on the user's sign-in activity.
To make access reviews easier and faster for you, we also provide recommendation
![Open access review listing showing the Accept recommendations button](./media/perform-access-review/accept-recommendations-preview.png)
+1. To accept recommendations for all unreviewed users instead, make sure that no users are selected and click the **Accept recommendations** button on the top bar.
+ 1. Click **Submit** to accept the recommendations.
-To accept recommendations for all users make sure that no one is selected and click on the **Accept recommendations** button on the top bar.
->[!NOTE]
->When you accept recommendations previous decisions will not be changed.
+> [!NOTE]
+> When you accept recommendations, previous decisions will not be changed.
+
+### Review access for one or more users in a multi-stage access review (preview)
+
+If multi-stage access reviews have been enabled by the administrator, there will be two or three stages of review in total. Each stage of review has a specified reviewer.
+
+You will review access either manually or by accepting the recommendations based on sign-in activity for the stage in which you are assigned as the reviewer.
+
+If you are a 2nd stage or 3rd stage reviewer, you will also see the decisions made by the reviewers in the prior stage(s), if the administrator enabled this setting when creating the access review. The decision made by a 2nd or 3rd stage reviewer overwrites the previous stage: the decision the 2nd stage reviewer makes overwrites the first stage, and the 3rd stage reviewer's decision overwrites the second stage.
+
+ ![Select user to show the multi-stage access review results](./media/perform-access-review/multi-stage-access-review.png)
+
+Approve or deny access as outlined in [Review access for one or more users](#review-access-for-one-or-more-users).
+
+> [!NOTE]
+> The next stage of the review won't become active until the duration specified during the access review setup has passed. If the administrator believes a stage is done but the review duration for this stage has not expired yet, they can use the **Stop current stage** button in the overview of the access review in the Azure AD portal. This will close the active stage and start the next stage.
+
+## If no action is taken on access review
+When the access review is set up, the administrator has the option to use advanced settings to determine what will happen if a reviewer doesn't respond to an access review request.
+
+The administrator can set up the review so that if reviewers do not respond by the end of the review period, an automatic decision is applied to all unreviewed users. This can include removing their access to the group or application under review.
## Next steps
active-directory F5 Big Ip Headers Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-headers-easy-button.md
Having a BIG-IP in front of the application enables us to overlay the service wi
The SHA solution for this scenario is made up of:
-**Application:** BIG-IP published service to be protected by and Azure AD SHA.
+**Application:** BIG-IP published service to be protected by Azure AD SHA.
-**Azure AD:** Security Assertion Markup Language (SAML) Identity Provider (IdP) responsible for verification of user credentials, Conditional Access (CA), and SSO to the BIG-IP. Through SSO, Azure AD provides the BIG-IP with any required attributes including a user identifier.
+**Azure AD:** Security Assertion Markup Language (SAML) Identity Provider (IdP) responsible for verification of user credentials, Conditional Access (CA), and SAML-based SSO to the BIG-IP. Through SSO, Azure AD provides the BIG-IP with any required session attributes.
**BIG-IP:** Reverse proxy and SAML service provider (SP) to the application, delegating authentication to the SAML IdP before performing header-based SSO to the backend application.
SHA for this scenario supports both SP and IdP initiated flows. The following im
| - |-|
| 1| User connects to application endpoint (BIG-IP) |
| 2| BIG-IP APM access policy redirects user to Azure AD (SAML IdP) |
-| 3| Azure AD pre-authenticates user and applies any enforced CA policies |
+| 3| Azure AD pre-authenticates user and applies any enforced Conditional Access policies |
| 4| User is redirected to BIG-IP (SAML SP) and SSO is performed using issued SAML token |
| 5| BIG-IP injects Azure AD attributes as headers in request to the application |
| 6| Application authorizes request and returns payload |
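The redirect in step 2 typically uses the standard SAML HTTP-Redirect binding, where the AuthnRequest is raw-DEFLATE compressed, base64-encoded, and URL-encoded into the `SAMLRequest` query parameter. The sketch below is illustrative only (the APM handles this internally); the tenant URL and request XML are hypothetical values.

```python
import base64
import zlib
from urllib.parse import urlencode

def build_redirect_url(idp_sso_url, authn_request_xml, relay_state=None):
    """SAML HTTP-Redirect binding: raw-DEFLATE the AuthnRequest,
    base64-encode it, then URL-encode it as the SAMLRequest parameter."""
    compressor = zlib.compressobj(9, zlib.DEFLATED, -15)  # -15 = raw DEFLATE stream
    deflated = compressor.compress(authn_request_xml.encode("utf-8")) + compressor.flush()
    params = {"SAMLRequest": base64.b64encode(deflated).decode("ascii")}
    if relay_state:
        params["RelayState"] = relay_state
    return f"{idp_sso_url}?{urlencode(params)}"

# Hypothetical values for illustration only
url = build_redirect_url(
    "https://login.microsoftonline.com/contoso.onmicrosoft.com/saml2",
    '<samlp:AuthnRequest ID="_example"/>',
)
```

Decoding the `SAMLRequest` parameter with the reverse steps (base64-decode, then raw inflate) recovers the original XML, which is a handy way to inspect what the BIG-IP is sending during troubleshooting.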
Prior BIG-IP experience isn't necessary, but you'll need:
* An account with Azure AD application admin [permissions](/azure/active-directory/users-groups-roles/directory-assign-admin-roles#application-administrator)
-* A [SSL certificate](./f5-bigip-deployment-guide.md#ssl-profile) for publishing services over HTTPS, or use default certificates while testing
+* An [SSL Web certificate](./f5-bigip-deployment-guide.md#ssl-profile) for publishing services over HTTPS, or use default BIG-IP certs while testing
* An existing header-based application or [set up a simple IIS header app](/previous-versions/iis/6.0-sdk/ms525396(v=vs.90)) for testing

## BIG-IP configuration methods
-There are many methods to deploy BIG-IP for this scenario including a template-driven Guided Configuration, or an advanced configuration. This tutorial covers the Easy Button templates offered by the Guided Configuration 16.1 and upwards.
-
-With the **Easy Button**, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for SHA. The end-to-end deployment and policy management of applications is handled directly between the APM's Guided Configuration wizard and Microsoft Graph. This rich integration between BIG-IP APM and Azure AD ensures applications can quickly, easily support identity federation, SSO, and Azure AD Conditional Access, reducing administrative overhead.
+There are many methods to configure BIG-IP for this scenario, including two template-based options and an advanced configuration. This tutorial covers the latest Guided Configuration 16.1, which offers an Easy Button template. With the Easy Button, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for SHA. The deployment and policy management is handled directly between the APM's Guided Configuration wizard and Microsoft Graph. This rich integration between BIG-IP APM and Azure AD ensures that applications can quickly and easily support identity federation, SSO, and Azure AD Conditional Access, reducing administrative overhead.
> [!NOTE]
> All example strings or values referenced throughout this guide should be replaced with those for your actual environment.
The Easy Button client must also be registered in Azure AD, before it is allowed
## Configure Easy Button
-Next, step through the Easy Button configurations to federate and publish the internal application. Start by provisioning your BIG-IP with an X509 certificate that Azure AD can use to sign SAML tokens and claims issued for SHA enabled services.
-
-1. From a browser, sign-in to the F5 BIG-IP management console
-2. Navigate to **System > Certificate Management > Traffic Certificate Management SSL Certificate List > Import**
-3. Select **PKCS 12 (IIS)** and import your certificate along with its private key
-
-Once provisioned, the certificate can be used for every application published through Easy Button. You can also choose to upload a separate certificate for individual applications.
-
- ![Screenshot for Configure Easy Button- Import SSL certificates and keys](./media/f5-big-ip-easy-button-ldap/configure-easy-button.png)
+Initiate the **Easy Button** configuration to set up a SAML Service Provider (SP) and Azure AD as an Identity Provider (IdP) for your application.
-4. Navigate to **Access > Guided Configuration > Microsoft Integration and select Azure AD Application**
-
-You can now access the Easy Button functionality that provides quick configuration steps to set up the APM as a SAML Service Provider (SP) and Azure AD as an Identity Provider (IdP) for your application.
+1. Navigate to **Access > Guided Configuration > Microsoft Integration** and select **Azure AD Application**.
![Screenshot for Configure Easy Button- Install the template](./media/f5-big-ip-easy-button-ldap/easy-button-template.png)
-5. Review the list of configuration steps and select **Next**
-
+2. Review the list of configuration steps and select **Next**
+ ![Screenshot for Configure Easy Button - List configuration steps](./media/f5-big-ip-easy-button-ldap/config-steps.png)
-## Configuration steps
+3. Follow the sequence of steps required to publish your application.
-The **Easy Button** template will display the sequence of steps required to publish your application.
+ ![Configuration steps flow](./media/f5-big-ip-easy-button-ldap/config-steps-flow.png#lightbox)
- ![Configuration steps flow](./media/f5-big-ip-easy-button-ldap/config-steps-flow.png)
### Configuration Properties
-These are general and service account properties. The **Configuration Properties** tab creates up a new application config and SSO object that will be managed through the BIG-IP's Guided Configuration UI. This configuration can then be reused for publishing more applications through the Easy Button template.
+The **Configuration Properties** tab creates a new application config and SSO object. Consider the **Azure Service Account Details** section to be the client application you registered in your Azure AD tenant earlier. These settings allow the BIG-IP to programmatically register a SAML application directly in your tenant, along with the properties you would normally configure manually. Easy Button does this for every BIG-IP APM service enabled for SHA.
-Consider the **Azure Service Account Details** be the BIG-IP client application you registered in your Azure AD tenant earlier. This section allows the BIG-IP to programmatically register a SAML application directly in your tenant, along with the other properties you would normally configure manually in the portal. Easy Button will do this for every BIG-IP APM service being published and enabled for SHA.
+Some of these are global settings, so they can be reused when publishing more applications, further reducing deployment time and effort.
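The programmatic registration described above is done through Microsoft Graph. As a rough sketch, the wizard creates something like an application object via `POST https://graph.microsoft.com/v1.0/applications`. The property names below are real Graph application properties, but the exact set the Easy Button configures is an assumption here, and all values are hypothetical.

```python
import json

def saml_app_registration(display_name, entity_id, reply_url):
    """Build an illustrative Graph application payload for a SAML SP.
    The exact properties the Easy Button sets are not documented here."""
    return {
        "displayName": display_name,
        "identifierUris": [entity_id],         # corresponds to the SAML SP Entity ID
        "web": {"redirectUris": [reply_url]},  # where SAML assertions are returned
    }

# Hypothetical values for illustration only
body = saml_app_registration(
    "BIG-IP published application",
    "https://app.contoso.com",
    "https://app.contoso.com/saml/acs",
)
print(json.dumps(body, sort_keys=True))
```

The BIG-IP authenticates this call with the tenant ID, client ID, and client secret of the Easy Button client application registered earlier, which is why that registration is a prerequisite.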
1. Enter a unique **Configuration Name** so admins can easily distinguish between Easy Button configurations.
Consider the **Azure Service Account Details** be the BIG-IP client application
The Service Provider settings define the SAML SP properties for the APM instance representing the application protected through SHA.
-1. Enter **Host**. This is the public FQDN of the application being secured. You'll need a corresponding DNS record for clients to resolve this address, but using a localhost record is fine during testing
+1. Enter **Host**. This is usually the FQDN that will be used for the application's external URL
2. Enter **Entity ID**. This is the identifier Azure AD will use to identify the SAML SP requesting a token
Selected policies should either have an **Include** or **Exclude** option checke
A virtual server is a BIG-IP data plane object represented by a virtual IP address listening for client requests to the application. Any received traffic is processed and evaluated against the APM profile associated with the virtual server, before being directed according to the policy results and settings.
-1. Enter **Destination Address**. This is any available IPv4/IPv6 address that the BIG-IP can use to receive client traffic. A corresponding record should also exist in DNS, enabling clients to resolve the external URL of your BIG-IP published application to this IP.
+1. Enter **Destination Address**. This is any available IPv4/IPv6 address that the BIG-IP can use to receive client traffic. A corresponding record should also exist in DNS, enabling clients to resolve the external URL of your BIG-IP published application to this IP, instead of the application itself. Using a localhost record on a test PC is fine for testing.
2. Enter **Service Port** as *443* for HTTPS
active-directory F5 Big Ip Kerberos Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-kerberos-easy-button.md
The SHA solution for this scenario is made up of the following:
**Application:** BIG-IP published service to be protected by Azure AD SHA. The application host is domain-joined and so is integrated with Active Directory (AD).
-**Azure AD:** Security Assertion Markup Language (SAML) Identity Provider (IdP) responsible for verification of user credentials, Conditional Access (CA), and SSO to the BIG-IP APM.
+**Azure AD:** Security Assertion Markup Language (SAML) Identity Provider (IdP) responsible for verification of user credentials, Conditional Access (CA), and SAML-based SSO to the BIG-IP. Through SSO, Azure AD provides the BIG-IP with any required session attributes.
**KDC:** Key Distribution Center (KDC) role on a Domain Controller (DC), issuing Kerberos tickets.
-**BIG-IP:** Reverse proxy and SAML service provider (SP) to the application, delegating authentication to the SAML IdP before performing header-based SSO to the PeopleSoft service.
+**BIG-IP:** Reverse proxy and SAML service provider (SP) to the application, delegating authentication to the SAML IdP before performing Kerberos-based SSO to the backend application.
SHA for this scenario supports both SP and IdP initiated flows. The following image illustrates the SP initiated flow.
SHA for this scenario supports both SP and IdP initiated flows. The following im
| Steps| Description|
| -- |-|
-| 1| User connects to SAML SP endpoint for application (BIG-IP APM) |
-| 2| APM access policy redirects user to Azure AD (SAML IdP) |
-| 3| Azure AD pre-authenticates user and applies any enforced CA policies |
+| 1| User connects to application endpoint (BIG-IP) |
+| 2| BIG-IP APM access policy redirects user to Azure AD (SAML IdP) |
+| 3| Azure AD pre-authenticates user and applies any enforced Conditional Access policies |
| 4| User is redirected to BIG-IP (SAML SP) and SSO is performed using issued SAML token |
| 5| BIG-IP requests Kerberos ticket from KDC |
| 6| BIG-IP sends request to backend application, along with Kerberos ticket for SSO |
Prior BIG-IP experience isn't necessary, but you will need:
* An account with Azure AD Application admin [permissions](/azure/active-directory/users-groups-roles/directory-assign-admin-roles#application-administrator)
-* Web server [certificate](./f5-bigip-deployment-guide.md) for publishing services over HTTPS or use default BIG-IP certs while testing
+* An [SSL Web certificate](./f5-bigip-deployment-guide.md) for publishing services over HTTPS, or use default BIG-IP certs while testing
* An existing Kerberos application or [set up an IIS (Internet Information Services) app](https://active-directory-wp.com/docs/Networking/Single_Sign_On/SSO_with_IIS_on_Windows.html) for KCD SSO

## BIG-IP configuration methods
-There are many methods to configure BIG-IP for this scenario, including two template-based options and an advanced configuration. This tutorial covers latest Guided Configuration 16.1 offering an Easy button template.
-
-With the **Easy Button**, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for SHA. The end-to-end deployment and policy management is handled directly between the APM's Guided Configuration wizard and Microsoft Graph. This rich integration between BIG-IP APM and Azure AD ensures applications can quickly, easily support identity federation, SSO, and Azure AD Conditional Access, reducing administrative overhead.
-
-The advanced approach provides a more flexible way of implementing SHA by manually creating all BIG-IP configuration objects. You would also use this approach for scenarios not covered by the guided configuration templates.
+There are many methods to configure BIG-IP for this scenario, including two template-based options and an advanced configuration. This tutorial covers the latest Guided Configuration 16.1, which offers an Easy Button template. With the Easy Button, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for SHA. The deployment and policy management is handled directly between the APM's Guided Configuration wizard and Microsoft Graph. This rich integration between BIG-IP APM and Azure AD ensures that applications can quickly and easily support identity federation, SSO, and Azure AD Conditional Access, reducing administrative overhead.
>[!NOTE]
> All example strings or values referenced throughout this guide should be replaced with those for your actual environment.
The Easy Button client must also be registered in Azure AD, before it is allowed
## Configure Easy Button
-Next, step through the Easy Button configurations, and complete the trust to start publishing the internal application. Start by provisioning your BIG-IP with an X509 certificate that Azure AD can use to sign SAML tokens and claims issued for SHA enabled services.
-
-1. From a browser, sign-in to the **F5 BIG-IP management console**
-
-2. Navigate to **System > Certificate Management > Traffic Certificate Management SSL Certificate List > Import**
-
-3. Select **PKCS 12 (IIS)** and import your certificate along with its private key
-
-Once provisioned, the certificate can be used for every application published through Easy Button. You can also choose to upload a separate certificate for individual applications.
-
- ![Screenshot for Configure Easy Button- Import SSL certificates and keys](./media/f5-big-ip-kerberos-easy-button/config-easy-button.png)
-
-4. Navigate to **Access > Guided Configuration > Microsoft Integration and select Azure AD Application**
-
-You can now access the Easy Button functionality that provides quick configuration steps to set up the APM as a SAML Service Provider (SP) and Azure AD as an Identity Provider (IdP) for your application.
+Initiate the **Easy Button** configuration to set up a SAML Service Provider (SP) and Azure AD as an Identity Provider (IdP) for your application.
- ![Screenshot for Configure Easy Button- Install the template](./media/f5-big-ip-kerberos-easy-button/easy-button-template.png)
+1. Navigate to **Access > Guided Configuration > Microsoft Integration** and select **Azure AD Application**.
-5. Review the list of configuration steps and select **Next**
+ ![Screenshot for Configure Easy Button- Install the template](./media/f5-big-ip-easy-button-ldap/easy-button-template.png)
- ![Screenshot for Configure Easy Button - List configuration steps](./media/f5-big-ip-kerberos-easy-button/config-steps.png)
+2. Review the list of configuration steps and select **Next**
-## Configuration steps
+ ![Screenshot for Configure Easy Button - List configuration steps](./media/f5-big-ip-easy-button-ldap/config-steps.png)
-The **Easy Button** template will display the sequence of steps required to publish your application.
+3. Follow the sequence of steps required to publish your application.
- ![Configuration steps flow](./media/f5-big-ip-kerberos-easy-button/config-steps-flow.png)
+ ![Configuration steps flow](./media/f5-big-ip-easy-button-ldap/config-steps-flow.png#lightbox)
### Configuration Properties
-These are general and service account properties. The **Configuration Properties** tab creates up a new application config and SSO object that will be managed through the BIG-IP's Guided Configuration UI. This configuration can then be reused for publishing more applications through the Easy Button template.
+The **Configuration Properties** tab creates a new application config and SSO object. Consider the **Azure Service Account Details** section to be the client application you registered in your Azure AD tenant earlier. These settings allow the BIG-IP to programmatically register a SAML application directly in your tenant, along with the properties you would normally configure manually. Easy Button does this for every BIG-IP APM service enabled for SHA.
-Consider the **Azure Service Account Details** be the BIG-IP client application you registered in your Azure AD tenant earlier. This section allows the BIG-IP to programmatically register a SAML application directly in your tenant, along with the other properties you would normally configure manually in the portal. Easy Button will do this for every BIG-IP APM service being published and enabled for SHA.
+Some of these are global settings, so they can be reused when publishing more applications, further reducing deployment time and effort.
1. Provide a unique **Configuration Name** so admins can easily distinguish between Easy Button configurations
Before you select **Next**, confirm the BIG-IP can successfully connect to your
The Service Provider settings define the SAML SP properties for the APM instance representing the application protected through SHA.
-1. Enter **Host.** This is the public FQDN of the application being secured. You'll need a corresponding DNS record for clients to resolve this address, but using a localhost record is fine during testing
+1. Enter **Host**. This is usually the FQDN that will be used for the application's external URL
2. Enter **Entity ID.** This is the identifier Azure AD will use to identify the SAML SP requesting a token
Selected policies should either have an **Include** or **Exclude** option checke
A virtual server is a BIG-IP data plane object represented by a virtual IP address listening for client requests to the application. Any received traffic is processed and evaluated against the APM profile associated with the virtual server, before being directed according to the policy results and settings.
-1. Enter **Destination Address**. This is any available IPv4/IPv6 address that the BIG-IP can use to receive client traffic. A corresponding record should also exist in DNS, enabling clients to resolve the external URL of your BIG-IP published application to this IP.
+1. Enter **Destination Address**. This is any available IPv4/IPv6 address that the BIG-IP can use to receive client traffic. A corresponding record should also exist in DNS, enabling clients to resolve the external URL of your BIG-IP published application to this IP, instead of the application itself. Using a localhost record on a test PC is fine for testing.
2. Enter **Service Port** as *443* for HTTPS
active-directory F5 Big Ip Ldap Header Easybutton https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-ldap-header-easybutton.md
The secure hybrid access solution for this scenario is made up of:
**Application:** BIG-IP published service to be protected by Azure AD SHA.
-**Azure AD:** Security Assertion Markup Language (SAML) Identity Provider (IdP) responsible for verification of user credentials, Conditional Access (CA), and SSO to the BIG-IP APM. Through SSO, Azure AD provides the BIG-IP with any required session attributes.
+**Azure AD:** Security Assertion Markup Language (SAML) Identity Provider (IdP) responsible for verification of user credentials, Conditional Access (CA), and SAML-based SSO to the BIG-IP. Through SSO, Azure AD provides the BIG-IP with any required session attributes.
-**HR system:** Legacy employee database acting as source of truth for fine grained application permissions.
+**HR system:** LDAP-based employee database acting as source of truth for fine-grained application permissions.
**BIG-IP:** Reverse proxy and SAML service provider (SP) to the application, delegating authentication to the SAML IdP before performing header-based SSO to the backend application.
SHA for this scenario supports both SP and IdP initiated flows. The following im
| Steps| Description |
| -- |-|
-| 1| User connects to applicationΓÇÖs SAML SP endpoint (BIG-IP APM) |
-| 2| APM access policy redirects user to SAML IdP (Azure AD) for pre-authentication |
-| 3| Azure AD authenticates user and applies any enforced CA policies |
-| 4| User is redirected back to BIG-IP with issued token and claims |
-| 5| BIG-IP authenticates user and requests more attributes from HR system |
-| 6| BIG-IP injects Azure AD and HR system attributes as headers in request to the application |
+| 1| User connects to application endpoint (BIG-IP) |
+| 2| BIG-IP APM access policy redirects user to Azure AD (SAML IdP) |
+| 3| Azure AD pre-authenticates user and applies any enforced Conditional Access policies |
+| 4| User is redirected to BIG-IP (SAML SP) and SSO is performed using issued SAML token |
+| 5| BIG-IP requests additional attributes from LDAP-based HR system |
+| 6| BIG-IP injects Azure AD and HR system attributes as headers in request to application |
| 7| Application authorizes access with enriched session permissions |

## Prerequisites
Prior BIG-IP experience isn't necessary, but you'll need:
- An account with Azure AD application admin [permissions](/azure/active-directory/users-groups-roles/directory-assign-admin-roles#application-administrator)
-- An [SSL certificate](./f5-bigip-deployment-guide.md#ssl-profile) for publishing services over HTTPS, or use default certificates while testing
+- An [SSL Web certificate](./f5-bigip-deployment-guide.md#ssl-profile) for publishing services over HTTPS, or use default BIG-IP certs while testing
- An existing header-based application or [setup a simple IIS header app](/previous-versions/iis/6.0-sdk/ms525396(v=vs.90)) for testing
Prior BIG-IP experience isn't necessary, but you'll need:
## BIG-IP configuration methods
-There are many methods to deploy BIG-IP for this scenario including a template-driven Guided Configuration wizard, or the manual advanced configuration. This tutorial covers the Easy Button templates offered by the Guided Configuration 16.1 and upwards.
-
-With the **Easy Button**, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for secure hybrid access. The end-to-end deployment and policy management is handled directly between the APM's Guided Configuration wizard and Microsoft Graph. This rich integration between BIG-IP APM and Azure AD ensures applications can quickly, easily support identity federation, SSO, and Azure AD Conditional Access, reducing administrative overhead.
-
-For scenarios where the Guided Configuration lacks the flexibility to achieve a particular set of requirements, see the [Advanced deployment](#advanced-deployment) at the end of this tutorial.
+There are many methods to configure BIG-IP for this scenario, including two template-based options and an advanced configuration. This tutorial covers the latest Guided Configuration 16.1, which offers an Easy Button template. With the Easy Button, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for SHA. The deployment and policy management is handled directly between the APM's Guided Configuration wizard and Microsoft Graph. This rich integration between BIG-IP APM and Azure AD ensures that applications can quickly and easily support identity federation, SSO, and Azure AD Conditional Access, reducing administrative overhead.
>[!NOTE]
>All example strings or values referenced throughout this guide should be replaced with those for your actual environment.
The Easy Button client must also be registered in Azure AD, before it is allowed
## Configure Easy Button
-Next, step through the Easy Button configurations to federate and publish the EBS application. Start by provisioning your BIG-IP with an X509 certificate that Azure AD can use to sign SAML tokens and claims issued for SHA enabled services.
-
-1. From a browser, sign-in to the F5 BIG-IP management console
-2. Navigate to **System > Certificate Management > Traffic Certificate Management SSL Certificate List > Import**
-3. Select **PKCS 12 (IIS)** and import your certificate along with its private key
-
- Once provisioned, the certificate can be used for every application published through Easy Button. You can also choose to upload a separate certificate for individual applications.
+Initiate the **Easy Button** configuration to set up a SAML Service Provider (SP) and Azure AD as an Identity Provider (IdP) for your application.
-
- ![Screenshot for Configure Easy Button- Import SSL certificates and keys](./media/f5-big-ip-easy-button-ldap/configure-easy-button.png)
-
-4. Navigate to **Access > Guided Configuration > Microsoft Integration** and select **Azure AD Application**
-
- You can now access the Easy Button functionality that provides quick configuration steps to set up the APM as a SAML Service Provider (SP) and Azure AD as an Identity Provider (IdP) for your application.
+1. Navigate to **Access > Guided Configuration > Microsoft Integration** and select **Azure AD Application**.
![Screenshot for Configure Easy Button- Install the template](./media/f5-big-ip-easy-button-ldap/easy-button-template.png)
-5. Review the list of configuration steps and select **Next**
+2. Review the list of configuration steps and select **Next**
![Screenshot for Configure Easy Button - List configuration steps](./media/f5-big-ip-easy-button-ldap/config-steps.png)
-## Configuration steps
+3. Follow the sequence of steps required to publish your application.
-The **Easy Button** template will display the sequence of steps required to publish your application.
+ ![Configuration steps flow](./media/f5-big-ip-easy-button-ldap/config-steps-flow.png#lightbox)
- ![Configuration steps flow](./media/f5-big-ip-easy-button-ldap/config-steps-flow.png)
### Configuration Properties
-These are general and service account properties. The **Configuration Properties** tab creates up a new application config and SSO object that will be managed through the BIG-IP's Guided Configuration UI. This configuration can then be reused for publishing more applications through the Easy Button template.
+The **Configuration Properties** tab creates a new application config and SSO object. Consider the **Azure Service Account Details** section to be the client application you registered in your Azure AD tenant earlier. These settings allow the BIG-IP to programmatically register a SAML application directly in your tenant, along with the properties you would normally configure manually. Easy Button does this for every BIG-IP APM service enabled for SHA.
-Consider the **Azure Service Account Details** be the BIG-IP client application you registered in your Azure AD tenant earlier. This section allows the BIG-IP to programmatically register a SAML application directly in your tenant, along with the other properties you would normally configure manually in the portal. Easy Button will do this for every BIG-IP APM service being published and enabled for SHA.
+Some of these are global settings, so they can be reused when publishing more applications, further reducing deployment time and effort.
1. Enter a unique **Configuration Name** so admins can easily distinguish between Easy Button configurations.
Consider the **Azure Service Account Details** be the BIG-IP client application
The Service Provider settings define the SAML SP properties for the APM instance representing the application protected through secure hybrid access.
-1. Enter **Host**. This is the public FQDN of the application being secured. You'll need a corresponding DNS record for clients to resolve this address, but using a localhost record is fine during testing
+1. Enter **Host**. This is usually the FQDN that will be used for the application's external URL
2. Enter **Entity ID**. This is the identifier Azure AD will use to identify the SAML SP requesting a token

   ![Screenshot for Service Provider settings](./media/f5-big-ip-easy-button-ldap/service-provider.png)
- The optional **Security Settings** specify whether Azure AD should encrypt issued SAML assertions. Encrypting assertions between Azure AD and the BIG-IP APM provides additional assurance that the content tokens can't be intercepted, and personal or corporate data compromised.
+The optional **Security Settings** specify whether Azure AD should encrypt issued SAML assertions. Encrypting assertions between Azure AD and the BIG-IP APM provides additional assurance that the content tokens can't be intercepted, and personal or corporate data compromised.
3. From the **Assertion Decryption Private Key** list, select **Create New**

   ![Screenshot for Configure Easy Button- Create New import](./media/f5-big-ip-oracle/configure-security-create-new.png)

4. Select **OK**. This opens the **Import SSL Certificate and Keys** dialog in a new tab

5. Select **PKCS 12 (IIS)** to import your certificate and private key. Once provisioned, close the browser tab to return to the main tab.

   ![Screenshot for Configure Easy Button- Import new cert](./media/f5-big-ip-oracle/import-ssl-certificates-and-keys.png)

6. Check **Enable Encrypted Assertion**.

7. If you have enabled encryption, select your certificate from the **Assertion Decryption Private Key** list. This is the private key for the certificate that BIG-IP APM will use to decrypt Azure AD assertions.

8. If you have enabled encryption, select your certificate from the **Assertion Decryption Certificate** list. This is the certificate that BIG-IP will upload to Azure AD for encrypting the issued SAML assertions.

   ![Screenshot for Service Provider security settings](./media/f5-big-ip-easy-button-ldap/service-provider-security-settings.png)

### Azure Active Directory
Selected policies should either have an **Include** or **Exclude** option checke
A virtual server is a BIG-IP data plane object represented by a virtual IP address listening for client requests to the application. Any received traffic is processed and evaluated against the APM profile associated with the virtual server, before being directed according to the policy results and settings.
-1. Enter **Destination Address**. This is any available IPv4/IPv6 address that the BIG-IP can use to receive client traffic. A corresponding record should also exist in DNS, enabling clients to resolve the external URL of your BIG-IP published application to this IP.
+1. Enter **Destination Address**. This is any available IPv4/IPv6 address that the BIG-IP can use to receive client traffic. A corresponding record should also exist in DNS, enabling clients to resolve the external URL of your BIG-IP published application to this IP, instead of the application itself. Using a test PC's localhost DNS is fine for testing.
2. Enter **Service Port** as *443* for HTTPS
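For initial testing, the DNS requirement above can be met on a test PC with a hosts-file entry; the hostname and IP below are placeholders, not values from this guide:

```
# /etc/hosts on Linux/macOS, or C:\Windows\System32\drivers\etc\hosts on Windows
# <BIG-IP virtual server destination address>   <published application FQDN>
192.0.2.10   myapp.contoso.com
```

This maps the published FQDN straight to the virtual server address without touching production DNS.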
active-directory F5 Big Ip Oracle Enterprise Business Suite Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-oracle-enterprise-business-suite-easy-button.md
The secure hybrid access solution for this scenario is made up of several compon
**Oracle EBS Application:** BIG-IP published service to be protected by Azure AD SHA.
-**Azure AD:** Security Assertion Markup Language (SAML) Identity Provider (IdP) responsible for verification of user credentials, Conditional Access (CA), and SSO to the BIG-IP.
+**Azure AD:** Security Assertion Markup Language (SAML) Identity Provider (IdP) responsible for verification of user credentials, Conditional Access (CA), and SAML based SSO to the BIG-IP. Through SSO, Azure AD provides the BIG-IP with any required session attributes.
**Oracle Internet Directory (OID):** Hosts the user database. BIG-IP checks via LDAP for authorization attributes.

**Oracle AccessGate:** Validates authorization attributes through back channel with OID service, before issuing EBS access cookies.
-**BIG-IP:** Reverse proxy and SAML service provider (SP) to the application, delegating authentication to the SAML IdP before performing header-based SSO to the Oracle service.
+**BIG-IP:** Reverse proxy and SAML service provider (SP) to the application, delegating authentication to the SAML IdP before performing header-based SSO to the Oracle application.
SHA for this scenario supports both SP and IdP initiated flows. The following image illustrates the SP initiated flow.
SHA for this scenario supports both SP and IdP initiated flows. The following im
| 1| User connects to application endpoint (BIG-IP) |
| 2| BIG-IP APM access policy redirects user to Azure AD (SAML IdP) |
| 3| Azure AD pre-authenticates user and applies any enforced Conditional Access policies |
-| 4| User is redirected back to BIG-IP with issued token and claims |
-| 5| BIG-IP authenticates user and performs LDAP query for user Unique ID (UID) attribute |
+| 4| User is redirected back to BIG-IP (SAML SP) and SSO is performed using issued SAML token |
+| 5| BIG-IP performs LDAP query for user's Unique ID (UID) attribute |
| 6| BIG-IP injects returned UID attribute as user_orclguid header in EBS session cookie request to Oracle AccessGate |
| 7| Oracle AccessGate validates UID against Oracle Internet Directory (OID) service and issues EBS access cookie |
| 8| EBS user headers and cookie sent to application and returns the payload to the user |
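Steps 5 and 6 of the flow above can be sketched in a few lines. This is an illustrative sketch only, not BIG-IP configuration: APM performs the LDAP lookup and header injection natively, and the attribute name `orclguid` and the helper function are assumptions made for clarity.

```python
# Illustrative sketch (not BIG-IP config): map the UID attribute returned by
# the OID LDAP query to the SSO header Oracle AccessGate expects.
# The LDAP attribute name "orclguid" is an assumption for this sketch.

def build_ebs_headers(ldap_entry: dict) -> dict:
    """Return the header BIG-IP would inject into the EBS cookie request."""
    uid = ldap_entry["orclguid"]  # user's unique ID from Oracle Internet Directory
    return {"user_orclguid": uid}

print(build_ebs_headers({"orclguid": "3A5B9C0D"}))
```

The point is simply that the value fetched over LDAP travels onward as the `user_orclguid` header, which AccessGate then validates against OID.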
Prior BIG-IP experience isn't necessary, but you need:
* An account with Azure AD application admin [permissions](/azure/active-directory/users-groups-roles/directory-assign-admin-roles#application-administrator)
-* [SSL certificate](./f5-bigip-deployment-guide.md#ssl-profile) for publishing services over HTTPS
+* An [SSL Web certificate](./f5-bigip-deployment-guide.md#ssl-profile) for publishing services over HTTPS, or use default BIG-IP certs while testing
* An existing Oracle EBS suite including Oracle AccessGate and an LDAP enabled OID (Oracle Internet Database)
Some of these are global settings so can be re-used for publishing more applicat
The **Service Provider** settings define the SAML SP properties for the APM instance representing the application protected through SHA.
-1. Enter **Host**. This is the public FQDN of the application being secured. You need a corresponding DNS record for clients to resolve this address, but using a localhost record is fine during testing
+1. Enter **Host**. This is usually the FQDN that will be used for the application's external URL
2. Enter **Entity ID**. This is the identifier Azure AD will use to identify the SAML SP requesting a token
This section defines all properties that you would normally use to manually conf
When a user successfully authenticates, Azure AD issues a SAML token with a default set of claims and attributes uniquely identifying the user. The **User Attributes & Claims** tab shows the default claims to issue for the new application. It also lets you configure more claims.
-![Screenshot for Azure configuration – User attributes & claims](./media/f5-big-ip-easy-button-ldap/user-attributes-claims.png)
+ ![Screenshot for user attributes and claims](./media/f5-big-ip-kerberos-easy-button/user-attributes-claims.png)
You can include additional Azure AD attributes if necessary, but the Oracle EBS scenario only requires the default attributes.
To select a policy to be applied to the application being published:
A virtual server is a BIG-IP data plane object represented by a virtual IP address listening for client requests to the application. Any received traffic is processed and evaluated against the APM profile associated with the virtual server, before being directed according to the policy results and settings.
-1. Enter **Destination Address**. This is any available IPv4/IPv6 address that the BIG-IP can use to receive client traffic. A corresponding record should also exist in DNS, enabling clients to resolve the external URL of your BIG-IP published application to this IP.
+1. Enter **Destination Address**. This is any available IPv4/IPv6 address that the BIG-IP can use to receive client traffic. A corresponding record should also exist in DNS, enabling clients to resolve the external URL of your BIG-IP published application to this IP, instead of the application itself. Using a test PC's localhost DNS is fine for testing.
2. Enter **Service Port** as *443* for HTTPS
active-directory F5 Big Ip Oracle Jde Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-oracle-jde-easy-button.md
The secure hybrid access solution for this scenario is made up of several compon
**Oracle JDE Application:** BIG-IP published service to be protected by Azure AD SHA.
-**Azure AD:** Security Assertion Markup Language (SAML) Identity Provider (IdP) responsible for verification of user credentials, Conditional Access (CA), and SSO to the BIG-IP.
+**Azure AD:** Security Assertion Markup Language (SAML) Identity Provider (IdP) responsible for verification of user credentials, Conditional Access (CA), and SAML based SSO to the BIG-IP. Through SSO, Azure AD provides the BIG-IP with any required session attributes.
-**BIG-IP APM:** Reverse proxy and SAML service provider (SP) to the application, delegating authentication to the SAML IdP before performing header-based SSO to the Oracle service.
+**BIG-IP:** Reverse proxy and SAML service provider (SP) to the application, delegating authentication to the SAML IdP before performing header-based SSO to the Oracle service.
SHA for this scenario supports both SP and IdP initiated flows. The following image illustrates the SP initiated flow.
Prior BIG-IP experience isn't necessary, but you need:
* An account with Azure AD application admin [permissions](/azure/active-directory/users-groups-roles/directory-assign-admin-roles#application-administrator)
-* [SSL certificate](./f5-bigip-deployment-guide.md#ssl-profile) for publishing services over HTTPS
+* An [SSL Web certificate](./f5-bigip-deployment-guide.md#ssl-profile) for publishing services over HTTPS, or use default BIG-IP certs while testing
* An existing Oracle JDE environment
Initiate the **Easy Button** configuration to set up a SAML Service Provider (SP
### Configuration Properties
-The **Configuration Properties** tab creates up a new application config and SSO object. Consider **Azure Service Account Details** section to be the client application you registered in your Azure AD tenant earlier. These settings allow a BIG-IP to programmatically register a SAML application directly in your tenant, along with the properties you would normally configure manually. Easy Button does this for every BIG-IP APM service being enabled for SHA.
+The **Configuration Properties** tab creates a new application config and SSO object. Consider the **Azure Service Account Details** section to be the client application you registered in your Azure AD tenant earlier. These settings allow a BIG-IP to programmatically register a SAML application directly in your tenant, along with the properties you would normally configure manually. Easy Button does this for every BIG-IP APM service being enabled for SHA.
-Some of these are global settings so can be re-used for publishing more applications, further reducing deployment time and effort.
+Some of these are global settings and can be re-used for publishing more applications, further reducing deployment time and effort.
1. Provide a unique **Configuration Name** that enables an admin to easily distinguish between Easy Button configurations
Some of these are global settings so can be re-used for publishing more applicat
The **Service Provider** settings define the SAML SP properties for the APM instance representing the application protected through SHA.
-1. Enter **Host**. This is the public FQDN of the application being secured. You need a corresponding DNS record for clients to resolve this address, but using a localhost record is fine during testing
+1. Enter **Host**. This is usually the FQDN that will be used for the application's external URL
2. Enter **Entity ID**. This is the identifier Azure AD will use to identify the SAML SP requesting a token
This section defines all properties that you would normally use to manually conf
When a user successfully authenticates, Azure AD issues a SAML token with a default set of claims and attributes uniquely identifying the user. The **User Attributes & Claims** tab shows the default claims to issue for the new application. It also lets you configure more claims.
-![Screenshot for Azure configuration – User attributes & claims](./media/f5-big-ip-easy-button-ldap/user-attributes-claims.png)
+ ![Screenshot for user attributes and claims](./media/f5-big-ip-kerberos-easy-button/user-attributes-claims.png)
You can include additional Azure AD attributes if necessary, but the Oracle JDE scenario only requires the default attributes.
You can include additional Azure AD attributes if necessary, but the Oracle JDE
The **Additional User Attributes** tab can support a variety of distributed systems requiring attributes stored in other directories for session augmentation. Attributes fetched from an LDAP source can then be injected as additional SSO headers to further control access based on roles, Partner IDs, etc.
+ ![Screenshot for additional user attributes](./media/f5-big-ip-easy-button-header/additional-user-attributes.png)
+
+>[!NOTE]
+>This feature has no correlation to Azure AD but is another source of attributes.
+ #### Conditional Access Policy

Conditional Access policies are enforced post Azure AD pre-authentication, to control access based on device, application, location, and risk signals.
To select a policy to be applied to the application being published:
A virtual server is a BIG-IP data plane object represented by a virtual IP address listening for client requests to the application. Any received traffic is processed and evaluated against the APM profile associated with the virtual server, before being directed according to the policy results and settings.
-1. Enter **Destination Address**. This is any available IPv4/IPv6 address that the BIG-IP can use to receive client traffic. A corresponding record should also exist in DNS, enabling clients to resolve the external URL of your BIG-IP published application to this IP.
+1. Enter **Destination Address**. This is any available IPv4/IPv6 address that the BIG-IP can use to receive client traffic. A corresponding record should also exist in DNS, enabling clients to resolve the external URL of your BIG-IP published application to this IP, instead of the application itself. Using a test PC's localhost DNS is fine for testing.
2. Enter **Service Port** as *443* for HTTPS
active-directory Howto Download Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-download-logs.md
Previously updated : 05/14/2021 Last updated : 02/25/2022
This article explains how to download activity logs in Azure AD.
- By downloading the logs, you can control for how long logs are stored.
-- You can download up to 250 000 records. If you want to download more data, use the reporting API.
-
- Your download is based on the filter you have set.
- Azure AD supports the following formats for your download:
This article explains how to download activity logs in Azure AD.
- The timestamps in the downloaded files are always based on UTC.
+- For large data sets (> 250 000 records), you should use the reporting API to download the data.
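As a sketch of the reporting-API route for these large data sets: the reporting API is exposed through Microsoft Graph, and sign-in logs can be paged from the v1.0 `auditLogs/signIns` endpoint. The snippet below only builds the request without sending it; acquiring the bearer token (e.g., with the `AuditLog.Read.All` permission) is assumed to have happened already, and the token value is a placeholder.

```python
# Sketch only: build (but don't send) a paged Microsoft Graph request for
# sign-in logs. Token acquisition is out of scope; the value is a placeholder.
import urllib.request

token = "<access token with AuditLog.Read.All>"
url = "https://graph.microsoft.com/v1.0/auditLogs/signIns?$top=1000"
req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
# urllib.request.urlopen(req) would return one page of results; each page
# carries an @odata.nextLink URL to follow until the full set is downloaded.
print(req.full_url)
```

Following `@odata.nextLink` page by page is what lets you exceed the 250 000-record portal limit.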
## What license do you need?
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
Users with this role have access to all administrative features in Azure Active
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | --- | --- |
-> | microsoft.directory/accessReviews/allProperties/allTasks | Create and delete access reviews, read and update all properties of access reviews, and manage access reviews of groups in Azure AD |
+> | microsoft.directory/accessReviews/allProperties/allTasks | (Deprecated) Create and delete access reviews, read and update all properties of access reviews, and manage access reviews of groups in Azure AD |
> | microsoft.directory/accessReviews/definitions/allProperties/allTasks | Manage access reviews of all reviewable resources in Azure AD |
> | microsoft.directory/adminConsentRequestPolicy/allProperties/allTasks | Manage admin consent request policies in Azure AD |
> | microsoft.directory/administrativeUnits/allProperties/allTasks | Create and manage administrative units (including members) |
Users in this role can read settings and administrative information across Micro
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | --- | --- |
-> | microsoft.directory/accessReviews/allProperties/read | Read all properties of access reviews |
+> | microsoft.directory/accessReviews/allProperties/read | (Deprecated) Read all properties of access reviews |
> | microsoft.directory/accessReviews/definitions/allProperties/read | Read all properties of access reviews of all reviewable resources in Azure AD |
> | microsoft.directory/adminConsentRequestPolicy/allProperties/read | Read all properties of admin consent request policies in Azure AD |
> | microsoft.directory/administrativeUnits/allProperties/read | Read all properties of administrative units, including members |
Users with this role can manage Azure AD identity governance configuration, incl
> | microsoft.directory/accessReviews/definitions.groups/allProperties/update | Update all properties of access reviews for membership in Security and Microsoft 365 groups, excluding role-assignable groups. |
> | microsoft.directory/accessReviews/definitions.groups/create | Create access reviews for membership in Security and Microsoft 365 groups. |
> | microsoft.directory/accessReviews/definitions.groups/delete | Delete access reviews for membership in Security and Microsoft 365 groups. |
-> | microsoft.directory/accessReviews/allProperties/allTasks | Create and delete access reviews, read and update all properties of access reviews, and manage access reviews of groups in Azure AD |
+> | microsoft.directory/accessReviews/allProperties/allTasks | (Deprecated) Create and delete access reviews, read and update all properties of access reviews, and manage access reviews of groups in Azure AD |
> | microsoft.directory/entitlementManagement/allProperties/allTasks | Create and delete resources, and read and update all properties in Azure AD entitlement management |
> | microsoft.directory/groups/members/update | Update members of Security groups and Microsoft 365 groups, excluding role-assignable groups |
> | microsoft.directory/servicePrincipals/appRoleAssignedTo/update | Update service principal role assignments |
active-directory Appsec Flow Sso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/appsec-flow-sso-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with AppSec Flow SSO | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and AppSec Flow SSO.
+ Title: 'Tutorial: Azure AD SSO integration with Conviso Platform SSO'
+description: Learn how to configure single sign-on between Azure Active Directory and Conviso Platform SSO.
Previously updated : 10/28/2020 Last updated : 02/23/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with AppSec Flow SSO
+# Tutorial: Azure AD SSO integration with Conviso Platform SSO
-In this tutorial, you'll learn how to integrate AppSec Flow SSO with Azure Active Directory (Azure AD). When you integrate AppSec Flow SSO with Azure AD, you can:
+In this tutorial, you'll learn how to integrate Conviso Platform SSO with Azure Active Directory (Azure AD). When you integrate Conviso Platform SSO with Azure AD, you can:
-* Control in Azure AD who has access to AppSec Flow SSO.
-* Enable your users to be automatically signed-in to AppSec Flow SSO with their Azure AD accounts.
+* Control in Azure AD who has access to Conviso Platform SSO.
+* Enable your users to be automatically signed-in to Conviso Platform SSO with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.

## Prerequisites
In this tutorial, you'll learn how to integrate AppSec Flow SSO with Azure Activ
To get started, you need the following items:

* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* AppSec Flow SSO single sign-on (SSO) enabled subscription.
+* Conviso Platform SSO single sign-on (SSO) enabled subscription.
## Scenario description

In this tutorial, you configure and test Azure AD SSO in a test environment.
-* AppSec Flow SSO supports **IDP** initiated SSO
+* Conviso Platform SSO supports **IDP** initiated SSO.
-## Adding AppSec Flow SSO from the gallery
+## Adding Conviso Platform SSO from the gallery
-To configure the integration of AppSec Flow SSO into Azure AD, you need to add AppSec Flow SSO from the gallery to your list of managed SaaS apps.
+To configure the integration of Conviso Platform SSO into Azure AD, you need to add Conviso Platform SSO from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **AppSec Flow SSO** in the search box.
-1. Select **AppSec Flow SSO** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **Conviso Platform SSO** in the search box.
+1. Select **Conviso Platform SSO** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO for AppSec Flow SSO
+## Configure and test Azure AD SSO for Conviso Platform SSO
-Configure and test Azure AD SSO with AppSec Flow SSO using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in AppSec Flow SSO.
+Configure and test Azure AD SSO with Conviso Platform SSO using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Conviso Platform SSO.
-To configure and test Azure AD SSO with AppSec Flow SSO, perform the following steps:
+To configure and test Azure AD SSO with Conviso Platform SSO, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure AppSec Flow SSO SSO](#configure-appsec-flow-sso-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create AppSec Flow SSO test user](#create-appsec-flow-sso-test-user)** - to have a counterpart of B.Simon in AppSec Flow SSO that is linked to the Azure AD representation of user.
+1. **[Configure Conviso Platform SSO SSO](#configure-conviso-platform-sso-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Conviso Platform SSO test user](#create-conviso-platform-sso-test-user)** - to have a counterpart of B.Simon in Conviso Platform SSO that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works.

## Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **AppSec Flow SSO** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Conviso Platform SSO** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
Follow these steps to enable Azure AD SSO in the Azure portal.
![The Certificate download link](common/certificatebase64.png)
-1. On the **Set up AppSec Flow SSO** section, copy the appropriate URL(s) based on your requirement.
+1. On the **Set up Conviso Platform SSO** section, copy the appropriate URL(s) based on your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)

### Create an Azure AD test user

In this section, you'll create a test user in the Azure portal called B.Simon.
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to AppSec Flow SSO.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Conviso Platform SSO.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **AppSec Flow SSO**.
+1. In the applications list, select **Conviso Platform SSO**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure AppSec Flow SSO SSO
+## Configure Conviso Platform SSO SSO
-To configure single sign-on on **AppSec Flow SSO** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [AppSec Flow SSO support team](mailto:sre@convisoappsec.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on **Conviso Platform SSO** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Conviso Platform SSO support team](mailto:sre@convisoappsec.com). They configure this setting to have the SAML SSO connection set properly on both sides.
-### Create AppSec Flow SSO test user
+### Create Conviso Platform SSO test user
-In this section, you create a user called Britta Simon in AppSec Flow SSO. Work with [AppSec Flow SSO support team](mailto:sre@convisoappsec.com) to add the users in the AppSec Flow SSO platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called Britta Simon in Conviso Platform SSO. Work with [Conviso Platform SSO support team](mailto:sre@convisoappsec.com) to add the users in the Conviso Platform SSO platform. Users must be created and activated before you use single sign-on.
## Test SSO

In this section, you test your Azure AD single sign-on configuration with the following options.
-1. Click on Test this application in Azure portal and you should be automatically signed in to the AppSec Flow SSO for which you set up the SSO
+* Click on Test this application in Azure portal and you should be automatically signed in to the Conviso Platform SSO for which you set up the SSO
-1. You can use Microsoft Access Panel. When you click the AppSec Flow SSO tile in the Access Panel, you should be automatically signed in to the AppSec Flow SSO for which you set up the SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* You can use Microsoft My Apps. When you click the Conviso Platform SSO tile in the My Apps, you should be automatically signed in to the Conviso Platform SSO for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
-Once you configure AppSec Flow SSO you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure Conviso Platform SSO you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Atlassian Cloud Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/atlassian-cloud-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Atlassian Cloud | Microsoft Docs'
+ Title: 'Tutorial: Azure Active Directory integration with Atlassian Cloud'
description: Learn how to configure single sign-on between Azure Active Directory and Atlassian Cloud.
Previously updated : 07/29/2021 Last updated : 02/07/2022

# Tutorial: Integrate Atlassian Cloud with Azure Active Directory
Follow these steps to enable Azure AD SSO in the Azure portal.
1. Before you start, go to your Atlassian product instance and copy/save the Instance URL.

   > [!NOTE]
- > URL should fit `https://<INSTANCE>.atlassian.com` pattern.
+ > URL should fit `https://<INSTANCE>.atlassian.net` pattern.
![Instance Name](./media/atlassian-cloud-tutorial/instance.png)
active-directory Bullseyetdp Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bullseyetdp-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
1. Determine what data to [map between Azure AD and BullseyeTDP](../app-provisioning/customize-application-attributes.md).

## Step 2. Configure BullseyeTDP to support provisioning with Azure AD
-Contact to BullseyeTDP technical support team <hello@bullseyetdp.com> in order to obtain a SCIM Token. Under the Admin Credentials section, enter `https://scim.bullseyeengagement.com/scim` in Tenant URL. Input the provided value in Secret Token.
+Contact [BullseyeTDP support](mailto:hello@bullseyetdp.com) in order to obtain a SCIM Token.
## Step 3. Add BullseyeTDP from the Azure AD application gallery
This section guides you through the steps to configure the Azure AD provisioning
![Provisioning tab automatic](common/provisioning-automatic.png)
-1. Under the **Admin Credentials** section, input your BullseyeTDP Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to BullseyeTDP. If the connection fails, ensure your BullseyeTDP account has Admin permissions and try again.
+1. Under the **Admin Credentials** section, input your BullseyeTDP Tenant URL as `https://scim.bullseyeengagement.com/scim` and corresponding Secret Token. Click **Test Connection** to ensure Azure AD can connect to BullseyeTDP. If the connection fails, ensure your BullseyeTDP account has Admin permissions and try again.
![Token](common/provisioning-testconnection-tenanturltoken.png)
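Conceptually, **Test Connection** issues an authenticated SCIM request against the tenant URL. The sketch below builds (without sending) such a request: the tenant URL is the one given in this article, while the token value and the `/Users` query are placeholders chosen for illustration.

```python
# Sketch only: construct the kind of SCIM call Azure AD makes on Test Connection.
import urllib.request

tenant_url = "https://scim.bullseyeengagement.com/scim"
secret_token = "<SCIM token from BullseyeTDP support>"  # placeholder value

req = urllib.request.Request(
    f"{tenant_url}/Users?startIndex=1&count=1",  # minimal SCIM list request
    headers={
        "Authorization": f"Bearer {secret_token}",
        "Accept": "application/scim+json",
    },
)
# urllib.request.urlopen(req) would perform the call; omitted in this sketch.
print(req.full_url)
```

A 200 response to a request like this is what a successful **Test Connection** amounts to; a failure usually means the token or URL is wrong.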
active-directory Datasite Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/datasite-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Datasite | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Datasite'
description: Learn how to configure single sign-on between Azure Active Directory and Datasite.
Previously updated : 08/10/2021 Last updated : 02/11/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Datasite
+# Tutorial: Azure AD SSO integration with Datasite
In this tutorial, you'll learn how to integrate Datasite with Azure Active Directory (Azure AD). When you integrate Datasite with Azure AD, you can:
Follow these steps to enable Azure AD SSO in the Azure portal.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, perform the following step:
-
- In the **Sign-on URL** text box, type the URL:
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Reply URL** text box, type the URL:
+ `https://auth.datasite.com/sp/ACS.saml2`
+
+ b. In the **Sign-on URL** text box, type the URL:
`https://auth.datasite.com/sp/ACS.saml2`

1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
To configure single sign-on on **Datasite** side, you need to send the downloade
### Create Datasite test user
-In this section, you create a user called B.Simon in Datasite. Work with [Datasite support team](mailto:service@datasite.com) to add the users in the Datasite platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called B.Simon in Datasite. Work with [Datasite support team](mailto:service@datasite.com) to add the users in the Datasite platform. Users must be created and activated before you use single sign-on.
## Test SSO
In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
-Once you configure Datasite you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
+Once you configure Datasite you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Orgchartnow Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/orgchartnow-tutorial.md
Previously updated : 05/11/2021 Last updated : 02/18/2022 # Tutorial: Azure Active Directory integration with OrgChart Now
Follow these steps to enable Azure AD SSO in the Azure portal.
4. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
- In the **Identifier** text box, type the URL:
- `https://sso2.orgchartnow.com`
+ a. In the **Identifier** text box, type the URL:
+ `https://<OrgChartNowServer>.orgchartnow.com/saml/sso_metadata?entityID=<Your_Azure_AD_Entity_ID>`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<OrgChartServer>.orgchartnow.com/saml/sso_acs?entityID=<Your_Azure_AD_Entity_ID>`
5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode: In the **Sign-on URL** text box, type a URL using the following pattern:
- `https://sso2.orgchartnow.com/Shibboleth.sso/Login?entityID=<YourEntityID>&target=https://sso2.orgchartnow.com`
+ `https://<OrgChartServer>.orgchartnow.com/saml/sso_acs?entityID=<Your_Azure_AD_Entity_ID>`
> [!NOTE] > `<Your_Azure_AD_Entity_ID>` is the **Azure AD Identifier** copied from the **Set up OrgChart Now** section, described later in this tutorial.
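The placeholder substitution described above can be sketched as follows. This is an illustrative sketch only: the server subdomain and entity ID below are hypothetical example values, not values from the tutorial.

```python
# Sketch of filling in the OrgChart Now URL patterns.
# Both values below are hypothetical placeholders for illustration.
server = "sso2"  # your <OrgChartNowServer> subdomain
entity_id = "https://sts.windows.net/00000000-0000-0000-0000-000000000000/"  # Azure AD Identifier

# Identifier (Entity ID) and Reply URL (ACS) built from the documented patterns.
identifier = f"https://{server}.orgchartnow.com/saml/sso_metadata?entityID={entity_id}"
reply_url = f"https://{server}.orgchartnow.com/saml/sso_acs?entityID={entity_id}"

print(identifier)
print(reply_url)
```

Depending on the service's parsing, the embedded entity ID may need URL-encoding; check with OrgChart Now support if sign-in fails.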
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure OrgChart Now SSO
-To configure single sign-on on **OrgChart Now** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [OrgChart Now support team](mailto:ocnsupport@officeworksoftware.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on in OrgChart Now, follow the steps enumerated in the [SSO Configuration article](https://help.orgchartnow.com/en/topics/sso-configuration.html#configuring-sso-41334) on OrgChart Now's Help site.
### Create OrgChart Now test user
-To enable Azure AD users to log in to OrgChart Now, they must be provisioned into OrgChart Now.
-
-1. OrgChart Now supports just-in-time provisioning, which is by default enabled. A new user is created during an attempt to access OrgChart Now if it doesn't exist yet. The just-in-time user provisioning feature will only create a **read-only** user when an SSO request comes from a recognized IDP and the email in the SAML assertion is not found in the user list. For this auto provisioning feature you need to create an access group titled **General** in OrgChart Now. Please follow the below steps to create an access group:
-
- a. Go to the **Manage Groups** option after clicking the **gear** in the top right corner of the UI.
-
- ![OrgChart Now groups](./media/orgchartnow-tutorial/groups.png)
-
- b. Select the **Add** icon and name the group **General** then click **OK**.
-
- ![OrgChart Now add](./media/orgchartnow-tutorial/general.png)
-
- c. Select the folder(s) you wish the general or read-only users to be able to access:
-
- ![OrgChart Now folders](./media/orgchartnow-tutorial/folders.png)
-
- d. **Lock** the folders so that only Admin users can modify them. Then press **OK**.
-
- ![OrgChart Now lock](./media/orgchartnow-tutorial/lock.png)
-
-2. To create **Admin** users and **read/write** users, you must manually create a user in order to get access to their privilege level via SSO. To provision a user account, perform the following steps:
-
- a. Log in to OrgChart Now as a Security Administrator.
-
- b. Click on **Settings** on the top right corner and then navigate to **Manage Users**.
-
- ![OrgChart Now settings](./media/orgchartnow-tutorial/settings.png)
-
- c. Click on **Add** and perform the following steps:
-
- ![OrgChart Now manage](./media/orgchartnow-tutorial/manage-users.png)
-
- 1. In the **User ID** textbox, enter the User ID like **brittasimon\@contoso.com**.
-
- 1. In **Email Address** text box, enter the email of user like **brittasimon\@contoso.com**.
+To enable Azure AD users to log in to OrgChart Now, they must be set up as users in OrgChart Now, or **Auto-Provisioning** must be enabled in the [SSO Configuration](https://help.orgchartnow.com/en/topics/sso-configuration.html#configuring-sso-41334) panel.
- 1. Click **Add**.
+If you do not wish to enable auto-provisioning at this time, you can manually add a user to OrgChart Now for SSO testing purposes. To do so, follow the steps enumerated in the [Creating a New User](https://help.orgchartnow.com/en/account-settings/manage-users.html#UUID-a921b00b-a5a2-3099-8fe5-d0f28f5a50b9_bridgehead-idm4532421481724832584395125038) section of the [Account Settings: Manage Users](https://help.orgchartnow.com/en/account-settings/manage-users.html) article.
## Test SSO
active-directory Shiphazmat Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/shiphazmat-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with ShipHazmat | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with ShipHazmat'
description: Learn how to configure single sign-on between Azure Active Directory and ShipHazmat.
Previously updated : 02/24/2020 Last updated : 02/22/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with ShipHazmat
+# Tutorial: Azure AD SSO integration with ShipHazmat
In this tutorial, you'll learn how to integrate ShipHazmat with Azure Active Directory (Azure AD). When you integrate ShipHazmat with Azure AD, you can:
In this tutorial, you'll learn how to integrate ShipHazmat with Azure Active Dir
* Enable your users to be automatically signed-in to ShipHazmat with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* ShipHazmat supports **IDP** initiated SSO
-* ShipHazmat supports **Just In Time** user provisioning
-* Once you configure ShipHazmat you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
-
+* ShipHazmat supports **IDP** initiated SSO.
+* ShipHazmat supports **Just In Time** user provisioning.
-## Adding ShipHazmat from the gallery
+## Add ShipHazmat from the gallery
To configure the integration of ShipHazmat into Azure AD, you need to add ShipHazmat from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **ShipHazmat** in the search box. 1. Select **ShipHazmat** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. -
-## Configure and test Azure AD single sign-on for ShipHazmat
+## Configure and test Azure AD SSO for ShipHazmat
Configure and test Azure AD SSO with ShipHazmat using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in ShipHazmat.
-To configure and test Azure AD SSO with ShipHazmat, complete the following building blocks:
+To configure and test Azure AD SSO with ShipHazmat, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- * **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- * **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
1. **[Configure ShipHazmat SSO](#configure-shiphazmat-sso)** - to configure the single sign-on settings on application side.
- * **[Create ShipHazmat test user](#create-shiphazmat-test-user)** - to have a counterpart of B.Simon in ShipHazmat that is linked to the Azure AD representation of user.
+ 1. **[Create ShipHazmat test user](#create-shiphazmat-test-user)** - to have a counterpart of B.Simon in ShipHazmat that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **ShipHazmat** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **ShipHazmat** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Set up single sign-on with SAML** page, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following steps:
- a. In the **Identifier** text box, type a URL using the following pattern:
+ a. In the **Identifier** text box, type a value using the following pattern:
`ShipHazmat<CustomOrganization>Sso` b. In the **Reply URL** text box, type a URL using the following pattern:
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **ShipHazmat**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, a user called B.Simon is created in ShipHazmat. ShipHazmat supp
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the ShipHazmat tile in the Access Panel, you should be automatically signed in to the ShipHazmat for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Additional resources
--- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* Click on **Test this application** in the Azure portal. You should be automatically signed in to the ShipHazmat application for which you set up SSO.
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* You can use Microsoft My Apps. When you click the ShipHazmat tile in My Apps, you should be automatically signed in to the ShipHazmat application for which you set up SSO. For more information, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [Try ShipHazmat with Azure AD](https://aad.portal.azure.com/)
+## Next steps
-- [What is session control in Microsoft Defender for Cloud Apps?](/cloud-app-security/proxy-intro-aad)
+Once you configure ShipHazmat you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Skysite Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/skysite-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with SKYSITE | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with SKYSITE'
description: Learn how to configure single sign-on between Azure Active Directory and SKYSITE.
Previously updated : 08/27/2019 Last updated : 02/22/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with SKYSITE
+# Tutorial: Azure AD SSO integration with SKYSITE
In this tutorial, you'll learn how to integrate SKYSITE with Azure Active Directory (Azure AD). When you integrate SKYSITE with Azure AD, you can:
In this tutorial, you'll learn how to integrate SKYSITE with Azure Active Direct
* Enable your users to be automatically signed-in to SKYSITE with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* SKYSITE supports **IDP** initiated SSO
+* SKYSITE supports **IDP** initiated SSO.
-* SKYSITE supports **Just In Time** user provisioning
+* SKYSITE supports **Just In Time** user provisioning.
-## Adding SKYSITE from the gallery
+## Add SKYSITE from the gallery
To configure the integration of SKYSITE into Azure AD, you need to add SKYSITE from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **SKYSITE** in the search box. 1. Select **SKYSITE** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. -
-## Configure and test Azure AD single sign-on for SKYSITE
+## Configure and test Azure AD SSO for SKYSITE
Configure and test Azure AD SSO with SKYSITE using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in SKYSITE.
-To configure and test Azure AD SSO with SKYSITE, complete the following building blocks:
+To configure and test Azure AD SSO with SKYSITE, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with SKYSITE, complete the following building
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **SKYSITE** application integration page, click on **Properties tab** and perform the following step:
+1. In the Azure portal, on the **SKYSITE** application integration page, click on **Properties tab** and perform the following step:
- ![Single sign-on properties](./media/skysite-tutorial/config05.png)
+ ![Screenshot shows Single sign-on properties.](./media/skysite-tutorial/property.png)
* Copy the **User access URL**; you'll paste it in the **Configure SKYSITE SSO** section, which is explained later in this tutorial. 1. On the **SKYSITE** application integration page, navigate to **single sign-on**. 1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
Follow these steps to enable Azure AD SSO in the Azure portal.
a. Click the **pen** next to **Groups returned in claim**.
- ![Screenshot shows User claims with the option to Add new claim.](./media/skysite-tutorial/config01.png)
+ ![Screenshot shows User claims with the option to Add new claim.](./media/skysite-tutorial/claims.png)
- ![Screenshot shows the Manage user claims dialog box where you can enter the values described.](./media/skysite-tutorial/config02.png)
+ ![Screenshot shows the Manage user claims dialog box where you can enter the values described.](./media/skysite-tutorial/groups.png)
b. Select **All Groups** from the radio list.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **SKYSITE**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
-### Configure SKYSITE SSO
+## Configure SKYSITE SSO
1. Open a new web browser window and sign into your SKYSITE company site as an administrator and perform the following steps:
-4. Click on **Settings** on the top right side of page and then navigate to **Account setting**.
+1. Click on **Settings** on the top right side of the page and then navigate to **Account setting**.
- ![Screenshot shows Account setting selected from Settings.](./media/skysite-tutorial/config03.png)
+ ![Screenshot shows Account setting selected from Settings.](./media/skysite-tutorial/settings.png)
-5. Switch to **Single sign on (SSO)** tab, perform the following steps:
+1. Switch to the **Single sign on (SSO)** tab and perform the following steps:
- ![Screenshot shows the Single sign on tab where you can enter the values described.](./media/skysite-tutorial/config04.png)
+ ![Screenshot shows the Single sign on tab where you can enter the values described.](./media/skysite-tutorial/certificate.png)
a. In the **Identity Provider sign in URL** text box, paste the value of **User access URL**, which you have copied from the **properties** tab in Azure portal.
In this section, a user called Britta Simon is created in SKYSITE. SKYSITE suppo
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the SKYSITE tile in the Access Panel, you should be automatically signed in to the SKYSITE for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Additional resources
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Click on **Test this application** in the Azure portal. You should be automatically signed in to the SKYSITE application for which you set up SSO.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* You can use Microsoft My Apps. When you click the SKYSITE tile in My Apps, you should be automatically signed in to the SKYSITE application for which you set up SSO. For more information, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try SKYSITE with Azure AD](https://aad.portal.azure.com/)
+Once you configure SKYSITE you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Tap App Security Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tap-app-security-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure TAP App Security for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to TAP App Security.
+
+documentationcenter: ''
+
+writer: Thwimmer
++
+ms.assetid: affd297c-2b6f-4dc2-b4c3-d29458cf4b1b
+++
+ms.devlang: na
+ Last updated : 02/03/2022+++
+# Tutorial: Configure TAP App Security for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both TAP App Security and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [TAP App Security](https://tapappsecurity.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in TAP App Security.
+> * Remove users in TAP App Security when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and TAP App Security.
+> * [Single sign-on](tap-app-security-tutorial.md) to TAP App Security.
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in TAP App Security with Admin permissions.
++
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and TAP App Security](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure TAP App Security to support provisioning with Azure AD
+
+Contact [TAP App Security support](mailto:support@tapappsecurity.com) in order to obtain a SCIM Token.
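Once you have the token, it can help to sanity-check it against the SCIM endpoint before wiring up the Azure portal. The sketch below only shows the standard SCIM Bearer-token request shape; the tenant URL and token are hypothetical placeholders, not documented TAP App Security values, so substitute what support provides.

```python
# Minimal sketch of a SCIM connection check using the token from support.
# TENANT_URL and SCIM_TOKEN are hypothetical placeholders for illustration.
import urllib.request

TENANT_URL = "https://scim.example.com/scim/v2"  # hypothetical; use the URL from support
SCIM_TOKEN = "your-scim-token-here"              # obtained from TAP App Security support

def build_request(tenant_url: str, token: str) -> urllib.request.Request:
    """Build a GET /Users request with the standard SCIM Bearer auth header."""
    return urllib.request.Request(
        f"{tenant_url}/Users?count=1",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/scim+json",
        },
    )

req = build_request(TENANT_URL, SCIM_TOKEN)
# urllib.request.urlopen(req)  # uncomment to actually call the endpoint
```

A `200 OK` response to this request is roughly what the portal's **Test Connection** button verifies in Step 5.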
++
+## Step 3. Add TAP App Security from the Azure AD application gallery
+
+Add TAP App Security from the Azure AD application gallery to start managing provisioning to TAP App Security. If you have previously set up TAP App Security for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* When assigning users and groups to TAP App Security, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
++
+## Step 5. Configure automatic user provisioning to TAP App Security
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in TAP App Security based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for TAP App Security in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+1. In the applications list, select **TAP App Security**.
+
+ ![The TAP App Security link in the Applications list](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your TAP App Security Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to TAP App Security. If the connection fails, ensure your TAP App Security account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to TAP App Security**.
+
+1. Review the user attributes that are synchronized from Azure AD to TAP App Security in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in TAP App Security for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the TAP App Security API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by TAP App Security|
+ |---|---|---|---|
+ |userName|String|&check;|&check;|
+ |active|Boolean||&check;|
+ |phoneNumbers[type eq "mobile"].value|String|||
+ |displayName|String||&check;|
+ |title|String||&check;|
+
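For reference, the attribute mappings above imply a SCIM user payload shaped roughly like the sketch below. The attribute names come from the table; the concrete values are hypothetical examples, not data from the tutorial.

```python
# Sketch of the SCIM user payload implied by the attribute mappings above.
# Attribute names match the table; the values are hypothetical examples.
import json

user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "b.simon@contoso.com",  # matching attribute (supports filtering)
    "active": True,
    "displayName": "B. Simon",
    "title": "Marketing Manager",
    "phoneNumbers": [{"type": "mobile", "value": "+1 555 0100"}],
}

# The path-style mapping in the table (phoneNumbers[type eq "mobile"].value)
# selects the value of the entry whose type is "mobile":
mobile = next(p["value"] for p in user["phoneNumbers"] if p["type"] == "mobile")
payload = json.dumps(user)
```

The `userName` field is the matching property, so the provisioning service uses it to locate existing TAP App Security accounts for update operations.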
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for TAP App Security, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to TAP App Security by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+1. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Textline Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/textline-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Textline | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Textline'
description: Learn how to configure single sign-on between Azure Active Directory and Textline.
Previously updated : 07/15/2020 Last updated : 02/22/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Textline
+# Tutorial: Azure AD SSO integration with Textline
In this tutorial, you'll learn how to integrate Textline with Azure Active Directory (Azure AD). When you integrate Textline with Azure AD, you can:
In this tutorial, you'll learn how to integrate Textline with Azure Active Direc
* Enable your users to be automatically signed-in to Textline with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Textline supports **IDP** initiated SSO
-
-* Textline supports **Just In Time** user provisioning
+* Textline supports **IDP** initiated SSO.
-* Once you configure Textline you can enforce session control, which protect exfiltration and infiltration of your organizationΓÇÖs sensitive data in real-time. Session control extend from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+* Textline supports **Just In Time** user provisioning.
-## Adding Textline from the gallery
+## Add Textline from the gallery
To configure the integration of Textline into Azure AD, you need to add Textline from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **Textline** in the search box. 1. Select **Textline** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. - ## Configure and test Azure AD SSO for Textline Configure and test Azure AD SSO with Textline using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Textline.
-To configure and test Azure AD SSO with Textline, complete the following building blocks:
+To configure and test Azure AD SSO with Textline, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
1. **[Configure Textline SSO](#configure-textline-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Textline test user](#create-textline-test-user)** - to have a counterpart of B.Simon in Textline that is linked to the Azure AD representation of user.
+ 1. **[Create Textline test user](#create-textline-test-user)** - to have a counterpart of B.Simon in Textline that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Textline** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Textline** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Textline**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, a user called B.Simon is created in Textline. Textline supports
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Textline tile in the Access Panel, you should be automatically signed in to the Textline for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Additional resources
--- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)--- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Textline for which you set up the SSO.
-- [Try Textline with Azure AD](https://aad.portal.azure.com/)
+* You can use Microsoft My Apps. When you click the Textline tile in My Apps, you should be automatically signed in to the Textline for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is session control in Microsoft Defender for Cloud Apps?](/cloud-app-security/proxy-intro-aad)
+## Next steps
-- [How to protect Textline with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)
+Once you configure Textline, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory The Funding Portal Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/the-funding-portal-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with The Funding Portal | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with The Funding Portal'
description: Learn how to configure single sign-on between Azure Active Directory and The Funding Portal.
Previously updated : 08/27/2019 Last updated : 02/22/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with The Funding Portal
+# Tutorial: Azure AD SSO integration with The Funding Portal
In this tutorial, you'll learn how to integrate The Funding Portal with Azure Active Directory (Azure AD). When you integrate The Funding Portal with Azure AD, you can:
In this tutorial, you'll learn how to integrate The Funding Portal with Azure Ac
* Enable your users to be automatically signed-in to The Funding Portal with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* The Funding Portal supports **SP** initiated SSO
+* The Funding Portal supports **SP** initiated SSO.
-## Adding The Funding Portal from the gallery
+## Add The Funding Portal from the gallery
To configure the integration of The Funding Portal into Azure AD, you need to add The Funding Portal from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **The Funding Portal** in the search box. 1. Select **The Funding Portal** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for The Funding Portal
+## Configure and test Azure AD SSO for The Funding Portal
Configure and test Azure AD SSO with The Funding Portal using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in The Funding Portal.
-To configure and test Azure AD SSO with The Funding Portal, complete the following building blocks:
+To configure and test Azure AD SSO with The Funding Portal, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with The Funding Portal, complete the followi
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on **The Funding Portal** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on **The Funding Portal** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, enter the values for the following fields:
-
- a. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<subdomain>.regenteducation.net/`
+1. On the **Basic SAML Configuration** section, perform the following steps:
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
`https://<subdomain>.regenteducation.net`
+ b. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<subdomain>.regenteducation.net/`
+ > [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [The Funding Portal Client support team](mailto:info@regenteducation.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [The Funding Portal Client support team](mailto:info@regenteducation.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **The Funding Portal**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, you create a user called Britta Simon in The Funding Portal. Wo
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the The Funding Portal tile in the Access Panel, you should be automatically signed in to the The Funding Portal for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+In this section, you test your Azure AD single sign-on configuration with the following options.
-## Additional resources
+* Click on **Test this application** in the Azure portal. This will redirect to The Funding Portal Sign-on URL, where you can initiate the login flow.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Go to The Funding Portal Sign-on URL directly and initiate the login flow from there.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* You can use Microsoft My Apps. When you click The Funding Portal tile in My Apps, this will redirect to The Funding Portal Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try The Funding Portal with Azure AD](https://aad.portal.azure.com/)
+Once you configure The Funding Portal, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Verifiable Credentials Configure Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant.md
Previously updated : 10/08/2021 Last updated : 02/24/2022 # Customer intent: As an enterprise, we want to enable customers to manage information about themselves by using verifiable credentials.
The following diagram illustrates the Azure AD Verifiable Credentials architectu
![Diagram that illustrates the Azure AD Verifiable Credentials architecture.](media/verifiable-credentials-configure-tenant/verifiable-credentials-architecture.png)
-See a [video walkthrough](https://www.youtube.com/watch?v=8jqjHjQo-3c) of setting up the Azure AD Verifiable Credential service, including all prerequisites, like Azure AD and an Azure subscription.
+See a [video walkthrough](https://www.youtube.com/watch?v=8jqjHjQo-3c) that covers the setup of the Azure AD Verifiable Credentials service.
## Prerequisites
After you create your key vault, Verifiable Credentials generates a set of keys
A Key Vault [access policy](../../key-vault/general/assign-access-policy.md) defines whether a specified security principal can perform operations on Key Vault secrets and keys. Set access policies in your key vault for both the administrator account of the Azure AD Verifiable Credentials service, and for the Request Service API principal that you created.
+### Set access policies for the Verifiable Credentials Admin user
+ 1. In the [Azure portal](https://portal.azure.com/), go to the key vault you use for this tutorial. 1. Under **Settings**, select **Access policies**.
A Key Vault [access policy](../../key-vault/general/assign-access-policy.md) def
1. To save the changes, select **Save**.
+### Set access policies for the Verifiable Credentials Issuer and Request services
+ 1. Select **+ Add Access Policy** to add permission to the service principal of the **Verifiable Credential Request Service**. 1. In **Add access policy**: 1. For **Key permissions**, select **Get** and **Sign**.
- 1. For **Secret permissions**, select **Get**.
- 1. For **Select principal**, select **Verifiable Credential Request Service**. 1. Select **Add**.
- ![Screenshot that demonstrates how to add an access policy for the Verifiable Credential Request Service.](media/verifiable-credentials-configure-tenant/set-key-vault-service-principal-access-policy.png)
+ :::image type="content" source="media/verifiable-credentials-configure-tenant/request-service-key-vault-access-policy.png" alt-text="Screenshot that demonstrates how to add an access policy for the Verifiable Credential Request Service." :::
+
+The access policies for the Verifiable Credentials Issuer service should be added automatically. If the **Verifiable Credential Issuer Service** doesn't appear in the list of access policies, take the following steps to manually add access policies to the service.
+
+1. Select **+ Add Access Policy** to add permission to the service principal of the **Verifiable Credential Issuer Service**.
+
+1. In **Add access policy**:
+
+ 1. For **Key permissions**, select **Get** and **Sign**.
+
+ 1. For **Select principal**, select **Verifiable Credential Issuer Service**.
+
+ 1. Select **Add**.
+ :::image type="content" source="media/verifiable-credentials-configure-tenant/issuer-service-key-vault-access-policy.png" alt-text="Screenshot that demonstrates how to add an access policy for the Verifiable Credential Issuer Service." :::
+
1. Select **Save** to save the new policy you created. ## Register an application in Azure AD
To set up Azure AD Verifiable Credentials, follow these steps:
1. **Organization name**: Enter a name to reference your business within Verifiable Credentials. Your customers don't see this name.
- 1. **Domain**: Enter a domain that's added to a service endpoint in your decentralized identity (DID) document. The domain is what binds your DID to something tangible that the user might know about your business. Microsoft Authenticator and other digital wallets use this information to validate that your DID is linked to your domain. If the wallet can verify the DID, it displays a verified symbol. If the wallet can't verify the DID, it informs the user that the credential was issued by an organization it couldn't validate.
+ 1. **Domain**: Enter a domain that's added to a service endpoint in your decentralized identity (DID) document. The domain is what binds your DID to something tangible that the user might know about your business. Microsoft Authenticator and other digital wallets use this information to validate that your DID is linked to your domain. If the wallet can verify the DID, it displays a verified symbol. If the wallet can't verify the DID, it informs the user that the credential was issued by an organization it couldn't validate.
>[!IMPORTANT] > The domain can't be a redirect. Otherwise, the DID and domain can't be linked. Make sure to use HTTPS for the domain. For example: `https://contoso.com`.
To set up Azure AD Verifiable Credentials, follow these steps:
## Next steps - [Learn how to issue Azure AD Verifiable Credentials from a web application](verifiable-credentials-configure-issuer.md).-- [Learn how to verify Azure AD Verifiable Credentials](verifiable-credentials-configure-verifier.md).
+- [Learn how to verify Azure AD Verifiable Credentials](verifiable-credentials-configure-verifier.md).
aks Cluster Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-autoscaler.md
If you wish to re-enable the cluster autoscaler on an existing cluster, you can
## Retrieve cluster autoscaler logs and status
-To diagnose and debug autoscaler events, logs and status can be retrieved from the autoscaler add-on.
+To diagnose and debug autoscaler events, logs and status can be retrieved from the cluster autoscaler.
AKS manages the cluster autoscaler on your behalf and runs it in the managed control plane. You can enable control plane logs to see the logs and operations from the cluster autoscaler (CA).
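In addition to the control plane logs, the cluster autoscaler writes its current status to a config map in the `kube-system` namespace, which you can inspect directly. This is a sketch assuming the upstream default config map name, `cluster-autoscaler-status`:

```bash
kubectl get configmap --namespace kube-system cluster-autoscaler-status --output yaml
```

The status output includes per-node-pool health and the times of the most recent scale-up and scale-down events.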
aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/faq.md
Windows Server support for node pool includes some limitations that are part of
## Does AKS offer a service-level agreement?
-AKS provides SLA guarantees as an optional add-on feature with [Uptime SLA][uptime-sla].
+AKS provides SLA guarantees as an optional feature with [Uptime SLA][uptime-sla].
The Free SKU offered by default doesn't have an associated Service Level *Agreement*, but has a Service Level *Objective* of 99.5%. Transient connectivity issues might be observed during upgrades, with unhealthy underlay nodes, during platform maintenance, or when an application overwhelms the API Server with requests. If your workload doesn't tolerate API Server restarts, we suggest using Uptime SLA.
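If you need the stronger guarantee, Uptime SLA can be enabled on an existing cluster with the Azure CLI. A sketch, where the resource group and cluster names are placeholders:

```azurecli-interactive
az aks update --resource-group myResourceGroup --name myAKSCluster --uptime-sla
```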
aks Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/integrations.md
+
+ Title: Add-ons, extensions, and other integrations with Azure Kubernetes Service
+description: Learn about the add-ons, extensions, and open-source integrations you can use with Azure Kubernetes Service.
++ Last updated : 02/22/2022++
+# Add-ons, extensions, and other integrations with Azure Kubernetes Service
+
+Azure Kubernetes Service (AKS) provides additional, supported functionality for your cluster using add-ons and extensions. There are also many more integrations provided by open-source projects and third parties that are commonly used with AKS. These open-source and third-party integrations are not covered by the [AKS support policy][aks-support-policy].
+
+## Add-ons
+
+Add-ons provide extra capabilities for your AKS cluster, and their installation and configuration are managed by Azure. Use `az aks addon` to manage all add-ons for your cluster.
+
+The following table shows the available add-ons.
+
+| Name | Description | More details |
+||||
+| http_application_routing | Configure ingress with automatic public DNS name creation for your AKS cluster. | [HTTP application routing add-on on Azure Kubernetes Service (AKS)][http-app-routing] |
+| monitoring | Use Container Insights monitoring with your AKS cluster. | [Container insights overview][container-insights] |
+| virtual-node | Use virtual nodes with your AKS cluster. | [Use virtual nodes][virtual-nodes] |
+| azure-policy | Use Azure Policy for AKS, which enables at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. | [Understand Azure Policy for Kubernetes clusters][azure-policy-aks] |
+| ingress-appgw | Use Application Gateway Ingress Controller with your AKS cluster. | [What is Application Gateway Ingress Controller?][agic] |
+| open-service-mesh | Use Open Service Mesh with your AKS cluster. | [Open Service Mesh AKS add-on][osm] |
+| azure-keyvault-secrets-provider | Use the Azure Key Vault Secrets Provider add-on. | [Use the Azure Key Vault Provider for Secrets Store CSI Driver in an AKS cluster][keyvault-secret-provider] |
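As a quick sketch of the `az aks addon` workflow (the resource group and cluster names are placeholders, and the `az aks addon` group requires a recent Azure CLI version):

```azurecli-interactive
# Show the enabled/disabled status of all available add-ons for a cluster
az aks addon list --resource-group myResourceGroup --name myAKSCluster

# Enable the Container Insights monitoring add-on on an existing cluster
az aks enable-addons --addons monitoring --resource-group myResourceGroup --name myAKSCluster
```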
+
+## Extensions
+
+Cluster extensions build on top of certain Helm charts and provide an Azure Resource Manager-driven experience for installation and lifecycle management of different Azure capabilities on top of your Kubernetes cluster. For more details, see [Deploy and manage cluster extensions for Azure Kubernetes Service (AKS)][cluster-extensions]. For a list of currently available cluster extensions, see [Currently available extensions][cluster-extensions-current].
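As an illustrative sketch, extensions are managed with the `k8s-extension` Azure CLI extension. The resource names below are placeholders and `<extension-type>` must be replaced with one of the currently available extension types; see the linked articles for the exact syntax:

```azurecli-interactive
az k8s-extension create --name my-extension \
    --extension-type <extension-type> \
    --cluster-type managedClusters \
    --cluster-name myAKSCluster \
    --resource-group myResourceGroup
```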
+
+## Difference between extensions and add-ons
+
+Both extensions and add-ons are supported ways to add functionality to your AKS cluster. When you install an add-on, the functionality is added as part of the AKS resource provider in the Azure API. When you install an extension, the functionality is added as part of a separate resource provider in the Azure API.
+
+## Open source and third-party integrations
+
+You can install many open-source and third-party integrations on your AKS cluster, but these integrations aren't covered by the [AKS support policy][aks-support-policy].
+
+The following table shows a few examples of open-source and third-party integrations.
+
+| Name | Description | More details |
+||||
+| [Helm][helm] | An open-source packaging tool that helps you install and manage the lifecycle of Kubernetes applications. | [Quickstart: Develop on Azure Kubernetes Service (AKS) with Helm][helm-qs] |
+| [Prometheus][prometheus] | An open source monitoring and alerting toolkit. | [Container insights with metrics in Prometheus format][prometheus-az-monitor], [Prometheus Helm chart][prometheus-helm-chart] |
+| [Grafana][grafana] | An open-source dashboard for observability. | [Deploy Grafana on Kubernetes][grafana-install] |
+| [Couchbase][couchdb] | A distributed NoSQL cloud database. | [Install Couchbase and the Operator on AKS][couchdb-install] |
+| [OpenFaaS][open-faas]| An open-source framework for building serverless functions by using containers. | [Use OpenFaaS with AKS][open-faas-aks] |
+| [Apache Spark][apache-spark] | An open source, fast engine for large-scale data processing. | [Run an Apache Spark job with AKS][spark-job] |
+| [Istio][istio] | An open-source service mesh. | [Istio Installation Guides][istio-install] |
+| [Linkerd][linkerd] | An open-source service mesh. | [Linkerd Getting Started][linkerd-install] |
+| [Consul][consul] | An open source, identity-based networking solution. | [Getting Started with Consul Service Mesh for Kubernetes][consul-install] |
++
+[http-app-routing]: http-application-routing.md
+[container-insights]: ../azure-monitor/containers/container-insights-overview.md
+[virtual-nodes]: virtual-nodes.md
+[azure-policy-aks]: ../governance/policy/concepts/policy-for-kubernetes.md#install-azure-policy-add-on-for-aks
+[agic]: ../application-gateway/ingress-controller-overview.md
+[osm]: open-service-mesh-about.md
+[keyvault-secret-provider]: csi-secrets-store-driver.md
+[cluster-extensions]: cluster-extensions.md?tabs=azure-cli
+[cluster-extensions-current]: cluster-extensions.md?tabs=azure-cli#currently-available-extensions
+[aks-support-policy]: support-policies.md
+[helm]: https://helm.sh
+[helm-qs]: quickstart-helm.md
+[prometheus]: https://prometheus.io/
+[prometheus-helm-chart]: https://github.com/prometheus-community/helm-charts#usage
+[prometheus-az-monitor]: /monitor-aks.md#container-insights
+[istio]: https://istio.io/
+[istio-install]: https://istio.io/latest/docs/setup/install/
+[linkerd]: https://linkerd.io/
+[linkerd-install]: https://linkerd.io/getting-started/
+[consul]: https://www.consul.io/
+[consul-install]: https://learn.hashicorp.com/tutorials/consul/service-mesh-deploy
+[grafana]: https://grafana.com/
+[grafana-install]: https://grafana.com/docs/grafana/latest/installation/kubernetes/
+[couchdb]: https://www.couchbase.com/
+[couchdb-install]: https://docs.couchbase.com/operator/current/tutorial-aks.html
+[open-faas]: https://www.openfaas.com/
+[open-faas-aks]: openfaas.md
+[apache-spark]: https://spark.apache.org/
+[spark-job]: spark-job.md
+[azure-ml-overview]: ../machine-learning/how-to-attach-arc-kubernetes.md
+[dapr-overview]: ./dapr.md
+[gitops-overview]: ../azure-arc/kubernetes/conceptual-gitops-flux2.md
aks Node Image Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-image-upgrade.md
# Azure Kubernetes Service (AKS) node image upgrade
-AKS supports upgrading the images on a node so you're up to date with the newest OS and runtime updates. AKS provides one new image per week with the latest updates, so it's beneficial to upgrade your node's images regularly for the latest features, including Linux or Windows patches. Although customers will be notified of image upgrades via the AKS release notes, it might take up to a week for updates to be rolled out in all regions. This article shows you how to upgrade AKS cluster node images and how to update node pool images without upgrading the version of Kubernetes.
+AKS supports upgrading the images on a node so you're up to date with the newest OS and runtime updates. AKS regularly provides new images with the latest updates, so it's beneficial to upgrade your node's images regularly for the latest AKS features. Linux node images are updated weekly, and Windows node images are updated monthly. Although customers will be notified of image upgrades via the AKS release notes, it might take up to a week for updates to be rolled out in all regions. This article shows you how to upgrade AKS cluster node images and how to update node pool images without upgrading the version of Kubernetes.
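As a sketch of the workflow described in this article (resource group, cluster, and node pool names are placeholders):

```azurecli-interactive
# Check whether a newer node image is available for a node pool
az aks nodepool get-upgrades --resource-group myResourceGroup --cluster-name myAKSCluster --nodepool-name mynodepool

# Upgrade only the node image, leaving the Kubernetes version unchanged
az aks nodepool upgrade --resource-group myResourceGroup --cluster-name myAKSCluster --name mynodepool --node-image-only
```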
For more information about the latest images provided by AKS, see the [AKS release notes](https://github.com/Azure/AKS/releases).
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/private-clusters.md
The following parameters can be leveraged to configure Private DNS Zone.
```azurecli-interactive az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --load-balancer-sku standard --enable-private-cluster --enable-managed-identity --assign-identity <ResourceId> --private-dns-zone [system|none] ```
-### Create a private AKS cluster with a BYO Private DNS SubZone
-
-Prerequisites:
-
-* Azure CLI >= 2.32.0 or later.
### Create a private AKS cluster with Custom Private DNS Zone or Private DNS SubZone
api-management Api Management Advanced Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-advanced-policies.md
This operation level policy does not forward requests to the backend service.
| timeout="integer" | The amount of time in seconds to wait for the HTTP response headers to be returned by the backend service before a timeout error is raised. Minimum value is 0 seconds. Values greater than 240 seconds may not be honored as the underlying network infrastructure can drop idle connections after this time. | No | None | | follow-redirects="false &#124; true" | Specifies whether redirects from the backend service are followed by the gateway or returned to the caller. | No | false | | buffer-request-body="false &#124; true" | When set to "true" request is buffered and will be reused on [retry](api-management-advanced-policies.md#Retry). | No | false |
-| buffer-response="false &#124; true" | Affects processing of chunked responses. When set to "false" each chunk received from the backend is immediately returned to the caller. When set to "true" chunks are buffered (8KB, unless end of stream is detected) and only then returned to the caller. | No | true |
+| buffer-response="false &#124; true" | Affects processing of chunked responses. When set to "false", each chunk received from the backend is immediately returned to the caller. When set to "true", chunks are buffered (8KB, unless end of stream is detected) and only then returned to the caller.<br/><br/>Set to "false" with backends such as those implementing [server-sent events (SSE)](how-to-server-sent-events.md) that require content to be returned or streamed immediately to the caller. | No | true |
| fail-on-error-status-code="false &#124; true" | When set to true triggers [on-error](api-management-error-handling-policies.md) section for response codes in the range from 400 to 599 inclusive. | No | false | ### Usage
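As a rough sketch of the `buffer-response` semantics in the table above (a simulation for illustration only, not API Management source — the 8 KB threshold is the value quoted in the table):

```python
# Simulate the two buffer-response modes: with buffering off, each backend
# chunk is relayed immediately; with buffering on, chunks accumulate until
# 8 KB is reached (or the stream ends) before being released to the caller.
def relay(chunks, buffer_response, limit=8 * 1024):
    out, buf = [], b""
    for chunk in chunks:
        if not buffer_response:
            out.append(chunk)      # pass each chunk straight through
            continue
        buf += chunk
        if len(buf) >= limit:
            out.append(buf)        # release the accumulated buffer
            buf = b""
    if buf:
        out.append(buf)            # flush remainder at end of stream
    return out

chunks = [b"a" * 5000, b"b" * 5000, b"c" * 100]
print(len(relay(chunks, buffer_response=False)))  # 3 writes, one per chunk
print(len(relay(chunks, buffer_response=True)))   # 2 writes: 10000 bytes, then 100
```

This is why streaming backends (such as server-sent events) need `buffer-response="false"`: otherwise small events sit in the buffer until 8 KB accumulates.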
api-management Api Management Howto Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-aad.md
In this article, you'll learn how to:
## Prerequisites - Complete the [Create an Azure API Management instance](get-started-create-service-instance.md) quickstart.+ - [Import and publish](import-and-publish.md) an Azure API Management instance. + [!INCLUDE [premium-dev-standard.md](../../includes/api-management-availability-premium-dev-standard.md)] ## Authorize developer accounts by using Azure AD
Follow these steps to grant:
* `Directory.Read.All` application permission for Microsoft Graph API and Azure Active Directory Graph API. * `User.Read` delegated permission for Microsoft Graph API.
-1. Update the first 3 lines of the following PowerShell script to match your environment and run it.
- ```powershell
+1. Update the first 3 lines of the following Azure CLI script to match your environment and run it.
+
+ ```azurecli
$subId = "Your Azure subscription ID" #e.g. "1fb8fadf-03a3-4253-8993-65391f432d3a" $tenantId = "Your Azure AD Tenant or Organization ID" #e.g. 0e054eb4-e5d0-43b8-ba1e-d7b5156f6da8" $appObjectID = "Application Object ID that has been registered in AAD" #e.g. "2215b54a-df84-453f-b4db-ae079c0d2619"
Follow these steps to grant:
#Assign the following permissions: Microsoft Graph Delegated Permission: User.Read, Microsoft Graph Application Permission: Directory.ReadAll, Azure Active Directory Graph Application Permission: Directory.ReadAll (legacy) az rest --method PATCH --uri "https://graph.microsoft.com/v1.0/$($tenantId)/applications/$($appObjectID)" --body "{'requiredResourceAccess':[{'resourceAccess': [{'id': 'e1fe6dd8-ba31-4d61-89e7-88639da4683d','type': 'Scope'},{'id': '7ab1d382-f21e-4acd-a863-ba3e13f7da61','type': 'Role'}],'resourceAppId': '00000003-0000-0000-c000-000000000000'},{'resourceAccess': [{'id': '5778995a-e1bf-45b8-affa-663a9f3f4d04','type': 'Role'}], 'resourceAppId': '00000002-0000-0000-c000-000000000000'}]}" ```+ 2. Log out and log back in to the Azure portal. 3. Navigate to the App Registration page for the application you registered in [the previous section](#authorize-developer-accounts-by-using-azure-ad).
-4. Click **API Permissions**. You should see the permissions granted by the PowerShell script in step 1.
+4. Click **API Permissions**. You should see the permissions granted by the Azure CLI script in step 1.
5. Select **Grant admin consent for {tenantname}** so that you grant access for all users in this directory. Now you can add external Azure AD groups from the **Groups** tab of your API Management instance.
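The inline JSON body in the `az rest` call above is hard to read; as an illustrative sketch (not part of the official script), the Microsoft Graph portion can be assembled like this, using the permission IDs quoted in the script:

```python
import json

# Illustrative only: the Microsoft Graph entry of the requiredResourceAccess
# body that the az rest command PATCHes onto the app registration.
# GUIDs are taken from the script above: User.Read (delegated, "Scope")
# and Directory.Read.All (application, "Role").
graph_entry = {
    "resourceAppId": "00000003-0000-0000-c000-000000000000",  # Microsoft Graph
    "resourceAccess": [
        {"id": "e1fe6dd8-ba31-4d61-89e7-88639da4683d", "type": "Scope"},  # User.Read
        {"id": "7ab1d382-f21e-4acd-a863-ba3e13f7da61", "type": "Role"},   # Directory.Read.All
    ],
}
body = {"requiredResourceAccess": [graph_entry]}
print(json.dumps(body, indent=2))
```

Building the body this way and serializing it makes it easier to verify the permission GUIDs before pasting the payload into the `az rest` command.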
api-management How To Server Sent Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-server-sent-events.md
+
+ Title: Configure API for server-sent events in Azure API Management
+description: How to configure an API for server-sent events (SSE) in Azure API Management
++++ Last updated : 02/24/2022++
+# Configure API for server-sent events
+
+This article provides guidelines for configuring an API in API Management that implements server-sent events (SSE). SSE is based on the HTML5 `EventSource` standard for streaming (pushing) data automatically to a client over HTTP after a client has established a connection.
+
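As background, the `EventSource` wire format is line-oriented: fields such as `data:` accumulate until a blank line ends the event. A minimal parsing sketch (illustrative only, not part of API Management):

```python
# Minimal sketch of parsing a server-sent events (SSE) response body,
# following the HTML5 EventSource framing: "field: value" lines, with a
# blank line dispatching the accumulated event.
def parse_sse(stream_text):
    """Split a raw SSE body into events; each event maps field -> list of values."""
    events, current = [], {}
    for line in stream_text.splitlines():
        if line == "":                      # blank line ends an event
            if current:
                events.append(current)
                current = {}
            continue
        field, _, value = line.partition(":")
        current.setdefault(field, []).append(value.lstrip(" "))
    if current:                             # flush a trailing event
        events.append(current)
    return events

body = "event: update\ndata: hello\ndata: world\n\ndata: done\n"
print(parse_sse(body))
```

Multiple `data:` lines in one event are kept as separate values here; a real client would join them with newlines per the EventSource processing rules.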
+> [!TIP]
+> API Management also provides native support for [WebSocket APIs](websocket-api.md), which keep a single, persistent, bidirectional connection open between a client and server.
+
+## Prerequisites
+
+- An existing API Management instance. [Create one if you haven't already](get-started-create-service-instance.md).
+- An API that implements SSE. [Import and publish](import-and-publish.md) the API to your API Management instance using one of the supported import methods.
++
+## Guidelines for SSE
+
+Follow these guidelines when using API Management to reach a backend API that implements SSE.
+
+* **Choose service tier for long-running HTTP connections** - SSE relies on a long-running HTTP connection. Long-running connections are supported in the dedicated API Management tiers, but not in the Consumption tier.
+
+* **Keep idle connections alive** - If a connection between client and backend could be idle for 4 minutes or longer, implement a mechanism to keep the connection alive. For example, enable a TCP keepalive signal at the backend of the connection, or send traffic from the client side at least once every 4 minutes.
+
+ This configuration is needed to override the idle session timeout of 4 minutes that is enforced by the Azure Load Balancer, which is used in the API Management infrastructure.
+
+* **Relay events immediately to clients** - Turn off response buffering on the [`forward-request` policy](api-management-advanced-policies.md#ForwardRequest) so that events are immediately relayed to the clients. For example:
+
+ ```xml
+ <forward-request timeout="120" fail-on-error-status-code="true" buffer-response="false"/>
+ ```
+
+* **Avoid other policies that buffer responses** - Certain policies such as [`validate-content`](validation-policies.md#validate-content) can also buffer response content and shouldn't be used with APIs that implement SSE.
+
+* **Disable response caching** - To ensure that notifications to the client are timely, verify that [response caching](api-management-howto-cache.md) isn't enabled. For more information, see [API Management caching policies](api-management-caching-policies.md).
+
+* **Test API under load** - Follow general practices to test your API under load to detect performance or configuration issues before going into production.
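The keep-idle-connections-alive guideline above can be illustrated with a client-side sketch (the 120-second idle value is an assumption, chosen to stay well under the 4-minute limit):

```python
import socket

# Sketch: enable a TCP keepalive signal on a client socket so an otherwise
# idle SSE connection is not dropped by the 4-minute idle session timeout.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
if hasattr(socket, "TCP_KEEPIDLE"):  # Linux-only tuning knob
    # Start probing after 120 idle seconds (assumed value, < 4 minutes).
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 120)
keepalive_on = s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE)
print(keepalive_on)  # nonzero when keepalive is enabled
s.close()
```

Equivalent settings exist at the backend side; either end of the connection sending periodic traffic satisfies the guideline.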
+
+## Next steps
+
+* Learn more about [configuring policies](api-management-howto-policies.md) in API Management.
+* Learn about API Management [capacity](api-management-capacity.md).
app-service Configure Connect To Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-connect-to-azure-storage.md
The following features are supported for Linux containers:
- Storage mounts cannot be used together with clone settings option during [deployment slot](deploy-staging-slots.md) creation. - Storage mounts are not backed up when you [back up your app](manage-backup.md). Be sure to follow best practices to back up the Azure Storage accounts.
+> [!NOTE]
+> Ensure ports 80 and 445 are open when using Azure Files with VNET integration.
+>
::: zone-end ::: zone pivot="container-linux"
The following features are supported for Linux containers:
- Storage mounts cannot be used together with clone settings option during [deployment slot](deploy-staging-slots.md) creation. - Storage mounts are not backed up when you [back up your app](manage-backup.md). Be sure to follow best practices to back up the Azure Storage accounts.
+> [!NOTE]
+> When VNET integration is used, ensure the following ports are open:
+> * Azure Files: 80 and 445.
+> * Azure Blobs: 80 and 443.
+>
::: zone-end ::: zone pivot="container-windows"
app-service Management Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/management-addresses.md
> This article is about the App Service Environment v2 which is used with Isolated App Service plans > +
+## Summary
+ The App Service Environment (ASE) is a single-tenant deployment of the Azure App Service that runs in your Azure Virtual Network (VNet). While the ASE does run in your VNet, it must still be accessible from a number of dedicated IP addresses that are used by the Azure App Service to manage the service. In the case of an ASE, the management traffic traverses the user-controlled network. If this traffic is blocked or misrouted, the ASE will become suspended. For details on the ASE networking dependencies, read [Networking considerations and the App Service Environment][networking]. For general information on the ASE, you can start with [Introduction to the App Service Environment][intro]. All ASEs have a public VIP through which management traffic comes in. The incoming management traffic from these addresses comes in to ports 454 and 455 on the public VIP of your ASE. This document lists the App Service source addresses for management traffic to the ASE. These addresses are also in the IP Service Tag named AppServiceManagement.
With Network Security Groups, you do not need to worry about the individual addr
The management addresses can be placed in a route table with a next hop of internet to ensure that all inbound management traffic is able to go back through the same path. These routes are needed when configuring forced tunneling. To create the route table, you can use the portal, PowerShell or Azure CLI. The commands to create a route table using Azure CLI from a PowerShell prompt are below.
-```azurepowershell-interactive
+```azurecli
$rg = "resource group name" $rt = "route table name" $location = "azure location"
app-service Nat Gateway Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/networking/nat-gateway-integration.md
For more information and pricing, go to the [NAT gateway overview](../../virtual
:::image type="content" source="./media/nat-gateway-integration/nat-gateway-overview.png" alt-text="Diagram shows Internet traffic flowing to a NAT gateway in an Azure Virtual Network."::: > [!Note]
-> Using NAT gateway with App Service is dependent on regional VNet Integration, and therefore **Standard**, **Premium**, **PremiumV2** or **PremiumV3** App Service plan is required.
+> * Using NAT gateway with App Service is dependent on virtual network integration, and therefore **Standard**, **Premium**, **PremiumV2** or **PremiumV3** App Service plan is required.
+> * When using NAT gateway together with App Service, all traffic to Azure Storage must use private endpoints or service endpoints.
+> * NAT gateway cannot be used together with App Service Environment v1 or v2.
## Configuring NAT gateway integration To configure NAT gateway integration with App Service, you need to complete the following steps:
-* Configure regional VNet Integration with your app as described in [Integrate your app with an Azure virtual network](../overview-vnet-integration.md)
-* Ensure [Route All](../overview-vnet-integration.md#routes) is enabled for your VNet Integration so the Internet bound traffic will be affected by routes in your VNet.
-* Provision a NAT gateway with a public IP and associate it with the VNet Integration subnet.
+* Configure regional virtual network integration with your app as described in [Integrate your app with an Azure virtual network](../overview-vnet-integration.md)
+* Ensure [Route All](../overview-vnet-integration.md#routes) is enabled for your virtual network integration so that Internet-bound traffic will be affected by routes in your virtual network.
+* Provision a NAT gateway with a public IP and associate it with the virtual network integration subnet.
Set up NAT gateway through the portal:
-1. Go to the **Networking** UI in the App Service portal and select VNet Integration in the Outbound Traffic section. Ensure that your app is integrated with a subnet and **Route All** has been enabled.
+1. Go to the **Networking** UI in the App Service portal and select virtual network integration in the Outbound Traffic section. Ensure that your app is integrated with a subnet and **Route All** has been enabled.
1. On the Azure portal menu or from the **Home** page, select **Create a resource**. The **New** window appears. 1. Search for "NAT gateway" and select it from the list of results. 1. Fill in the **Basics** information and pick the region where your app is located. :::image type="content" source="./media/nat-gateway-integration/nat-gateway-create-basics.png" alt-text="Screenshot of Basics tab in Create NAT gateway."::: 1. In the **Outbound IP** tab, create a new or select an existing public IP. :::image type="content" source="./media/nat-gateway-integration/nat-gateway-create-outbound-ip.png" alt-text="Screenshot of Outbound IP tab in Create NAT gateway.":::
-1. In the **Subnet** tab, select the subnet used for VNet Integration.
+1. In the **Subnet** tab, select the subnet used for virtual network integration.
:::image type="content" source="./media/nat-gateway-integration/nat-gateway-create-subnet.png" alt-text="Screenshot of Subnet tab in Create NAT gateway."::: 1. Fill in tags if needed and **Create** the NAT gateway. After the NAT gateway is provisioned, click on the **Go to resource group** and select the new NAT gateway. You can see the public IP that your app will use for outbound Internet-facing traffic in the Outbound IP blade. :::image type="content" source="./media/nat-gateway-integration/nat-gateway-public-ip.png" alt-text="Screenshot of Outbound IP blade in the NAT gateway portal.":::
-If you prefer using CLI to configure your environment, these are the important commands. As a prerequisite, you should create a Web App with VNet Integration configured.
+If you prefer using CLI to configure your environment, these are the important commands. As a prerequisite, you should create an app with virtual network integration configured.
-Ensure **Route All** is configured for your VNet Integration (*Note*: minimum `az version` required is 2.27):
+Ensure **Route All** is configured for your virtual network integration:
```azurecli-interactive az webapp config set --resource-group [myResourceGroup] --name [myWebApp] --vnet-route-all-enabled
az network public-ip create --resource-group [myResourceGroup] --name myPublicIP
az network nat gateway create --resource-group [myResourceGroup] --name myNATgateway --public-ip-addresses myPublicIP --idle-timeout 10 ```
-Associate the NAT gateway with the VNet Integration subnet:
+Associate the NAT gateway with the virtual network integration subnet:
```azurecli-interactive az network vnet subnet update --resource-group [myResourceGroup] --vnet-name [myVnet] --name [myIntegrationSubnet] --nat-gateway myNATgateway
The same NAT gateway can be used across multiple subnets in the same Virtual Net
NAT gateway supports both public IP addresses and public IP prefixes. A NAT gateway can support up to 16 IP addresses across individual IP addresses and prefixes. Each IP address allocates 64,000 ports (SNAT ports) allowing up to 1M available ports. Learn more in the [Scaling section](../../virtual-network/nat-gateway/nat-gateway-resource.md#scale-nat) of NAT gateway. ## Next steps+ For more information on the NAT gateway, see [NAT gateway documentation](../../virtual-network/nat-gateway/nat-overview.md).
-For more information on VNet Integration, see [VNet Integration documentation](../overview-vnet-integration.md).
+For more information on virtual network integration, see [Virtual network integration documentation](../overview-vnet-integration.md).
app-service Quickstart Html Uiex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-html-uiex.md
This quickstart shows how to deploy a basic HTML+CSS site to <abbr title="An HTTP-based service for hosting web applications, REST APIs, and mobile back-end applications.">Azure App Service</abbr>. You'll complete this quickstart in [Cloud Shell](../cloud-shell/overview.md), but you can also run these commands locally with [Azure CLI](/cli/azure/install-azure-cli). + ## 1. Prepare your environment [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
git clone https://github.com/Azure-Samples/html-docs-hello-world.git
## 2. Create a web app
-Change to the directory that contains the sample code and run the `az webapp up` command. **Replace** `<app-name>` with a globally unique name.
+Change to the directory that contains the sample code and run the [az webapp up](/cli/azure/webapp#az-webapp-up) command. **Replace** `<app-name>` with a globally unique name.
-```bash
+```azurecli
cd html-docs-hello-world az webapp up --location westeurope --name <app_name> --html
az webapp up --location westeurope --name <app_name> --html
<li>Replace <code>&lt;app-name&gt;</code> with a name that's unique across all of Azure (<em>valid characters are <code>a-z</code>, <code>0-9</code>, and <code>-</code></em>). A good pattern is to use a combination of your company name and an app identifier.</li> <li>The <code>--sku F1</code> argument creates the web app on the Free pricing tier. Omit this argument to use a faster premium tier, which incurs an hourly cost.</li> <li>The <code>--html</code> argument says to treat all folder content as static content and disable build automation.</li>
-<li>You can optionally include the argument <code>--location &lt;location-name&gt;</code> where <code>&lt;location-name&gt;</code> is an available Azure region. You can retrieve a list of allowable regions for your Azure account by running the <a href="/cli/azure/appservice#az_appservice_list_locations"><code>az account list-locations</code></a> command.</li>
+<li>You can optionally include the argument <code>--location &lt;location-name&gt;</code> where <code>&lt;location-name&gt;</code> is an available Azure region. You can retrieve a list of allowable regions for your Azure account by running the <a href="/cli/azure/appservice#az-appservice-list-locations"><code>az account list-locations</code></a> command.</li>
</ul> </details>
In the `<h1>` heading tag, change "Azure App Service - Sample Static HTML Site"
Redeploy the app with the `az webapp up` command.
-```bash
+```azurecli
az webapp up --html ```
The left menu provides different pages for configuring your app.
In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these resources in the future, delete the resource group by running the following command in the Cloud Shell. Remember that the resource group name was automatically generated for you in the [create a web app](#2-create-a-web-app) step.
-```bash
+```azurecli
az group delete --name appsvc_rg_Windows_westeurope ```
app-service Quickstart Html https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-html.md
adobe-target-content: ./quickstart-html-uiex
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] ## Download the sample
git clone https://github.com/Azure-Samples/html-docs-hello-world.git
## Create a web app
-Change to the directory that contains the sample code and run the `az webapp up` command. In the following example, replace <app_name> with a unique app name. Static content is indicated by the `--html` flag.
+Change to the directory that contains the sample code and run the [az webapp up](/cli/azure/webapp#az-webapp-up) command. In the following example, replace <app_name> with a unique app name. Static content is indicated by the `--html` flag.
-```bash
+```azurecli
cd html-docs-hello-world az webapp up --location westeurope --name <app_name> --html
Save your changes and exit nano. Use the command `^O` to save and `^X` to exit.
You'll now redeploy the app with the same `az webapp up` command.
-```bash
+```azurecli
az webapp up --location westeurope --name <app_name> --html ```
The left menu provides different pages for configuring your app.
In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these resources in the future, delete the resource group by running the following command in the Cloud Shell. Remember that the resource group name was automatically generated for you in the [create a web app](#create-a-web-app) step.
-```bash
+```azurecli
az group delete --name appsvc_rg_Windows_westeurope ```
app-service Tutorial Troubleshoot Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-troubleshoot-monitor.md
You can follow the steps in this tutorial on macOS, Linux, or Windows.
To complete this tutorial, you'll need: - [Azure subscription](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio)-- [Azure CLI](/cli/azure/install-azure-cli)+ - [Git](https://git-scm.com/) + ## Create Azure resources First, you run several commands locally to setup a sample app to use with this tutorial. The commands create Azure resources, create a deployment user, and deploy the sample app to Azure. You'll be prompted for the password supplied as a part of the creation of the deployment user.
-```bash
+```azurecli
az group create --name myResourceGroup --location "South Central US" az webapp deployment user set --user-name <username> --password <password> az appservice plan create --name myAppServicePlan --resource-group myResourceGroup --sku B1 --is-linux
Now that you've deployed the sample app to Azure App Service, you'll configure m
In this step, you create a Log Analytics workspace to configure Azure Monitor with your app.
-```bash
+```azurecli
az monitor log-analytics workspace create --resource-group myResourceGroup --workspace-name myMonitorWorkspace ```
Diagnostic settings can be used to collect metrics for certain Azure services in
You run the following commands to create diagnostic settings for AppServiceConsoleLogs (standard output/error) and AppServiceHTTPLogs (web server logs). Replace _\<app-name>_ and _\<workspace-name>_ with your values. > [!NOTE]
-> The first two commands, `resourceID` and `workspaceID`, are variables to be used in the `az monitor diagnostic-settings create` command. See [Create diagnostic settings using Azure CLI](../azure-monitor/essentials/diagnostic-settings.md#create-using-azure-cli) for more information on this command.
+> The first two commands set the `resourceID` and `workspaceID` variables used in the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command. See [Create diagnostic settings using Azure CLI](../azure-monitor/essentials/diagnostic-settings.md#create-using-azure-cli) for more information on this command.
>
-```bash
+```azurecli
resourceID=$(az webapp show -g myResourceGroup -n <app-name> --query id --output tsv) workspaceID=$(az monitor log-analytics workspace show -g myResourceGroup --workspace-name <workspace-name> --query id --output tsv)
Converting images should no longer produce the HTTP 500 errors.
Delete the diagnostic setting with the following command:
-```bash
+```azurecli
az monitor diagnostic-settings delete --resource $resourceID -n myMonitorLogs ``` What you learned:
application-gateway Application Gateway Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-diagnostics.md
Previously updated : 10/18/2021 Last updated : 02/25/2022
Activity logging is automatically enabled for every Resource Manager resource. Y
1. Note your storage account's resource ID, where the log data is stored. This value is of the form: /subscriptions/\<subscriptionId\>/resourceGroups/\<resource group name\>/providers/Microsoft.Storage/storageAccounts/\<storage account name\>. You can use any storage account in your subscription. You can use the Azure portal to find this information.
- ![Portal: resource ID for storage account](./media/application-gateway-diagnostics/diagnostics1.png)
+ :::image type="content" source="media/application-gateway-diagnostics/diagnostics2.png" alt-text="Screenshot of storage account endpoints" lightbox="media/application-gateway-diagnostics/diagnostics2.png":::
2. Note your application gateway's resource ID for which logging is enabled. This value is of the form: /subscriptions/\<subscriptionId\>/resourceGroups/\<resource group name\>/providers/Microsoft.Network/applicationGateways/\<application gateway name\>. You can use the portal to find this information.
- ![Portal: resource ID for application gateway](./media/application-gateway-diagnostics/diagnostics2.png)
+ :::image type="content" source="media/application-gateway-diagnostics/diagnostics1.png" alt-text="Screenshot of app gateway properties" lightbox="media/application-gateway-diagnostics/diagnostics1.png":::
+ 3. Enable diagnostic logging by using the following PowerShell cmdlet:
application-gateway Configuration Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-infrastructure.md
For this scenario, use NSGs on the Application Gateway subnet. Put the following
**v2 supported scenarios** > [!WARNING]
- > An incorrect configuration of the route table could result in asymmetrical routing in Application Gateway v2. Ensure that all management/control plane traffic is sent directly to the Internet and not through a virtual appliance. Logging and metrics could also be affected.
+ > An incorrect configuration of the route table could result in asymmetrical routing in Application Gateway v2. Ensure that all management/control plane traffic is sent directly to the Internet and not through a virtual appliance. Logging, metrics, and CRL checks could also be affected.
**Scenario 1**: UDR to disable Border Gateway Protocol (BGP) Route Propagation to the Application Gateway subnet
automation Add User Assigned Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/add-user-assigned-identity.md
This article shows you how to add a user-assigned managed identity for an Azure Automation account and how to use it to access other resources. For more information on how managed identities work with Azure Automation, see [Managed identities](automation-security-overview.md#managed-identities). > [!NOTE]
-> User-assigned managed identities are supported for cloud jobs only.
+> User-assigned managed identities are supported for Azure jobs only.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
If you don't have an Azure subscription, create a [free account](https://azure.m
- An Azure resource that you want to access from your Automation runbook. This resource needs to have a role defined for the user-assigned managed identity, which helps the Automation runbook authenticate access to the resource. To add roles, you need to be an owner for the resource in the corresponding Azure AD tenant. -- If you want to execute hybrid jobs using a user-assigned managed identity, update the Hybrid Runbook Worker to the latest version. The minimum required versions are:-
- - Windows Hybrid Runbook Worker: version 7.3.1125.0
- - Linux Hybrid Runbook Worker: version 1.7.4.0
- - To assign an Azure role, you must have ```Microsoft.Authorization/roleAssignments/write``` permissions, such as [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) or [Owner](../role-based-access-control/built-in-roles.md#owner). ## Add user-assigned managed identity for Azure Automation account
print(response.text)
- If you need to disable a managed identity, see [Disable your Azure Automation account managed identity](disable-managed-identity-for-automation.md). -- For an overview of Azure Automation account security, see [Automation account authentication overview](automation-security-overview.md).
+- For an overview of Azure Automation account security, see [Automation account authentication overview](automation-security-overview.md).
automation Automation Solution Vm Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-solution-vm-management.md
The Start/Stop VMs during off-hours feature starts or stops enabled Azure VMs. It
> [!NOTE] > Before you install this version (v1), we would like you to know about the [next version](../azure-functions/start-stop-vms/overview.md), which is in preview right now. This new version (v2) offers all the same functionality as this one, but is designed to take advantage of newer technology in Azure. It adds some of the commonly requested features from customers, such as multi-subscription support from a single Start/Stop instance. >
-> Start/Stop VMs during off-hours (v1) will deprecate on 5/21/2022.
+> Start/Stop VMs during off-hours (v1) will be deprecated soon and the date will be announced once V2 moves to general availability (GA).
This feature uses the [Start-AzVm](/powershell/module/az.compute/start-azvm) cmdlet to start VMs and the [Stop-AzVM](/powershell/module/az.compute/stop-azvm) cmdlet to stop them.
azure-arc Create Data Controller Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-direct-cli.md
This article describes how to create the Azure Arc data controller in direct con
Before you begin, verify that you have completed the prerequisites in [Deploy data controller - direct connect mode - prerequisites](create-data-controller-direct-prerequisites.md).
-Creating an Azure Arc data controller in direct connectivity mode involves the following steps:
+Creating an Azure Arc data controller in direct connectivity mode involves the following steps:
-1. Create an Azure Arc-enabled data services extension.
-1. Create a custom location.
-1. Create the data controller.
+1. Create an Azure Arc-enabled data services extension.
+1. Create a custom location.
+1. Create the data controller.
+ ## Step 1: Create an Azure Arc-enabled data services extension
The following command creates the Arc data services extension.
#### [Linux](#tab/linux)
-```console
+```azurecli
az k8s-extension create --cluster-name ${clusterName} --resource-group ${resourceGroup} --name ${adsExtensionName} --cluster-type connectedClusters --extension-type microsoft.arcdataservices --auto-upgrade false --scope cluster --release-namespace ${namespace} --config Microsoft.CustomLocation.ServiceAccount=sa-arc-bootstrapper az k8s-extension show --resource-group ${resourceGroup} --cluster-name ${resourceName} --name ${adsExtensionName} --cluster-type connectedclusters ``` #### [Windows (PowerShell)](#tab/windows)
-```PowerShell
+```azurecli
az k8s-extension create --cluster-name $ENV:clusterName --resource-group $ENV:resourceGroup --name $ENV:adsExtensionName --cluster-type connectedClusters --extension-type microsoft.arcdataservices --auto-upgrade false --scope cluster --release-namespace $ENV:namespace --config Microsoft.CustomLocation.ServiceAccount=sa-arc-bootstrapper

az k8s-extension show --resource-group $ENV:resourceGroup --cluster-name $ENV:clusterName --name $ENV:adsExtensionName --cluster-type connectedclusters
```
When the Arc data services extension is created, Azure creates a managed identit
### Retrieve managed identity of the Arc data controller extension
-```powershell
+```azurecli
$Env:MSI_OBJECT_ID = (az k8s-extension show --resource-group <resource group> --cluster-name <connectedclustername> --cluster-type connectedClusters --name <name of extension> | convertFrom-json).identity.principalId

#Example
$Env:MSI_OBJECT_ID = (az k8s-extension show --resource-group myresourcegroup --cluster-name myconnectedcluster --cluster-type connectedClusters --name ads-extension | convertFrom-json).identity.principalId
A custom location is an Azure resource that is equivalent to a namespace in a Ku
#### [Linux](#tab/linux)
-```bash
+```azurecli
export clName=mycustomlocation
export hostClusterId=$(az connectedk8s show --resource-group ${resourceGroup} --name ${clusterName} --query id -o tsv)
export extensionId=$(az k8s-extension show --resource-group ${resourceGroup} --cluster-name ${clusterName} --cluster-type connectedClusters --name ${adsExtensionName} --query id -o tsv)
az customlocation create --resource-group ${resourceGroup} --name ${clName} --na
#### [Windows (PowerShell)](#tab/windows)
-```PowerShell
+```azurecli
$ENV:clName="mycustomlocation"
$ENV:hostClusterId=(az connectedk8s show --resource-group $ENV:resourceGroup --name $ENV:clusterName --query id -o tsv)
$ENV:extensionId=(az k8s-extension show --resource-group $ENV:resourceGroup --cluster-name $ENV:clusterName --cluster-type connectedClusters --name $ENV:adsExtensionName --query id -o tsv)
azure-arc Managed Instance High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-high-availability.md
After all containers within the pod have recovered, you can connect to the manag
## High availability in Business Critical service tier
-In the Business Critical service tier, in addition to what is natively provided by Kubernetes orchestration, there is a new technology called Contained availability group that provides higher levels of availability. Azure Arc-enabled SQL managed instance deployed with `Business Critical` service tier can be deployed with either 2 or 3 replicas. These replicas are always kept in sync with each other. Contained availability group is built on SQL Server Always On availability groups. With contained availability groups, any pod crashes or node failures are transparent to the application as there is at least one other pod that has the SQL managed instance that has all the data from the primary and ready to take on connections.
+In the Business Critical service tier, in addition to what Kubernetes orchestration natively provides, Azure SQL Managed Instance for Azure Arc provides a contained availability group. The contained availability group is built on SQL Server Always On technology and provides higher levels of availability. An Azure Arc-enabled SQL managed instance deployed with the *Business Critical* service tier can have either 2 or 3 replicas, which are always kept in sync with each other. With contained availability groups, any pod crash or node failure is transparent to the application because at least one other pod hosts a replica of the instance with all the data from the primary, ready to take on connections.
## Contained availability groups
Additional steps are required to restore a database into an availability group.
1. Expose the primary instance external endpoint by creating a new Kubernetes service.
- Determine the pod that hosts the primary replica by connecting to the managed instance and run:
+ Determine the pod that hosts the primary replica. Connect to the managed instance and run:
```sql
SELECT @@SERVERNAME
```
+
+ The query returns the pod that hosts the primary replica.
+ Create the Kubernetes service to the primary instance by running the command below if your Kubernetes cluster uses `nodePort` services. Replace `podName` with the name of the server returned in the previous step, and `serviceName` with the preferred name for the Kubernetes service to create.

```console
Additional steps are required to restore a database into an availability group.
```console
kubectl get services -n <namespaceName>
```
+
2. Restore the database to the primary instance endpoint. Add the database backup file into the primary instance container.
Additional steps are required to restore a database into an availability group.
```

> [!IMPORTANT]
-> As a best practice, you should cleanup by deleting the Kubernetes service created above by running this command:
+> As a best practice, you should delete the Kubernetes service created above by running this command:
>
>```console
>kubectl delete svc sql2-0-p -n arc
azure-arc Monitor Grafana Kibana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/monitor-grafana-kibana.md
Kibana and Grafana web dashboards are provided to bring insight and clarity to the Kubernetes namespaces being used by Azure Arc-enabled data services. To access the Kibana and Grafana web dashboards and view service endpoints, check the [Azure Data Studio dashboards](./azure-data-studio-dashboards.md) documentation.

## Monitor Azure SQL managed instances on Azure Arc

To access the logs and monitoring dashboards for Azure Arc-enabled SQL Managed Instance, run the following Azure CLI command
-```azurecl
+```azurecli
az sql mi-arc endpoint list -n <name of SQL instance> --use-k8s
```
+
The relevant Grafana dashboards are:

* "Azure SQL managed instance Metrics"
The relevant Grafana dashboards are:
To access the logs and monitoring dashboards for PostgreSQL Hyperscale, run the following Azure CLI command
-```bash
-
+```azurecli
az postgres arc-server endpoint list -n <name of postgreSQL instance> --k8s-namespace <namespace> --use-k8s
```

The relevant postgreSQL dashboards are:
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
Previously updated : 01/27/2022 Last updated : 02/25/2022 # Customer intent: As a data professional, I want to understand why my solutions would benefit from running with Azure Arc-enabled data services so that I can leverage the capability of the feature.
This article highlights capabilities, features, and enhancements recently released or improved for Azure Arc-enabled data services.
+## February 2022
+
+This release is published February 25, 2022.
+
+### Image tag
+
+`v1.4.0_2022-02-25`
+
+For complete release version information, see [Version log](version-log.md).
+
+### SQL Managed Instance
+
+- Support for readable secondary replicas:
+ - To set readable secondary replicas, use `--readable-secondaries` when you create or update an Arc-enabled SQL Managed Instance deployment.
+ - Set `--readable-secondaries` to any value between 0 and the number of replicas minus 1.
+ - `--readable-secondaries` only applies to the Business Critical tier.
+- Automatic backups are taken on the primary instance in a Business Critical service tier when there are multiple replicas. When a failover happens, backups move to the new primary.
+- RWX capable storage class is required for backups, for both General Purpose and Business Critical service tiers.
+- Billing support when using multiple read replicas.
+
+For additional information about service tiers, see [High Availability with Azure Arc-enabled SQL Managed Instance (preview)](managed-instance-high-availability.md).
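The replica constraint described in the list above can be sketched as a small validation helper. This is an illustrative Python sketch only; the function name and error messages are made up for this example and are not part of any Azure tooling:

```python
def validate_readable_secondaries(readable_secondaries: int, replicas: int) -> int:
    """Check the rule above: --readable-secondaries must be between 0
    and the number of replicas minus 1 (illustrative helper only)."""
    if replicas < 1:
        raise ValueError("at least one replica is required")
    if not 0 <= readable_secondaries <= replicas - 1:
        raise ValueError(
            f"readable secondaries must be in [0, {replicas - 1}], "
            f"got {readable_secondaries}"
        )
    return readable_secondaries

# A 3-replica Business Critical deployment allows 0, 1, or 2 readable secondaries.
print(validate_readable_secondaries(2, 3))
```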
+
+### User experience improvements
+
+The following improvements are available in [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio).
+
+- Azure Arc and Azure CLI extensions now generally available.
+- Changed edit commands for SQL Managed Instance for Azure Arc dashboard to use `update`, reflecting Azure CLI changes. This works in indirect or direct mode.
+- Data controller deployment wizard step for connectivity mode is now earlier in the process.
+- Removed an extra backups field in SQL MI deployment wizard.
+
## January 2022

This release is published January 27, 2022.
+### Image tag
+
+`v1.3.0_2022-01-27`
+
+For complete release version information, see [Version log](version-log.md).
+
### Data controller

- Initiate an upgrade of the data controller from the portal in the direct connected mode
azure-arc Version Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/version-log.md
Previously updated : 12/16/2021 Last updated : 02/25/2022 # Customer intent: As a data professional, I want to understand what versions of components align with specific releases.
This article identifies the component versions with each release of Azure Arc-enabled data services.
+### February 25, 2022
+
+|Component |Value |
+|--|--|
+|Container images tag |`v1.4.0_2022-02-25`|
+|CRD names and versions |`datacontrollers.arcdata.microsoft.com`: v1beta1, v1, v2, v3</br>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2</br>`kafkas.arcdata.microsoft.com`: v1beta1</br>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2</br>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1, v2, v3, v4</br>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2</br>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1</br>`dags.sql.arcdata.microsoft.com`: v1beta1, v2beta2</br>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1|
+|ARM API version|2021-11-01|
+|`arcdata` Azure CLI extension version| 1.2.1|
+|Arc-enabled Kubernetes helm chart extension version|1.1.18791000|
+|Arc Data extension for Azure Data Studio|1.0|
+
### January 27, 2022

|Component |Value |
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-overview.md
There's no cost for the Azure Monitor agent, but you might incur charges for the
The Azure Monitor agent doesn't require any keys but instead requires a [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#system-assigned-managed-identity). You must have a system-assigned managed identity enabled on each virtual machine before you deploy the agent. ## Networking
-The Azure Monitor agent supports Azure service tags (both AzureMonitor and AzureResourceManager tags are required). It supports connecting via **direct proxies, Log Analytics gateway and private links** as described below.
+The Azure Monitor agent supports Azure service tags (both AzureMonitor and AzureResourceManager tags are required). It supports connecting via **direct proxies, Log Analytics gateway, and private links** as described below.
### Proxy configuration If the machine connects through a proxy server to communicate over the internet, review requirements below to understand the network configuration required.
New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType Azur
+### Log Analytics gateway configuration
+1. Follow the instructions above to configure proxy settings on the agent and provide the IP address and port number corresponding to the gateway server. If you have deployed multiple gateway servers behind a load balancer, the agent proxy configuration is the virtual IP address of the load balancer instead.
+2. Add the **configuration endpoint URL** to fetch data collection rules to the allow list for the gateway:
+ `Add-OMSGatewayAllowedHost -Host global.handler.control.monitor.azure.com`
+ `Add-OMSGatewayAllowedHost -Host <gateway-server-region-name>.handler.control.monitor.azure.com`
+ (If using private links on the agent, you must also add the [dce endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint))
+3. Add the **data ingestion endpoint URL** to the allow list for the gateway:
+ `Add-OMSGatewayAllowedHost -Host <log-analytics-workspace-id>.ods.opinsights.azure.com`
+4. Restart the **OMS Gateway** service to apply the changes:
+ `Stop-Service -Name <gateway-name>`
+ `Start-Service -Name <gateway-name>`
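The host names added to the gateway allow list in the steps above follow a fixed pattern. As a summary, here is a short sketch that assembles them from a workspace ID and region (illustrative Python; the function name is invented for this example and is not part of any Azure SDK):

```python
def gateway_allowed_hosts(workspace_id: str, region: str) -> list:
    """Hosts to pass to Add-OMSGatewayAllowedHost, per the steps above."""
    return [
        # Configuration endpoints used to fetch data collection rules (step 2)
        "global.handler.control.monitor.azure.com",
        f"{region}.handler.control.monitor.azure.com",
        # Workspace-specific data ingestion endpoint (step 3)
        f"{workspace_id}.ods.opinsights.azure.com",
    ]

for host in gateway_allowed_hosts("00000000-0000-0000-0000-000000000000", "eastus"):
    print(host)
```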
+
## Private link configuration

To configure the agent to use private links for network communications with Azure Monitor, you can use [Azure Monitor Private Links Scopes (AMPLS)](../logs/private-link-security.md) and [data collection endpoints](azure-monitor-agent-data-collection-endpoint.md) to enable required network isolation.
azure-monitor Data Sources Custom Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-custom-logs.md
Last updated 02/07/2022
# Collect custom logs with Log Analytics agent in Azure Monitor
+> [!IMPORTANT]
+> This article describes collecting file-based text logs using the Log Analytics agent. It should not be confused with the [custom logs API](../logs/custom-logs-overview.md), which allows you to send data to Azure Monitor Logs using a REST API.
+
The Custom Logs data source for the Log Analytics agent in Azure Monitor allows you to collect events from text files on both Windows and Linux computers. Many applications log information to text files instead of standard logging services such as Windows Event log or Syslog. Once collected, you can either parse the data into individual fields in your queries or extract the data during collection to individual fields.

> [!IMPORTANT]
azure-monitor Resource Manager Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/resource-manager-agent.md
The following sample installs the Azure Monitor agent on a Windows Azure virtual
"type": "AzureMonitorWindowsAgent", "typeHandlerVersion": "1.0", "autoUpgradeMinorVersion": true,
- "enableAutomaticUpgrade":true
+ "enableAutomaticUpgrade":true
} } ]
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
appInsights.trackMetric({name: "queueLength", average: 42});
```csharp
var sample = new MetricTelemetry();
sample.Name = "queueLength";
-sample.Value = 42.3;
+sample.Sum = 42.3;
telemetryClient.TrackMetric(sample);
```
azure-monitor Api Filtering Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-filtering-sampling.md
For apps written by using [ASP.NET Core](asp-net-core.md#adding-telemetry-proces
} ```
+To register telemetry processors that need parameters in ASP.NET Core, create a custom class implementing `ITelemetryProcessorFactory`. Call the constructor with the desired parameters in the `Create` method, and then use `AddSingleton<ITelemetryProcessorFactory, MyTelemetryProcessorFactory>()`.
+
### Example filters

#### Synthetic requests
azure-monitor Apm Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/apm-tables.md
Legacy table: requests
|user_AuthenticatedId|string|UserAuthenticatedId|string| |user_Id|string|UserId|string|
-### AppSystemEvents
+### AppExceptions
Legacy table: exceptions
azure-monitor Create Workspace Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/create-workspace-resource.md
The `New-AzApplicationInsights` PowerShell command does not currently support cr
"properties": { "ApplicationId": "[parameters('name')]", "Application_Type": "[parameters('type')]",
- "Flow_Type": "Redfield",
+ "Flow_Type": "Bluefield",
"Request_Source": "[parameters('requestSource')]", "WorkspaceResourceId": "[parameters('workspaceResourceId')]" }
azure-monitor Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/powershell.md
armclient GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/
To set the daily cap reset time, you can use [ARMClient](https://github.com/projectkudu/ARMClient). Here's an example using `ARMClient`, to set the reset time to a new hour (in this example 12:00 UTC): ```PS
-armclient PUT /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/microsoft.insights/components/MyResourceName/CurrentBillingFeatures?api-version=2018-05-01-preview "{'CurrentBillingFeatures':['Basic'],'DataVolumeCap':{'ResetTime':12}}"
+armclient PUT /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/microsoft.insights/components/MyResourceName/CurrentBillingFeatures?api-version=2018-05-01-preview "{'CurrentBillingFeatures':['Basic'],'DataVolumeCap':{'Cap':100,'WarningThreshold':80,'ResetTime':12}}"
```

<a id="price"></a>
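The request body in the `armclient PUT` example above is ordinary JSON. A small sketch that builds the same payload programmatically (illustrative Python, not an official client; the function name is made up for this example):

```python
import json

def daily_cap_body(cap_gb: float, warning_threshold: int, reset_hour_utc: int) -> str:
    """Build the CurrentBillingFeatures payload shown in the armclient example."""
    body = {
        "CurrentBillingFeatures": ["Basic"],
        "DataVolumeCap": {
            "Cap": cap_gb,                        # daily cap in GB
            "WarningThreshold": warning_threshold,  # percent at which to warn
            "ResetTime": reset_hour_utc,           # hour of day (UTC) to reset
        },
    }
    return json.dumps(body)

print(daily_cap_body(100, 80, 12))
```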
azure-monitor Autoscale Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-best-practices.md
Autoscale will post to the Activity Log if any of the following conditions occur
You can also use an Activity Log alert to monitor the health of the autoscale engine. Here are examples to [create an Activity Log Alert to monitor all autoscale engine operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-alert) or to [create an Activity Log Alert to monitor all failed autoscale scale in/scale out operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-failed-alert).
-In addition to using activity log alerts, you can also configure email or webhook notifications to get notified for successful scale actions via the notifications tab on the autoscale setting.
+In addition to using activity log alerts, you can also configure email or webhook notifications to get notified for scale actions via the notifications tab on the autoscale setting.
## Next Steps - [Create an Activity Log Alert to monitor all autoscale engine operations on your subscription.](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-alert)
azure-monitor Data Collection Rule Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-overview.md
Last updated 02/21/2022
## Types of data collection rules There are currently two types of data collection rule in Azure Monitor: -- **Standard DCR**. Used with different workflows that send data to Azure Monitor. Workflows currently supported are [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) and [custom logs](../logs/custom-logs-overview.md).
+- **Standard DCR**. Used with different workflows that send data to Azure Monitor. Workflows currently supported are [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) and [custom logs (preview)](../logs/custom-logs-overview.md).
-- **Workspace transformation DCR**. Used with a Log Analytics workspace to apply transformations to workflows that don't currently support DCRs.
+- **Workspace transformation DCR**. Used with a Log Analytics workspace to apply [ingestion-time transformations (preview)](../logs/ingestion-time-transformations.md) to workflows that don't currently support DCRs.
## Structure of a data collection rule Data collection rules are formatted in JSON. While you may not need to interact with them directly, there are scenarios where you may need to directly edit a data collection rule. See [Data collection rule structure](data-collection-rule-structure.md) for a description of this structure and different elements.
See the following resources for programmatically working with DCRs.
## Data resiliency and high availability
-Data collection rules are stored regionally, and are available in all public regions where Log Analytics is supported. Government regions and clouds are not currently supported. A rule gets created and stored in the region you specify, and is backed up to the [paired-region](../../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) within the same geography. The service is deployed to all three [availability zones](../../availability-zones/az-overview.md#availability-zones) within the region, making it a **zone-redundant service** which further adds to high availability.
+A rule gets created and stored in the region you specify, and is backed up to the [paired-region](../../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) within the same geography. The service is deployed to all three [availability zones](../../availability-zones/az-overview.md#availability-zones) within the region, making it a **zone-redundant service** which further adds to high availability.
+
+## Supported regions
+Data collection rules are stored regionally, and are available in all public regions where Log Analytics is supported, as well as the Azure Government and China clouds. Air-gapped clouds are not yet supported.
### Single region data residency

The ability to store customer data in a single region is a preview feature that is currently available only in the Southeast Asia Region (Singapore) of the Asia Pacific Geo and the Brazil South (Sao Paulo State) Region of the Brazil Geo. Single region residency is enabled by default in these regions.
azure-monitor Data Collection Rule Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-structure.md
Each data source has a data source type. Each type defines a unique set of prope
| Data source type | Description | |:|:|
-| extension | VM extension-based data source |
+| extension | VM extension-based data source, used exclusively by Log Analytics solutions and Azure services ([View agent supported services and solutions](../agents/azure-monitor-agent-overview.md#supported-services-and-features)) |
| performanceCounters | Performance counters for both Windows and Linux | | syslog | Syslog events on Linux | | windowsEventLogs | Windows event log |
azure-monitor Data Collection Rule Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-transformations.md
Transformations don't currently support [has](/azure/data-explorer/kusto/query/h
### Handling dynamic data
-Since the properties of type [dynamic](/azure/data-explorer/kusto/query/scalar-data-types/dynamic) aren't supported in the input stream schema, you need alternate methods for strings containing JSON.
-
-Consider the following input:
+Consider the following input with [dynamic data](/azure/data-explorer/kusto/query/scalar-data-types/dynamic):
```json
{
Consider the following input:
}
```
-In order to access the properties in *AdditionalContext*, define it as string-typed column in the input stream:
+In order to access the properties in *AdditionalContext*, define it as dynamic-typed column in the input stream:
```json "columns": [
In order to access the properties in *AdditionalContext*, define it as string-ty
},
{
    "name": "AdditionalContext",
- "type": "string"
+ "type": "dynamic"
    }
]
```
source
``` ### Dynamic literals
-[Dynamic literals](/azure/data-explorer/kusto/query/scalar-data-types/dynamic#dynamic-literals) aren't supported, but you can use the [parse_json function](/azure/data-explorer/kusto/query/parsejsonfunction) as a workaround.
+Use the [parse_json function](/azure/data-explorer/kusto/query/parsejsonfunction) to handle [dynamic literals](/azure/data-explorer/kusto/query/scalar-data-types/dynamic#dynamic-literals).
-For example, the following query isn't supported:
+For example, the following queries provide the same functionality:
```kql print d=dynamic({"a":123, "b":"hello", "c":[1,2,3], "d":{}})
- ```
-
-The following query is supported and provides the same functionality:
+```
```kql print d=parse_json('{"a":123, "b":"hello", "c":[1,2,3], "d":{}}')
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
description: List of metrics available for each resource type with Azure Monitor
Previously updated : 10/05/2021 Last updated : 02/08/2022
The Azure Monitor agent replaces the Azure Diagnostics extension and Log Analyti
This latest update adds a new column and reorders the metrics to be alphabetical. The additional information means that the tables might have a horizontal scroll bar at the bottom, depending on the width of your browser window. If you seem to be missing information, use the scroll bar to see the entirety of the table.
+## microsoft.aadiam/azureADMetrics
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|ThrottledRequests|No|ThrottledRequests|Count|Average|azureADMetrics type metric|No Dimensions|
++ ## Microsoft.AnalysisServices/servers |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|BackendDuration|Yes|Duration of Backend Requests|MilliSeconds|Average|Duration of Backend Requests in milliseconds|Location, Hostname|
-|Capacity|Yes|Capacity|Percent|Average|Utilization metric for ApiManagement service. Note: For skus other than Premium, 'Max' aggregation will show the value as 0.|Location|
-|ConnectionAttempts|Yes|WebSocket Connection Attempts (Preview)|Count|Total|Count of WebSocket connection attempts based on selected source and destination|Location, Source, Destination, State|
-|Duration|Yes|Overall Duration of Gateway Requests|MilliSeconds|Average|Overall Duration of Gateway Requests in milliseconds|Location, Hostname|
+|BackendDuration|Yes|Duration of Backend Requests|Milliseconds|Average|Duration of Backend Requests in milliseconds|Location, Hostname|
+|Capacity|Yes|Capacity|Percent|Average|Utilization metric for ApiManagement service|Location|
+|Duration|Yes|Overall Duration of Gateway Requests|Milliseconds|Average|Overall Duration of Gateway Requests in milliseconds|Location, Hostname|
|EventHubDroppedEvents|Yes|Dropped EventHub Events|Count|Total|Number of events skipped because of queue size limit reached|Location| |EventHubRejectedEvents|Yes|Rejected EventHub Events|Count|Total|Number of rejected EventHub events (wrong configuration or unauthorized)|Location| |EventHubSuccessfulEvents|Yes|Successful EventHub Events|Count|Total|Number of successful EventHub events|Location|
This latest update adds a new column and reorders the metrics to be alphabetical
|SuccessfulRequests|Yes|Successful Gateway Requests (Deprecated)|Count|Total|Number of successful gateway requests - Use multi-dimension request metric with GatewayResponseCodeCategory dimension instead|Location, Hostname| |TotalRequests|Yes|Total Gateway Requests (Deprecated)|Count|Total|Number of gateway requests - Use multi-dimension request metric with GatewayResponseCodeCategory dimension instead|Location, Hostname| |UnauthorizedRequests|Yes|Unauthorized Gateway Requests (Deprecated)|Count|Total|Number of unauthorized gateway requests - Use multi-dimension request metric with GatewayResponseCodeCategory dimension instead|Location, Hostname|
-|WebSocketMessages|Yes|WebSocket Messages (Preview)|Count|Total|Count of WebSocket messages based on selected source and destination|Location, Source, Destination|
## Microsoft.AppConfiguration/configurationStores
This latest update adds a new column and reorders the metrics to be alphabetical
|current-requests|Yes|current-requests|Count|Average|Total number of requests in processing in the lifetime of the process|Deployment, AppName, Pod| |exception-count|Yes|exception-count|Count|Total|Number of Exceptions|Deployment, AppName, Pod| |failed-requests|Yes|failed-requests|Count|Average|Total number of failed requests in the lifetime of the process|Deployment, AppName, Pod|
+|GatewayHttpServerRequestsMilliSecondsMax|Yes|Max time of requests|Milliseconds|Maximum|The max time of requests|Pod, httpStatusCode, outcome, httpMethod|
+|GatewayHttpServerRequestsMilliSecondsSum|Yes|Total time of requests|Milliseconds|Total|The total time of requests|Pod, httpStatusCode, outcome, httpMethod|
+|GatewayHttpServerRequestsSecondsCount|Yes|Request count|Count|Total|The number of requests|Pod, httpStatusCode, outcome, httpMethod|
+|GatewayJvmGcLiveDataSizeBytes|Yes|jvm.gc.live.data.size|Bytes|Average|Size of old generation memory pool after a full GC|Pod|
+|GatewayJvmGcMaxDataSizeBytes|Yes|jvm.gc.max.data.size|Bytes|Maximum|Max size of old generation memory pool|Pod|
+|GatewayJvmGcMemoryAllocatedBytesTotal|Yes|jvm.gc.memory.allocated|Bytes|Maximum|Incremented for an increase in the size of the young generation memory pool after one GC to before the next|Pod|
+|GatewayJvmGcMemoryPromotedBytesTotal|Yes|jvm.gc.memory.promoted|Bytes|Maximum|Count of positive increases in the size of the old generation memory pool before GC to after GC|Pod|
+|GatewayJvmGcPauseSecondsCount|Yes|jvm.gc.pause.total.count|Count|Total|GC Pause Count|Pod|
+|GatewayJvmGcPauseSecondsMax|Yes|jvm.gc.pause.max.time|Seconds|Maximum|GC Pause Max Time|Pod|
+|GatewayJvmGcPauseSecondsSum|Yes|jvm.gc.pause.total.time|Seconds|Total|GC Pause Total Time|Pod|
+|GatewayJvmMemoryCommittedBytes|Yes|jvm.memory.committed|Bytes|Average|Memory assigned to JVM in bytes|Pod|
+|GatewayJvmMemoryUsedBytes|Yes|jvm.memory.used|Bytes|Average|Memory Used in bytes|Pod|
+|GatewayProcessCpuUsage|Yes|process.cpu.usage|Percent|Average|The recent CPU usage for the JVM process|Pod|
+|GatewayRatelimitThrottledCount|Yes|Throttled requests count|Count|Total|The count of the throttled requests|Pod|
+|GatewaySystemCpuUsage|Yes|system.cpu.usage|Percent|Average|The recent CPU usage for the whole system|Pod|
|gc-heap-size|Yes|gc-heap-size|Count|Average|Total heap size reported by the GC (MB)|Deployment, AppName, Pod| |gen-0-gc-count|Yes|gen-0-gc-count|Count|Average|Number of Gen 0 GCs|Deployment, AppName, Pod| |gen-0-size|Yes|gen-0-size|Bytes|Average|Gen 0 Heap Size|Deployment, AppName, Pod|
This latest update adds a new column and reorders the metrics to be alphabetical
|monitor-lock-contention-count|Yes|monitor-lock-contention-count|Count|Average|Number of times there were contention when trying to take the monitor lock|Deployment, AppName, Pod| |PodCpuUsage|Yes|App CPU Usage|Percent|Average|The recent CPU usage for the app|Deployment, AppName, Pod| |PodMemoryUsage|Yes|App Memory Usage|Percent|Average|The recent Memory usage for the app|Deployment, AppName, Pod|
+|PodNetworkIn|Yes|App Network In|Bytes|Average|Cumulative count of bytes received in the app|Deployment, AppName, Pod|
+|PodNetworkOut|Yes|App Network Out|Bytes|Average|Cumulative count of bytes sent from the app|Deployment, AppName, Pod|
|process.cpu.usage|Yes|process.cpu.usage|Percent|Average|The recent CPU usage for the JVM process|Deployment, AppName, Pod| |requests-per-second|Yes|requests-rate|Count|Average|Request rate|Deployment, AppName, Pod| |system.cpu.usage|Yes|system.cpu.usage|Percent|Average|The recent CPU usage for the whole system|Deployment, AppName, Pod|
This latest update adds a new column and reorders the metrics to be alphabetical
|TotalCalls|Yes|Total Calls|Count|Total|Total number of calls|ApiName, ServingRegion, StatusCode| |TotalErrors|Yes|Total Errors|Count|Total|Number of calls with any error (HTTP status code 4xx or 5xx)|ApiName, ServingRegion, StatusCode| - ## Microsoft.Blockchain/blockchainMembers |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
|RequestHandled|Yes|Handled Requests|Count|Total|Handled Requests|Node|
|StorageUsage|Yes|Storage Usage|Bytes|Average|Storage Usage|Node|
-
-## microsoft.botservice/botservices
+## Microsoft.BotService/botServices
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|RequestLatency|Yes|Request Latency|Milliseconds|Total|Time taken by the server to process the request|Operation, Authentication, Protocol, DataCenter|
-|RequestsTraffic|Yes|Requests Traffic|Percent|Count|Number of Requests Made|Operation, Authentication, Protocol, StatusCode, StatusCodeClass, DataCenter|
-
+|RequestLatency|Yes|Requests Latencies|Milliseconds|Average|How long it takes to get request response|Operation, Authentication, Protocol, ResourceId, Region|
+|RequestsTraffic|Yes|Requests Traffic|Count|Average|Number of requests within a given period of time|Operation, Authentication, Protocol, ResourceId, Region, StatusCode, StatusCodeClass, StatusText|
## Microsoft.Cache/redis
|usedmemoryRss8|Yes|Used Memory RSS (Shard 8)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
|usedmemoryRss9|Yes|Used Memory RSS (Shard 9)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-
## Microsoft.Cache/redisEnterprise
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
|cachemisses|Yes|Cache Misses|Count|Total||InstanceId|
|cacheRead|Yes|Cache Read|BytesPerSecond|Maximum||InstanceId|
|cacheWrite|Yes|Cache Write|BytesPerSecond|Maximum||InstanceId|
+|CharactersTrained|Yes|Characters Trained (Deprecated)|Count|Total|Total number of characters trained.|ApiName, OperationName, Region|
+|CharactersTranslated|Yes|Characters Translated (Deprecated)|Count|Total|Total number of characters in incoming text request.|ApiName, OperationName, Region|
|connectedclients|Yes|Connected Clients|Count|Maximum||InstanceId|
|errors|Yes|Errors|Count|Maximum||InstanceId, ErrorType|
|evictedkeys|Yes|Evicted Keys|Count|Total||No Dimensions|
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|AudioSecondsTranscribed|Yes|Audio Seconds Transcribed|Count|Total|Number of seconds transcribed|ApiName, FeatureName, UsageChannel, Region|
-|AudioSecondsTranslated|Yes|Audio Seconds Translated|Count|Total|Number of seconds translated|ApiName, FeatureName, UsageChannel, Region|
|BlockedCalls|Yes|Blocked Calls|Count|Total|Number of calls that exceeded rate or quota limit.|ApiName, OperationName, Region|
|CharactersTrained|Yes|Characters Trained (Deprecated)|Count|Total|Total number of characters trained.|ApiName, OperationName, Region|
|CharactersTranslated|Yes|Characters Translated (Deprecated)|Count|Total|Total number of characters in incoming text request.|ApiName, OperationName, Region|
|ClientErrors|Yes|Client Errors|Count|Total|Number of calls with client side error (HTTP response code 4xx).|ApiName, OperationName, Region|
-|ComputerVisionTransactions|Yes|Computer Vision Transactions|Count|Total|Number of Computer Vision Transactions|ApiName, FeatureName, UsageChannel, Region|
-|CustomVisionTrainingTime|Yes|Custom Vision Training Time|Seconds|Total|Custom Vision training time|ApiName, FeatureName, UsageChannel, Region|
-|CustomVisionTransactions|Yes|Custom Vision Transactions|Count|Total|Number of Custom Vision prediction transactions|ApiName, FeatureName, UsageChannel, Region|
|DataIn|Yes|Data In|Bytes|Total|Size of incoming data in bytes.|ApiName, OperationName, Region|
|DataOut|Yes|Data Out|Bytes|Total|Size of outgoing data in bytes.|ApiName, OperationName, Region|
-|DocumentCharactersTranslated|Yes|Document Characters Translated|Count|Total|Number of characters in document translation request.|ApiName, FeatureName, UsageChannel, Region|
-|DocumentCustomCharactersTranslated|Yes|Document Custom Characters Translated|Count|Total|Number of characters in custom document translation request.|ApiName, FeatureName, UsageChannel, Region|
-|FaceImagesTrained|Yes|Face Images Trained|Count|Total|Number of images trained. 1,000 images trained per transaction.|ApiName, FeatureName, UsageChannel, Region|
-|FacesStored|Yes|Faces Stored|Count|Total|Number of faces stored, prorated daily. The number of faces stored is reported daily.|ApiName, FeatureName, UsageChannel, Region|
-|FaceTransactions|Yes|Face Transactions|Count|Total|Number of API calls made to Face service|ApiName, FeatureName, UsageChannel, Region|
-|ImagesStored|Yes|Images Stored|Count|Total|Number of Custom Vision images stored.|ApiName, FeatureName, UsageChannel, Region|
|Latency|Yes|Latency|MilliSeconds|Average|Latency in milliseconds.|ApiName, OperationName, Region|
|LearnedEvents|Yes|Learned Events|Count|Total|Number of Learned Events.|IsMatchBaseline, Mode, RunId|
-|LUISSpeechRequests|Yes|LUIS Speech Requests|Count|Total|Number of LUIS speech to intent understanding requests|ApiName, FeatureName, UsageChannel, Region|
-|LUISTextRequests|Yes|LUIS Text Requests|Count|Total|Number of LUIS text requests|ApiName, FeatureName, UsageChannel, Region|
-|MatchedRewards|Yes|Matched Rewards|Count|Total|Number of Matched Rewards.|Mode, RunId|
-|NumberofSpeakerProfiles|Yes|Number of Speaker Profiles|Count|Total|Number of speaker profiles enrolled. Prorated hourly.|ApiName, FeatureName, UsageChannel, Region|
+|MatchedRewards|Yes|Matched Rewards|Count|Total|Number of Matched Rewards.|Mode, RunId|
|ObservedRewards|Yes|Observed Rewards|Count|Total|Number of Observed Rewards.|Mode, RunId|
-|ProcessedCharacters|Yes|Processed Characters|Count|Total|Number of Characters processed by Immersive Reader.|ApiName, FeatureName, UsageChannel, Region|
-|ProcessedHealthTextRecords|Yes|Processed Health Text Records|Count|Total|Number of health text records processed|ApiName, FeatureName, UsageChannel, Region|
-|ProcessedImages|Yes|Processed Images|Count|Total|Number of images processed|ApiName, FeatureName, UsageChannel, Region|
-|ProcessedPages|Yes|Processed Pages|Count|Total|Number of pages processed|ApiName, FeatureName, UsageChannel, Region|
+|ProcessedCharacters|Yes|Processed Characters|Count|Total|Number of Characters.|ApiName, FeatureName, UsageChannel, Region|
|ProcessedTextRecords|Yes|Processed Text Records|Count|Total|Count of Text Records.|ApiName, FeatureName, UsageChannel, Region|
|ServerErrors|Yes|Server Errors|Count|Total|Number of calls with service internal error (HTTP response code 5xx).|ApiName, OperationName, Region|
-|SpeakerRecognitionTransactions|Yes|Speaker Recognition Transactions|Count|Total|Number of speaker recognition transactions|ApiName, FeatureName, UsageChannel, Region|
-|SpeechModelHostingHours|Yes|Speech Model Hosting Hours|Count|Total|Number of speech model hosting hours|ApiName, FeatureName, UsageChannel, Region|
-|SpeechSessionDuration|Yes|Speech Session Duration (Deprecated)|Seconds|Total|Total duration of speech session in seconds.|ApiName, OperationName, Region|
+|SpeechSessionDuration|Yes|Speech Session Duration|Seconds|Total|Total duration of speech session in seconds.|ApiName, OperationName, Region|
|SuccessfulCalls|Yes|Successful Calls|Count|Total|Number of successful calls.|ApiName, OperationName, Region|
-|SynthesizedCharacters|Yes|Synthesized Characters|Count|Total|Number of Characters.|ApiName, FeatureName, UsageChannel, Region|
-|TextCharactersTranslated|Yes|Text Characters Translated|Count|Total|Number of characters in incoming text translation request.|ApiName, FeatureName, UsageChannel, Region|
-|TextCustomCharactersTranslated|Yes|Text Custom Characters Translated|Count|Total|Number of characters in incoming custom text translation request.|ApiName, FeatureName, UsageChannel, Region|
-|TextTrainedCharacters|Yes|Text Trained Characters|Count|Total|Number of characters trained using text translation.|ApiName, FeatureName, UsageChannel, Region|
|TotalCalls|Yes|Total Calls|Count|Total|Total number of calls.|ApiName, OperationName, Region|
|TotalErrors|Yes|Total Errors|Count|Total|Total number of calls with error response (HTTP response code 4xx or 5xx).|ApiName, OperationName, Region|
|TotalTokenCalls|Yes|Total Token Calls|Count|Total|Total number of token calls.|ApiName, OperationName, Region|
-|TotalTransactions|Yes|Total Transactions (Deprecated)|Count|Total|Total number of transactions.|No Dimensions|
-|VoiceModelHostingHours|Yes|Voice Model Hosting Hours|Count|Total|Number of Hours.|ApiName, FeatureName, UsageChannel, Region|
-|VoiceModelTrainingMinutes|Yes|Voice Model Training Minutes|Count|Total|Number of Minutes.|ApiName, FeatureName, UsageChannel, Region|
+|TotalTransactions|Yes|Total Transactions|Count|Total|Total number of transactions.|No Dimensions|
## Microsoft.Communication/CommunicationServices
|VM Cached IOPS Consumed Percentage|Yes|VM Cached IOPS Consumed Percentage|Percent|Average|Percentage of cached disk IOPS consumed by the VM|No Dimensions|
|VM Uncached Bandwidth Consumed Percentage|Yes|VM Uncached Bandwidth Consumed Percentage|Percent|Average|Percentage of uncached disk bandwidth consumed by the VM|No Dimensions|
|VM Uncached IOPS Consumed Percentage|Yes|VM Uncached IOPS Consumed Percentage|Percent|Average|Percentage of uncached disk IOPS consumed by the VM|No Dimensions|
-|VmAvailabilityMetric|Yes|VM Availability Metric (Preview)|Count|Average|Measure of Availability of Virtual machines over time. Note: This metric is previewed to only a small set of customers at the moment, as we prioritize improving data quality and consistency. As we improve our data standard, we will be rolling out this feature fleetwide in a phased manner.|No Dimensions|
-## Microsoft.Compute/virtualmachineScaleSets
+## Microsoft.Compute/virtualMachineScaleSets
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|VM Cached IOPS Consumed Percentage|Yes|VM Cached IOPS Consumed Percentage|Percent|Average|Percentage of cached disk IOPS consumed by the VM|VMName|
|VM Uncached Bandwidth Consumed Percentage|Yes|VM Uncached Bandwidth Consumed Percentage|Percent|Average|Percentage of uncached disk bandwidth consumed by the VM|VMName|
|VM Uncached IOPS Consumed Percentage|Yes|VM Uncached IOPS Consumed Percentage|Percent|Average|Percentage of uncached disk IOPS consumed by the VM|VMName|
-|VmAvailabilityMetric|Yes|VM Availability Metric (Preview)|Count|Average|Measure of Availability of Virtual machines over time. Note: This metric is previewed to only a small set of customers at the moment, as we prioritize improving data quality and consistency. As we improve our data standard, we will be rolling out this feature fleetwide in a phased manner.|VMName|
## Microsoft.Compute/virtualMachineScaleSets/virtualMachines
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|AgentPoolCPUTime|Yes|AgentPool CPU Time|Seconds|Total|AgentPool CPU Time in seconds|No Dimensions|
-|RunDuration|Yes|Run Duration|MilliSeconds|Total|Run Duration in milliseconds|No Dimensions|
-|StorageUsed|Yes|Storage used|Bytes|Average|The amount of storage used by the container registry. For a registry account, it's the sum of capacity used by all the repositories within a registry. It's sum of capacity used by shared layers, manifest files, and replica copies in each of its repositories.|Geolocation|
-|SuccessfulPullCount|Yes|Successful Pull Count|Count|Total|Number of successful image pulls|No Dimensions|
-|SuccessfulPushCount|Yes|Successful Push Count|Count|Total|Number of successful image pushes|No Dimensions|
-|TotalPullCount|Yes|Total Pull Count|Count|Total|Number of image pulls in total|No Dimensions|
-|TotalPushCount|Yes|Total Push Count|Count|Total|Number of image pushes in total|No Dimensions|
+|RunDuration|Yes|Run Duration|Milliseconds|Total|Run Duration in milliseconds|No Dimensions|
+|SuccessfulPullCount|Yes|Successful Pull Count|Count|Average|Number of successful image pulls|No Dimensions|
+|SuccessfulPushCount|Yes|Successful Push Count|Count|Average|Number of successful image pushes|No Dimensions|
+|TotalPullCount|Yes|Total Pull Count|Count|Average|Number of image pulls in total|No Dimensions|
+|TotalPushCount|Yes|Total Push Count|Count|Average|Number of image pushes in total|No Dimensions|
## Microsoft.ContainerService/managedClusters
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|apiserver_current_inflight_requests|No|Inflight Requests|Count|Average|Maximum number of currently used inflight requests on the apiserver per request kind in the last second|requestKind|
-|cluster_autoscaler_cluster_safe_to_autoscale|No|Cluster Health|Count|Average|Determines whether or not cluster autoscaler will take action on the cluster|No Dimensions|
-|cluster_autoscaler_scale_down_in_cooldown|No|Scale Down Cooldown|Count|Average|Determines if the scale down is in cooldown - No nodes will be removed during this timeframe|No Dimensions|
-|cluster_autoscaler_unneeded_nodes_count|No|Unneeded Nodes|Count|Average|Cluster autoscaler marks these nodes as candidates for deletion and eventually deletes them|No Dimensions|
-|cluster_autoscaler_unschedulable_pods_count|No|Unschedulable Pods|Count|Average|Number of pods that are currently unschedulable in the cluster|No Dimensions|
|kube_node_status_allocatable_cpu_cores|No|Total number of available cpu cores in a managed cluster|Count|Average|Total number of available cpu cores in a managed cluster|No Dimensions|
|kube_node_status_allocatable_memory_bytes|No|Total amount of available memory in a managed cluster|Bytes|Average|Total amount of available memory in a managed cluster|No Dimensions|
|kube_node_status_condition|No|Statuses for various node conditions|Count|Average|Statuses for various node conditions|condition, status, status2, node|
|kube_pod_status_phase|No|Number of pods by phase|Count|Average|Number of pods by phase|phase, namespace, pod|
|kube_pod_status_ready|No|Number of pods in Ready state|Count|Average|Number of pods in Ready state|namespace, pod, condition|
-|node_cpu_usage_millicores|Yes|CPU Usage Millicores|MilliCores|Average|Aggregated measurement of CPU utilization in millicores across the cluster|node, nodepool|
-|node_cpu_usage_percentage|Yes|CPU Usage Percentage|Percent|Average|Aggregated average CPU utilization measured in percentage across the cluster|node, nodepool|
-|node_disk_usage_bytes|Yes|Disk Used Bytes|Bytes|Average|Disk space used in bytes by device|node, nodepool, device|
-|node_disk_usage_percentage|Yes|Disk Used Percentage|Percent|Average|Disk space used in percent by device|node, nodepool, device|
-|node_memory_rss_bytes|Yes|Memory RSS Bytes|Bytes|Average|Container RSS memory used in bytes|node, nodepool|
-|node_memory_rss_percentage|Yes|Memory RSS Percentage|Percent|Average|Container RSS memory used in percent|node, nodepool|
-|node_memory_working_set_bytes|Yes|Memory Working Set Bytes|Bytes|Average|Container working set memory used in bytes|node, nodepool|
-|node_memory_working_set_percentage|Yes|Memory Working Set Percentage|Percent|Average|Container working set memory used in percent|node, nodepool|
-|node_network_in_bytes|Yes|Network In Bytes|Bytes|Average|Network received bytes|node, nodepool|
-|node_network_out_bytes|Yes|Network Out Bytes|Bytes|Average|Network transmitted bytes|node, nodepool|
## Microsoft.CustomProviders/resourceproviders
|d2c.endpoints.latency.serviceBusQueues|Yes|Routing: message latency for Service Bus Queue|Milliseconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into a Service Bus queue endpoint.|No Dimensions|
|d2c.endpoints.latency.serviceBusTopics|Yes|Routing: message latency for Service Bus Topic|Milliseconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into a Service Bus topic endpoint.|No Dimensions|
|d2c.endpoints.latency.storage|Yes|Routing: message latency for storage|Milliseconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into a storage endpoint.|No Dimensions|
-|d2c.telemetry.egress.dropped|Yes|Routing: telemetry messages dropped |Count|Total|The number of times messages were dropped by IoT Hub routing due to dead endpoints. This value does not count messages delivered to fallback route as dropped messages are not delivered there.|No Dimensions|
+|d2c.telemetry.egress.dropped|Yes|Routing: telemetry messages dropped|Count|Total|The number of times messages were dropped by IoT Hub routing due to dead endpoints. This value does not count messages delivered to fallback route as dropped messages are not delivered there.|No Dimensions|
|d2c.telemetry.egress.fallback|Yes|Routing: messages delivered to fallback|Count|Total|The number of times IoT Hub routing delivered messages to the endpoint associated with the fallback route.|No Dimensions|
|d2c.telemetry.egress.invalid|Yes|Routing: telemetry messages incompatible|Count|Total|The number of times IoT Hub routing failed to deliver messages due to an incompatibility with the endpoint. This value does not include retries.|No Dimensions|
-|d2c.telemetry.egress.orphaned|Yes|Routing: telemetry messages orphaned |Count|Total|The number of times messages were orphaned by IoT Hub routing because they didn't match any routing rules (including the fallback rule). |No Dimensions|
+|d2c.telemetry.egress.orphaned|Yes|Routing: telemetry messages orphaned|Count|Total|The number of times messages were orphaned by IoT Hub routing because they didn't match any routing rules (including the fallback rule).|No Dimensions|
|d2c.telemetry.egress.success|Yes|Routing: telemetry messages delivered|Count|Total|The number of times messages were successfully delivered to all endpoints using IoT Hub routing. If a message is routed to multiple endpoints, this value increases by one for each successful delivery. If a message is delivered to the same endpoint multiple times, this value increases by one for each successful delivery.|No Dimensions|
|d2c.telemetry.ingress.allProtocol|Yes|Telemetry message send attempts|Count|Total|Number of device-to-cloud telemetry messages attempted to be sent to your IoT hub|No Dimensions|
|d2c.telemetry.ingress.sendThrottle|Yes|Number of throttling errors|Count|Total|Number of throttling errors due to device throughput throttles|No Dimensions|
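The counting semantics described for `d2c.telemetry.egress.success` above (one increment per successful delivery, so a single message fanned out to several endpoints contributes once per endpoint) can be sketched as follows; the message IDs and endpoint names are hypothetical:

```python
def delivered_count(deliveries):
    """Count successful deliveries, one per (message, endpoint) delivery attempt.

    deliveries: iterable of (message_id, endpoint, succeeded) tuples.
    """
    return sum(1 for _msg, _endpoint, ok in deliveries if ok)

deliveries = [
    ("m1", "eventhub-a", True),   # m1 fans out to two endpoints...
    ("m1", "queue-b", True),      # ...and counts once per successful delivery
    ("m2", "queue-b", False),     # a failed delivery does not count
]
print(delivered_count(deliveries))  # 2
```

Note that the metric therefore does not equal the number of distinct messages routed when routes target multiple endpoints.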
|d2c.endpoints.egress.storage|Yes|Routing: messages delivered to storage|Count|Total|The number of times IoT Hub routing successfully delivered messages to storage endpoints.|No Dimensions|
|d2c.endpoints.egress.storage.blobs|Yes|Routing: blobs delivered to storage|Count|Total|The number of times IoT Hub routing delivered blobs to storage endpoints.|No Dimensions|
|d2c.endpoints.egress.storage.bytes|Yes|Routing: data delivered to storage|Bytes|Total|The amount of data (bytes) IoT Hub routing delivered to storage endpoints.|No Dimensions|
-|d2c.endpoints.latency.builtIn.events|Yes|Routing: message latency for messages/events|MilliSeconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into the built-in endpoint (messages/events).|No Dimensions|
-|d2c.endpoints.latency.eventHubs|Yes|Routing: message latency for Event Hub|MilliSeconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and message ingress into an Event Hub endpoint.|No Dimensions|
-|d2c.endpoints.latency.serviceBusQueues|Yes|Routing: message latency for Service Bus Queue|MilliSeconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into a Service Bus queue endpoint.|No Dimensions|
-|d2c.endpoints.latency.serviceBusTopics|Yes|Routing: message latency for Service Bus Topic|MilliSeconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into a Service Bus topic endpoint.|No Dimensions|
-|d2c.endpoints.latency.storage|Yes|Routing: message latency for storage|MilliSeconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into a storage endpoint.|No Dimensions|
-|d2c.telemetry.egress.dropped|Yes|Routing: telemetry messages dropped |Count|Total|The number of times messages were dropped by IoT Hub routing due to dead endpoints. This value does not count messages delivered to fallback route as dropped messages are not delivered there.|No Dimensions|
+|d2c.endpoints.latency.builtIn.events|Yes|Routing: message latency for messages/events|Milliseconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into the built-in endpoint (messages/events).|No Dimensions|
+|d2c.endpoints.latency.eventHubs|Yes|Routing: message latency for Event Hub|Milliseconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and message ingress into an Event Hub endpoint.|No Dimensions|
+|d2c.endpoints.latency.serviceBusQueues|Yes|Routing: message latency for Service Bus Queue|Milliseconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into a Service Bus queue endpoint.|No Dimensions|
+|d2c.endpoints.latency.serviceBusTopics|Yes|Routing: message latency for Service Bus Topic|Milliseconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into a Service Bus topic endpoint.|No Dimensions|
+|d2c.endpoints.latency.storage|Yes|Routing: message latency for storage|Milliseconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into a storage endpoint.|No Dimensions|
+|d2c.telemetry.egress.dropped|Yes|Routing: telemetry messages dropped|Count|Total|The number of times messages were dropped by IoT Hub routing due to dead endpoints. This value does not count messages delivered to fallback route as dropped messages are not delivered there.|No Dimensions|
|d2c.telemetry.egress.fallback|Yes|Routing: messages delivered to fallback|Count|Total|The number of times IoT Hub routing delivered messages to the endpoint associated with the fallback route.|No Dimensions|
|d2c.telemetry.egress.invalid|Yes|Routing: telemetry messages incompatible|Count|Total|The number of times IoT Hub routing failed to deliver messages due to an incompatibility with the endpoint. This value does not include retries.|No Dimensions|
-|d2c.telemetry.egress.orphaned|Yes|Routing: telemetry messages orphaned |Count|Total|The number of times messages were orphaned by IoT Hub routing because they didn't match any routing rules (including the fallback rule). |No Dimensions|
+|d2c.telemetry.egress.orphaned|Yes|Routing: telemetry messages orphaned|Count|Total|The number of times messages were orphaned by IoT Hub routing because they didn't match any routing rules (including the fallback rule).|No Dimensions|
|d2c.telemetry.egress.success|Yes|Routing: telemetry messages delivered|Count|Total|The number of times messages were successfully delivered to all endpoints using IoT Hub routing. If a message is routed to multiple endpoints, this value increases by one for each successful delivery. If a message is delivered to the same endpoint multiple times, this value increases by one for each successful delivery.|No Dimensions|
|d2c.telemetry.ingress.allProtocol|Yes|Telemetry message send attempts|Count|Total|Number of device-to-cloud telemetry messages attempted to be sent to your IoT hub|No Dimensions|
|d2c.telemetry.ingress.sendThrottle|Yes|Number of throttling errors|Count|Total|Number of throttling errors due to device throughput throttles|No Dimensions|
|devices.connectedDevices.allProtocol|Yes|Connected devices (deprecated)|Count|Total|Number of devices connected to your IoT hub|No Dimensions|
|devices.totalDevices|Yes|Total devices (deprecated)|Count|Total|Number of devices registered to your IoT hub|No Dimensions|
|EventGridDeliveries|Yes|Event Grid deliveries|Count|Total|The number of IoT Hub events published to Event Grid. Use the Result dimension for the number of successful and failed requests. EventType dimension shows the type of event (https://aka.ms/ioteventgrid).|Result, EventType|
-|EventGridLatency|Yes|Event Grid latency|MilliSeconds|Average|The average latency (milliseconds) from when the Iot Hub event was generated to when the event was published to Event Grid. This number is an average between all event types. Use the EventType dimension to see latency of a specific type of event.|EventType|
+|EventGridLatency|Yes|Event Grid latency|Milliseconds|Average|The average latency (milliseconds) from when the IoT Hub event was generated to when the event was published to Event Grid. This number is an average between all event types. Use the EventType dimension to see latency of a specific type of event.|EventType|
|jobs.cancelJob.failure|Yes|Failed job cancellations|Count|Total|The count of all failed calls to cancel a job.|No Dimensions|
|jobs.cancelJob.success|Yes|Successful job cancellations|Count|Total|The count of all successful calls to cancel a job.|No Dimensions|
|jobs.completed|Yes|Completed jobs|Count|Total|The count of all completed jobs.|No Dimensions|
|jobs.queryJobs.success|Yes|Successful job queries|Count|Total|The count of all successful calls to query jobs.|No Dimensions|
|RoutingDataSizeInBytesDelivered|Yes|Routing Delivery Message Size in Bytes (preview)|Bytes|Total|The total size in bytes of messages delivered by IoT hub to an endpoint. You can use the EndpointName and EndpointType dimensions to view the size of the messages in bytes delivered to your different endpoints. The metric value increases for every message delivered, including if the message is delivered to multiple endpoints or if the message is delivered to the same endpoint multiple times.|EndpointType, EndpointName, RoutingSource|
|RoutingDeliveries|Yes|Routing Deliveries (preview)|Count|Total|The number of times IoT Hub attempted to deliver messages to all endpoints using routing. To see the number of successful or failed attempts, use the Result dimension. To see the reason of failure, like invalid, dropped, or orphaned, use the FailureReasonCategory dimension. You can also use the EndpointName and EndpointType dimensions to understand how many messages were delivered to your different endpoints. The metric value increases by one for each delivery attempt, including if the message is delivered to multiple endpoints or if the message is delivered to the same endpoint multiple times.|EndpointType, EndpointName, FailureReasonCategory, Result, RoutingSource|
-|RoutingDeliveryLatency|Yes|Routing Delivery Latency (preview)|MilliSeconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into an endpoint. You can use the EndpointName and EndpointType dimensions to understand the latency to your different endpoints.|EndpointType, EndpointName, RoutingSource|
+|RoutingDeliveryLatency|Yes|Routing Delivery Latency (preview)|Milliseconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into an endpoint. You can use the EndpointName and EndpointType dimensions to understand the latency to your different endpoints.|EndpointType, EndpointName, RoutingSource|
|totalDeviceCount|No|Total devices|Count|Average|Number of devices registered to your IoT hub|No Dimensions|
|twinQueries.failure|Yes|Failed twin queries|Count|Total|The count of all failed twin queries.|No Dimensions|
|twinQueries.resultSize|Yes|Twin queries result size|Bytes|Average|The average, min, and max of the result size of all successful twin queries.|No Dimensions|
|TwinCount|Yes|Twin Count|Count|Total|Total number of twins in the Azure Digital Twins instance. Use this metric to determine if you are approaching the service limit for max number of twins allowed per instance.|No Dimensions|
-## Microsoft.DocumentDB/DatabaseAccounts
+## Microsoft.DocumentDB/databaseAccounts
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|MongoDBDatabaseUpdate|No|Mongo Database Updated|Count|Count|Mongo Database Updated|ResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest, OperationType|
|MongoRequestCharge|Yes|Mongo Request Charge|Count|Total|Mongo Request Units Consumed|DatabaseName, CollectionName, Region, CommandName, ErrorCode, Status|
|MongoRequests|Yes|Mongo Requests|Count|Count|Number of Mongo Requests Made|DatabaseName, CollectionName, Region, CommandName, ErrorCode, Status|
+|MongoRequestsCount|No|(deprecated) Mongo Request Rate|CountPerSecond|Average|Mongo request Count per second|DatabaseName, CollectionName, Region, ErrorCode|
+|MongoRequestsDelete|No|(deprecated) Mongo Delete Request Rate|CountPerSecond|Average|Mongo Delete request per second|DatabaseName, CollectionName, Region, ErrorCode|
+|MongoRequestsInsert|No|(deprecated) Mongo Insert Request Rate|CountPerSecond|Average|Mongo Insert count per second|DatabaseName, CollectionName, Region, ErrorCode|
+|MongoRequestsQuery|No|(deprecated) Mongo Query Request Rate|CountPerSecond|Average|Mongo Query request per second|DatabaseName, CollectionName, Region, ErrorCode|
+|MongoRequestsUpdate|No|(deprecated) Mongo Update Request Rate|CountPerSecond|Average|Mongo Update request per second|DatabaseName, CollectionName, Region, ErrorCode|
|NormalizedRUConsumption|No|Normalized RU Consumption|Percent|Maximum|Max RU consumption percentage per minute|CollectionName, DatabaseName, Region, PartitionKeyRangeId|
|ProvisionedThroughput|No|Provisioned Throughput|Count|Maximum|Provisioned Throughput|DatabaseName, CollectionName|
|RegionFailover|Yes|Region Failed Over|Count|Count|Region Failed Over|No Dimensions|
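As a rough illustration of how a normalized consumption percentage such as `NormalizedRUConsumption` above relates consumed to provisioned throughput (a simplified sketch, not the service's exact computation, which is evaluated per partition key range per the PartitionKeyRangeId dimension):

```python
def normalized_consumption(consumed_per_second, provisioned_rus):
    """Max per-second utilization over the window, as a percent of provisioned RUs."""
    return max(consumed_per_second) / provisioned_rus * 100

# RUs consumed each second over a (hypothetical) window, against 500 provisioned RU/s:
print(normalized_consumption([120, 300, 450, 90], 500))  # 90.0
```

A reading near 100% in any second of the minute indicates the workload is at risk of throttling even if the average consumption looks low.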
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|ActiveConnections|No|ActiveConnections|Count|Average|Total Active Connections for Microsoft.EventHub.|No Dimensions|
+|ActiveConnections|No|ActiveConnections|Count|Maximum|Total Active Connections for Microsoft.EventHub.|No Dimensions|
|AvailableMemory|No|Available Memory|Percent|Maximum|Available memory for the Event Hub Cluster as a percentage of total memory.|Role|
|CaptureBacklog|No|Capture Backlog.|Count|Total|Capture Backlog for Microsoft.EventHub.|No Dimensions|
|CapturedBytes|No|Captured Bytes.|Bytes|Total|Captured Bytes for Microsoft.EventHub.|No Dimensions|
|CapturedMessages|No|Captured Messages.|Count|Total|Captured Messages for Microsoft.EventHub.|No Dimensions|
-|ConnectionsClosed|No|Connections Closed.|Count|Average|Connections Closed for Microsoft.EventHub.|No Dimensions|
-|ConnectionsOpened|No|Connections Opened.|Count|Average|Connections Opened for Microsoft.EventHub.|No Dimensions|
+|ConnectionsClosed|No|Connections Closed.|Count|Maximum|Connections Closed for Microsoft.EventHub.|No Dimensions|
+|ConnectionsOpened|No|Connections Opened.|Count|Maximum|Connections Opened for Microsoft.EventHub.|No Dimensions|
|CPU|No|CPU|Percent|Maximum|CPU utilization for the Event Hub Cluster as a percentage|Role|
|IncomingBytes|Yes|Incoming Bytes.|Bytes|Total|Incoming Bytes for Microsoft.EventHub.|No Dimensions|
|IncomingMessages|Yes|Incoming Messages|Count|Total|Incoming Messages for Microsoft.EventHub.|No Dimensions|
|IncomingRequests|Yes|Incoming Requests|Count|Total|Incoming Requests for Microsoft.EventHub.|No Dimensions|
|OutgoingBytes|Yes|Outgoing Bytes.|Bytes|Total|Outgoing Bytes for Microsoft.EventHub.|No Dimensions|
|OutgoingMessages|Yes|Outgoing Messages|Count|Total|Outgoing Messages for Microsoft.EventHub.|No Dimensions|
-|QuotaExceededErrors|No|Quota Exceeded Errors.|Count|Total|Quota Exceeded Errors for Microsoft.EventHub.|OperationResult|
-|ServerErrors|No|Server Errors.|Count|Total|Server Errors for Microsoft.EventHub.|OperationResult|
+|QuotaExceededErrors|No|Quota Exceeded Errors.|Count|Total|Quota Exceeded Errors for Microsoft.EventHub.|No Dimensions|
+|ServerErrors|No|Server Errors.|Count|Total|Server Errors for Microsoft.EventHub.|No Dimensions|
|Size|No|Size|Bytes|Average|Size of an EventHub in Bytes.|Role|
-|SuccessfulRequests|No|Successful Requests|Count|Total|Successful Requests for Microsoft.EventHub.|OperationResult|
-|ThrottledRequests|No|Throttled Requests.|Count|Total|Throttled Requests for Microsoft.EventHub.|OperationResult|
-|UserErrors|No|User Errors.|Count|Total|User Errors for Microsoft.EventHub.|OperationResult|
+|SuccessfulRequests|No|Successful Requests|Count|Total|Successful Requests for Microsoft.EventHub.|No Dimensions|
+|ThrottledRequests|No|Throttled Requests.|Count|Total|Throttled Requests for Microsoft.EventHub.|No Dimensions|
+|UserErrors|No|User Errors.|Count|Total|User Errors for Microsoft.EventHub.|No Dimensions|
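Every metric table in this reference shares the same seven-column row layout (Metric, Exportable, Display Name, Unit, Aggregation Type, Description, Dimensions). As a minimal illustration only, and not part of any Azure tooling, a row can be split into named fields; the key names below are invented for the sketch:

```python
def parse_metric_row(row: str) -> dict:
    """Split one pipe-delimited metrics-table row into its seven fields."""
    cells = [c.strip() for c in row.strip().strip("|").split("|")]
    keys = ["metric", "exportable", "display_name", "unit",
            "aggregation", "description", "dimensions"]
    record = dict(zip(keys, cells))
    # "No Dimensions" means the metric carries no dimension filters.
    record["dimensions"] = (
        [] if record["dimensions"] == "No Dimensions"
        else [d.strip() for d in record["dimensions"].split(",") if d.strip()]
    )
    return record

row = "|ActiveConnections|No|ActiveConnections|Count|Maximum|Total Active Connections for Microsoft.EventHub.|No Dimensions|"
print(parse_metric_row(row)["aggregation"])  # Maximum
```

This assumes descriptions never contain a literal pipe character, which holds for the rows above.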
## Microsoft.EventHub/namespaces

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|ActiveConnections|No|ActiveConnections|Count|Average|Total Active Connections for Microsoft.EventHub.|No Dimensions|
+|ActiveConnections|No|ActiveConnections|Count|Maximum|Total Active Connections for Microsoft.EventHub.|No Dimensions|
|CaptureBacklog|No|Capture Backlog.|Count|Total|Capture Backlog for Microsoft.EventHub.|EntityName|
|CapturedBytes|No|Captured Bytes.|Bytes|Total|Captured Bytes for Microsoft.EventHub.|EntityName|
|CapturedMessages|No|Captured Messages.|Count|Total|Captured Messages for Microsoft.EventHub.|EntityName|
-|ConnectionsClosed|No|Connections Closed.|Count|Average|Connections Closed for Microsoft.EventHub.|EntityName|
-|ConnectionsOpened|No|Connections Opened.|Count|Average|Connections Opened for Microsoft.EventHub.|EntityName|
+|ConnectionsClosed|No|Connections Closed.|Count|Maximum|Connections Closed for Microsoft.EventHub.|EntityName|
+|ConnectionsOpened|No|Connections Opened.|Count|Maximum|Connections Opened for Microsoft.EventHub.|EntityName|
|EHABL|Yes|Archive backlog messages (Deprecated)|Count|Total|Event Hub archive messages in backlog for a namespace (Deprecated)|No Dimensions|
|EHAMBS|Yes|Archive message throughput (Deprecated)|Bytes|Total|Event Hub archived message throughput in a namespace (Deprecated)|No Dimensions|
|EHAMSGS|Yes|Archive messages (Deprecated)|Count|Total|Event Hub archived messages in a namespace (Deprecated)|No Dimensions|
|INREQS|Yes|Incoming Requests (Deprecated)|Count|Total|Total incoming send requests for a namespace (Deprecated)|No Dimensions|
|INTERR|Yes|Internal Server Errors (Deprecated)|Count|Total|Total internal server errors for a namespace (Deprecated)|No Dimensions|
|MISCERR|Yes|Other Errors (Deprecated)|Count|Total|Total failed requests for a namespace (Deprecated)|No Dimensions|
+|NamespaceCpuUsage|No|CPU|Percent|Maximum|CPU usage metric for Premium SKU namespaces.|No Dimensions|
+|NamespaceMemoryUsage|No|Memory Usage|Percent|Maximum|Memory usage metric for Premium SKU namespaces.|No Dimensions|
|OutgoingBytes|Yes|Outgoing Bytes.|Bytes|Total|Outgoing Bytes for Microsoft.EventHub.|EntityName|
|OutgoingMessages|Yes|Outgoing Messages|Count|Total|Outgoing Messages for Microsoft.EventHub.|EntityName|
|OUTMSGS|Yes|Outgoing Messages (obsolete) (Deprecated)|Count|Total|Total outgoing messages for a namespace. This metric is deprecated. Please use Outgoing Messages metric instead (Deprecated)|No Dimensions|
-|QuotaExceededErrors|No|Quota Exceeded Errors.|Count|Total|Quota Exceeded Errors for Microsoft.EventHub.|EntityName, OperationResult|
-|ServerErrors|No|Server Errors.|Count|Total|Server Errors for Microsoft.EventHub.|EntityName, OperationResult|
+|QuotaExceededErrors|No|Quota Exceeded Errors.|Count|Total|Quota Exceeded Errors for Microsoft.EventHub.|EntityName|
+|ServerErrors|No|Server Errors.|Count|Total|Server Errors for Microsoft.EventHub.|EntityName|
|Size|No|Size|Bytes|Average|Size of an EventHub in Bytes.|EntityName|
-|SuccessfulRequests|No|Successful Requests|Count|Total|Successful Requests for Microsoft.EventHub.|EntityName, OperationResult|
+|SuccessfulRequests|No|Successful Requests|Count|Total|Successful Requests for Microsoft.EventHub.|EntityName|
|SUCCREQ|Yes|Successful Requests (Deprecated)|Count|Total|Total successful requests for a namespace (Deprecated)|No Dimensions|
|SVRBSY|Yes|Server Busy Errors (Deprecated)|Count|Total|Total server busy errors for a namespace (Deprecated)|No Dimensions|
-|ThrottledRequests|No|Throttled Requests.|Count|Total|Throttled Requests for Microsoft.EventHub.|EntityName, OperationResult|
-|UserErrors|No|User Errors.|Count|Total|User Errors for Microsoft.EventHub.|EntityName, OperationResult|
+|ThrottledRequests|No|Throttled Requests.|Count|Total|Throttled Requests for Microsoft.EventHub.|EntityName|
+|UserErrors|No|User Errors.|Count|Total|User Errors for Microsoft.EventHub.|EntityName|
## Microsoft.HDInsight/clusters
|provisionedDeviceCount|No|Total Provisioned Devices|Count|Average|Number of devices provisioned in IoT Central application|No Dimensions|
-## microsoft.keyvault/managedhsms
+## Microsoft.KeyVault/managedHSMs
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|Availability|No|Overall Service Availability|Percent|Average|Service requests availability|ActivityType, ActivityName, StatusCode, StatusCodeClass|
+|Availability|No|Overall Vault Availability|Percent|Average|Vault requests availability|ActivityType, ActivityName, StatusCode, StatusCodeClass|
|ServiceApiHit|Yes|Total Service Api Hits|Count|Count|Number of total service api hits|ActivityType, ActivityName|
|ServiceApiLatency|No|Overall Service Api Latency|Milliseconds|Average|Overall latency of service api requests|ActivityType, ActivityName, StatusCode, StatusCodeClass|
+|ServiceApiResult|Yes|Total Service Api Results|Count|Count|Gets the available metrics for a Managed HSM pool|ActivityType, ActivityName, StatusCode, StatusCodeClass|
## Microsoft.KeyVault/vaults
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|Availability|Yes|Overall Vault Availability|Percent|Average|Vault requests availability|ActivityType, ActivityName, StatusCode, StatusCodeClass|
|SaturationShoebox|No|Overall Vault Saturation|Percent|Average|Vault capacity used|ActivityType, ActivityName, TransactionType|
|ServiceApiHit|Yes|Total Service Api Hits|Count|Count|Number of total service api hits|ActivityType, ActivityName|
-|ServiceApiLatency|Yes|Overall Service Api Latency|MilliSeconds|Average|Overall latency of service api requests|ActivityType, ActivityName, StatusCode, StatusCodeClass|
+|ServiceApiLatency|Yes|Overall Service Api Latency|Milliseconds|Average|Overall latency of service api requests|ActivityType, ActivityName, StatusCode, StatusCodeClass|
|ServiceApiResult|Yes|Total Service Api Results|Count|Count|Number of total service api results|ActivityType, ActivityName, StatusCode, StatusCodeClass|
|CacheUtilization|Yes|Cache utilization|Percent|Average|Utilization level in the cluster scope|No Dimensions|
|CacheUtilizationFactor|Yes|Cache utilization factor|Percent|Average|Percentage difference between the current number of instances and the optimal number of instances (per cache utilization)|No Dimensions|
|ContinuousExportMaxLatenessMinutes|Yes|Continuous Export Max Lateness|Count|Maximum|The lateness (in minutes) reported by the continuous export jobs in the cluster|No Dimensions|
-|ContinuousExportNumOfRecordsExported|Yes|Continuous export ΓÇô num of exported records|Count|Total|Number of records exported, fired for every storage artifact written during the export operation|ContinuousExportName, Database|
+|ContinuousExportNumOfRecordsExported|Yes|Continuous export - num of exported records|Count|Total|Number of records exported, fired for every storage artifact written during the export operation|ContinuousExportName, Database|
|ContinuousExportPendingCount|Yes|Continuous Export Pending Count|Count|Maximum|The number of pending continuous export jobs ready for execution|No Dimensions|
|ContinuousExportResult|Yes|Continuous Export Result|Count|Count|Indicates whether Continuous Export succeeded or failed|ContinuousExportName, Result, Database|
|CPU|Yes|CPU|Percent|Average|CPU utilization level|No Dimensions|
|SteamingIngestRequestRate|Yes|Streaming Ingest Request Rate|Count|RateRequestsPerSecond|Streaming ingest request rate (requests per second)|No Dimensions|
|StreamingIngestDataRate|Yes|Streaming Ingest Data Rate|Count|Average|Streaming ingest data rate (MB per second)|No Dimensions|
|StreamingIngestDuration|Yes|Streaming Ingest Duration|Milliseconds|Average|Streaming ingest duration in milliseconds|No Dimensions|
-|StreamingIngestResults|Yes|Streaming Ingest Result|Count|Average|Streaming ingest result|Result|
+|StreamingIngestResults|Yes|Streaming Ingest Result|Count|Count|Streaming ingest result|Result|
|TotalNumberOfConcurrentQueries|Yes|Total number of concurrent queries|Count|Maximum|Total number of concurrent queries|No Dimensions|
-|TotalNumberOfExtents|Yes|Total number of extents|Count|Total|Total number of data extents|No Dimensions|
+|TotalNumberOfExtents|Yes|Total number of extents|Count|Average|Total number of data extents|No Dimensions|
|TotalNumberOfThrottledCommands|Yes|Total number of throttled commands|Count|Total|Total number of throttled commands|CommandType|
|TotalNumberOfThrottledQueries|Yes|Total number of throttled queries|Count|Maximum|Total number of throttled queries|No Dimensions|
|WeakConsistencyLatency|Yes|Weak consistency latency|Seconds|Average|The max latency between the previous metadata sync and the next one (in DB/node scope)|Database, RoleInstance|
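Several of the changes above swap a metric's aggregation type (for example, Average to Maximum, or Total to Average). As a toy illustration of what the Aggregation Type column means, the same raw samples collapse differently under each aggregation; the sample values below are invented:

```python
# Invented per-minute samples for some hypothetical metric.
samples = [3, 7, 7, 5]

# How the supported aggregation types summarize the same samples.
aggregations = {
    "Average": sum(samples) / len(samples),  # mean of the samples
    "Maximum": max(samples),                 # largest sample in the window
    "Total": sum(samples),                   # sum over the window
    "Count": len(samples),                   # number of samples recorded
}
print(aggregations["Maximum"], aggregations["Total"])  # 7 22
```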
-## Microsoft.Logic/IntegrationServiceEnvironments
+## Microsoft.Logic/integrationServiceEnvironments
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|ActionsStarted|Yes|Actions Started |Count|Total|Number of workflow actions started.|No Dimensions|
|ActionsSucceeded|Yes|Actions Succeeded |Count|Total|Number of workflow actions succeeded.|No Dimensions|
|ActionSuccessLatency|Yes|Action Success Latency |Seconds|Average|Latency of succeeded workflow actions.|No Dimensions|
+|ActionThrottledEvents|Yes|Action Throttled Events|Count|Total|Number of workflow action throttled events.|No Dimensions|
|IntegrationServiceEnvironmentConnectorMemoryUsage|Yes|Connector Memory Usage for Integration Service Environment|Percent|Average|Connector memory usage for integration service environment.|No Dimensions|
|IntegrationServiceEnvironmentConnectorProcessorUsage|Yes|Connector Processor Usage for Integration Service Environment|Percent|Average|Connector processor usage for integration service environment.|No Dimensions|
|IntegrationServiceEnvironmentWorkflowMemoryUsage|Yes|Workflow Memory Usage for Integration Service Environment|Percent|Average|Workflow memory usage for integration service environment.|No Dimensions|
|IntegrationServiceEnvironmentWorkflowProcessorUsage|Yes|Workflow Processor Usage for Integration Service Environment|Percent|Average|Workflow processor usage for integration service environment.|No Dimensions|
+|RunFailurePercentage|Yes|Run Failure Percentage|Percent|Total|Percentage of workflow runs failed.|No Dimensions|
|RunLatency|Yes|Run Latency|Seconds|Average|Latency of completed workflow runs.|No Dimensions|
|RunsCancelled|Yes|Runs Cancelled|Count|Total|Number of workflow runs cancelled.|No Dimensions|
|RunsCompleted|Yes|Runs Completed|Count|Total|Number of workflow runs completed.|No Dimensions|
|RunsFailed|Yes|Runs Failed|Count|Total|Number of workflow runs failed.|No Dimensions|
|RunsStarted|Yes|Runs Started|Count|Total|Number of workflow runs started.|No Dimensions|
|RunsSucceeded|Yes|Runs Succeeded|Count|Total|Number of workflow runs succeeded.|No Dimensions|
+|RunStartThrottledEvents|Yes|Run Start Throttled Events|Count|Total|Number of workflow run start throttled events.|No Dimensions|
|RunSuccessLatency|Yes|Run Success Latency|Seconds|Average|Latency of succeeded workflow runs.|No Dimensions|
+|RunThrottledEvents|Yes|Run Throttled Events|Count|Total|Number of workflow action or trigger throttled events.|No Dimensions|
|TriggerFireLatency|Yes|Trigger Fire Latency |Seconds|Average|Latency of fired workflow triggers.|No Dimensions|
|TriggerLatency|Yes|Trigger Latency |Seconds|Average|Latency of completed workflow triggers.|No Dimensions|
|TriggersCompleted|Yes|Triggers Completed |Count|Total|Number of workflow triggers completed.|No Dimensions|
|TriggersStarted|Yes|Triggers Started |Count|Total|Number of workflow triggers started.|No Dimensions|
|TriggersSucceeded|Yes|Triggers Succeeded |Count|Total|Number of workflow triggers succeeded.|No Dimensions|
|TriggerSuccessLatency|Yes|Trigger Success Latency |Seconds|Average|Latency of succeeded workflow triggers.|No Dimensions|
+|TriggerThrottledEvents|Yes|Trigger Throttled Events|Count|Total|Number of workflow trigger throttled events.|No Dimensions|
-## Microsoft.Logic/Workflows
+## Microsoft.Logic/workflows
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|ContentKeyPolicyCount|Yes|Content Key Policy count|Count|Average|How many content key policies are already created in current media service account|No Dimensions|
|ContentKeyPolicyQuota|Yes|Content Key Policy quota|Count|Average|How many content key policies are allowed for current media service account|No Dimensions|
|ContentKeyPolicyQuotaUsedPercentage|Yes|Content Key Policy quota used percentage|Percent|Average|Content Key Policy used percentage in current media service account|No Dimensions|
-|JobQuota|Yes|Job quota|Count|Average|The Job quota for the current media service account.|No Dimensions|
-|JobsScheduled|Yes|Jobs Scheduled|Count|Average|The number of Jobs in the Scheduled state. Counts on this metric only reflect jobs submitted through the v3 API. Jobs submitted through the v2 (Legacy) API are not counted.|No Dimensions|
|MaxChannelsAndLiveEventsCount|Yes|Max live event quota|Count|Average|The maximum number of live events allowed in the current media services account|No Dimensions|
|MaxRunningChannelsAndLiveEventsCount|Yes|Max running live event quota|Count|Average|The maximum number of running live events allowed in the current media services account|No Dimensions|
|RunningChannelsAndLiveEventsCount|Yes|Running live event count|Count|Average|The total number of running live events in the current media services account|No Dimensions|
|StreamingPolicyCount|Yes|Streaming Policy count|Count|Average|How many streaming policies are already created in current media service account|No Dimensions|
|StreamingPolicyQuota|Yes|Streaming Policy quota|Count|Average|How many streaming policies are allowed for current media service account|No Dimensions|
|StreamingPolicyQuotaUsedPercentage|Yes|Streaming Policy quota used percentage|Percent|Average|Streaming Policy used percentage in current media service account|No Dimensions|
-|TransformQuota|Yes|Transform quota|Count|Average|The Transform quota for the current media service account.|No Dimensions|
## Microsoft.Media/mediaservices/liveEvents
|XregionReplicationTotalTransferBytes|Yes|Volume replication total transfer|Bytes|Average|Cumulative bytes transferred for the relationship.|No Dimensions|
-## Microsoft.Network/applicationgateways
+## Microsoft.Network/applicationGateways
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|BackendFirstByteResponseTime|No|Backend First Byte Response Time|MilliSeconds|Average|Time interval between start of establishing a connection to backend server and receiving the first byte of the response header, approximating processing time of backend server|Listener, BackendServer, BackendPool, BackendHttpSetting|
|BackendLastByteResponseTime|No|Backend Last Byte Response Time|MilliSeconds|Average|Time interval between start of establishing a connection to backend server and receiving the last byte of the response body|Listener, BackendServer, BackendPool, BackendHttpSetting|
|BackendResponseStatus|Yes|Backend Response Status|Count|Total|The number of HTTP response codes generated by the backend members. This does not include any response codes generated by the Application Gateway.|BackendServer, BackendPool, BackendHttpSetting, HttpStatusGroup|
-|BackendTlsNegotiationError|Yes|Backend TLS Connection Errors|Count|Total|TLS Connection Errors for Application Gateway Backend|BackendHttpSetting, BackendPool, ErrorType|
|BlockedCount|Yes|Web Application Firewall Blocked Requests Rule Distribution|Count|Total|Web Application Firewall blocked requests rule distribution|RuleGroup, RuleId|
|BlockedReqCount|Yes|Web Application Firewall Blocked Requests Count|Count|Total|Web Application Firewall blocked requests count|No Dimensions|
|BytesReceived|Yes|Bytes Received|Bytes|Total|The total number of bytes received by the Application Gateway from the clients|Listener|
|HealthyHostCount|Yes|Healthy Host Count|Count|Average|Number of healthy backend hosts|BackendSettingsPool|
|MatchedCount|Yes|Web Application Firewall Total Rule Distribution|Count|Total|Web Application Firewall Total Rule Distribution for the incoming traffic|RuleGroup, RuleId|
|NewConnectionsPerSecond|No|New connections per second|CountPerSecond|Average|New connections per second established with Application Gateway|No Dimensions|
-|RejectedConnections|Yes|Rejected Connections|Count|Total|Count of rejected connections for Application Gateway Frontend|No Dimensions|
|ResponseStatus|Yes|Response Status|Count|Total|Http response status returned by Application Gateway|HttpStatusGroup|
|Throughput|No|Throughput|BytesPerSecond|Average|Number of bytes per second the Application Gateway has served|No Dimensions|
|TlsProtocol|Yes|Client TLS Protocol|Count|Total|The number of TLS and non-TLS requests initiated by the client that established connection with the Application Gateway. To view TLS protocol distribution, filter by the dimension TLS Protocol.|Listener, TlsProtocol|
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|ErGatewayConnectionBitsInPerSecond|No|BitsInPerSecond|BitsPerSecond|Average|Bits ingressing Azure per second|ConnectionName|
|ErGatewayConnectionBitsOutPerSecond|No|BitsOutPerSecond|BitsPerSecond|Average|Bits egressing Azure per second|ConnectionName|
-|ExpressRouteGatewayCountOfRoutesAdvertisedToPeer|Yes|Count Of Routes Advertised to Peer(Preview)|Count|Maximum|Count Of Routes Advertised To Peer by ExpressRouteGateway|roleInstance|
-|ExpressRouteGatewayCountOfRoutesLearnedFromPeer|Yes|Count Of Routes Learned from Peer (Preview)|Count|Maximum|Count Of Routes Learned From Peer by ExpressRouteGateway|roleInstance|
+|ExpressRouteGatewayCountOfRoutesAdvertisedToPeer|Yes|Count Of Routes Advertised to Peer|Count|Maximum|Count Of Routes Advertised To Peer by ExpressRouteGateway|roleInstance|
+|ExpressRouteGatewayCountOfRoutesLearnedFromPeer|Yes|Count Of Routes Learned from Peer|Count|Maximum|Count Of Routes Learned From Peer by ExpressRouteGateway|roleInstance|
|ExpressRouteGatewayCpuUtilization|Yes|CPU utilization|Percent|Average|CPU Utilization of the ExpressRoute Gateway|roleInstance|
-|ExpressRouteGatewayFrequencyOfRoutesChanged|No|Frequency of Routes change (Preview)|Count|Total|Frequency of Routes change in ExpressRoute Gateway|roleInstance|
-|ExpressRouteGatewayNumberOfVmInVnet|No|Number of VMs in the Virtual Network(Preview)|Count|Maximum|Number of VMs in the Virtual Network|No Dimensions|
+|ExpressRouteGatewayFrequencyOfRoutesChanged|No|Frequency of Routes change|Count|Total|Frequency of Routes change in ExpressRoute Gateway|roleInstance|
+|ExpressRouteGatewayNumberOfVmInVnet|No|Number of VMs in the Virtual Network|Count|Maximum|Number of VMs in the Virtual Network|No Dimensions|
|ExpressRouteGatewayPacketsPerSecond|No|Packets per second|CountPerSecond|Average|Packet count of ExpressRoute Gateway|roleInstance|
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|AllocatedSnatPorts|No|Allocated SNAT Ports|Count|Average|Total number of SNAT ports allocated within time period|FrontendIPAddress, BackendIPAddress, ProtocolType, IsAwaitingRemoval|
+|AllocatedSnatPorts|No|Allocated SNAT Ports|Count|Average|Total number of SNAT ports allocated within time period|FrontendIPAddress, BackendIPAddress, ProtocolType|
|ByteCount|Yes|Byte Count|Bytes|Total|Total number of Bytes transmitted within time period|FrontendIPAddress, FrontendPort, Direction|
|DipAvailability|Yes|Health Probe Status|Count|Average|Average Load Balancer health probe status per time duration|ProtocolType, BackendPort, FrontendIPAddress, FrontendPort, BackendIPAddress|
|PacketCount|Yes|Packet Count|Count|Total|Total number of Packets transmitted within time period|FrontendIPAddress, FrontendPort, Direction|
|SnatConnectionCount|Yes|SNAT Connection Count|Count|Total|Total number of new SNAT connections created within time period|FrontendIPAddress, BackendIPAddress, ConnectionState|
|SYNCount|Yes|SYN Count|Count|Total|Total number of SYN Packets transmitted within time period|FrontendIPAddress, FrontendPort, Direction|
-|UsedSnatPorts|No|Used SNAT Ports|Count|Average|Total number of SNAT ports used within time period|FrontendIPAddress, BackendIPAddress, ProtocolType, IsAwaitingRemoval|
+|UsedSnatPorts|No|Used SNAT Ports|Count|Average|Total number of SNAT ports used within time period|FrontendIPAddress, BackendIPAddress, ProtocolType|
|VipAvailability|Yes|Data Path Availability|Count|Average|Average Load Balancer data path availability per time duration|FrontendIPAddress, FrontendPort|
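The Dimensions column lists the dimension names a metric can be filtered or split on when you query it. The Azure Monitor metrics REST API accepts these as an OData-style `$filter` expression; the helper and the dimension values below are illustrative sketches, not an official SDK call:

```python
def build_dimension_filter(dimensions: dict) -> str:
    """Join dimension name/value pairs into an OData-style $filter string."""
    return " and ".join(f"{name} eq '{value}'" for name, value in dimensions.items())

# For example, scoping the load balancer ByteCount metric to one frontend
# and direction (the IP address here is made up):
flt = build_dimension_filter({"FrontendIPAddress": "10.0.0.4", "Direction": "In"})
print(flt)  # FrontendIPAddress eq '10.0.0.4' and Direction eq 'In'
```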
|TestResult|Yes|Test Result|Count|Average|Connection monitor test result|SourceAddress, SourceName, SourceResourceId, SourceType, Protocol, DestinationAddress, DestinationName, DestinationResourceId, DestinationType, DestinationPort, TestGroupName, TestConfigurationName, TestResultCriterion, SourceIP, DestinationIP, SourceSubnet, DestinationSubnet|
-## microsoft.network/p2svpngateways
+## Microsoft.Network/p2sVpnGateways
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|P2SBandwidth|Yes|Gateway P2S Bandwidth|BytesPerSecond|Average|Point-to-site bandwidth of a gateway in bytes per second|Instance|
+|P2SBandwidth|Yes|Gateway P2S Bandwidth|BytesPerSecond|Average|Average point-to-site bandwidth of a gateway in bytes per second|Instance|
|P2SConnectionCount|Yes|P2S Connection Count|Count|Total|Point-to-site connection count of a gateway|Protocol, Instance|
-|UserVpnRouteCount|No|User Vpn Route Count|Count|Total|Count of P2S User Vpn routes learned by gateway|RouteType, Instance|
## Microsoft.Network/privateDnsZones
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|PEBytesIn|Yes|Bytes In|Count|Total|Total number of Bytes In|No Dimensions|
+|PEBytesIn|Yes|Bytes In|Count|Total|Total number of Bytes In|No Dimensions|
|PEBytesOut|Yes|Bytes Out|Count|Total|Total number of Bytes Out|No Dimensions|
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|PLSBytesIn|Yes|Bytes In|Count|Total|Total number of Bytes In|PrivateLinkServiceId|
+|PLSBytesIn|Yes|Bytes In|Count|Total|Total number of Bytes In|PrivateLinkServiceId|
|PLSBytesOut|Yes|Bytes Out|Count|Total|Total number of Bytes Out|PrivateLinkServiceId|
|PLSNatPortsUsage|Yes|Nat Ports Usage|Percent|Average|Nat Ports Usage|PrivateLinkServiceId, PrivateLinkServiceIPAddress|
|CountOfRoutesLearnedFromPeer|No|Count Of Routes Learned From Peer|Count|Maximum|Total number of routes learned from peer|routeserviceinstance, bgppeerip, bgppeertype|
-## microsoft.network/virtualnetworkgateways
+## Microsoft.Network/virtualNetworkGateways
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|AverageBandwidth|Yes|Gateway S2S Bandwidth|BytesPerSecond|Average|Site-to-site bandwidth of a gateway in bytes per second|Instance|
-|BgpPeerStatus|No|BGP Peer Status|Count|Average|Status of BGP peer|BgpPeerAddress, Instance|
-|BgpRoutesAdvertised|Yes|BGP Routes Advertised|Count|Total|Count of Bgp Routes Advertised through tunnel|BgpPeerAddress, Instance|
-|BgpRoutesLearned|Yes|BGP Routes Learned|Count|Total|Count of Bgp Routes Learned through tunnel|BgpPeerAddress, Instance|
-|ExpressRouteGatewayCountOfRoutesAdvertisedToPeer|Yes|Count Of Routes Advertised to Peer(Preview)|Count|Maximum|Count Of Routes Advertised To Peer by ExpressRouteGateway|roleInstance|
-|ExpressRouteGatewayCountOfRoutesLearnedFromPeer|Yes|Count Of Routes Learned from Peer (Preview)|Count|Maximum|Count Of Routes Learned From Peer by ExpressRouteGateway|roleInstance|
+|AverageBandwidth|Yes|Gateway S2S Bandwidth|BytesPerSecond|Average|Average site-to-site bandwidth of a gateway in bytes per second|Instance|
+|ExpressRouteGatewayCountOfRoutesAdvertisedToPeer|Yes|Count Of Routes Advertised to Peer|Count|Maximum|Count Of Routes Advertised To Peer by ExpressRouteGateway|roleInstance|
+|ExpressRouteGatewayCountOfRoutesLearnedFromPeer|Yes|Count Of Routes Learned from Peer|Count|Maximum|Count Of Routes Learned From Peer by ExpressRouteGateway|roleInstance|
|ExpressRouteGatewayCpuUtilization|Yes|CPU utilization|Percent|Average|CPU Utilization of the ExpressRoute Gateway|roleInstance|
-|ExpressRouteGatewayFrequencyOfRoutesChanged|No|Frequency of Routes change (Preview)|Count|Total|Frequency of Routes change in ExpressRoute Gateway|roleInstance|
-|ExpressRouteGatewayNumberOfVmInVnet|No|Number of VMs in the Virtual Network (Preview)|Count|Maximum|Number of VMs in the Virtual Network|roleInstance|
+|ExpressRouteGatewayFrequencyOfRoutesChanged|No|Frequency of Routes change|Count|Total|Frequency of Routes change in ExpressRoute Gateway|roleInstance|
+|ExpressRouteGatewayNumberOfVmInVnet|No|Number of VMs in the Virtual Network|Count|Maximum|Number of VMs in the Virtual Network|No Dimensions|
|ExpressRouteGatewayPacketsPerSecond|No|Packets per second|CountPerSecond|Average|Packet count of ExpressRoute Gateway|roleInstance|
-|MmsaCount|Yes|Tunnel MMSA Count|Count|Total|MMSA Count|ConnectionName, RemoteIP, Instance|
-|P2SBandwidth|Yes|Gateway P2S Bandwidth|BytesPerSecond|Average|Point-to-site bandwidth of a gateway in bytes per second|Instance|
-|P2SConnectionCount|Yes|P2S Connection Count|Count|Total|Point-to-site connection count of a gateway|Protocol, Instance|
-|QmsaCount|Yes|Tunnel QMSA Count|Count|Total|QMSA Count|ConnectionName, RemoteIP, Instance|
+|P2SBandwidth|Yes|Gateway P2S Bandwidth|BytesPerSecond|Average|Average point-to-site bandwidth of a gateway in bytes per second|Instance|
+|P2SConnectionCount|Yes|P2S Connection Count|Count|Maximum|Point-to-site connection count of a gateway|Protocol, Instance|
|TunnelAverageBandwidth|Yes|Tunnel Bandwidth|BytesPerSecond|Average|Average bandwidth of a tunnel in bytes per second|ConnectionName, RemoteIP, Instance|
|TunnelEgressBytes|Yes|Tunnel Egress Bytes|Bytes|Total|Outgoing bytes of a tunnel|ConnectionName, RemoteIP, Instance|
-|TunnelEgressPacketDropCount|Yes|Tunnel Egress Packet Drop Count|Count|Total|Count of outgoing packets dropped by tunnel|ConnectionName, RemoteIP, Instance|
|TunnelEgressPacketDropTSMismatch|Yes|Tunnel Egress TS Mismatch Packet Drop|Count|Total|Outgoing packet drop count from traffic selector mismatch of a tunnel|ConnectionName, RemoteIP, Instance|
|TunnelEgressPackets|Yes|Tunnel Egress Packets|Count|Total|Outgoing packet count of a tunnel|ConnectionName, RemoteIP, Instance|
|TunnelIngressBytes|Yes|Tunnel Ingress Bytes|Bytes|Total|Incoming bytes of a tunnel|ConnectionName, RemoteIP, Instance|
-|TunnelIngressPacketDropCount|Yes|Tunnel Ingress Packet Drop Count|Count|Total|Count of incoming packets dropped by tunnel|ConnectionName, RemoteIP, Instance|
|TunnelIngressPacketDropTSMismatch|Yes|Tunnel Ingress TS Mismatch Packet Drop|Count|Total|Incoming packet drop count from traffic selector mismatch of a tunnel|ConnectionName, RemoteIP, Instance|
|TunnelIngressPackets|Yes|Tunnel Ingress Packets|Count|Total|Incoming packet count of a tunnel|ConnectionName, RemoteIP, Instance|
|TunnelNatAllocations|No|Tunnel NAT Allocations|Count|Total|Count of allocations for a NAT rule on a tunnel|NatRule, ConnectionName, RemoteIP, Instance|
-|TunnelNatedBytes|No|Tunnel NATed Bytes|Bytes|Total|Number of bytes that were NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance|
+|TunnelNatedBytes|No|Tunnel NATed Bytes|Bytes|Total|Number of bytes that were NATed on a tunnel by a NAT rule |NatRule, ConnectionName, RemoteIP, Instance|
|TunnelNatedPackets|No|Tunnel NATed Packets|Count|Total|Number of packets that were NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance|
-|TunnelNatFlowCount|No|Tunnel NAT Flows|Count|Total|Number of NAT flows on a tunnel by flow type and NAT rule|NatRule, FlowType, ConnectionName, RemoteIP, Instance|
-|TunnelNatPacketDrop|No|Tunnel NAT Packet Drops|Count|Total|Number of NATed packets on a tunnel that dropped by drop type and NAT rule|NatRule, DropType, ConnectionName, RemoteIP, Instance|
-|TunnelPeakPackets|Yes|Tunnel Peak PPS|Count|Maximum|Tunnel Peak Packets Per Second|ConnectionName, RemoteIP, Instance|
+|TunnelNatFlowCount|No|Tunnel NAT Flows|Count|Total|Number of NAT flows on a tunnel by flow type and NAT rule|NatRule, ConnectionName, RemoteIP, FlowType, Instance|
+|TunnelNatPacketDrop|No|Tunnel NAT Packet Drops|Count|Total|Number of NATed packets on a tunnel that dropped by drop type and NAT rule|NatRule, ConnectionName, RemoteIP, DropType, Instance|
|TunnelReverseNatedBytes|No|Tunnel Reverse NATed Bytes|Bytes|Total|Number of bytes that were reverse NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance|
|TunnelReverseNatedPackets|No|Tunnel Reverse NATed Packets|Count|Total|Number of packets on a tunnel that were reverse NATed by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance|
-|TunnelTotalFlowCount|Yes|Tunnel Total Flow Count|Count|Total|Total flow count on a tunnel|ConnectionName, RemoteIP, Instance|
-|UserVpnRouteCount|No|User Vpn Route Count|Count|Total|Count of P2S User Vpn routes learned by gateway|RouteType, Instance|
-|VnetAddressPrefixCount|Yes|VNet Address Prefix Count|Count|Total|Count of Vnet address prefixes behind gateway|Instance|
## Microsoft.Network/virtualNetworks
This latest update adds a new column and reorders the metrics to be alphabetical
|PeeringAvailability|Yes|Bgp Availability|Percent|Average|BGP Availability between VirtualRouter and remote peers|Peer|
-## microsoft.network/vpngateways
+## Microsoft.Network/vpnGateways
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|AverageBandwidth|Yes|Gateway S2S Bandwidth|BytesPerSecond|Average|Site-to-site bandwidth of a gateway in bytes per second|Instance|
-|BgpPeerStatus|No|BGP Peer Status|Count|Average|Status of BGP peer|BgpPeerAddress, Instance|
-|BgpRoutesAdvertised|Yes|BGP Routes Advertised|Count|Total|Count of Bgp Routes Advertised through tunnel|BgpPeerAddress, Instance|
-|BgpRoutesLearned|Yes|BGP Routes Learned|Count|Total|Count of Bgp Routes Learned through tunnel|BgpPeerAddress, Instance|
-|MmsaCount|Yes|Tunnel MMSA Count|Count|Total|MMSA Count|ConnectionName, RemoteIP, Instance|
-|QmsaCount|Yes|Tunnel QMSA Count|Count|Total|QMSA Count|ConnectionName, RemoteIP, Instance|
+|AverageBandwidth|Yes|Gateway S2S Bandwidth|BytesPerSecond|Average|Average site-to-site bandwidth of a gateway in bytes per second|Instance|
|TunnelAverageBandwidth|Yes|Tunnel Bandwidth|BytesPerSecond|Average|Average bandwidth of a tunnel in bytes per second|ConnectionName, RemoteIP, Instance|
|TunnelEgressBytes|Yes|Tunnel Egress Bytes|Bytes|Total|Outgoing bytes of a tunnel|ConnectionName, RemoteIP, Instance|
-|TunnelEgressPacketDropCount|Yes|Tunnel Egress Packet Drop Count|Count|Total|Count of outgoing packets dropped by tunnel|ConnectionName, RemoteIP, Instance|
|TunnelEgressPacketDropTSMismatch|Yes|Tunnel Egress TS Mismatch Packet Drop|Count|Total|Outgoing packet drop count from traffic selector mismatch of a tunnel|ConnectionName, RemoteIP, Instance|
|TunnelEgressPackets|Yes|Tunnel Egress Packets|Count|Total|Outgoing packet count of a tunnel|ConnectionName, RemoteIP, Instance|
|TunnelIngressBytes|Yes|Tunnel Ingress Bytes|Bytes|Total|Incoming bytes of a tunnel|ConnectionName, RemoteIP, Instance|
-|TunnelIngressPacketDropCount|Yes|Tunnel Ingress Packet Drop Count|Count|Total|Count of incoming packets dropped by tunnel|ConnectionName, RemoteIP, Instance|
|TunnelIngressPacketDropTSMismatch|Yes|Tunnel Ingress TS Mismatch Packet Drop|Count|Total|Incoming packet drop count from traffic selector mismatch of a tunnel|ConnectionName, RemoteIP, Instance|
|TunnelIngressPackets|Yes|Tunnel Ingress Packets|Count|Total|Incoming packet count of a tunnel|ConnectionName, RemoteIP, Instance|
|TunnelNatAllocations|No|Tunnel NAT Allocations|Count|Total|Count of allocations for a NAT rule on a tunnel|NatRule, ConnectionName, RemoteIP, Instance|
-|TunnelNatedBytes|No|Tunnel NATed Bytes|Bytes|Total|Number of bytes that were NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance|
+|TunnelNatedBytes|No|Tunnel NATed Bytes|Bytes|Total|Number of bytes that were NATed on a tunnel by a NAT rule |NatRule, ConnectionName, RemoteIP, Instance|
|TunnelNatedPackets|No|Tunnel NATed Packets|Count|Total|Number of packets that were NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance|
-|TunnelNatFlowCount|No|Tunnel NAT Flows|Count|Total|Number of NAT flows on a tunnel by flow type and NAT rule|NatRule, FlowType, ConnectionName, RemoteIP, Instance|
-|TunnelNatPacketDrop|No|Tunnel NAT Packet Drops|Count|Total|Number of NATed packets on a tunnel that dropped by drop type and NAT rule|NatRule, DropType, ConnectionName, RemoteIP, Instance|
-|TunnelPeakPackets|Yes|Tunnel Peak PPS|Count|Maximum|Tunnel Peak Packets Per Second|ConnectionName, RemoteIP, Instance|
+|TunnelNatFlowCount|No|Tunnel NAT Flows|Count|Total|Number of NAT flows on a tunnel by flow type and NAT rule|NatRule, ConnectionName, RemoteIP, FlowType, Instance|
+|TunnelNatPacketDrop|No|Tunnel NAT Packet Drops|Count|Total|Number of NATed packets on a tunnel that dropped by drop type and NAT rule|NatRule, ConnectionName, RemoteIP, DropType, Instance|
|TunnelReverseNatedBytes|No|Tunnel Reverse NATed Bytes|Bytes|Total|Number of bytes that were reverse NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance|
|TunnelReverseNatedPackets|No|Tunnel Reverse NATed Packets|Count|Total|Number of packets on a tunnel that were reverse NATed by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance|
-|TunnelTotalFlowCount|Yes|Tunnel Total Flow Count|Count|Total|Total flow count on a tunnel|ConnectionName, RemoteIP, Instance|
-|VnetAddressPrefixCount|Yes|VNet Address Prefix Count|Count|Total|Count of Vnet address prefixes behind gateway|Instance|
## Microsoft.NotificationHubs/Namespaces/NotificationHubs
This latest update adds a new column and reorders the metrics to be alphabetical
|incoming|Yes|Incoming Messages|Count|Total|The count of all successful send API calls.|No Dimensions|
|incoming.all.failedrequests|Yes|All Incoming Failed Requests|Count|Total|Total incoming failed requests for a notification hub|No Dimensions|
|incoming.all.requests|Yes|All Incoming Requests|Count|Total|Total incoming requests for a notification hub|No Dimensions|
-|incoming.scheduled|Yes|Scheduled Push Notifications Sent|Count|Total|Scheduled Push Notifications Cancelled|No Dimensions|
+|incoming.scheduled|Yes|Scheduled Push Notifications Sent|Count|Total|Scheduled Push Notifications Sent|No Dimensions|
|incoming.scheduled.cancel|Yes|Scheduled Push Notifications Cancelled|Count|Total|Scheduled Push Notifications Cancelled|No Dimensions|
|installation.all|Yes|Installation Management Operations|Count|Total|Installation Management Operations|No Dimensions|
|installation.delete|Yes|Delete Installation Operations|Count|Total|Delete Installation Operations|No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|workload_qpu_metric|Yes|QPU Per Workload (Gen1)|Count|Average|QPU Per Workload. Range for A1 is 0-20, A2 is 0-40, A3 is 0-40, A4 is 0-80, A5 is 0-160, A6 is 0-320. Supported only for Power BI Embedded Generation 1 resources.|Workload|
-## microsoft.purview/accounts
+## Microsoft.Purview/accounts
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|ScanCancelled|Yes|Scan Cancelled|Count|Total|Indicates the number of scans cancelled.|No Dimensions|
-|ScanCompleted|Yes|Scan Completed|Count|Total|Indicates the number of scans completed successfully.|No Dimensions|
-|ScanFailed|Yes|Scan Failed|Count|Total|Indicates the number of scans failed.|No Dimensions|
-|ScanTimeTaken|Yes|Scan time taken|Seconds|Total|Indicates the total scan time in seconds.|No Dimensions|
+|ScanBillingUnits|Yes|Scan Billing Units|Count|Total|Indicates the scan billing units.|ResourceId|
+|ScanCancelled|Yes|Scan Cancelled|Count|Total|Indicates the number of scans cancelled.|ResourceId|
+|ScanCompleted|Yes|Scan Completed|Count|Total|Indicates the number of scans completed successfully.|ResourceId|
+|ScanFailed|Yes|Scan Failed|Count|Total|Indicates the number of scans failed.|ResourceId|
+|ScanTimeTaken|Yes|Scan time taken|Seconds|Total|Indicates the total scan time in seconds.|ResourceId|
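Every metric row in these tables follows the same fixed pipe-delimited layout: name, exportability, display name, unit, aggregation type, description, and a comma-separated dimension list (or the literal "No Dimensions"). As a minimal sketch of reading one of these rows programmatically, here is a small parser; the helper name and dictionary keys are illustrative, not part of any Azure SDK:

```python
# Parse one pipe-delimited metric row (as shown in the tables above) into a
# dict. Column keys below mirror the table headers; names are illustrative.

COLUMNS = ["Metric", "Exportable", "DisplayName", "Unit",
           "Aggregation", "Description", "Dimensions"]

def parse_metric_row(row: str) -> dict:
    cells = [c.strip() for c in row.strip().strip("|").split("|")]
    record = dict(zip(COLUMNS, cells))
    # "No Dimensions" means the metric cannot be split by any dimension.
    dims = record["Dimensions"]
    record["Dimensions"] = [] if dims == "No Dimensions" else [
        d.strip() for d in dims.split(",") if d.strip()
    ]
    return record

row = ("|ScanCompleted|Yes|Scan Completed|Count|Total"
       "|Indicates the number of scans completed successfully.|ResourceId|")
print(parse_metric_row(row)["Dimensions"])  # ['ResourceId']
```

Note this only works for rows whose description contains no literal `|` character, which holds for the tables above.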
## Microsoft.RecoveryServices/Vaults
This latest update adds a new column and reorders the metrics to be alphabetical
|ActiveConnections|No|ActiveConnections|Count|Total|Total ActiveConnections for Microsoft.Relay.|EntityName|
|ActiveListeners|No|ActiveListeners|Count|Total|Total ActiveListeners for Microsoft.Relay.|EntityName|
|BytesTransferred|Yes|BytesTransferred|Bytes|Total|Total BytesTransferred for Microsoft.Relay.|EntityName|
-|ListenerConnections-ClientError|No|ListenerConnections-ClientError|Count|Total|ClientError on ListenerConnections for Microsoft.Relay.|EntityName, OperationResult|
-|ListenerConnections-ServerError|No|ListenerConnections-ServerError|Count|Total|ServerError on ListenerConnections for Microsoft.Relay.|EntityName, OperationResult|
-|ListenerConnections-Success|No|ListenerConnections-Success|Count|Total|Successful ListenerConnections for Microsoft.Relay.|EntityName, OperationResult|
+|ListenerConnections-ClientError|No|ListenerConnections-ClientError|Count|Total|ClientError on ListenerConnections for Microsoft.Relay.|EntityName|
+|ListenerConnections-ServerError|No|ListenerConnections-ServerError|Count|Total|ServerError on ListenerConnections for Microsoft.Relay.|EntityName|
+|ListenerConnections-Success|No|ListenerConnections-Success|Count|Total|Successful ListenerConnections for Microsoft.Relay.|EntityName|
|ListenerConnections-TotalRequests|No|ListenerConnections-TotalRequests|Count|Total|Total ListenerConnections for Microsoft.Relay.|EntityName|
|ListenerDisconnects|No|ListenerDisconnects|Count|Total|Total ListenerDisconnects for Microsoft.Relay.|EntityName|
-|SenderConnections-ClientError|No|SenderConnections-ClientError|Count|Total|ClientError on SenderConnections for Microsoft.Relay.|EntityName, OperationResult|
-|SenderConnections-ServerError|No|SenderConnections-ServerError|Count|Total|ServerError on SenderConnections for Microsoft.Relay.|EntityName, OperationResult|
-|SenderConnections-Success|No|SenderConnections-Success|Count|Total|Successful SenderConnections for Microsoft.Relay.|EntityName, OperationResult|
+|SenderConnections-ClientError|No|SenderConnections-ClientError|Count|Total|ClientError on SenderConnections for Microsoft.Relay.|EntityName|
+|SenderConnections-ServerError|No|SenderConnections-ServerError|Count|Total|ServerError on SenderConnections for Microsoft.Relay.|EntityName|
+|SenderConnections-Success|No|SenderConnections-Success|Count|Total|Successful SenderConnections for Microsoft.Relay.|EntityName|
|SenderConnections-TotalRequests|No|SenderConnections-TotalRequests|Count|Total|Total SenderConnections requests for Microsoft.Relay.|EntityName|
|SenderDisconnects|No|SenderDisconnects|Count|Total|Total SenderDisconnects for Microsoft.Relay.|EntityName|
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|DocumentsProcessedCount|Yes|Document processed count|Count|Total|Number of documents processed|DataSourceName, Failed, IndexerName, IndexName, SkillsetName|
|SearchLatency|Yes|Search Latency|Seconds|Average|Average search latency for the search service|No Dimensions|
|SearchQueriesPerSecond|Yes|Search queries per second|CountPerSecond|Average|Search queries per second for the search service|No Dimensions|
-|SkillExecutionCount|Yes|Skill execution invocation count|Count|Total|Number of skill executions|DataSourceName, Failed, IndexerName, SkillName, SkillsetName, SkillType|
|ThrottledSearchQueriesPercentage|Yes|Throttled search queries percentage|Percent|Average|Percentage of search queries that were throttled for the search service|No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
+|AbandonMessage|Yes|Abandoned Messages|Count|Total|Abandoned Messages|EntityName|
|ActiveConnections|No|ActiveConnections|Count|Total|Total Active Connections for Microsoft.ServiceBus.|No Dimensions|
|ActiveMessages|No|Count of active messages in a Queue/Topic.|Count|Average|Count of active messages in a Queue/Topic.|EntityName|
+|CompleteMessage|Yes|Completed Messages|Count|Total|Completed Messages|EntityName|
|ConnectionsClosed|No|Connections Closed.|Count|Average|Connections Closed for Microsoft.ServiceBus.|EntityName|
|ConnectionsOpened|No|Connections Opened.|Count|Average|Connections Opened for Microsoft.ServiceBus.|EntityName|
-|CPUXNS|No|CPU (Deprecated)|Percent|Maximum|Service bus premium namespace CPU usage metric. This metric is depricated. Please use the CPU metric (NamespaceCpuUsage) instead.|Replica|
+|CPUXNS|No|CPU (Deprecated)|Percent|Maximum|Service bus premium namespace CPU usage metric. This metric is deprecated. Please use the CPU metric (NamespaceCpuUsage) instead.|No Dimensions|
|DeadletteredMessages|No|Count of dead-lettered messages in a Queue/Topic.|Count|Average|Count of dead-lettered messages in a Queue/Topic.|EntityName|
|IncomingMessages|Yes|Incoming Messages|Count|Total|Incoming Messages for Microsoft.ServiceBus.|EntityName|
|IncomingRequests|Yes|Incoming Requests|Count|Total|Incoming Requests for Microsoft.ServiceBus.|EntityName|
|Messages|No|Count of messages in a Queue/Topic.|Count|Average|Count of messages in a Queue/Topic.|EntityName|
-|NamespaceCpuUsage|No|CPU|Percent|Maximum|Service bus premium namespace CPU usage metric.|Replica|
-|NamespaceMemoryUsage|No|Memory Usage|Percent|Maximum|Service bus premium namespace memory usage metric.|Replica|
+|NamespaceCpuUsage|No|CPU|Percent|Maximum|CPU usage metric for Premium SKU namespaces.|No Dimensions|
+|NamespaceMemoryUsage|No|Memory Usage|Percent|Maximum|Memory usage metric for Premium SKU namespaces.|No Dimensions|
|OutgoingMessages|Yes|Outgoing Messages|Count|Total|Outgoing Messages for Microsoft.ServiceBus.|EntityName|
+|PendingCheckpointOperationCount|No|Pending Checkpoint Operations Count.|Count|Total|Pending Checkpoint Operations Count.|No Dimensions|
|ScheduledMessages|No|Count of scheduled messages in a Queue/Topic.|Count|Average|Count of scheduled messages in a Queue/Topic.|EntityName|
-|ServerErrors|No|Server Errors.|Count|Total|Server Errors for Microsoft.ServiceBus.|EntityName, OperationResult|
+|ServerErrors|No|Server Errors.|Count|Total|Server Errors for Microsoft.ServiceBus.|EntityName|
+|ServerSendLatency|Yes|Server Send Latency.|Milliseconds|Average|Server Send Latency.|EntityName|
|Size|No|Size|Bytes|Average|Size of a Queue/Topic in Bytes.|EntityName|
-|SuccessfulRequests|No|Successful Requests|Count|Total|Total successful requests for a namespace|EntityName, OperationResult|
-|ThrottledRequests|No|Throttled Requests.|Count|Total|Throttled Requests for Microsoft.ServiceBus.|EntityName, OperationResult|
-|UserErrors|No|User Errors.|Count|Total|User Errors for Microsoft.ServiceBus.|EntityName, OperationResult|
-|WSXNS|No|Memory Usage (Deprecated)|Percent|Maximum|Service bus premium namespace memory usage metric. This metric is deprecated. Please use the Memory Usage (NamespaceMemoryUsage) metric instead.|Replica|
+|SuccessfulRequests|No|Successful Requests|Count|Total|Total successful requests for a namespace|EntityName|
+|ThrottledRequests|No|Throttled Requests.|Count|Total|Throttled Requests for Microsoft.ServiceBus.|EntityName, MessagingErrorSubCode|
+|UserErrors|No|User Errors.|Count|Total|User Errors for Microsoft.ServiceBus.|EntityName|
+|WSXNS|No|Memory Usage (Deprecated)|Percent|Maximum|Service bus premium namespace memory usage metric. This metric is deprecated. Please use the Memory Usage (NamespaceMemoryUsage) metric instead.|No Dimensions|
## Microsoft.SignalRService/SignalR
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|ConnectionCount|Yes|Connection Count|Count|Maximum|The amount of user connection.|Endpoint|
+|ConnectionQuotaUtilization|Yes|Connection Quota Utilization|Percent|Maximum|The percentage of connection connected relative to connection quota.|No Dimensions|
|InboundTraffic|Yes|Inbound Traffic|Bytes|Total|The inbound traffic of service|No Dimensions|
|MessageCount|Yes|Message Count|Count|Total|The total amount of messages.|No Dimensions|
|OutboundTraffic|Yes|Outbound Traffic|Bytes|Total|The outbound traffic of service|No Dimensions|
|SystemErrors|Yes|System Errors|Percent|Maximum|The percentage of system errors|No Dimensions|
|UserErrors|Yes|User Errors|Percent|Maximum|The percentage of user errors|No Dimensions|
-|ConnectionQuotaUtilization|Yes|Connection Quota Utilization (Platform)|Percent|Average|The percentage of connection connected relative to connection quota|No Dimensions|
+
## Microsoft.SignalRService/WebPubSub
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|InboundTraffic|Yes|Inbound Traffic|Bytes|Total|The traffic originating from outside to inside of the service. It is aggregated by adding all the bytes of the traffic.|No Dimensions|
-|OutboundTraffic|Yes|Outbound Traffic|Bytes|Total|The traffic originating from inside to outside of the service. It is aggregated by adding all the bytes of the traffic.|No Dimensions|
-|TotalConnectionCount|Yes|Connection Count|Count|Maximum|The number of user connections established to the service. It is aggregated by adding all the online connections.|No Dimensions|
+|InboundTraffic|Yes|Inbound Traffic|Bytes|Total|The inbound traffic of service|No Dimensions|
+|OutboundTraffic|Yes|Outbound Traffic|Bytes|Total|The outbound traffic of service|No Dimensions|
+|TotalConnectionCount|Yes|Connection Count|Count|Maximum|The amount of user connection.|No Dimensions|
## Microsoft.Sql/managedInstances
This latest update adds a new column and reorders the metrics to be alphabetical
||||||||
|allocated_data_storage|Yes|Data space allocated|Bytes|Average|Data space allocated|No Dimensions|
|allocated_data_storage_percent|Yes|Data space allocated percent|Percent|Maximum|Data space allocated percent|No Dimensions|
+|app_cpu_billed|Yes|App CPU billed|Count|Total|App CPU billed. Applies to serverless databases.|No Dimensions|
+|app_cpu_percent|Yes|App CPU percentage|Percent|Average|App CPU percentage. Applies to serverless databases.|No Dimensions|
+|app_memory_percent|Yes|App memory percentage|Percent|Average|App memory percentage. Applies to serverless databases.|No Dimensions|
|cpu_limit|Yes|CPU limit|Count|Average|CPU limit. Applies to vCore-based elastic pools.|No Dimensions|
|cpu_percent|Yes|CPU percentage|Percent|Average|CPU percentage|No Dimensions|
|cpu_used|Yes|CPU used|Count|Average|CPU used. Applies to vCore-based elastic pools.|No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|Availability|Yes|Availability|Percent|Average|The percentage of availability for the storage service or the specified API operation. Availability is calculated by taking the TotalBillableRequests value and dividing it by the number of applicable requests, including those that produced unexpected errors. All unexpected errors result in reduced availability for the storage service or the specified API operation.|GeoType, ApiName, Authentication|
|Egress|Yes|Egress|Bytes|Total|The amount of egress data. This number includes egress to external client from Azure Storage as well as egress within Azure. As a result, this number does not reflect billable egress.|GeoType, ApiName, Authentication|
|Ingress|Yes|Ingress|Bytes|Total|The amount of ingress data, in bytes. This number includes ingress from an external client into Azure Storage as well as ingress within Azure.|GeoType, ApiName, Authentication|
-|SuccessE2ELatency|Yes|Success E2E Latency|MilliSeconds|Average|The average end-to-end latency of successful requests made to a storage service or the specified API operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.|GeoType, ApiName, Authentication|
-|SuccessServerLatency|Yes|Success Server Latency|MilliSeconds|Average|The average time used to process a successful request by Azure Storage. This value does not include the network latency specified in SuccessE2ELatency.|GeoType, ApiName, Authentication|
+|SuccessE2ELatency|Yes|Success E2E Latency|Milliseconds|Average|The average end-to-end latency of successful requests made to a storage service or the specified API operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.|GeoType, ApiName, Authentication|
+|SuccessServerLatency|Yes|Success Server Latency|Milliseconds|Average|The average time used to process a successful request by Azure Storage. This value does not include the network latency specified in SuccessE2ELatency.|GeoType, ApiName, Authentication|
|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different type of response.|ResponseType, GeoType, ApiName, Authentication|
|UsedCapacity|Yes|Used capacity|Bytes|Average|The amount of storage used by the storage account. For standard storage accounts, it's the sum of capacity used by blob, table, file, and queue. For premium storage accounts and Blob storage accounts, it is the same as BlobCapacity or FileCapacity.|No Dimensions|
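The Availability row above spells out how the percentage is computed: TotalBillableRequests divided by all applicable requests, unexpected errors included. A rough sketch of that calculation (the zero-traffic convention below is an assumption, not stated in the table):

```python
# Sketch of the Availability computation described above: billable requests
# over all applicable requests, as a percent. Names are illustrative.

def availability_percent(total_billable_requests: int,
                         applicable_requests: int) -> float:
    if applicable_requests == 0:
        # Assumption: with no traffic, treat the service as fully available.
        return 100.0
    return 100.0 * total_billable_requests / applicable_requests

print(availability_percent(9990, 10000))  # 99.9
```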
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|ClientIOPS|Yes|Total Client IOPS|Count|Average|The rate of client file operations processed by the Cache.|No Dimensions|
-|ClientLatency|Yes|Average Client Latency|MilliSeconds|Average|Average latency of client file operations to the Cache.|No Dimensions|
+|ClientLatency|Yes|Average Client Latency|Milliseconds|Average|Average latency of client file operations to the Cache.|No Dimensions|
|ClientLockIOPS|Yes|Client Lock IOPS|CountPerSecond|Average|Client file locking operations per second.|No Dimensions|
|ClientMetadataReadIOPS|Yes|Client Metadata Read IOPS|CountPerSecond|Average|The rate of client file operations sent to the Cache, excluding data reads, that do not modify persistent state.|No Dimensions|
|ClientMetadataWriteIOPS|Yes|Client Metadata Write IOPS|CountPerSecond|Average|The rate of client file operations sent to the Cache, excluding data writes, that modify persistent state.|No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|StorageTargetFreeWriteSpace|Yes|Storage Target Free Write Space|Bytes|Average|Write space available for dirty data associated with a storage target.|StorageTarget|
|StorageTargetHealth|Yes|Storage Target Health|Count|Average|Boolean results of connectivity test between the Cache and Storage Targets.|No Dimensions|
|StorageTargetIOPS|Yes|Total StorageTarget IOPS|Count|Average|The rate of all file operations the Cache sends to a particular StorageTarget.|StorageTarget|
-|StorageTargetLatency|Yes|StorageTarget Latency|MilliSeconds|Average|The average round trip latency of all the file operations the Cache sends to a partricular StorageTarget.|StorageTarget|
+|StorageTargetLatency|Yes|StorageTarget Latency|Milliseconds|Average|The average round trip latency of all the file operations the Cache sends to a particular StorageTarget.|StorageTarget|
|StorageTargetMetadataReadIOPS|Yes|StorageTarget Metadata Read IOPS|CountPerSecond|Average|The rate of file operations that do not modify persistent state, and excluding the read operation, that the Cache sends to a particular StorageTarget.|StorageTarget|
|StorageTargetMetadataWriteIOPS|Yes|StorageTarget Metadata Write IOPS|CountPerSecond|Average|The rate of file operations that do modify persistent state and excluding the write operation, that the Cache sends to a particular StorageTarget.|StorageTarget|
|StorageTargetReadAheadThroughput|Yes|StorageTarget Read Ahead Throughput|BytesPerSecond|Average|The rate the Cache opportunistically reads data from the StorageTarget.|StorageTarget|
This latest update adds a new column and reorders the metrics to be alphabetical
|BigDataPoolApplicationsEnded|No|Ended Apache Spark applications|Count|Total|Count of Apache Spark pool applications ended|JobType, JobResult|
+## Microsoft.Synapse/workspaces/kustoPools
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|BatchBlobCount|Yes|Batch Blob Count|Count|Average|Number of data sources in an aggregated batch for ingestion.|Database|
+|BatchDuration|Yes|Batch Duration|Seconds|Average|The duration of the aggregation phase in the ingestion flow.|Database|
+|BatchesProcessed|Yes|Batches Processed|Count|Total|Number of batches aggregated for ingestion. Batching Type: whether the batch reached batching time, data size or number of files limit set by batching policy|Database, SealReason|
+|BatchSize|Yes|Batch Size|Bytes|Average|Uncompressed expected data size in an aggregated batch for ingestion.|Database|
+|BlobsDropped|Yes|Blobs Dropped|Count|Total|Number of blobs permanently rejected by a component.|Database, ComponentType, ComponentName|
+|BlobsProcessed|Yes|Blobs Processed|Count|Total|Number of blobs processed by a component.|Database, ComponentType, ComponentName|
+|BlobsReceived|Yes|Blobs Received|Count|Total|Number of blobs received from input stream by a component.|Database, ComponentType, ComponentName|
+|CacheUtilization|Yes|Cache utilization|Percent|Average|Utilization level in the cluster scope|No Dimensions|
+|ContinuousExportMaxLatenessMinutes|Yes|Continuous Export Max Lateness|Count|Maximum|The lateness (in minutes) reported by the continuous export jobs in the cluster|No Dimensions|
+|ContinuousExportNumOfRecordsExported|Yes|Continuous export - num of exported records|Count|Total|Number of records exported, fired for every storage artifact written during the export operation|ContinuousExportName, Database|
+|ContinuousExportPendingCount|Yes|Continuous Export Pending Count|Count|Maximum|The number of pending continuous export jobs ready for execution|No Dimensions|
+|ContinuousExportResult|Yes|Continuous Export Result|Count|Count|Indicates whether Continuous Export succeeded or failed|ContinuousExportName, Result, Database|
+|CPU|Yes|CPU|Percent|Average|CPU utilization level|No Dimensions|
+|DiscoveryLatency|Yes|Discovery Latency|Seconds|Average|Reported by data connections (if they exist). Time in seconds from when a message is enqueued or event is created until it is discovered by data connection. This time is not included in the Azure Data Explorer total ingestion duration.|ComponentType, ComponentName|
+|EventsDropped|Yes|Events Dropped|Count|Total|Number of events dropped permanently by data connection. An Ingestion result metric with a failure reason will be sent.|ComponentType, ComponentName|
+|EventsProcessed|Yes|Events Processed|Count|Total|Number of events processed by the cluster|ComponentType, ComponentName|
+|EventsProcessedForEventHubs|Yes|Events Processed (for Event/IoT Hubs)|Count|Total|Number of events processed by the cluster when ingesting from Event/IoT Hub|EventStatus|
+|EventsReceived|Yes|Events Received|Count|Total|Number of events received by data connection.|ComponentType, ComponentName|
+|ExportUtilization|Yes|Export Utilization|Percent|Maximum|Export utilization|No Dimensions|
+|IngestionLatencyInSeconds|Yes|Ingestion Latency|Seconds|Average|Latency of data ingested, from the time the data was received in the cluster until it's ready for query. The ingestion latency period depends on the ingestion scenario.|No Dimensions|
+|IngestionResult|Yes|Ingestion result|Count|Total|Total number of sources that either failed or succeeded to be ingested. Splitting the metric by status, you can get detailed information about the status of the ingestion operations.|IngestionResultDetails, FailureKind|
+|IngestionUtilization|Yes|Ingestion utilization|Percent|Average|Ratio of used ingestion slots in the cluster|No Dimensions|
+|IngestionVolumeInMB|Yes|Ingestion Volume|Bytes|Total|Overall volume of ingested data to the cluster|Database|
+|InstanceCount|Yes|Instance Count|Count|Average|Total instance count|No Dimensions|
+|KeepAlive|Yes|Keep alive|Count|Average|Sanity check indicates the cluster responds to queries|No Dimensions|
+|MaterializedViewAgeMinutes|Yes|Materialized View Age|Count|Average|The materialized view age in minutes|Database, MaterializedViewName|
+|MaterializedViewDataLoss|Yes|Materialized View Data Loss|Count|Maximum|Indicates potential data loss in materialized view|Database, MaterializedViewName, Kind|
+|MaterializedViewExtentsRebuild|Yes|Materialized View Extents Rebuild|Count|Average|Number of extents rebuild|Database, MaterializedViewName|
+|MaterializedViewHealth|Yes|Materialized View Health|Count|Average|The health of the materialized view (1 for healthy, 0 for non-healthy)|Database, MaterializedViewName|
+|MaterializedViewRecordsInDelta|Yes|Materialized View Records In Delta|Count|Average|The number of records in the non-materialized part of the view|Database, MaterializedViewName|
+|MaterializedViewResult|Yes|Materialized View Result|Count|Average|The result of the materialization process|Database, MaterializedViewName, Result|
+|QueryDuration|Yes|Query duration|Milliseconds|Average|Queries' duration in seconds|QueryStatus|
+|QueryResult|No|Query Result|Count|Count|Total number of queries.|QueryStatus|
+|QueueLength|Yes|Queue Length|Count|Average|Number of pending messages in a component's queue.|ComponentType|
+|QueueOldestMessage|Yes|Queue Oldest Message|Count|Average|Time in seconds from when the oldest message in queue was inserted.|ComponentType|
+|ReceivedDataSizeBytes|Yes|Received Data Size Bytes|Bytes|Average|Size of data received by data connection. This is the size of the data stream, or of raw data size if provided.|ComponentType, ComponentName|
+|StageLatency|Yes|Stage Latency|Seconds|Average|Cumulative time from when a message is discovered until it is received by the reporting component for processing (discovery time is set when message is enqueued for ingestion queue, or when discovered by data connection).|Database, ComponentType|
+|SteamingIngestRequestRate|Yes|Streaming Ingest Request Rate|Count|RateRequestsPerSecond|Streaming ingest request rate (requests per second)|No Dimensions|
+|StreamingIngestDataRate|Yes|Streaming Ingest Data Rate|Count|Average|Streaming ingest data rate (MB per second)|No Dimensions|
+|StreamingIngestDuration|Yes|Streaming Ingest Duration|Milliseconds|Average|Streaming ingest duration in milliseconds|No Dimensions|
+|StreamingIngestResults|Yes|Streaming Ingest Result|Count|Count|Streaming ingest result|Result|
+|TotalNumberOfConcurrentQueries|Yes|Total number of concurrent queries|Count|Maximum|Total number of concurrent queries|No Dimensions|
+|TotalNumberOfExtents|Yes|Total number of extents|Count|Total|Total number of data extents|No Dimensions|
+|TotalNumberOfThrottledCommands|Yes|Total number of throttled commands|Count|Total|Total number of throttled commands|CommandType|
+|TotalNumberOfThrottledQueries|Yes|Total number of throttled queries|Count|Maximum|Total number of throttled queries|No Dimensions|
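The cluster metrics listed above can be retrieved programmatically through the Azure Monitor metrics REST endpoint. A minimal sketch of building such a request URL (the subscription, resource group, cluster name, and `api-version` below are placeholder assumptions, not values from this article):

```python
from urllib.parse import urlencode

def metrics_url(resource_id, metric_names, api_version="2018-01-01"):
    """Build an Azure Monitor metrics REST API URL for the given resource."""
    params = urlencode({
        "api-version": api_version,
        "metricnames": ",".join(metric_names),  # comma-separated metric IDs
    })
    return (f"https://management.azure.com{resource_id}"
            f"/providers/Microsoft.Insights/metrics?{params}")

# Placeholder resource ID for illustration only
url = metrics_url(
    "/subscriptions/sub-id/resourceGroups/rg/providers"
    "/Microsoft.Kusto/Clusters/mycluster",
    ["QueryDuration", "StreamingIngestDataRate"],
)
```

The metric IDs passed to `metricnames` are the values from the first column of the table above.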
## Microsoft.Synapse/workspaces/sqlPools

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|Requests|No|Requests|Count|Total|API Connection Requests|HttpStatusCode, ClientIPAddress|
+|ApiConnectionRequests|Yes|Requests|Count|Total|API Connection Requests|HttpStatusCode, ClientIPAddress|
## Microsoft.Web/hostingEnvironments
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|AppConnections|Yes|Connections|Count|Average|The number of bound sockets existing in the sandbox (w3wp.exe and its child processes). A bound socket is created by calling bind()/connect() APIs and remains until said socket is closed with CloseHandle()/closesocket(). For WebApps and FunctionApps.|Instance|
-|AverageMemoryWorkingSet|Yes|Average memory working set|Bytes|Average|The average amount of memory used by the app, in megabytes (MiB). For WebApps and FunctionApps.|Instance|
-|AverageResponseTime|Yes|Average Response Time (deprecated)|Seconds|Average|The average time taken for the app to serve requests, in seconds. For WebApps and FunctionApps.|Instance|
-|BytesReceived|Yes|Data In|Bytes|Total|The amount of incoming bandwidth consumed by the app, in MiB. For WebApps and FunctionApps.|Instance|
-|BytesSent|Yes|Data Out|Bytes|Total|The amount of outgoing bandwidth consumed by the app, in MiB. For WebApps and FunctionApps.|Instance|
-|CpuTime|Yes|CPU Time|Seconds|Total|The amount of CPU consumed by the app, in seconds. For more information about this metric. Please see https://aka.ms/website-monitor-cpu-time-vs-cpu-percentage (CPU time vs CPU percentage). For WebApps only.|Instance|
-|CurrentAssemblies|Yes|Current Assemblies|Count|Average|The current number of Assemblies loaded across all AppDomains in this application. For WebApps and FunctionApps.|Instance|
-|FileSystemUsage|Yes|File System Usage|Bytes|Average|Percentage of filesystem quota consumed by the app. For WebApps and FunctionApps.|No Dimensions|
-|FunctionExecutionCount|Yes|Function Execution Count|Count|Total|Function Execution Count. For FunctionApps only.|Instance|
-|FunctionExecutionUnits|Yes|Function Execution Units|Count|Total|Function Execution Units. For FunctionApps only.|Instance|
-|Gen0Collections|Yes|Gen 0 Garbage Collections|Count|Total|The number of times the generation 0 objects are garbage collected since the start of the app process. Higher generation GCs include all lower generation GCs. For WebApps and FunctionApps.|Instance|
-|Gen1Collections|Yes|Gen 1 Garbage Collections|Count|Total|The number of times the generation 1 objects are garbage collected since the start of the app process. Higher generation GCs include all lower generation GCs. For WebApps and FunctionApps.|Instance|
-|Gen2Collections|Yes|Gen 2 Garbage Collections|Count|Total|The number of times the generation 2 objects are garbage collected since the start of the app process. For WebApps and FunctionApps.|Instance|
-|Handles|Yes|Handle Count|Count|Average|The total number of handles currently open by the app process. For WebApps and FunctionApps.|Instance|
-|HealthCheckStatus|Yes|Health check status|Count|Average|Health check status. For WebApps and FunctionApps.|Instance|
-|Http101|Yes|Http 101|Count|Total|The count of requests resulting in an HTTP status code 101. For WebApps and FunctionApps.|Instance|
-|Http2xx|Yes|Http 2xx|Count|Total|The count of requests resulting in an HTTP status code = 200 but < 300. For WebApps and FunctionApps.|Instance|
-|Http3xx|Yes|Http 3xx|Count|Total|The count of requests resulting in an HTTP status code = 300 but < 400. For WebApps and FunctionApps.|Instance|
-|Http401|Yes|Http 401|Count|Total|The count of requests resulting in HTTP 401 status code. For WebApps and FunctionApps.|Instance|
-|Http403|Yes|Http 403|Count|Total|The count of requests resulting in HTTP 403 status code. For WebApps and FunctionApps.|Instance|
-|Http404|Yes|Http 404|Count|Total|The count of requests resulting in HTTP 404 status code. For WebApps and FunctionApps.|Instance|
-|Http406|Yes|Http 406|Count|Total|The count of requests resulting in HTTP 406 status code. For WebApps and FunctionApps.|Instance|
-|Http4xx|Yes|Http 4xx|Count|Total|The count of requests resulting in an HTTP status code = 400 but < 500. For WebApps and FunctionApps.|Instance|
-|Http5xx|Yes|Http Server Errors|Count|Total|The count of requests resulting in an HTTP status code = 500 but < 600. For WebApps and FunctionApps.|Instance|
-|HttpResponseTime|Yes|Response Time|Seconds|Average|The time taken for the app to serve requests, in seconds. For WebApps and FunctionApps.|Instance|
-|IoOtherBytesPerSecond|Yes|IO Other Bytes Per Second|BytesPerSecond|Total|The rate at which the app process is issuing bytes to I/O operations that don't involve data, such as control operations. For WebApps and FunctionApps.|Instance|
-|IoOtherOperationsPerSecond|Yes|IO Other Operations Per Second|BytesPerSecond|Total|The rate at which the app process is issuing I/O operations that aren't read or write operations. For WebApps and FunctionApps.|Instance|
-|IoReadBytesPerSecond|Yes|IO Read Bytes Per Second|BytesPerSecond|Total|The rate at which the app process is reading bytes from I/O operations. For WebApps and FunctionApps.|Instance|
-|IoReadOperationsPerSecond|Yes|IO Read Operations Per Second|BytesPerSecond|Total|The rate at which the app process is issuing read I/O operations. For WebApps and FunctionApps.|Instance|
-|IoWriteBytesPerSecond|Yes|IO Write Bytes Per Second|BytesPerSecond|Total|The rate at which the app process is writing bytes to I/O operations. For WebApps and FunctionApps.|Instance|
-|IoWriteOperationsPerSecond|Yes|IO Write Operations Per Second|BytesPerSecond|Total|The rate at which the app process is issuing write I/O operations. For WebApps and FunctionApps.|Instance|
-|MemoryWorkingSet|Yes|Memory working set|Bytes|Average|The current amount of memory used by the app, in MiB. For WebApps and FunctionApps.|Instance|
-|PrivateBytes|Yes|Private Bytes|Bytes|Average|Private Bytes is the current size, in bytes, of memory that the app process has allocated that can't be shared with other processes. For WebApps and FunctionApps.|Instance|
-|Requests|Yes|Requests|Count|Total|The total number of requests regardless of their resulting HTTP status code. For WebApps and FunctionApps.|Instance|
-|RequestsInApplicationQueue|Yes|Requests In Application Queue|Count|Average|The number of requests in the application request queue. For WebApps and FunctionApps.|Instance|
-|Threads|Yes|Thread Count|Count|Average|The number of threads currently active in the app process. For WebApps and FunctionApps.|Instance|
-|TotalAppDomains|Yes|Total App Domains|Count|Average|The current number of AppDomains loaded in this application. For WebApps and FunctionApps.|Instance|
-|TotalAppDomainsUnloaded|Yes|Total App Domains Unloaded|Count|Average|The total number of AppDomains unloaded since the start of the application. For WebApps and FunctionApps.|Instance|
+|AppConnections|Yes|Connections|Count|Average|The number of bound sockets existing in the sandbox (w3wp.exe and its child processes). A bound socket is created by calling bind()/connect() APIs and remains until said socket is closed with CloseHandle()/closesocket().|Instance|
+|AverageMemoryWorkingSet|Yes|Average memory working set|Bytes|Average|The average amount of memory used by the app, in megabytes (MiB).|Instance|
+|AverageResponseTime|Yes|Average Response Time (deprecated)|Seconds|Average|The average time taken for the app to serve requests, in seconds.|Instance|
+|BytesReceived|Yes|Data In|Bytes|Total|The amount of incoming bandwidth consumed by the app, in MiB.|Instance|
+|BytesSent|Yes|Data Out|Bytes|Total|The amount of outgoing bandwidth consumed by the app, in MiB.|Instance|
+|CpuTime|Yes|CPU Time|Seconds|Total|The amount of CPU consumed by the app, in seconds. For more information about this metric, see https://aka.ms/website-monitor-cpu-time-vs-cpu-percentage (CPU time vs CPU percentage).|Instance|
+|CurrentAssemblies|Yes|Current Assemblies|Count|Average|The current number of Assemblies loaded across all AppDomains in this application.|Instance|
+|FileSystemUsage|Yes|File System Usage|Bytes|Average|Percentage of filesystem quota consumed by the app.|No Dimensions|
+|FunctionExecutionCount|Yes|Function Execution Count|Count|Total|Function Execution Count|Instance|
+|FunctionExecutionUnits|Yes|Function Execution Units|Count|Total|Function Execution Units|Instance|
+|Gen0Collections|Yes|Gen 0 Garbage Collections|Count|Total|The number of times the generation 0 objects are garbage collected since the start of the app process. Higher generation GCs include all lower generation GCs.|Instance|
+|Gen1Collections|Yes|Gen 1 Garbage Collections|Count|Total|The number of times the generation 1 objects are garbage collected since the start of the app process. Higher generation GCs include all lower generation GCs.|Instance|
+|Gen2Collections|Yes|Gen 2 Garbage Collections|Count|Total|The number of times the generation 2 objects are garbage collected since the start of the app process.|Instance|
+|Handles|Yes|Handle Count|Count|Average|The total number of handles currently open by the app process.|Instance|
+|HealthCheckStatus|Yes|Health check status|Count|Average|Health check status|Instance|
+|Http101|Yes|Http 101|Count|Total|The count of requests resulting in an HTTP status code 101.|Instance|
+|Http2xx|Yes|Http 2xx|Count|Total|The count of requests resulting in an HTTP status code = 200 but < 300.|Instance|
+|Http3xx|Yes|Http 3xx|Count|Total|The count of requests resulting in an HTTP status code = 300 but < 400.|Instance|
+|Http401|Yes|Http 401|Count|Total|The count of requests resulting in HTTP 401 status code.|Instance|
+|Http403|Yes|Http 403|Count|Total|The count of requests resulting in HTTP 403 status code.|Instance|
+|Http404|Yes|Http 404|Count|Total|The count of requests resulting in HTTP 404 status code.|Instance|
+|Http406|Yes|Http 406|Count|Total|The count of requests resulting in HTTP 406 status code.|Instance|
+|Http4xx|Yes|Http 4xx|Count|Total|The count of requests resulting in an HTTP status code = 400 but < 500.|Instance|
+|Http5xx|Yes|Http Server Errors|Count|Total|The count of requests resulting in an HTTP status code = 500 but < 600.|Instance|
+|HttpResponseTime|Yes|Response Time|Seconds|Average|The time taken for the app to serve requests, in seconds.|Instance|
+|IoOtherBytesPerSecond|Yes|IO Other Bytes Per Second|BytesPerSecond|Total|The rate at which the app process is issuing bytes to I/O operations that don't involve data, such as control operations.|Instance|
+|IoOtherOperationsPerSecond|Yes|IO Other Operations Per Second|BytesPerSecond|Total|The rate at which the app process is issuing I/O operations that aren't read or write operations.|Instance|
+|IoReadBytesPerSecond|Yes|IO Read Bytes Per Second|BytesPerSecond|Total|The rate at which the app process is reading bytes from I/O operations.|Instance|
+|IoReadOperationsPerSecond|Yes|IO Read Operations Per Second|BytesPerSecond|Total|The rate at which the app process is issuing read I/O operations.|Instance|
+|IoWriteBytesPerSecond|Yes|IO Write Bytes Per Second|BytesPerSecond|Total|The rate at which the app process is writing bytes to I/O operations.|Instance|
+|IoWriteOperationsPerSecond|Yes|IO Write Operations Per Second|BytesPerSecond|Total|The rate at which the app process is issuing write I/O operations.|Instance|
+|MemoryWorkingSet|Yes|Memory working set|Bytes|Average|The current amount of memory used by the app, in MiB.|Instance|
+|PrivateBytes|Yes|Private Bytes|Bytes|Average|Private Bytes is the current size, in bytes, of memory that the app process has allocated that can't be shared with other processes.|Instance|
+|Requests|Yes|Requests|Count|Total|The total number of requests regardless of their resulting HTTP status code.|Instance|
+|RequestsInApplicationQueue|Yes|Requests In Application Queue|Count|Average|The number of requests in the application request queue.|Instance|
+|ScmCpuTime|Yes|ScmCpuTime|Seconds|Total|ScmCpuTime|Instance|
+|ScmPrivateBytes|Yes|ScmPrivateBytes|Bytes|Average|ScmPrivateBytes|Instance|
+|Threads|Yes|Thread Count|Count|Average|The number of threads currently active in the app process.|Instance|
+|TotalAppDomains|Yes|Total App Domains|Count|Average|The current number of AppDomains loaded in this application.|Instance|
+|TotalAppDomainsUnloaded|Yes|Total App Domains Unloaded|Count|Average|The total number of AppDomains unloaded since the start of the application.|Instance|
## Microsoft.Web/sites/slots
This latest update adds a new column and reorders the metrics to be alphabetical
|AverageResponseTime|Yes|Average Response Time (deprecated)|Seconds|Average|The average time taken for the app to serve requests, in seconds.|Instance| |BytesReceived|Yes|Data In|Bytes|Total|The amount of incoming bandwidth consumed by the app, in MiB.|Instance| |BytesSent|Yes|Data Out|Bytes|Total|The amount of outgoing bandwidth consumed by the app, in MiB.|Instance|
-|CpuTime|Yes|CPU Time|Seconds|Total|The amount of CPU consumed by the app, in seconds. For more information about this metric. Please see https://aka.ms/website-monitor-cpu-time-vs-cpu-percentage (CPU time vs CPU percentage).|Instance|
+|CpuTime|Yes|CPU Time|Seconds|Total|The amount of CPU consumed by the app, in seconds. For more information about this metric, see https://aka.ms/website-monitor-cpu-time-vs-cpu-percentage (CPU time vs CPU percentage).|Instance|
|CurrentAssemblies|Yes|Current Assemblies|Count|Average|The current number of Assemblies loaded across all AppDomains in this application.|Instance| |FileSystemUsage|Yes|File System Usage|Bytes|Average|Percentage of filesystem quota consumed by the app.|No Dimensions| |FunctionExecutionCount|Yes|Function Execution Count|Count|Total|Function Execution Count|Instance|
This latest update adds a new column and reorders the metrics to be alphabetical
|PrivateBytes|Yes|Private Bytes|Bytes|Average|Private Bytes is the current size, in bytes, of memory that the app process has allocated that can't be shared with other processes.|Instance| |Requests|Yes|Requests|Count|Total|The total number of requests regardless of their resulting HTTP status code.|Instance| |RequestsInApplicationQueue|Yes|Requests In Application Queue|Count|Average|The number of requests in the application request queue.|Instance|
+|ScmCpuTime|Yes|ScmCpuTime|Seconds|Total|ScmCpuTime|Instance|
+|ScmPrivateBytes|Yes|ScmPrivateBytes|Bytes|Average|ScmPrivateBytes|Instance|
|Threads|Yes|Thread Count|Count|Average|The number of threads currently active in the app process.|Instance| |TotalAppDomains|Yes|Total App Domains|Count|Average|The current number of AppDomains loaded in this application.|Instance| |TotalAppDomainsUnloaded|Yes|Total App Domains Unloaded|Count|Average|The total number of AppDomains unloaded since the start of the application.|Instance|
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| |||||||| |BytesSent|Yes|Data Out|Bytes|Total|BytesSent|Instance|
+|CdnPercentageOf4XX|Yes|CdnPercentageOf4XX|Percent|Total|CdnPercentageOf4XX|Instance|
+|CdnPercentageOf5XX|Yes|CdnPercentageOf5XX|Percent|Total|CdnPercentageOf5XX|Instance|
+|CdnRequestCount|Yes|CdnRequestCount|Count|Total|CdnRequestCount|Instance|
+|CdnResponseSize|Yes|CdnResponseSize|Bytes|Total|CdnResponseSize|Instance|
+|CdnTotalLatency|Yes|CdnTotalLatency|Seconds|Total|CdnTotalLatency|Instance|
|FunctionErrors|Yes|FunctionErrors|Count|Total|FunctionErrors|Instance| |FunctionHits|Yes|FunctionHits|Count|Total|FunctionHits|Instance| |SiteErrors|Yes|SiteErrors|Count|Total|SiteErrors|Instance|
This latest update adds a new column and reorders the metrics to be alphabetical
|TotalMigratedDataInBytes|Yes|Total Migrated Data in Bytes|Bytes|Total|This provides a view of the successfully migrated Bytes for a given migrator|No Dimensions|
|TotalTransactions|Yes|Total Transactions|Count|Total|This provides a running total of the Data Transactions for which the user could be billed.|No Dimensions|

## Next steps

- [Read about metrics in Azure Monitor](../data-platform.md)
azure-monitor Resource Logs Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-categories.md
Title: Supported categories for Azure Monitor resource logs description: Understand the supported services and event schemas for Azure Monitor resource logs. Previously updated : 10/05/2021 Last updated : 02/08/2022
Some categories might be supported only for specific types of resources. See the
If you think something is missing, you can open a GitHub comment at the bottom of this article.
-## Microsoft.AAD/DomainServices
+## Microsoft.AAD/domainServices
|Category|Category Display Name|Costs To Export| ||||
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| |||| |GatewayLogs|Logs related to ApiManagement Gateway|No|
-|WebSocketConnectionLogs|Logs related to Websocket Connections|Yes|
## Microsoft.AppConfiguration/configurationStores
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| ||||
-|AuditEvent|AuditEvent|Yes|
-|DscNodeStatus|DscNodeStatus|No|
-|JobLogs|JobLogs|No|
-|JobStreams|JobStreams|No|
+|DscNodeStatus|Dsc Node Status|No|
+|JobLogs|Job Logs|No|
+|JobStreams|Job Streams|No|
## Microsoft.AutonomousDevelopmentPlatform/accounts
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| ||||
+|cloud-controller-manager|Kubernetes Cloud Controller Manager|Yes|
|cluster-autoscaler|Kubernetes Cluster Autoscaler|No|
-|guard|guard|No|
+|guard|Kubernetes Guard|No|
|kube-apiserver|Kubernetes API Server|No| |kube-audit|Kubernetes Audit|No| |kube-audit-admin|Kubernetes Audit Admin Logs|No|
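Which of the categories above are actually collected is controlled by a diagnostic setting on the resource. A minimal sketch of a diagnostic-settings request body that routes selected AKS categories to a Log Analytics workspace (the workspace resource ID below is a placeholder assumption):

```python
import json

# Sketch of a diagnostic-settings body; category names come from the
# table above, the workspaceId is a placeholder for illustration.
body = {
    "properties": {
        "workspaceId": (
            "/subscriptions/sub-id/resourceGroups/rg/providers"
            "/Microsoft.OperationalInsights/workspaces/myworkspace"
        ),
        "logs": [
            {"category": "kube-audit", "enabled": True},
            {"category": "guard", "enabled": True},
            {"category": "cluster-autoscaler", "enabled": False},
        ],
    }
}

payload = json.dumps(body, indent=2)
```

Each entry in `logs` toggles one category from the table; categories marked "Costs To Export: Yes" may incur export charges when enabled.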
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| |||| |ActivityRuns|Pipeline activity runs log|No|
+|AirflowDagProcessingLogs|Airflow dag processing logs|Yes|
+|AirflowSchedulerLogs|Airflow scheduler logs|Yes|
+|AirflowTaskLogs|Airflow task execution logs|Yes|
+|AirflowWebLogs|Airflow web logs|Yes|
+|AirflowWorkerLogs|Airflow worker logs|Yes|
|PipelineRuns|Pipeline runs log|No| |SandboxActivityRuns|Sandbox Activity runs log|Yes| |SandboxPipelineRuns|Sandbox Pipeline runs log|Yes|
If you think something is missing, you can open a GitHub comment at the bottom o
|ResourceProviderOperation|ResourceProviderOperation|Yes|
-## Microsoft.DocumentDB/DatabaseAccounts
+## Microsoft.DocumentDB/databaseAccounts
|Category|Category Display Name|Costs To Export| ||||
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| ||||
+|ApplicationMetricsLogs|Application Metrics Logs|Yes|
|ArchiveLogs|Archive Logs|No| |AutoScaleLogs|Auto Scale Logs|No| |CustomerManagedKeyUserLogs|Customer Managed Key Logs|No|
If you think something is missing, you can open a GitHub comment at the bottom o
|KafkaCoordinatorLogs|Kafka Coordinator Logs|No| |KafkaUserErrorLogs|Kafka User Error Logs|No| |OperationalLogs|Operational Logs|No|
+|RuntimeAuditLogs|Runtime Audit Logs|Yes|
## microsoft.experimentation/experimentWorkspaces
If you think something is missing, you can open a GitHub comment at the bottom o
|AuditLogs|FHIR Audit logs|Yes|
-## microsoft.insights/autoscalesettings
+## Microsoft.Insights/AutoscaleSettings
|Category|Category Display Name|Costs To Export| ||||
If you think something is missing, you can open a GitHub comment at the bottom o
|AppTraces|Traces|No|
-## microsoft.keyvault/managedhsms
+## Microsoft.KeyVault/managedHSMs
|Category|Category Display Name|Costs To Export| ||||
-|AuditEvent|Audit Event|No|
+|AuditEvent|Audit Logs|No|
## Microsoft.KeyVault/vaults
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| |||| |AuditEvent|Audit Logs|No|
-|AzurePolicyEvaluationDetails|Azure Policy Evaluation Details|Yes|
## Microsoft.Kusto/Clusters
If you think something is missing, you can open a GitHub comment at the bottom o
|TableUsageStatistics|Table usage statistics|No|
-## Microsoft.Logic/IntegrationAccounts
+## Microsoft.Logic/integrationAccounts
|Category|Category Display Name|Costs To Export| |||| |IntegrationAccountTrackingEvents|Integration Account track events|No|
-## Microsoft.Logic/Workflows
+## Microsoft.Logic/workflows
|Category|Category Display Name|Costs To Export| ||||
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| |||| |AmlComputeClusterEvent|AmlComputeClusterEvent|No|
-|AmlComputeClusterNodeEvent (deprecated) |AmlComputeClusterNodeEvent|No|
|AmlComputeCpuGpuUtilization|AmlComputeCpuGpuUtilization|No| |AmlComputeJobEvent|AmlComputeJobEvent|No| |AmlRunStatusChangedEvent|AmlRunStatusChangedEvent|No|
-|ComputeInstanceEvent|ComputeInstanceEvent|Yes|
-|DataLabelChangeEvent|DataLabelChangeEvent|Yes|
-|DataLabelReadEvent|DataLabelReadEvent|Yes|
-|DataSetChangeEvent|DataSetChangeEvent|Yes|
-|DataSetReadEvent|DataSetReadEvent|Yes|
-|DataStoreChangeEvent|DataStoreChangeEvent|Yes|
-|DataStoreReadEvent|DataStoreReadEvent|Yes|
-|DeploymentEventACI|DeploymentEventACI|Yes|
-|DeploymentEventAKS|DeploymentEventAKS|Yes|
-|DeploymentReadEvent|DeploymentReadEvent|Yes|
-|EnvironmentChangeEvent|EnvironmentChangeEvent|Yes|
-|EnvironmentReadEvent|EnvironmentReadEvent|Yes|
-|InferencingOperationACI|InferencingOperationACI|Yes|
-|InferencingOperationAKS|InferencingOperationAKS|Yes|
-|ModelsActionEvent|ModelsActionEvent|Yes|
-|ModelsChangeEvent|ModelsChangeEvent|Yes|
-|ModelsReadEvent|ModelsReadEvent|Yes|
-|PipelineChangeEvent|PipelineChangeEvent|Yes|
-|PipelineReadEvent|PipelineReadEvent|Yes|
-|RunEvent|RunEvent|Yes|
-|RunReadEvent|RunReadEvent|Yes|
-
-> [!NOTE]
-> Effective February 2022, the AmlComputeClusterNodeEvent category will be deprecated. We recommend that you instead use the AmlComputeClusterEvent category.
## Microsoft.Media/mediaservices
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| |||| |KeyDeliveryRequests|Key Delivery Requests|No|
-|MediaAccount|Media Account Health Status|Yes|
## Microsoft.Media/videoanalyzers
If you think something is missing, you can open a GitHub comment at the bottom o
|Operational|Operational Logs|Yes|
-## Microsoft.Network/applicationgateways
+## Microsoft.Network/applicationGateways
|Category|Category Display Name|Costs To Export| ||||
If you think something is missing, you can open a GitHub comment at the bottom o
|AzureFirewallNetworkRule|Azure Firewall Network Rule|No|
-## microsoft.network/bastionHosts
+## Microsoft.Network/bastionHosts
|Category|Category Display Name|Costs To Export| ||||
If you think something is missing, you can open a GitHub comment at the bottom o
|NetworkSecurityGroupRuleCounter|Network Security Group Rule Counter|No|
-## microsoft.network/p2svpngateways
+## Microsoft.Network/p2sVpnGateways
|Category|Category Display Name|Costs To Export| ||||
If you think something is missing, you can open a GitHub comment at the bottom o
|ProbeHealthStatusEvents|Traffic Manager Probe Health Results Event|No|
-## microsoft.network/virtualnetworkgateways
+## Microsoft.Network/virtualNetworkGateways
|Category|Category Display Name|Costs To Export| ||||
If you think something is missing, you can open a GitHub comment at the bottom o
|VMProtectionAlerts|VM protection alerts|No|
-## microsoft.network/vpngateways
+## Microsoft.Network/vpnGateways
|Category|Category Display Name|Costs To Export| ||||
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| ||||
-|Audit|Audit|Yes|
+|Audit|Audit Logs|No|
## Microsoft.PowerBI/tenants
If you think something is missing, you can open a GitHub comment at the bottom o
|Engine|Engine|No|
-## microsoft.purview/accounts
+## Microsoft.Purview/accounts
|Category|Category Display Name|Costs To Export| ||||
-|DataSensitivityLogEvent|DataSensitivity|Yes|
-|ScanStatusLogEvent|ScanStatus|No|
-|Security|PurviewAccountAuditEvents|Yes|
+|ScanStatusLogEvent|ScanStatus|Yes|
## Microsoft.RecoveryServices/Vaults
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| |||| |HybridConnectionsEvent|HybridConnections Events|No|
-|HybridConnectionsLogs|HybridConnectionsLogs|No|
## Microsoft.Search/searchServices
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| |||| |OperationalLogs|Operational Logs|No|
+|VNetAndIPFilteringLogs|VNet/IP Filtering Connection Logs|No|
## Microsoft.SignalRService/SignalR
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| ||||
-|ConnectivityLogs|Connectivity logs for Azure Web PubSub Service.|Yes|
-|HttpRequestLogs|Http Request logs for Azure Web PubSub Service.|Yes|
-|MessagingLogs|Messaging logs for Azure Web PubSub Service.|Yes|
+|AllLogs|Azure Web PubSub Service Logs.|Yes|
## microsoft.singularity/accounts
If you think something is missing, you can open a GitHub comment at the bottom o
|BigDataPoolAppsEnded|Big Data Pool Applications Ended|No|
+## Microsoft.Synapse/workspaces/kustoPools
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Command|Command|Yes|
+|FailedIngestion|Failed ingest operations|Yes|
+|IngestionBatching|Ingestion batching|Yes|
+|Query|Query|Yes|
+|SucceededIngestion|Successful ingest operations|Yes|
+|TableDetails|Table details|Yes|
+|TableUsageStatistics|Table usage statistics|Yes|
## Microsoft.Synapse/workspaces/sqlPools

|Category|Category Display Name|Costs To Export|
If you think something is missing, you can open a GitHub comment at the bottom o
|Management|Management|No|
-## Microsoft.Web/hostingEnvironments
+## microsoft.web/hostingenvironments
|Category|Category Display Name|Costs To Export| |||| |AppServiceEnvironmentPlatformLogs|App Service Environment Platform Logs|No|
-## Microsoft.Web/sites
+## microsoft.web/sites
|Category|Category Display Name|Costs To Export| ||||
azure-monitor Solution Targeting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/solution-targeting.md
Last updated 04/27/2017
# Targeting monitoring solutions in Azure Monitor (Preview)
+> [!IMPORTANT]
+> This feature has been deprecated as the Log Analytics agent is being replaced with the Azure Monitor agent, and solutions in Azure Monitor are being replaced with insights. You can continue to use it if you already have it configured, but it's being removed from regions where it isn't already in use. The feature will no longer be supported after August 31, 2024.
+ When you add a monitoring solution to your subscription, it's automatically deployed by default to all Windows and Linux agents connected to your Log Analytics workspace. You may want to manage your costs and limit the amount of data collected for a solution by limiting it to a particular set of agents. This article describes how to use **Solution Targeting**, a feature that lets you apply a scope to your solutions. [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-log-analytics-rebrand.md)]
azure-monitor Data Collector Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-collector-api.md
$SharedKey = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
# Specify the name of the record type that you'll be creating $LogType = "MyRecordType"
-# You can use an optional field to specify the timestamp from the data. If the time field is not specified, Azure Monitor assumes the time is the message ingestion time
+# Optional name of a field that includes the timestamp for the data. If the time field is not specified, Azure Monitor assumes the time is the message ingestion time
$TimeStampField = ""
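The PowerShell variables above feed the request signature for the Data Collector API. As a cross-language sketch of the SharedKey scheme this API uses (the helper name and parameter defaults here are illustrative, not part of the article's sample), the authorization header is an HMAC-SHA256 over a canonical string:

```python
import base64
import hashlib
import hmac

def build_signature(workspace_id, shared_key, content_length, date_rfc1123,
                    method="POST", content_type="application/json",
                    resource="/api/logs"):
    """Build the SharedKey authorization header for the Data Collector API."""
    # Canonical string: verb, content length, content type, x-ms-date, resource
    string_to_sign = (f"{method}\n{content_length}\n{content_type}\n"
                      f"x-ms-date:{date_rfc1123}\n{resource}")
    key = base64.b64decode(shared_key)  # the workspace key is base64-encoded
    digest = hmac.new(key, string_to_sign.encode("utf-8"),
                      hashlib.sha256).digest()
    return f"SharedKey {workspace_id}:{base64.b64encode(digest).decode()}"
```

The resulting value goes in the `Authorization` header, alongside the `Log-Type` header carrying the record type (`$LogType` above) and, optionally, `time-generated-field` carrying `$TimeStampField`.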
azure-resource-manager Bicep Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-resource.md
The possible uses of `list*` are shown in the following table.
| Microsoft.Blockchain/blockchainMembers | [listApiKeys](/rest/api/blockchain/2019-06-01-preview/blockchainmembers/listapikeys) | | Microsoft.Blockchain/blockchainMembers/transactionNodes | [listApiKeys](/rest/api/blockchain/2019-06-01-preview/transactionnodes/listapikeys) | | Microsoft.BotService/botServices/channels | [listChannelWithKeys](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/botservice/resource-manager/Microsoft.BotService/stable/2020-06-02/botservice.json#L553) |
-| Microsoft.Cache/redis | [listKeys](/rest/api/redis/2021-06-01/redis/list-keys) |
+| Microsoft.Cache/redis | [listKeys](/rest/api/redis/redis/list-keys) |
| Microsoft.CognitiveServices/accounts | [listKeys](/rest/api/cognitiveservices/accountmanagement/accounts/listkeys) | | Microsoft.ContainerRegistry/registries | [listBuildSourceUploadUrl](/rest/api/containerregistry/registries%20(tasks)/get-build-source-upload-url) | | Microsoft.ContainerRegistry/registries | [listCredentials](/rest/api/containerregistry/registries/listcredentials) |
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-resource.md
The possible uses of `list*` are shown in the following table.
| Microsoft.Blockchain/blockchainMembers | [listApiKeys](/rest/api/blockchain/2019-06-01-preview/blockchainmembers/listapikeys) | | Microsoft.Blockchain/blockchainMembers/transactionNodes | [listApiKeys](/rest/api/blockchain/2019-06-01-preview/transactionnodes/listapikeys) | | Microsoft.BotService/botServices/channels | [listChannelWithKeys](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/botservice/resource-manager/Microsoft.BotService/stable/2020-06-02/botservice.json#L553) |
-| Microsoft.Cache/redis | [listKeys](/rest/api/redis/2021-06-01/redis/list-keys) |
+| Microsoft.Cache/redis | [listKeys](/rest/api/redis/redis/list-keys) |
| Microsoft.CognitiveServices/accounts | [listKeys](/rest/api/cognitiveservices/accountmanagement/accounts/listkeys) | | Microsoft.ContainerRegistry/registries | [listBuildSourceUploadUrl](/rest/api/containerregistry/registries%20(tasks)/get-build-source-upload-url) | | Microsoft.ContainerRegistry/registries | [listCredentials](/rest/api/containerregistry/registries/listcredentials) |
azure-sql Doc Changes Updates Release Notes Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/doc-changes-updates-release-notes-whats-new.md
The following table lists the features of Azure SQL Database that have transitio
| Feature | GA Month | Details | | | | | | [Azure Active Directory-only authentication](authentication-azure-ad-only-authentication.md) | November 2021 | It's possible to configure your Azure SQL Database to allow authentication only from Azure Active Directory. |
-| [AAD service principal](authentication-aad-service-principal.md) | September 2021 | Azure Active Directory (Azure AD) supports user creation in Azure SQL Database on behalf of Azure AD applications (service principals).|
-| [Audit management operations](../database/auditing-overview.md#auditing-of-microsoft-support-operations) | March 2021 | Azure SQL audit capabilities enable you you to audit operations done by Microsoft support engineers when they need to access your SQL assets during a support request, enabling more transparency in your workforce. |
+| [Azure AD service principal](authentication-aad-service-principal.md) | September 2021 | Azure Active Directory (Azure AD) supports user creation in Azure SQL Database on behalf of Azure AD applications (service principals).|
+| [Audit management operations](../database/auditing-overview.md#auditing-of-microsoft-support-operations) | March 2021 | Azure SQL audit capabilities enable you to audit operations done by Microsoft support engineers when they need to access your SQL assets during a support request, enabling more transparency in your workforce. |
||||
The following table lists the features of Azure SQL Database that have transitioned from preview to general availability (GA).
Learn about significant changes to the Azure SQL Database documentation.
-### November 2021
+### February 2022
| Changes | Details |
| | |
-| **Azure AD-only authentication** | Restricting authentication to your Azure SQL Database only to Azure Active Directory users is now generally available. To learn more, see [Azure AD-only authentication](../database/authentication-azure-ad-only-authentication.md). |
-| | |
+| **Free Azure SQL Database** | Try Azure SQL Database for free using the Azure free account. To learn more, review [Try SQL Database for free](free-sql-db-free-account-how-to-deploy.md).|
+|||
+### 2021
-### October 2021
-
-| Changes | Details |
-| | |
-|**Split what's new** | The previously-combined **What's new** article has been split by product - [What's new in SQL Database](doc-changes-updates-release-notes-whats-new.md) and [What's new in SQL Managed Instance](../managed-instance/doc-changes-updates-release-notes-whats-new.md), making it easier to identify what features are currently in preview, generally available, and significant documentation changes. Additionally, the [Known Issues in SQL Managed Instance](../managed-instance/doc-changes-updates-known-issues.md) content has moved to its own page. |
-
-### September 2021
| Changes | Details |
| | |
+| **Azure AD-only authentication** | Restricting authentication to your Azure SQL Database only to Azure Active Directory users is now generally available. To learn more, see [Azure AD-only authentication](../database/authentication-azure-ad-only-authentication.md). |
+|**Split what's new** | The previously combined **What's new** article has been split by product - [What's new in SQL Database](doc-changes-updates-release-notes-whats-new.md) and [What's new in SQL Managed Instance](../managed-instance/doc-changes-updates-release-notes-whats-new.md), making it easier to identify what features are currently in preview, generally available, and significant documentation changes. Additionally, the [Known Issues in SQL Managed Instance](../managed-instance/doc-changes-updates-known-issues.md) content has moved to its own page. |
| **Maintenance Window support for availability zones** | You can now use the [Maintenance Window feature](maintenance-window.md) if your Azure SQL Database is deployed to an availability zone. This feature is currently in preview. |
-|||
--
-### July 2021
-
-| Changes | Details |
-| | |
| **Azure AD-only authentication** | It's now possible to restrict authentication to your Azure SQL Database to Azure Active Directory users only. This feature is currently in preview. To learn more, see [Azure AD-only authentication](authentication-azure-ad-only-authentication.md). |
-|||
-
-### June 2021
-
-| Changes | Details |
-| | |
| **Query store hints** | It's now possible to use query hints to optimize your query execution via the OPTION clause. This feature is currently in preview. To learn more, see [Query store hints](/sql/relational-databases/performance/query-store-hints?view=azuresqldb-current&preserve-view=true). |
-|||
-
-### May 2021
-
-| Changes | Details |
-| | |
| **Change data capture** | Using change data capture (CDC) with Azure SQL Database is now in preview. To learn more, see [Change data capture](/sql/relational-databases/track-changes/about-change-data-capture-sql-server). |
| **SQL Database ledger** | SQL Database ledger is in preview, and introduces the ability to cryptographically attest to other parties, such as auditors or other business parties, that your data hasn't been tampered with. To learn more, see [Ledger](ledger-overview.md). |
+| **Maintenance window** | The maintenance window feature allows you to configure a maintenance schedule for your Azure SQL Database, currently in preview. To learn more, see [maintenance window](maintenance-window.md).|
+| **SQL insights** | SQL insights is a comprehensive solution for monitoring any product in the Azure SQL family. SQL insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. To learn more, see [SQL insights](../../azure-monitor/insights/sql-insights-overview.md). |
|||
-### March 2021
-| Changes | Details |
-| | |
- | **Maintenance window** | The maintenance window feature allows you to configure a maintenance schedule for your Azure SQL Database, currently in preview. To learn more, see [maintenance window](maintenance-window.md).|
-| **SQL insights** | SQL insights is a comprehensive solution for monitoring any product in the Azure SQL family. SQL insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. To learn more, see [SQL insights](../../azure-monitor/insights/sql-insights-overview.md). |
-|||
## Contribute to content
azure-sql Free Sql Db Free Account How To Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/free-sql-db-free-account-how-to-deploy.md
+
+ Title: Free SQL Database with Azure free account
+description: Guidance on how to deploy an Azure SQL Database for free using an Azure free account.
+ Last updated : 02/25/2022
+# Try Azure SQL Database free with Azure free account
+
+Azure SQL Database is an intelligent, scalable, relational database service built for the cloud. SQL Database is a fully managed platform as a service (PaaS) database engine that handles most database management functions such as upgrading, patching, backups, and monitoring without user involvement.
+
+Using an Azure free account, you can try Azure SQL Database for **free for 12 months** with the following **monthly limit**:
+- **1 S0 database with 10 database transaction units and 250 GB storage**
+
+This article shows you how to create and use an Azure SQL Database for free using an [Azure free account](https://azure.microsoft.com/free/).
++
+## Prerequisites
+
+To try Azure SQL Database for free, you need:
+
+- An Azure free account. If you don't have one, [create a free account](https://azure.microsoft.com/free/) before you begin.
++
+## Create a database
+
+This article uses the Azure portal to create a SQL Database with public access. Alternatively, you can create a SQL Database using [PowerShell, the Azure CLI](./single-database-create-quickstart.md) or an [ARM template](./single-database-create-arm-template-quickstart.md).
+
+To create your database, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure free account.
+1. Search for and select **SQL databases**:
+
+ :::image type="content" source="./media/free-sql-db-free-account-how-to-deploy/search-sql-database.png" alt-text="Screenshot that shows how to search and select SQL database.":::
+
+ Alternatively, you can search for and navigate to **Free Services**, and then select the **Azure SQL Database** tile from the list:
+
+ :::image type="content" source="media/free-sql-db-free-account-how-to-deploy/free-services-sql-database.png" alt-text="Screenshot that shows a list of all free services on the Azure portal.":::
+
+1. Select **Create**.
+1. On the **Basics** tab of the **Create SQL Database** form, under **Project details**, select the free trial Azure **Subscription**.
+1. For **Resource group**, select **Create new**, enter *myResourceGroup*, and select **OK**.
+1. For **Database name**, enter *mySampleDatabase*.
+1. For **Server**, select **Create new**, and fill out the **New server** form with the following values:
+ - **Server name**: Enter *mysqlserver*, and add some characters for uniqueness. We can't provide an exact server name to use because server names must be globally unique for all servers in Azure, not just unique within a subscription. So enter something like mysqlserver12345, and the portal lets you know if it's available or not.
+ - **Server admin login**: Enter *azureuser*.
+ - **Password**: Enter a password that meets complexity requirements, and enter it again in the **Confirm password** field.
+ - **Location**: Select a location from the dropdown list.
+
+ Select **OK**.
+
+1. Leave **Want to use SQL elastic pool** set to **No**.
+1. Under **Compute + storage**, select **Configure database**.
+1. For the free trial, under **Service Tier** select **Standard (For workloads with typical performance requirements)**. Set **DTUs** to **10** and **Data max size (GB)** to **250**, and then select **Apply**.
+
+ :::image type="content" source="media/free-sql-db-free-account-how-to-deploy/configure-database.png" alt-text="Screenshot that shows selecting database service tier.":::
+
+1. Leave **Backup storage redundancy** set to **Geo-redundant backup storage**.
+1. Select **Next: Networking** at the bottom of the page.
+
+ :::image type="content" source="./media/free-sql-db-free-account-how-to-deploy/create-database-basics-tab.png" alt-text="New SQL database - Basic tab":::
+
+1. On the **Networking** tab, for **Connectivity method**, select **Public endpoint**.
+1. For **Firewall rules**, set **Allow Azure services and resources to access this server** to **Yes**, and set **Add current client IP address** to **Yes**.
+1. Leave **Connection policy** set to **Default**.
+1. For **Encrypted Connections**, leave **Minimum TLS version** set to **TLS 1.2**.
+1. Select **Next: Security** at the bottom of the page.
+
+ :::image type="content" source="./media/free-sql-db-free-account-how-to-deploy/create-database-networking-tab.png" alt-text="Networking tab":::
+
+1. Leave the values unchanged on the **Security** tab.
++
+ :::image type="content" source="./media/free-sql-db-free-account-how-to-deploy/create-database-security-tab.png" alt-text="Security tab":::
+
+1. Select **Next: Additional settings** at the bottom of the page.
+1. On the **Additional settings** tab, in the **Data source** section, for **Use existing data**, select **Sample**. This creates an AdventureWorksLT sample database, so there are some tables and data to query and experiment with, instead of an empty database.
+1. Select **Review + create** at the bottom of the page.
+
+ :::image type="content" source="./media/free-sql-db-free-account-how-to-deploy/create-database-additional-settings-tab.png" alt-text="Additional settings":::
+
+1. On the **Review + create** page, after reviewing, select **Create**.
+
+ > [!IMPORTANT]
+ > While creating the SQL Database from your Azure free account, you will still see an **Estimated cost per month** in the **Compute + Storage : Cost Summary** blade and **Review + Create** tab. But, as long as you are using your Azure free account, and your free service usage is within monthly limits, you won't be charged for the service. To view usage information, review [**Monitor and track free services usage**](#monitor-and-track-service-usage) later in this article.
+
+## Query the database
+
+Once your database is created, you can use the **Query editor (preview)** in the Azure portal to connect to the database and query data.
+
+1. In the portal, search for and select **SQL databases**, and then select your database from the list.
+1. On the page for your database, select **Query editor (preview)** in the navigation menu.
+1. Enter your server admin login information, and select **OK**.
+
+ :::image type="content" source="./media/single-database-create-quickstart/query-editor-login.png" alt-text="Sign in to Query editor":::
+
+1. Enter the following query in the **Query editor** pane.
+
+ ```sql
+ SELECT TOP 20 pc.Name as CategoryName, p.name as ProductName
+ FROM SalesLT.ProductCategory pc
+ JOIN SalesLT.Product p
+ ON pc.productcategoryid = p.productcategoryid;
+ ```
+
+1. Select **Run**, and then review the query results in the **Results** pane.
+
+ :::image type="content" source="./media/single-database-create-quickstart/query-editor-results.png" alt-text="Query editor results":::
+
+1. Close the **Query editor** page, and select **OK** when prompted to discard your unsaved edits.
+
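If you prefer a command-line client over the portal's query editor, the same kind of query can be run with the `sqlcmd` utility. This is a hedged sketch: the server name and password are placeholders for the values you chose earlier.

```bash
sqlcmd -S <unique-server-name>.database.windows.net -d mySampleDatabase \
    -U azureuser -P '<password>' \
    -Q "SELECT TOP 20 pc.Name AS CategoryName, p.Name AS ProductName FROM SalesLT.ProductCategory pc JOIN SalesLT.Product p ON pc.ProductCategoryId = p.ProductCategoryId;"
```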
+## Monitor and track service usage
+
+You are not charged for the Azure SQL Database included with your Azure free account unless you exceed the free service limit. To remain within the limit, use the Azure portal to track and monitor your free services usage.
++
+To track usage, follow these steps:
+
+1. In the Azure portal, search for **Subscriptions** and select the free trial subscription.
+
+1. On the **Overview** page, scroll down to see the tile **Top free services by usage**, and then select **View all free services**.
+
+ :::image type="content" source="media/free-sql-db-free-account-how-to-deploy/free-services-usage-overview.png" alt-text="Screenshot that shows the Free Trial subscription overview page and highlights View all free services.":::
+
+1. Locate the meters related to **Azure SQL Database** to track usage.
+
+ :::image type="content" source="media/free-sql-db-free-account-how-to-deploy/free-services-tracking.png" alt-text="Screenshot that shows the View and track usage information blade on Azure portal for all free services.":::
+
+The following table describes the values on the track usage page:
+
+| **Value**| **Description**|
+| - | - |
+|**Meter** | Identifies the unit of measure for the service being consumed. For example, the meter for Azure SQL Database is *SQL Database, Single Standard, S0 DTUs*, which tracks the number of S0 databases used per day, and has a monthly limit of 1. |
| **Usage/limit** | The usage of the meter for the current month, and the limit for the meter. |
+| **Status**| The current status of your usage of the service defined by the meter. The possible values for status are: </br> **Not in use**: You haven't used the meter or the usage for the meter hasn't reached the billing system. </br> **Exceeded on \<Date\>**: You've exceeded the limit for the meter on \<Date\>. </br> **Unlikely to Exceed**: You're unlikely to exceed the limit for the meter. </br>**Exceeds on \<Date\>**: You're likely to exceed the limit for the meter on \<Date\>. |
+| | |
+
+>[!IMPORTANT]
+> - With an Azure free account, you also get $200 in credit to use in 30 days. During this time, any usage of the service beyond the free monthly amount is deducted from this credit.
+> - At the end of your first 30 days or after you spend your $200 credit (whichever comes first), you'll only pay for what you use beyond the free monthly amount of services. To keep getting free services after 30 days, move to pay-as-you-go pricing. If you don't move to pay as you go, you can't purchase Azure services beyond your $200 credit and eventually your account and services will be disabled.
+> - For more information, see [**Azure free account FAQ**](https://azure.microsoft.com/free/free-account-faq/).
+
+## Clean up resources
+
+When you're finished using these resources, you can delete the resource group you created, which will also delete the server and single database within it.
+
+To delete **myResourceGroup** and all its resources using the Azure portal:
+
+1. In the portal, search for and select **Resource groups**, and then select **myResourceGroup** from the list.
+1. On the resource group page, select **Delete resource group**.
+1. Under **Type the resource group name**, enter *myResourceGroup*, and then select **Delete**.
++
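Equivalently, the resource group and everything in it can be removed from the command line; a single sketch call, assuming the group name used earlier:

```azurecli
az group delete --name myResourceGroup --yes
```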
+## Next steps
+
+[Connect and query](connect-query-content-reference-guide.md) your database using different tools and languages:
+> [!div class="nextstepaction"]
+> [Connect and query using SQL Server Management Studio](connect-query-ssms.md)
+>
+> [Connect and query using Azure Data Studio](/sql/azure-data-studio/quickstart-sql-database?toc=/azure/sql-database/toc.json)
+
azure-web-pubsub Quickstart Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-serverless.md
Use the following commands to create these items.
# [JavaScript](#tab/javascript)

```bash
- az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime node --runtime-version 12 --functions-version 3 --name <FUNCIONAPP_NAME> --storage-account <STORAGE_NAME>
+ az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime node --runtime-version 14 --functions-version 3 --name <FUNCTIONAPP_NAME> --storage-account <STORAGE_NAME>
```

# [C#](#tab/csharp)
Use the following commands to create these items.
```bash
az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime dotnet --functions-version 3 --name <FUNCTIONAPP_NAME> --storage-account <STORAGE_NAME>
```
+ > [!NOTE]
+ > If you're running a Functions version other than v3.0, check the [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages) to set the `--runtime` parameter to a supported value.
1. Deploy the function project to Azure:
azure-web-pubsub Tutorial Serverless Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-serverless-notification.md
Use the following commands to create these items.
# [JavaScript](#tab/javascript)

```bash
- az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime node --runtime-version 12 --functions-version 3 --name <FUNCIONAPP_NAME> --storage-account <STORAGE_NAME>
+ az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime node --runtime-version 14 --functions-version 3 --name <FUNCTIONAPP_NAME> --storage-account <STORAGE_NAME>
```

# [C#](#tab/csharp)
Use the following commands to create these items.
```bash
az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime dotnet --functions-version 3 --name <FUNCTIONAPP_NAME> --storage-account <STORAGE_NAME>
```
+ > [!NOTE]
+ > If you're running a Functions version other than v3.0, check the [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages) to set the `--runtime` parameter to a supported value.
1. Deploy the function project to Azure:
backup Backup Azure Vms Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-troubleshoot.md
The backup operation failed because the VM is in Failed state. For a successful
Error code: UserErrorFsFreezeFailed <br/> Error message: Failed to freeze one or more mount-points of the VM to take a file-system consistent snapshot.
+**Step 1**
+ * Unmount the devices for which the file system state wasn't cleaned, using the **umount** command.
+ * Run a file system consistency check on these devices by using the **fsck** command.
+ * Mount the devices again and retry the backup operation.
MountsToSkip = /mnt/resource
SafeFreezeWaitInSeconds=600
```
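The unmount, check, and remount sequence for Step 1 can be sketched as follows; `/dev/sdc1` and `/datadisk` are example names for the affected device and mount point, not values from your VM:

```bash
umount /datadisk          # unmount the mount point whose state wasn't cleaned
fsck -y /dev/sdc1         # run a file system consistency check on the device
mount /dev/sdc1 /datadisk # remount, then retry the backup operation
```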
+**Step 2**
+
+* Check if there are duplicate mount points present.
+
+Identify the mount points that failed to freeze from the extension log file. <br>
+For example, /boot and /usr/sap in the following sample output.
+```
+ 2017-11-02 11:22:56 Thawing: /boot
+ 2017-11-02 11:22:56 Failed to FITHAW: /boot
+ 2017-11-02 11:22:56 Thawing: /sapshare
+ 2017-11-02 11:22:56 Thawing: /usr/sap
+ 2017-11-02 11:22:56 Failed to FITHAW: /usr/sap
+```
+
+On the Linux VM, run the `mount` command and check whether the failed mount points have multiple entries. If they do, remove the old entries or rename the mount path, and then retry the backup operation.
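As a quick sketch of that check, the following lists any mount point that appears more than once in the current mount table; an empty result means no duplicates were found:

```shell
# Print mount points that occur more than once in the mount table
DUPES=$(mount | awk '{print $3}' | sort | uniq -d)
if [ -n "$DUPES" ]; then
  printf 'Duplicate mount points:\n%s\n' "$DUPES"
else
  echo "No duplicate mount points found."
fi
```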
++
### ExtensionSnapshotFailedCOM / ExtensionInstallationFailedCOM / ExtensionInstallationFailedMDTC - Extension installation/operation failed due to a COM+ error

Error code: ExtensionSnapshotFailedCOM <br/>
bastion Bastion Nsg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-nsg.md
Azure Bastion is deployed specifically to ***AzureBastionSubnet***.
* **Ingress Traffic:**
- * **Ingress Traffic from public internet:** The Azure Bastion will create a public IP that needs port 443 enabled on the public IP for ingress traffic. Port 3389/22 are NOT required to be opened on the AzureBastionSubnet.
+ * **Ingress Traffic from public internet:** The Azure Bastion will create a public IP that needs port 443 enabled on the public IP for ingress traffic. Ports 3389/22 are NOT required to be opened on the AzureBastionSubnet. Note that the source can be either the Internet or a set of public IP addresses that you specify.
* **Ingress Traffic from Azure Bastion control plane:** For control plane connectivity, enable port 443 inbound from **GatewayManager** service tag. This enables the control plane, that is, Gateway Manager to be able to talk to Azure Bastion.
* **Ingress Traffic from Azure Bastion data plane:** For data plane communication between the underlying components of Azure Bastion, enable ports 8080, 5701 inbound from the **VirtualNetwork** service tag to the **VirtualNetwork** service tag. This enables the components of Azure Bastion to talk to each other.
* **Ingress Traffic from Azure Load Balancer:** For health probes, enable port 443 inbound from the **AzureLoadBalancer** service tag. This enables Azure Load Balancer to detect connectivity.
batch Virtual File Mount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/virtual-file-mount.md
You can mount an Azure file share on a Batch pool using [Azure PowerShell](/powe
1. Create a Batch pool with the following settings. Replace the sample values with your own information as needed. ```powershell
- $fileShareConfig = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSAzureFileShareConfiguration" -ArgumentList @("<Storage-Account-name>", https://<Storage-Account-name>.file.core.windows.net/batchfileshare1, "S", "Storage-Account-key")
+ $fileShareConfig = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSAzureFileShareConfiguration" -ArgumentList @("<Storage-Account-name>", "https://<Storage-Account-name>.file.core.windows.net/batchfileshare1", "S", "Storage-Account-key")
$mountConfig = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSMountConfiguration" -ArgumentList @($fileShareConfig)
You can mount an Azure file share on a Batch pool using [Azure PowerShell](/powe
$configuration = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSVirtualMachineConfiguration" -ArgumentList @($imageReference, "batch.node.windows amd64")
- New-AzBatchPool -Id "<Pool-Name>" -VirtualMachineSize "STANDARD_D2_V2" -VirtualMachineConfiguration $configuration -TargetDedicatedComputeNodes 1 -MountConfiguration @($mountConfig) -BatchContext $Context
+ New-AzBatchPool -Id "<Pool-Name>" -VirtualMachineSize "STANDARD_D2_V2" -VirtualMachineConfiguration $configuration -TargetDedicatedComputeNodes 1 -MountConfiguration @($mountConfig) -BatchContext $context
``` 1. Access the mount files using your drive's direct path. For example:
When using virtual file mounts with [Azure Batch pools in a virtual network](bat
- Learn more about mounting an Azure Files share with [Windows](../storage/files/storage-how-to-use-files-windows.md) or [Linux](../storage/files/storage-how-to-use-files-linux.md).
- Learn about using and mounting [blobfuse](https://github.com/Azure/azure-storage-fuse) virtual file systems.
- See [Network File System overview](/windows-server/storage/nfs/nfs-overview) to learn about NFS and its applications.
-- See [Microsoft SMB protocol and CIFS protocol overview](/windows/desktop/fileio/microsoft-smb-protocol-and-cifs-protocol-overview) to learn more about CIFS.
+- See [Microsoft SMB protocol and CIFS protocol overview](/windows/desktop/fileio/microsoft-smb-protocol-and-cifs-protocol-overview) to learn more about CIFS.
certification How To Test Pnp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/how-to-test-pnp.md
+
+ Title: Test your IoT Plug and Play device with Azure CLI
+description: A guide on how to test IoT Plug and Play device with the Azure CLI in preparation for certification.
+ Last updated : 01/28/2022
+# How to test IoT Plug and Play devices
+
+The IoT Plug and Play device certification program includes tools to check that a device meets the IoT Plug and Play certification requirements. The tools also help organizations to drive awareness of the availability of their IoT Plug and Play devices. These certified devices are tailored for IoT solutions and help to reduce time to market.
+
+This article shows you how to:
+
+- Install the Azure IoT command-line tool extension for the Azure CLI
+- Run the IoT Plug and Play tests to validate your device application during the development phase
+
+> [!NOTE]
+> A full walkthrough of the certification process can be found in the [Azure Certified Device certification tutorial](tutorial-00-selecting-your-certification.md).
+
+## Prepare your device
+
+The application code that runs on your IoT Plug and Play device must:
+
+- Connect to Azure IoT Hub using the [Device Provisioning Service (DPS)](../iot-dps/about-iot-dps.md).
+- Follow the [IoT Plug and Play conventions](../iot-develop/concepts-developer-guide-device.md) to implement telemetry, properties, and commands.
+
+The application is software that's installed separately from the operating system or is bundled with the operating system in a firmware image that's flashed to the device.
+
+Prior to certifying your device through the IoT Plug and Play certification process, validate locally that the device implementation matches the telemetry, properties, and commands defined in the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl) device model before submitting it to the [Azure IoT Public Model Repository](../iot-develop/concepts-model-repository.md).
+
+To meet the certification requirements, your device must:
+
+- Connect to Azure IoT Hub using [DPS](../iot-dps/about-iot-dps.md).
+- Implement telemetry, properties, or commands following the IoT Plug and Play convention.
+- Describe the device interactions with a [DTDL v2](https://aka.ms/dtdl) model.
+- Send the model ID during [DPS registration](../iot-develop/concepts-developer-guide-device.md#dps-payload) in the DPS provisioning payload.
+- Announce the model ID during the [MQTT connection](../iot-develop/concepts-developer-guide-device.md#model-id-announcement).
+
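For example, the model ID travels in the DPS registration request body described above; this is an illustrative fragment, and the `modelId` value is only a sample, not a required model:

```json
{
  "registrationId": "<your-registration-id>",
  "payload": {
    "modelId": "dtmi:com:example:TemperatureController;1"
  }
}
```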
+## Test with the Azure IoT Extension CLI
+
+The [Azure IoT CLI extension](/cli/azure/ext/azure-iot/iot/product?view=azure-cli-latest) lets you validate that the device implementation matches the model before you submit the device for certification through the Azure Certified Device portal.
+
+The following steps show you how to prepare for and run the certification tests using the CLI:
+
+### Install the Azure IoT extension for the Azure CLI
+
+Install the [Azure CLI](/cli/azure/install-azure-cli), and review the [Azure IoT command reference](/cli/azure/iot?view=azure-cli-latest) to set up the Azure CLI in your environment.
+
+To install the Azure IoT Extension, run the following command:
+
+```azurecli
+az extension add --name azure-iot
+```
+
+To learn more, see [Azure CLI for Azure IoT](/cli/azure/iot/product?view=azure-cli-latest).
+
+### Create a new product test
+
+The following command creates a test using DPS with a symmetric key attestation method:
+
+- Creates a new product to test, and generates a test configuration. The output displays the DPS information that the device must use for provisioning: the primary key, device ID, and ID Scope.
+- Specifies the folder with the DTDL files describing your model.
+
+```azurecli
+az iot product test create --badge-type Pnp --at SymmetricKey --device-type FinishedProduct --models {local folder name}
+```
+
+The JSON output from the command contains the `primaryKey`, `registrationId`, and `scopeId` to use when you connect your device.
+
+Expected output:
+
+```json
+"deviceType": "FinishedProduct",
+"id": "d45d53d9-656d-4be7-9bbf-140bc87e98dc",
+"provisioningConfiguration": {
+ "symmetricKeyEnrollmentInformation": {
+ "primaryKey":"Ci/Ghpqp0n4m8FD5PTicr6dEikIfM3AtVWzAhObU7yc=",
+ "registrationId": "d45d53d9-656d-4be7-9bbf-140bc87e98dc",
+ "scopeId": "0ne000FFA42"
+ }
+}
+```
+
+### Connect your device
+
+Use the DPS information output by the previous command to connect your device to the test IoT Hub instance.
+
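Device SDKs normally build the DPS credentials for you, but as a sketch of what symmetric-key attestation amounts to, the following derives the SAS token a device presents when registering with DPS, using the sample values from the test output above (the one-hour expiry is an arbitrary choice):

```shell
# Build a DPS registration SAS token from the sample symmetric-key values
SCOPE_ID="0ne000FFA42"
REG_ID="d45d53d9-656d-4be7-9bbf-140bc87e98dc"
PRIMARY_KEY="Ci/Ghpqp0n4m8FD5PTicr6dEikIfM3AtVWzAhObU7yc="

RESOURCE="${SCOPE_ID}%2fregistrations%2f${REG_ID}"
EXPIRY=$(( $(date +%s) + 3600 ))   # token valid for one hour

# HMAC-SHA256 over "<url-encoded-resource>\n<expiry>" with the base64-decoded key
HEXKEY=$(printf '%s' "$PRIMARY_KEY" | base64 -d | od -An -tx1 | tr -d ' \n')
SIG=$(printf '%s\n%s' "$RESOURCE" "$EXPIRY" \
  | openssl dgst -sha256 -mac HMAC -macopt "hexkey:$HEXKEY" -binary | base64)
# URL-encode the characters base64 can emit (+, /, =)
SIG=$(printf '%s' "$SIG" | sed -e 's/+/%2B/g' -e 's|/|%2F|g' -e 's/=/%3D/g')

TOKEN="SharedAccessSignature sr=${RESOURCE}&sig=${SIG}&se=${EXPIRY}&skn=registration"
echo "$TOKEN"
```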
+### Manage and configure the product tests
+
+When the device is connected and ready to interact with the IoT hub, generate a product test configuration file. To create the file:
+
+- Use the test `id` from the output of the previous command.
+- Use the `--wait` parameter to get the test case.
+
+```azurecli
+az iot product test task create --type GenerateTestCases --test-id [YourTestId] --wait
+```
+
+Expected output:
+
+```json
+{
+ "deviceTestId": "d45d53d9-656d-4be7-9bbf-140bc87e98dc",
+ "error": null,
+ "id": "526da38e-91fc-4e20-a761-4f04b392c42b",
+ "resultLink": "/deviceTests/d45d53d9-656d-4be7-9bbf-140bc87e98dc/TestCases",
+ "status": "Completed",
+ "type": "GenerateTestCases"
+}
+```
+
+You can use the `az iot product test case update` command to modify the test configuration file.
+
+### Run the tests
+
+After you generate the test configuration, the next step is to run the tests. Use the same `deviceTestId` from the previous commands as a parameter to run the tests. Check the test results to make sure that all tests have a status of `Passed`.
+
+```azurecli
+az iot product test task create --type QueueTestRun --test-id [YourTestId] --wait
+```
+
+Example test run output:
+
+```json
+ "validationTasks": [
+ {
+ "componentName": "Default component",
+ "endTime": "2020-08-25T05:18:49.5224772+00:00",
+ "interfaceId": "dtmi:com:example:TemperatureController;1",
+ "logs": [
+ {
+ "message": "Waiting for telemetry from the device",
+ "time": "2020-08-25T05:18:37.3862586+00:00"
+ },
+ {
+ "message": "Validating PnP properties",
+ "time": "2020-08-25T05:18:37.3875168+00:00"
+ },
+ {
+ "message": "Validating PnP commands",
+ "time": "2020-08-25T05:18:37.3894343+00:00"
+ },
+ {
+ "message": "{\"propertyName\":\"serialNumber\",\"expectedSchemaType\":null,\"actualSchemaType\":null,\"message\":\"Property is successfully validated\",\"passed\":true,\"time\":\"2020-08-25T05:18:37.4205985+00:00\"}",
+ "time": "2020-08-25T05:18:37.4205985+00:00"
+ },
+ {
+ "message": "PnP interface properties validation passed",
+ "time": "2020-08-25T05:18:37.4206964+00:00"
+ },
+```
+
+## Test using the Azure Certified Device portal
+
+The following steps show you how to use the [Azure Certified Device portal](https://certify.azure.com) to onboard, register product details, submit a getting started guide, and run the certification tests.
+
+### Onboarding
+
+To use the [certification portal](https://certify.azure.com), you must sign in with an Azure Active Directory account from your work or school tenant.
+
+To publish the models to the Azure IoT Public Model Repository, your account must be a member of the [Microsoft Partner Network](https://partner.microsoft.com). The system checks that the Microsoft Partner Network ID exists and the account is fully vetted before publishing to the device catalog.
+
+### Company profile
+
+You can manage your company profile from the left navigation menu. The company profile includes the company URL, email address, and company logo. The program agreement must be accepted on this page before you run any certification operations.
+
+The company profile information is used in the device description showcased in the device catalog.
+
+### Create new project
+
+To certify a device, you must first create a new project.
+
+Navigate to the [certification portal](https://certify.azure.com). On the **Projects** page, select *+ Create new project*. Then enter a name for the project, the device name, and select a device class.
+
+The product information you provide during the certification process falls into four categories:
+
+- Device information. Collects information about the device such as its name, description, certifications, and operating system.
+- The **Get started** guide. You must submit the guide as a PDF document to be approved by the system administrator before publishing the device.
+- Marketing details. Provide customer-ready marketing information for your device. The marketing information includes a description, a photo, and distributors.
+- Additional industry certifications. This optional section lets you provide additional information about any other certifications the device holds.
+
+### Connect and test
+
+The connect and test step checks that your device meets the IoT Plug and Play certification requirements.
+
+There are three steps to be completed:
+
+1. Connect and discover interfaces. The device must connect to the Azure IoT certification service through DPS. Choose the authentication method (X.509 certificate, symmetric keys, or trusted platform module) to use and update the device application with the DPS information.
+1. Review interfaces. Review the interface and make sure each one has payload inputs that make sense for testing.
+1. Test. The system tests each device model to check that the telemetry, properties, and commands described in the model follow the IoT Plug and Play conventions. When the test is complete, select the **view logs** link to see the telemetry from the device and the raw data sent to IoT Hub device twin properties.
+
+### Submit and publish
+
+The final required stage is to submit the project for review. This step notifies an Azure Certified Device team member to review your project for completeness, including the device and marketing details, and the get started guide. Before approval, a team member may contact you at the company email address you previously provided with questions or edit requests.
+
+If your device requires further manual validation as part of certification, you'll receive a notice at this time.
+
+When a device is certified, you can choose to publish your product details to the Azure Certified Device Catalog using the **Publish to catalog** feature in the product summary page.
+
+## Next steps
+
+Now that the device submission is complete, you can contact the device certification team at [iotcert@microsoft.com](mailto:iotcert@microsoft.com) to continue to the next steps, which include Microsoft Partner Network membership validation and a review of the getting started guides. When all the requirements are satisfied, you can choose to have your device included in the [Certified for Azure IoT device catalog](https://devicecatalog.azure.com).
cognitive-services Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/sovereign-clouds.md
+
+ Title: "Translator: sovereign clouds"
+
+description: Using Translator in sovereign clouds
++++++ Last updated : 02/24/2022+++
+# Translator in sovereign (national) clouds
+
+ Azure sovereign clouds are isolated in-country platforms with independent authentication, storage, and compliance requirements. Sovereign clouds are often used within geographical boundaries where there's a strict data residency requirement. Translator is currently deployed in the following sovereign clouds:
+
+|Cloud | Region identifier |
+||--|
+| [Azure US Government](../../azure-government/documentation-government-welcome.md)|<ul><li>`usgovarizona` (US Gov Arizona)</li><li>`usgovvirginia` (US Gov Virginia)</li></ul>|
+| [Azure China 21 Vianet](/azure/china/overview-operations) |<ul><li>`chinaeast2` (East China 2)</li><li>`chinanorth` (China North)</li></ul>|
+
+## Azure portal endpoints
+
+The following table lists the base URLs for Azure sovereign cloud endpoints:
+
+| Sovereign cloud | Azure portal endpoint |
+| | -- |
+| Azure portal for US Government | `https://portal.azure.us` |
+| Azure portal China operated by 21 Vianet | `https://portal.azure.cn` |
+
+## Translator: sovereign clouds
+
+### [Azure US Government](#tab/us)
+
+ The Azure Government cloud is available to US government customers and their partners. US federal, state, local, and tribal governments and their partners have access to the Azure Government cloud dedicated instance. Cloud operations are controlled by screened US citizens.
+
+| Azure US Government | Availability and support |
+|--|--|
+|Azure portal | <ul><li>[Azure Government Portal](https://portal.azure.us/)</li></ul>|
+| Available regions</br></br>The region-identifier is a required header when using Translator for the government cloud. | <ul><li>`usgovarizona` </li><li> `usgovvirginia`</li></ul>|
+|Available pricing tiers|<ul><li>Free (F0) and Standard (S0). See [Translator pricing](https://azure.microsoft.com/pricing/details/cognitive-services/translator/)</li></ul>|
+|Supported Features | <ul><li>Text Translation</li><li>Document Translation</li><li>Custom Translation</li></ul>|
+|Supported Languages| <ul><li>[Translator language support](language-support.md)</li></ul>|
+
+<!-- markdownlint-disable MD036 -->
+
+### Endpoint
+
+#### Azure portal
+
+Base URL:
+
+```http
+https://portal.azure.us
+```
+
+#### Authorization token
+
+Replace the `<region-identifier>` parameter with the sovereign cloud identifier:
+
+|Cloud | Region identifier |
+||--|
+| Azure US Government|<ul><li>`usgovarizona` (US Gov Arizona)</li><li>`usgovvirginia` (US Gov Virginia)</li></ul>|
+
+```http
+https://<region-identifier>.api.cognitive.microsoft.us/sts/v1.0/issueToken
+```
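The token endpoint follows the same pattern in both sovereign clouds, differing only in the regional domain suffix. A minimal sketch of building it from a region identifier follows; the helper name and `cloud` parameter are illustrative, not part of any SDK.

```python
def issue_token_url(region, cloud="us"):
    """Build the issueToken endpoint for a sovereign-cloud region identifier."""
    domains = {
        "us": "api.cognitive.microsoft.us",  # Azure US Government
        "cn": "api.cognitive.azure.cn",      # Azure China 21 Vianet
    }
    return "https://{0}.{1}/sts/v1.0/issueToken".format(region, domains[cloud])

print(issue_token_url("usgovvirginia"))
# https://usgovvirginia.api.cognitive.microsoft.us/sts/v1.0/issueToken
```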
+
+#### Text translation
+
+```http
+https://api.cognitive.microsofttranslator.us/
+```
+
+#### Document Translation custom endpoint
+
+Replace the `<your-custom-domain>` parameter with your [custom domain endpoint](document-translation/get-started-with-document-translation.md#what-is-the-custom-domain-endpoint).
+
+```http
+https://<your-custom-domain>.cognitiveservices.azure.us/
+```
+
+#### Custom Translator portal
+
+```http
+https://portal.customtranslator.azure.us/
+```
+
+### Example API translation request
+
+Translate a single sentence from English to Simplified Chinese.
+
+**Request**
+
+```curl
+curl -X POST "https://api.cognitive.microsofttranslator.us/translate?api-version=3.0&from=en&to=zh-Hans" -H "Ocp-Apim-Subscription-Key: <subscription key>" -H "Ocp-Apim-Subscription-Region: usgovvirginia" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'Hello, what is your name?'}]"
+```
+
+**Response body**
+
+```JSON
+[
+    {
+        "translations":[
+            {"text": "你好, 你叫什么名字?", "to": "zh-Hans"}
+        ]
+    }
+]
+```
+
+> [!div class="nextstepaction"]
+> [Azure Government: Translator text reference](reference/rest-api-guide.md)
+
+### [Azure China 21 Vianet](#tab/china)
+
+The Azure China cloud is a physical and logical network-isolated instance of cloud services located in China. To apply for an Azure China account, you need a Chinese legal entity, an Internet Content Provider (ICP) license, and a physical presence within China.
+
+|Azure China 21 Vianet | Availability and support |
+|||
+|Azure portal |<ul><li>[Azure China 21 Vianet Portal](https://portal.azure.cn/)</li></ul>|
+|Regions <br></br>The region-identifier is a required header when using a multi-service resource. | <ul><li>`chinanorth` </li><li> `chinaeast2`</li></ul>|
+|Supported Feature|<ul><li>[Text Translation](https://docs.azure.cn/cognitive-services/translator/reference/v3-0-reference)</li></ul>|
+|Supported Languages|<ul><li>[Translator language support.](https://docs.azure.cn/cognitive-services/translator/language-support)</li></ul>|
+
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD024 -->
+
+### Endpoint
+
+#### Azure portal
+
+Base URL:
+
+```http
+https://portal.azure.cn
+```
+
+#### Authorization token
+
+Replace the `<region-identifier>` parameter with the sovereign cloud identifier:
+
+```http
+https://<region-identifier>.api.cognitive.azure.cn/sts/v1.0/issueToken
+```
+
+#### Text translation
+
+```http
+https://api.translator.azure.cn/translate
+```
+
+### Example API translation request
+
+Translate a single sentence from English to Simplified Chinese.
+
+**Request**
+
+```curl
+curl -X POST "https://api.translator.azure.cn/translate?api-version=3.0&from=en&to=zh-Hans" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text': 'Hello, what is your name?'}]"
+```
+
+**Response body**
+
+```JSON
+[
+ {
+ "translations":[
+ {"text": "你好, 你叫什么名字?", "to": "zh-Hans"}
+ ]
+ }
+]
+```
+
+> [!div class="nextstepaction"]
+> [Azure China: Translator Text reference](https://docs.azure.cn/cognitive-services/translator/reference/v3-0-reference)
+++
+## Next step
+
+> [!div class="nextstepaction"]
+> [Learn more about Translator](index.yml)
communication-services Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/bring-your-own-storage.md
+
+ Title: Azure Communication Services BYOS overview
+
+description: Learn about the Azure Communication Services BYOS.
+++++ Last updated : 02/25/2021++++
+# Bring your own storage (BYOS) overview
++
+In many applications, end users may want to store their call recording files long term. Common scenarios include compliance, quality assurance, assessment, post-call analysis, training, and coaching. With bring your own storage (BYOS), end users can store their files long term and manage them as needed. The end user remains responsible for legal regulations about storing the data. BYOS simplifies downloading files from Azure Communication Services (ACS) and reduces the number of support requests filed when a customer can't download a recording within 48 hours. Data is transferred securely from Microsoft Azure blob storage to the customer's Azure blob storage.
+Here are a few examples:
+- Contact Center Recording
+- Compliance Recording Scenario
+- Healthcare Virtual Visits Scenario
+- Conference/meeting recordings and so on
+
+BYOS can be easily integrated into any application regardless of the programming language. When you create a call recording resource in the Azure portal, enable the BYOS option and provide the SAS URL for the storage. This simple experience allows developers to meet their needs, scale, and avoid investing time and resources into designing and maintaining a custom solution.
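Because BYOS is configured with a SAS URL, it can be useful to sanity-check the URL before handing it over, for example to confirm it hasn't already expired. Below is a minimal stdlib-only sketch; the account, container, and signature values are made up for illustration.

```python
from datetime import datetime
from urllib.parse import parse_qs, urlsplit

def sas_url_expiry(sas_url):
    """Read the expiry ('se') parameter out of a blob SAS URL."""
    query = parse_qs(urlsplit(sas_url).query)
    expiry = query["se"][0]  # e.g. "2022-03-01T00:00:00Z"
    return datetime.fromisoformat(expiry.replace("Z", "+00:00"))

url = ("https://contoso.blob.core.windows.net/recordings"
       "?sv=2020-08-04&sp=rwl&se=2022-03-01T00:00:00Z&sig=abc123")
print(sas_url_expiry(url).isoformat())
# 2022-03-01T00:00:00+00:00
```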
+
+## Feature highlights
+
+- HIPAA compliant
+
+## Next steps
+- TBD
connectors Connectors Create Api Sqlazure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-sqlazure.md
Title: Connect to SQL Server, Azure SQL Database, or Azure SQL Managed Instance
-description: Automate tasks for SQL databases on premises or in the cloud by using Azure Logic Apps
+description: Automate tasks for SQL databases on premises or in the cloud using Azure Logic Apps.
ms.suite: integration--++ Last updated 03/24/2021 tags: connectors
-# Automate workflows for a SQL database by using Azure Logic Apps
+# Connect to a SQL database from Azure Logic Apps
This article shows how you can access data in your SQL database from inside a logic app with the SQL Server connector. That way, you can automate tasks, processes, or workflows that manage your SQL data and resources by creating logic apps. The SQL Server connector works for [SQL Server](/sql/sql-server/sql-server-technical-documentation) as well as [Azure SQL Database](../azure-sql/database/sql-database-paas-overview.md) and [Azure SQL Managed Instance](../azure-sql/managed-instance/sql-managed-instance-paas-overview.md).
If you're new to logic apps, review [What is Azure Logic Apps](../logic-apps/log
## Prerequisites
-* An Azure subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* An [SQL Server database](/sql/relational-databases/databases/create-a-database), [Azure SQL Database](../azure-sql/database/single-database-create-quickstart.md), or [Azure SQL Managed Instance](../azure-sql/managed-instance/instance-create-quickstart.md).
connectors Connectors Create Api Twilio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-twilio.md
Title: Connect to Twilio from Azure Logic Apps
-description: Automate tasks and workflows that manage global SMS, MMS, and IP messages through your Twilio account by using Azure Logic Apps
+ Title: Connect to Twilio with Azure Logic Apps
+description: Automate tasks and workflows that manage global SMS, MMS, and IP messages through your Twilio account using Azure Logic Apps.
ms.suite: integration--++ Last updated 08/25/2018 tags: connectors
-# Manage messages in Twilio with Azure Logic Apps
+# Connect to Twilio from Azure Logic Apps
With Azure Logic Apps and the Twilio connector, you can create automated tasks and workflows
review [What is Azure Logic Apps?](../logic-apps/logic-apps-overview.md)
## Prerequisites
-* An Azure subscription. If you don't have an Azure subscription,
-[sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have an Azure subscription,
+[sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* From [Twilio](https://www.twilio.com/):
connectors Connectors Google Data Security Privacy Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-google-data-security-privacy-policy.md
Title: Data security and privacy policies for Google connectors
-description: Learn about the impact that Google security and privacy policies have on Google connectors, such as Gmail, in Azure Logic Apps
+description: Learn about the impact that Google security and privacy policies have on Google connectors, such as Gmail, in Azure Logic Apps.
ms.suite: integration
connectors Connectors Integrate Security Operations Create Api Microsoft Graph Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-integrate-security-operations-create-api-microsoft-graph-security.md
Title: Integrate and manage security operations & Microsoft Graph Security
-description: Improve your app's threat protection, detection, and response with Microsoft Graph Security & Azure Logic Apps
+description: Improve your app's threat protection, detection, and response with Microsoft Graph Security & Azure Logic Apps.
ms.suite: integration --++ Last updated 02/21/2020 tags: connectors
To learn more about Microsoft Graph Security, see the [Microsoft Graph Security
## Prerequisites
-* An Azure subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* To use the Microsoft Graph Security connector, you must have *explicitly given* Azure Active Directory (AD) tenant administrator consent, which is part of the [Microsoft Graph Security Authentication requirements](/graph/security-authorization). This consent requires the Microsoft Graph Security connector's application ID and name, which you can also find in the [Azure portal](https://portal.azure.com):
connectors Connectors Native Delay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-delay.md
Title: Delay the next action in workflows
-description: Wait to run the next action in logic app workflows by using the Delay or Delay Until actions in Azure Logic Apps
+description: Wait to run the next action in logic app workflows by using the Delay or Delay Until actions in Azure Logic Apps.
ms.suite: integration--++ Last updated 05/25/2019 tags: connectors
Here are some example ways to use these actions:
## Prerequisites
-* An Azure subscription. If you don't have a subscription, you can [sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have a subscription, you can [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* Basic knowledge about [logic apps](../logic-apps/logic-apps-overview.md). Before you can use an action, your logic app must first start with a trigger. You can use any trigger you want and add other actions before you add a delay action. This topic uses an Office 365 Outlook trigger. If you're new to logic apps, learn [how to create your first logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md).
connectors Connectors Native Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-http.md
Title: Call service endpoints by using HTTP or HTTPS
-description: Send outbound HTTP or HTTPS requests to service endpoints from Azure Logic Apps
+description: Send outbound HTTP or HTTPS requests to service endpoints from Azure Logic Apps.
ms.suite: integration-+ Last updated 09/13/2021 tags: connectors
For information about encryption, security, and authorization for outbound calls
## Prerequisites
-* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* The URL for the target endpoint that you want to call
connectors Connectors Native Recurrence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-recurrence.md
For differences between this trigger and the Sliding Window trigger or for more
## Prerequisites
-* An Azure subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* Basic knowledge about [logic apps](../logic-apps/logic-apps-overview.md). If you're new to logic apps, learn [how to create your first logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md).
connectors Connectors Native Reqres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-reqres.md
Title: Receive and respond to calls by using HTTPS
-description: Handle inbound HTTPS requests from external services by using Azure Logic Apps
+ Title: Receive and respond to HTTPS requests
+description: Handle inbound HTTPS calls from external services using Azure Logic Apps.
ms.suite: integration ms.reviewers: estfan, azla-+ Last updated 08/04/2021 tags: connectors
For more information about security, authorization, and encryption for inbound c
## Prerequisites
-* An Azure account and subscription. If you don't have a subscription, you can [sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have a subscription, you can [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* Basic knowledge about [how to create logic apps](../logic-apps/quickstart-create-first-logic-app-workflow.md). If you're new to logic apps, review [What is Azure Logic Apps](../logic-apps/logic-apps-overview.md)?
connectors Connectors Native Sliding Window https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-sliding-window.md
Title: Schedule tasks to handle contiguous data
-description: Create and run recurring tasks that handle contiguous data by using sliding windows in Azure Logic Apps
+description: Create and run recurring tasks that handle contiguous data by using sliding windows in Azure Logic Apps.
ms.suite: integration-+ Last updated 03/25/2020
For differences between this trigger and the Recurrence trigger or for more info
## Prerequisites
-* An Azure subscription. If you don't have a subscription, you can [sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have a subscription, you can [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* Basic knowledge about [logic apps](../logic-apps/logic-apps-overview.md). If you're new to logic apps, learn [how to create your first logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md).
connectors Connectors Native Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-webhook.md
Title: Wait and respond to events
-description: Automate workflows that trigger, pause, and resume based on events at a service endpoint by using Azure Logic Apps
+description: Automate workflows that trigger, pause, and resume based on events at a service endpoint by using Azure Logic Apps.
ms.suite: integration--++ Last updated 08/27/2020 tags: connectors
For information about encryption, security, and authorization for inbound calls
## Prerequisites
-* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* The URL for an already deployed endpoint or API that supports the webhook subscribe and unsubscribe pattern for [webhook triggers in logic apps](../logic-apps/logic-apps-create-api-app.md#webhook-triggers) or [webhook actions in logic apps](../logic-apps/logic-apps-create-api-app.md#webhook-actions) as appropriate
connectors Connectors Run 3270 Apps Ibm Mainframe Create Api 3270 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-run-3270-apps-ibm-mainframe-create-api-3270.md
ms.suite: integration --++ Last updated 02/03/2021 tags: connectors
that Azure Logic Apps supports.
## Prerequisites
-* An Azure subscription. If you don't have an Azure subscription,
-[sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have an Azure subscription,
+[sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* Basic knowledge about [how to create logic apps](../logic-apps/quickstart-create-first-logic-app-workflow.md)
connectors Connectors Schema Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-schema-migration.md
Title: Migrate apps to latest schema
description: How to migrate logic app workflow JSON definitions to the most recent Workflow Definition Language schema version ms.suite: integration--++ Last updated 08/25/2018
cost-management-billing Change Credit Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/change-credit-card.md
In the Azure portal, you can change your default payment method to a new credit
If you want to delete a credit card, see [Delete an Azure billing payment method](delete-azure-payment-method.md).
-The supported payment methods for Microsoft Azure are credit cards and check/wire transfer. To get approved to pay by check/wire transfer, see [Pay for Azure subscriptions by invoice](pay-by-invoice.md).
+The supported payment methods for Microsoft Azure are credit cards and check/wire transfer. To get approved to pay by check/wire transfer, see [Pay for your Azure subscription by check or wire transfer](pay-by-invoice.md).
With a Microsoft Customer Agreement, your payment methods are associated with billing profiles. Learn how to [check access to a Microsoft Customer Agreement](#check-the-type-of-your-account).
If you have questions or need help, [create a support request](https://go.micro
## Next steps - Learn about [Azure reservations](../reservations/save-compute-costs-reservations.md) to see if they can save you money.-- If you want to a delete credit card, see [Delete an Azure billing payment method](delete-azure-payment-method.md).
+- If you want to delete a credit card, see [Delete an Azure billing payment method](delete-azure-payment-method.md).
data-catalog Data Catalog Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-get-started.md
Title: 'Quickstart: Create an Azure Data Catalog' description: This quickstart describes how to create an Azure Data Catalog using the Azure portal.--++ Previously updated : 05/26/2020 Last updated : 02/25/2022 #Customer intent: As a user, I want to access my company's data all in one place so I can easily build reports or presentations from it. + # Quickstart: Create an Azure Data Catalog via the Azure portal [!INCLUDE [Azure Purview redirect](../../includes/data-catalog-use-purview.md)]
To set up Data Catalog, you must be the owner or co-owner of an Azure subscripti
## Create a data catalog
-You can provision only one data catalog per organization (Azure Active Directory domain). Therefore, if the owner or co-owner of an Azure subscription who belongs to this Azure Active Directory domain has already created a catalog, then you can't create a catalog again even if you have multiple Azure subscriptions. To test whether a data catalog has been created by a user in your Azure Active Directory domain, go to the [Azure Data Catalog home page](http://azuredatacatalog.com) and verify whether you see the catalog. If a catalog has already been created for you, skip the following procedure and go to the next section.
+You can create only one data catalog per organization (Azure Active Directory domain). Therefore, if the owner or co-owner of an Azure subscription who belongs to this Azure Active Directory domain has already created a catalog, then you can't create a catalog again even if you have multiple Azure subscriptions. To test whether a data catalog has been created by a user in your Azure Active Directory domain, go to the [Azure Data Catalog home page](http://azuredatacatalog.com) and verify whether you see the catalog. If a catalog has already been created for you, skip the following procedure and go to the next section.
1. Go to the [Azure portal](https://portal.azure.com) > **Create a resource** and select **Data Catalog**.
- ![Azure Data Catalog create button](media/data-catalog-get-started/data-catalog-create.png)
+ :::image type="content" source="media/data-catalog-get-started/data-catalog-create.png" alt-text="Data catalog resource type with the Create button selected.":::
-2. Specify a **name** for the data catalog, the **subscription** you want to use, the **location** for the catalog, and the **pricing tier**. Then select **Create**.
+1. Specify a **name** for the data catalog, the **subscription** you want to use, the **location** for the catalog, and the **pricing tier**. Then select **Create**.
-3. Go to the [Azure Data Catalog home page](http://azuredatacatalog.com) and click **Publish Data**.
+1. Go to the [Azure Data Catalog home page](http://azuredatacatalog.com) and select **Publish Data**.
- ![Azure Data Catalog--Publish Data button](media/data-catalog-get-started/data-catalog-publish-data.png)
+ :::image type="content" source="media/data-catalog-get-started/data-catalog-publish-data.png" alt-text="On the data catalog homepage, the Publish Data button is selected.":::
You can also get to the Data Catalog home page from the [Data Catalog service page](https://azure.microsoft.com/services/data-catalog) by selecting **Get started**.
- ![Azure Data Catalog--marketing landing page](media/data-catalog-get-started/data-catalog-marketing-landing-page.png)
+ :::image type="content" source="media/data-catalog-get-started/data-catalog-marketing-landing-page.png" alt-text="The data catalog service page, with the blue get started button at the bottom.":::
-4. Go to the **Settings** page.
+1. Go to the **Settings** page.
- ![Azure Data Catalog--provision data catalog](media/data-catalog-get-started/data-catalog-create-azure-data-catalog.png)
+ :::image type="content" source="media/data-catalog-get-started/data-catalog-create-azure-data-catalog.png" alt-text="The data catalog settings page, with several expandable options.":::
-5. Expand **Pricing** and verify your Azure Data Catalog **edition** (Free or Standard).
+1. Expand **Pricing** and verify your Azure Data Catalog **edition** (Free or Standard).
- ![Azure Data Catalog--select edition](media/data-catalog-get-started/data-catalog-create-catalog-select-edition.png)
+ :::image type="content" source="media/data-catalog-get-started/data-catalog-create-catalog-select-edition.png" alt-text="The pricing option expanded with the free edition selected.":::
-6. If you choose *Standard* edition as your pricing tier, you can expand **Security Groups** and enable authorizing Active Directory security groups to access Data Catalog and enable automatic adjustment of billing.
+1. If you choose *Standard* edition as your pricing tier, you can expand **Security Groups** and enable authorizing Active Directory security groups to access Data Catalog and enable automatic adjustment of billing.
- ![Azure Data Catalog Security Groups](media/data-catalog-get-started/data-catalog-standard-security-groups.png)
+ :::image type="content" source="media/data-catalog-get-started/data-catalog-standard-security-groups.png" alt-text="The security groups option expanded with the option to enable authorizing shown.":::
-7. Expand **Catalog Users** and click **Add** to add users for the data catalog. You're automatically added to this group.
+1. Expand **Catalog Users** and select **Add** to add users for the data catalog. You're automatically added to this group.
- ![Azure Data Catalog--users](media/data-catalog-get-started/data-catalog-add-catalog-user.png)
+ :::image type="content" source="media/data-catalog-get-started/data-catalog-add-catalog-user.png" alt-text="Catalog users expanded and the add button highlighted.":::
-8. If you choose *Standard* edition as your pricing tier, you can expand **Glossary Administrators** and click **Add** to add glossary administrator users. You're automatically added to this group.
+1. If you choose *Standard* edition as your pricing tier, you can expand **Glossary Administrators** and select **Add** to add glossary administrator users. You're automatically added to this group.
- ![Azure Data Catalog Glossary Administrators](media/data-catalog-get-started/data-catalog-standard-glossary-admin.png)
+ :::image type="content" source="media/data-catalog-get-started/data-catalog-standard-glossary-admin.png" alt-text="Glossary Administrators expanded and the add button highlighted.":::
-9. Expand **Catalog Administrators** and click **Add** to add additional administrators for the data catalog. You're automatically added to this group.
+1. Expand **Catalog Administrators** and select **Add** to add other administrators for the data catalog. You're automatically added to this group.
- ![Azure Data Catalog--administrators](media/data-catalog-get-started/data-catalog-add-catalog-admins.png)
+ :::image type="content" source="media/data-catalog-get-started/data-catalog-add-catalog-admins.png" alt-text="Catalog Administrators expanded and the add button highlighted.":::
-10. Expand **Portal Title** and add additional text that will be displayed in the portal title.
+1. Expand **Portal Title** and add extra text that will be displayed in the portal title.
- ![Azure Data Catalog-Portal Title](media/data-catalog-get-started/data-catalog-portal-title.png)
+ :::image type="content" source="media/data-catalog-get-started/data-catalog-portal-title.png" alt-text="Portal title expanded, showing the text box where optional text can be added.":::
-11. Once you complete the **Settings** page, next navigate to the **Publish** page.
+1. Once you complete the **Settings** page, navigate to the **Publish** page.
- ![Azure Data Catalog--created](media/data-catalog-get-started/data-catalog-created.png)
+ :::image type="content" source="media/data-catalog-get-started/data-catalog-created.png" alt-text="Data Catalog home page, with the Publish tab selected in the top menu.":::
## Find a data catalog in the Azure portal 1. On a separate tab in the web browser or in a separate web browser window, go to the [Azure portal](https://portal.azure.com) and sign in with the same account that you used to create the data catalog in the previous step.
-2. Select **All services** and then click **Data Catalog**.
-
- ![Azure Data Catalog--browse Azure](media/data-catalog-get-started/data-catalog-browse-azure-portal.png)
-
- You see the data catalog you created.
-
- ![Azure Data Catalog--view catalog in list](media/data-catalog-get-started/data-catalog-azure-portal-show-catalog.png)
+1. Select **All services** and then select **Data Catalog**.
-3. Click the catalog that you created. You see the **Data Catalog** blade in the portal.
+ :::image type="content" source="media/data-catalog-get-started/data-catalog-browse-azure-portal.png" alt-text="The left Azure portal menu is open, with 'all services' selected. In the services menu, Data Catalog is selected.":::
- ![Azure Data Catalog--blade in portal](media/data-catalog-get-started/data-catalog-blade-azure-portal.png)
+ You'll see the data catalog you created in the list. If you don't, check your subscription, resource group, location, and tag filters at the top of the search.
-4. You can view properties of the data catalog and update them. For example, click **Pricing tier** and change the edition.
+1. Select the catalog that you created. You'll see the **Data Catalog** page in the portal, showing details for your Data Catalog.
- ![Azure Data Catalog--pricing tier](media/data-catalog-get-started/data-catalog-change-pricing-tier.png)
+1. You can view properties of the data catalog and update them. For example, you can select **Pricing tier** and change the edition.
## Next steps
data-catalog Data Catalog How To Register https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-register.md
Title: Register data sources in Azure Data Catalog description: This article highlights how to register data sources in Azure Data Catalog, including the metadata fields extracted during registration.--++ Previously updated : 08/01/2019 Last updated : 02/25/2022 + # Register data sources in Azure Data Catalog [!INCLUDE [Azure Purview redirect](../../includes/data-catalog-use-purview.md)] ## Introduction+ Azure Data Catalog is a fully managed cloud service that serves as a system of registration and discovery for enterprise data sources. In other words, Data Catalog helps people discover, understand, and use data sources, and it helps organizations get more value from their existing data. The first step to making a data source discoverable via Data Catalog is to register that data source. ## Register data sources+ Registration is the process of extracting metadata from the data source and copying that data to the Data Catalog service. The data remains where it currently resides, and it remains under the control of the administrators and policies of the current system.
-To register a data source, do the following:
+To register a data source, follow these steps:
1. In the Azure Data Catalog portal, start the Data Catalog data source registration tool. 2. Sign in with your work or school account with the same Azure Active Directory credentials that you use to sign in to the portal. 3. Select the data source you want to register.
For more step-by-step details, see the [Get Started with Azure Data Catalog](dat
After you've registered the data source, the catalog tracks its location and indexes its metadata. Users can search, browse, and discover the data source, and then use its location to connect to it by using the application or tool of their choice. ## Supported data sources+ For a list of currently supported data sources, see [Data Catalog DSR](data-catalog-dsr.md). ## Structural metadata+ When you register a data source, the registration tool extracts information about the structure of the objects you select. This information is referred to as structural metadata. For all objects, this structural metadata includes the object's location, so that users who discover the data can use that information to connect to the object in the client tools of their choice. Other structural metadata includes object name and type, and attribute/column name and data type. ## Descriptive metadata+ In addition to the core structural metadata that's extracted from the data source, the data source registration tool extracts descriptive metadata. For SQL Server Analysis Services and SQL Server Reporting Services, this metadata is taken from the Description properties exposed by these services. For SQL Server, values provided using the ms\_description extended property are extracted. For Oracle Database, the data-source registration tool extracts the COMMENTS column from the ALL\_TAB\_COMMENTS view. In addition to the descriptive metadata that's extracted from the data source, users can enter descriptive metadata by using the data source registration tool. Users can add tags, and they can identify experts for the objects being registered. All this descriptive metadata is copied to the Data Catalog service along with the structural metadata. ## Include previews+ By default, only metadata is extracted from data sources and copied to the Data Catalog service, but understanding a data source is often made easier when you can view a sample of the data it contains.
-By using the Data Catalog data-source registration tool, you can include a snapshot preview of the data in each table and view that is registered. If you choose to include previews during registration, the registration tool includes up to 20 records from each table and view. This snapshot is then copied to the catalog along with the structural and descriptive metadata.
+You can use the Data Catalog data-source registration tool to include a snapshot preview of the data in each table and view that is registered. If you choose to include previews during registration, the registration tool includes up to 20 records from each table and view. This snapshot is then copied to the catalog along with the structural and descriptive metadata.
> [!NOTE] > Wide tables with a large number of columns might have fewer than 20 records included in their preview.
->
->
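The preview behavior described above can be sketched as a simple cap on the number of records kept per table. This is a hypothetical illustration, not the registration tool's actual code:

```python
# Hypothetical sketch (not the actual registration tool code): a preview
# snapshot keeps at most 20 records from each table or view.
MAX_PREVIEW_ROWS = 20

def take_preview(rows):
    """Return up to MAX_PREVIEW_ROWS records to store as the preview."""
    return rows[:MAX_PREVIEW_ROWS]

table_rows = [{"id": i} for i in range(100)]
preview = take_preview(table_rows)
print(len(preview))  # 20
```

Note that, as the caveat above says, very wide tables may end up with fewer than 20 records in practice.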
## Include data profiles+ Just as including previews can provide valuable context for users who search for data sources in Data Catalog, including a data profile can make it easier to understand discovered data sources. By using the Data Catalog data-source registration tool, you can include a data profile for each table and view that is registered. If you choose to include a data profile during registration, the registration tool includes aggregate statistics about the data in each table and view, including:
These statistics are then copied to the catalog along with the structural and de
> [!NOTE] > Text and date columns do not include average or standard deviation statistics in their data profile.
->
->
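The kind of per-column aggregate statistics a data profile contains can be sketched as follows. This is an illustrative approximation, not the tool's implementation; it mirrors the note above that text and date columns don't get average or standard deviation:

```python
# Hypothetical sketch of per-column profile statistics: min, max, and
# count for every column, plus average and standard deviation for
# numeric columns only (text/date columns are excluded from those).
import statistics

def profile_column(values):
    prof = {"min": min(values), "max": max(values), "count": len(values)}
    if all(isinstance(v, (int, float)) for v in values):
        prof["avg"] = statistics.mean(values)
        prof["stdev"] = statistics.pstdev(values)
    return prof

print(profile_column([1, 2, 3, 4]))   # includes avg and stdev
print(profile_column(["a", "b", "c"]))  # min/max/count only
```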
## Update registrations
-Registering a data source makes it discoverable in Data Catalog when you use the metadata and optional preview extracted during registration. If the data source needs to be updated in the catalog (for example, if the schema of an object has changed, tables originally excluded should be included, or you want to update the data that's included in the previews), the data source registration tool can be re-run.
+
+Registering a data source makes it discoverable in Data Catalog when you use the metadata and optional preview extracted during registration. If the data source needs to be updated in the catalog (for example, if the schema of an object has changed, tables originally excluded should be included, or you want to update the data that's included in the previews), the data source registration tool can be rerun.
Re-registering an already-registered data source performs a merge "upsert" operation: existing objects are updated, and new objects are created. Any metadata provided by users through the Data Catalog portal is retained. ## Summary
-Because it copies structural and descriptive metadata from a data source to the catalog service, registering the data source in Data Catalog makes the data easier to discover and understand. After you have registered the data source, you can annotate, manage, and discover it by using the Data Catalog portal.
+
+Because it copies structural and descriptive metadata from a data source to the catalog service, registering the data source in Data Catalog makes the data easier to discover and understand. After you've registered the data source, you can annotate, manage, and discover it by using the Data Catalog portal.
## Next steps+ For more information about registering data sources, see the [Get Started with Azure Data Catalog](data-catalog-get-started.md) tutorial.
data-factory Connect Data Factory To Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connect-data-factory-to-azure-purview.md
Title: Connect a Data Factory to Azure Purview description: Learn about how to connect a Data Factory to Azure Purview--++
data-factory Connector Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-github.md
Title: Connect to GitHub description: Use GitHub to specify your Common Data Model entity references-+ Last updated 09/09/2021-+ # Use GitHub to read Common Data Model entity references
data-factory Connector Google Adwords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-adwords.md
Previously updated : 09/09/2021 Last updated : 02/24/2022 # Copy data from Google AdWords using Azure Data Factory or Synapse Analytics
The following sections provide details about properties that are used to define
## Linked service properties
+> [!Important]
+> Due to the sunset of the Google AdWords API on **April 27, 2022**, the service has upgraded to the new Google Ads API. Refer to this [document](connector-troubleshoot-google-adwords.md#migrate-to-the-new-version-of-google-ads-api) for detailed migration steps and recommendations. Make sure the migration is done before **April 27, 2022**.
+ The following properties are supported for Google AdWords linked service: | Property | Description | Required | |: |: |: | | type | The type property must be set to: **GoogleAdWords** | Yes |
+| connectionProperties | A group of properties that defines how to connect to Google AdWords. | Yes |
+| ***Under `connectionProperties`:*** | | |
| clientCustomerID | The Client customer ID of the AdWords account that you want to fetch report data for. | Yes |
+| loginCustomerID | The customer ID of the Google AdWords manager account through which you want to fetch report data of a specific customer. | No |
| developerToken | The developer token associated with the manager account that you use to grant access to the AdWords API. You can choose to mark this field as a SecureString to store it securely, or store password in Azure Key Vault and let the copy activity pull from there when performing data copy - learn more from [Store credentials in Key Vault](store-credentials-in-key-vault.md). | Yes | | authenticationType | The OAuth 2.0 authentication mechanism used for authentication. ServiceAuthentication can only be used on self-hosted IR. <br/>Allowed values are: **ServiceAuthentication**, **UserAuthentication** | Yes | | refreshToken | The refresh token obtained from Google for authorizing access to AdWords for UserAuthentication. You can choose to mark this field as a SecureString to store it securely, or store password in Azure Key Vault and let the copy activity pull from there when performing data copy - learn more from [Store credentials in Key Vault](store-credentials-in-key-vault.md). | No |
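The key structural change in the new linked service shape is that connection settings nest under `connectionProperties`. As a hypothetical illustration (this helper is not part of the service), a definition could be checked like this:

```python
# Hypothetical helper (not part of Azure Data Factory): verify that a
# Google AdWords linked-service definition nests its settings under
# "connectionProperties" and includes the required keys from the table.
REQUIRED = {"clientCustomerID", "developerToken", "authenticationType"}

def missing_properties(linked_service: dict) -> set:
    conn = linked_service.get("typeProperties", {}).get("connectionProperties", {})
    return REQUIRED - conn.keys()

ls = {
    "type": "GoogleAdWords",
    "typeProperties": {
        "connectionProperties": {
            "clientCustomerID": "123-456-7890",
            "developerToken": {"type": "SecureString", "value": "<token>"},
            "authenticationType": "ServiceAuthentication",
        }
    },
}
print(missing_properties(ls))  # set()
```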
The following properties are supported for Google AdWords linked service:
"properties": { "type": "GoogleAdWords", "typeProperties": {
- "clientCustomerID" : "<clientCustomerID>",
- "developerToken": {
- "type": "SecureString",
- "value": "<developerToken>"
- },
- "authenticationType" : "ServiceAuthentication",
- "refreshToken": {
- "type": "SecureString",
- "value": "<refreshToken>"
- },
- "clientId": {
- "type": "SecureString",
- "value": "<clientId>"
- },
- "clientSecret": {
- "type": "SecureString",
- "value": "<clientSecret>"
- },
- "email" : "<email>",
- "keyFilePath" : "<keyFilePath>",
- "trustedCertPath" : "<trustedCertPath>",
- "useSystemTrustStore" : true,
+ "connectionProperties": {
+ "clientCustomerID": "<clientCustomerID>",
+ "loginCustomerID": "<loginCustomerID>",
+ "developerToken": {
+ "type": "SecureString",
+ "value": "<developerToken>"
+ },
+ "authenticationType": "ServiceAuthentication",
+ "refreshToken": {
+ "type": "SecureString",
+ "value": "<refreshToken>"
+ },
+ "clientId": {
+ "type": "SecureString",
+ "value": "<clientId>"
+ },
+ "clientSecret": {
+ "type": "SecureString",
+ "value": "<clientSecret>"
+ },
+ "email": "<email>",
+ "keyFilePath": "<keyFilePath>",
+ "trustedCertPath": "<trustedCertPath>",
+ "useSystemTrustStore": true
+ }
} } }- ``` ## Dataset properties
data-factory Connector Sap Business Warehouse Open Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-business-warehouse-open-hub.md
Title: Copy data from SAP Business Warehouse via Open Hub description: Learn how to copy data from SAP Business Warehouse (BW) via Open Hub to supported sink data stores by using a copy activity in an Azure Data Factory or Synapse Analytics pipeline.--++
data-factory Connector Sap Business Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-business-warehouse.md
Title: Copy data from SAP BW description: Learn how to copy data from SAP Business Warehouse to supported sink data stores using a copy activity in an Azure Data Factory or Synapse Analytics pipeline.--++
data-factory Connector Sap Cloud For Customer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-cloud-for-customer.md
Title: Copy data from/to SAP Cloud for Customer
description: Learn how to copy data from SAP Cloud for Customer to supported sink data stores (or) from supported source data stores to SAP Cloud for Customer using an Azure Data Factory or Synapse Analytics pipeline. -+
data-factory Connector Sap Ecc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-ecc.md
Title: Copy data from SAP ECC description: Learn how to copy data from SAP ECC to supported sink data stores using a copy activity in an Azure Data Factory or Synapse Analytics pipeline.--++
data-factory Connector Sap Hana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-hana.md
Title: Copy data from SAP HANA description: Learn how to copy data from SAP HANA to supported sink data stores by using a copy activity in an Azure Data Factory or Synapse Analytics pipeline.--++
data-factory Connector Sap Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-table.md
Title: Copy data from an SAP table description: Learn how to copy data from an SAP table to supported sink data stores by using a copy activity in an Azure Data Factory or Azure Synapse Analytics pipeline.--++
data-factory Connector Troubleshoot Google Adwords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-google-adwords.md
+
+ Title: Troubleshoot the Google AdWords connector
+
+description: Learn how to troubleshoot issues with the Google AdWords connector in Azure Data Factory and Azure Synapse Analytics.
++++ Last updated : 02/23/2022++++
+# Troubleshoot the Google AdWords connector in Azure Data Factory and Azure Synapse
++
+This article provides suggestions to troubleshoot common problems with the Google AdWords connector in Azure Data Factory and Azure Synapse.
+
+## Migrate to the new version of Google Ads API
+
+- **Symptoms**
+
+ You see a hint on the Google AdWords linked service configuration page. It reminds you to upgrade your linked service to a newer version before the legacy API is deprecated by Google.
+
+- **Cause**
+
+ Due to the sunset of the Google AdWords API on **April 27, 2022**, we recommend that you migrate your existing linked service to the new version of the Google Ads API before that date. Starting **April 27, 2022**, connections will start to fail because of the deprecation of the Google AdWords API (see this [link](https://ads-developers.googleblog.com/2021/04/upgrade-to-google-ads-api-from-adwords.html)). Migration steps:
+
+ 1. Open your Google AdWords connector linked service configuration page.
+ 2. Edit the linked service and choose the new API version (select **Google Ads**).
+
+ :::image type="content" source="media/connector-troubleshoot-guide/update-google-adwords-linked-service.png" alt-text="Screenshot of updating the linked service configuration for Google AdWords.":::
+
+ 3. Apply the changes.
+
+- **Known issues and recommendations**
+
+ 1. The new Google Ads API doesn't provide a migration plan for the following reports/tables:
+ a. AD_CUSTOMIZERS_FEED_ITEM_REPORT
+ b. CAMPAIGN_GROUP_PERFORMANCE_REPORT
+ c. CAMPAIGN_NEGATIVE_KEYWORDS_PERFORMANCE_REPORT
+ d. CAMPAIGN_NEGATIVE_LOCATIONS_REPORT
+ e. CAMPAIGN_NEGATIVE_PLACEMENTS_PERFORMANCE_REPORT
+ f. CREATIVE_CONVERSION_REPORT
+ g. CRITERIA_PERFORMANCE_REPORT
+ h. FINAL_URL_REPORT
+ i. KEYWORDLESS_CATEGORY_REPORT
+ j. MARKETPLACE_PERFORMANCE_REPORT
+ k. TOP_CONTENT_PERFORMANCE_REPORT
+
+ 2. The syntax for the Google Ads query language is similar to AWQL from the AdWords API, but not identical. Refer to this [document](https://developers.google.com/google-ads/api/docs/migration/querying) for more details.
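Before migrating, it can help to check whether any reports you depend on are in the list above. A minimal, hypothetical pre-migration check (not an official tool):

```python
# Hypothetical pre-migration check: flag reports that the new Google Ads
# API provides no migration plan for, per the list in this article.
NO_MIGRATION_PLAN = {
    "AD_CUSTOMIZERS_FEED_ITEM_REPORT",
    "CAMPAIGN_GROUP_PERFORMANCE_REPORT",
    "CAMPAIGN_NEGATIVE_KEYWORDS_PERFORMANCE_REPORT",
    "CAMPAIGN_NEGATIVE_LOCATIONS_REPORT",
    "CAMPAIGN_NEGATIVE_PLACEMENTS_PERFORMANCE_REPORT",
    "CREATIVE_CONVERSION_REPORT",
    "CRITERIA_PERFORMANCE_REPORT",
    "FINAL_URL_REPORT",
    "KEYWORDLESS_CATEGORY_REPORT",
    "MARKETPLACE_PERFORMANCE_REPORT",
    "TOP_CONTENT_PERFORMANCE_REPORT",
}

def unmigratable(report_names):
    """Return the subset of reports that have no migration path."""
    return sorted(set(report_names) & NO_MIGRATION_PLAN)

print(unmigratable(["FINAL_URL_REPORT", "AD_PERFORMANCE_REPORT"]))
# ['FINAL_URL_REPORT']
```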
++
+## Next steps
+
+For more troubleshooting help, try these resources:
+
+- [Connector troubleshooting guide](connector-troubleshoot-guide.md)
+- [Data Factory blog](https://azure.microsoft.com/blog/tag/azure-data-factory/)
+- [Data Factory feature requests](/answers/topics/azure-data-factory.html)
+- [Azure videos](https://azure.microsoft.com/resources/videos/index/?sort=newest&services=data-factory)
+- [Microsoft Q&A page](/answers/topics/azure-data-factory.html)
+- [Stack Overflow forum for Data Factory](https://stackoverflow.com/questions/tagged/azure-data-factory)
+- [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory)
data-factory Copy Data Tool Metadata Driven https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-data-tool-metadata-driven.md
Previously updated : 06/19/2021 Last updated : 02/25/2022
-# Build large-scale data copy pipelines with metadata-driven approach in copy data tool (Preview)
+# Build large-scale data copy pipelines with metadata-driven approach in copy data tool
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)] When you want to copy huge amounts of objects (for example, thousands of tables) or load data from a large variety of sources, the appropriate approach is to input the name list of the objects with required copy behaviors in a control table, and then use parameterized pipelines to read the same from the control table and apply them to the jobs accordingly. By doing so, you can maintain (for example, add/remove) the objects list to be copied easily by just updating the object names in the control table instead of redeploying the pipelines. What's more, you'll have a single place to easily check which objects are copied by which pipelines/triggers with defined copy behaviors.
Copy data tool in ADF eases the journey of building such metadata driven data co
:::image type="content" source="./media/copy-data-tool-metadata-driven/select-table.png" alt-text="Select table"::: > [!NOTE]
- > If you select tabular data store, you will have chance to further select either full load or incremental load in the next page. If you select storage store, you can further select full load only in the next page. Incrementally loading new files only from storage store is currently not supported.
+ > If you select a tabular data store, you'll have the chance to further select either full load or delta load on the next page. If you select a storage store, you can select full load only on the next page. Incrementally loading only new files from a storage store is currently not supported.
4. Choose **loading behavior**. >[!TIP]
This pipeline will copy one batch of objects. The objects belonging to this batc
| Parameters name | Description | |: |: | | MaxNumberOfObjectsReturnedFromLookupActivity | In order to avoid reaching the limit of the lookup activity output, you can define the max number of objects returned by the lookup activity. In most cases, the default value doesn't need to be changed. |
-| TopLayerPipelineName | The name of top layer pipeline. |
+| TopLevelPipelineName | The name of the top-level pipeline. |
| TriggerName | The name of trigger. | | CurrentSequentialNumberOfBatch | The ID of sequential batch. | | SumOfObjectsToCopy | The total number of objects to copy. |
data-factory How To Access Secured Purview Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-access-secured-purview-account.md
Title: Access a secured Azure Purview account description: Learn about how to access a firewall protected Azure Purview account through private endpoints from Azure Data Factory--++
data-factory How To Create Custom Event Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-custom-event-trigger.md
Event-driven architecture (EDA) is a common data integration pattern that involv
> [!NOTE] > The integration described in this article depends on [Azure Event Grid](https://azure.microsoft.com/services/event-grid/). Make sure that your subscription is registered with the Event Grid resource provider. For more information, see [Resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#azure-portal). You must be able to do the `Microsoft.EventGrid/eventSubscriptions/` action. This action is part of the [EventGrid EventSubscription Contributor](../role-based-access-control/built-in-roles.md#eventgrid-eventsubscription-contributor) built-in role. +
+> [!IMPORTANT]
+> If you're using this feature in Azure Synapse Analytics, ensure that your subscription is also registered with the Data Factory resource provider; otherwise, you'll get an error stating that _the creation of an "Event Subscription" failed_.
++ If you combine pipeline parameters and a custom event trigger, you can parse and reference custom `data` payloads in pipeline runs. Because the `data` field in a custom event payload is a free-form, JSON key-value structure, you can control event-driven pipeline runs. > [!IMPORTANT]
Data Factory doesn't require special permission to your Event Grid. You also do
Specifically, you need `Microsoft.EventGrid/EventSubscriptions/Write` permission on `/subscriptions/####/resourceGroups//####/providers/Microsoft.EventGrid/topics/someTopics`.
+- When authoring in the data factory (in the development environment, for instance), the Azure account signed in needs to have the above permission.
+- When publishing through [CI/CD](continuous-integration-delivery.md), the account used to publish the ARM template into the testing or production factory needs to have the above permission.
+ ## Next steps * Get detailed information about [trigger execution](concepts-pipeline-execution-triggers.md#trigger-execution-with-json).
data-factory How To Create Event Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-event-trigger.md
This article describes the Storage Event Triggers that you can create in your Da
Event-driven architecture (EDA) is a common data integration pattern that involves production, detection, consumption, and reaction to events. Data integration scenarios often require customers to trigger pipelines based on events happening in storage account, such as the arrival or deletion of a file in Azure Blob Storage account. Data Factory and Synapse pipelines natively integrate with [Azure Event Grid](https://azure.microsoft.com/services/event-grid/), which lets you trigger pipelines on such events.
-For a ten-minute introduction and demonstration of this feature, watch the following video:
-
-> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Event-based-data-integration-with-Azure-Data-Factory/player]
- > [!NOTE] > The integration described in this article depends on [Azure Event Grid](https://azure.microsoft.com/services/event-grid/). Make sure that your subscription is registered with the Event Grid resource provider. For more info, see [Resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#azure-portal). You must be able to do the *Microsoft.EventGrid/eventSubscriptions/** action. This action is part of the EventGrid EventSubscription Contributor built-in role.
+> [!IMPORTANT]
+> If you're using this feature in Azure Synapse Analytics, ensure that your subscription is also registered with the Data Factory resource provider; otherwise, you'll get an error stating that _the creation of an "Event Subscription" failed_.
+ > [!NOTE] > If the blob storage account resides behind a [private endpoint](../storage/common/storage-private-endpoints.md) and blocks public network access, you need to configure network rules to allow communications from blob storage to Azure Event Grid. You can either grant storage access to trusted Azure services, such as Event Grid, following [Storage documentation](../storage/common/storage-network-security.md#grant-access-to-trusted-azure-services), or configure private endpoints for Event Grid that map to VNet address space, following [Event Grid documentation](../event-grid/configure-private-endpoints.md)
Any of following RBAC settings works for storage event trigger:
* Contributor role to the storage account * _Microsoft.EventGrid/EventSubscriptions/Write_ permission to storage account _/subscriptions/####/resourceGroups/####/providers/Microsoft.Storage/storageAccounts/storageAccountName_ +
+Specifically,
+
+- When authoring in the data factory (in the development environment, for instance), the Azure account signed in needs to have the above permission.
+- When publishing through [CI/CD](continuous-integration-delivery.md), the account used to publish the ARM template into the testing or production factory needs to have the above permission.
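The RBAC options above are all scoped to the storage account resource. As a small illustrative helper (hypothetical, for clarity only), the scope string at which `Microsoft.EventGrid/EventSubscriptions/Write` must be granted looks like this:

```python
# Hypothetical helper (names are illustrative): build the storage-account
# resource scope at which Microsoft.EventGrid/EventSubscriptions/Write
# permission is required for a storage event trigger.
def storage_scope(subscription_id: str, resource_group: str, account_name: str) -> str:
    return (
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Storage/storageAccounts/{account_name}"
    )

print(storage_scope("0000", "my-rg", "mystorage"))
# /subscriptions/0000/resourceGroups/my-rg/providers/Microsoft.Storage/storageAccounts/mystorage
```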
In order to understand how the service delivers the two promises, let's step back and take a peek behind the scenes. Here are the high-level workflows for integration between Azure Data Factory/Azure Synapse, Storage, and Event Grid. ### Create a new Storage Event Trigger
data-factory How To Discover Explore Purview Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-discover-explore-purview-data.md
Title: Discover and explore data in ADF using Azure Purview
description: Learn how to discover, explore data in Azure Data Factory using Azure Purview --++ Last updated 08/10/2021
data-factory Load Sap Bw Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/load-sap-bw-data.md
Title: Load data from SAP Business Warehouse description: Copy data from SAP Business Warehouse (BW) with Azure Data Factory or Synapse Analytics--++
data-factory Managed Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/managed-virtual-network-private-endpoint.md
The following data sources have native Private Endpoint support and can be conne
- Azure Machine Learning - Azure Private Link Service - Azure Purview-- Azure SQL Database (not including Azure SQL Managed Instance)
+- Azure SQL Database
+- Azure SQL Managed Instance (public preview)
- Azure Synapse Analytics - Azure Table Storage (not including Storage account V1)
The following data sources have native Private Endpoint support and can be conne
> You still can access all data sources that are supported by Data Factory through public network. > [!NOTE]
-> Because Azure SQL Managed Instance doesn't support native Private Endpoint right now, you can access it from managed Virtual Network using Private Linked Service and Load Balancer. Please see [How to access SQL Managed Instance from Data Factory Managed VNET using Private Endpoint](tutorial-managed-virtual-network-sql-managed-instance.md).
+> Because Azure SQL Managed Instance native Private Endpoint support is in public preview, you can also access it from a managed Virtual Network using Private Link Service and Load Balancer. See [How to access SQL Managed Instance from Data Factory Managed VNET using Private Endpoint](tutorial-managed-virtual-network-sql-managed-instance.md).
### On-premises data sources To access on-premises data sources from managed Virtual Network using Private Endpoint, please see this tutorial [How to access on-premises SQL Server from Data Factory Managed VNET using Private Endpoint](tutorial-managed-virtual-network-on-premise-sql-server.md).
data-factory Quickstart Create Data Factory Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-azure-cli.md
Title: "Quickstart: Create an Azure Data Factory using Azure CLI" description: This quickstart creates an Azure Data Factory, including a linked service, datasets, and a pipeline. You can run the pipeline to do a file copy action.--++
data-factory Quickstart Create Data Factory Dot Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-dot-net.md
Title: Create Azure Data Factory using .NET SDK description: Create an Azure Data Factory and pipeline using .NET SDK to copy data from one location in Azure Blob storage to another location.-+ ms.devlang: csharp Last updated 12/10/2021-+ # Quickstart: Create a data factory and pipeline using .NET SDK
data-factory Quickstart Create Data Factory Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-powershell.md
Title: Copy data in Blob Storage using Azure Data Factory description: Create an Azure Data Factory using PowerShell to copy data from one location in Azure Blob storage to another location.-+ ms.devlang: powershell Last updated 01/26/2022-+ # Quickstart: Create an Azure Data Factory using PowerShell
data-factory Quickstart Create Data Factory Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-rest-api.md
Title: Create an Azure data factory using REST API description: Create an Azure data factory pipeline to copy data from one location in Azure Blob storage to another location.-+ ms.devlang: rest-api Last updated 05/31/2021-+
data-factory Tutorial Incremental Copy Change Data Capture Feature Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-change-data-capture-feature-portal.md
In this step, you create a pipeline, which first checks the number of changed re
DECLARE @from_lsn binary(10), @to_lsn binary(10); SET @from_lsn = sys.fn_cdc_get_min_lsn('dbo_customers'); SET @to_lsn = sys.fn_cdc_map_time_to_lsn('largest less than or equal', GETDATE());
- SELECT count(1) changecount FROM cdc.fn_cdc_get_all_changes_dbo_customers(@from_lsn, @to_lsn, 'all')
+ SELECT count(1) changecount FROM cdc.fn_cdc_get_net_changes_dbo_customers(@from_lsn, @to_lsn, 'all')
``` 3. Enable **First row only**
In this step, you create a pipeline, which first checks the number of changed re
DECLARE @from_lsn binary(10), @to_lsn binary(10); SET @from_lsn = sys.fn_cdc_get_min_lsn('dbo_customers'); SET @to_lsn = sys.fn_cdc_map_time_to_lsn('largest less than or equal', GETDATE());
- SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_customers(@from_lsn, @to_lsn, 'all')
+ SELECT * FROM cdc.fn_cdc_get_net_changes_dbo_customers(@from_lsn, @to_lsn, 'all')
``` :::image type="content" source="./media/tutorial-incremental-copy-change-data-capture-feature-portal/copy-source-settings.png" alt-text="Copy Activity - source settings":::
In this step, you create a tumbling window trigger to run the job on a frequent
SET @end_time = ''',pipeline().parameters.triggerEndTime,'''; SET @from_lsn = sys.fn_cdc_map_time_to_lsn(''smallest greater than or equal'', @begin_time); SET @to_lsn = sys.fn_cdc_map_time_to_lsn(''largest less than'', @end_time);
- SELECT count(1) changecount FROM cdc.fn_cdc_get_all_changes_dbo_customers(@from_lsn, @to_lsn, ''all'')')
+ SELECT count(1) changecount FROM cdc.fn_cdc_get_net_changes_dbo_customers(@from_lsn, @to_lsn, ''all'')')
``` 3. Navigate to the **Copy** activity in the True case of the **If Condition** activity and click on the **Source** tab. Copy the following into the query:
In this step, you create a tumbling window trigger to run the job on a frequent
SET @end_time = ''',pipeline().parameters.triggerEndTime,'''; SET @from_lsn = sys.fn_cdc_map_time_to_lsn(''smallest greater than or equal'', @begin_time); SET @to_lsn = sys.fn_cdc_map_time_to_lsn(''largest less than'', @end_time);
- SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_customers(@from_lsn, @to_lsn, ''all'')')
+ SELECT * FROM cdc.fn_cdc_get_net_changes_dbo_customers(@from_lsn, @to_lsn, ''all'')')
``` 4. Click on the **Sink** tab of the **Copy** activity and click **Open** to edit the dataset properties. Click on the **Parameters** tab and add a new parameter called **triggerStart**
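The trigger-window queries above splice pipeline parameters into dynamic SQL, and the diff switches them from `fn_cdc_get_all_changes` to `fn_cdc_get_net_changes`. As a sketch of what the assembled query looks like for one hypothetical ten-minute window (Python string formatting stands in here for ADF's `@concat` expression; the timestamps are invented example values, not defaults):

```python
# Sketch only: ADF builds this string with a @concat() expression at run
# time; plain Python formatting is used to show the assembled result.
# The window bounds are hypothetical example values.
trigger_start = "2022-02-25 09:00:00"  # pipeline().parameters.triggerStartTime
trigger_end = "2022-02-25 09:10:00"    # pipeline().parameters.triggerEndTime

query = (
    "DECLARE @begin_time datetime, @end_time datetime, "
    "@from_lsn binary(10), @to_lsn binary(10); "
    f"SET @begin_time = '{trigger_start}'; "
    f"SET @end_time = '{trigger_end}'; "
    "SET @from_lsn = sys.fn_cdc_map_time_to_lsn("
    "'smallest greater than or equal', @begin_time); "
    "SET @to_lsn = sys.fn_cdc_map_time_to_lsn("
    "'largest less than', @end_time); "
    "SELECT count(1) changecount "
    "FROM cdc.fn_cdc_get_net_changes_dbo_customers(@from_lsn, @to_lsn, 'all')"
)
print(query)
```

The switch to the net-changes function matters for the count: `fn_cdc_get_all_changes` returns a row for every intermediate change in the window, while `fn_cdc_get_net_changes` returns one row per source row reflecting the net effect, so the copy is not inflated by rows that were changed several times.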
data-factory Tutorial Push Lineage To Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-push-lineage-to-purview.md
Title: Push Data Factory lineage data to Azure Purview description: Learn about how to push Data Factory lineage data to Azure Purview--++
data-factory Data Factory Amazon Redshift Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-amazon-redshift-connector.md
Title: Move data from Amazon Redshift by using Azure Data Factory description: Learn how to move data from Amazon Redshift by using Azure Data Factory Copy Activity.-+ Last updated 10/22/2021-+ # Move data From Amazon Redshift using Azure Data Factory
data-factory Data Factory Amazon Simple Storage Service Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-amazon-simple-storage-service-connector.md
Title: Move data from Amazon Simple Storage Service by using Data Factory description: Learn about how to move data from Amazon Simple Storage Service (S3) by using Azure Data Factory.-+ Last updated 10/22/2021-+ # Move data from Amazon Simple Storage Service by using Azure Data Factory
data-factory Data Factory Azure Blob Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-azure-blob-connector.md
Title: Copy data to/from Azure Blob Storage description: 'Learn how to copy blob data in Azure Data Factory. Use our sample: How to copy data to and from Azure Blob Storage and Azure SQL Database.'-+ Last updated 10/22/2021-+ # Copy data to or from Azure Blob Storage using Azure Data Factory
data-factory Data Factory Azure Copy Wizard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-azure-copy-wizard.md
Title: Data Factory Azure Copy Wizard description: Learn about how to use the Data Factory Azure Copy Wizard to copy data from supported data sources to sinks.-+ Last updated 10/22/2021-+
data-factory Data Factory Azure Datalake Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-azure-datalake-connector.md
Title: Copy data to and from Azure Data Lake Storage Gen1 description: Learn how to copy data to and from Data Lake Store by using Azure Data Factory-+ Last updated 10/22/2021-+
data-factory Data Factory Azure Documentdb Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-azure-documentdb-connector.md
Title: Move data to/from Azure Cosmos DB description: Learn how move data to/from Azure Cosmos DB collection using Azure Data Factory--++
data-factory Data Factory Azure Search Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-azure-search-connector.md
Title: Push data to Search index by using Data Factory description: 'Learn about how to push data to Azure Cognitive Search Index by using Azure Data Factory.'-+ Last updated 10/22/2021-+
data-factory Data Factory Azure Sql Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-azure-sql-connector.md
Title: Copy data to/from Azure SQL Database description: Learn how to copy data to/from Azure SQL Database using Azure Data Factory.-+ Last updated 10/22/2021-+ # Copy data to and from Azure SQL Database using Azure Data Factory
data-factory Data Factory Azure Sql Data Warehouse Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-azure-sql-data-warehouse-connector.md
Title: Copy data to/from Azure Synapse Analytics description: Learn how to copy data to/from Azure Synapse Analytics using Azure Data Factory-+ Last updated 10/22/2021-+ # Copy data to and from Azure Synapse Analytics using Azure Data Factory
data-factory Data Factory Azure Table Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-azure-table-connector.md
Title: Move data to/from Azure Table description: Learn how to move data to/from Azure Table Storage using Azure Data Factory.-+ Last updated 10/22/2021-+ # Move data to and from Azure Table using Azure Data Factory
data-factory Data Factory Copy Activity Fault Tolerance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-copy-activity-fault-tolerance.md
Title: Add fault tolerance in Azure Data Factory Copy Activity by skipping incompatible rows description: Learn how to add fault tolerance in Azure Data Factory Copy Activity by skipping incompatible rows during copy-+ Last updated 10/22/2021-+ # Add fault tolerance in Copy Activity by skipping incompatible rows
data-factory Data Factory Copy Activity Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-copy-activity-performance.md
Title: Copy Activity performance and tuning guide description: Learn about key factors that affect the performance of data movement in Azure Data Factory when you use Copy Activity.-+ Last updated 10/22/2021-+ # Copy Activity performance and tuning guide
data-factory Data Factory Copy Activity Tutorial Using Azure Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-copy-activity-tutorial-using-azure-resource-manager-template.md
Title: 'Tutorial: Create a pipeline using Resource Manager Template ' description: In this tutorial, you create an Azure Data Factory pipeline by using an Azure Resource Manager template. This pipeline copies data from an Azure blob storage to Azure SQL Database. -+ Last updated 10/22/2021-+
data-factory Data Factory Copy Activity Tutorial Using Dotnet Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-copy-activity-tutorial-using-dotnet-api.md
 Title: 'Tutorial: Create a pipeline with Copy Activity using .NET API ' description: In this tutorial, you create an Azure Data Factory pipeline with a Copy Activity by using .NET API.-+ Last updated 10/22/2021-+ # Tutorial: Create a pipeline with Copy Activity using .NET API
data-factory Data Factory Copy Activity Tutorial Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-copy-activity-tutorial-using-powershell.md
 Title: 'Tutorial: Create a pipeline to move data by using Azure PowerShell ' description: In this tutorial, you create an Azure Data Factory pipeline with Copy Activity by using Azure PowerShell.-+ Last updated 10/22/2021-+ # Tutorial: Create a Data Factory pipeline that moves data by using Azure PowerShell
data-factory Data Factory Copy Activity Tutorial Using Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-copy-activity-tutorial-using-rest-api.md
Title: 'Tutorial: Use REST API to create an Azure Data Factory pipeline ' description: In this tutorial, you use REST API to create an Azure Data Factory pipeline with a Copy Activity to copy data from an Azure blob storage to Azure SQL Database. -+ Last updated 10/22/2021-+
data-factory Data Factory Copy Activity Tutorial Using Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-copy-activity-tutorial-using-visual-studio.md
Title: 'Tutorial: Create a pipeline with Copy Activity using Visual Studio ' description: In this tutorial, you create an Azure Data Factory pipeline with a Copy Activity by using Visual Studio.-+ Last updated 10/22/2021-+ # Tutorial: Create a pipeline with Copy Activity using Visual Studio
data-factory Data Factory Copy Data From Azure Blob Storage To Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-copy-data-from-azure-blob-storage-to-sql-database.md
Title: Copy data from Blob Storage to SQL Database - Azure description: This tutorial shows you how to use Copy Activity in an Azure Data Factory pipeline to copy data from Blob storage to SQL database.-+ Last updated 10/22/2021-+ # Tutorial: Copy data from Blob Storage to SQL Database using Data Factory
data-factory Data Factory Copy Data Wizard Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-copy-data-wizard-tutorial.md
Title: 'Tutorial: Create a pipeline using Copy Wizard ' description: In this tutorial, you create an Azure Data Factory pipeline with a Copy Activity by using the Copy Wizard supported by Data Factory-+ Last updated 10/22/2021-+ # Tutorial: Create a pipeline with Copy Activity using Data Factory Copy Wizard
data-factory Data Factory Copy Wizard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-copy-wizard.md
Title: Copy data easily with Copy Wizard - Azure description: Learn about how to use the Data Factory Copy Wizard to copy data from supported data sources to sinks.-+ Last updated 10/22/2021-+ # Copy or move data easily with Azure Data Factory Copy Wizard
data-factory Data Factory Data Movement Activities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-data-movement-activities.md
Title: Move data by using Copy Activity description: 'Learn about data movement in Data Factory pipelines: data migration between cloud stores, and between an on-premises store and a cloud store. Use Copy Activity.'-+ Last updated 10/22/2021-+ # Move data by using Copy Activity
data-factory Data Factory Ftp Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-ftp-connector.md
Title: Move data from an FTP server by using Azure Data Factory description: Learn about how to move data from an FTP server using Azure Data Factory.-+ Last updated 10/22/2021-+ # Move data from an FTP server by using Azure Data Factory
data-factory Data Factory Hdfs Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-hdfs-connector.md
Title: Move data from on-premises HDFS description: Learn about how to move data from on-premises HDFS using Azure Data Factory.-+ Last updated 10/22/2021-+ # Move data from on-premises HDFS using Azure Data Factory
data-factory Data Factory Http Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-http-connector.md
Title: Move data from an HTTP source - Azure description: Learn how to move data from an on-premises or cloud HTTP source by using Azure Data Factory.--++
data-factory Data Factory Invoke Stored Procedure From Copy Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-invoke-stored-procedure-from-copy-activity.md
Title: Invoke stored procedure from Azure Data Factory Copy Activity description: 'Learn how to invoke a stored procedure in Azure SQL Database, or SQL Server from an Azure Data Factory copy activity.'-+ Last updated 10/22/2021-+
data-factory Data Factory Load Sql Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-load-sql-data-warehouse.md
Title: Load terabytes of data into Azure Synapse Analytics description: Demonstrates how 1 TB of data can be loaded into Azure Synapse Analytics under 15 minutes with Azure Data Factory-+ Last updated 10/22/2021-+ # Load 1 TB into Azure Synapse Analytics under 15 minutes with Data Factory
data-factory Data Factory Map Columns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-map-columns.md
Title: Mapping dataset columns in Azure Data Factory description: 'Learn how to map source columns to destination columns.'--++
data-factory Data Factory Odata Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-odata-connector.md
Title: Move data from OData sources description: Learn about how to move data from OData sources using Azure Data Factory.-+ Last updated 10/22/2021-+ # Move data From an OData source using Azure Data Factory
data-factory Data Factory Odbc Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-odbc-connector.md
Title: Move data from ODBC data stores description: Learn about how to move data from ODBC data stores using Azure Data Factory.-+ Last updated 10/22/2021-+
data-factory Data Factory On Premises Mongodb Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-on-premises-mongodb-connector.md
Title: Move data from MongoDB description: Learn about how to move data from MongoDB database using Azure Data Factory.--++
data-factory Data Factory Onprem Cassandra Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-onprem-cassandra-connector.md
Title: Move data from Cassandra using Data Factory description: Learn about how to move data from an on-premises Cassandra database using Azure Data Factory.-+ Last updated 10/22/2021-+ # Move data from an on-premises Cassandra database using Azure Data Factory
data-factory Data Factory Onprem Db2 Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-onprem-db2-connector.md
Title: Move data from DB2 by using Azure Data Factory description: Learn how to move data from an on-premises DB2 database by using Azure Data Factory Copy Activity-+ Last updated 10/22/2021-+ # Move data from DB2 by using Azure Data Factory Copy Activity
data-factory Data Factory Onprem File System Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-onprem-file-system-connector.md
Title: Copy data to/from a file system using Azure Data Factory description: Learn how to copy data to and from an on-premises file system by using Azure Data Factory.-+ Last updated 10/22/2021-+ # Copy data to and from an on-premises file system by using Azure Data Factory
data-factory Data Factory Onprem Mysql Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-onprem-mysql-connector.md
Title: Move data from MySQL using Azure Data Factory description: Learn about how to move data from MySQL database using Azure Data Factory.-+ Last updated 10/22/2021-+ # Move data From MySQL using Azure Data Factory
data-factory Data Factory Onprem Oracle Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-onprem-oracle-connector.md
Title: Copy data to or from Oracle by using Data Factory description: Learn how to copy data to or from an on-premises Oracle database by using Azure Data Factory.-+ Last updated 10/22/2021-+ # Copy data to or from Oracle on-premises by using Azure Data Factory
data-factory Data Factory Onprem Postgresql Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-onprem-postgresql-connector.md
Title: Move data From PostgreSQL using Azure Data Factory description: Learn about how to move data from PostgreSQL Database using Azure Data Factory.--++
data-factory Data Factory Onprem Sybase Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-onprem-sybase-connector.md
Title: Move data from Sybase using Azure Data Factory description: Learn about how to move data from Sybase Database using Azure Data Factory.--++
data-factory Data Factory Onprem Teradata Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-onprem-teradata-connector.md
Title: Move data from Teradata using Azure Data Factory description: Learn about Teradata Connector for the Data Factory service that lets you move data from Teradata Database-+ Last updated 10/22/2021-+ # Move data from Teradata using Azure Data Factory
data-factory Data Factory Repeatable Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-repeatable-copy.md
Title: Repeatable copy in Azure Data Factory description: 'Learn how to avoid duplicates even though a slice that copies data is run more than once.'-+ Last updated 10/22/2021-+
data-factory Data Factory Salesforce Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-salesforce-connector.md
Title: Move data from Salesforce by using Data Factory description: Learn about how to move data from Salesforce by using Azure Data Factory.-+ Last updated 10/22/2021-+ # Move data from Salesforce by using Azure Data Factory
data-factory Data Factory Sap Business Warehouse Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-sap-business-warehouse-connector.md
Title: Move data from SAP Business Warehouse using Azure Data Factory description: Learn about how to move data from SAP Business Warehouse using Azure Data Factory.--++
data-factory Data Factory Sap Hana Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-sap-hana-connector.md
Title: Move data from SAP HANA using Azure Data Factory description: Learn about how to move data from SAP HANA using Azure Data Factory.--++
data-factory Data Factory Sftp Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-sftp-connector.md
Title: Move data from SFTP server using Azure Data Factory description: Learn about how to move data from an on-premises or a cloud SFTP server using Azure Data Factory.--++
data-factory Data Factory Sqlserver Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-sqlserver-connector.md
Title: Move data to and from SQL Server description: Learn about how to move data to/from SQL Server database that is on-premises or in an Azure VM using Azure Data Factory.--++
data-factory Data Factory Supported File And Compression Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-supported-file-and-compression-formats.md
Title: File and compression formats in Azure Data Factory description: 'Learn about the file formats supported by Azure Data Factory.'-+ Last updated 10/22/2021-+
data-factory Data Factory Web Table Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-web-table-connector.md
Title: Move data from Web Table using Azure Data Factory description: Learn about how to move data from a table in a Web page using Azure Data Factory.-+ Last updated 10/22/2021-+ # Move data from a Web table source using Azure Data Factory
data-factory How To Invoke Ssis Package Stored Procedure Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/how-to-invoke-ssis-package-stored-procedure-activity.md
Title: Invoke SSIS package using Azure Data Factory - Stored Procedure Activity description: This article describes how to invoke a SQL Server Integration Services (SSIS) package from an Azure Data Factory pipeline using the Stored Procedure Activity.-+ ms.devlang: powershell Last updated 10/22/2021-+ # Invoke an SSIS package using stored procedure activity in Azure Data Factory This article describes how to invoke an SSIS package from an Azure Data Factory pipeline by using a stored procedure activity.
databox-online Azure Stack Edge Gpu Deploy Configure Network Compute Web Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md
Previously updated : 02/15/2022 Last updated : 02/23/2022 zone_pivot_groups: azure-stack-edge-device-deployment # Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Pro so I can use it to transfer data to Azure.
Follow these steps to configure the network for your device.
* Serial number for any port corresponds to the node serial number. <!--* On 25-Gbps interfaces, you can set the RDMA (Remote Direct Access Memory) mode to iWarp or RoCE (RDMA over Converged Ethernet). Where low latencies are the primary requirement and scalability is not a concern, use RoCE. When latency is a key requirement, but ease-of-use and scalability are also high priorities, iWARP is the best candidate.--> <!--* Network Interface Card (NIC) Teaming or link aggregation is not supported with Azure Stack Edge. <!--NIC teaming should work for 2-node -->
+ > [!NOTE]
+ > If you need to connect to your device from an outside network, see [Enable device access from outside network](azure-stack-edge-gpu-manage-access-power-connectivity-mode.md#enable-device-access-from-outside-network) for additional network settings.
+ Once the device network is configured, the page updates as shown below. ![Screenshot of local web UI "Network" page for fully configured one node. ](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/network-2.png)
databox-online Azure Stack Edge Gpu Manage Access Power Connectivity Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-manage-access-power-connectivity-mode.md
Previously updated : 07/08/2021 Last updated : 02/22/2022
This article describes how to manage the access, power, and connectivity mode for your Azure Stack Edge Pro with GPU device. These operations are performed via the local web UI or the Azure portal.
-This article applies to Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R devices.
- In this article, you learn how to: > [!div class="checklist"] > * Manage device access
+> * Enable device access via remote PowerShell over HTTP
+> * Enable device access from outside network
> * Manage resource access > * Manage connectivity mode > * Manage power
Follow these steps in the local UI to enable remote PowerShell over HTTP:
You can now connect to the PowerShell interface of the device over HTTP. For details, see [Connect to the PowerShell interface of your device](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface).
+## Enable device access from outside network
+
+To be able to connect to your Azure Stack Edge device from an outside network, make sure the network for your laptop and the network for the device meet the following requirements.
+
+| Traffic direction | Out-of-network requirements |
+|--|--|
+| Outbound to laptop |On the network for the Azure Stack Edge device:<ul><li>Configure the correct gateways on the device to enable traffic to reach the laptop's network.</li><li>If you configure multiple gateways on the device, ensure that traffic can reach your laptop's network on all gateways.<br>A device ideally tries to use the network interface card (NIC) with the lowest route metric. However, there's no clear way for an Azure Stack Edge device to identify the NIC with the lowest metric. So it's best to make your laptop network reachable on all configured gateways.</li></ul>|
+|Inbound to device |On the network for your laptop:<ul><li>Configure a clear network route from the laptop to the network for the device, possibly through defined gateways.</li></ul>|
+
+> [!NOTE]
+> Diagnostic tests for Azure Stack Edge return a warning if all gateways don't have internet connectivity. For diagnostics information, see [Run diagnostics](azure-stack-edge-gpu-troubleshoot.md#run-diagnostics).
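The lowest-route-metric behavior described in the table can be pictured with a small sketch. The route table below is hypothetical (the interface names, gateways, and metrics are invented for illustration, not device defaults):

```python
# Hypothetical route table for a device with two configured gateways.
# A device ideally prefers the NIC whose route has the lowest metric,
# but because that choice isn't guaranteed, the laptop's network should
# be reachable through every configured gateway.
routes = [
    {"nic": "Port 2", "gateway": "10.126.76.1", "metric": 276},
    {"nic": "Port 3", "gateway": "10.128.44.1", "metric": 25},
]

# Pick the route a device would ideally use: the lowest metric wins.
preferred = min(routes, key=lambda r: r["metric"])
print(preferred["nic"])
```

In this made-up table, the device would ideally route outbound traffic through Port 3; the guidance above exists precisely because that selection can't be relied on.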
+
+<!--ORIGINAL PRESENTATION: If a user needs to be able to connect to the Azure Stack Edge appliance from an outside network, you'll need this network configuration:
+
+- **In-bound traffic:** For in-bound traffic from the customer's laptop (in network A) to the appliance (in network B), network A should have a clear route to network B, possibly through defined gateways.
+
+- **Outbound traffic:** For outbound traffic from the appliance to the customer's laptop (network B to network A):
+
+ - Configure the correct gateways on the appliance so that traffic can reach network A.
+
+ - If you configure multiple gateways on the appliance, ensure that traffic can reach network B on all gateways.
+
+ An appliance ideally tries to use the network interface card (NIC) with the lowest route metric. However, there's no clear way for an Azure Stack Edge appliance to identify the NIC with the lowest metric. So it's best to make network A reachable on all configured gateways.
+
+ > [!NOTE]
+ > Diagnostic tests for Azure Stack Edge return a warning if all gateways don't have internet connectivity. For diagnostics information, see [Run diagnostics](azure-stack-edge-gpu-troubleshoot.md#run-diagnostics). Terminology creep: This is the first reference to internet connectivity.-->
++ ## Manage resource access
-To create your Azure Stack Edge / Data Box Gateway, IoT Hub, and Azure Storage resource, you need permissions as a contributor or higher at a resource group level. You also need the corresponding resource providers to be registered. For any operations that involve activation key and credentials, permissions to the Microsoft Graph API are also required. These are described in the following sections.
+To create your Azure Stack Edge / Data Box Gateway, IoT Hub, and Azure Storage resource, you need permissions as a contributor or higher at a resource group level. You also need the corresponding resource providers to be registered. For any operations that involve activation key and credentials, permissions to the Microsoft Graph API are also required. These requirements are described in the following sections.
### Manage Microsoft Graph API permissions
databox-online Azure Stack Edge Mini R Deploy Configure Network Compute Web Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy.md
Previously updated : 08/12/2021 Last updated : 02/22/2022 # Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Mini R so I can use it to transfer data to Azure.
Follow these steps to configure the network for your device.
- Serial number for any port corresponds to the node serial number. For a K-series device, only one serial number is displayed. > [!NOTE]
- > We recommend that you do not switch the local IP address of the network interface from static to DCHP, unless you have another IP address to connect to the device. If using one network interface and you switch to DHCP, there would be no way to determine the DHCP address. If you want to change to a DHCP address, wait until after the device has registered with the service, and then change. You can then view the IPs of all the adapters in the **Device properties** in the Azure portal for your service.
+ > - We recommend that you do not switch the local IP address of the network interface from static to DHCP, unless you have another IP address to connect to the device. If using one network interface and you switch to DHCP, there would be no way to determine the DHCP address. If you want to change to a DHCP address, wait until after the device has registered with the service, and then change. You can then view the IPs of all the adapters in the **Device properties** in the Azure portal for your service.
+ > - If you need to connect to your device from an outside network, see [Enable device access from outside network](azure-stack-edge-gpu-manage-access-power-connectivity-mode.md#enable-device-access-from-outside-network) for additional network settings.
After you have configured and applied the network settings, select **Next: Compute** to configure compute network.
databox-online Azure Stack Edge Pro R Deploy Configure Network Compute Web Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-r-deploy-configure-network-compute-web-proxy.md
Previously updated : 05/11/2021 Last updated : 02/23/2022 # Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Pro R so I can use it to transfer data to Azure.
Follow these steps to configure the network for your device.
* Network Interface Card (NIC) Teaming or link aggregation is not supported with Azure Stack Edge. * Serial number for any port corresponds to the node serial number. <!--* On the 25-Gbps interfaces, you can set the RDMA (Remote Direct Access Memory) mode to iWarp or RoCE (RDMA over Converged Ethernet). Where low latencies are the primary requirement and scalability is not a concern, use RoCE. When latency is a key requirement, but ease-of-use and scalability are also high priorities, iWARP is the best candidate.-->+
+ > [!NOTE]
+ > If you need to connect to your device from an outside network, see [Enable device access from outside network](azure-stack-edge-gpu-manage-access-power-connectivity-mode.md#enable-device-access-from-outside-network) for additional network settings.
+ Once the device network is configured, the page updates as shown below. ![Local web UI "Network settings" page 2](./media/azure-stack-edge-pro-r-deploy-configure-network-compute-web-proxy/network-2a.png)<!--change--> >[!NOTE]
- >
- > * We recommend that you do not switch the local IP address of the network interface from static to DCHP, unless you have another IP address to connect to the device. If using one network interface and you switch to DHCP, there would be no way to determine the DHCP address. If you want to change to a DHCP address, wait until after the device has activated with the service, and then change. You can then view the IPs of all the adapters in the **Device properties** in the Azure portal for your service.
-
+ > We recommend that you do not switch the local IP address of the network interface from static to DHCP, unless you have another IP address to connect to the device. If you use only one network interface and switch to DHCP, there would be no way to determine the DHCP address. If you want to change to a DHCP address, wait until after the device has activated with the service, and then make the change. You can then view the IPs of all the adapters in the **Device properties** in the Azure portal for your service.
After you have configured and applied the network settings, select **Next: Compute** to configure compute network.
digital-twins Concepts Ontologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-ontologies.md
This article describes the concept of industry ontologies and how they can be used within the context of Azure Digital Twins.
-The vocabulary of an Azure Digital Twins solution is defined using [models](concepts-models.md), which describe the types of entities that exist in your environment.
+The vocabulary of an Azure Digital Twins solution is defined using [models](concepts-models.md), which describe the types of entities that exist in your environment. An *ontology* is a set of models for a given domain, like building structures, IoT systems, smart cities, energy grids, web content, and more.
-Sometimes, when your solution is tied to a particular industry, it can be easier and more effective to start with a set of models for that industry that already exist, instead of authoring your own model set from scratch. These pre-existing model sets are called **ontologies**.
-
-In general, an ontology is a set of models for a given domain, like a building structure, IoT system, smart city, the energy grid, web content, and so on. Ontologies are often used as schemas for twin graphs, as they can enable:
-* Harmonization of software components, documentation, query libraries, and so on.
-* Reduced investment in conceptual modeling and system development
-* Easier data interoperability on a semantic level
-* Best practice reuse, rather than starting from scratch or "reinventing the wheel"
-
-This article explains why to use ontologies for your Azure Digital Twins models and how to do so. It also explains what ontologies and tools for them are available today.
+Sometimes, when your solution is tied to a particular industry, it can be easier and more effective to start with a set of models for that industry that already exist, instead of authoring your own model set from scratch. This article explains more about using pre-existing industry ontologies for your Azure Digital Twins scenarios, including strategies for using the ontologies that are available today.
## Using ontologies for Azure Digital Twins
Also, using these ontologies in your solutions can set them up for more seamless
Because models in Azure Digital Twins are represented in [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md), ontologies for use with Azure Digital Twins are also written in DTDL.
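Since DTDL models are JSON-LD documents, a minimal interface can be sketched as a Python dict for illustration (the `dtmi:example:Room;1` id and its single property are hypothetical, not taken from any published ontology):

```python
import json

# Minimal DTDL v2 interface sketch: one Interface with one Property.
# The @id and property name are made up for illustration only.
room_model = {
    "@context": "dtmi:dtdl:context;2",
    "@id": "dtmi:example:Room;1",
    "@type": "Interface",
    "displayName": "Room",
    "contents": [
        {"@type": "Property", "name": "temperature", "schema": "double"}
    ],
}

print(json.dumps(room_model, indent=2))
```

An industry ontology is essentially a curated, versioned collection of interfaces like this one, covering the entities of a whole domain.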
+Here are some other benefits to using industry-standard DTDL ontologies as schemas for your twin graphs:
+* Harmonization of software components, documentation, query libraries, and more
+* Reduced investment in conceptual modeling and system development
+* Easier data interoperability on a semantic level
+* Best practice reuse, rather than starting from scratch
+ ## Strategies for integrating ontologies There are three possible strategies for integrating industry-standard ontologies with DTDL. You can pick the one that works best for you depending on your needs:
dms Migration Using Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-using-azure-data-studio.md
The workflow of the migration process is illustrated below.
:::image type="content" source="media/migration-using-azure-data-studio/architecture-ads-sql-migration.png" alt-text="Diagram of architecture for database migration using Azure Data Studio with DMS":::
-1. **Source SQL Server**: SQL Server instance on-premises, private cloud, or any public cloud virtual machine. All editions of SQL Server 2016 and above are supported.
+1. **Source SQL Server**: SQL Server instance on-premises, private cloud, or any public cloud virtual machine. All versions of SQL Server 2008 and above are supported.
1. **Target Azure SQL**: Supported Azure SQL targets are Azure SQL Managed Instance or SQL Server on Azure Virtual Machines (registered with SQL IaaS Agent extension in [Full management mode](../azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management.md#management-modes)) 1. **Network File Share**: Server Message Block (SMB) network file share where backup files are stored for the database(s) to be migrated. Azure Storage blob containers and Azure Storage file share are also supported. 1. **Azure Data Studio**: Download and install the [Azure SQL Migration extension in Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension).
Azure Database Migration Service prerequisites that are common across all suppor
- SSIS packages
- Server roles
- Server audit
-- SQL Server 2014 and below are not supported.
+- When migrating to SQL Server on Azure Virtual Machines, SQL Server 2014 and below are not supported.
- Migrating to Azure SQL Database isn't supported. - Azure storage accounts secured by specific firewall rules or configured with a private endpoint are not supported for migrations. - You can't use an existing self-hosted integration runtime created from Azure Data Factory for database migrations with DMS. Initially, the self-hosted integration runtime should be created using the Azure SQL Migration extension in Azure Data Studio and can be reused for further database migrations.
dms Tutorial Sql Server To Virtual Machine Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-offline-ads.md
Last updated 10/05/2021
# Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machine offline using Azure Data Studio with DMS (Preview)
-Use the Azure SQL Migration extension in Azure Data Studio to migrate the databases from a SQL Server instance (SQL Server 2016 and above) to a [SQL Server on Azure Virtual Machine](../azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md) with minimal downtime. For methods that may require some manual effort, see the article [SQL Server instance migration to SQL Server on Azure Virtual Machine](../azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview.md).
+Use the Azure SQL Migration extension in Azure Data Studio to migrate the databases from a SQL Server instance to a [SQL Server on Azure Virtual Machine (SQL Server 2016 and above)](../azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md) with minimal downtime. For methods that may require some manual effort, see the article [SQL Server instance migration to SQL Server on Azure Virtual Machine](../azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview.md).
In this tutorial, you migrate the **AdventureWorks** database from an on-premises instance of SQL Server to a SQL Server on Azure Virtual Machine with the offline migration method by using Azure Data Studio with Azure Database Migration Service.
dms Tutorial Sql Server To Virtual Machine Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-online-ads.md
Last updated 10/05/2021
# Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machine online using Azure Data Studio with DMS (Preview)
-Use the Azure SQL Migration extension in Azure Data Studio to migrate the databases from a SQL Server instance (SQL Server 2016 and above) to a [SQL Server on Azure Virtual Machine](../azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md) with minimal downtime. For methods that may require some manual effort, see the article [SQL Server instance migration to SQL Server on Azure Virtual Machine](../azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview.md).
+Use the Azure SQL Migration extension in Azure Data Studio to migrate the databases from a SQL Server instance to a [SQL Server on Azure Virtual Machine (SQL Server 2016 and above)](../azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md) with minimal downtime. For methods that may require some manual effort, see the article [SQL Server instance migration to SQL Server on Azure Virtual Machine](../azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview.md).
In this tutorial, you migrate the **AdventureWorks** database from an on-premises instance of SQL Server to a SQL Server on Azure Virtual Machine with minimal downtime by using Azure Data Studio with Azure Database Migration Service.
hpc-cache Hpc Cache Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/hpc-cache-prerequisites.md
Title: Azure HPC Cache prerequisites description: Prerequisites for using Azure HPC Cache-+ Previously updated : 01/19/2022 Last updated : 02/24/2022
A paid subscription is recommended.
## Network infrastructure
-Two network-related prerequisites should be set up before you can use your cache:
+These network-related prerequisites need to be set up before you can use your cache:
* A dedicated subnet for the Azure HPC Cache instance * DNS support so that the cache can access storage and other resources
+* Access from the subnet to additional Microsoft Azure infrastructure services, including NTP servers and the Azure Queue Storage service.
### Cache subnet
The Azure HPC Cache needs a dedicated subnet with these qualities:
The best practice is to create a new subnet for each cache. You can create a new virtual network and subnet as part of creating the cache.
+When creating this subnet, be careful that its security settings allow access to the necessary infrastructure services mentioned later in this section. You can restrict outbound internet connectivity, but make sure that there are exceptions for the items documented here.
+ ### DNS access The cache needs DNS to access resources outside of its virtual network. Depending on which resources you're using, you might need to set up a customized DNS server and configure forwarding between that server and Azure DNS servers:
More tips for NTP access:
* You can configure which NTP server your HPC Cache uses on the **Networking** page. Read [Configure additional settings](configuration.md#customize-ntp) for more information.
+### Azure Queue Storage access
+
+The cache must be able to securely access the [Azure Queue Storage service](../storage/queues/storage-queues-introduction.md) from inside its dedicated subnet. Azure HPC Cache uses the queue service to communicate configuration and state information.
+
+If the cache can't access the queue service, you might see a CacheConnectivityError message when creating the cache.
+
+There are two ways to provide access:
+
+* Create an Azure Storage service endpoint in your cache subnet.
+ Read [Add a virtual network subnet](../virtual-network/virtual-network-manage-subnet.md#add-a-subnet) for instructions to add the **Microsoft.Storage** service endpoint.
+
+* Individually configure access to the Azure Queue Storage service domain in your network security group or other firewalls.
+
+ Add rules to permit access on these ports:
+
+ * TCP port 443 for secure traffic to any host in the domain queue.core.windows.net (`*.queue.core.windows.net`).
+
+ * TCP port 80, used for verification of the server-side certificate. This is sometimes referred to as certificate revocation list (CRL) checking and online certificate status protocol (OCSP) communication. All of `*.queue.core.windows.net` uses the same certificate, and thus the same CRL/OCSP servers. The hostname is stored in the server-side SSL certificate.
+
+ Refer to the security rule tips in [NTP access](#ntp-access) for more information.
+
+ This command lists the CRL and OCSP servers that need to be permitted access. These servers must be resolvable by DNS and reachable on port 80 from the cache subnet.
+
+ ```bash
+ openssl s_client -connect azure.queue.core.windows.net:443 2>&1 < /dev/null | sed -n '/--BEGIN/,/--END/p' | openssl x509 -noout -text -in /dev/stdin | egrep -i 'crl|ocsp' | grep URI
+ ```
+
+ The output looks something like this, and can change if the SSL certificate updates:
+
+ ```bash
+ OCSP - URI:http://ocsp.msocsp.com
+ CRL - URI:http://mscrl.microsoft.com/pki/mscorp/crl/Microsoft%20RSA%20TLS%20CA%2002.crl
+ CRL - URI:http://crl.microsoft.com/pki/mscorp/crl/Microsoft%20RSA%20TLS%20CA%2002.crl
+ ```
+
+You can check the subnet's connectivity by using this command from a test VM inside the subnet:
+
+```bash
openssl s_client -connect azure.queue.core.windows.net:443 -status 2>&1 < /dev/null | grep "OCSP Response Status"
+```
+
+A successful connection gives this response:
+
+```bash
+OCSP Response Status: successful (0x0)
+```
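The first firewall rule above permits TCP 443 to any host matching `*.queue.core.windows.net`. As a rough sketch of what that wildcard covers, here's the matching logic in Python (a hypothetical helper, not part of any Azure tooling):

```python
import fnmatch

def allowed_on_443(hostname):
    # Shell-style wildcard match against the queue-service domain
    # that the firewall rule permits on TCP port 443.
    return fnmatch.fnmatch(hostname, "*.queue.core.windows.net")

print(allowed_on_443("azure.queue.core.windows.net"))    # True
print(allowed_on_443("contoso.queue.core.windows.net"))  # True
print(allowed_on_443("example.com"))                     # False
```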
+ ## Permissions Check these permission-related prerequisites before starting to create your cache.
logic-apps Add Artifacts Integration Service Environment Ise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/add-artifacts-integration-service-environment-ise.md
Title: Add resources to integration service environments
-description: Add logic apps, integration accounts, custom connectors, and managed connectors to your integration service environment (ISE)
+description: Add logic apps, integration accounts, custom connectors, and managed connectors to your integration service environment (ISE).
ms.suite: integration--++ Last updated 02/28/2021
After you create an [integration service environment (ISE)](../logic-apps/connec
## Prerequisites
-* An Azure subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* The ISE that you created to run your logic apps. If you don't have an ISE, [create an ISE first](../logic-apps/connect-virtual-network-vnet-isolated-environment.md).
logic-apps Block Connections Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/block-connections-connectors.md
Title: Block connections for specific API connectors
-description: Restrict creating and using API connections in Azure Logic Apps
+description: Restrict creating and using API connections in Azure Logic Apps.
ms.suite: integration--++ Last updated 07/23/2020
This topic shows how to set up a policy that blocks specific connections by usin
## Prerequisites
-* An Azure subscription. If you don't have a subscription, [create a free Azure account](https://azure.microsoft.com/free/) before you start.
+* An Azure account and subscription. If you don't have a subscription, [create a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* The reference ID for the connector that you want to block. For more information, see [Find the connector reference ID](#connector-reference-ID).
logic-apps Business Continuity Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/business-continuity-disaster-recovery-guidance.md
Title: Business continuity and disaster recovery
-description: Design your strategy to protect data, recover quickly from disruptive events, restore resources required by critical business functions, and maintain business continuity for Azure Logic Apps
+description: Design your strategy to protect data, recover quickly from disruptive events, restore resources required by critical business functions, and maintain business continuity for Azure Logic Apps.
ms.suite: integration-+ Last updated 03/24/2021
logic-apps Call From Power Automate Power Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/call-from-power-automate-power-apps.md
Title: Call logic apps from Power Automate and Power Apps
description: Call logic apps from Microsoft Power Automate flows by exporting logic apps as connectors. ms.suite: integration--++ Last updated 10/05/2020
If you want to migrate your flow from Power Automate or Power Apps to Logic Apps inst
## Prerequisites
-* An Azure subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* A Power Automate or Power Apps license.
logic-apps Connect Virtual Network Vnet Isolated Environment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md
Title: Overview - Access to Azure virtual networks
-description: Learn about accessing Azure virtual networks (VNETs) from Azure Logic Apps using an integration service environment (ISE)
+description: Learn about accessing Azure virtual networks (VNETs) from Azure Logic Apps using an integration service environment (ISE).
ms.suite: integration
logic-apps Connect Virtual Network Vnet Set Up Single Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/connect-virtual-network-vnet-set-up-single-ip-address.md
Title: Set up a public outbound IP address for ISEs
-description: Learn how to set up a single public outbound IP address for integration service environments (ISEs) in Azure Logic Apps
+description: Learn how to set up a single public outbound IP address for integration service environments (ISEs) in Azure Logic Apps.
ms.suite: integration--++ Last updated 05/06/2020
logic-apps Create Integration Service Environment Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-integration-service-environment-rest-api.md
Title: Create integration service environments (ISEs) with Logic Apps REST API
-description: Create an integration service environment (ISE) by using the Logic Apps REST API so you can access Azure virtual networks (VNETs) from Azure Logic Apps
+description: Create an integration service environment (ISE) to access Azure virtual networks (VNETs) using the Azure Logic Apps REST API.
ms.suite: integration--++ Last updated 02/03/2021
logic-apps Create Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-managed-service-identity.md
This article shows how to enable and set up the system-assigned identity or user
## Prerequisites
-* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/). Both the managed identity and the target Azure resource where you need access must use the same Azure subscription.
+* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). Both the managed identity and the target Azure resource where you need access must use the same Azure subscription.
* To give a managed identity access to an Azure resource, you need to add a role to the target resource for that identity. To add roles, you need [Azure AD administrator permissions](../active-directory/roles/permissions-reference.md) that can assign roles to identities in the corresponding Azure AD tenant.
logic-apps Create Monitoring Tracking Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-monitoring-tracking-queries.md
Title: View and create queries for logic apps in Azure Monitor logs
-description: View and create queries in Azure Monitor logs for Azure Logic Apps
+description: View and create queries in Azure Monitor logs for Azure Logic Apps.
ms.suite: integration--++ Last updated 01/30/2020
You can view the underlying queries that produce the results from [Azure Monitor
## Prerequisites
+* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+ * A Log Analytics workspace. If you don't have a Log Analytics workspace, learn [how to create a Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md). * A logic app that's set up with Azure Monitor logging and sends that information to a Log Analytics workspace. Learn [how to set up Azure Monitor logs for your logic app](../logic-apps/monitor-logic-apps.md).
logic-apps Custom Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/custom-connector-overview.md
Title: Custom connector topic links
-description: Links to topics about how to create, use, share, and certify custom connectors
+description: Links to topics about how to create, use, share, and certify custom connectors in Azure Logic Apps.
ms.suite: integration--++ Last updated 1/30/2018
logic-apps Customer Managed Keys Integration Service Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/customer-managed-keys-integration-service-environment.md
Title: Set up customer-managed keys to encrypt data at rest in ISEs
-description: Create and manage your own encryption keys to secure data at rest for integration service environments (ISEs) in Azure Logic Apps
+description: Create and manage your own encryption keys to secure data at rest for integration service environments (ISEs) in Azure Logic Apps.
ms.suite: integration -+ Last updated 01/20/2021
logic-apps Designer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/designer-overview.md
Title: About single-tenant workflow designer
description: Learn how the designer in single-tenant Azure Logic Apps helps you visually create workflows through the Azure portal. Discover the benefits and features in this latest version. ms.suite: integration-+ Last updated 06/30/2021
logic-apps Estimate Storage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/estimate-storage-costs.md
Storage costs change based on your workflows' content. Different triggers, actio
## Prerequisites
-* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* A single-tenant based logic Apps workflow. You can create a workflow [using the Azure portal](create-single-tenant-workflows-azure-portal.md) or [using Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md). If you don't have a workflow yet, you can use the sample small, medium, and large workflows in the storage calculator.
logic-apps Export From Microsoft Flow Logic App Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/export-from-microsoft-flow-logic-app-template.md
Title: Export flows from Power Automate to Azure Logic Apps
-description: Migrate flows from Power Automate to Azure Logic Apps by exporting as Azure Resource Manager templates
+description: Migrate flows from Power Automate to Azure Logic Apps by exporting as Azure Resource Manager templates.
ms.suite: integration--++ Last updated 02/22/2021
Not all Power Automate connectors are available in Azure Logic Apps. You can mig
## Prerequisites
-* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* The flow that you want to export from Power Automate
logic-apps Handle Long Running Stored Procedures Sql Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/handle-long-running-stored-procedures-sql-connector.md
Title: Handle long-running stored procedures in the SQL connector
-description: How to handle stored procedures that time out when using the SQL connector in Azure Logic Apps
+description: Handle stored procedures that time out when using the SQL connector in Azure Logic Apps.
ms.suite: integration--++ Last updated 10/27/2020
logic-apps Handle Throttling Problems 429 Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/handle-throttling-problems-429-errors.md
Title: Handle throttling problems, or '429 - Too many requests' errors
-description: How to work around throttling problems, or 'HTTP 429 Too many requests' errors, in Azure Logic Apps
+description: How to work around throttling problems or 'HTTP 429 Too many requests' errors in Azure Logic Apps.
ms.suite: integration--++ Last updated 04/13/2020
To handle throttling at this level, you have these options:
For this option, consider whether you can distribute the workload by dividing an action's requests across multiple connections to the same destination using the same credentials.
- For example, suppose that your logic app gets tables from a SQL Server database and then gets the rows from each table. Based on the number of rows that you have to process, you can use multiple connections and multiple **For each** loops to divide the total number of rows into smaller sets for processing. This scenario uses two **For each** loops to split the total number of rows in half. The first **For each** loop uses an expression that gets the first half. The other **For each** loop uses a different expression that gets the second half, for example:<p>
+ For example, suppose that your logic app gets tables from a SQL Server database and then gets the rows from each table. Based on the number of rows that you have to process, you can use multiple connections and multiple **For each** loops to divide the total number of rows into smaller sets for processing. This scenario uses two **For each** loops to split the total number of rows in half. The first **For each** loop uses an expression that gets the first half. The other **For each** loop uses a different expression that gets the second half, for example:<br><br>
- * Expression 1: The `take()` function gets the front of a collection. For more information, see the [**`take()`** function](workflow-definition-language-functions-reference.md#take).
+ * Expression 1: The `take()` function gets the front of a collection. For more information, see the [**`take()`** function](workflow-definition-language-functions-reference.md#take).
- `@take(collection-or-array-name, div(length(collection-or-array-name), 2))`
+ `@take(collection-or-array-name, div(length(collection-or-array-name), 2))`
- * Expression 2: The `skip()` function removes the front of a collection and returns all the other items. For more information, see the [**`skip()`** function](workflow-definition-language-functions-reference.md#skip).
+ * Expression 2: The `skip()` function removes the front of a collection and returns all the other items. For more information, see the [**`skip()`** function](workflow-definition-language-functions-reference.md#skip).
- `@skip(collection-or-array-name, div(length(collection-or-array-name), 2))`
+ `@skip(collection-or-array-name, div(length(collection-or-array-name), 2))`
Here's a visual example that shows how you can use these expressions:
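The two expressions can also be mirrored in plain Python to see how the halves come out (the helper names are hypothetical; Logic Apps evaluates `take()` and `skip()` itself, so this only sketches the semantics):

```python
def take(collection, count):
    # Mirrors the WDL take() function: returns the front of the collection.
    return collection[:count]

def skip(collection, count):
    # Mirrors the WDL skip() function: drops the front, returns the rest.
    return collection[count:]

rows = list(range(1, 11))        # 10 rows to process
half = len(rows) // 2            # div(length(rows), 2)
first_batch = take(rows, half)   # handled by the first For each loop
second_batch = skip(rows, half)  # handled by the second For each loop

print(first_batch)   # [1, 2, 3, 4, 5]
print(second_batch)  # [6, 7, 8, 9, 10]
```

Because `div()` truncates, an odd row count leaves the extra row in the second loop's half.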
logic-apps Healthy Unhealthy Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/healthy-unhealthy-resource.md
Title: Set up logging to monitor logic apps in Azure Security Center
description: Monitor the health of your Logic Apps resources in Azure Security Center by setting up diagnostic logging. ms.suite: integration--++ Last updated 12/07/2020
When you monitor your Logic Apps resources in [Microsoft Azure Security Center](
## Prerequisites
-* An Azure subscription. If you don't have a subscription, [create a free Azure account](https://azure.microsoft.com/free/) before you start.
+* An Azure subscription. If you don't have a subscription, [create a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+ * Existing logic apps with [diagnostic logging enabled](#enable-diagnostic-logging).+ * A Log Analytics workspace, which is required to enable logging for your logic app. If you don't have a workspace, first [create your workspace](../azure-monitor/logs/quick-create-workspace.md). ## Enable diagnostic logging
logic-apps Ise Manage Integration Service Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/ise-manage-integration-service-environment.md
Title: Manage integration service environments in Azure Logic Apps
-description: Check network health and manage logic apps, connections, custom connectors, and integration accounts in your integration service environment (ISE) for Azure Logic Apps
+description: Check network health and manage logic apps, connections, custom connectors, and integration accounts in your integration service environment (ISE) for Azure Logic Apps.
ms.suite: integration -+ Last updated 10/25/2020
logic-apps Logic Apps Add Run Inline Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-add-run-inline-code.md
Title: Add and run code snippets by using inline code
-description: Learn how to create and run code snippets by using inline code actions for automated tasks and workflows that you create with Azure Logic Apps
+description: Learn how to create and run code snippets by using inline code actions for automated tasks and workflows that you create with Azure Logic Apps.
ms.suite: integration--++ Last updated 05/25/2021
In this article, the example logic app triggers when a new email arrives in a wo
## Prerequisites
-* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* The logic app workflow where you want to add your code snippet, including a trigger. The example in this topic uses the Office 365 Outlook trigger that's named **When a new email arrives**.
machine-learning Concept Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automated-ml.md
Advanced forecasting configuration includes:
* rolling window aggregate features
-See examples of regression and automated machine learning for predictions in these Python notebooks: [Sales Forecasting](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.ipynb), [Demand Forecasting](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb), and [Beverage Production Forecast](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-beer-remote/auto-ml-forecasting-beer-remote.ipynb).
+See examples of regression and automated machine learning for predictions in these Python notebooks: [Sales Forecasting](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.ipynb), [Demand Forecasting](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb), and [Forecasting GitHub's Daily Active Users](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb).
### Computer vision (preview)
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
Below is a list of common resources that might run out of quota when using Azure
* [CPU](#cpu-quota) * [Disk](#disk-quota)
+* [Memory](#memory-quota)
* [Role assignments](#role-assignment-quota) * [Endpoints](#endpoint-quota) * [Kubernetes](#kubernetes-quota)
A possible mitigation is to check if there are unused deployments that can be de
#### Disk quota
-This issue happens when the size of the model is larger than the available disk space and the model is not able to be downloaded. Try an SKU with more disk space.
+This issue happens when the size of the model is larger than the available disk space and the model cannot be downloaded. Try the following options:
+* Try a SKU with more disk space from the [managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md)
+* Try reducing image and model size
+
+#### Memory quota
+This issue happens when the memory footprint of the model is larger than the available memory. Try a SKU with more memory from the [managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md).<br>
#### Role assignment quota
marketplace Azure Private Plan Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-private-plan-troubleshooting.md
While troubleshooting the Azure Subscription Hierarchy, keep these things in min
## Troubleshooting Checklist -- ISV to ensure the SaaS private plan is using the correct tenant ID for the customer - [How to find your tenant ID - Azure Active Directory | Microsoft Docs](../active-directory/fundamentals/active-directory-how-to-find-tenant.md). For VMs use the [Azure Subscription ID. (video guide)](../media-services/latest/setup-azure-subscription-how-to.md?tabs=portal)
+- ISV to ensure the SaaS private plan is using the correct tenant ID for the customer - [How to find your Azure Active Directory tenant ID](../active-directory/fundamentals/active-directory-how-to-find-tenant.md). For VMs, use the [Azure subscription ID (video guide)](../media-services/latest/setup-azure-subscription-how-to.md?tabs=portal).
- ISV to ensure that the Customer is not buying through a CSP. Private Plans are not available on a CSP-managed subscription. - Customer to ensure customer is logging in with an email ID that is registered under the same tenant ID (use the same user ID they used in step #1 above)-- ISV to ask the customer to find the Private Plan in Azure Marketplace: [Private offers in Azure Marketplace - Microsoft marketplace | Microsoft Docs](/marketplace/private-plans)-- Customer to ensure marketplace is enabled - [Azure Marketplace | Microsoft Docs](../cost-management-billing/manage/ea-azure-marketplace.md) ΓÇô if it is not, the user has to contact their Azure Administrator to enable marketplace, for more information regarding Azure Marketplace, see [Azure Marketplace | Microsoft Docs](../cost-management-billing/manage/ea-azure-marketplace.md).-- (Customer) If the offer is still not visible, itΓÇÖs possible that the customer has Private Marketplace enabled - Customer to Ask the Azure Administrator to enable the specific Private Plan in Private Marketplace: [Create and manage Private Azure Marketplace in the Azure portal - Microsoft marketplace | Microsoft Docs](/marketplace/create-manage-private-azure-marketplace-new)
+- ISV to ask the customer to find the Private Plan in Azure Marketplace: [Private plans in Azure Marketplace](/marketplace/private-plans)
+- Customer to ensure marketplace is enabled - [Azure Marketplace](../cost-management-billing/manage/ea-azure-marketplace.md) – if it is not, the user has to contact their Azure Administrator to enable marketplace. For more information regarding Azure Marketplace, see [Azure Marketplace](../cost-management-billing/manage/ea-azure-marketplace.md).
+- (Customer) If the offer is still not visible, it's possible that the customer has Private Marketplace enabled - Customer to ask the Azure Administrator to enable the specific Private Plan in Private Marketplace: [Create and manage Private Azure Marketplace collections in the Azure portal](/marketplace/create-manage-private-azure-marketplace-new)
- If the Private Plan is visible, and the deployment fails, the troubleshooting moves to ensuring the customer allows for Marketplace billing:
- - (Customer) The Azure Administrator must follow the instructions in [Azure EA portal administration | Microsoft Docs](../cost-management-billing/manage/ea-portal-administration.md), and discuss with their Microsoft Representative the steps to enable billing for Marketplace
+ - (Customer) The Azure Administrator must follow the instructions in [Azure EA portal administration](../cost-management-billing/manage/ea-portal-administration.md), and discuss with their Microsoft Representative the steps to enable billing for Marketplace
- (customer) [This documentation](../cost-management-billing/manage/ea-portal-administration.md) explains the details to enable Marketplace billing for customers with an Azure Enterprise Agreement. ### If all else fails, open a ticket and create a HAR file
While troubleshooting the Azure Subscription Hierarchy, keep these things in min
## Next steps -- [Create an Azure Support Request](../azure-portal/supportability/how-to-create-azure-support-request.md)
+- [Create an Azure Support Request](../azure-portal/supportability/how-to-create-azure-support-request.md)
marketplace Azure Vm Image Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-vm-image-test.md
The Certification Test Tool for Azure Certified runs on a local Windows machine
1. Download and install the most recent [Certification Test Tool for Azure Certified](https://www.microsoft.com/download/details.aspx?id=44299). 2. Open the certification tool, then select **Start New Test**. 3. From the Test Information screen, enter a **Test Name** for the test run.
-4. Select the Platform for your VM, either **Windows Server** or **Linux**. Your platform choice affects the remaining options.
+4. Select the Platform for your VM, either **Windows Server** (allow port 5986) or **Linux** (allow port 22). Your platform choice affects the remaining options.
5. If your VM is using this database service, select the **Test for Azure SQL Database** check box. #### Connect the certification tool to a VM image
$accessToken = $response.access_token
:::image type="content" source="media/vm/generate-access-token.png" lightbox="media/vm/generate-access-token.png" alt-text="Shows a screen example for generating an access token in PowerShell.":::
-This example shows a PowerShell call to the API:
+This example shows a PowerShell call to the API (allow port 22 during VM creation):
```POWERSHELL $accesstoken = "token"
Call the API in PowerShell:
2. The method is Post and content type is JSON, as shown in the following code example and sample screen. 3. Create the body parameters in JSON format.
-This code sample shows a PowerShell call to the API:
+This code sample shows a PowerShell call to the API (allow port 5986 during VM creation):
```PowerShell $accesstoken = "Get token for your Client AAD App"
marketplace Marketplace Apis Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-apis-guide.md
The activities below are not sequential. The activity you use is dependent on yo
| <center>Activity | ISV sales activities | Corresponding Marketplace API | Corresponding Marketplace UI | | | | | |
-| <center>**1. Product Marketing**<br><img src="medi)</ul> | Create product messaging, positioning, promotion, pricing<br>Partner Center (PC) → Offer Creation |
+| <center>**1. Product Marketing**<br><img src="medi)</ul> | Create product messaging, positioning, promotion, pricing<br>Partner Center (PC) → Offer Creation |
| <center>**2. Demand Generation**<br><img src="medi)<br>[Co-Sell Connector for SalesForce CRM](/partner-center/connector-salesforce)<br>[Co-Sell Connector for Dynamics 365 CRM](/partner-center/connector-dynamics) | Product Promotion<br>Lead nurturing<br>Eval, trial & PoC<br>Azure Marketplace and AppSource<br>PC Marketplace Insights<br>PC Co-Sell Opportunities | | <center>**3. Negotiation and Quote Creation**<br><img src="medi)<br>[Partner Center '7' API Family](https://apidocs.microsoft.com/services/partnercenter) | T&Cs<br>Pricing<br>Discount approvals<br>Final quote<br>PC → Plans (public or private) | | <center>**4. Sale**<br><img src="medi)<br>[Reporting APIs](https://partneranalytics-api.azureedge.net/partneranalytics-api/Programmatic%20Access%20to%20Commercial%20Marketplace%20Analytics%20Data_v1.pdf) | Contract signing<br>Revenue Recognition<br>Invoicing<br>Billing<br>Azure portal / Admin Center<br>PC Marketplace Rewards<br>PC Payouts Reports<br>PC Marketplace Analytics<br>PC Co-Sell Closing |
The activities below are not sequential. The activity you use is dependent on yo
## Next steps -- Visit the links above for each API as needed.
+- Visit the links above for each API as needed.
marketplace Submission Api Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/submission-api-onboard.md
For the tenant_id value in the POST URI and the client_id and client_secret para
### Step 3: Use the Partner Center submission API
-After you have an Azure AD access token, call methods in the Partner Center submission API. To create or update submissions, you typically call multiple methods in the Partner Center submission API in a specific order. For information about each scenario and the syntax of each method, see the [Ingestion API](https://apidocs.microsoft.com/services/partneringestion/) swagger.
+After you have an Azure AD access token, call methods in the Partner Center submission API. To create or update submissions, you typically call multiple methods in the Partner Center submission API in a specific order. For information about each scenario and the syntax of each method, see the [Ingestion API](https://ingestionapi-swagger.azureedge.net/#/) Swagger.
## Next steps
migrate Common Questions Server Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-server-migration.md
Azure Migrate: Server Migration tool migrates all the UEFI-based machines to Azu
| SUSE Linux Enterprise Server 15 SP1 | Y | Y | Y | | SUSE Linux Enterprise Server 12 SP4 | Y | Y | Y | | Ubuntu Server 16.04, 18.04, 19.04, 19.10 | Y | Y | Y |
-| RHEL 8.1, 8.0, 7.8, 7.7, 7.6, 7.5, 7.4, 7.0, 6.x | Y<br> _RHEL 8.x requires [manual preparation](./prepare-for-migration.md#linux-machines)_ | Y | Y |
-| Cent OS 8.1, 8.0, 7.7, 7.6, 7.5, 7.4, 6.x | Y<br>_Cent OS 8.x requires [manual preparation](./prepare-for-migration.md#linux-machines)_ | Y | Y |
+| RHEL 8.1, 8.0, 7.8, 7.7, 7.6, 7.5, 7.4, 7.0, 6.x | Y | Y | Y |
+| Cent OS 8.1, 8.0, 7.7, 7.6, 7.5, 7.4, 6.x | Y | Y | Y |
| Oracle Linux 7.7, 7.7-CI | Y | Y | Y | ### Can I migrate Active Directory domain-controllers using Azure Migrate?
mysql Howto Create Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-create-users.md
After you create an Azure Database for MySQL server, you can use the first serve
CREATE DATABASE testdb; ```
-## Create a non-dmin user
+## Create a non-admin user
Now that the database is created, you can create a non-admin user with the ```CREATE USER``` MySQL statement. ``` sql CREATE USER 'db_user'@'%' IDENTIFIED BY 'StrongPassword!';
postgresql Connect Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/connect-azure-cli.md
You can provide additional arguments for this command to customize it. See all a
You can view all the arguments for this command with ```--help``` argument. ```azurecli
-az postgresql flexible-server connect --help
+az postgres flexible-server connect --help
``` ## Test database server connection
postgresql Reference Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-limits.md
Previously updated : 01/14/2022 Last updated : 02/25/2022 # Azure Database for PostgreSQL – Hyperscale (Citus) limits and limitations
Every PostgreSQL connection (even idle ones) uses at least 10 MB of memory, so
it's important to limit simultaneous connections. Here are the limits we chose to keep nodes healthy:
-* Coordinator node
- * Maximum connections
- * 300 for 0-3 vCores
- * 500 for 4-15 vCores
- * 1000 for 16+ vCores
- * Maximum user connections
- * 297 for 0-3 vCores
- * 497 for 4-15 vCores
- * 997 for 16+ vCores
-* Worker node
- * Maximum connections
- * 600
-
-Attempts to connect beyond these limits will fail with an error. The system
-reserves three connections for monitoring nodes, which is why there are three
-fewer connections available for user queries than connections total.
+* Maximum connections per node
+ * 300 for 0-3 vCores
+ * 500 for 4-15 vCores
+ * 1000 for 16+ vCores
+
+The connection limits above are for *user* connections (`max_connections` minus
+`superuser_reserved_connections`). We reserve extra connections for
+administration and recovery.
+
+The limits apply to both worker nodes and the coordinator node. Attempts to
+connect beyond these limits will fail with an error.
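The per-node tiering above can be sketched as a small lookup. This is an illustrative helper, not part of any Azure SDK; the reserved-connection count of 3 is an assumption carried over from the earlier version of this article, and the actual `superuser_reserved_connections` value may differ.

```python
# Sketch of the per-node connection limits described above (hypothetical
# helper, not Azure SDK code). Tier values mirror the documented limits.

def max_connections(vcores: int) -> int:
    """Total connections allowed on a Hyperscale (Citus) node."""
    if vcores <= 3:
        return 300
    if vcores <= 15:
        return 500
    return 1000

def max_user_connections(vcores: int, reserved: int = 3) -> int:
    """User connections: max_connections minus reserved connections.

    The reserved count of 3 is an assumption for illustration; the real
    superuser_reserved_connections setting may be different.
    """
    return max_connections(vcores) - reserved

print(max_user_connections(8))  # 497 with the assumed 3 reserved connections
```

For example, a 4-vCore node under these assumptions would accept 500 total connections, of which 497 are available for user queries.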
#### Connection pooling
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
| Azure Database for PostgreSQL - Single server (Microsoft.DBforPostgreSQL/servers) / postgresqlServer | privatelink.postgres.database.azure.com | postgres.database.azure.com | | Azure Database for MySQL (Microsoft.DBforMySQL/servers) / mysqlServer | privatelink.mysql.database.azure.com | mysql.database.azure.com | | Azure Database for MariaDB (Microsoft.DBforMariaDB/servers) / mariadbServer | privatelink.mariadb.database.azure.com | mariadb.database.azure.com |
-| Azure Key Vault (Microsoft.KeyVault/vaults) / vault | privatelink.vault.azure.net | vault.azure.net |
+| Azure Key Vault (Microsoft.KeyVault/vaults) / vault | privatelink.vaultcore.azure.net | vault.azure.net <br> vaultcore.azure.net |
| Azure Key Vault (Microsoft.KeyVault/managedHSMs) / Managed HSMs | privatelink.managedhsm.azure.net | managedhsm.azure.net | | Azure Kubernetes Service - Kubernetes API (Microsoft.ContainerService/managedClusters) / management | privatelink.{region}.azmk8s.io | {region}.azmk8s.io | | Azure Search (Microsoft.Search/searchServices) / searchService | privatelink.search.windows.net | search.windows.net |
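One hedged way to apply the recommended zone names in automation is a simple lookup table. The mapping below copies only the rows shown above (including the corrected Key Vault zone) and is not an exhaustive or official list; `zone_for` is a hypothetical helper name.

```python
# Minimal lookup of recommended private DNS zone names, taken from the table
# rows above (illustrative subset only, not an official mapping).
PRIVATE_DNS_ZONES = {
    "Microsoft.DBforPostgreSQL/servers": "privatelink.postgres.database.azure.com",
    "Microsoft.DBforMySQL/servers": "privatelink.mysql.database.azure.com",
    "Microsoft.DBforMariaDB/servers": "privatelink.mariadb.database.azure.com",
    "Microsoft.KeyVault/vaults": "privatelink.vaultcore.azure.net",
    "Microsoft.KeyVault/managedHSMs": "privatelink.managedhsm.azure.net",
    "Microsoft.Search/searchServices": "privatelink.search.windows.net",
}

def zone_for(resource_type: str) -> str:
    """Return the recommended private DNS zone for a resource type."""
    return PRIVATE_DNS_ZONES[resource_type]

print(zone_for("Microsoft.KeyVault/vaults"))  # privatelink.vaultcore.azure.net
```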
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-overview.md
A private-link resource is the destination target of a specified private endpoin
| Azure Purview | Microsoft.Purview/accounts | portal | | Azure Backup | Microsoft.RecoveryServices/vaults | vault | | Azure Relay | Microsoft.Relay/namespaces | namespace |
-| Microsoft Search | Microsoft.Search/searchServices | search service |
+| Azure Cognitive Search | Microsoft.Search/searchServices | search service |
| Azure Service Bus | Microsoft.ServiceBus/namespaces | namespace | | Azure SignalR Service | Microsoft.SignalRService/SignalR | signalr | | Azure SignalR Service | Microsoft.SignalRService/webPubSub | webpubsub |
purview Catalog Asset Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-asset-details.md
Title: 'How to view, edit, and delete assets'
+ Title: How to view, edit, and delete assets
description: This how to guide describes how you can view and edit asset details. --++ Previously updated : 09/27/2021 Last updated : 02/24/2022 # View, edit and delete assets in Azure Purview catalog
-This article discusses how to you can view your assets and their relevant details. It also describes how you can edit and delete assets from your catalog.
+This article discusses how you can view your assets and their relevant details. It also describes how you can edit and delete assets from your catalog.
## Prerequisites
This article discusses how to you can view your assets and their relevant detail
## Viewing asset details
-You can discover your assets in Azure Purview by either:
-- [Browsing the Azure Purview Data catalog](how-to-browse-catalog.md)-- [Searching the Azure Purview Data Catalog](how-to-search-catalog.md)
+You can discover your assets in the Azure Purview data catalog by either:
+- [Browsing the data catalog](how-to-browse-catalog.md)
+- [Searching the data catalog](how-to-search-catalog.md)
Once you find the asset you are looking for, you can view all of its details, and edit or delete them as described in the following sections.
Once you find the asset you are looking for, you can view all of its details, ed
:::image type="content" source="media/catalog-asset-details/asset-tabs.png" alt-text="Asset details tabs"::: -- **Overview** - The overview tab covers an asset's basic details like description, classification, hierarchy, and glossary terms.-- **Properties** - The properties tab covers both basic and advanced properties regarding an asset.
+- **Overview** - An asset's basic details like description, classification, hierarchy, and glossary terms.
+- **Properties** - The technical metadata and relationships discovered in the data source.
- **Schema** - The schema of the asset including column names, data types, column level classifications, terms, and descriptions are represented in the schema tab. - **Lineage** - This tab contains lineage graph details for assets where it is available. - **Contacts** - Every asset can have an assigned owner and expert that can be viewed and managed from the contacts tab.-- **Related** - This tab lets you navigate to assets that are related to the current asset you are viewing.
+- **Related** - This tab lets you navigate through the technical hierarchy of assets that are related to the current asset you are viewing.
## Asset overview The overview section of the asset details gives you a summarized view of an asset. The sections that follow explains the different parts of the overview page.
purview Concept Asset Normalization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-asset-normalization.md
Previously updated : 07/23/2021 Last updated : 02/24/2022
Before: `https://mystore.azuredatalakestore.net/folderA/folderB/abc.csv`
After: `adl://mystore.azuredatalakestore.net/folderA/folderB/abc.csv`
+### Remove Trailing Slash
+Remove the trailing slash from higher-level assets for Azure Blob, ADLS Gen1, and ADLS Gen2.
+
+Applies to: Azure Blob, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2
+
+Asset types: "azure_blob_container", "azure_blob_service", "azure_storage_account", "azure_datalake_gen2_service", "azure_datalake_gen2_filesystem", "azure_datalake_gen1_account"
+
+Before: `https://myaccount.core.windows.net/`
+
+After: `https://myaccount.core.windows.net`
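The rule above can be sketched as a small normalization function. This is an illustrative helper, not Azure Purview code; the asset-type set is copied from the list above, and URLs of other asset types are left untouched.

```python
# Sketch of the trailing-slash normalization rule described above
# (hypothetical helper, not part of Azure Purview).
HIGHER_LEVEL_TYPES = {
    "azure_blob_container", "azure_blob_service", "azure_storage_account",
    "azure_datalake_gen2_service", "azure_datalake_gen2_filesystem",
    "azure_datalake_gen1_account",
}

def normalize_url(url: str, asset_type: str) -> str:
    """Remove a trailing slash from qualified names of higher-level assets."""
    if asset_type in HIGHER_LEVEL_TYPES:
        return url.rstrip("/")
    return url

print(normalize_url("https://myaccount.core.windows.net/", "azure_blob_service"))
# -> https://myaccount.core.windows.net
```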
## Next steps [Scan in an Azure Blob Storage](register-scan-azure-blob-storage-source.md) account into the Azure Purview data map.
purview How To Certify Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-certify-assets.md
+
+ Title: Asset certification in the Azure Purview data catalog
+description: How to certify assets in the Azure Purview data catalog
+++++ Last updated : 02/24/2022+
+# Asset certification in the Azure Purview data catalog
+
+As an Azure Purview data catalog grows in size, it becomes important for data consumers to understand what assets they can trust. Data consumers must know if an asset meets their organization's quality standards and can be regarded as reliable. Azure Purview allows data stewards to manually endorse assets to indicate that they're ready to use across an organization or business unit. This article describes how data stewards can certify assets and data consumers can view certification labels.
+
+## How to certify an asset
+
+To certify an asset, you must be a **data curator** for the collection containing the asset.
+
+1. Navigate to the [asset details](catalog-asset-details.md) of the desired asset. Select **Edit**.
+
+ :::image type="content" source="media/how-to-certify-assets/edit-asset.png" alt-text="Edit an asset from the asset details page" border="true":::
+
+1. Toggle the **Certified** field to **Yes**.
+
+ :::image type="content" source="media/how-to-certify-assets/toggle-certification-on.png" alt-text="Toggle an asset to be certified" border="true":::
+
+1. Save your changes. The asset will now have a "Certified" label next to the asset name.
+
+ :::image type="content" source="media/how-to-certify-assets/view-certified-asset.png" alt-text="An asset with a certified label" border="true":::
+
+> [!NOTE]
+> Power BI assets can only be [certified in a Power BI workspace](https://docs.microsoft.com/power-bi/collaborate-share/service-endorse-content). Power BI endorsement labels are displayed in Azure Purview's search and browse experiences.
+
+### Certify assets in bulk
+
+You can use the Azure Purview [bulk edit experience](how-to-bulk-edit-assets.md) to certify multiple assets at once.
+
+1. After searching or browsing the data catalog, select the checkbox next to the assets you wish to certify.
+
+ :::image type="content" source="media/how-to-certify-assets/bulk-edit-select.png" alt-text="Select assets to bulk certify" border="true":::
+
+1. Select **View selected**.
+1. Select **Bulk edit**.
+
+ :::image type="content" source="media/how-to-certify-assets/bulk-edit-open.png" alt-text="Open the bulk edit experience" border="true":::
+
+1. Choose attribute **Certified**, operation **Replace with**, and new value **Yes**.
+
+ :::image type="content" source="media/how-to-certify-assets/bulk-edit-certify.png" alt-text="Apply certification labels to all selected assets" border="true":::
+
+1. Select **Apply**.
+
+All assets selected will have the "Certified" label.
+
+## Viewing certification labels in Search
+
+When searching or browsing the data catalog, you'll see a certification label on any asset that is certified. Certified assets are also boosted in search results to help data consumers discover them easily.
+++
+## Next steps
+
+Discover your assets in the Azure Purview data catalog by either:
+- [Browsing the data catalog](how-to-browse-catalog.md)
+- [Searching the data catalog](how-to-search-catalog.md)
purview Register Scan On Premises Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-on-premises-sql-server.md
This article outlines how to register on-premises SQL server instances, and how
\** Lineage is supported if dataset is used as a source/sink in [Data Factory Copy activity](how-to-link-azure-data-factory.md)
+The supported SQL Server versions are 2005 and above. SQL Server Express LocalDB is not supported.
+ ## Prerequisites * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
service-bus-messaging Service Bus Partitioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-partitioning.md
using (TransactionScope ts = new TransactionScope(committableTransaction))
ServiceBusMessage msg = new ServiceBusMessage("This is a message"); msg.PartitionKey = "myPartitionKey"; await sender.SendMessageAsync(msg);
- await ts.CompleteAsync();
+ ts.Complete();
} committableTransaction.Commit(); ```
using (TransactionScope ts = new TransactionScope(committableTransaction))
ServiceBusMessage msg = new ServiceBusMessage("This is a message"); msg.SessionId = "mySession"; await sender.SendMessageAsync(msg);
- await ts.CompleteAsync();
+ ts.Complete();
} committableTransaction.Commit(); ```
service-fabric How To Managed Cluster Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-autoscale.md
Last updated 2/14/2022
* Only [Azure Monitor published metrics](../azure-monitor/essentials/metrics-supported.md) are supported. >[!NOTE]
-> If using an OS Image "-with-containersö memory metrics from Azure Monitor will not be available.
+> If using a Windows OS image with the Hyper-V role enabled (that is, the VM is configured for nested virtualization), the Available Memory metric will not be available, because the dynamic memory driver within the VM is in a stopped state.
A common scenario where autoscaling is useful is when the load on a particular service varies over time. For example, a service such as a gateway can scale based on the amount of resources necessary to handle incoming requests. Let's take a look at an example of what those scaling rules could look like and we'll use them later in the article:
service-fabric Service Fabric Cluster Scale In Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-scale-in-out.md
Virtual machine scale sets are an Azure compute resource that you can use to dep
> > > [!NOTE]
-> If using an OS Image "-with-containersΓÇ¥ memory metrics from Azure Monitor will not be available.
+> If using a Windows OS image with the Hyper-V role enabled (that is, the VM is configured for nested virtualization), the Available Memory metric will not be available, because the dynamic memory driver within the VM is in a stopped state.
> >
site-recovery Vmware Azure Disaster Recovery Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-disaster-recovery-powershell.md
Errors : {}
## Create storage accounts for replication
-**To write to managed disk, use [Powershell Az.RecoveryServices module 2.0.0](https://www.powershellgallery.com/packages/Az.RecoveryServices/2.0.0-preview) onwards.** It only requires creation of a log storage account. It is recommended to use a standard account type and LRS redundancy since it is used to store only temporary logs. Ensure that the storage account is created in the same Azure region as the vault.
+**To write to managed disk, use [PowerShell Az.RecoveryServices module 2.0.0](https://www.powershellgallery.com/packages/Az.RecoveryServices/2.0.0-preview) onwards.** It only requires creation of a log storage account. It is recommended to use a standard account type and LRS redundancy since it is used to store only temporary logs. Ensure that the storage account is created in the same Azure region as the vault.
If you are using a version of Az.RecoveryServices module older than 2.0.0, use the following steps to create storage accounts. These storage accounts are used later to replicate virtual machines. Ensure that the storage accounts are created in the same Azure region as the vault. You can skip this step if you plan to use an existing storage account for replication.
spring-cloud How To Enable System Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-enable-system-assigned-managed-identity.md
To set up a managed identity in the portal, first create an app, and then enable
3. Select **Identity**. 4. Within the **System assigned** tab, switch **Status** to *On*. Select **Save**.
-![Managed identity in portal](/images/enterprise/msi/msi-enable.png)
+![Managed identity in portal](./media/enterprise/msi/msi-enable.png)
# [Azure CLI](#tab/azure-cli)
To remove system-assigned managed identity from an app that no longer needs it:
1. Navigate to the desired application and select **Identity**. 1. Under **System assigned**/**Status**, select **Off** and then select **Save**:
-![Managed identity](/images/enterprise/msi/msi-disable.png)
+![Managed identity](./media/enterprise/msi/msi-disable.png)
# [Azure CLI](#tab/azure-cli)
spring-cloud How To Enterprise Marketplace Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-enterprise-marketplace-offer.md
You can obtain and pay for a license to Tanzu components through an [Azure Marke
To purchase in the Azure Marketplace, you must meet the following prerequisites: - Your Azure subscription is registered to the `Microsoft.SaaS` resource provider. See how to [register a resource provider](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).-- Your Azure subscription has an [associated payment method](/marketplace/azure-marketplace-overview.md#purchasing-requirements). Azure credits or free MSDN subscriptions aren't supported.
+- Your Azure subscription has an [associated payment method](/marketplace/azure-marketplace-overview#purchasing-requirements). Azure credits or free MSDN subscriptions aren't supported.
- Your Azure subscription belongs to a billing account in a [Marketplace supported geographic location](../marketplace/marketplace-geo-availability-currencies.md#supported-geographic-locations). - Your organization allows [Azure Marketplace purchases](../cost-management-billing/manage/ea-azure-marketplace.md#enabling-azure-marketplace-purchases).-- Your organization allows acquiring any Azure Marketplace software application listed in [Purchase policy management](/marketplace/azure-purchasing-invoicing.md#purchase-policy-management).
+- Your organization allows acquiring any Azure Marketplace software application listed in [Purchase policy management](/marketplace/azure-purchasing-invoicing#purchase-policy-management).
## View Azure Spring Cloud Enterprise Tier offering from Azure Marketplace
storage Monitor Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/monitor-blob-storage.md
If you choose to stream your logs to an event hub, you'll pay for the volume of
> [!div class="mx-imgBorder"] > ![Diagnostic settings page log analytics](media/monitor-blob-storage/diagnostic-logs-settings-pane-log-analytics.png) + ### [PowerShell](#tab/azure-powershell) 1. Open a Windows PowerShell command window, and sign in to your Azure subscription by using the `Connect-AzAccount` command. Then, follow the on-screen directions.
If you choose to stream your logs to an event hub, you'll pay for the volume of
Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet with the `EventHubAuthorizationRuleId` parameter. ```powershell
-Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -EventHubAuthorizationRuleId <event-hub-namespace-and-key-name> -Enabled $true -Category <operations-to-log> -RetentionEnabled <retention-bool> -RetentionInDays <number-of-days>
+Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -EventHubAuthorizationRuleId <event-hub-namespace-and-key-name> -Enabled $true -Category <operations-to-log>
``` Here's an example:
For a description of each parameter, see the [Stream Data to Event Hubs via Powe
Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet with the `WorkspaceId` parameter. ```powershell
-Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -WorkspaceId <log-analytics-workspace-resource-id> -Enabled $true -Category <operations-to-log> -RetentionEnabled <retention-bool> -RetentionInDays <number-of-days>
+Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -WorkspaceId <log-analytics-workspace-resource-id> -Enabled $true -Category <operations-to-log>
``` + Here's an example: `Set-AzDiagnosticSetting -ResourceId /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount/blobServices/default -WorkspaceId /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.OperationalInsights/workspaces/my-analytic-workspace -Enabled $true -Category StorageDelete`
If you choose to stream your logs to an event hub, you'll pay for the volume of
Enable logs by using the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command. ```azurecli-interactive
-az monitor diagnostic-settings create --name <setting-name> --event-hub <event-hub-name> --event-hub-rule <event-hub-namespace-and-key-name> --resource <storage-account-resource-id> --logs '[{"category": <operations>, "enabled": true "retentionPolicy": {"days": <number-days>, "enabled": <retention-bool}}]'
+az monitor diagnostic-settings create --name <setting-name> --event-hub <event-hub-name> --event-hub-rule <event-hub-namespace-and-key-name> --resource <storage-account-resource-id> --logs '[{"category": <operations>, "enabled": true}]'
``` Here's an example:
For a description of each parameter, see the [Stream data to Event Hubs via Azur
Enable logs by using the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command. ```azurecli-interactive
-az monitor diagnostic-settings create --name <setting-name> --workspace <log-analytics-workspace-resource-id> --resource <storage-account-resource-id> --logs '[{"category": <category name>, "enabled": true "retentionPolicy": {"days": <days>, "enabled": <retention-bool}}]'
+az monitor diagnostic-settings create --name <setting-name> --workspace <log-analytics-workspace-resource-id> --resource <storage-account-resource-id> --logs '[{"category": <category name>, "enabled": true}]'
``` + Here's an example: `az monitor diagnostic-settings create --name setting1 --workspace /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.OperationalInsights/workspaces/my-analytic-workspace --resource /subscriptions/938841be-a40c-4bf4-9210-08bcf06c09f9/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/myloggingstorageaccount/blobServices/default --logs '[{"category": "StorageDelete", "enabled": true}]'`
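The `--logs` argument in the CLI commands above is a JSON array, and quoting mistakes inside it surface only when the request fails. As a minimal sketch (the resource IDs are placeholders and the `az` invocation is shown but not executed here), the payload can be kept in a variable and validated locally before it ever reaches the CLI:

```shell
# Build the --logs payload in a variable so quoting mistakes surface early.
# Note the category name must itself be quoted inside the JSON.
LOGS_PAYLOAD='[{"category": "StorageDelete", "enabled": true}]'

# Validate locally (requires python3) before handing it to the CLI.
echo "$LOGS_PAYLOAD" | python3 -m json.tool

# Hypothetical invocation -- the resource IDs below are placeholders:
# az monitor diagnostic-settings create --name setting1 \
#   --workspace <log-analytics-workspace-resource-id> \
#   --resource <storage-account-resource-id> \
#   --logs "$LOGS_PAYLOAD"
```

The same pattern applies to the event hub and storage-account variants of the command, since only the destination flags differ.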
storage Network File System Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/network-file-system-protocol-known-issues.md
Title: Known issues with NFS 3.0 in Azure Blob Storage
-description: Learn about limitations and known issues of Network File System (NFS) 3.0 protocol support in Azure Blob Storage.
+description: Learn about limitations and known issues of Network File System (NFS) 3.0 protocol support for Azure Blob Storage.
-# Known issues with Network File System (NFS) 3.0 protocol support in Azure Blob Storage
+# Known issues with Network File System (NFS) 3.0 protocol support for Azure Blob Storage
-This article describes limitations and known issues of Network File System (NFS) 3.0 protocol support in Azure Blob Storage.
+This article describes limitations and known issues of Network File System (NFS) 3.0 protocol support for Azure Blob Storage.
> [!IMPORTANT] > Because you must enable the hierarchical namespace feature of your account to use NFS 3.0, all of the known issues that are described in the [Known issues with Azure Data Lake Storage Gen2](data-lake-storage-known-issues.md) article also apply to your account.
Windows client for NFS is not yet supported
When you enable NFS 3.0 protocol support, some Blob Storage features will be fully supported, but some features might be supported only at the preview level or not yet supported at all.
-To see how each Blob Storage feature is supported in accounts that have NFS 3.0 support enabled, see [Blob Storage feature support in Azure Storage accounts](storage-feature-support-in-storage-accounts.md).
+To see how each Blob Storage feature is supported in accounts that have NFS 3.0 support enabled, see [Blob Storage feature support for Azure Storage accounts](storage-feature-support-in-storage-accounts.md).
## See also -- [Network File System (NFS) 3.0 protocol support in Azure Blob Storage](network-file-system-protocol-support.md)
+- [Network File System (NFS) 3.0 protocol support for Azure Blob Storage](network-file-system-protocol-support.md)
- [Mount Blob storage by using the Network File System (NFS) 3.0 protocol](network-file-system-protocol-support-how-to.md)
storage Network File System Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/network-file-system-protocol-support-how-to.md
# Mount Blob Storage by using the Network File System (NFS) 3.0 protocol
-This article provides guidance on how to mount a container in Azure Blob Storage from a Linux-based Azure virtual machine (VM) or a Linux system that runs on-premises by using the Network File System (NFS) 3.0 protocol. To learn more about NFS 3.0 protocol support in Blob Storage, see [Network File System (NFS) 3.0 protocol support in Azure Blob Storage](network-file-system-protocol-support.md).
+This article provides guidance on how to mount a container in Azure Blob Storage from a Linux-based Azure virtual machine (VM) or a Linux system that runs on-premises by using the Network File System (NFS) 3.0 protocol. To learn more about NFS 3.0 protocol support in Blob Storage, see [Network File System (NFS) 3.0 protocol support for Azure Blob Storage](network-file-system-protocol-support.md).
## Step 1: Create an Azure virtual network
Create a directory on your Linux system, and then mount the container in the sto
## See also -- [Network File System (NFS) 3.0 protocol support in Azure Blob Storage](network-file-system-protocol-support.md)-- [Known issues with Network File System (NFS) 3.0 protocol support in Azure Blob Storage](network-file-system-protocol-known-issues.md)
+- [Network File System (NFS) 3.0 protocol support for Azure Blob Storage](network-file-system-protocol-support.md)
+- [Known issues with Network File System (NFS) 3.0 protocol support for Azure Blob Storage](network-file-system-protocol-known-issues.md)
storage Network File System Protocol Support Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/network-file-system-protocol-support-performance.md
# Network File System (NFS) 3.0 performance considerations in Azure Blob storage
-Blob storage now supports the Network File System (NFS) 3.0 protocol. This article contains recommendations that help you to optimize the performance of your storage requests. To learn more about NFS 3.0 support in Azure Blob Storage, see [Network File System (NFS) 3.0 protocol support in Azure Blob storage](network-file-system-protocol-support.md).
+Blob storage now supports the Network File System (NFS) 3.0 protocol. This article contains recommendations that help you to optimize the performance of your storage requests. To learn more about NFS 3.0 support for Azure Blob Storage, see [Network File System (NFS) 3.0 protocol support for Azure Blob storage](network-file-system-protocol-support.md).
## Add clients to increase throughput
Some applications may require low latency in addition to high throughput. You ca
## Next steps -- To learn more about NFS 3.0 support in Azure Blob Storage, see [Network File System (NFS) 3.0 protocol support in Azure Blob storage](network-file-system-protocol-support.md).
+- To learn more about NFS 3.0 support for Azure Blob Storage, see [Network File System (NFS) 3.0 protocol support for Azure Blob storage](network-file-system-protocol-support.md).
- To get started, see [Mount Blob storage by using the Network File System (NFS) 3.0 protocol](network-file-system-protocol-support-how-to.md).
storage Network File System Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/network-file-system-protocol-support.md
Title: Network File System 3.0 support in Azure Blob Storage
+ Title: Network File System 3.0 support for Azure Blob Storage
description: Blob storage now supports the Network File System (NFS) 3.0 protocol. This support enables Linux clients to mount a container in Blob storage from an Azure Virtual Machine (VM) or a computer that runs on-premises.
-# Network File System (NFS) 3.0 protocol support in Azure Blob Storage
+# Network File System (NFS) 3.0 protocol support for Azure Blob Storage
Blob storage now supports the Network File System (NFS) 3.0 protocol. This support provides Linux file system compatibility at object storage scale and prices and enables Linux clients to mount a container in Blob storage from an Azure Virtual Machine (VM) or a computer on-premises.
storage Secure File Transfer Protocol Host Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-host-keys.md
Title: Host keys for SFTP support in Azure Blob Storage (preview) | Microsoft Docs
+ Title: Host keys for SFTP support for Azure Blob Storage (preview) | Microsoft Docs
description: Find a list of valid host keys when using an SFTP client to connect with Azure Blob Storage.
This article contains a list of valid host keys used to connect to Azure Blob Storage from SFTP clients.
-Blob storage now supports the SSH File Transfer Protocol (SFTP). This support provides the ability to securely connect to Blob Storage accounts via an SFTP endpoint, allowing you to leverage SFTP for file access, file transfer, as well as file management. For more information, see [SSH File Transfer Protocol (SFTP) support in Azure Blob Storage](secure-file-transfer-protocol-support.md).
+Blob storage now supports the SSH File Transfer Protocol (SFTP). This support provides the ability to securely connect to Blob Storage accounts via an SFTP endpoint, allowing you to use SFTP for file access, file transfer, and file management. For more information, see [SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-support.md).
When you connect to Blob Storage by using an SFTP client, you might be prompted to trust a host key. During the public preview, you can verify the host key by finding that key in the list presented in this article.
When you connect to Blob Storage by using an SFTP client, you might be prompted
## See also -- [SSH File Transfer Protocol (SFTP) support in Azure Blob Storage](secure-file-transfer-protocol-support.md)
+- [SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-support.md)
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
Title: Known issues with SFTP in Azure Blob Storage (preview) | Microsoft Docs
-description: Learn about limitations and known issues of SSH File Transfer Protocol (SFTP) support in Azure Blob Storage.
+description: Learn about limitations and known issues of SSH File Transfer Protocol (SFTP) support for Azure Blob Storage.
-# Known issues with SSH File Transfer Protocol (SFTP) support in Azure Blob Storage (preview)
+# Known issues with SSH File Transfer Protocol (SFTP) support for Azure Blob Storage (preview)
-This article describes limitations and known issues of SFTP support in Azure Blob Storage.
+This article describes limitations and known issues of SFTP support for Azure Blob Storage.
> [!IMPORTANT] > SFTP support is currently in PREVIEW and is available in [these regions](secure-file-transfer-protocol-support.md#regional-availability).
This article describes limitations and known issues of SFTP support in Azure Blo
## See also -- [SSH File Transfer Protocol (SFTP) support in Azure Blob Storage](secure-file-transfer-protocol-support.md)
+- [SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-support.md)
- [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP) (preview)](secure-file-transfer-protocol-support-how-to.md)
storage Secure File Transfer Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support-how-to.md
You can securely connect to the Blob Storage endpoint of an Azure Storage account by using an SFTP client, and then upload and download files. This article shows you how to enable SFTP, and then connect to Blob Storage by using an SFTP client.
-To learn more about SFTP support in Azure Blob Storage, see [SSH File Transfer Protocol (SFTP) in Azure Blob Storage](secure-file-transfer-protocol-support.md).
+To learn more about SFTP support for Azure Blob Storage, see [SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-support.md).
> [!IMPORTANT] > SFTP support is currently in PREVIEW and is available in [these regions](secure-file-transfer-protocol-support.md#regional-availability).
See the documentation of your SFTP client for guidance about how to connect and
## See also -- [SSH File Transfer Protocol (SFTP) support in Azure Blob Storage](secure-file-transfer-protocol-support.md)-- [Known issues with SSH File Transfer Protocol (SFTP) support in Azure Blob Storage](secure-file-transfer-protocol-known-issues.md)
+- [SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-support.md)
+- [Known issues with SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-known-issues.md)
storage Secure File Transfer Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support.md
Title: SFTP support in Azure Blob Storage (preview) | Microsoft Docs
+ Title: SFTP support for Azure Blob Storage (preview) | Microsoft Docs
description: Blob storage now supports the SSH File Transfer Protocol (SFTP).
-# SSH File Transfer Protocol (SFTP) support in Azure Blob Storage (preview)
+# SSH File Transfer Protocol (SFTP) support for Azure Blob Storage (preview)
Blob storage now supports the SSH File Transfer Protocol (SFTP). This support provides the ability to securely connect to Blob Storage accounts via an SFTP endpoint, allowing you to use SFTP for file access, file transfer, and file management.
Azure allows secure data transfer to Blob Storage accounts using Azure Blob serv
Prior to the release of this feature, if you wanted to use SFTP to transfer data to Azure Blob Storage, you had to either purchase a third-party product or orchestrate your own solution. You would have to create a virtual machine (VM) in Azure to host an SFTP server, and then figure out a way to move data into the storage account.
-Now, with SFTP support in Azure Blob Storage, you can enable an SFTP endpoint for Blob Storage accounts with a single setting. Then you can set up local user identities for authentication to transfer data securely without the need to do any additional work.
+Now, with SFTP support for Azure Blob Storage, you can enable an SFTP endpoint for Blob Storage accounts with a single setting. Then you can set up local user identities for authentication to transfer data securely without the need to do any additional work.
-This article describes SFTP support in Azure Blob Storage. To learn how to enable SFTP for your storage account, see [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP) (preview)](secure-file-transfer-protocol-support-how-to.md).
+This article describes SFTP support for Azure Blob Storage. To learn how to enable SFTP for your storage account, see [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP) (preview)](secure-file-transfer-protocol-support-how-to.md).
## SFTP and the hierarchical namespace
You can use many different SFTP clients to securely connect and then transfer fi
| ecdsa-sha2-nistp384| diffie-hellman-group16-sha512 | aes256-cbc | | ||| aes192-cbc ||
-SFTP support in Azure Blob Storage currently limits its cryptographic algorithm support based on security considerations. We strongly recommend that customers utilize Microsoft Security Development Lifecycle (SDL) approved algorithms to securely access their data. More details can be found [here](/security/sdl/cryptographic-recommendations)
+SFTP support for Azure Blob Storage currently limits its cryptographic algorithm support based on security considerations. We strongly recommend that customers use Microsoft Security Development Lifecycle (SDL) approved algorithms to securely access their data. For details, see [Microsoft SDL cryptographic recommendations](/security/sdl/cryptographic-recommendations).
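The algorithm table above can be enforced on the client side so a connection never negotiates anything outside the approved set. A hedged sketch (the account name is a placeholder, and the keywords are standard OpenSSH client options, not something specific to Azure):

```shell
# Sketch: an OpenSSH client config fragment pinning the connection to the
# algorithms listed in the table above. "myaccount" is a placeholder.
cat > sftp_azure_config <<'EOF'
Host myaccount.blob.core.windows.net
    HostKeyAlgorithms ecdsa-sha2-nistp384
    KexAlgorithms diffie-hellman-group16-sha512
    Ciphers aes256-cbc
EOF

# Connect with (placeholder user name):
# sftp -F sftp_azure_config <local-user>@myaccount.blob.core.windows.net
echo "config written"
```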
## Known issues and limitations
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
To apply a virtual network rule to a storage account, the user must have the app
The storage account and the virtual networks granted access may be in different subscriptions, including subscriptions that are a part of a different Azure AD tenant. > [!NOTE]
-> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant are currently only supported through Powershell, CLI and REST APIs. Such rules cannot be configured through the Azure portal, though they may be viewed in the portal.
+> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant is currently supported only through PowerShell, CLI, and REST APIs. Such rules cannot be configured through the Azure portal, though they may be viewed in the portal.
### Available virtual network regions
You can manage virtual network rules for storage accounts through the Azure port
> [!NOTE] > If a service endpoint for Azure Storage wasn't previously configured for the selected virtual network and subnets, you can configure it as part of this operation. >
- > Presently, only virtual networks belonging to the same Azure Active Directory tenant are shown for selection during rule creation. To grant access to a subnet in a virtual network belonging to another tenant, please use , Powershell, CLI or REST APIs.
+ > Presently, only virtual networks belonging to the same Azure Active Directory tenant are shown for selection during rule creation. To grant access to a subnet in a virtual network belonging to another tenant, please use PowerShell, CLI, or REST APIs.
> > Even if you registered the `AllowGlobalTagsForStorageOnly` feature, subnets in regions other than the region of the storage account or its paired region aren't shown for selection. If you want to enable access to your storage account from a virtual network/subnet in a different region, use the instructions in the PowerShell or Azure CLI tabs.
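Because the portal won't list networks from another tenant, the subnet's full resource ID has to be assembled by hand and passed to the CLI. A minimal sketch, assuming the `az storage account network-rule add` command with a `--subnet` resource ID (every name below is a placeholder, and the `az` call is shown but not executed):

```shell
# Sketch: grant a network rule for a subnet in a VNet under a different tenant.
# All segments of this ID are placeholders.
SUBNET_ID="/subscriptions/<remote-sub-id>/resourceGroups/<remote-rg>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<subnet>"

# Run from the storage account's own subscription context:
# az storage account network-rule add \
#   --account-name <storage-account> --resource-group <rg> \
#   --subnet "$SUBNET_ID"
echo "$SUBNET_ID"
```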
storage Storage Files Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-monitoring.md
If you choose to stream your logs to an event hub, you'll pay for the volume of
> [!div class="mx-imgBorder"] > ![Diagnostic settings page log analytics](media/storage-files-monitoring/diagnostic-logs-settings-pane-log-analytics.png) + ### [PowerShell](#tab/azure-powershell) 1. Open a Windows PowerShell command window, and sign in to your Azure subscription by using the `Connect-AzAccount` command. Then, follow the on-screen directions.
If you choose to stream your logs to an event hub, you'll pay for the volume of
Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet with the `EventHubAuthorizationRuleId` parameter. ```powershell
-Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -EventHubAuthorizationRuleId <event-hub-namespace-and-key-name> -Enabled $true -Category <operations-to-log> -RetentionEnabled <retention-bool> -RetentionInDays <number-of-days>
+Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -EventHubAuthorizationRuleId <event-hub-namespace-and-key-name> -Enabled $true -Category <operations-to-log>
``` Here's an example:
For a description of each parameter, see the [Stream Data to Event Hubs via Powe
Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet with the `WorkspaceId` parameter. ```powershell
-Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -WorkspaceId <log-analytics-workspace-resource-id> -Enabled $true -Category <operations-to-log> -RetentionEnabled <retention-bool> -RetentionInDays <number-of-days>
+Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -WorkspaceId <log-analytics-workspace-resource-id> -Enabled $true -Category <operations-to-log>
``` + Here's an example: `Set-AzDiagnosticSetting -ResourceId /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount/fileServices/default -WorkspaceId /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.OperationalInsights/workspaces/my-analytic-workspace -Enabled $true -Category StorageDelete`
If you choose to stream your logs to an event hub, you'll pay for the volume of
Enable logs by using the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command. ```azurecli-interactive
-az monitor diagnostic-settings create --name <setting-name> --event-hub <event-hub-name> --event-hub-rule <event-hub-namespace-and-key-name> --resource <storage-account-resource-id> --logs '[{"category": <operations>, "enabled": true "retentionPolicy": {"days": <number-days>, "enabled": <retention-bool}}]'
+az monitor diagnostic-settings create --name <setting-name> --event-hub <event-hub-name> --event-hub-rule <event-hub-namespace-and-key-name> --resource <storage-account-resource-id> --logs '[{"category": <operations>, "enabled": true}]'
``` Here's an example:
For a description of each parameter, see the [Stream data to Event Hubs via Azur
Enable logs by using the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command. ```azurecli-interactive
-az monitor diagnostic-settings create --name <setting-name> --workspace <log-analytics-workspace-resource-id> --resource <storage-account-resource-id> --logs '[{"category": <category name>, "enabled": true "retentionPolicy": {"days": <days>, "enabled": <retention-bool}}]'
+az monitor diagnostic-settings create --name <setting-name> --workspace <log-analytics-workspace-resource-id> --resource <storage-account-resource-id> --logs '[{"category": <category name>, "enabled": true}]'
``` + Here's an example: `az monitor diagnostic-settings create --name setting1 --workspace /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.OperationalInsights/workspaces/my-analytic-workspace --resource /subscriptions/938841be-a40c-4bf4-9210-08bcf06c09f9/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/myloggingstorageaccount/fileServices/default --logs '[{"category": "StorageDelete", "enabled": true}]'`
storage Storage Troubleshoot Windows File Connection Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-troubleshoot-windows-file-connection-problems.md
By setting up a VPN to your specific Storage Account, the traffic will go throug
#### Solution 3 — Unblock port 445 with help of your ISP/IT Admin Work with your IT department or ISP to open port 445 outbound to [Azure IP ranges](https://www.microsoft.com/download/details.aspx?id=41653).
-#### Solution 4 — Use REST API-based tools like Storage Explorer/Powershell
+#### Solution 4 — Use REST API-based tools like Storage Explorer/PowerShell
Azure Files also supports REST in addition to SMB. REST access works over port 443 (standard TCP). There are various tools written using the REST API that enable a rich UI experience. [Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows) is one of them. [Download and Install Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) and connect to your file share backed by Azure Files. You can also use [PowerShell](./storage-how-to-use-files-portal.md), which also uses the REST API. ### Cause 2: NTLMv1 is enabled
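Before picking between SMB and the REST-based tools above, it helps to confirm which outbound port is actually reachable. A sketch using bash's `/dev/tcp` redirection (the endpoint name in the comments is a placeholder):

```shell
# Sketch: probe whether an outbound TCP port is reachable, with a 3-second timeout.
check_port() {
  local host=$1 port=$2
  if timeout 3 bash -c "cat < /dev/null > /dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open"
  else
    echo "blocked"
  fi
}

# Placeholder endpoint -- substitute your storage account name:
# check_port <account>.file.core.windows.net 445   # SMB
# check_port <account>.file.core.windows.net 443   # REST (Storage Explorer, PowerShell)
```

If 445 reports `blocked` but 443 is `open`, the REST-based tools are the practical fallback.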
storage Monitor Queue Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/monitor-queue-storage.md
If you choose to stream your logs to an event hub, you'll pay for the volume of
> [!div class="mx-imgBorder"] > ![Diagnostic settings page log analytics](media/monitor-queue-storage/diagnostic-logs-settings-pane-log-analytics.png) + ### [PowerShell](#tab/azure-powershell) 1. Open a Windows PowerShell command window, and sign in to your Azure subscription by using the `Connect-AzAccount` command. Then, follow the on-screen directions.
If you choose to stream your logs to an event hub, you'll pay for the volume of
Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet with the `EventHubAuthorizationRuleId` parameter. ```powershell
-Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -EventHubAuthorizationRuleId <event-hub-namespace-and-key-name> -Enabled $true -Category <operations-to-log> -RetentionEnabled <retention-bool> -RetentionInDays <number-of-days>
+Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -EventHubAuthorizationRuleId <event-hub-namespace-and-key-name> -Enabled $true -Category <operations-to-log>
``` Here's an example:
For a description of each parameter, see [Stream data to Event Hubs via PowerShe
Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet with the `WorkspaceId` parameter. ```powershell
-Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -WorkspaceId <log-analytics-workspace-resource-id> -Enabled $true -Category <operations-to-log> -RetentionEnabled <retention-bool> -RetentionInDays <number-of-days>
+Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -WorkspaceId <log-analytics-workspace-resource-id> -Enabled $true -Category <operations-to-log>
``` + Here's an example: `Set-AzDiagnosticSetting -ResourceId /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount/queueServices/default -WorkspaceId /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.OperationalInsights/workspaces/my-analytic-workspace -Enabled $true -Category StorageDelete`
If you choose to stream your logs to an event hub, you'll pay for the volume of
Enable logs by using the [`az monitor diagnostic-settings create`](/cli/azure/monitor/diagnostic-settings#az_monitor_diagnostic_settings_create) command. ```azurecli-interactive
-az monitor diagnostic-settings create --name <setting-name> --event-hub <event-hub-name> --event-hub-rule <event-hub-namespace-and-key-name> --resource <storage-account-resource-id> --logs '[{"category": <operations>, "enabled": true "retentionPolicy": {"days": <number-days>, "enabled": <retention-bool}}]'
+az monitor diagnostic-settings create --name <setting-name> --event-hub <event-hub-name> --event-hub-rule <event-hub-namespace-and-key-name> --resource <storage-account-resource-id> --logs '[{"category": <operations>, "enabled": true}]'
``` Here's an example:
For a description of each parameter, see [Stream data to Event Hubs via Azure CL
Enable logs by using the [`az monitor diagnostic-settings create`](/cli/azure/monitor/diagnostic-settings#az_monitor_diagnostic_settings_create) command. ```azurecli-interactive
-az monitor diagnostic-settings create --name <setting-name> --workspace <log-analytics-workspace-resource-id> --resource <storage-account-resource-id> --logs '[{"category": <category name>, "enabled": true "retentionPolicy": {"days": <days>, "enabled": <retention-bool}}]'
+az monitor diagnostic-settings create --name <setting-name> --workspace <log-analytics-workspace-resource-id> --resource <storage-account-resource-id> --logs '[{"category": <category name>, "enabled": true}]'
``` + Here's an example: `az monitor diagnostic-settings create --name setting1 --workspace /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.OperationalInsights/workspaces/my-analytic-workspace --resource /subscriptions/938841be-a40c-4bf4-9210-08bcf06c09f9/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/myloggingstorageaccount/queueServices/default --logs '[{"category": "StorageDelete", "enabled": true}]'`
storage Monitor Table Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/monitor-table-storage.md
If you choose to stream your logs to an event hub, you'll pay for the volume of
> [!div class="mx-imgBorder"] > ![Diagnostic settings page log analytics](media/monitor-table-storage/diagnostic-logs-settings-pane-log-analytics.png) + ### [PowerShell](#tab/azure-powershell) 1. Open a Windows PowerShell command window, and sign in to your Azure subscription by using the `Connect-AzAccount` command. Then, follow the on-screen directions.
If you choose to stream your logs to an event hub, you'll pay for the volume of
Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet with the `EventHubAuthorizationRuleId` parameter. ```powershell
-Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -EventHubAuthorizationRuleId <event-hub-namespace-and-key-name> -Enabled $true -Category <operations-to-log> -RetentionEnabled <retention-bool> -RetentionInDays <number-of-days>
+Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -EventHubAuthorizationRuleId <event-hub-namespace-and-key-name> -Enabled $true -Category <operations-to-log>
``` Here's an example:
For more information about sending resource logs to event hubs, see [Azure Resou
Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet with the `WorkspaceId` parameter. ```powershell
-Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -WorkspaceId <log-analytics-workspace-resource-id> -Enabled $true -Category <operations-to-log> -RetentionEnabled <retention-bool> -RetentionInDays <number-of-days>
+Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -WorkspaceId <log-analytics-workspace-resource-id> -Enabled $true -Category <operations-to-log>
``` + Here's an example: `Set-AzDiagnosticSetting -ResourceId /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount/tableServices/default -WorkspaceId /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.OperationalInsights/workspaces/my-analytic-workspace -Enabled $true -Category StorageDelete`
If you choose to archive your logs to a storage account, you'll pay for the volu
Enable logs by using the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az_monitor_diagnostic_settings_create) command. ```azurecli-interactive
-az monitor diagnostic-settings create --name <setting-name> --storage-account <storage-account-name> --resource <storage-service-resource-id> --resource-group <resource-group> --logs '[{"category": <operations>, "enabled": true "retentionPolicy": {"days": <number-days>, "enabled": <retention-bool}}]'
+az monitor diagnostic-settings create --name <setting-name> --storage-account <storage-account-name> --resource <storage-service-resource-id> --resource-group <resource-group> --logs '[{"category": <operations>, "enabled": true}]'
``` Replace the `<storage-service-resource-id>` placeholder in this snippet with the resource ID of the Table storage service. You can find the resource ID in the Azure portal by opening the **Properties** page of your storage account.
If you choose to stream your logs to an event hub, you'll pay for the volume of
Enable logs by using the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az_monitor_diagnostic_settings_create) command. ```azurecli-interactive
-az monitor diagnostic-settings create --name <setting-name> --event-hub <event-hub-name> --event-hub-rule <event-hub-namespace-and-key-name> --resource <storage-account-resource-id> --logs '[{"category": <operations>, "enabled": true "retentionPolicy": {"days": <number-days>, "enabled": <retention-bool}}]'
+az monitor diagnostic-settings create --name <setting-name> --event-hub <event-hub-name> --event-hub-rule <event-hub-namespace-and-key-name> --resource <storage-account-resource-id> --logs '[{"category": <operations>, "enabled": true}]'
``` Here's an example:
Here's an example:
Enable logs by using the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az_monitor_diagnostic_settings_create) command. ```azurecli-interactive
-az monitor diagnostic-settings create --name <setting-name> --workspace <log-analytics-workspace-resource-id> --resource <storage-account-resource-id> --logs '[{"category": <category name>, "enabled": true "retentionPolicy": {"days": <days>, "enabled": <retention-bool}}]'
+az monitor diagnostic-settings create --name <setting-name> --workspace <log-analytics-workspace-resource-id> --resource <storage-account-resource-id> --logs '[{"category": <category name>, "enabled": true}]'
``` + Here's an example: `az monitor diagnostic-settings create --name setting1 --workspace /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.OperationalInsights/workspaces/my-analytic-workspace --resource /subscriptions/938841be-a40c-4bf4-9210-08bcf06c09f9/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/myloggingstorageaccount/tableServices/default --logs '[{"category": "StorageDelete", "enabled": true}]'`
storsimple Storsimple 8000 Automation Azurerm Runbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-automation-azurerm-runbook.md
To create an automation module for the StorSimple 8000 series device management,
wget https://raw.githubusercontent.com/anoobbacker/storsimpledevicemgmttools/master/Monitor-Backups.ps1 -Out Monitor-Backups.ps1 ```
-5. Create an Azure Automation Runbook Module for StorSimple 8000 Series device management. On the Windows Powershell window, type the following commands:
+5. Create an Azure Automation Runbook Module for StorSimple 8000 Series device management. In the Windows PowerShell window, type the following commands:
```powershell # set path variables
storsimple Storsimple Data Manager Dotnet Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-data-manager-dotnet-jobs.md
Before you begin, ensure that you have:
- Visual Studio 2012, 2013, 2015, or 2017.
- - Azure Powershell. [Download Azure Powershell](/powershell/azure/).
+ - Azure PowerShell. [Download Azure PowerShell](/powershell/azure/).
* A correctly configured job definition in StorSimple Data Manager within a resource group. * All the required DLLs. Download these DLLs from the [GitHub repository](https://github.com/Azure-Samples/storsimple-dotnet-data-manager-get-started/tree/master/Data_Manager_Job_Run/dlls). * [`Get-ConfigurationParams.ps1`](https://github.com/Azure-Samples/storsimple-dotnet-data-manager-get-started/blob/master/Data_Manager_Job_Run/Get-ConfigurationParams.ps1) script from the GitHub repository.
stream-analytics Sql Database Upsert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/sql-database-upsert.md
Update the `sqltext` command building section to match your own schema (notice h
```C# var sqltext =
- $"MERGE INTO [device03] AS old " +
+ $"MERGE INTO [device_updated] AS old " +
$"USING (VALUES ({DeviceId},{Value},'{Timestamp}')) AS new (DeviceId, Value, Timestamp) " + $"ON new.DeviceId = old.DeviceId " + $"WHEN MATCHED THEN UPDATE SET old.Value += new.Value, old.Timestamp = new.Timestamp " +
Outside of Azure Functions, there are multiple ways to achieve the expected resu
A background task operates on the data after it's inserted into the database via the standard ASA SQL outputs.
-For Azure SQL, `INSTEAD OF` [DML triggers](/sql/relational-databases/triggers/dml-triggers?view=azuresqldb-current&preserve-view=true) can be used to intercept the INSERT commands issued by ASA and replace them with UPDATEs.
+For Azure SQL, `INSTEAD OF` [DML triggers](/sql/relational-databases/triggers/dml-triggers?view=azuresqldb-current&preserve-view=true) can be used to intercept the INSERT commands issued by ASA and replace them with UPDATE or MERGE:
+
+```SQL
+CREATE TRIGGER tr_devices_updated_upsert ON device_updated INSTEAD OF INSERT
+AS
+BEGIN
+ MERGE device_updated AS old
+ USING inserted AS new
+ ON new.DeviceId = old.DeviceId
+
+ WHEN MATCHED THEN
+ UPDATE SET
+ old.Value += new.Value,
+ old.Timestamp = new.Timestamp
+
+ WHEN NOT MATCHED THEN
+ INSERT (DeviceId, Value, Timestamp)
+ VALUES (new.DeviceId, new.Value, new.Timestamp);
+END;
+```
For Synapse SQL, ASA can insert into a [staging table](../synapse-analytics/sql/data-loading-best-practices.md#load-to-a-staging-table). A recurring task can then transform the data as needed into an intermediary table. Finally the [data is moved](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-partition.md#partition-switching) to the production table.
For further assistance, try our [Microsoft Q&A question page for Azure Stream An
* [Use managed identities to access Azure SQL Database or Azure Synapse Analytics from an Azure Stream Analytics job](sql-database-output-managed-identity.md) * [Use reference data from a SQL Database for an Azure Stream Analytics job](sql-reference-data.md) * [Run Azure Functions in Azure Stream Analytics jobs - Tutorial for Redis output](stream-analytics-with-azure-functions.md)
-* [Quickstart: Create a Stream Analytics job by using the Azure portal](stream-analytics-quick-create-portal.md)
+* [Quickstart: Create a Stream Analytics job by using the Azure portal](stream-analytics-quick-create-portal.md)
stream-analytics Stream Analytics Cicd Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-cicd-api.md
wget -q -O- --{ <method> } -data="<request body>" --header=Content-Type:applicat
### Windows
-For Windows, use Powershell:
+For Windows, use PowerShell:
```powershell $user = "<username>"
synapse-analytics Sql Data Warehouse Table Constraints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-table-constraints.md
Learn about table constraints in dedicated SQL pool, including primary key, fore
Dedicated SQL pool supports these table constraints: - PRIMARY KEY is only supported when NONCLUSTERED and NOT ENFORCED are both used. -- UNIQUE constraint is only supported with NOT ENFORCED is used.
+- UNIQUE constraint is only supported when NOT ENFORCED is used.
For syntax, check [ALTER TABLE](/sql/t-sql/statements/alter-table-transact-sql) and [CREATE TABLE](/sql/t-sql/statements/create-table-azure-sql-data-warehouse).
CREATE TABLE t6 (c1 INT UNIQUE NOT ENFORCED, c2 INT);
## Next steps
-After creating the tables for your dedicated SQL pool, the next step is to load data into the table. For a loading tutorial, see [Loading data to dedicated SQL pool](load-data-wideworldimportersdw.md).
+After creating the tables for your dedicated SQL pool, the next step is to load data into the table. For a loading tutorial, see [Loading data to dedicated SQL pool](load-data-wideworldimportersdw.md).
traffic-manager Traffic Manager Subnet Override Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-subnet-override-powershell.md
Last updated 09/18/2019
-# Traffic Manager subnet override using Azure Powershell
+# Traffic Manager subnet override using Azure PowerShell
Traffic Manager subnet override allows you to alter the routing method of a profile. Adding an override directs traffic based on the end user's IP address, using a predefined IP range-to-endpoint mapping.
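To illustrate the idea only, here's a minimal Python sketch of a range-to-endpoint lookup. The subnets and endpoint names are hypothetical, and the real matching is performed by the Traffic Manager service itself, not by client code:

```python
import ipaddress

# Hypothetical subnet-override table: predefined IP ranges mapped to endpoints.
SUBNET_OVERRIDES = {
    ipaddress.ip_network("203.0.113.0/24"): "endpoint-contoso-eastus",
    ipaddress.ip_network("198.51.100.0/25"): "endpoint-contoso-westus",
}

def resolve_endpoint(user_ip: str, default: str = "endpoint-default") -> str:
    """Return the override endpoint whose range contains user_ip, else the default."""
    addr = ipaddress.ip_address(user_ip)
    for network, endpoint in SUBNET_OVERRIDES.items():
        if addr in network:
            return endpoint
    return default
```

A request from `203.0.113.42` would match the first range and be directed to that endpoint, while an address outside all configured ranges falls through to the profile's normal routing method.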
virtual-desktop Autoscale Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-diagnostics.md
The following JSON file is an example of what you'll see when you open a report:
- Review how to create a scaling plan at [Autoscale for Azure Virtual Desktop session hosts](autoscale-scaling-plan.md). - [Assign your scaling plan to new or existing host pools](autoscale-new-existing-host-pool.md).
+- Learn more about terms used in this article at our [autoscale glossary](autoscale-glossary.md).
+- View our [autoscale FAQ](autoscale-faq.yml) to answer commonly asked questions.
virtual-desktop Autoscale Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-glossary.md
+
+ Title: Azure Virtual Desktop autoscale (preview) glossary - Azure
+description: A glossary of terms and concepts for the Azure Virtual Desktop autoscale (preview) feature.
+++++ Last updated : 02/25/2022+++
+# Autoscale (preview) glossary
+
+This article is a list of definitions for key terms and concepts related to the autoscale (preview) feature for Azure Virtual Desktop.
+
+## Autoscale
+
+The autoscale feature is Azure Virtual Desktop's native scaling service that turns VMs on and off based on the number of sessions on the session hosts in the host pool and which phase of the [scaling plan](#scaling-plan) [schedule](#schedule) the workday is in.
+
+## Scaling tool
+
+Azure Virtual Desktop's scaling tool uses Azure Automation and Azure Logic Apps to scale the VMs in a host pool based on how many user sessions per CPU core there are during peak and off-peak hours.
+
+## Scaling plan
+
+A scaling plan is an Azure Virtual Desktop Azure Resource Manager object that defines the schedules for scaling session hosts in a host pool. You can assign one scaling plan to multiple host pools. Each host pool can only have one scaling plan assigned to it.
+
+## Schedule
+
+Schedules are sub-resources of [scaling plans](#scaling-plan) that specify the start time, capacity threshold, minimum percentage of hosts, load-balancing algorithm, and other configuration settings for the different phases of the day.
+
+## Ramp up
+
+The ramp-up phase of a [scaling plan](#scaling-plan) [schedule](#schedule) is usually at the beginning of the work day, when users sign in and start their sessions. In this phase, the number of [active user sessions](#active-user-session) usually increases at a rapid pace without yet reaching the maximum number of active sessions for the day.
+
+## Peak
+
+The peak phase of a [scaling plan](#scaling-plan) [schedule](#schedule) is when your host pool reaches the maximum number of [active user sessions](#active-user-session) for the day. In this phase, the number of active sessions usually holds steady until the peak phase ends. New active user sessions can be established during this phase, but usually at a slower rate than the ramp-up phase.
+
+## Ramp down
+
+The ramp-down phase of a [scaling plan](#scaling-plan) [schedule](#schedule) is usually at the end of the work day, when users start to sign out and end their sessions for the evening. In this phase, the number of [active user sessions](#active-user-session) usually decreases rapidly.
+
+## Off-peak
+
+The off-peak phase of the [scaling plan](#scaling-plan) [schedule](#schedule) is when the host pool usually reaches the minimum number of [active user sessions](#active-user-session) for the day. During this phase, there aren't usually many active users, but you can keep a small amount of resources on to accommodate users who work after the peak and ramp-down phases.
+
+## Available session host
+
+Available session hosts are session hosts that have passed all Azure Virtual Desktop agent health checks and have VM objects that are powered on, making them available for users to start their user sessions on.
+
+## Capacity threshold
+
+The capacity threshold is the percentage of a [host pool's capacity](#available-host-pool-capacity) that, when reached, triggers a [scaling action](#scaling-action) to happen.
+
+For example:
+
+- If the [used host pool capacity](#used-host-pool-capacity) is below the capacity threshold and the autoscale feature can turn off virtual machines (VMs) without going over the capacity threshold, then the feature will turn the VMs off.
+- If the used host pool capacity goes over the capacity threshold, then the autoscale feature will turn more VMs on until the used host pool capacity goes below the capacity threshold.
+
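As a rough illustration only (the service's actual logic is more involved), the two bullets above can be sketched as a decision rule in Python; the function and numbers are hypothetical:

```python
def scaling_action(sessions: int, hosts_on: int, max_sessions_per_host: int,
                   threshold_pct: float) -> str:
    """Simplified sketch: compare used host pool capacity against the threshold."""
    def used_pct(hosts: int) -> float:
        # Used capacity as a percentage of capacity provided by `hosts` VMs.
        return 100.0 * sessions / (hosts * max_sessions_per_host)

    if used_pct(hosts_on) > threshold_pct:
        return "turn on a VM"
    # Only turn a VM off if doing so keeps usage at or below the threshold.
    if hosts_on > 1 and used_pct(hosts_on - 1) <= threshold_pct:
        return "turn off a VM"
    return "no action"
```

For example, with 4 hosts on, 10 sessions per host, and a 60% threshold: 30 sessions (75% used) triggers a scale-out, 10 sessions (25% used, still 33% with one fewer host) allows a scale-in, and 20 sessions (50% used, but 67% with one fewer host) leaves the pool as-is.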
+## Available host pool capacity
+
+Available host pool capacity is how many user sessions a host pool can host based on the number of [available session hosts](#available-session-host). The available host pool capacity is the host pool's maximum session limit multiplied by the number of [available session hosts](#available-session-host) in the host pool.
+
+In other words:
+
+Host pool maximum session limit × number of available session hosts = available host pool capacity.
+
+## Used host pool capacity
+
+The used host pool capacity is the amount of [host pool capacity](#available-host-pool-capacity) that's currently taken up by active and disconnected user sessions.
+
+In other words:
+
+The number of [active](#active-user-session) and [disconnected user sessions](#disconnected-user-session) ÷ [the host pool capacity](#available-host-pool-capacity) = used host pool capacity.
+
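The two capacity formulas above amount to simple arithmetic. As an illustrative sketch with hypothetical numbers (not an Azure API):

```python
def available_capacity(max_session_limit: int, available_hosts: int) -> int:
    """Host pool maximum session limit × number of available session hosts."""
    return max_session_limit * available_hosts

def used_capacity_fraction(active: int, disconnected: int, capacity: int) -> float:
    """(Active + disconnected user sessions) ÷ available host pool capacity."""
    return (active + disconnected) / capacity

# Example: 5 available hosts, each capped at 8 sessions,
# currently serving 12 active and 8 disconnected sessions.
pool_capacity = available_capacity(8, 5)              # 40 sessions
usage = used_capacity_fraction(12, 8, pool_capacity)  # 0.5, i.e. 50% used
```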
+## Scaling action
+
+Scaling actions are when [the autoscale feature](#autoscale) turns VMs on or off.
+
+## Minimum percentage of hosts
+
+The minimum percentage of hosts is the lowest percentage of all session hosts in the host pool that must be turned on for each phase of the [scaling plan](#scaling-plan) [schedule](#schedule).
+
+## Active user session
+
+A user session is considered "active" when the user signs in and connects to their remote app or desktop resource.
+
+## Disconnected user session
+
+A disconnected user session is an inactive session that the user hasn't signed out of yet. When a user closes the remote session window without signing out, the session becomes disconnected. When a user reconnects to their remote resources, they'll be redirected to their disconnected session on the session host they were working on. At this point, the disconnected session becomes an [active session](#active-user-session) again.
+
+## Force logoff
+
+A force logoff, or forced sign-out, is when the service ends an [active user session](#active-user-session) or a [disconnected user session](#disconnected-user-session) without the user's consent.
+
+## Exclusion tag
+
+An exclusion tag is a property of a [scaling plan](#scaling-plan) that's a tag name you can apply to VMs that you want to exclude from [scaling actions](#scaling-action). [The autoscale feature](#autoscale) only performs scaling actions on VMs without tag names that match the exclusion tag.
+
+## Next steps
+
+- For more information about the autoscale feature, see the [autoscale feature document](autoscale-scaling-plan.md).
+- For more information about the scaling script, see the [scaling script document](set-up-scaling-script.md).
virtual-desktop Autoscale New Existing Host Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-new-existing-host-pool.md
To edit an existing scaling plan:
## Next steps - Review how to create a scaling plan at [Autoscale (preview) for Azure Virtual Desktop session hosts](autoscale-new-existing-host-pool.md).-- Learn how to troubleshoot your scaling plan at [Enable diagnostics for your scaling plan](autoscale-diagnostics.md).
+- Learn how to troubleshoot your scaling plan at [Enable diagnostics for your scaling plan](autoscale-diagnostics.md).
+- Learn more about terms used in this article at our [autoscale glossary](autoscale-glossary.md).
+- View our [autoscale FAQ](autoscale-faq.yml) to answer commonly asked questions.
virtual-desktop Autoscale Scaling Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-scaling-plan.md
Now that you've created your scaling plan, here are some things you can do:
- [Assign your scaling plan to new and existing host pools](autoscale-new-existing-host-pool.md) - [Enable diagnostics for your scaling plan](autoscale-diagnostics.md)+
+If you'd like to learn more about terms used in this article, check out our [autoscale glossary](autoscale-glossary.md). You can also look at our [autoscale FAQ](autoscale-faq.yml) if you have additional questions.
virtual-desktop Troubleshoot Powershell 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/troubleshoot-powershell-2019.md
> [!IMPORTANT] > This content applies to Azure Virtual Desktop (classic), which doesn't support Azure Resource Manager Azure Virtual Desktop objects. If you're trying to manage Azure Resource Manager Azure Virtual Desktop objects, see [this article](../troubleshoot-powershell.md).
-Use this article to resolve errors and issues when using PowerShell with Azure Virtual Desktop. For more information on Remote Desktop Services PowerShell, see [Azure Virtual Desktop Powershell](/powershell/windows-virtual-desktop/overview).
+Use this article to resolve errors and issues when using PowerShell with Azure Virtual Desktop. For more information on Remote Desktop Services PowerShell, see [Azure Virtual Desktop PowerShell](/powershell/windows-virtual-desktop/overview).
## Provide feedback
virtual-desktop Whats New Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-azure-monitor.md
Title: What’s new in Azure Monitor for Azure Virtual Desktop?
-description: New features and product updates for the Azure Virtual Desktop agent.
+ Title: What's new in Azure Monitor for Azure Virtual Desktop?
+description: New features and product updates in Azure Monitor for Azure Virtual Desktop.
-+ Last updated 07/09/2021
-# What’s new in Azure Monitor for Azure Virtual Desktop?
+# What's new in Azure Monitor for Azure Virtual Desktop?
This article describes the changes we make to each new version of Azure Monitor for Azure Virtual Desktop.
virtual-machine-scale-sets Virtual Machine Scale Sets Instance Ids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-instance-ids.md
Each VM in a scale set gets an instance ID that uniquely identifies it. This ins
REST API: `POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/virtualmachines/{instanceId}/reimage?api-version={apiVersion}` (for more information, see the [REST API documentation](/rest/api/compute/virtualmachinescalesetvms/reimage))
-Powershell: `Set-AzVmssVM -ResourceGroupName {resourceGroupName} -VMScaleSetName {vmScaleSetName} -InstanceId {instanceId} -Reimage` (for more information, see the [Powershell documentation](/powershell/module/az.compute/set-azvmssvm))
+PowerShell: `Set-AzVmssVM -ResourceGroupName {resourceGroupName} -VMScaleSetName {vmScaleSetName} -InstanceId {instanceId} -Reimage` (for more information, see the [PowerShell documentation](/powershell/module/az.compute/set-azvmssvm))
CLI: `az vmss reimage -g {resourceGroupName} -n {vmScaleSetName} --instance-id {instanceId}` (for more information, see the [CLI documentation](/cli/azure/vmss)).
You can get the list of instance IDs by listing all instances in a scale set:
REST API: `GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/virtualMachines?api-version={apiVersion}` (for more information, see the [REST API documentation](/rest/api/compute/virtualmachinescalesetvms/list))
-Powershell: `Get-AzVmssVM -ResourceGroupName {resourceGroupName} -VMScaleSetName {vmScaleSetName}` (for more information, see the [Powershell documentation](/powershell/module/az.compute/get-azvmssvm))
+PowerShell: `Get-AzVmssVM -ResourceGroupName {resourceGroupName} -VMScaleSetName {vmScaleSetName}` (for more information, see the [PowerShell documentation](/powershell/module/az.compute/get-azvmssvm))
CLI: `az vmss list-instances -g {resourceGroupName} -n {vmScaleSetName}` (for more information, see the [CLI documentation](/cli/azure/vmss)).
virtual-machines Constrained Vcpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/constrained-vcpu.md
> [!TIP] > Try the **[Virtual Machine selector tool](https://aka.ms/vm-selector)** to find other sizes that best fit your workload.
-Some database workloads like SQL Server or Oracle require high memory, storage, and I/O bandwidth, but not a high core count. Many database workloads are not CPU-intensive. Azure offers certain VM sizes where you can constrain the VM vCPU count to reduce the cost of software licensing, while maintaining the same memory, storage, and I/O bandwidth.
+Some database workloads like SQL Server require high memory, storage, and I/O bandwidth, but not a high core count. Many database workloads are not CPU-intensive. Azure offers certain VM sizes where you can constrain the VM vCPU count to reduce the cost of software licensing, while maintaining the same memory, storage, and I/O bandwidth.
The vCPU count can be constrained to one half or one quarter of the original VM size. These new VM sizes have a suffix that specifies the number of active vCPUs to make them easier for you to identify. For example, the current VM size Standard_GS5 comes with 32 vCPUs, 448 GB RAM, 64 disks (up to 256 TB), and 80,000 IOPs or 2 GB/s of I/O bandwidth. The new VM sizes Standard_GS5-16 and Standard_GS5-8 come with 16 and 8 active vCPUs respectively, while maintaining the rest of the specs of the Standard_GS5 for memory, storage, and I/O bandwidth.
-The licensing fees charged for SQL Server or Oracle are constrained to the new vCPU count, and other products should be charged based on the new vCPU count. This results in a 50% to 75% increase in the ratio of the VM specs to active (billable) vCPUs. These new VM sizes allow customer workloads to use the same memory, storage, and I/O bandwidth while optimizing their software licensing cost. At this time, the compute cost, which includes OS licensing, remains the same one as the original size. For more information, see [Azure VM sizes for more cost-effective database workloads](https://azure.microsoft.com/blog/announcing-new-azure-vm-sizes-for-more-cost-effective-database-workloads/).
+The licensing fees charged for SQL Server are constrained to the new vCPU count, and other products should be charged based on the new vCPU count. This results in a 50% to 75% increase in the ratio of the VM specs to active (billable) vCPUs. These new VM sizes allow customer workloads to use the same memory, storage, and I/O bandwidth while optimizing their software licensing cost. At this time, the compute cost, which includes OS licensing, remains the same as for the original size. For more information, see [Azure VM sizes for more cost-effective database workloads](https://azure.microsoft.com/blog/announcing-new-azure-vm-sizes-for-more-cost-effective-database-workloads/).
| Name | vCPU | Specs |
virtual-machines Dav4 Dasv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dav4-dasv4-series.md
Dav4-series sizes are based on the 2.35Ghz AMD EPYC<sup>TM</sup> 7452 processor
[VM Generation Support](generation-2.md): Generation 1<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<br> [Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
-[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
<br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS / Read MBps / Write MBps | Max NICs | Expected network bandwidth (Mbps) |
virtual-machines Disks Performance Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-performance-tiers.md
description: Learn how to change performance tiers for existing managed disks us
Previously updated : 06/29/2021 Last updated : 02/25/2022
# Change your performance tier without downtime using the Azure PowerShell module or the Azure CLI
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
[!INCLUDE [virtual-machines-disks-performance-tiers-intro](../../includes/virtual-machines-disks-performance-tiers-intro.md)]
virtual-machines Eav4 Easv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/eav4-easv4-series.md
The Eav4-series and Easv4-series utilize AMD's 2.35Ghz EPYC<sup>TM</sup> 7452 pr
[VM Generation Support](generation-2.md): Generations 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
-[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
<br> Eav4-series sizes are based on the 2.35Ghz AMD EPYC<sup>TM</sup> 7452 processor that can achieve a boosted maximum frequency of 3.35GHz. The Eav4-series sizes are ideal for memory-intensive enterprise applications. Data disk storage is billed separately from virtual machines. To use premium SSD, use the Easv4-series sizes. The pricing and billing meters for Easv4 sizes are the same as the Eav3-series.
virtual-machines Ephemeral Os Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ephemeral-os-disks.md
A: No, ephemeral OS disk can only be used during VM and scale set creation.
A: No, you can't have a mix of ephemeral and persistent OS disk instances within the same scale set.
-**Q: Can the ephemeral OS disk be created using Powershell or CLI?**
+**Q: Can the ephemeral OS disk be created using PowerShell or CLI?**
A: Yes, you can create VMs with Ephemeral OS Disk using REST, Templates, PowerShell, and CLI.
virtual-machines Vmsnapshot Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/vmsnapshot-linux.md
az backup protection enable-for-vm \
--policy-name DefaultPolicy ```
-## Azure Powershell deployment
+## Azure PowerShell deployment
-Azure Powershell can be used to enable backup on a virtual machine. Once the backup is configured, first scheduled backup job will install the Vm snapshot extension on the VM.
+Azure PowerShell can be used to enable backup on a virtual machine. Once the backup is configured, the first scheduled backup job will install the VM snapshot extension on the VM.
```azurepowershell $targetVault = Get-AzRecoveryServicesVault -ResourceGroupName "myResourceGroup" -Name "myRecoveryServicesVault"
virtual-machines Vmsnapshot Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/vmsnapshot-windows.md
az backup protection enable-for-vm \
--policy-name DefaultPolicy ```
-## Azure Powershell deployment
+## Azure PowerShell deployment
-Azure Powershell can be used to enable backup on a virtual machine. Once the backup is configured, first scheduled backup job will install the Vm snapshot extension on the VM.
+Azure PowerShell can be used to enable backup on a virtual machine. Once the backup is configured, the first scheduled backup job will install the VM snapshot extension on the VM.
```azurepowershell $targetVault = Get-AzRecoveryServicesVault -ResourceGroupName "myResourceGroup" -Name "myRecoveryServicesVault"
virtual-machines H Series Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/h-series-retirement.md
Azure [HC](hc-series.md), [HBv2](hbv2-series.md), [HBv3](hbv3-series.md), [Dv4](
## How does the H-series migration affect me?
-1-year and 3-year RI offerings for the VMs are no longer available; however, regular PAYGO offer can still be transacted until the official decommission date. After August 31, 2022, any remaining H-series VM subscriptions in the preceding list will be set to a deallocated state. They'll stop working and no longer incur billing charges. If you need RI, please refer to our migration documents to find the suitable VM offerings that have RI available.
+1-year and 3-year RI offerings for the VMs are no longer available; however, the regular PAYGO offer can still be transacted until the official decommission date. After August 31, 2022, any remaining H-series VM subscriptions in the preceding list will be set to a deallocated state. They'll stop working and no longer incur billing charges. If you need RI, please refer to our migration documents to find the suitable VM offerings that have RI available. You can exchange or refund your existing reservations. If you choose not to, after the hardware is deprecated you won't receive the reservation benefit but will still be paying for it.
The current VM size retirement only affects the VM sizes in the [H series](h-series.md), which includes the H-series Promo.
virtual-machines Create Upload Openbsd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-upload-openbsd.md
Now you can shut down your VM.
## Prepare the VHD
-The VHDX format is not supported in Azure, only **fixed VHD**. You can convert the disk to fixed VHD format using Hyper-V Manager or the Powershell [convert-vhd](/powershell/module/hyper-v/convert-vhd) cmdlet. An example is as following.
+The VHDX format is not supported in Azure, only **fixed VHD**. You can convert the disk to fixed VHD format using Hyper-V Manager or the PowerShell [convert-vhd](/powershell/module/hyper-v/convert-vhd) cmdlet. An example is as follows.
```powershell Convert-VHD OpenBSD61.vhdx OpenBSD61.vhd -VHDType Fixed
virtual-machines Disk Encryption Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-key-vault.md
While the portal is accessible through your browser, Azure CLI and Azure PowerSh
### Connect to your Azure account
-Before using the Azure CLI or Azure PowerShell, you must first connect to your Azure subscription. You do so by [Signing in with Azure CLI](/cli/azure/authenticate-azure-cli), [Signing in with Azure Powershell](/powershell/azure/authenticate-azureps), or supplying your credentials to the Azure portal when prompted.
+Before using the Azure CLI or Azure PowerShell, you must first connect to your Azure subscription. You do so by [Signing in with Azure CLI](/cli/azure/authenticate-azure-cli), [Signing in with Azure PowerShell](/powershell/azure/authenticate-azureps), or supplying your credentials to the Azure portal when prompted.
```azurecli-interactive
az login
```
virtual-machines Disk Encryption Powershell Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-powershell-quickstart.md
Title: Create and encrypt a Linux VM with Azure Powershell
-description: In this quickstart, you learn how to use Azure Powershell to create and encrypt a Linux virtual machine
+ Title: Create and encrypt a Linux VM with Azure PowerShell
+description: In this quickstart, you learn how to use Azure PowerShell to create and encrypt a Linux virtual machine
virtual-machines Share Images Across Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/share-images-across-tenants.md
Azure Compute Galleries let you share images using Azure RBAC. You can use Azure
[!INCLUDE [virtual-machines-share-images-across-tenants](../../../includes/virtual-machines-share-images-across-tenants.md)]

> [!IMPORTANT]
-> You cannot use the portal to deploy a VM from an image in another azure tenant. To create a VM from an image shared between tenants, you must use the Azure CLI or [Powershell](../windows/share-images-across-tenants.md).
+> You cannot use the portal to deploy a VM from an image in another azure tenant. To create a VM from an image shared between tenants, you must use the Azure CLI or [PowerShell](../windows/share-images-across-tenants.md).
## Create a VM using Azure CLI
virtual-machines Nva10v5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nva10v5-series.md
Last updated 02/01/2022
-# NVadsA10 v5-series
+# NVadsA10 v5-series (Preview)
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Capture Image Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/capture-image-resource.md
Previously updated : 09/27/2018 Last updated : 02/24/2022
**Applies to:** :heavy_check_mark: Windows VMs
-A managed image resource can be created from a generalized virtual machine (VM) that is stored as either a managed disk or an unmanaged disk in a storage account. The image can then be used to create multiple VMs. For information on how managed images are billed, see [Managed Disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/).
+Managed images are helpful in development and test environments where you need a consistent baseline VM. A managed image resource can be created from a generalized virtual machine (VM) that is stored as either a managed disk or an unmanaged disk in a storage account. The image can then be used to create multiple VMs. For information on how managed images are billed, see [Managed Disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/).
One managed image supports up to 20 simultaneous deployments. Attempting to create more than 20 VMs concurrently, from the same managed image, may result in provisioning timeouts due to the storage performance limitations of a single VHD. To create more than 20 VMs concurrently, use an [Azure Compute Gallery](../shared-image-galleries.md) (formerly known as Shared Image Gallery) image configured with 1 replica for every 20 concurrent VM deployments.
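The one-replica-per-20-deployments guidance above reduces to a ceiling division. A minimal sketch (illustrative arithmetic only; the function name is hypothetical, not an official sizing tool):

```python
import math

def gallery_replicas_needed(concurrent_deployments: int, per_replica_limit: int = 20) -> int:
    """Replicas required so no single replica serves more than 20 concurrent VM deployments."""
    return math.ceil(concurrent_deployments / per_replica_limit)

print(gallery_replicas_needed(20))  # 20 concurrent VMs fit in 1 replica
print(gallery_replicas_needed(75))  # 75 concurrent VMs need 4 replicas
```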
One managed image supports up to 20 simultaneous deployments. Attempting to crea
You need a [generalized](../generalize.md) VM in order to create an image.
-## Create a managed image in the portal
+## Create a managed image from a VM using the portal
-1. Go to the [Azure portal](https://portal.azure.com) to manage the VM image. Search for and select **Virtual machines**.
+1. Go to the [Azure portal](https://portal.azure.com). Search for and select **Virtual machines**.
2. Select your VM from the list.
-3. In the **Virtual machine** page for the VM, on the upper menu, select **Capture**.
+3. In the **Virtual machine** page for the VM, on the upper menu, select **Capture**. The **Create an image** page appears.
+4. For **Share image to Azure compute gallery**, select **No, capture only a managed image.**
+5. For **Resource Group**, you can either create the image in the same resource group as the VM or select another resource group in your subscription.
- The **Create image** page appears.
-
-4. For **Name**, either accept the pre-populated name or enter a name that you would like to use for the image.
-
-5. For **Resource group**, either select **Create new** and enter a name, or select a resource group to use from the drop-down list.
+4. For **Name**, either accept the pre-populated name or type your own name for the image.
6. If you want to delete the source VM after the image has been created, select **Automatically delete this virtual machine after creating the image**.
-7. If you want the ability to use the image in any [availability zone](../../availability-zones/az-overview.md), select **On** for **Zone resiliency**.
+7. If you want the ability to use the image in any [availability zone](../../availability-zones/az-overview.md), select **On** for **Zone resiliency**.
8. Select **Create** to create the image.
After the image is created, you can find it as an **Image** resource in the list
-## Create an image of a VM using PowerShell
-
-
+## Create a managed image of a VM using PowerShell
Creating an image directly from the VM ensures that the image includes all of the disks associated with the VM, including the OS disk and any data disks. This example shows how to create a managed image from a VM that uses managed disks.
If you want to create an image of only the OS disk, specify the managed disk ID
```
-## Create an image from a snapshot using PowerShell
+## Create a managed image from a snapshot using PowerShell
You can create a managed image from a snapshot of a generalized VM by following these steps:
You can create a managed image from a snapshot of a generalized VM by following
```
-## Create an image from a VM that uses a storage account
+## Create a managed image from a VM that uses a storage account
To create a managed image from a VM that doesn't use managed disks, you need the URI of the OS VHD in the storage account, in the following format: https://*mystorageaccount*.blob.core.windows.net/*vhdcontainer*/*vhdfilename.vhd*. In this example, the VHD is in *mystorageaccount*, in a container named *vhdcontainer*, and the VHD filename is *vhdfilename.vhd*.
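The URI format described above can be sketched as a simple string template (Python shown only for illustration; the helper name is hypothetical):

```python
def os_vhd_uri(storage_account: str, container: str, vhd_filename: str) -> str:
    """Compose the unmanaged OS VHD blob URI in the documented format."""
    return f"https://{storage_account}.blob.core.windows.net/{container}/{vhd_filename}"

print(os_vhd_uri("mystorageaccount", "vhdcontainer", "vhdfilename.vhd"))
# https://mystorageaccount.blob.core.windows.net/vhdcontainer/vhdfilename.vhd
```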
To create a managed image from a VM that doesn't use managed disks, you need the
## Next steps

- [Create a VM from a managed image](create-vm-generalized-managed.md).
+- Learn more about using an [Azure Compute Gallery](../shared-image-galleries.md) (formerly known as Shared Image Gallery)
virtual-machines Disk Encryption Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-key-vault.md
While the portal is accessible through your browser, Azure CLI and Azure PowerSh
### Connect to your Azure account
-Before using the Azure CLI or Azure PowerShell, you must first connect to your Azure subscription. You do so by [Signing in with Azure CLI](/cli/azure/authenticate-azure-cli), [Signing in with Azure Powershell](/powershell/azure/authenticate-azureps), or supplying your credentials to the Azure portal when prompted.
+Before using the Azure CLI or Azure PowerShell, you must first connect to your Azure subscription. You do so by [Signing in with Azure CLI](/cli/azure/authenticate-azure-cli), [Signing in with Azure PowerShell](/powershell/azure/authenticate-azureps), or supplying your credentials to the Azure portal when prompted.
```azurecli-interactive
az login
```
virtual-machines Disk Encryption Windows Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-windows-aad.md
New-AzVM -VM $VirtualMachine -ResourceGroupName "MyVirtualMachineResourceGroup"
You can [add a new disk to a Windows VM using PowerShell](attach-disk-ps.md), or [through the Azure portal](attach-managed-disk-portal.md).

### Enable encryption on a newly added disk with Azure PowerShell
- When using Powershell to encrypt a new disk for Windows VMs, a new sequence version should be specified. The sequence version has to be unique. The script below generates a GUID for the sequence version. In some cases, a newly added data disk might be encrypted automatically by the Azure Disk Encryption extension. Auto encryption usually occurs when the VM reboots after the new disk comes online. This is typically caused because "All" was specified for the volume type when disk encryption previously ran on the VM. If auto encryption occurs on a newly added data disk, we recommend running the Set-AzVmDiskEncryptionExtension cmdlet again with new sequence version. If your new data disk is auto encrypted and you do not wish to be encrypted, decrypt all drives first then re-encrypt with a new sequence version specifying OS for the volume type.
-
+When using PowerShell to encrypt a new disk for Windows VMs, a new sequence version should be specified. The sequence version has to be unique. The script below generates a GUID for the sequence version. In some cases, a newly added data disk might be encrypted automatically by the Azure Disk Encryption extension. Auto encryption usually occurs when the VM reboots after the new disk comes online. This is typically caused because "All" was specified for the volume type when disk encryption previously ran on the VM. If auto encryption occurs on a newly added data disk, we recommend running the Set-AzVmDiskEncryptionExtension cmdlet again with a new sequence version. If your new data disk is auto encrypted and you don't want it to be encrypted, decrypt all drives first, then re-encrypt with a new sequence version, specifying OS for the volume type.
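The uniqueness requirement above is typically satisfied by generating a GUID, as the referenced PowerShell script does. A minimal sketch of the same idea (Python shown only for illustration; the function name is hypothetical):

```python
import uuid

def new_sequence_version() -> str:
    """Return a fresh GUID; each encryption run needs a unique sequence version."""
    return str(uuid.uuid4())

print(new_sequence_version())  # a random GUID, unique on every call
```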
- **Encrypt a running VM using a client secret:** The script below initializes your variables and runs the Set-AzVMDiskEncryptionExtension cmdlet. The resource group, VM, key vault, AAD app, and client secret should have already been created as prerequisites. Replace MyKeyVaultResourceGroup, MyVirtualMachineResourceGroup, MySecureVM, MySecureVault, My-AAD-client-ID, and My-AAD-client-secret with your values. This example uses "All" for the -VolumeType parameter, which includes both OS and Data volumes. If you only want to encrypt the OS volume, use "OS" for the -VolumeType parameter.
virtual-machines Move Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/move-vm.md
This article walks you through how to move a Windows virtual machine (VM) betwee
[!INCLUDE [virtual-machines-common-move-vm](../../../includes/virtual-machines-common-move-vm.md)]
-## Use Powershell to move a VM
+## Use PowerShell to move a VM
To move a virtual machine to another resource group, you need to make sure that you also move all of the dependent resources. To get a list with the resource ID of each of these resources, use the [Get-AzResource](/powershell/module/az.resources/get-azresource) cmdlet.
virtual-machines N Series Amd Driver Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/n-series-amd-driver-setup.md
For basic specs, storage capacities, and disk details, see [GPU Windows VM sizes
| OS | Driver |
| -- | - |
-| Windows 10 - Build 2009, 2004, 1909 <br/><br/>Windows 10 Enterprise multi-session - Build 2009, 2004, 1909 <br/><br/>Windows Server 2016 (version 1607)<br/><br/>Windows Server 2019 (version 1909) | [21.Q2](https://download.microsoft.com/download/3/4/8/3481cf8d-1706-49b0-aa09-08c9468305ab/AMD-Azure-NVv4-Windows-Driver-21Q2.exe) (.exe) |
+| Windows 10 - Build 2009, 2004, 1909 <br/><br/>Windows 10 Enterprise multi-session - Build 2009, 2004, 1909 <br/><br/>Windows Server 2016 (version 1607)<br/><br/>Windows Server 2019 (version 1909) | [21.Q2-1](https://download.microsoft.com/download/4/e/-Azure-NVv4-Driver-21Q2-1.exe) (.exe) |
Previous supported driver version for Windows builds up to 1909 is [20.Q4](https://download.microsoft.com/download/f/1/6/f16e6275-a718-40cd-a366-9382739ebd39/AMD-Azure-NVv4-Driver-20Q4.exe) (.exe)
virtual-machines Run Command Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/run-command-managed.md
az vm run-command list --name "myVM" --resource-group "myRG"
This command will retrieve current execution progress, including latest output, start/end time, exit code, and terminal state of the execution.

```azurecli-interactive
-az vm run-command show --name "myRunCommand" --vm-name "myVM" --resource-group "myRG" --expand
+az vm run-command show --name "myRunCommand" --vm-name "myVM" --resource-group "myRG" --instance-view
```

### Delete RunCommand resource from the VM
az vm run-command delete --name "myRunCommand" --vm-name "myVM" --resource-group
This command will deliver the script to the VM, execute it, and return the captured output.

```powershell-interactive
-Set-AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM" -Name "RunCommandName" –SourceScript "Write-Host Hello World!"
+Set-AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM" -RunCommandName "RunCommandName" -SourceScript "Write-Host Hello World!"
```

### List all deployed RunCommand resources on a VM

This command will return a full list of previously deployed Run Commands along with their properties.

```powershell-interactive
-Get-AzVMRunCommand AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM"
+Get-AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM"
```

### Get execution status and results

This command will retrieve current execution progress, including latest output, start/end time, exit code, and terminal state of the execution.

```powershell-interactive
-Get-AzVMRunCommand AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM" -Name "RunCommandName" -Status
+Get-AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM" -RunCommandName "RunCommandName" -Status
```

### Delete RunCommand resource from the VM

Remove the RunCommand resource previously deployed on the VM. If the script execution is still in progress, execution will be terminated.

```powershell-interactive
-Remove-AzVMRunCommand AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM" -Name "RunCommandName"
+Remove-AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM" -RunCommandName "RunCommandName"
```
virtual-machines Share Images Across Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/share-images-across-tenants.md
Azure Compute Galleries let you share images using Azure RBAC. You can use Azure
> [!IMPORTANT]
-> You cannot use the portal to deploy a VM from an image in another azure tenant. To create a VM from an image shared between tenants, you must use the [Azure CLI](../linux/share-images-across-tenants.md) or Powershell.
+> You cannot use the portal to deploy a VM from an image in another azure tenant. To create a VM from an image shared between tenants, you must use the [Azure CLI](../linux/share-images-across-tenants.md) or PowerShell.
## Create a VM using PowerShell
virtual-network Nat Gateway Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-gateway-resource.md
Title: Design virtual networks with NAT gateway description: Learn how to design virtual networks that use Network Address Translation (NAT) gateway resources.
-# Customer intent: As an IT administrator, I want to learn more about how to design virtual networks with NAT gateway resources.
Previously updated : 11/11/2021 Last updated : 02/25/2022
NAT gateway provides outbound internet connectivity for one or more subnets of a virtual network. Once NAT gateway is associated to a subnet, NAT provides source network address translation (SNAT) for that subnet. NAT gateway specifies which static IP addresses virtual machines use when creating outbound flows. Static IP addresses come from public IP addresses, public IP prefixes, or both. If a public IP prefix is used, all IP addresses of the entire public IP prefix are consumed by a NAT gateway. A NAT gateway can use a total of up to 16 static IP addresses from either.
-<p align="center">
- <img src="media/nat-overview/flow-direction1.svg" alt="Diagram that depicts a NAT gateway resource that consumes all I P addresses for a public I P prefix and directs that traffic to and from two subnets of V Ms and a virtual machine scale set." width="256" title="Virtual Network NAT for flows outbound to the internet">
-</p>
*Figure: Virtual Network NAT for outbound to internet*
Review this section to familiarize yourself with considerations for designing vi
### Connect to Azure services
-When connecting to Azure services from your private network, the recommended approach is to use [Private Link](../../private-link/private-link-overview.md).
+When you connect to Azure services from your private network, the recommended approach is to use [Private Link](../../private-link/private-link-overview.md).
-Private Link lets you access services in Azure from your private network without the use of a public IP address. Connecting to these services over the internet are not necessary and are handled over the Azure backbone network. For example, when you access Azure Storage, you can use a private endpoint to ensure your connection is fully private.
+Private Link lets you access services in Azure from your private network without the use of a public IP address. Connections to these services don't traverse the internet; traffic is handled over the Azure backbone network. For example, when you access Azure Storage, you can use a private endpoint to ensure your connection is fully private.
### Connect to the internet

NAT is recommended for outbound scenarios for all production workloads where you need to connect to a public endpoint. The following scenarios are examples of how to ensure coexistence of inbound with NAT gateway for outbound.
-#### NAT and VM with instance-level Public IP
+#### NAT and VM with an instance-level public IP
-<p align="center">
- <img src="media/nat-overview/flow-direction2.svg" alt="Diagram that depicts a NAT gateway that supports outbound traffic to the internet from a virtual network and inbound traffic with an instance-level public I P." width="300" title="Virtual Network NAT and VM with an instance-level public IP">
-</p>
-*Figure: Virtual Network NAT and VM with instance level Public IP*
+*Figure: Virtual Network NAT and VM with an instance level public IP*
| Direction | Resource |
|::|::|
-| Inbound | VM with instance-level Public IP |
+| Inbound | VM with instance-level public IP |
| Outbound | NAT gateway |

The VM will use the NAT gateway for outbound traffic. Inbound originated traffic isn't affected.
-#### NAT and VM with Standard Public Load Balancer
+#### NAT and VM with a standard public load balancer
-<p align="center">
- <img src="media/nat-overview/flow-direction3.svg" alt="Diagram that depicts a NAT gateway that supports outbound traffic to the internet from a virtual network and inbound traffic with a public load balancer." width="350" title="Virtual Network NAT and VM with a standard public load balancer">
-</p>
-*Figure: Virtual Network NAT and VM with Standard Public Load Balancer*
+*Figure: Virtual Network NAT and VM with a standard public load balancer*
| Direction | Resource |
|::|::|
-| Inbound | Standard Public Load Balancer |
+| Inbound | Standard public load balancer |
| Outbound | NAT gateway |

Any outbound configuration from a load-balancing rule or outbound rules is superseded by the NAT gateway. Inbound originated traffic isn't affected.
-#### NAT and VM with instance-level Public IP and Standard Public Load Balancer
+#### NAT and VM with an instance-level public IP and a standard public load balancer
-<p align="center">
- <img src="media/nat-overview/flow-direction4.svg" alt="Diagram that depicts a NAT gateway that supports outbound traffic to the internet from a virtual network and inbound traffic with an instance-level public I P and a public load balancer." width="425" title="Virtual Network NAT and VM with instance-level public IP and standard public load balancer">
-</p>
-*Figure: Virtual Network NAT and VM with instance-level public IP and Standard Public Load Balancer*
+*Figure: Virtual Network NAT and VM with an instance-level public IP and a standard public load balancer*
| Direction | Resource |
|::|::|
-| Inbound | VM with instance-level Public IP and Standard Public Load Balancer |
+| Inbound | VM with instance-level public IP and a standard public load balancer |
| Outbound | NAT gateway |

Any outbound configuration from a load-balancing rule or outbound rules is superseded by the NAT gateway. The VM will also use the NAT gateway for outbound traffic. Inbound originated traffic isn't affected.
For guides on how to enable NSG flow logs, see [Enabling NSG Flow Logs](../../ne
Each NAT gateway can provide up to 50 Gbps of throughput. You can split your deployments into multiple subnets and assign each subnet or group of subnets a NAT gateway to scale out.
-Each NAT gateway can support 64,000 flows each for TCP and UDP per assigned outbound IP address. Review the following section for details and the [troubleshooting article](./troubleshoot-nat.md) for specific problem resolution guidance.
+NAT gateway can be attached to up to 16 public IP addresses. Each NAT gateway can support up to 50,000 concurrent connections per public IP address to the same destination endpoint over the internet for TCP and UDP. Review the following section for details and the [troubleshooting article](./troubleshoot-nat.md) for specific problem resolution guidance.
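The per-IP connection limit above multiplies out across attached IPs. A back-of-the-envelope sketch (illustrative arithmetic only; the function name is hypothetical, not an official capacity calculator):

```python
def max_concurrent_connections(public_ips: int, per_ip_limit: int = 50_000) -> int:
    """Upper bound on concurrent TCP/UDP connections to one destination endpoint."""
    if not 1 <= public_ips <= 16:
        raise ValueError("a NAT gateway can be attached to at most 16 public IP addresses")
    return public_ips * per_ip_limit

print(max_concurrent_connections(1))   # one public IP: 50000
print(max_concurrent_connections(16))  # all 16 IPs: 800000
```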
## Source Network Address Translation

Source Network Address Translation (SNAT) rewrites the source of a flow to originate from a different IP address and/or port. Typically, SNAT is used when a private network needs to connect to a public host over the internet. SNAT allows multiple compute resources within the private VNet to use the same single Public IP address or set of IP addresses (prefix) to connect to the internet.
-NAT gateway uses SNAT to rewrite the source IP address and source port of a virtual machine (or other compute resource) to a static public IP address. NAT gateway can be assigned up to 16 public IP addresses, with each IP having 64,000 available ports.
-
-<a id="source-network-address-translation"></a>
+NAT gateway SNATs the private IP address and source port of a virtual machine (or other compute resource) to a static public IP address before going outbound to the internet from a virtual network.
### Fundamentals
-Let's look at some example flows to explain the basic concept of SNAT and how it works with NAT gateway.
+The following example flows explain the basic concept of SNAT and how it works with NAT gateway.
In the table below the VM is making connections to destination IP 65.52.0.1 from the following source tuples (IPs and ports):
In the table below the VM is making connections to destination IP 65.52.0.1 from
| 2 | 192.168.0.16:4284 | 65.52.0.1:80 |
| 3 | 192.168.0.17:5768 | 65.52.0.1:80 |
-When NAT gateway is configured with public IP address 65.52.1.1, the source IPs are SNATed into the public IP address as shown below:
+When NAT gateway is configured with public IP address 65.52.1.1, the source IPs are SNAT'd into NAT gateway's public IP address as shown below:
| Flow | Source tuple | Source tuple after SNAT | Destination tuple |
|::|::|::|::|
When NAT gateway is configured with public IP address 65.52.1.1, the source IPs
| 2 | 192.168.0.16:4284 | **65.52.1.1:1235** | 65.52.0.1:80 |
| 3 | 192.168.0.17:5768 | **65.52.1.1:1236** | 65.52.0.1:80 |
-The destination will now see the source of the flows as 65.52.1.1 (source tuple after SNAT), with the assigned port shown. The act of NAT gateway replacing all of the source ports and IPs with the public IP and port before connecting to the internet is known as *IP masquerading* or *port masquerading*. Multiple private sources are masqueraded behind a public IP.
+The source IP address and port of each flow are SNAT'd to the public IP address 65.52.1.1 (source tuple after SNAT) and to a different port for each new connection going to the same destination endpoint. The act of NAT gateway replacing all of the source ports and IPs with the public IP and port before connecting to the internet is known as *IP masquerading* or *port masquerading*. Multiple private sources are masqueraded behind a public IP.
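The masquerading shown in the table above can be modeled as a toy mapping (illustrative only; real NAT gateway picks source ports at random, and `toy_snat` is a hypothetical helper whose sequential ports merely mirror the example table):

```python
import itertools

def toy_snat(flows, public_ip="65.52.1.1", first_port=1234):
    """Toy SNAT table: masquerade each private source tuple behind the public IP."""
    ports = itertools.count(first_port)  # real port selection is random, not sequential
    return {src: f"{public_ip}:{next(ports)}" for src, _dst in flows}

flows = [
    ("192.168.0.16:4283", "65.52.0.1:80"),
    ("192.168.0.16:4284", "65.52.0.1:80"),
    ("192.168.0.17:5768", "65.52.0.1:80"),
]
print(toy_snat(flows)["192.168.0.16:4284"])  # 65.52.1.1:1235, as in the table
```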
#### Source (SNAT) port reuse
-An inventory of ports are available to any virtual machine within a subnet that is attached to NAT gateway. NAT gateway selects a port at random out of the available inventory of ports for the virtual machine to use. NAT gateway will also opportunistically reuse source (SNAT) ports.
+Azure provides ~64,000 SNAT ports per public IP address. For each public IP address attached to NAT gateway, the entire inventory of ports provided by those IPs is made available to any virtual machine instance within a subnet that is also attached to NAT gateway. NAT gateway selects a port at random out of the available inventory of ports for a virtual machine to use. Each time a new connection is made to the same destination endpoint over the internet, a new source port is used. As mentioned in the [Performance](#performance) section, NAT gateway supports up to 50,000 concurrent connections per public IP address to the same destination endpoint over the internet. NAT gateway will continue to select a new source port at random to go to the same destination endpoint until no more SNAT ports are available for use. If NAT gateway doesn't find any available SNAT ports, only then will it reuse a SNAT port. A port can be reused so long as it's going to a different destination endpoint.
The following flow illustrates this concept with a VM flowing to destination IP 65.52.0.2 after flows 1 - 3 from the above tables have already taken place.
The following flow illustrates this concept with a VM flowing to destination IP
|::|::|::|
| 4 | 192.168.0.16:4285 | 65.52.0.2:80 |
-A NAT gateway will translate flow 4 to a port that may have been recently used for other destinations as well. See [Scale NAT](#scale-nat) for more discussion on correctly sizing your IP address provisioning.
+A NAT gateway will translate flow 4 to a source port that may have been recently used for a different destination endpoint. See [Scale NAT](#scale-nat) for more discussion on correctly sizing your IP address provisioning.
| Flow | Source tuple | Source tuple after SNAT | Destination tuple |
|::|::|::|::|
Don't take a dependency on the specific way source ports are assigned in the abo
SNAT provided by NAT is different from SNAT provided by a [load balancer](../../load-balancer/load-balancer-outbound-connections.md) in several aspects, including:

-- NAT gateway selects source ports at random for outbound traffic flow whereas Load Balancer selects ports sequentially.
- NAT gateway dynamically allocates SNAT ports across all VMs within a NAT gateway configured subnet whereas Load Balancer pre-allocates a fixed number of SNAT ports to each VM.
-- NAT gateway does not require manual configuration of SNAT ports to VMs in your VNet, whereas Load Balancer can be manually configured to allow fixed amounts of SNAT ports per VM.
+- NAT gateway selects source ports at random for outbound traffic flow whereas Load Balancer selects ports sequentially.
+
+- NAT gateway doesn't reuse a SNAT port until no other SNAT ports are available to make new connections, whereas Load Balancer looks to select the lowest available SNAT port in sequential order.
### On-demand

NAT provides on-demand SNAT ports for new outbound traffic flows. All available SNAT ports in inventory can be used by any virtual machine on subnets configured with NAT:
-<p align="center">
- <img src="media/nat-overview/lb-vnnat-chart.svg" alt="Diagram that depicts the inventory of all available SNAT ports used by any V M on subnets configured with NAT." width="550" title="Virtual Network NAT on-demand for outbound SNAT">
-</p>
*Figure: Virtual Network NAT on-demand outbound SNAT*
-Any IP configuration of a virtual machine can create outbound flows on-demand as needed. Pre-allocation or per-instance planning including per-instance worst-case over-provisioning isn't required.
+Any IP configuration of a virtual machine can create outbound flows on-demand as needed. Pre-allocation of SNAT ports to each virtual machine isn't required.
-<p align="center">
- <img src="media/nat-overview/exhaustion-threshold.svg" alt="Diagram that depicts the inventory of all available SNAT ports used by any V M on subnets configured with NAT with an exhaustion threshold." width="550" title="Differences in exhaustion scenarios">
-</p>
*Figure: Differences in exhaustion scenarios*
-After a SNAT port is released, it's available for use by any VM on subnets configured with NAT. On-demand allocation allows dynamic and divergent workloads on subnets to use SNAT ports as needed. As long as SNAT ports are available, SNAT flows will succeed. SNAT port hotspots benefit from a larger inventory. SNAT ports aren't left unused for VMs not actively needing them.
+After a SNAT port is released, it's available for use by any VM on subnets configured with NAT. On-demand allocation allows dynamic and divergent workloads on subnets to use SNAT ports as needed. As long as SNAT ports are available, SNAT flows will succeed. SNAT port hot spots benefit from a larger inventory. SNAT ports aren't left unused for VMs not actively needing them.
### Scale NAT
Scaling NAT is primarily a function of managing the shared, available SNAT port
> [!NOTE]
> If you assign a public IP prefix, the entire public IP prefix is used. You can't assign a public IP prefix and then break out individual IP addresses to assign to other resources. If you want to assign individual IP addresses from a public IP prefix to multiple resources, you need to create individual public IP addresses and assign them as needed instead of using the public IP prefix itself.
-SNAT maps private addresses to one or more public IP addresses, rewriting the source address and source port in the process. A NAT gateway uses 64,000 ports (SNAT ports) per configured public IP address for this translation. A single NAT gateway can scale up to 16 IP addresses and 1 million SNAT ports. If a public IP prefix is provided, each IP address within the prefix provides SNAT port inventory. Adding more public IP addresses increases the available inventory of SNAT ports. TCP and UDP are separate SNAT port inventories and are unrelated to NAT gateway.
+SNAT maps private addresses to one or more public IP addresses, rewriting the source address and source port in the process. A single NAT gateway can scale up to 16 IP addresses. If a public IP prefix is provided, each IP address within the prefix provides SNAT port inventory. Adding more public IP addresses increases the available inventory of SNAT ports. TCP and UDP are separate SNAT port inventories and are unrelated to NAT gateway.
-NAT gateway opportunistically reuses source (SNAT) ports. When you scale your workload, assume that each flow requires a new SNAT port, and then scale the total number of available IP addresses for outbound traffic. Carefully consider the scale you're designing for, and then provision IP addresses quantities accordingly.
+NAT gateway opportunistically reuses source (SNAT) ports. When you scale your workload, assume that each flow requires a new SNAT port, and then scale the total number of available IP addresses for outbound traffic. Carefully consider the scale you're designing for, and then allocate IP addresses quantities accordingly.
SNAT ports set to different destinations will most likely be reused when possible. As SNAT port exhaustion approaches, flows may not succeed.
NAT gateway interacts with IP and IP transport headers of UDP and TCP flows. NAT
### Timers
-TCP timers determine the amount of time a connection is held between two endpoints before it is terminated and the port is available for reuse. Depending on the type of packet sent by either endpoint, a specific type of timer will be triggered.
+TCP timers determine the amount of time a connection is held between two endpoints before it's terminated and the port is available for reuse. Depending on the type of packet sent by either endpoint, a specific type of timer will be triggered.
The following timers indicate how long a connection is maintained before closing and releasing the destination SNAT port for reuse: | Timer | Description | Value | |||| | TCP FIN | Occurs when the private side of NAT initiates termination of a TCP connection. A timer is set after the FIN packet is sent by the public endpoint. This timer allows the private endpoint time to resend an ACK (acknowledgment) packet should it be lost. Once the timer ends, the connection is closed. | 60 seconds |
-| TCP RST | Occurs when the private side of NAT sends a RST (reset) packet in an attempt to communicate on the TCP connection. If the RST packet is not received by the public side of NAT, or the RST packet is returned to the private endpoint, the connection will time out and close. The public side of NAT doesn't generate TCP RST packets or any other traffic. | 10 seconds |
+| TCP RST | Occurs when the private side of NAT sends an RST (reset) packet in an attempt to communicate on the TCP connection. If the RST packet isn't received by the public side of NAT, or the RST packet is returned to the private endpoint, the connection will time out and close. The public side of NAT doesn't generate TCP RST packets or any other traffic. | 10 seconds |
| TCP half open | Occurs when the public endpoint is waiting for acknowledgment from the private endpoint that the connection between the two is fully bidirectional. | 30 seconds | | TCP idle timeout | TCP connections can go idle when no data is transmitted between either endpoint for a prolonged period of time. A timer can be configured from 4 minutes (default) to 120 minutes (2 hours) to time out a connection that has gone idle. Traffic on the flow will reset the idle timeout timer. | Configurable; 4 minutes (default) - 120 minutes | > [!NOTE] > These timer settings are subject to change. The values are provided to help with troubleshooting and you should not take a dependency on specific timers at this time.
-After a SNAT port is no longer in use, it is available for reuse to the same destination IP address and port after 5 seconds.
+After a SNAT port is no longer in use, it's available for reuse to the same destination IP address and port after 5 seconds.
-#### Timer Considerations
+#### Timer considerations
+
+Design recommendations for configuring timers:
+
+- In an idle connection scenario, NAT gateway holds onto SNAT ports until the connection idle times out. Because long idle timeout timers can unnecessarily increase the likelihood of SNAT port exhaustion, increasing the idle timeout duration beyond the default of 4 minutes isn't recommended. If a flow never goes idle, it won't be affected by the idle timer.
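The TCP keepalives recommended in these timer considerations can be enabled at the socket level. A minimal sketch, with illustrative interval values (tune them to stay below the configured idle timeout; the Linux-specific options are guarded because they aren't available on every platform):

```python
# Enable TCP keepalives on a client socket so a long-idle connection
# isn't silently dropped by the NAT gateway idle timeout (4 min default).
# Interval values below are illustrative, not recommendations.
import socket
import sys

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

if sys.platform.startswith("linux"):
    # Start probing after 120 s idle, probe every 30 s, give up after 4 misses.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 120)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)

print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))  # nonzero when enabled
```

As the article notes, keepalive probes appear as duplicate ACKs to the endpoints, so they refresh the idle timer without involving the application layer.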
-Here are some design recommendations for configuring timers:
-- In an idle connection scenario, NAT gateway holds onto SNAT ports until the connection idle times out. Because long idle timeout timers can unnecessarily increase the likelihood of SNAT port exhaustion, it is not recommended to increase the idle timeout duration to longer than the default time of 4 minutes. If a flow never goes idle, then it will not be impacted by the idle timer. - TCP keepalives can be used to provide a pattern of refreshing long idle connections and endpoint liveness detection. TCP keepalives appear as duplicate ACKs to the endpoints, are low overhead, and invisible to the application layer. ## Limitations -- Basic load balancers and basic Public IP addresses are not compatible with NAT. Use standard SKU load balancers and Public IPs instead.
+- Basic load balancers and basic public IP addresses aren't compatible with NAT. Use standard SKU load balancers and public IPs instead.
+
- To upgrade a basic load balancer to standard, see [Upgrade Azure Public Load Balancer](../../load-balancer/upgrade-basic-standard.md)
- - To upgrade a basic public IP address to standard, see [Upgrade a public IP address](../ip-services/public-ip-upgrade-portal.md)
+
+ - To upgrade a basic public IP address to standard, see [Upgrade a public IP address](../ip-services/public-ip-upgrade-portal.md)
+ - IP fragmentation isn't available for NAT gateway. ## Next steps - Review [virtual network NAT](nat-overview.md).+ - Learn about [metrics and alerts for NAT gateway](nat-metrics.md).-- Learn how to [troubleshoot NAT gateway](troubleshoot-nat.md).+
+- Learn how to [troubleshoot NAT gateway](troubleshoot-nat.md).
virtual-network Nat Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-metrics.md
Azure Virtual Network NAT gateway resources provide multi-dimensional metrics. You can use these metrics to observe the operation and for [troubleshooting](troubleshoot-nat.md). Alerts can be configured for critical issues such as SNAT exhaustion.
-<p align="center">
- <img src="media/nat-overview/flow-direction1.svg" alt="Figure depicts a NAT gateway resource that consumes all IP addresses for a public IP prefix and directs that traffic to and from two subnets of virtual machines and a virtual machine scale set." width="256" title="Virtual Network NAT for outbound to Internet">
-</p>
*Figure: Virtual Network NAT for outbound to Internet*
Azure Virtual Network NAT gateway resources provide multi-dimensional metrics. Y
NAT gateway resources provide the following multi-dimensional metrics in Azure Monitor:
-| Metric | Description | Recommended Aggregation | Dimensions |
+| Metric | Description | Recommended aggregation | Dimensions |
||||| | Bytes | Bytes processed inbound and outbound | Sum | Direction (In; Out), Protocol (6 TCP; 17 UDP) | | Packets | Packets processed inbound and outbound | Sum | Direction (In; Out), Protocol (6 TCP; 17 UDP) |
Alerts for metrics can be configured in Azure Monitor for each of the preceding
## Limitations
-Resource Health isn't supported.
+Resource health isn't supported.
## Next steps
virtual-network Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-overview.md
Title: What is Azure Virtual Network NAT?
-description: Overview of Virtual Network NAT features, resources, architecture, and implementation. Learn how Virtual Network NAT works and how to use NAT gateway resources in the cloud.
+description: Overview of Virtual Network NAT features, resources, architecture, and implementation. Learn how Virtual Network NAT works and how to use NAT gateway resources in Azure.
Previously updated : 10/20/2021 Last updated : 02/25/2022
-# Customer intent: As an IT administrator, I want to learn more about Virtual Network NAT, its NAT gateway resources, and what I can use them for.
+ # What is Virtual Network NAT?
-Virtual Network NAT is a fully managed and highly resilient Network Address Translation (NAT) service. VNet NAT simplifies outbound Internet connectivity for virtual networks. When configured on a subnet, all outbound connectivity uses the VNet NAT's static public IP addresses.
+Virtual Network NAT is a fully managed and highly resilient Network Address Translation (NAT) service. Virtual Network NAT simplifies outbound Internet connectivity for virtual networks. When configured on a subnet, all outbound connectivity uses the Virtual Network NAT's static public IP addresses.
:::image type="content" source="./media/nat-overview/flow-map.png" alt-text="Figure shows a NAT receiving traffic from internal subnets and directing it to a public IP (PIP) and an IP prefix."::: *Figure: Virtual Network NAT*
-## VNet NAT benefits
+## Virtual Network NAT benefits
### Security
-With NAT, individual VMs (or other compute resources) do not need public IP addresses and can remain fully private. Such resources without a public IP address can still reach external sources outside the VNet. You can also associate a Public IP Prefix to ensure that a contiguous set of IPs will be used for outbound. Destination firewall rules can be then configured based on this predictable IP list.
+
+With NAT, individual VMs (or other compute resources) don't need public IP addresses and can remain fully private. Resources without a public IP address can still reach external sources outside the virtual network. You can associate a public IP prefix to ensure that a contiguous set of IPs will be used for outbound. Destination firewall rules can be configured based on this predictable IP list.
### Resiliency
-NAT is a fully managed and distributed service. It doesn't depend on any individual compute instances such as VMs or a single physical gateway device. It leverages software defined networking making it highly resilient.
+
+NAT is a fully managed and distributed service. It doesn't depend on any individual compute instances such as VMs or a single physical gateway device. NAT uses software defined networking making it highly resilient.
### Scalability
-NAT can be associated to a subnet and can be used by all compute resources in that subnet. Further, all subnets in a VNet can leverage the same resource. When associated to a Public Ip Prefix, it will automatically scale to the number of IP addresses needed for outbound.
+
+NAT can be associated to a subnet and can be used by all compute resources in that subnet. Further, all subnets in a virtual network can use the same resource. When associated to a public IP prefix, it automatically scales to the number of IP addresses needed for outbound.
### Performance
-NAT will not impact the network bandwidth of your compute resources since it is a software defined networking service. Learn more about [NAT gateway's performance](nat-gateway-resource.md#performance).
+NAT won't affect the network bandwidth of your compute resources since it's a software defined networking service. Learn more about [NAT gateway's performance](nat-gateway-resource.md#performance).
+
+## Virtual Network NAT basics
-## VNet NAT basics
+NAT can be created in a specific availability zone and has redundancy built in within the specified zone. NAT is non-zonal by default. When you create [availability zones](../../availability-zones/az-overview.md) scenarios, NAT can be isolated in a specific zone. This deployment is called a zonal deployment.
-NAT can be created in a specific Availability Zone and has redundancy built in within the specified zone. NAT is non-zonal by default. When creating [availability zones](../../availability-zones/az-overview.md) scenarios, NAT can be isolated in a specific zone. This is known as a zonal deployment.
+NAT is fully scaled out from the start. There's no ramp up or scale-out operation required. Azure manages the operation of NAT for you. NAT always has multiple fault domains and can sustain multiple failures without service outage.
-NAT is fully scaled out from the start. There's no ramp up or scale-out operation required. Azure manages the operation of NAT for you. NAT always has multiple fault domains and can sustain multiple failures without service outage.
+* Outbound connectivity can be defined for each subnet with NAT. Multiple subnets within the same virtual network can have different NATs. Or multiple subnets within the same virtual network can use the same NAT. A subnet is configured by specifying which NAT gateway resource to use. All outbound traffic for the subnet is processed by NAT automatically without any customer configuration. NAT takes precedence over other outbound scenarios and replaces the default Internet destination of a subnet.
-* Outbound connectivity can be defined for each subnet with NAT. Multiple subnets within the same virtual network can have different NATs. Or multiple subnets within the same virtual network can use the same NAT. A subnet is configured by specifying which NAT gateway resource to use. All outbound traffic for the subnet is processed by NAT automatically without any customer configuration. NAT takes precedence over other outbound scenarios and replaces the default Internet destination of a subnet.
* UDRs that have been set up to direct traffic outbound to the internet take precedence over NAT gateway. See [Troubleshooting NAT gateway](./troubleshoot-nat.md#udr-supersedes-nat-gateway-for-going-outbound) to learn more.
-* NAT supports TCP and UDP protocols only. ICMP is not supported.
+
+* NAT supports TCP and UDP protocols only. ICMP isn't supported.
+ * A NAT gateway resource can use a: * Public IP+ * Public IP prefix
-* NAT is compatible with Standard SKU public IP address or public IP prefix resources or a combination of both. You can use a public IP prefix directly or distribute the public IP addresses of the prefix across multiple NAT gateway resources. NAT will groom all traffic to the range of IP addresses of the prefix. Basic resources, such as Basic Load Balancer or Basic Public IP aren't compatible with NAT. Basic resources must be placed on a subnet not associated to a NAT Gateway. Basic Load Balancer and Basic Public IP can be upgraded to standard in order to work with NAT gateway.
- * To upgrade a basic load balancer to standard, see [Upgrade Azure Public Load Balancer](../../load-balancer/upgrade-basic-standard.md)
- * To upgrade a basic public IP to standard, see [Upgrade a public IP address](../ip-services/public-ip-upgrade-portal.md)
-* NAT is the recommended method for outbound connectivity. A NAT gateway does not have the same limitations of SNAT port exhaustion as does [default outbound access](../ip-services/default-outbound-access.md) and [outbound rules of a load balancer](../../load-balancer/outbound-rules.md).
- * To migrate outbound access to NAT gateway from default outbound access or from outbound rules of a load balancer, see [Migrate outbound access to Azure Virtual Network NAT](./tutorial-migrate-outbound-nat.md)
-* NAT cannot be associated to an IPv6 Public IP address or IPv6 Public IP Prefix. However, it can be associated to a dual stack subnet.
-* NAT allows flows to be created from the virtual network to the services outside your VNet. Return traffic from the Internet is only allowed in response to an active flow. Services outside your VNet cannot initiate an inbound connection through NAT gateway.
-* NAT cannot span multiple virtual networks.
-* Multiple NATs cannot be attached to a single subnet.
-* NAT cannot be deployed in a [Gateway Subnet](../../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md#gwsub)
-* The private side of NAT (virtual machine instances or other compute resources) sends TCP Reset packets for attempts to communicate on a TCP connection that doesn't exist. One example is connections that have reached idle timeout. The next packet received will return a TCP Reset to the private IP address to signal and force connection closure. The public side of NAT doesn't generate TCP Reset packets or any other traffic. Only traffic produced by the customer's virtual network is emitted.
+
+* NAT is compatible with standard SKU public IP addresses or public IP prefix resources or a combination of both. You can use a public IP prefix directly or distribute the public IP addresses of the prefix across multiple NAT gateway resources. NAT will groom all traffic to the range of IP addresses of the prefix. Basic resources, such as basic load balancer or basic public IPs aren't compatible with NAT. Basic resources must be placed on a subnet not associated to a NAT Gateway. Basic load balancer and basic public IP can be upgraded to standard to work with NAT gateway.
+
+* To upgrade a basic load balancer to standard, see [Upgrade a public Azure Load Balancer](../../load-balancer/upgrade-basic-standard.md)
+
+* To upgrade a basic public IP to standard, see [Upgrade a public IP address](../ip-services/public-ip-upgrade-portal.md)
+
+* NAT is the recommended method for outbound connectivity. A NAT gateway doesn't have the same limitations of SNAT port exhaustion as does [default outbound access](../ip-services/default-outbound-access.md) and [outbound rules of a load balancer](../../load-balancer/outbound-rules.md).
+
+ * To migrate outbound access to a NAT gateway from default outbound access or from load balancer outbound rules, see [Migrate outbound access to Azure Virtual Network NAT](./tutorial-migrate-outbound-nat.md)
+
+* NAT can't be associated to an IPv6 public IP address or IPv6 public IP prefix. It can be associated to a dual stack subnet.
+
+* NAT allows flows to be created from the virtual network to the services outside your virtual network. Return traffic from the Internet is only allowed in response to an active flow. Services outside your virtual network can't initiate an inbound connection through NAT gateway.
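The outbound-only flow semantics described here can be illustrated with a toy model. This is purely a sketch of the behavior (flow table, SNAT port allocation, unsolicited inbound dropped), not how the actual service is implemented; all names and addresses are hypothetical:

```python
# Toy model of NAT gateway flow behavior: outbound traffic creates a
# SNAT mapping; inbound traffic is admitted only as a reply to an
# active flow. Illustrative sketch only -- not the real implementation.
class ToyNat:
    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self.next_port = 1024
        self.flows = {}  # (private_src, dest) -> allocated SNAT port

    def outbound(self, private_src, dest):
        key = (private_src, dest)
        if key not in self.flows:          # new flow: allocate a SNAT port
            self.flows[key] = self.next_port
            self.next_port += 1
        return (self.public_ip, self.flows[key])

    def inbound(self, dest_port, src):
        # Admit only return traffic that matches an active flow.
        for (private_src, dest), port in self.flows.items():
            if port == dest_port and dest == src:
                return private_src
        return None  # unsolicited inbound connection: dropped

nat = ToyNat("203.0.113.10")
ip, port = nat.outbound(("10.0.0.4", 5000), ("93.184.216.34", 443))
assert nat.inbound(port, ("93.184.216.34", 443)) == ("10.0.0.4", 5000)
assert nat.inbound(9999, ("198.51.100.7", 80)) is None  # no flow: dropped
```

Note how a repeated `outbound()` call for the same 5-tuple reuses the existing SNAT port, mirroring the opportunistic port reuse described earlier in the article.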
+
+* NAT can't span multiple virtual networks.
+
+* Multiple NATs can't be attached to a single subnet.
+
+* NAT can't be deployed in a [gateway subnet](../../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md#gwsub)
+
+* The private side of NAT (virtual machine instances or other compute resources) sends TCP reset packets for attempts to communicate on a TCP connection that doesn't exist. One example is connections that have reached idle timeout. The next packet received will return a TCP reset to the private IP address to signal and force connection closure. The public side of NAT doesn't generate TCP reset packets or any other traffic. Only traffic produced by the customer's virtual network is emitted.
+ * A default TCP idle timeout of 4 minutes is used and can be increased to up to 120 minutes. Any activity on a flow can also reset the idle timer, including TCP keepalives. ## Pricing and SLA
-For pricing details, see [Virtual Network pricing](https://azure.microsoft.com/pricing/details/virtual-network). NAT data path is at least 99.9% available.
+For pricing details, see [Virtual network pricing](https://azure.microsoft.com/pricing/details/virtual-network). NAT data path is at least 99.9% available.
## Next steps
virtual-network Troubleshoot Nat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/troubleshoot-nat.md
Check the following configurations to ensure that NAT gateway can be used to dir
### How to validate connectivity
-[Virtual Network NAT gateway](./nat-overview.md#vnet-nat-basics) supports IPv4 UDP and TCP protocols. ICMP is not supported and is expected to fail.
+[Virtual Network NAT gateway](./nat-overview.md#virtual-network-nat-basics) supports IPv4 UDP and TCP protocols. ICMP is not supported and is expected to fail.
To validate end-to-end connectivity of NAT gateway, follow these steps: 1. Validate that your [NAT gateway public IP address is being used](./tutorial-create-nat-gateway-portal.md#test-nat-gateway).
virtual-network Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-template.md
Last updated 06/09/2021 -+ # Quickstart: Create a virtual network - Resource Manager template