Updates from: 05/10/2022 01:12:24
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Custom Policy Developer Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policy-developer-notes.md
Previously updated : 04/30/2022 Last updated : 12/09/2021
The following table summarizes the Security Assertion Markup Language (SAML) app
| - | :--: | -- |
| [MFA using time-based one-time password (TOTP) with authenticator apps](multi-factor-authentication.md#verification-methods) | Preview | Users can use any authenticator app that supports TOTP verification, such as the [Microsoft Authenticator app](https://www.microsoft.com/security/mobile-authenticator-app). |
| [Phone factor authentication](phone-factor-technical-profile.md) | GA | |
-| [Azure AD MFA authentication](multi-factor-auth-technical-profile.md) | GA | |
+| [Azure AD MFA authentication](multi-factor-auth-technical-profile.md) | Preview | |
| [One-time password](one-time-password-technical-profile.md) | GA | |
| [Azure Active Directory](active-directory-technical-profile.md) as local directory | GA | |
| [Predicate validations](predicates.md) | GA | For example, password complexity. |
The following table summarizes the Security Assertion Markup Language (SAML) app
| Feature | Custom policy | Notes |
| - | :--: | -- |
| Azure portal | GA | |
-| [Application Insights user journey logs](troubleshoot-with-application-insights.md) | GA | Used for troubleshooting during development. |
-| [Application Insights event logs](analytics-with-application-insights.md) | GA | Used to monitor user flows in production. |
+| [Application Insights user journey logs](troubleshoot-with-application-insights.md) | Preview | Used for troubleshooting during development. |
+| [Application Insights event logs](analytics-with-application-insights.md) | Preview | Used to monitor user flows in production. |
## Responsibilities of custom policy feature-set developers
active-directory-b2c Deploy Custom Policies Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/deploy-custom-policies-devops.md
Previously updated : 04/30/2022 Last updated : 03/25/2022
active-directory-b2c Multi Factor Auth Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/multi-factor-auth-technical-profile.md
Previously updated : 04/30/2022 Last updated : 12/09/2021
Azure Active Directory B2C (Azure AD B2C) provides support for verifying a phone number by using a verification code, or verifying a time-based one-time password (TOTP) code.
+
## Protocol

The **Name** attribute of the **Protocol** element needs to be set to `Proprietary`. The **handler** attribute must contain the fully qualified name of the protocol handler assembly that is used by Azure AD B2C:
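As a hedged sketch only (the technical profile ID is a placeholder, and the handler assembly string shown here is indicative and should be confirmed against the published Azure AD B2C policy reference), a proprietary **Protocol** element has this general shape:

```xml
<TechnicalProfile Id="AzureMfa-Example">
  <DisplayName>Azure AD MFA</DisplayName>
  <!-- Name must be Proprietary; Handler holds the fully qualified name of the protocol handler assembly. -->
  <Protocol Name="Proprietary"
            Handler="Web.TPEngine.Providers.AzureMfaProtocolProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
</TechnicalProfile>
```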
The following example shows an Azure AD MFA technical profile used to verify the
In this mode, the user is required to install any authenticator app that supports time-based one-time password (TOTP) verification, such as the [Microsoft Authenticator app](https://www.microsoft.com/security/mobile-authenticator-app), on a device that they own.
-During the first sign up or sign in, the user scans a QR code, opens a deep link, or enters the code manually using the authenticator app. To verify the TOTP code, use the [Begin verify OTP](#begin-verify-totp) followed by [Verify TOTP](#verify-totp) validation technical profiles.
+During the first sign-up or sign-in, the user scans a QR code, opens a deep link, or enters the code manually using the authenticator app. To verify the TOTP code, use the [Begin verify OTP](#begin-verify-totp) followed by [Verify TOTP](#verify-totp) validation technical profiles.
-For subsequent sign ins, use the [Get available devices](#get-available-devices) method to check if the user has already enrolled their device. If the number of available devices is greater than zero, this indicates the user has enrolled before. In this case, the user needs to type the TOTP code that appears in the authenticator app.
+For subsequent sign-ins, use the [Get available devices](#get-available-devices) method to check if the user has already enrolled their device. If the number of available devices is greater than zero, this indicates the user has enrolled before. In this case, the user needs to type the TOTP code that appears in the authenticator app.
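As a hedged illustration (the technical profile IDs below are placeholders patterned on the method names above, not values taken from the article), the first-time verification could be wired up as a pair of validation technical profiles that run in sequence:

```xml
<ValidationTechnicalProfiles>
  <!-- Start the TOTP verification session, then verify the code the user types from the authenticator app. -->
  <ValidationTechnicalProfile ReferenceId="AzureMfa-BeginVerifyTOTP" />
  <ValidationTechnicalProfile ReferenceId="AzureMfa-VerifyTOTP" />
</ValidationTechnicalProfiles>
```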
The technical profile:
active-directory-b2c Self Asserted Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/self-asserted-technical-profile.md
Previously updated : 04/30/2022 Last updated : 02/17/2022
In a self-asserted technical profile, you can use the **InputClaims** and **Inpu
## Display claims
+The display claims feature is currently in **preview**.
+ The **DisplayClaims** element contains a list of claims to be presented on the screen for collecting data from the user. To prepopulate the values of display claims, use the input claims that were previously described. The element may also contain a default value. The order of the claims in **DisplayClaims** specifies the order in which Azure AD B2C renders the claims on the screen. To force the user to provide a value for a specific claim, set the **Required** attribute of the **DisplayClaim** element to `true`.
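As a brief sketch (the claim and display control names are placeholders, not values from the article), a **DisplayClaims** collection might look like the following:

```xml
<DisplayClaims>
  <!-- Claims render in the order listed; Required="true" forces the user to provide a value. -->
  <DisplayClaim ClaimTypeReferenceId="displayName" Required="true" />
  <DisplayClaim ClaimTypeReferenceId="givenName" />
  <!-- A display claim can also reference a display control, for example an email verification control. -->
  <DisplayClaim DisplayControlReferenceId="emailVerificationControl" />
</DisplayClaims>
```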
Use output claims when:
- **Claims are output by output claims transformation**.
- **Setting a default value in an output claim** without collecting data from the user or returning the data from the validation technical profile. The `LocalAccountSignUpWithLogonEmail` self-asserted technical profile sets the **executed-SelfAsserted-Input** claim to `true`.
- **A validation technical profile returns the output claims** - Your technical profile may call a validation technical profile that returns some claims. You may want to bubble up the claims and return them to the next orchestration steps in the user journey. For example, when signing in with a local account, the self-asserted technical profile named `SelfAsserted-LocalAccountSignin-Email` calls the validation technical profile named `login-NonInteractive`. This technical profile validates the user credentials and also returns the user profile, such as `userPrincipalName`, `displayName`, `givenName`, and `surname`.
-- **A display control returns the output claims** - Your technical profile may have a reference to a [display control](display-controls.md). The display control returns some claims, such as the verified email address. You may want to bubble up the claims and return them to the next orchestration steps in the user journey.
+- **A display control returns the output claims** - Your technical profile may have a reference to a [display control](display-controls.md). The display control returns some claims, such as the verified email address. You may want to bubble up the claims and return them to the next orchestration steps in the user journey. The display control feature is currently in **preview**.
The following example demonstrates the use of a self-asserted technical profile that uses both display claims and output claims.
active-directory-b2c Technicalprofiles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/technicalprofiles.md
Previously updated : 04/30/2022 Last updated : 11/30/2021
The **TechnicalProfile** element contains the following elements:
| InputClaimsTransformations | 0:1 | A list of previously defined references to claims transformations that should be executed before any claims are sent to the claims provider or the relying party. |
| InputClaims | 0:1 | A list of previously defined references to claim types that are taken as input in the technical profile. |
| PersistedClaims | 0:1 | A list of previously defined references to claim types that will be persisted by the technical profile. |
-| DisplayClaims | 0:1 | A list of previously defined references to claim types that are presented by the [self-asserted technical profile](self-asserted-technical-profile.md). |
+| DisplayClaims | 0:1 | A list of previously defined references to claim types that are presented by the [self-asserted technical profile](self-asserted-technical-profile.md). The DisplayClaims feature is currently in preview. |
| OutputClaims | 0:1 | A list of previously defined references to claim types that are taken as output in the technical profile. |
| OutputClaimsTransformations | 0:1 | A list of previously defined references to claims transformations that should be executed after the claims are received from the claims provider. |
| ValidationTechnicalProfiles | 0:n | A list of references to other technical profiles that the technical profile uses for validation purposes. For more information, see [Validation technical profile](validation-technical-profile.md). |
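As a non-authoritative sketch of how the elements in this table fit together (all identifiers are placeholders), a technical profile is structured roughly as follows:

```xml
<TechnicalProfile Id="Example-TechnicalProfile">
  <DisplayName>Example technical profile</DisplayName>
  <InputClaimsTransformations>
    <InputClaimsTransformation ReferenceId="Example-InputTransformation" />
  </InputClaimsTransformations>
  <InputClaims>
    <InputClaim ClaimTypeReferenceId="email" />
  </InputClaims>
  <PersistedClaims>
    <PersistedClaim ClaimTypeReferenceId="displayName" />
  </PersistedClaims>
  <DisplayClaims>
    <DisplayClaim ClaimTypeReferenceId="givenName" Required="true" />
  </DisplayClaims>
  <OutputClaims>
    <OutputClaim ClaimTypeReferenceId="objectId" />
  </OutputClaims>
  <OutputClaimsTransformations>
    <OutputClaimsTransformation ReferenceId="Example-OutputTransformation" />
  </OutputClaimsTransformations>
  <ValidationTechnicalProfiles>
    <ValidationTechnicalProfile ReferenceId="Example-ValidationProfile" />
  </ValidationTechnicalProfiles>
</TechnicalProfile>
```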
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md
Title: "What's new in Azure Active Directory business-to-customer (B2C)" description: "New and updated documentation for the Azure Active Directory business-to-customer (B2C)." Previously updated : 05/04/2022 Last updated : 04/04/2022
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Azure Active Directory](../active-directory/fundamentals/whats-new.md).
-## April 2022
-
-### New articles
-
-- [Configure Asignio with Azure Active Directory B2C for multifactor authentication](partner-asignio.md)
-- [Set up sign up and sign in with Mobile ID using Azure Active Directory B2C](identity-provider-mobile-id.md)
-- [Find help and open a support ticket for Azure Active Directory B2C](find-help-open-support-ticket.md)
-
-### Updated articles
-
-- [Configure authentication in a sample single-page application by using Azure AD B2C](configure-authentication-sample-spa-app.md)
-- [Configure xID with Azure Active Directory B2C for passwordless authentication](partner-xid.md)
-- [Azure Active Directory B2C service limits and restrictions](service-limits.md)
-- [Localization string IDs](localization-string-ids.md)
-- [Manage your Azure Active Directory B2C tenant](tenant-management.md)
-- [Page layout versions](page-layout.md)
-- [Secure your API used an API connector in Azure AD B2C](secure-rest-api.md)
-- [Azure Active Directory B2C: What's new](whats-new-docs.md)
-- [Application types that can be used in Active Directory B2C](application-types.md)
-- [Publish your Azure Active Directory B2C app to the Azure Active Directory app gallery](publish-app-to-azure-ad-app-gallery.md)
-- [Quickstart: Set up sign in for a desktop app using Azure Active Directory B2C](quickstart-native-app-desktop.md)
-- [Register a single-page application (SPA) in Azure Active Directory B2C](tutorial-register-spa.md)
-

## March 2022

### New articles
active-directory Application Proxy Configure Complex Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-complex-application.md
+
+ Title: Complex applications for Azure Active Directory Application Proxy
+description: Provides an understanding of complex applications in Azure Active Directory Application Proxy, and how to configure one.
+++++++ Last updated : 04/22/2022++++
+# Understanding the Azure Active Directory Application Proxy complex application scenario (Preview)
+
+When an application is made up of multiple individual web applications that use different domain suffixes, or different ports or paths in the URL, each web application instance must be published as a separate Azure AD Application Proxy app, and the following problems might arise:
+1. Pre-authentication: The client must separately acquire an access token or cookie for each Azure AD Application Proxy app. This might lead to additional redirects to login.microsoftonline.com and CORS issues.
+2. CORS issues: Cross-origin resource sharing calls (OPTIONS requests) might be triggered to validate whether the calling web app is allowed to access the URL of the targeted web app. These will be blocked by the Azure AD Application Proxy cloud service, because these requests can't contain authentication information.
+3. Poor app management: Multiple enterprise apps are created to enable access to a single private app, adding friction to the app management experience.
+
+The following figure shows an example for complex application domain structure.
+
+![Diagram of domain structure for a complex application showing resource sharing between primary and secondary application.](./media/application-proxy-configure-complex-application/complex-app-structure.png)
+
+With [Azure AD Application Proxy](application-proxy.md), you can address this issue by publishing the app as a single complex application that is made up of multiple URLs across various domains.
+
+![Diagram of a Complex application with multiple application segments definition.](./media/application-proxy-configure-complex-application/complex-app-flow.png)
+
+A complex app has multiple app segments, with each app segment being a pair of an internal and an external URL.
+There is one conditional access policy associated with the app, and access to any of the external URLs works with pre-authentication, with the same set of policies enforced for all of them.
+
+This solution allows users to access the application:
+
+- by successfully authenticating once with pre-authentication
+- without being blocked by CORS errors
+- including apps that use different domain suffixes or different ports or paths in the URL internally
+
+This article provides you with the information you need to configure complex application publishing in your environment.
+
+## Characteristics of application segments for a complex application
+1. Application segments can be configured only for a wildcard application.
+2. The external and alternate URLs should match the wildcard external and alternate URL domains of the application, respectively.
+3. Application segment URLs (internal and external) must be unique across complex applications.
+4. CORS rules (optional) can be configured per application segment.
+5. Access will only be granted to the defined application segments of a complex application.
+   - Note - If all application segments are deleted, a complex application will behave as a wildcard application, opening access to all valid URLs within the specified domain.
+6. You can have an internal URL defined both as an application segment and as a regular application.
+   - Note - A regular application will always take precedence over a complex app (wildcard application).
+
+## Prerequisites
+Before you get started, make sure your environment is ready with the following settings and configurations:
+- You need to enable Application Proxy and install a connector that has line of sight to your applications. See the tutorial [Add an on-premises application for remote access through Application Proxy](application-proxy-add-on-premises-application.md#add-an-on-premises-app-to-azure-ad) to learn how to prepare your on-premises environment, install and register a connector, and test the connector.
++
+## Configure application segments for a complex application
+
+To configure (and update) Application Segments for a complex app using the API, you first [create a wildcard application](application-proxy-wildcard.md#create-a-wildcard-application), and then update the application's onPremisesPublishing property to configure the application segments and respective CORS settings.
+
+> [!NOTE]
+> One application segment is supported in preview. Support for multiple application segments will be announced soon.
+
+If successful, this method returns a `204 No Content` response code and does not return anything in the response body.
+## Example
+
+##### Request
+Here is an example of the request.
++
+```http
+PATCH https://graph.microsoft.com/beta/applications/{object-id-of-the-complex-app}
+Content-type: application/json
+
+{
+ "onPremisesPublishing": {
+ "onPremisesApplicationSegments": [
+ {
+ "externalUrl": "https://home.contoso.net/",
+ "internalUrl": "https://home.test.com/",
+ "alternateUrl": "",
+ "corsConfigurations": []
+ },
+ {
+            "externalUrl": "https://assets.contoso.net/",
+ "internalUrl": "https://assets.test.com",
+ "alternateUrl": "",
+ "corsConfigurations": [
+ {
+ "resource": "/",
+ "allowedOrigins": [
+ "https://home.contoso.net/"
+ ],
+ "allowedHeaders": [
+ "*"
+ ],
+ "allowedMethods": [
+ "*"
+ ],
+ "maxAgeInSeconds": 0
+ }
+ ]
+ }
+ ]
+ }
+}
+
+```
+##### Response
+
+```http
+HTTP/1.1 204 No Content
+```
++
+## See also
+- [Tutorial: Add an on-premises application for remote access through Application Proxy in Azure Active Directory](../app-proxy/application-proxy-add-on-premises-application.md)
+- [Plan an Azure AD Application Proxy deployment](application-proxy-deployment-plan.md)
+- [Remote access to on-premises applications through Azure Active Directory Application Proxy](application-proxy.md)
+- [Understand and solve Azure Active Directory Application Proxy CORS issues](application-proxy-understand-cors-issues.md)
active-directory Msal Net Migration Public Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-migration-public-client.md
Title: Migrate public client applications to MSAL.NET
description: Learn how to migrate a public client application from Azure Active Directory Authentication Library for .NET to Microsoft Authentication Library for .NET. -+
Last updated 08/31/2021-+ #Customer intent: As an application developer, I want to migrate my public client app from ADAL.NET to MSAL.NET.
active-directory Msal Net Token Cache Serialization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-token-cache-serialization.md
You can also specify options to limit the size of the in-memory token cache:
#### Distributed caches
-If you use `app.AddDistributedTokenCache`, the token cache is an adapter against the .NET `IDistributedCache` implementation. So you can choose between a distributed memory cache, a SQL Server cache, a Redis cache, or an Azure Cosmos DB cache. For details about the `IDistributedCache` implementations, see [Distributed memory cache](/aspnet/core/performance/caching/distributed).
+If you use `app.AddDistributedTokenCache`, the token cache is an adapter against the .NET `IDistributedCache` implementation. So you can choose between a SQL Server cache, a Redis cache, an Azure Cosmos DB cache, or any other cache implementing the [IDistributedCache](https://docs.microsoft.com/dotnet/api/microsoft.extensions.caching.distributed.idistributedcache?view=dotnet-plat-ext-6.0) interface.
-Here's the code for a distributed in-memory token cache:
-
-```CSharp
- // In-memory distributed token cache
- app.AddDistributedTokenCache(services =>
- {
- // In net462/net472, requires to reference Microsoft.Extensions.Caching.Memory
- services.AddDistributedMemoryCache();
-
- // Distributed token caches have an L1/L2 mechanism.
- // L1 is in memory, and L2 is the distributed cache
- // implementation that you will choose below.
- // You can configure them to limit the memory of the
- // L1 cache, encrypt, and set eviction policies.
- services.Configure<MsalDistributedTokenCacheAdapterOptions>(options =>
- {
- // You can disable the L1 cache if you want
- options.DisableL1Cache = false;
-
- // Or limit the memory (by default, this is 500 MB)
- options.sizeLimit = 1024 * 1024 * 1024, // 1 GB
-
- // You can choose to encrypt the cache or not
- options.Encrypt = false;
-
- // And you can set eviction policies for the distributed
- // cache
- options.SlidingExpiration = TimeSpan.FromHours(1);
- });
- });
-```
+For testing purposes only, you may want to use `services.AddDistributedMemoryCache()`, an in-memory implementation of `IDistributedCache`.
Here's the code for a SQL Server cache:
{
    services.AddDistributedSqlServerCache(options =>
    {
- // In net462/net472, requires to reference Microsoft.Extensions.Caching.Memory
-
+
        // Requires to reference Microsoft.Extensions.Caching.SqlServer
        options.ConnectionString = @"Data Source=(localdb)\MSSQLLocalDB;Initial Catalog=TestCache;Integrated Security=True;Connect Timeout=30;Encrypt=False;TrustServerCertificate=False;ApplicationIntent=ReadWrite;MultiSubnetFailover=False";
        options.SchemaName = "dbo";
active-directory Msal Python Adfs Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-python-adfs-support.md
Title: Azure AD FS support (MSAL Python)
description: Learn about Active Directory Federation Services (AD FS) support in the Microsoft Authentication Library for Python -+
Last updated 11/23/2019-+ #Customer intent: As an application developer, I want to learn about AD FS support in MSAL for Python so I can decide if this platform meets my application development needs and requirements.
active-directory Quickstart Configure App Expose Web Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-configure-app-expose-web-apis.md
For example, if your web API's application ID URI is `https://contoso.com/api` a
## Using the exposed scopes
-In the next article in the series, you configure a client app's registration with access to your web API and the scopes you defined by following the steps this article.
+In the next article in the series, you configure a client app's registration with access to your web API and the scopes you defined by following the steps in this article.
Once a client app registration is granted permission to access your web API, the client can be issued an OAuth 2.0 access token by the Microsoft identity platform. When the client calls the web API, it presents an access token whose scope (`scp`) claim is set to the permissions you've specified in the client's app registration.
active-directory Quickstart V2 Python Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-python-webapp.md
Title: "Quickstart: Add sign-in with Microsoft to a Python web app | Azure"
description: In this quickstart, learn how a Python web app can sign in users, get an access token from the Microsoft identity platform, and call the Microsoft Graph API. -+
Last updated 11/22/2021 -+
active-directory Scenario Desktop Acquire Token Device Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-device-code-flow.md
Title: Acquire a token to call a web API using device code flow (desktop app) |
description: Learn how to build a desktop app that calls web APIs to acquire a token for the app using device code flow -+
Last updated 08/25/2021-+ #Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform.
active-directory Scenario Desktop Acquire Token Integrated Windows Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-integrated-windows-authentication.md
Title: Acquire a token to call a web API using integrated Windows authentication
description: Learn how to build a desktop app that calls web APIs to acquire a token for the app using integrated Windows authentication -+
Last updated 08/25/2021-+ #Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform.
active-directory Scenario Desktop Acquire Token Interactive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-interactive.md
Title: Acquire a token to call a web API interactively (desktop app) | Azure
description: Learn how to build a desktop app that calls web APIs to acquire a token for the app interactively -+
Last updated 08/25/2021-+ #Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform.
active-directory Scenario Desktop Acquire Token Username Password https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-username-password.md
Title: Acquire a token to call a web API using username and password (desktop ap
description: Learn how to build a desktop app that calls web APIs to acquire a token for the app using username and password. -+
Last updated 08/25/2021-+ #Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform.
active-directory Scenario Desktop Acquire Token Wam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-wam.md
Title: Acquire a token to call a web API using web account manager (desktop app)
description: Learn how to build a desktop app that calls web APIs to acquire a token for the app using web account manager -+
Last updated 08/25/2021-+ #Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform.
active-directory Scenario Desktop Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token.md
Title: Acquire a token to call a web API (desktop app) | Azure
description: Learn how to build a desktop app that calls web APIs to acquire a token for the app -+
Last updated 08/25/2021-+ #Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform.
active-directory Direct Federation Adfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/direct-federation-adfs.md
Previously updated : 04/27/2021 Last updated : 04/14/2022
The next section illustrates how to configure the required attributes and claims
### Before you begin

An AD FS server must already be set up and functioning before you begin this procedure. For help with setting up an AD FS server, see [Create a test AD FS 3.0 instance on an Azure virtual machine](https://medium.com/in-the-weeds/create-a-test-active-directory-federation-services-3-0-instance-on-an-azure-virtual-machine-9071d978e8ed).
+### Add the relying party trust and claim rules
-### Add the relying party trust and claim rules
-1. On the AD FS server, go to **Tools** > **AD FS management**.
-1. In the navigation pane, select **Trust Relationships** > **Relying Party Trusts**.
-1. Under **Actions**, select **Add Relying Party Trust**.
-1. In the add relying party trust wizard, for **Select Data Source**, use the option **Import data about the relying party published online or on a local network**. Specify this federation metadata URL: `https://nexus.microsoftonline-p.com/federationmetadata/2007-06/federationmetadata.xml`. Leave other default selections. Select **Close**.
-1. The **Edit Claim Rules** wizard opens.
-1. In the **Edit Claim Rules** wizard, select **Add Rule**. In **Choose Rule Type**, select **Send Claims Using a Custom Rule**. Select *Next*.
+1. On the AD FS server, go to **Tools** > **AD FS management**.
+1. In the navigation pane, select **Trust Relationships** > **Relying Party Trusts**.
+1. Under **Actions**, select **Add Relying Party Trust**.
+1. In the **Select Data Source** section, select **Enter data about the relying party manually**, and then select **Next**.
+1. On the **Specify Display Name** page, type a name in **Display name**, under **Notes** type a description for this relying party trust, and then select **Next**.
+1. On the **Configure Certificate** page, if you have an optional token encryption certificate, select **Browse** to locate a certificate file, and then select **Next**.
+1. On the **Configure URL** page, select the **Enable support for the WS-Federation Passive protocol** check box. Under **Relying party WS-Federation Passive protocol URL**, type the URL for this relying party trust: `https://login.microsoftonline.com/login.srf`
+1. Select **Next**.
+1. On the **Configure Identifiers** page, specify the relying party trust identifier, including the tenant ID of the service partner's Azure AD tenant: `https://login.microsoftonline.com/<tenant_ID>/`
+1. Select **Add** to add the identifier to the list, and then select **Next**.
+1. On the **Choose Access Control Policy** page, select a policy, and then select **Next**.
+1. On the **Ready to Add Trust** page, review the settings, and then select **Next** to save your relying party trust information.
+1. On the **Finish** page, select **Close**. This action automatically displays the **Edit Claim Rules** dialog box.
+1. In the **Edit Claim Rules** wizard, select **Add Rule**. In **Choose Rule Type**, select **Send Claims Using a Custom Rule**. Select **Next**.
1. In **Configure Claim Rule**, specify the following values:
   - **Claim rule name**: Issue Immutable ID
   - **Custom rule**: `c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname"] => issue(store = "Active Directory", types = ("http://schemas.microsoft.com/LiveID/Federation/2008/05/ImmutableID"), query = "samAccountName={0};objectGUID;{1}", param = regexreplace(c.Value, "(?<domain>[^\\]+)\\(?<user>.+)", "${user}"), param = c.Value);`
-1. Select **Finish**.
+1. Select **Finish**.
1. The **Edit Claim Rules** window will show the new rule. Click **Apply**.
-1. In the same **Edit Claim Rules** wizard, select **Add Rule**. In **Cohose Rule Type**, select **Send LDAP Attributes as Claims**. Select **Next**.
-1. In **Configure Claim Rule**, specify the following values:
+1. In the same **Edit Claim Rules** wizard, select **Add Rule**. In **Choose Rule Type**, select **Send LDAP Attributes as Claims**. Select **Next**.
+1. In **Configure Claim Rule**, specify the following values:
   - **Claim rule name**: Email claim rule
   - **Attribute store**: Active Directory
   - **LDAP Attribute**: E-Mail-Addresses
- - **Outgoing Claim Type**: E-Mail Address
+ - **Outgoing Claim Type**: E-Mail Address
-1. Select **Finish**.
+1. Select **Finish**.
1. The **Edit Claim Rules** window will show the new rule. Click **Apply**.
1. Click **OK**. The AD FS server is now configured for federation using WS-Fed.

## Next steps
-Next, you'll [configure SAML/WS-Fed IdP federation in Azure AD](direct-federation.md#step-3-configure-samlws-fed-idp-federation-in-azure-ad) either in the Azure AD portal or by using PowerShell.
+Next, you'll [configure SAML/WS-Fed IdP federation in Azure AD](direct-federation.md#step-3-configure-samlws-fed-idp-federation-in-azure-ad) either in the Azure AD portal or by using the Microsoft Graph API.
active-directory Leave The Organization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/leave-the-organization.md
Previously updated : 03/21/2022 Last updated : 05/09/2022
An Azure Active Directory (Azure AD) B2B collaboration user can decide to leave
## Leave an organization
+In your My Account portal, on the Organizations page, you can view and manage the organizations you have access to:
+
+- **Home organization**: Your home organization is listed first. This is the organization that owns your work or school account. Because your account is managed by your administrator, you're not allowed to leave your home organization. (If you don't have an assigned home organization, you'll just see a single heading that says Organizations with the list of your associated organizations.)
+
+- **Other organizations you belong to**: You'll also see the other organizations that you've signed in to previously using your work or school account. You can leave any of these organizations at any time.
+
To leave an organization, follow these steps.

1. Go to your **My Account** page by doing one of the following:
-- If you're using a work or school account, go to https://myaccount.microsoft.com and sign in.
-- If you're using a personal account, go to https://myapps.microsoft.com and sign in, and then select your account icon in the upper right and select **View account**. Or, use a My Account URL that includes your tenant information to go directly to your My Account page (examples are shown in the following note).
+
+ - If you're using a work or school account, go to https://myaccount.microsoft.com and sign in.
+ - If you're using a personal account, go to https://myapps.microsoft.com and sign in, and then select your account icon in the upper right and select **View account**. Or, use a My Account URL that includes your tenant information to go directly to your My Account page (examples are shown in the following note).
> [!NOTE]
> If you use the email one-time passcode feature when signing in, you'll need to use a My Account URL that includes your tenant name or tenant ID, for example: `https://myaccount.microsoft.com?tenantId=wingtiptoys.onmicrosoft.com` or `https://myaccount.microsoft.com?tenantId=ab123456-cd12-ef12-gh12-ijk123456789`.
-2. Under **Organizations**, find the organization that you want to leave, and select **Leave organization**.
+1. Select **Organizations** from the left navigation pane or select the **Manage organizations** link from the **Organizations** block.
- ![Screenshot showing Leave organization option in the user interface](media/leave-the-organization/leave-org.png)
-3. When asked to confirm, select **Leave**.
+1. Under **Other organizations you belong to**, find the organization that you want to leave, and select **Leave organization**.
-> [!NOTE]
- > You cannot leave your home organization.
+ ![Screenshot showing Leave organization option in the user interface.](media/leave-the-organization/leave-org.png)
+1. When asked to confirm, select **Leave**.
## Account removal
active-directory What Is B2b https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/what-is-b2b.md
Previously updated : 03/31/2022 Last updated : 05/09/2022
# B2B collaboration overview
-Azure Active Directory (Azure AD) B2B collaboration is a feature within External Identities that lets you invite guest users to collaborate with your organization. With B2B collaboration, you can securely share your company's applications and services with guest users from any other organization, while maintaining control over your own corporate data. Work safely and securely with external partners, large or small, even if they don't have Azure AD or an IT department.
+Azure Active Directory (Azure AD) B2B collaboration is a feature within External Identities that lets you invite guest users to collaborate with your organization. With B2B collaboration, you can securely share your company's applications and services with external users, while maintaining control over your own corporate data. Work safely and securely with external partners, large or small, even if they don't have Azure AD or an IT department.
![Diagram illustrating B2B collaboration](media/what-is-b2b/b2b-collaboration-overview.png)
active-directory How To Connect Sync Configure Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-configure-filtering.md
# Azure AD Connect sync: Configure filtering

By using filtering, you can control which objects appear in Azure Active Directory (Azure AD) from your on-premises directory. The default configuration takes all objects in all domains in the configured forests. In general, this is the recommended configuration. Users using Microsoft 365 workloads, such as Exchange Online and Skype for Business, benefit from a complete Global Address List so they can send email and call everyone. With the default configuration, they would have the same experience that they would have with an on-premises implementation of Exchange or Lync.
-In some cases however, you're required make some changes to the default configuration. Here are some examples:
+In some cases however, you're required to make some changes to the default configuration. Here are some examples:
* You run a pilot for Azure or Microsoft 365 and you only want a subset of users in Azure AD. In the small pilot, it's not important to have a complete Global Address List to demonstrate the functionality.
* You have many service accounts and other nonpersonal accounts that you don't want in Azure AD.
You can use multiple filtering options at the same time. For example, you can us
## Domain-based filtering

This section provides you with the steps to configure your domain filter. If you added or removed domains in your forest after you installed Azure AD Connect, you also have to update the filtering configuration.
-The preferred way to change domain-based filtering is by running the installation wizard and changing [domain and OU filtering](how-to-connect-install-custom.md#domain-and-ou-filtering). The installation wizard automates all the tasks that are documented in this topic.
+To change domain-based filtering, run the installation wizard: [domain and OU filtering](how-to-connect-install-custom.md#domain-and-ou-filtering). The installation wizard automates all the tasks that are documented in this topic.
-You should only follow these steps if you're unable to run the installation wizard for some reason.
-Domain-based filtering configuration consists of these steps:
-
-1. Select the domains that you want to include in the synchronization.
-2. For each added and removed domain, adjust the run profiles.
-3. [Apply and verify changes](#apply-and-verify-changes).
-
-### Select the domains to be synchronized
-There are two ways to select the domains to be synchronized:
- - Using the Synchronization Service
- - Using the Azure AD Connect wizard.
--
-#### Select the domains to be synchronized using the Synchronization Service
-To set the domain filter, do the following steps:
-
-1. Sign in to the server that is running Azure AD Connect sync by using an account that is a member of the **ADSyncAdmins** security group.
-2. Start **Synchronization Service** from the **Start** menu.
-3. Select **Connectors**, and in the **Connectors** list, select the Connector with the type **Active Directory Domain Services**. In **Actions**, select **Properties**.
- ![Connector properties](./media/how-to-connect-sync-configure-filtering/connectorproperties.png)
-4. Click **Configure Directory Partitions**.
-5. In the **Select directory partitions** list, select and unselect domains as needed. Verify that only the partitions that you want to synchronize are selected.
- ![Screenshot that shows the directory partitions in the "Properties" window.](./media/how-to-connect-sync-configure-filtering/connectorpartitions.png)
- If you've changed your on-premises Active Directory infrastructure and added or removed domains from the forest, then click the **Refresh** button to get an updated list. When you refresh, you're asked for credentials. Provide any credentials with read access to Windows Server Active Directory. It doesn't have to be the user that is prepopulated in the dialog box.
- ![Refresh needed](./media/how-to-connect-sync-configure-filtering/refreshneeded.png)
-6. When you're done, close the **Properties** dialog by clicking **OK**. If you removed domains from the forest, a message pop-up says that a domain was removed and that configuration will be cleaned up.
-7. Continue to adjust the run profiles.
-
-#### Select the domains to be synchronized using the Azure AD Connect wizard
-To set the domain filter, do the following steps:
-
-1. Start the Azure AD Connect wizard
-2. Click **Configure**.
-3. Select **Customize Synchronization Options** and click **Next**.
-4. Enter your Azure AD credentials
-5. On the **Connected Directories** screen click **Next**.
-6. On the **Domain and OU filtering page** click **Refresh**. New domains will now appear and deleted domains will disappear.
- ![Partitions](./media/how-to-connect-sync-configure-filtering/update2.png)
-
-### Update the run profiles
-If you've updated your domain filter, you also need to update the run profiles.
-
-1. In the **Connectors** list, make sure that the Connector that you changed in the previous step is selected. In **Actions**, select **Configure Run Profiles**.
- ![Connector run profiles 1](./media/how-to-connect-sync-configure-filtering/connectorrunprofiles1.png)
-2. Find and identify the following profiles:
- * Full Import
- * Full Synchronization
- * Delta Import
- * Delta Synchronization
- * Export
-3. For each profile, adjust the **added** and **removed** domains.
- 1. For each of the five profiles, do the following steps for each **added** domain:
- 1. Select the run profile and click **New Step**.
- 2. On the **Configure Step** page, in the **Type** drop-down menu, select the step type with the same name as the profile that you're configuring. Then click **Next**.
- ![Connector run profiles 2](./media/how-to-connect-sync-configure-filtering/runprofilesnewstep1.png)
- 3. On the **Connector Configuration** page, in the **Partition** drop-down menu, select the name of the domain that you've added to your domain filter.
- ![Connector run profiles 3](./media/how-to-connect-sync-configure-filtering/runprofilesnewstep2.png)
- 4. To close the **Configure Run Profile** dialog, click **Finish**.
- 2. For each of the five profiles, do the following steps for each **removed** domain:
- 1. Select the run profile.
- 2. If the **Value** of the **Partition** attribute is a GUID, select the run step and click **Delete Step**.
- ![Connector run profiles 4](./media/how-to-connect-sync-configure-filtering/runprofilesdeletestep.png)
- 3. Verify your change. Each domain that you want to synchronize should be listed as a step in each run profile.
-4. To close the **Configure Run Profiles** dialog, click **OK**.
-5. To complete the configuration, you need to run a **Full import** and a **Delta sync**. Continue reading the section [Apply and verify changes](#apply-and-verify-changes).
## Organizational unit–based filtering
-The preferred way to change OU-based filtering is by running the installation wizard and changing [domain and OU filtering](how-to-connect-install-custom.md#domain-and-ou-filtering). The installation wizard automates all the tasks that are documented in this topic.
-
-You should only follow these steps if you're unable to run the installation wizard for some reason.
-
-To configure organizational unitΓÇôbased filtering, do the following steps:
-
-1. Sign in to the server that is running Azure AD Connect sync by using an account that is a member of the **ADSyncAdmins** security group.
-2. Start **Synchronization Service** from the **Start** menu.
-3. Select **Connectors**, and in the **Connectors** list, select the Connector with the type **Active Directory Domain Services**. In **Actions**, select **Properties**.
- ![Connector properties](./media/how-to-connect-sync-configure-filtering/connectorproperties.png)
-4. Click **Configure Directory Partitions**, select the domain that you want to configure, and then click **Containers**.
-5. When you're prompted, provide any credentials with read access to your on-premises Active Directory. It doesn't have to be the user that is prepopulated in the dialog box.
-6. In the **Select Containers** dialog box, clear the OUs that you donΓÇÖt want to synchronize with the cloud directory, and then click **OK**.
- ![OUs in the Select Containers dialog box](./media/how-to-connect-sync-configure-filtering/ou.png)
- * The **Computers** container should be selected for your Windows 10 computers to be successfully synchronized to Azure AD. If your domain-joined computers are located in other OUs, make sure those are selected.
- * The **ForeignSecurityPrincipals** container should be selected if you have multiple forests with trusts. This container allows cross-forest security group membership to be resolved.
- * The **RegisteredDevices** OU should be selected if you enabled the device writeback feature. If you use another writeback feature, such as group writeback, make sure these locations are selected.
- * Select any other OU where Users, iNetOrgPersons, Groups, Contacts, and Computers are located. In the picture, all these OUs are located in the ManagedObjects OU.
- * If you use group-based filtering, then the OU where the group is located must be included.
- * Note that you can configure whether new OUs that are added after the filtering configuration finishes are synchronized or not synchronized. See the next section for details.
-7. When you're done, close the **Properties** dialog by clicking **OK**.
-8. To complete the configuration, you need to run a **Full import** and a **Delta sync**. Continue reading the section [Apply and verify changes](#apply-and-verify-changes).
-
-### Synchronize new OUs
-New OUs that are created after filtering has been configured are synchronized by default. This state is indicated by a selected check box. You can also unselect some sub-OUs. To get this behavior, click the box until it becomes white with a blue check mark (its default state). Then unselect any sub-OUs that you don't want to synchronize.
-
-If all sub-OUs are synchronized, then the box is white with a blue check mark.
-![OU with all boxes selected](./media/how-to-connect-sync-configure-filtering/ousyncnewall.png)
-
-If some sub-OUs have been unselected, then the box is gray with a white check mark.
-![OU with some sub-OUs unselected](./media/how-to-connect-sync-configure-filtering/ousyncnew.png)
-
-With this configuration, a new OU that was created under ManagedObjects is synchronized.
-
-The Azure AD Connect installation wizard always creates this configuration.
-
-### Don't synchronize new OUs
-You can configure the sync engine to not synchronize new OUs after the filtering configuration has finished. This state is indicated in the UI by the box appearing solid gray with no check mark. To get this behavior, click the box until it becomes white with no check mark. Then select the sub-OUs that you want to synchronize.
-
-![OU with the root unselected](./media/how-to-connect-sync-configure-filtering/oudonotsyncnew.png)
+To change OU-based filtering, run the installation wizard: [domain and OU filtering](how-to-connect-install-custom.md#domain-and-ou-filtering). The installation wizard automates all the tasks that are documented in this topic.
-With this configuration, a new OU that was created under ManagedObjects isn't synchronized.
## Attribute-based filtering

Make sure that you're using the November 2015 ([1.0.9125](reference-connect-version-history.md)) or later build for these steps to work.
active-directory Qs Configure Cli Windows Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vmss.md
To create a virtual machine scale set with the system-assigned managed identity
1. [Create](/cli/azure/vmss/#az-vmss-create) a virtual machine scale set. The following example creates a virtual machine scale set named *myVMSS* with a system-assigned managed identity, as requested by the `--assign-identity` parameter. The `--admin-username` and `--admin-password` parameters specify the administrative user name and password account for virtual machine sign-in. Update these values as appropriate for your environment:
- ```azurecli-interactive
- az vmss create --resource-group myResourceGroup --name myVMSS --image win2016datacenter --upgrade-policy-mode automatic --custom-data cloud-init.txt --admin-username azureuser --admin-password myPassword12 --assign-identity --generate-ssh-keys
+ ```azurecli-interactive
+ az vmss create --resource-group myResourceGroup --name myVMSS --image win2016datacenter --upgrade-policy-mode automatic --custom-data cloud-init.txt --admin-username azureuser --admin-password myPassword12 --assign-identity --generate-ssh-keys --role contributor
``` ### Enable system-assigned managed identity on an existing Azure virtual machine scale set
This section walks you through creation of a virtual machine scale set and assig
    }
    ```
-3. [Create](/cli/azure/vmss/#az-vmss-create) a virtual machine scale set. The following example creates a virtual machine scale set associated with the new user-assigned managed identity, as specified by the `--assign-identity` parameter. Be sure to replace the `<RESOURCE GROUP>`, `<VMSS NAME>`, `<USER NAME>`, `<PASSWORD>`, and `<USER ASSIGNED IDENTITY>` parameter values with your own values.
+3. [Create](/cli/azure/vmss/#az-vmss-create) a virtual machine scale set. The following example creates a virtual machine scale set associated with the new user-assigned managed identity, as specified by the `--assign-identity` parameter. Be sure to replace the `<RESOURCE GROUP>`, `<VMSS NAME>`, `<USER NAME>`, `<PASSWORD>`, `<USER ASSIGNED IDENTITY>`, and `<ROLE>` parameter values with your own values.
- ```azurecli-interactive
- az vmss create --resource-group <RESOURCE GROUP> --name <VMSS NAME> --image UbuntuLTS --admin-username <USER NAME> --admin-password <PASSWORD> --assign-identity <USER ASSIGNED IDENTITY>
+ ```azurecli-interactive
+ az vmss create --resource-group <RESOURCE GROUP> --name <VMSS NAME> --image UbuntuLTS --admin-username <USER NAME> --admin-password <PASSWORD> --assign-identity <USER ASSIGNED IDENTITY> --role <ROLE>
``` ### Assign a user-assigned managed identity to an existing virtual machine scale set
active-directory Admin Units Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-assign-roles.md
Previously updated : 03/22/2022 Last updated : 05/09/2022
You can assign an Azure AD role with an administrative unit scope by using the A
### PowerShell
+Use the [New-AzureADMSRoleAssignment](/powershell/module/azuread/new-azureadmsroleassignment) command and the `DirectoryScopeId` parameter to assign a role with administrative unit scope.
+ ```powershell
-$adminUser = Get-AzureADUser -ObjectId "Use the user's UPN, who would be an admin on this unit"
-$role = Get-AzureADDirectoryRole | Where-Object -Property DisplayName -EQ -Value "User Administrator"
-$adminUnitObj = Get-AzureADMSAdministrativeUnit -Filter "displayname eq 'The display name of the unit'"
-$roleMember = New-Object -TypeName Microsoft.Open.MSGraph.Model.MsRoleMemberInfo
-$roleMember.Id = $adminUser.ObjectId
-Add-AzureADMSScopedRoleMembership -Id $adminUnitObj.Id -RoleId $role.ObjectId -RoleMemberInfo $roleMember
+$user = Get-AzureADUser -Filter "userPrincipalName eq 'Example_UPN'"
+$roleDefinition = Get-AzureADMSRoleDefinition -Filter "displayName eq 'Example_role_name'"
+$adminUnit = Get-AzureADMSAdministrativeUnit -Filter "displayName eq 'Example_admin_unit_name'"
+$directoryScope = '/administrativeUnits/' + $adminUnit.Id
+$roleAssignment = New-AzureADMSRoleAssignment -DirectoryScopeId $directoryScope -RoleDefinitionId $roleDefinition.Id -PrincipalId $user.objectId
```
-You can change the highlighted section as required for the specific environment.
### Microsoft Graph API

Request
You can view all the role assignments created with an administrative unit scope
### PowerShell
+Use the [Get-AzureADMSScopedRoleMembership](/powershell/module/azuread/get-azureadmsscopedrolemembership) command to list role assignments with administrative unit scope.
+ ```powershell
-$adminUnitObj = Get-AzureADMSAdministrativeUnit -Filter "displayname eq 'The display name of the unit'"
-Get-AzureADMSScopedRoleMembership -Id $adminUnitObj.Id | fl *
+$adminUnit = Get-AzureADMSAdministrativeUnit -Filter "displayname eq 'Example_admin_unit_name'"
+Get-AzureADMSScopedRoleMembership -Id $adminUnit.Id | fl *
```
-You can change the highlighted section as required for your specific environment.
### Microsoft Graph API

Request
active-directory Custom Assign Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-assign-powershell.md
Get-AzureADMSRoleAssignment -Filter "principalId eq '27c8ca78-ab1c-40ae-bd1b-eae
Get-AzureADMSRoleAssignment -Filter "roleDefinitionId eq '355aed8a-864b-4e2b-b225-ea95482e7570'" ```
-### Delete a role assignment
+### Remove a role assignment
``` PowerShell
-# Delete role assignment
+# Remove role assignment
Remove-AzureADMSRoleAssignment -Id 'qiho4WOb9UKKgng_LbPV7tvKaKRCD61PkJeKMh7Y458-1'
```
active-directory Authomize Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/authomize-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Authomize'
+description: Learn how to configure single sign-on between Azure Active Directory and Authomize.
++++++++ Last updated : 05/06/2022+++
+# Tutorial: Azure AD SSO integration with Authomize
+
+In this tutorial, you'll learn how to integrate Authomize with Azure Active Directory (Azure AD). When you integrate Authomize with Azure AD, you can:
+
+* Control in Azure AD who has access to Authomize.
+* Enable your users to be automatically signed-in to Authomize with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Authomize single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Authomize supports **SP and IDP** initiated SSO.
+* Authomize supports **Just In Time** user provisioning.
+
+## Add Authomize from the gallery
+
+To configure the integration of Authomize into Azure AD, you need to add Authomize from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Authomize** in the search box.
+1. Select **Authomize** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Authomize
+
+Configure and test Azure AD SSO with Authomize using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Authomize.
+
+To configure and test Azure AD SSO with Authomize, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Authomize SSO](#configure-authomize-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Authomize test user](#create-authomize-test-user)** - to have a counterpart of B.Simon in Authomize that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Authomize** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://<CustomerName>.authomize.com/api/sso/metadata.xml?domain=<DOMAIN>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://<CustomerName>.authomize.com/api/sso/assert?domain=<DOMAIN>`
+
+1. Click **Set additional URLs** and perform the following steps if you wish to configure the application in **SP** initiated mode:
+
+ a. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<CustomerName>.authomize.com`
+
+ b. In the **Relay State** textbox, type a URL using the following pattern:
+ `https://<CustomerName>.authomize.com`
+
+ > [!NOTE]
+    > These values are not real. Update these values with the actual Identifier, Reply URL, Sign-on URL, and Relay State URL. Contact [Authomize Client support team](mailto:support@authomize.com) to get these values. You can also refer to the patterns shown in the Basic SAML Configuration section in the Azure portal. An illustrative (non-working) set of values is shown at the end of this procedure.
+
+1. Click **Save**.
+
+1. The Authomize application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the Authomize application image.](common/default-attributes.png "Image")
+
+1. In addition to the attributes above, the Authomize application expects a few more attributes to be passed back in the SAML response, as shown below. These attributes are also prepopulated, but you can review them per your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | user_id | user.mail |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up Authomize** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URLs.](common/copy-configuration-urls.png "Attributes")
+
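+For reference, here is a minimal sketch of how the Basic SAML Configuration patterns above might resolve for a hypothetical customer named `contoso` with the email domain `contoso.com`. These are illustrative values only; use the actual values supplied by the Authomize support team.
+
+```azurepowershell-interactive
+# Illustrative only: hypothetical customer name and domain, not real values.
+$identifier = "https://contoso.authomize.com/api/sso/metadata.xml?domain=contoso.com"
+$replyUrl   = "https://contoso.authomize.com/api/sso/assert?domain=contoso.com"
+$signOnUrl  = "https://contoso.authomize.com"
+$relayState = "https://contoso.authomize.com"
+```
+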
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
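+If you prefer to script this step, the following is a minimal sketch using the Az PowerShell module. It assumes the Az.Resources module is installed, you're signed in with `Connect-AzAccount`, and `contoso.com` is a verified domain in your tenant; the password is a placeholder.
+
+```azurepowershell-interactive
+# Minimal sketch: create the B.Simon test user (domain and password are placeholders).
+$password = ConvertTo-SecureString -String "<choose-a-strong-password>" -AsPlainText -Force
+New-AzADUser -DisplayName "B.Simon" `
+    -UserPrincipalName "B.Simon@contoso.com" `
+    -MailNickname "B.Simon" `
+    -Password $password
+```
+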
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Authomize.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Authomize**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Authomize SSO
+
+1. Log in to your Authomize company site as an administrator.
+
+1. Go to **Settings** (gear icon) > **SSO**.
+
+1. In the **SSO Settings** page, perform the following steps:
+
+ ![Screenshot that shows the Configuration Settings.](./media/authomize-tutorial/settings.png "Configuration")
+
+    a. Select the **Enable SSO** checkbox.
+
+    b. Enter a valid name in the **Title** textbox.
+
+    c. Enter your **Email domain** in the textbox.
+
+    d. In the **Identity provider SSO URL** textbox, paste the **Login URL** value that you copied from the Azure portal.
+
+    e. Open the downloaded **Certificate (Base64)** from the Azure portal in Notepad and paste the content into the **Public x509 certificate** textbox (see the optional snippet after these steps).
+
+    f. Click **Save configuration**.
+
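+Optionally, instead of opening the certificate in Notepad, you can print its content from PowerShell and copy it from the console. The file name below is a placeholder for whatever name the downloaded certificate file has on your computer.
+
+```azurepowershell-interactive
+# Print the Base64 certificate content so it can be pasted into the Public x509 certificate box.
+Get-Content -Path ".\Authomize.cer" -Raw
+```
+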
+### Create Authomize test user
+
+In this section, a user called B.Simon is created in Authomize. Authomize supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Authomize, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This will redirect to the Authomize Sign-on URL, where you can initiate the login flow.
+
+* Go to the Authomize Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Authomize instance for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in either mode. When you click the Authomize tile in My Apps, if the application is configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if it's configured in IDP mode you should be automatically signed in to the Authomize instance for which you set up the SSO. For more information about My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+
+## Next steps
+
+Once you configure Authomize you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Fidelity Planviewer Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fidelity-planviewer-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Fidelity PlanViewer'
+description: Learn how to configure single sign-on between Azure Active Directory and Fidelity PlanViewer.
++++++++ Last updated : 05/05/2022++++
+# Tutorial: Azure AD SSO integration with Fidelity PlanViewer
+
+In this tutorial, you'll learn how to integrate Fidelity PlanViewer with Azure Active Directory (Azure AD). When you integrate Fidelity PlanViewer with Azure AD, you can:
+
+* Control in Azure AD who has access to Fidelity PlanViewer.
+* Enable your users to be automatically signed-in to Fidelity PlanViewer with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Fidelity PlanViewer single sign-on (SSO) enabled subscription.
+* A Cloud Application Administrator or Application Administrator role; either role can add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Fidelity PlanViewer supports **SP** initiated SSO.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Add Fidelity PlanViewer from the gallery
+
+To configure the integration of Fidelity PlanViewer into Azure AD, you need to add Fidelity PlanViewer from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Fidelity PlanViewer** in the search box.
+1. Select **Fidelity PlanViewer** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Fidelity PlanViewer
+
+Configure and test Azure AD SSO with Fidelity PlanViewer using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Fidelity PlanViewer.
+
+To configure and test Azure AD SSO with Fidelity PlanViewer, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Fidelity PlanViewer SSO](#configure-fidelity-planviewer-sso)** - to configure the single sign-on settings on the application side.
+    1. **[Create Fidelity PlanViewer test user](#create-fidelity-planviewer-test-user)** - to have a counterpart of B.Simon in Fidelity PlanViewer that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Fidelity PlanViewer** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type the value:
+ `sp.fidelityworldwideinvestments.com`
+
+ b. In the **Reply URL** text box, type the URL:
+    `https://sso.sp.fidelity.co.uk/sp/ACS.saml2`
+
+ c. In the **Sign-on URL** text box, type the URL:
+ `https://cat-idr560.fidelity.co.uk/planviewer/jsp/home.jsp`
+
+1. The Fidelity PlanViewer application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of Fidelity PlanViewer application attributes.](common/edit-attribute.png "Mapping")
+
+1. In addition to the attributes above, the Fidelity PlanViewer application expects a few more attributes to be passed back in the SAML response, as shown below. These attributes are also prepopulated, but you can review them per your requirements.
+
+ | Name | Source Attribute |
+ |-| |
+ | LAST_NAME | user.surname |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Fidelity PlanViewer** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URLs.](common/copy-configuration-urls.png "Attributes")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Fidelity PlanViewer.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Fidelity PlanViewer**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Fidelity PlanViewer SSO
+
+To configure single sign-on on the **Fidelity PlanViewer** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Fidelity PlanViewer support team](mailto:service.delivery@fil.com). They will configure this setting so the SAML SSO connection is set properly on both sides.
+
+### Create Fidelity PlanViewer test user
+
+In this section, you create a user called Britta Simon in Fidelity PlanViewer. Work with [Fidelity PlanViewer support team](mailto:service.delivery@fil.com) to add the users in the Fidelity PlanViewer platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. This will redirect to the Fidelity PlanViewer Sign-on URL, where you can initiate the login flow.
+
+* Go to the Fidelity PlanViewer Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Fidelity PlanViewer tile in My Apps, it will redirect to the Fidelity PlanViewer Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Fidelity PlanViewer you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Framer Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/framer-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Framer'
+description: Learn how to configure single sign-on between Azure Active Directory and Framer.
++++++++ Last updated : 05/08/2022++++
+# Tutorial: Azure AD SSO integration with Framer
+
+In this tutorial, you'll learn how to integrate Framer with Azure Active Directory (Azure AD). When you integrate Framer with Azure AD, you can:
+
+* Control in Azure AD who has access to Framer.
+* Enable your users to be automatically signed-in to Framer with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Framer single sign-on (SSO) enabled subscription.
+* A Cloud Application Administrator or Application Administrator role; either role can add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Framer supports **SP** and **IDP** initiated SSO.
+
+## Add Framer from the gallery
+
+To configure the integration of Framer into Azure AD, you need to add Framer from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Framer** in the search box.
+1. Select **Framer** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Framer
+
+Configure and test Azure AD SSO with Framer using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Framer.
+
+To configure and test Azure AD SSO with Framer, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Framer SSO](#configure-framer-sso)** - to configure the single sign-on settings on the application side.
+    1. **[Create Framer test user](#create-framer-test-user)** - to have a counterpart of B.Simon in Framer that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Framer** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://api.framer.com/auth/saml/callback/<ID>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://api.framer.com/auth/saml/callback/<ID>`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://api.framer.com/auth/saml/callback/<ID>`
+
+    > [!NOTE]
+    > These values are not real. Update these values with the actual Identifier, Reply URL, and Sign-on URL. Contact [Framer Client support team](mailto:support@framer.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. An illustrative (non-working) set of values is shown at the end of this procedure.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Framer** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URLs.](common/copy-configuration-urls.png "Attributes")
+
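+For reference, here is a minimal sketch of how the patterns above might resolve for a hypothetical `<ID>` value of `abc123`. These are illustrative values only; use the actual values supplied by the Framer support team.
+
+```azurepowershell-interactive
+# Illustrative only: hypothetical <ID> value, not a real identifier.
+$identifier = "https://api.framer.com/auth/saml/callback/abc123"
+$replyUrl   = "https://api.framer.com/auth/saml/callback/abc123"
+$signOnUrl  = "https://api.framer.com/auth/saml/callback/abc123"
+```
+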
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Framer.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Framer**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Framer SSO
+
+To configure single sign-on on the **Framer** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Framer support team](mailto:support@framer.com). They will configure this setting so the SAML SSO connection is set properly on both sides.
+
+### Create Framer test user
+
+In this section, you create a user called Britta Simon in Framer. Work with [Framer support team](mailto:support@framer.com) to add the users in the Framer platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This will redirect to the Framer Sign-on URL, where you can initiate the login flow.
+
+* Go to the Framer Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Framer instance for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in either mode. When you click the Framer tile in My Apps, if the application is configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if it's configured in IDP mode you should be automatically signed in to the Framer instance for which you set up the SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Framer you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Hubble Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hubble-tutorial.md
In this tutorial, you'll learn how to integrate Hubble with Azure Active Directo
* Control in Azure AD who has access to Hubble. * Enable your users to be automatically signed-in to Hubble with their Azure AD accounts.
-* Manage your accounts in one central location - the Azure portal.
## Prerequisites
Follow these steps to enable Azure AD SSO in the Azure portal.
![The Certificate download link](common/metadataxml.png)
-1. On the **Set up Hubble** section, copy the appropriate URL(s) based on your requirement.
-
- ![Copy configuration URLs](common/copy-configuration-urls.png)
- ### Create an Azure AD test user In this section, you'll create a test user in the Azure portal called B.Simon.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Hubble SSO
-To configure single sign-on on **Hubble** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Hubble support team](mailto:cs@hubble-inc.jp). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on the **Hubble** side, you need to upload the downloaded **Federation Metadata XML** to the configuration page on Hubble.
### Create Hubble test user
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/whats-new.md
Previously updated : 05/04/2022 Last updated : 04/27/2022
This article lists the latest features, improvements, and changes in the Azure Active Directory (Azure AD) Verifiable Credentials service.
-## May
-
-We are expanding our service to all Azure AD customers! Verifiable credentials are now available to everyone with an Azure AD subscription (Free and Premium). Existing tenants that configured the Verifiable Credentials service prior to May 4, 2022 must make a [small change](verifiable-credentials-faq.md#updating-the-vc-service-configuration) to avoid service disruptions.
- ## April Starting next month, we are rolling out exciting changes to the subscription requirements for the Verifiable Credentials service. Administrators must perform a small configuration change before **May 4, 2022** to avoid service disruptions. Follow [these steps](verifiable-credentials-faq.md?#updating-the-vc-service-configuration) to apply the required configuration changes.
aks Azure Disk Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-volume.md
Title: Create a static volume for pods in Azure Kubernetes Service (AKS)
description: Learn how to manually create a volume with Azure disks for use with a pod in Azure Kubernetes Service (AKS) Previously updated : 04/01/2019 Last updated : 05/09/2019 #Customer intent: As a developer, I want to learn how to manually create and attach storage to a specific pod in AKS.
For more information on Kubernetes volumes, see [Storage options for application
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster running Kubernetes version 1.21 or later. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
-You also need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+If you want to interact with Azure Disks on an AKS cluster running Kubernetes version 1.20 or earlier, see the [Kubernetes plugin for Azure Disks][kubernetes-disks].
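+
+To check which Kubernetes version your cluster runs, the following is a minimal sketch using the Az PowerShell module; the resource group and cluster names are assumptions, so substitute your own.
+
+```azurepowershell-interactive
+# Print the cluster's Kubernetes version (names are placeholders).
+(Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster).KubernetesVersion
+```
+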
## Create an Azure disk
kubectl apply -f azure-disk-pod.yaml
For associated best practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage].
-For more information about AKS clusters interact with Azure disks, see the [Kubernetes plugin for Azure Disks][kubernetes-disks].
- <!-- LINKS - external --> [kubernetes-disks]: https://github.com/kubernetes/examples/blob/master/staging/volumes/azure_disk/README.md [kubernetes-volumes]: https://kubernetes.io/docs/concepts/storage/volumes/
aks Azure Files Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-volume.md
description: Learn how to manually create a volume with Azure Files for use with multiple concurrent pods in Azure Kubernetes Service (AKS) Previously updated : 04/1/2022 Last updated : 05/09/2022 #Customer intent: As a developer, I want to learn how to manually create and attach storage using Azure Files to a pod in AKS.
For more information on Kubernetes volumes, see [Storage options for application
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster running Kubernetes version 1.21 or later. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
-You also need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+If you want to interact with Azure Files on an AKS cluster running Kubernetes version 1.20 or earlier, see the [Kubernetes plugin for Azure Files][kubernetes-files].
## Create an Azure file share
kubectl create secret generic azure-secret --from-literal=azurestorageaccountnam
## Mount file share as an inline volume > [!NOTE]
-> Inline `azureFile` volume can only access secrets in the same namespace as the pod. To specify a different secret namespace, [please use the persistent volume example][persistent-volume-example] below instead.
+> Inline volume can only access secrets in the same namespace as the pod. To specify a different secret namespace, [please use the persistent volume example][persistent-volume-example] below instead.
To mount the Azure Files share into your pod, configure the volume in the container spec. Create a new file named `azure-files-pod.yaml` with the following contents. If you changed the name of the Files share or secret name, update the *shareName* and *secretName*. If desired, update the `mountPath`, which is the path where the Files share is mounted in the pod. For Windows Server containers, specify a *mountPath* using the Windows path convention, such as *'D:'*.
kubectl apply -f azure-files-pod.yaml
For Azure File CSI driver parameters, see [CSI driver parameters][CSI driver parameters].
-For information about AKS 1.20 or below clusters interact with Azure Files, see the [Kubernetes plugin for Azure Files][kubernetes-files].
- For associated best practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage]. <!-- LINKS - external -->
aks Cluster Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-autoscaler.md
az aks update \
--cluster-autoscaler-profile scan-interval=30s ```
-When you enable the cluster autoscaler on node pools in the cluster, those clusters will also use the cluster autoscaler profile. For example:
+When you enable the cluster autoscaler on node pools in the cluster, those node pools also use the cluster autoscaler profile. For example:
```azurecli-interactive az aks nodepool update \
This article showed you how to automatically scale the number of AKS nodes. You
[kubernetes-faq]: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#ca-doesnt-work-but-it-used-to-work-yesterday-why [kubernetes-hpa]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/ [kubernetes-hpa-walkthrough]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
-[metrics-server]: https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#metrics-server
+[metrics-server]: https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#metrics-server
aks Use Kms Etcd Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-kms-etcd-encryption.md
This article shows you how to enable encryption at rest for your Kubernetes data
* Bring your own keys * Provide encryption at rest for secrets stored in etcd
-For more details on using the KMS plugin, see [Encrypting Secret Data at Rest](https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/).
+For more information on using the KMS plugin, see [Encrypting Secret Data at Rest](https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/).
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
az provider register --namespace Microsoft.ContainerService
The following limitations apply when you integrate KMS etcd encryption with AKS: * Disabling of the KMS etcd encryption feature.
-* Changing of key Id, including key name and key version.
+* Changing of key ID, including key name and key version.
* Deletion of the key, Key Vault, or the associated identity.
-* KMS etcd encryption does not work with System-Assigned Managed Identity. The keyvault access-policy is required to be set before the feature is enabled. In addition, System-Assigned Managed Identity is not available until cluster creation, thus there is a cycle dependency.
+* KMS etcd encryption doesn't work with System-Assigned Managed Identity. The keyvault access-policy is required to be set before the feature is enabled. In addition, System-Assigned Managed Identity isn't available until cluster creation, thus there's a cycle dependency.
* Using Azure Key Vault with PrivateLink enabled. * Using more than 2000 secrets in a cluster.
+* Managed HSM support.
* Bring your own (BYO) Azure Key Vault from another tenant.
Use `az identity create` to create a User-assigned managed identity.
az identity create --name MyIdentity --resource-group MyResourceGroup ```
-Use `az identity show` to get Identity Object Id.
+Use `az identity show` to get Identity Object ID.
```azurecli IDENTITY_OBJECT_ID=$(az identity show --name MyIdentity --resource-group MyResourceGroup --query 'principalId' -o tsv) echo $IDENTITY_OBJECT_ID ```
-The above example stores the value of the Identity Object Id in *IDENTITY_OBJECT_ID*.
+The above example stores the value of the Identity Object ID in *IDENTITY_OBJECT_ID*.
-Use `az identity show` to get Identity Resource Id.
+Use `az identity show` to get Identity Resource ID.
```azurecli IDENTITY_RESOURCE_ID=$(az identity show --name MyIdentity --resource-group MyResourceGroup --query 'id' -o tsv) echo $IDENTITY_RESOURCE_ID ```
-The above example stores the value of the Identity Resource Id in *IDENTITY_RESOURCE_ID*.
+The above example stores the value of the Identity Resource ID in *IDENTITY_RESOURCE_ID*.
## Assign permissions (decrypt and encrypt) to access key vault
Use [az aks update][az-aks-update] with the `--enable-azure-keyvault-kms` and `-
az aks update --name myAKSCluster --resource-group MyResourceGroup --enable-azure-keyvault-kms --azure-keyvault-kms-key-id $KEY_ID ```
-Use below command to update all secrets. Otherwise, the old secrets are not encrypted.
+Use the command below to update all secrets. Otherwise, the old secrets aren't encrypted.
```azurecli-interactive kubectl get secrets --all-namespaces -o json | kubectl replace -f -
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
At this time, App Service Environment migrations to v3 using the migration featu
- Australia Southeast - Brazil South - Canada Central
+- Canada East
- Central India - Central US - East Asia
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview.md
App Service Environment v3 is available in the following regions:
| Australia Southeast | Brazil South | | Brazil South | Canada Central | | Canada Central | Central India |
-| Central India | Central US |
-| Central US | East Asia |
-| East Asia | East US |
-| East US | East US 2 |
-| East US 2 | France Central |
-| France Central | Germany West Central |
-| Germany West Central | Japan East |
-| Japan East | Korea Central |
-| Korea Central | North Europe |
-| North Central US | Norway East |
-| North Europe | South Africa North |
-| Norway East | South Central US |
-| South Africa North | Southeast Asia |
-| South Central US | UK South |
-| Southeast Asia | West Europe |
-| Switzerland North | West US 2 |
-| UAE North | West US 3 |
+| Canada East | Central US |
+| Central India | East Asia |
+| Central US | East US |
+| East Asia | East US 2 |
+| East US | France Central |
+| East US 2 | Germany West Central |
+| France Central | Japan East |
+| Germany West Central | Korea Central |
+| Japan East | North Europe |
+| Korea Central | Norway East |
+| North Central US | South Africa North |
+| North Europe | South Central US |
+| Norway East | Southeast Asia |
+| South Africa North | UK South |
+| South Central US | West Europe |
+| Southeast Asia | West US 2 |
+| Switzerland North | West US 3 |
+| UAE North | |
| UK South | | | UK West | | | West Central US | |
app-service Overview Arc Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-arc-integration.md
If your extension was in the stable version and auto-upgrade-minor-version is se
### Application services extension v 0.13.0 (April 2022) -- Added support for Azure Functions v4 and introduces support for PowerShell functions - Added support for Application Insights codeless integration for Node JS applications - Added support for [Access Restrictions](app-service-ip-restrictions.md) via CLI - More details provided when extension fails to install, to assist with troubleshooting issues
application-gateway Ingress Controller Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-troubleshoot.md
spec:
port: 80 targetPort: 80
-apiVersion: extensions/v1beta1
+apiVersion: networking.k8s.io/v1
kind: Ingress metadata: name: test-agic-app-ingress
rbac:
# Specify aks cluster related information. THIS IS BEING DEPRECATED. aksClusterConfiguration: apiServerAddress: <aks-api-server-address>
-```
+```
application-gateway Multiple Site Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/multiple-site-overview.md
This ordering can be established by providing a 'Priority' field value to the re
The priority field only impacts the order of evaluation of a request routing rule; this won't change the order of evaluation of path-based rules within a `PathBasedRouting` request routing rule. >[!NOTE]
->If you wish to use rule priority, you will have to specify rule-priority field values for all the existing request routing rules. Once the rule priority field is in use, any new routing rule that is created would also need to have a rule priority field value as part of its config.
+>If you wish to use rule priority, you will have to specify rule priority field values for all the existing request routing rules. Once the rule priority field is in use, any new routing rule that is created would also need to have a rule priority field value as part of its config.
+Starting with API version 2021-08-01, the rule priority field is mandatory as part of the request routing rules.
+From this API version onward, rule priority field values are auto-populated for existing request routing rules, based on the current order of evaluation, as part of the first PUT call. Any future updates to request routing rules need to include the rule priority field as part of the configuration.
+
+> [!IMPORTANT]
+> Rule priority field values for existing request routing rules, based on the current order, are automatically populated if any configuration updates are applied by using API version 2021-08-01 or later, the portal, Azure PowerShell, or the Azure CLI. Any future updates to request routing rules need to include the rule priority field as part of the configuration.
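+
+For illustration, the following is a minimal sketch of creating a routing rule with an explicit priority by using Azure PowerShell. It assumes a recent Az.Network module that exposes the `-Priority` parameter, and that `$listener`, `$backendPool`, and `$backendSettings` are existing configuration objects; the rule name is a placeholder.
+
+```azurepowershell-interactive
+# Minimal sketch: build a request routing rule with an explicit priority (names are placeholders).
+$rule = New-AzApplicationGatewayRequestRoutingRule `
+    -Name "contosoRule" `
+    -RuleType Basic `
+    -Priority 100 `
+    -HttpListener $listener `
+    -BackendAddressPool $backendPool `
+    -BackendHttpSettings $backendSettings
+```
+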
## Wildcard host names in listener
applied-ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/service-limits.md
Title: Form Recognizer quotas and limits
description: Quick reference, detailed description, and best practices on Azure Form Recognizer service Quotas and Limits -+ Previously updated : 02/15/2022 Last updated : 05/09/2022
For the usage with [Form Recognizer SDK](quickstarts/try-v3-csharp-sdk.md), [For
| **Concurrent Request limit** | 1 | 15 (default value) | | Adjustable | No<sup>2</sup> | Yes<sup>2</sup> | | **Compose Model limit** | 5 | 100 (default value) |
+| Adjustable | No<sup>2</sup> | No<sup>2</sup> |
| **Custom neural model train** | 10 per month | 10 per month | | Adjustable | No<sup>2</sup> | Yes<sup>2</sup> |
automanage Move Automanaged Configuration Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/move-automanaged-configuration-profile.md
+
+ Title: Move an Azure Automanage configuration profile across regions
+description: Learn how to move an Automanage Configuration Profile across regions
+++ Last updated : 05/01/2022+
+# Customer intent: As a sysadmin, I want to move my Automanage Configuration Profile to a different region.
++
+# Move an Azure Automanage configuration profile to a different region
+This article describes how to migrate an Automanage Configuration Profile to a different region. You might want to move your Configuration Profiles to another region for many reasons. For example, to take advantage of a new Azure region, to meet internal policy and governance requirements, or in response to capacity planning requirements. You may want to deploy Azure Automanage to some VMs that are in a new region. Some regions may require that you use Automanage Configuration Profiles that are local to that region.
+
+## Prerequisites
+* Ensure that your target region is [supported by Automanage](./automanage-virtual-machines.md#prerequisites).
+* Ensure that your Log Analytics workspace region, Automation account region, and your target region are all regions supported by the region mappings [here](../automation/how-to/region-mappings.md).
+
+## Download your desired Automanage configuration profile
+
+We'll begin by downloading our previous Configuration Profile using PowerShell. First, perform a `GET` using `Invoke-RestMethod` against the Automanage Resource Provider, substituting the values for your subscription.
+
+```url
+https://management.azure.com/subscriptions/<yourSubscription>/providers/Microsoft.Automanage/configurationProfiles?api-version=2021-04-30-preview
+```
+
+The GET command will display a list of Automanage configuration profiles, including their settings and the configuration profile ID.
+```azurepowershell-interactive
+$listConfigurationProfilesURI = "https://management.azure.com/subscriptions/<yourSubscription>/providers/Microsoft.Automanage/configurationProfiles?api-version=2021-04-30-preview"
+
+# Assumption: the management API requires an Authorization header; supply a valid ARM access token
+# (for example, from `az account get-access-token --query accessToken -o tsv`). The same header is reused below.
+$headers = @{ Authorization = "Bearer <your-access-token>" }
+
+Invoke-RestMethod `
+    -Headers $headers `
+    -URI $listConfigurationProfilesURI
+```
+
+Here are the results, edited for brevity.
+
+```json
+ {
+ "id": "/subscriptions/yourSubscription/resourceGroups/yourResourceGroup/providers/Microsoft.Automanage/configurationProfiles/testProfile1",
+ "name": "testProfile1",
+ "type": "Microsoft.Automanage/configurationProfiles",
+ "location": "westus",
+ "properties": {
+ "configuration": {
+ "Antimalware/Enable": false,
+ "Backup/Enable": true,
+ "Backup/PolicyName": "dailyBackupPolicy",
+ }
+ }
+ },
+ {
+ "id": "/subscriptions/yourSubscription/resourceGroups/yourResourceGroup/providers/Microsoft.Automanage/configurationProfiles/testProfile2",
+ "name": "testProfile2",
+ "type": "Microsoft.Automanage/configurationProfiles",
+ "location": "eastus2euap",
+ "properties": {
+ "configuration": {
+ "Antimalware/Enable": false,
+ "Backup/Enable": true,
+ "Backup/PolicyName": "dailyBackupPolicy",
+ }
+ }
+ }
+```
+
+The next step is to do another `GET`, this time to retrieve the specific profile we would like to create in a new region. For this example, we'll retrieve 'testProfile1'. We'll perform a `GET` against the `id` value for the desired profile.
+
+```azurepowershell-interactive
+$profileId = "https://management.azure.com/subscriptions/yourSubscription/resourceGroups/yourResourceGroup/providers/Microsoft.Automanage/configurationProfiles/testProfile1?api-version=2021-04-30-preview"
+
+# Retrieve the specific profile by its ID, reusing the headers from the earlier call.
+$profile = Invoke-RestMethod `
+    -Headers $headers `
+    -URI $profileId
+```
+
+## Adjusting the location
+
+Creating the profile in a new location is as simple as changing the `Location` property to our desired Azure Region.
+
+We also need to create a new name for this profile. Let's change the name of the configuration profile to `profileUk`. We should update the `Name` property within the profile, and also in the URL, assigning the results back. We can use the `-replace` operator to make this simple.
+
+```powershell
+$profile.Location = "westeurope"
+$profile.Name = $profile.Name -replace "testProfile1", "profileUk"
+$profileId = $profileId -replace "testProfile1", "profileUk"
+```
+
+Now that we've changed the location and name values, this updated configuration profile will be created in West Europe when we submit it.
+
+## Creating the new profile in the desired location
+
+All that remains now is to `PUT` this new profile, using `Invoke-RestMethod` once more.
+
+```powershell
+$profile = Invoke-RestMethod `
+    -Method PUT `
+    -Headers $headers `
+    -URI $profileId `
+    -ContentType "application/json" `
+    -Body ($profile | ConvertTo-Json -Depth 10)
+```
+
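+To confirm that the new profile landed in the target region, you can list the configuration profiles again. The following is a minimal sketch; it reuses the `$headers` and `$listConfigurationProfilesURI` values from earlier and assumes the list response wraps the profiles in a `value` array, as ARM list operations typically do.
+
+```azurepowershell-interactive
+# Optional check: confirm the new profile reports the target region.
+$allProfiles = Invoke-RestMethod -Headers $headers -URI $listConfigurationProfilesURI
+$allProfiles.value | Where-Object { $_.name -eq "profileUk" } | Select-Object name, location
+```
+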
+## Enable Automanage on your VMs
+For details on how to move your VMs, see this [article](../resource-mover/tutorial-move-region-virtual-machines.md).
+
+Once you've moved your profile to a new region, you may use it as a custom profile for any VM. Details are available [here](./automanage-virtual-machines.md#enabling-automanage-for-vms-in-azure-portal).
+
+## Next steps
+* [Learn more about Azure Automanage](./automanage-virtual-machines.md)
+* [View frequently asked questions about Azure Automanage](./faq.yml)
availability-zones Az Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-region.md
In the Product Catalog, always-available services are listed as "non-regional" s
| **Products** | **Resiliency** | | | |
-| [Azure Application Gateway (V2)](../application-gateway/application-gateway-autoscaling-zone-redundant.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| [Azure Application Gateway (V2)](../application-gateway/application-gateway-autoscaling-zone-redundant.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
| [Azure Backup](../backup/backup-create-rs-vault.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) | | [Azure Cosmos DB](../cosmos-db/high-availability.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure DNS: Azure DNS Private Zones](../dns/private-dns-getstarted-portal.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
You can access Azure availability zones by using your Azure subscription. To lea
- [Building solutions for high availability using availability zones](/azure/architecture/high-availability/building-solutions-for-high-availability) - [High availability with Azure services](/azure/architecture/framework/resiliency/overview)-- [Design patterns for high availability](/azure/architecture/framework/resiliency/app-design)
+- [Design patterns for high availability](/azure/architecture/framework/resiliency/app-design)
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
For complete release version information, see [Version log](version-log.md).
- Set `--readable-secondaries` to any value between 0 and the number of replicas minus 1. - `--readable-secondaries` only applies to Business Critical tier. - Automatic backups are taken on the primary instance in a Business Critical service tier when there are multiple replicas. When a failover happens, backups move to the new primary. -- RWX capable storage class is required for backups, for both General Purpose and Business Critical service tiers.
+- [ReadWriteMany (RWX) capable storage class](/azure/aks/concepts-storage#azure-disks) is required for backups, for both General Purpose and Business Critical service tiers. Specifying a non-ReadWriteMany storage class will cause the SQL Managed Instance to be stuck in "Pending" status during deployment.
- Billing support when using multiple read replicas. For additional information about service tiers, see [High Availability with Azure Arc-enabled SQL Managed Instance (preview)](managed-instance-high-availability.md).
azure-arc Ssh Arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/ssh-arc-overview.md
SSH access to Arc-enabled servers is currently supported in the following region
### Supported operating systems - Windows: Windows 7+ and Windows Server 2012+ - Linux:
- - CentOS: CentOS 7, CentOS 8
- - RedHat Enterprise Linux (RHEL): RHEL 7.4 to RHEL 7.10, RHEL 8.3+
- - SUSE Linux Enterprise Server (SLES): SLES 12, SLES 15.1+
- - Ubuntu Server: Ubuntu Server 16.04 to Ubuntu Server 20.04
+ - CentOS: CentOS 7, CentOS 8
+ - RedHat Enterprise Linux (RHEL): RHEL 7.4 to RHEL 7.10, RHEL 8.3+
+ - SUSE Linux Enterprise Server (SLES): SLES 12, SLES 15.1+
+ - Ubuntu Server: Ubuntu Server 16.04 to Ubuntu Server 20.04
## Getting started ### Register the HybridConnectivity resource provider
To add access to SSH connections, run the following:
> If you are using a non-default port for your SSH connection, replace port 22 with your desired port in the previous command. ## Examples
-To view examples of using the ```az ssh vm``` command, view the az CLI documentation page for [az ssh](/cli/azure/ssh).
+To view examples of using the ```az ssh vm``` command, view the az CLI documentation page for [az ssh](/cli/azure/ssh).
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
If you don't have a reference on Application Insights SDK yet:
Get an instance of `TelemetryClient` (except in JavaScript in webpages):
-For [ASP.NET Core](asp-net-core.md#how-can-i-track-telemetry-thats-not-automatically-collected) apps and [Non HTTP/Worker for .NET/.NET Core](worker-service.md#how-can-i-track-telemetry-thats-not-automatically-collected) apps, it is recommended to get an instance of `TelemetryClient` from the dependency injection container as explained in their respective documentation.
+For [ASP.NET Core](asp-net-core.md) apps and [Non HTTP/Worker for .NET/.NET Core](worker-service.md#how-can-i-track-telemetry-thats-not-automatically-collected) apps, it is recommended to get an instance of `TelemetryClient` from the dependency injection container as explained in their respective documentation.
If you use AzureFunctions v2+ or Azure WebJobs v3+ - follow [this document](../../azure-functions/functions-monitoring.md).
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
Run your application and make requests to it. Telemetry should now flow to Appli
### ILogger logs
-The default configuration collects `ILogger` `Warning` logs and more severe logs. You can [customize this configuration](#how-do-i-customize-ilogger-logs-collection).
+The default configuration collects `ILogger` `Warning` logs and more severe logs. Review the FAQ to [customize this configuration](../faq.yml).
### Dependencies
If you want to disable telemetry conditionally and dynamically, you can resolve
The preceding code sample prevents the sending of telemetry to Application Insights. It doesn't prevent any automatic collection modules from collecting telemetry. If you want to remove a particular auto collection module, see [remove the telemetry module](#configuring-or-removing-default-telemetrymodules).
-## Frequently asked questions
-
-### Does Application Insights support ASP.NET Core 3.X?
-
-Yes. Update to [Application Insights SDK for ASP.NET Core](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) version 2.8.0 or later. Earlier versions of the SDK don't support ASP.NET Core 3.X.
-
-Also, if you're [enabling server-side telemetry based on Visual Studio](#enable-application-insights-server-side-telemetry-visual-studio), update to the latest version of Visual Studio 2019 (16.3.0) to onboard. Earlier versions of Visual Studio don't support automatic onboarding for ASP.NET Core 3.X apps.
-
-### How can I track telemetry that's not automatically collected?
-
-Get an instance of `TelemetryClient` by using constructor injection, and call the required `TrackXXX()` method on it. We don't recommend creating new `TelemetryClient` or `TelemetryConfiguration` instances in an ASP.NET Core application. A singleton instance of `TelemetryClient` is already registered in the `DependencyInjection` container, which shares `TelemetryConfiguration` with rest of the telemetry. Creating a new `TelemetryClient` instance is recommended only if it needs a configuration that's separate from the rest of the telemetry.
-
-The following example shows how to track more telemetry from a controller.
-
-```csharp
-using Microsoft.ApplicationInsights;
-
-public class HomeController : Controller
-{
- private TelemetryClient telemetry;
-
- // Use constructor injection to get a TelemetryClient instance.
- public HomeController(TelemetryClient telemetry)
- {
- this.telemetry = telemetry;
- }
-
- public IActionResult Index()
- {
- // Call the required TrackXXX method.
- this.telemetry.TrackEvent("HomePageRequested");
- return View();
- }
-```
-
-For more information about custom data reporting in Application Insights, see [Application Insights custom metrics API reference](./api-custom-events-metrics.md). A similar approach can be used for sending custom metrics to Application Insights using the [GetMetric API](./get-metric.md).
-
-### How do I customize ILogger logs collection?
-
-By default, only `Warning` logs and more severe logs are automatically captured. To change this behavior, explicitly override the logging configuration for the provider `ApplicationInsights` as shown below.
-The following configuration allows ApplicationInsights to capture all `Information` logs and more severe logs.
-
-```json
-{
- "Logging": {
- "LogLevel": {
- "Default": "Warning"
- },
- "ApplicationInsights": {
- "LogLevel": {
- "Default": "Information"
- }
- }
- }
-}
-```
-
-It's important to note that the following example doesn't cause the ApplicationInsights provider to capture `Information` logs. It doesn't capture it because the SDK adds a default logging filter that instructs `ApplicationInsights` to capture only `Warning` logs and more severe logs. ApplicationInsights requires an explicit override.
-
-```json
-{
- "Logging": {
- "LogLevel": {
- "Default": "Information"
- }
- }
-}
-```
-
-For more information, see [ILogger configuration](ilogger.md#logging-level).
-
-### Some Visual Studio templates used the UseApplicationInsights() extension method on IWebHostBuilder to enable Application Insights. Is this usage still valid?
-
-The extension method `UseApplicationInsights()` is still supported, but it's marked as obsolete in Application Insights SDK version 2.8.0 and later. It will be removed in the next major version of the SDK. To enable Application Insights telemetry, we recommend using `AddApplicationInsightsTelemetry()` because it provides overloads to control some configuration. Also, in ASP.NET Core 3.X apps, `services.AddApplicationInsightsTelemetry()` is the only way to enable Application Insights.
-
-### I'm deploying my ASP.NET Core application to Web Apps. Should I still enable the Application Insights extension from Web Apps?
-
-If the SDK is installed at build time as shown in this article, you don't need to enable the [Application Insights extension](./azure-web-apps.md) from the App Service portal. If the extension is installed, it will back off when it detects the SDK is already added. If you enable Application Insights from the extension, you don't have to install and update the SDK. But if you enable Application Insights by following instructions in this article, you have more flexibility because:
-
- * Application Insights telemetry will continue to work in:
- * All operating systems, including Windows, Linux, and Mac.
- * All publish modes, including self-contained or framework dependent.
- * All target frameworks, including the full .NET Framework.
- * All hosting options, including Web Apps, VMs, Linux, containers, Azure Kubernetes Service, and non-Azure hosting.
- * All .NET Core versions including preview versions.
- * You can see telemetry locally when you're debugging from Visual Studio.
- * You can track more custom telemetry by using the `TrackXXX()` API.
- * You have full control over the configuration.
-
-### Can I enable Application Insights monitoring by using tools like Azure Monitor Application Insights Agent (formerly Status Monitor v2)?
-
- Yes. In [Application Insights Agent 2.0.0-beta1](https://www.powershellgallery.com/packages/Az.ApplicationMonitor/2.0.0-beta1) and later, ASP.NET Core applications hosted in IIS are supported.
-
-### Are all features supported if I run my application in Linux?
-
-Yes. Feature support for the SDK is the same in all platforms, with the following exceptions:
-
-* The SDK collects [Event Counters](./eventcounters.md) on Linux because [Performance Counters](./performance-counters.md) are only supported in Windows. Most metrics are the same.
-* Although `ServerTelemetryChannel` is enabled by default, if the application is running in Linux or macOS, the channel doesn't automatically create a local storage folder to keep telemetry temporarily if there are network issues. Because of this limitation, telemetry is lost when there are temporary network or server issues. To work around this issue, configure a local folder for the channel:
-
-```csharp
-using Microsoft.ApplicationInsights.Channel;
-using Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel;
-
- public void ConfigureServices(IServiceCollection services)
- {
- // The following will configure the channel to use the given folder to temporarily
- // store telemetry items during network or Application Insights server issues.
- // User should ensure that the given folder already exists
- // and that the application has read/write permissions.
- services.AddSingleton(typeof(ITelemetryChannel),
- new ServerTelemetryChannel () {StorageFolder = "/tmp/myfolder"});
- services.AddApplicationInsightsTelemetry();
- }
-```
-
-This limitation isn't applicable from version [2.15.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore/2.15.0) and later.
-
-### Is this SDK supported for the new .NET Core 3.X Worker Service template applications?
-
-This SDK requires `HttpContext`; therefore, it doesn't work in any non-HTTP applications, including the .NET Core 3.X Worker Service applications. To enable Application Insights in such applications using the newly released Microsoft.ApplicationInsights.WorkerService SDK, see [Application Insights for Worker Service applications (non-HTTP applications)](worker-service.md).
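As a rough sketch (assuming the `Microsoft.ApplicationInsights.WorkerService` package is installed and a hypothetical `MyWorker` background service exists), enabling the SDK in a worker service looks like this:

```csharp
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public static async Task Main(string[] args)
{
    var host = Host.CreateDefaultBuilder(args)
        .ConfigureServices(services =>
        {
            // Registers TelemetryClient and the worker-service telemetry pipeline.
            services.AddApplicationInsightsTelemetryWorkerService();

            // MyWorker is your own BackgroundService (hypothetical here).
            services.AddHostedService<MyWorker>();
        })
        .Build();

    await host.RunAsync();
}
```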
- ## Open-source SDK * [Read and contribute to the code](https://github.com/microsoft/ApplicationInsights-dotnet).
azure-monitor Asp Net Troubleshoot No Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-troubleshoot-no-data.md
Last updated 05/21/2020
* SDK channel keeps telemetry in buffer, and sends them in batches. If the application is shutting down, you might need to explicitly call [Flush()](api-custom-events-metrics.md#flushing-data). Behavior of `Flush()` depends on the actual [channel](telemetry-channels.md#built-in-telemetry-channels) used.
+* Per [.NET Core/.NET Framework Console application](worker-service.md#net-corenet-framework-console-application), explicitly calling Flush() followed by sleep is required in Console Apps.
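For example, a console-app shutdown sequence might look like the following sketch, where `telemetryClient` is an already configured `TelemetryClient` instance:

```csharp
// Flush buffered telemetry and give the channel time to transmit before the process exits.
telemetryClient.TrackTrace("Work finished, shutting down.");
telemetryClient.Flush();

// Flush() on the default in-memory channel doesn't block until delivery,
// so a short delay before exit avoids losing the last items.
System.Threading.Tasks.Task.Delay(5000).Wait();
```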
+ ## Request count collected by Application Insights SDK doesn't match the IIS log count for my application
-Internet Information Services (IIS) logs counts of all request reaching IIS and inherently could differ from the total request reaching an application. Due to this, it isn't guaranteed that the request count collected by the SDKs will match the total IIS log count.
+Internet Information Services (IIS) logs count all requests reaching IIS, which inherently could differ from the total requests reaching an application. Due to this behavior, it isn't guaranteed that the request count collected by the SDKs will match the total IIS log count.
## No data from my server * I installed my app on my web server, and now I don't see any telemetry from it. It worked OK on my dev machine.*
Performance data (CPU, IO rate, and so on) is available for [Java web services](
* Have you hit your monthly quota of data points? Open the Settings/Quota and Pricing to find out. If so, you can upgrade your plan, or pay for more capacity. See the [pricing scheme](https://azure.microsoft.com/pricing/details/application-insights/). ## I don't see all the data I'm expecting
-If your application sends a lot of data and you're using the Application Insights SDK for ASP.NET version 2.0.0-beta3 or later, the [adaptive sampling](./sampling.md) feature may operate and send only a percentage of your telemetry.
+If your application sends considerable data and you're using the Application Insights SDK for ASP.NET version 2.0.0-beta3 or later, the [adaptive sampling](./sampling.md) feature may operate and send only a percentage of your telemetry.
-You can disable it, but doing so is not recommended. Sampling is designed so that related telemetry is correctly transmitted, for diagnostic purposes.
+You can disable it, but doing so isn't recommended. Sampling is designed so that related telemetry is correctly transmitted, for diagnostic purposes.
## Client IP address is 0.0.0.0
-On February 5 2018, we announced that we removed logging of the Client IP address. This doesn't affect Geo Location.
+On February 5, 2018, we announced that we removed logging of the Client IP address. This change doesn't affect Geo Location.
> [!NOTE] > If you need the first 3 octets of the IP address, you can use a [telemetry initializer](./api-filtering-sampling.md#addmodify-properties-itelemetryinitializer) to add a custom attribute.
Follow these instructions to capture troubleshooting logs for your framework.
> [!NOTE] > Starting in version 2.14, the [Microsoft.AspNet.ApplicationInsights.HostingStartup](https://www.nuget.org/packages/Microsoft.AspNet.ApplicationInsights.HostingStartup) package is no longer necessary, SDK logs are now collected with the [Microsoft.ApplicationInsights](https://www.nuget.org/packages/Microsoft.ApplicationInsights/) package. No additional package is required.
-1. Modify your applicationinsights.config file to include the following:
+1. Modify your applicationinsights.config file to include the following XML:
```xml <TelemetryModules>
For more information,
## Collect logs with dotnet-trace
-Alternatively, customers can also use a cross-platform .NET Core tool, [`dotnet-trace`](/dotnet/core/diagnostics/dotnet-trace) for collecting logs that can further help in troubleshooting. This may be helpful for linux-based environments.
+Alternatively, customers can use the cross-platform .NET Core tool [`dotnet-trace`](/dotnet/core/diagnostics/dotnet-trace) to collect logs that can further help in troubleshooting. This tool may be helpful for Linux-based environments.
After installation of [`dotnet-trace`](/dotnet/core/diagnostics/dotnet-trace), execute the command below in bash.
azure-monitor Configuration With Applicationinsights Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/configuration-with-applicationinsights-config.md
# Configuring the Application Insights SDK with ApplicationInsights.config or .xml
-The Application Insights .NET SDK consists of a number of NuGet packages. The
+The Application Insights .NET SDK consists of many NuGet packages. The
[core package](https://www.nuget.org/packages/Microsoft.ApplicationInsights) provides the API for sending telemetry to
-the Application Insights. [Additional packages](https://www.nuget.org/packages?q=Microsoft.ApplicationInsights) provide
+the Application Insights service. [More packages](https://www.nuget.org/packages?q=Microsoft.ApplicationInsights) provide
telemetry *modules* and *initializers* for automatically tracking telemetry from your application and its context. By adjusting the configuration file, you can enable or disable Telemetry Modules and initializers, and set parameters for some of them.
-The configuration file is named `ApplicationInsights.config` or `ApplicationInsights.xml`, depending on the type of your application. It is automatically added to your project when you [install most versions of the SDK][start]. By default, when using the automated experience from the Visual Studio template projects that support **Add > Application Insights Telemetry**, the ApplicationInsights.config file is created in the project root folder and when compiled is copied to the bin folder. It is also added to a web app by [Status Monitor on an IIS server][redfield]. The configuration file is ignored if [extension for Azure website](azure-web-apps.md) or [extension for Azure VM and virtual machine scale set](azure-vm-vmss-apps.md) is used.
+The configuration file is named `ApplicationInsights.config` or `ApplicationInsights.xml`, depending on the type of your application. It's automatically added to your project when you [install most versions of the SDK][start]. By default, when using the automated experience from the Visual Studio template projects that support **Add > Application Insights Telemetry**, the ApplicationInsights.config file is created in the project root folder and when compiled is copied to the bin folder. It's also added to a web app by [Status Monitor on an IIS server][redfield]. The configuration file is ignored if [extension for Azure website](azure-web-apps.md) or [extension for Azure VM and virtual machine scale set](azure-vm-vmss-apps.md) is used.
There isn't an equivalent file to control the [SDK in a web page][client].
This document describes the sections you see in the configuration file, how they
> [!NOTE] > ApplicationInsights.config and .xml instructions do not apply to the .NET Core SDK. For configuring .NET Core applications, follow [this](./asp-net-core.md) guide. -- ## Telemetry Modules (ASP.NET) Each Telemetry Module collects a specific type of data and uses the core API to send the data. The modules are installed by different NuGet packages, which also add the required lines to the .config file.
Dependencies can be auto-collected without modifying your code by using agent-ba
### Application Insights Diagnostics Telemetry The `DiagnosticsTelemetryModule` reports errors in the Application Insights instrumentation code itself. For example,
-if the code cannot access performance counters or if an `ITelemetryInitializer` throws an exception. Trace telemetry
+if the code can't access performance counters or if an `ITelemetryInitializer` throws an exception. Trace telemetry
tracked by this module appears in the [Diagnostic Search][diagnostic]. ```
tracked by this module appears in the [Diagnostic Search][diagnostic].
``` ### Developer Mode
-`DeveloperModeWithDebuggerAttachedTelemetryModule` forces the Application Insights `TelemetryChannel` to send data immediately, one telemetry item at a time, when a debugger is attached to the application process. This reduces the amount of time between the moment when your application tracks telemetry and when it appears on the Application Insights portal. It causes significant overhead in CPU and network bandwidth.
+`DeveloperModeWithDebuggerAttachedTelemetryModule` forces the Application Insights `TelemetryChannel` to send data immediately, one telemetry item at a time, when a debugger is attached to the application process. This design reduces the amount of time between the moment when your application tracks telemetry and when it appears on the Application Insights portal. It causes significant overhead in CPU and network bandwidth.
* `Microsoft.ApplicationInsights.WindowsServer.DeveloperModeWithDebuggerAttachedTelemetryModule` * [Application Insights Windows Server](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WindowsServer/) NuGet package
Reports the [response time and result code](../../azure-monitor/app/asp-net.md)
* [Microsoft.ApplicationInsights.EtwCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.EtwCollector) ### Microsoft.ApplicationInsights
-The Microsoft.ApplicationInsights package provides the [core API](/dotnet/api/microsoft.applicationinsights) of the SDK. The other Telemetry Modules use this, and you can also [use it to define your own telemetry](./api-custom-events-metrics.md).
+The Microsoft.ApplicationInsights package provides the [core API](/dotnet/api/microsoft.applicationinsights) of the SDK. The other Telemetry Modules use this API, and you can also [use it to define your own telemetry](./api-custom-events-metrics.md).
* No entry in ApplicationInsights.config. * [Microsoft.ApplicationInsights](https://www.nuget.org/packages/Microsoft.ApplicationInsights) NuGet package. If you just install this NuGet, no .config file is generated.
The standard initializers are all set either by the Web or WindowsServer NuGet p
* `OperationNameTelemetryInitializer` updates the `Name` property of the `RequestTelemetry` and the `Name` property of the `Operation` context of all telemetry items based on the HTTP method, as well as names of ASP.NET MVC controller and action invoked to process the request. * `OperationIdTelemetryInitializer` or `OperationCorrelationTelemetryInitializer` updates the `Operation.Id` context property of all telemetry items tracked while handling a request with the automatically generated `RequestTelemetry.Id`. * `SessionTelemetryInitializer` updates the `Id` property of the `Session` context for all telemetry items with value extracted from the `ai_session` cookie generated by the ApplicationInsights JavaScript instrumentation code running in the user's browser.
-* `SyntheticTelemetryInitializer` or `SyntheticUserAgentTelemetryInitializer` updates the `User`, `Session`, and `Operation` contexts properties of all telemetry items tracked when handling a request from a synthetic source, such as an availability test or search engine bot. By default, [Metrics Explorer](../essentials/metrics-charts.md) does not display synthetic telemetry.
+* `SyntheticTelemetryInitializer` or `SyntheticUserAgentTelemetryInitializer` updates the `User`, `Session`, and `Operation` contexts properties of all telemetry items tracked when handling a request from a synthetic source, such as an availability test or search engine bot. By default, [Metrics Explorer](../essentials/metrics-charts.md) doesn't display synthetic telemetry.
The `<Filters>` set identifying properties of the requests. * `UserTelemetryInitializer` updates the `Id` and `AcquisitionDate` properties of `User` context for all telemetry items with values extracted from the `ai_user` cookie generated by the Application Insights JavaScript instrumentation code running in the user's browser.
The standard initializers are all set either by the Web or WindowsServer NuGet p
For .NET applications running in Service Fabric, you can include the `Microsoft.ApplicationInsights.ServiceFabric` NuGet package. This package includes a `FabricTelemetryInitializer`, which adds Service Fabric properties to telemetry items. For more information, see the [GitHub page](https://github.com/Microsoft/ApplicationInsights-ServiceFabric/blob/master/README.md) about the properties added by this NuGet package. ## Telemetry Processors (ASP.NET)
-Telemetry Processors can filter and modify each telemetry item just before it is sent from the SDK to the portal.
+Telemetry Processors can filter and modify each telemetry item just before it's sent from the SDK to the portal.
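For illustration, a custom processor of the kind described in the "write your own Telemetry Processors" link below might look like this sketch (the filter condition is arbitrary):

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

public class FastDependencyFilter : ITelemetryProcessor
{
    private readonly ITelemetryProcessor _next;

    public FastDependencyFilter(ITelemetryProcessor next) => _next = next;

    public void Process(ITelemetry item)
    {
        // Drop very fast dependency calls; pass everything else along the chain.
        if (item is DependencyTelemetry dependency && dependency.Duration.TotalMilliseconds < 10)
        {
            return;
        }

        _next.Process(item);
    }
}
```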
You can [write your own Telemetry Processors](./api-filtering-sampling.md#filtering). #### Adaptive sampling Telemetry Processor (from 2.0.0-beta3)
-This is enabled by default. If your app sends a lot of telemetry, this processor removes some of it.
+This functionality is enabled by default. If your app sends considerable telemetry, this processor removes some of it.
```xml
The parameter provides the target that the algorithm tries to achieve. Each inst
[Learn more about sampling](./sampling.md). #### Fixed-rate sampling Telemetry Processor (from 2.0.0-beta1)
-There is also a standard [sampling Telemetry Processor](./api-filtering-sampling.md) (from 2.0.1):
+There's also a standard [sampling Telemetry Processor](./api-filtering-sampling.md) (from 2.0.1):
```xml
There is also a standard [sampling Telemetry Processor](./api-filtering-sampling
```
+## ConnectionString
+
+[Connection string code samples](sdk-connection-string.md#code-samples)
+ ## InstrumentationKey
-This determines the Application Insights resource in which your data appears. Typically you create a separate resource, with a separate key, for each of your applications.
++
+This setting determines the Application Insights resource in which your data appears. Typically you create a separate resource, with a separate key, for each of your applications.
If you want to set the key dynamically - for example if you want to send results from your application to different resources - you can omit the key from the configuration file, and set it in code instead.
-To set the key for all instances of TelemetryClient, including standard Telemetry Modules. Do this in an initialization method, such as global.aspx.cs in an ASP.NET service:
+To set the key for all instances of TelemetryClient, including standard Telemetry Modules, do this step in an initialization method, such as global.aspx.cs in an ASP.NET service:
```csharp using Microsoft.ApplicationInsights.Extensibility;
To get a new key, [create a new resource in the Application Insights portal][new
_Available starting in v2.6.0_
-The purpose of this provider is to lookup an Application ID based on an Instrumentation Key. The Application ID is included in RequestTelemetry and DependencyTelemetry and used to determine Correlation in the Portal.
+The purpose of this provider is to look up an Application ID based on an Instrumentation Key. The Application ID is included in RequestTelemetry and DependencyTelemetry and used to determine Correlation in the Portal.
-This is available by setting `TelemetryConfiguration.ApplicationIdProvider` either in code or in config.
+This functionality is available by setting `TelemetryConfiguration.ApplicationIdProvider` either in code or in config.
### Interface: IApplicationIdProvider
We provide two implementations in the [Microsoft.ApplicationInsights](https://ww
### ApplicationInsightsApplicationIdProvider
-This is a wrapper around our Profile API. It will throttle requests and cache results.
+This wrapper is for our Profile API. It will throttle requests and cache results.
This provider is added to your config file when you install either [Microsoft.ApplicationInsights.DependencyCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.DependencyCollector) or [Microsoft.ApplicationInsights.Web](https://www.nuget.org/packages/Microsoft.ApplicationInsights.Web/) This class has an optional property `ProfileQueryEndpoint`.
-By default this is set to `https://dc.services.visualstudio.com/api/profiles/{0}/appId`.
-If you need to configure a proxy for this configuration, we recommend proxying the base address and including "/api/profiles/{0}/appId". Note that '{0}' is substituted at runtime per request with the Instrumentation Key.
+By default it's set to `https://dc.services.visualstudio.com/api/profiles/{0}/appId`.
+If you need to configure a proxy for this configuration, we recommend proxying the base address and including "/api/profiles/{0}/appId". The '{0}' placeholder is substituted at runtime per request with the Instrumentation Key.
#### Example Configuration via ApplicationInsights.config: ```xml
TelemetryConfiguration.Active.ApplicationIdProvider = new ApplicationInsightsApp
### DictionaryApplicationIdProvider
-This is a static provider, which will rely on your configured Instrumentation Key / Application ID pairs.
+This static provider relies on your configured Instrumentation Key / Application ID pairs.
This class has a property `Defined`, which is a Dictionary<string,string> of Instrumentation Key to Application ID pairs.
-This class has an optional property `Next` which can be used to configure another provider to use when an Instrumentation Key is requested that does not exist in your configuration.
+This class has an optional property `Next`, which can be used to configure another provider to use when an Instrumentation Key is requested that doesn't exist in your configuration.
#### Example Configuration via ApplicationInsights.config: ```xml
TelemetryConfiguration.Active.ApplicationIdProvider = new DictionaryApplicationI
} }; ```
+## Configure snapshot collection for ASP.NET applications
+
+[Configure snapshot collection for ASP.NET applications](snapshot-debugger-vm.md#configure-snapshot-collection-for-aspnet-applications)
## Next steps [Learn more about the API][api].
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Once the migration is complete, you can use [diagnostic settings](../essentials/
> - If the retention setting for your Application Insights instance under **Configure** > **Usage and estimated costs** > **Data Retention** is enabled, then use that setting to control the retention days for the telemetry data still saved in your classic resource's storage. - Understand [Workspace-based Application Insights](../logs/cost-logs.md#application-insights-billing) usage and costs.-- Understand [Workspace-based resource changes](#workspace-based-resource-changes). ## Migrate your resource
azure-monitor Ilogger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ilogger.md
public class MyController : ApiController
``` > [!NOTE]
-> If you use the `Microsoft.ApplicationInsights.AspNetCore` package to enable Application Insights, modify this code to get `TelemetryClient` directly in the constructor. For an example, see [this FAQ](./asp-net-core.md#frequently-asked-questions).
+> If you use the `Microsoft.ApplicationInsights.AspNetCore` package to enable Application Insights, modify this code to get `TelemetryClient` directly in the constructor. For an example, see [this FAQ](../faq.yml).
### What Application Insights telemetry type is produced from ILogger logs? Where can I see ILogger logs in Application Insights?
azure-monitor Change Analysis Custom Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-custom-filters.md
+
+ Title: Navigate to a change using custom filters in Change Analysis
+description: Learn how to navigate to a change in your service using custom filters in Azure Monitor's Change Analysis.
+++
+ms.contributor: cawa
+ Last updated : 05/09/2022++++
+# Navigate to a change using custom filters in Change Analysis
+
+Browsing through a long list of changes in the entire subscription is time-consuming. With Change Analysis custom filters and search capability, you can efficiently navigate to changes relevant to the issues you're troubleshooting.
+
+## Custom filters and search bar
++
+### Filters
+
+| Filter | Description |
+| | -- |
+| Subscription | This filter is in-sync with the Azure portal subscription selector. It supports multiple-subscription selection. |
+| Time range | Specifies how far back the UI displays changes, up to 14 days. By default, it's set to the past 24 hours. |
+| Resource group | Select the resource group to scope the changes. By default, all resource groups are selected. |
+| Change level | Controls which levels of changes to display. Levels include: important, normal, and noisy. <ul><li>Important: related to availability and security</li><li>Noisy: Read-only properties that are unlikely to cause any issues</li></ul> By default, important and normal levels are checked. |
+| Resource | Select **Add filter** to use this filter. </br> Filter the changes to specific resources. Helpful if you already know which resources to look at for changes. |
+| Resource type | Select **Add filter** to use this filter. </br> Filter the changes to specific resource types. |
+
+### Search bar
+
+The search bar filters the changes according to the input keywords. Search bar results apply only to the changes already loaded by the page and don't pull in results from the server side.
+
+## Next steps
+- Use [Change Analysis with the Az.ChangeAnalysis PowerShell module](./change-analysis-powershell.md) to determine changes made to resources in your Azure subscription.
+- [Troubleshoot Change Analysis](./change-analysis-troubleshoot.md).
azure-monitor Log Analytics Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-tutorial.md
This is the simplest query that we can write. It just returns all the records in
You can see that we do have results. The number of records that the query has returned appears in the lower-right corner.
-## Filter query results
-
-Let's add a filter to the query to reduce the number of records that are returned. Select the **Filter** tab on the left pane. This tab shows columns in the query results that you can use to filter the results. The top values in those columns are displayed with the number of records that have that value. Select **200** under **ResultCode**, and then select **Apply & Run**.
--
-A **where** statement is added to the query with the value that you selected. The results now include only records with that value, so you can see that the record count is reduced.
--- ### Time range All queries return records generated within a set time range. By default, the query returns records generated in the last 24 hours.
azure-monitor Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/restore.md
Last updated 01/19/2022
The restore operation makes a specific time range of data in a table available in the hot cache for high-performance queries. This article describes how to restore data, query that data, and then dismiss the data when you're done. ## When to restore logs
-Use the restore operation to query data in [Archived Logs](data-retention-archive.md). You can also use the restore operation to run powerful queries within a specific time range on any Analytics table when the log queries you run on the source table cannot complete within the log query timeout of 10 minutes.
+Use the restore operation to query data in [Archived Logs](data-retention-archive.md). You can also use the restore operation to run powerful queries within a specific time range on any Analytics table when the log queries you run on the source table can't complete within the log query timeout of 10 minutes.
> [!NOTE] > Restore is one method for accessing archived data. Use restore to run queries against a set of data within a particular time range. Use [Search jobs](search-jobs.md) to access data based on specific criteria.
When you restore data, you specify the source table that contains the data you w
The restore operation creates the restore table and allocates additional compute resources for querying the restored data using high-performance queries that support full KQL.
-The destination table provides a view of the underlying source data, but does not affect it in any way. The table has no retention setting, and you must explicitly [dismiss the restored data](#dismiss-restored-data) when you no longer need it.
+The destination table provides a view of the underlying source data, but doesn't affect it in any way. The table has no retention setting, and you must explicitly [dismiss the restored data](#dismiss-restored-data) when you no longer need it.
## Restore data
az monitor log-analytics workspace table restore create --subscription ContosoSI
``` +
+## Query restored data
+
+Restored logs retain their original timestamps. When you run a query on restored logs, set the query time range based on when the data was originally generated.
+
+Set the query time range by either:
+
+- Selecting **Custom** in the **Time range** dropdown at the top of the query editor and setting **From** and **To** values.<br>
+ or
+- Specifying the time range in the query. For example:
+
+ ```kusto
+ let startTime =datetime(01/01/2022 8:00:00 PM);
+ let endTime =datetime(01/05/2022 8:00:00 PM);
+ TableName_RST
+ | where TimeGenerated between(startTime .. endTime)
+ ```
+ ## Dismiss restored data To save costs, dismiss restored data when you no longer need it by deleting the restored table.
-Deleting the restored table does not delete the data in the source table.
+Deleting the restored table doesn't delete the data in the source table.
> [!NOTE] > Restored data is available as long as the underlying source data is available. When you delete the source table from the workspace or when the source table's retention period ends, the data is dismissed from the restored table. However, the empty table will remain if you do not delete it explicitly.
Restore is subject to the following limitations.
You can: - Restore data for a minimum of two days.-- Restore up to 60TB.
+- Restore up to 60 TB.
- Perform up to four restores per workspace per week. - Run up to two restore processes in a workspace concurrently. - Run only one active restore on a specific table at a given time. Executing a second restore on a table that already has an active restore will fail.
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
Title: "What's new in Azure Monitor documentation" description: "What's new in Azure Monitor documentation" Previously updated : 05/03/2022 Last updated : 04/04/2022 # What's new in Azure Monitor documentation This article lists significant changes to Azure Monitor documentation.
-## April, 2022
-
-### General
-
-**New articles**
--- [Monitoring Azure Monitor data reference](azure-monitor-monitoring-reference.md)-
-**Updated articles**
--- [Azure Monitor best practices - Analyze and visualize data](best-practices-analysis.md)-
-### Agents
-
-**New articles**
--- [Azure Monitor agent extension versions](agents/azure-monitor-agent-extension-versions.md)-- [Azure Monitor agent on Windows client devices (Preview)](agents/azure-monitor-agent-windows-client.md)-- [Collect text and IIS logs with Azure Monitor agent (preview)](agents/data-collection-text-log.md)--
-**Updated articles**
--- [Azure Monitor agent overview](agents/azure-monitor-agent-overview.md)-- [Collect text and IIS logs with Azure Monitor agent (preview)](agents/data-collection-text-log.md)-- [Overview of Azure Monitor agents](agents/agents-overview.md)-
-### Alerts
-
-**Updated articles**
--- [Alerts on activity log](alerts/activity-log-alerts.md)-- [Configure Azure to connect ITSM tools using Secure Webhook](alerts/itsm-connector-secure-webhook-connections-azure-configuration.md)-- [Connect Azure to ITSM tools by using IT Service Management Solution](alerts/itsmc-definition.md)-- [Connect Azure to ITSM tools by using Secure Webhook](alerts/it-service-management-connector-secure-webhook-connections.md)-- [Create a metric alert with a Resource Manager template](alerts/alerts-metric-create-templates.md)-- [Create, view, and manage log alerts using Azure Monitor](alerts/alerts-log.md)-- [IT Service Management (ITSM) Integration](alerts/itsmc-overview.md)-- [Log alerts in Azure Monitor](alerts/alerts-unified-log.md)-- [Manage alert instances with unified alerts](alerts/alerts-managing-alert-instances.md)-- [Troubleshoot problems in IT Service Management Connector](alerts/itsmc-troubleshoot-overview.md)-
-### Application Insights
-
-**New articles**
--- [PageView telemetry: Application Insights data model](app/data-model-pageview-telemetry.md)-- [Profile live Azure containers with Application Insights](app/profiler-containers.md)-
-**Updated articles**
--- [Angular plugin for Application Insights JavaScript SDK](app/javascript-angular-plugin.md)-- [Application Insights for web pages](app/javascript.md)-- [Configure Application Insights Profiler](app/profiler-settings.md)-- [Connection strings](app/sdk-connection-string.md)-- [Live Metrics Stream: Monitor & Diagnose with 1-second latency](app/live-stream.md)-- [Monitor your Node.js services and apps with Application Insights](app/nodejs.md)-- [Profile production applications in Azure with Application Insights](app/profiler-overview.md)-- [React Native plugin for Application Insights JavaScript SDK](app/javascript-react-native-plugin.md)-- [React plugin for Application Insights JavaScript SDK](app/javascript-react-plugin.md)-- [Tips for updating your JVM args - Azure Monitor Application Insights for Java](app/java-standalone-arguments.md)-- [Troubleshooting no data - Application Insights for .NET/.NET Core](app/asp-net-troubleshoot-no-data.md)-
-### Autoscale
-
-**Updated articles**
--- [Get started with Autoscale in Azure](autoscale/autoscale-get-started.md)-
-### Essentials
-
-**Updated articles**
--- [Supported categories for Azure Monitor resource logs](essentials/resource-logs-categories.md)-- [Supported metrics with Azure Monitor](essentials/metrics-supported.md)-
-### Insights
-
-**Updated articles**
--- [Monitor Surface Hubs with Azure Monitor to track their health](insights/surface-hubs.md)-
-### Logs
-
-**New articles**
--- [Collect and ingest data from a file using Data Collection Rules (DCR) (Preview)](logs/data-ingestion-from-file.md)-
-**Updated articles**
--- [Azure Monitor Logs pricing details](logs/cost-logs.md)-- [Log Analytics workspace data export in Azure Monitor](logs/logs-data-export.md)-- [Tutorial: Send custom logs to Azure Monitor Logs using Resource Manager templates (preview)](logs/tutorial-custom-logs-api.md)-
-### Visualizations
-
-**Updated articles**
--- [Monitor your Azure services in Grafana](visualize/grafana-plugin.md)- ## March, 2022 ### Agents
This article lists significant changes to Azure Monitor documentation.
- [Enable VM insights by using Azure Policy](vm/vminsights-enable-policy.md)
-### Visualizations
+## Visualizations
**Updated articles**
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
na Previously updated : 04/18/2022 Last updated : 04/29/2022 # Create and manage Active Directory connections for Azure NetApp Files
This setting is configured in the **Active Directory Connections** under **NetAp
This is the domain name of your Active Directory Domain Services that you want to join. * **AD Site Name** This is the site name that the domain controller discovery will be limited to. This should match the site name in Active Directory Sites and Services.
+
+ > [!IMPORTANT]
+ > Without an AD Site Name specified, the Azure NetApp Files service may attempt to authenticate with a domain controller beyond what your network topology allows, resulting in a service disruption. For more information, see [Understanding Active Directory Site Topology | Microsoft Docs](/windows-server/identity/ad-ds/plan/understanding-active-directory-site-topology).
+ * **SMB server (computer account) prefix** This is the naming prefix for the machine account in Active Directory that Azure NetApp Files will use for creation of new accounts.
azure-percept Azureeyemodule Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/azureeyemodule-overview.md
The Azure Percept Workload enables several features that end users can take adva
- A retraining loop for grabbing images from the device periodically, retraining the model in the cloud, and then pushing the newly trained model back down to the device. Using the device's ability to update and swap models on the fly. ## AI workload details
-The Workload application is open-sourced in the Azure Percept Advanced Development [github repository](https://github.com/microsoft/azure-percept-advanced-development/tree/main/azureeyemodule/app) and is made up of many small C++ modules, with some of the more important being:
+The Workload application is open-sourced in the Azure Percept Advanced Development [GitHub repository](https://github.com/microsoft/azure-percept-advanced-development/tree/main/azureeyemodule/app) and is made up of many small C++ modules, with some of the more important being:
- [main.cpp](https://github.com/microsoft/azure-percept-advanced-development/blob/main/azureeyemodule/app/main.cpp): Sets up everything and then runs the main loop. - [iot](https://github.com/microsoft/azure-percept-advanced-development/tree/main/azureeyemodule/app/iot): This folder contains modules that handle incoming and outgoing messages from the Azure IoT Edge Hub, and the twin update method. - [model](https://github.com/microsoft/azure-percept-advanced-development/tree/main/azureeyemodule/app/model): This folder contains modules for a class hierarchy of computer vision models.
azure-resource-manager Template Functions Object https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-object.md
Title: Template functions - objects description: Describes the functions to use in an Azure Resource Manager template (ARM template) for working with objects. Previously updated : 03/10/2022 Last updated : 05/09/2022 # Object functions for ARM templates
The output from the preceding example with the default values is:
| objectOutput | Object | {"one": "a", "three": "c"} | | arrayOutput | Array | ["two", "three"] |
+## items
+
+`items(object)`
+
+Converts a dictionary object to an array.
+
+In Bicep, use the [items](../bicep/bicep-functions-object.md#items) function.
+
+### Parameters
+
+| Parameter | Required | Type | Description |
+|: |: |: |: |
+| object |Yes |object |The dictionary object to convert to an array. |
+
+### Return value
+
+An array of objects for the converted dictionary. Each object in the array has a `key` property that contains the key value for the dictionary. Each object also has a `value` property that contains the properties for the object.
+
+### Example
+
+The following example converts a dictionary object to an array. For each object in the array, it creates a new object with modified values.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "variables": {
+ "copy": [
+ {
+ "name": "modifiedListOfEntities",
+ "count": "[length(items(variables('entities')))]",
+ "input": {
+ "key": "[items(variables('entities'))[copyIndex('modifiedListOfEntities')].key]",
+ "fullName": "[items(variables('entities'))[copyIndex('modifiedListOfEntities')].value.displayName]",
+ "itemEnabled": "[items(variables('entities'))[copyIndex('modifiedListOfEntities')].value.enabled]"
+ }
+ }
+ ],
+ "entities": {
+ "item002": {
+ "enabled": false,
+ "displayName": "Example item 2",
+ "number": 200
+ },
+ "item001": {
+ "enabled": true,
+ "displayName": "Example item 1",
+ "number": 300
+ }
+ }
+ },
+ "resources": [],
+ "outputs": {
+ "modifiedResult": {
+ "type": "array",
+ "value": "[variables('modifiedListOfEntities')]"
+ }
+ }
+}
+```
+
+The preceding example returns:
+
+```json
+"modifiedResult": {
+ "type": "Array",
+ "value": [
+ {
+ "fullName": "Example item 1",
+ "itemEnabled": true,
+ "key": "item001"
+ },
+ {
+ "fullName": "Example item 2",
+ "itemEnabled": false,
+ "key": "item002"
+ }
+ ]
+}
+```
+
+The following example shows the array that is returned from the items function.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "variables": {
+ "entities": {
+ "item002": {
+ "enabled": false,
+ "displayName": "Example item 2",
+ "number": 200
+ },
+ "item001": {
+ "enabled": true,
+ "displayName": "Example item 1",
+ "number": 300
+ }
+ },
+ "entitiesArray": "[items(variables('entities'))]"
+ },
+ "resources": [],
+ "outputs": {
+ "itemsResult": {
+ "type": "array",
+ "value": "[variables('entitiesArray')]"
+ }
+ }
+}
+```
+
+The example returns:
+
+```json
+"itemsResult": {
+ "type": "Array",
+ "value": [
+ {
+ "key": "item001",
+ "value": {
+ "displayName": "Example item 1",
+ "enabled": true,
+ "number": 300
+ }
+ },
+ {
+ "key": "item002",
+ "value": {
+ "displayName": "Example item 2",
+ "enabled": false,
+ "number": 200
+ }
+ }
+ ]
+}
+```
+
+The items() function sorts the objects in alphabetical order by key. For example, **item001** appears before **item002** in the outputs of the two preceding samples.
+ <a id="json"></a> ## json
azure-resource-manager Template Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions.md
Resource Manager provides several functions for working with objects.
* [createObject](template-functions-object.md#createobject) * [empty](template-functions-object.md#empty) * [intersection](template-functions-object.md#intersection)
+* [items](template-functions-object.md#items)
* [json](template-functions-object.md#json) * [length](template-functions-object.md#length) * [null](template-functions-object.md#null)
azure-signalr Concept Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/concept-metrics.md
Metrics provide the running info of the service. The available metrics are:
|Outbound Traffic|Bytes|Sum|The outbound traffic of service|No Dimensions| |System Errors|Percent|Avg|The percentage of system errors|No Dimensions| |User Errors|Percent|Avg|The percentage of user errors|No Dimensions|
+|Server Load|Percent|Max / Avg|The percentage of server load|No Dimensions|
### Understand Dimensions
The Errors are the percentage of failure operations. Operations are consist of c
> [!IMPORTANT] > In some cases, the User Error percentage will always be very high, especially in the serverless case. In some browsers, when the user closes the web page, the SignalR client doesn't close gracefully. The service eventually closes it because of a timeout, and the timeout closure is counted as a User Error.
+### Metrics suitable for autoscaling
+
+Connection Quota Utilization and Server Load are percentage metrics that show the usage **under the current unit** configuration, so they can be used to set autoscaling rules. For example, you could set a rule to scale up if the server load is greater than 70%.
+
+Learn more about [autoscale](./signalr-howto-scale-autoscale.md).
+ ## Related resources - [Aggregation types in Azure Monitor](../azure-monitor/essentials/metrics-supported.md#microsoftsignalrservicesignalr )
azure-video-indexer Create Account Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/create-account-portal.md
Search for **Microsoft.Media** and **Microsoft.EventGrid**. If not in the "Regis
![Image of search bar](media/create-account-portal/search-bar.png) 1. Click **Create**.
-1. In the **Create an Azure Video Indexer resource** section enter required values. Here are the definitions:
+1. In the **Create an Azure Video Indexer resource** section, enter the required values.
- <!--![Image of create account](media/create-account-portal/create-account-blade.png)-->
+ ![Image of create account](media/create-account-portal/create-account-blade.png)
+
+ Here are the definitions:
+
| Name | Description| ||| |**Subscription**|Choose the subscription that you are using to create the Azure Video Indexer account.|
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md
Title: Platform updates for Azure VMware Solution description: Learn about the platform updates to Azure VMware Solution. + Last updated 12/22/2021
Last updated 12/22/2021
Azure VMware Solution will apply important updates starting in March 2021. You'll receive a notification through Azure Service Health that includes the timeline of the maintenance. For more information, see [Host maintenance and lifecycle management](concepts-private-clouds-clusters.md#host-maintenance-and-lifecycle-management).
+## May 9, 2022
+
+All new Azure VMware Solution private clouds in the following regions (France Central, Brazil South, Japan West, Australia Southeast, Canada East, East Asia, and Southeast Asia) are now deployed with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c.
+
+Any existing private clouds in the previously mentioned regions will be upgraded to those versions. For more information, see [VMware ESXi 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u3c-release-notes.html) and [VMware vCenter Server 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3c-release-notes.html).
+
+You'll receive a notification through Azure Service Health that includes the timeline of the upgrade. You can reschedule an upgrade as needed. This notification also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services.
+ ## February 18, 2022 Per VMware security advisory [VMSA-2022-0004](https://www.vmware.com/security/advisories/VMSA-2022-0004.html), multiple vulnerabilities in VMware ESXi have been reported to VMware.
azure-vmware Configure Storage Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-storage-policy.md
You'll run the `Set-LocationStoragePolicy` cmdlet to Modify vSAN based storage p
You'll run the `Set-ClusterDefaultStoragePolicy` cmdlet to specify default storage policy for a cluster, +
+>[!NOTE]
+>Changing the storage policy of the default management cluster (Cluster-1) isn't allowed.
++ 1. Select **Run command** > **Packages** > **Set-ClusterDefaultStoragePolicy**. 1. Provide the required values or change the default values, and then select **Run**.
cognitive-services Howtoanalyzevideo_Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/Vision-API-How-to-Topics/HowtoAnalyzeVideo_Vision.md
Last updated 09/09/2019 ms.devlang: csharp-+ # Analyze videos in near real time
cognitive-services Deploy Computer Vision On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/deploy-computer-vision-on-premises.md
Title: Use Computer Vision container with Kubernetes and Helm
description: Learn how to deploy the Computer Vision container using Kubernetes and Helm. -+ Previously updated : 01/27/2020- Last updated : 05/09/2022++ # Use Computer Vision container with Kubernetes and Helm
The following prerequisites before using Computer Vision containers on-premises:
|-|| | Azure Account | If you don't have an Azure subscription, create a [free account][free-azure-account] before you begin. | | Kubernetes CLI | The [Kubernetes CLI][kubernetes-cli] is required for managing the shared credentials from the container registry. Kubernetes is also needed before Helm, which is the Kubernetes package manager. |
-| Helm CLI | Install the [Helm CLI][helm-install], which is used to to install a helm chart (container package definition). |
+| Helm CLI | Install the [Helm CLI][helm-install], which is used to install a helm chart (container package definition). |
| Computer Vision resource | In order to use the container, you must have:<br><br>An Azure **Computer Vision** resource and the associated API key and the endpoint URI. Both values are available on the Overview and Keys pages for the resource and are required to start the container.<br><br>**{API_KEY}**: One of the two available resource keys on the **Keys** page<br><br>**{ENDPOINT_URI}**: The endpoint as provided on the **Overview** page| [!INCLUDE [Gathering required parameters](../containers/includes/container-gathering-required-parameters.md)]
cognitive-services Read Container Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/read-container-migration-guide.md
Last updated 09/28/2021 + # Migrate to the Read v3.x OCR containers
cognitive-services Spatial Analysis Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/spatial-analysis-logging.md
Last updated 06/08/2021 + # Telemetry and troubleshooting
cognitive-services Upgrade Api Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/upgrade-api-versions.md
Last updated 08/11/2020 + # Upgrade from Read v2.x to Read v3.x
This guide shows how to upgrade your existing container or cloud API code from Read v2.x to Read v3.x. ## Determine your API path
-Use the following table to determine the **version string** in the API path based on the Read 3.x version you are migrating to.
+Use the following table to determine the **version string** in the API path based on the Read 3.x version you're migrating to.
|Product type| Version | Version string in 3.x API path | |:--|:-|:-|
Next, use the following sections to narrow your operations and replace the **ver
|-|--| |https://{endpoint}/vision/**v2.0/read/core/asyncBatchAnalyze** |https://{endpoint}/vision/<**version string**>/read/analyze[?language]|
-A new optional _language_ parameter is available. If you do not know the language of your document, or it may be multilingual, don't include it.
+A new optional _language_ parameter is available. If you don't know the language of your document, or it may be multilingual, don't include it.
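As an unofficial sketch (not the article's own sample), calling the new endpoint with the optional language parameter could look like the following, where `{endpoint}`, `{API_KEY}`, the image URL, and the `v3.2` version string are placeholders:

```csharp
using System;
using System.Linq;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class ReadSample
{
    static async Task Main()
    {
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{API_KEY}");

        var body = new StringContent(
            "{\"url\":\"https://example.com/sample.jpg\"}", Encoding.UTF8, "application/json");

        var response = await client.PostAsync(
            "https://{endpoint}/vision/v3.2/read/analyze?language=en", body);
        response.EnsureSuccessStatusCode();

        // Poll the URL returned in the Operation-Location header with Get Read Results.
        Console.WriteLine(response.Headers.GetValues("Operation-Location").First());
    }
}
```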
### `Get Read Results`
When the call to `Get Read Operation Result` is successful, it returns a status
Note the following changes to the json: * In v2.x, `Get Read Operation Result` will return the OCR recognition json when the status is `Succeeded"`. In v3.0, this field is `succeeded`. * To get the root for page array, change the json hierarchy from `recognitionResults` to `analyzeResult`/`readResults`. The per-page line and words json hierarchy remains unchanged, so no code changes are required.
-* The page angle `clockwiseOrientation` has been renamed to `angle` and the range has been changed from 0 - 360 degrees to -180 to 180 degrees. Depending on your code, you may or may not have to makes changes as most math functions can handle either range.
+* The page angle `clockwiseOrientation` has been renamed to `angle` and the range has been changed from 0 - 360 degrees to -180 to 180 degrees. Depending on your code, you may or may not have to make changes as most math functions can handle either range.
-The v3.0 API also introduces the following improvements you can optionally leverage:
-* `createdDateTime` and `lastUpdatedDateTime` are added so you can track the duration of processing. See documentation for more details.
+The v3.0 API also introduces the following improvements you can optionally use:
+* `createdDateTime` and `lastUpdatedDateTime` are added so you can track the duration of processing.
* `version` tells you the version of the API used to generate results * A per-word `confidence` has been added. This value is calibrated so that a value 0.95 means that there is a 95% chance the recognition is correct. The confidence score can be used to select which text to send to human review.
In v3.0, it has been adjusted:
## Service only ### `Recognize Text`
-`Recognize Text` is a *preview* operation which is being *deprecated in all versions of Computer Vision API*. You must migrate from `Recognize Text` to `Read` (v3.0) or `Batch Read File` (v2.0, v2.1). v3.0 of `Read` includes newer, better models for text recognition and additional features, so it is recommended. To upgrade from `Recognize Text` to `Read`:
+`Recognize Text` is a *preview* operation that is being *deprecated in all versions of Computer Vision API*. You must migrate from `Recognize Text` to `Read` (v3.0) or `Batch Read File` (v2.0, v2.1). v3.0 of `Read` includes newer, better models for text recognition and other features, so it's recommended. To upgrade from `Recognize Text` to `Read`:
|Recognize Text 2.x |Read 3.x | |-|--| |https://{endpoint}/vision/**v2.0/recognizeText[?mode]**|https://{endpoint}/vision/<**version string**>/read/analyze[?language]|
-The _mode_ parameter is not supported in `Read`. Both handwritten and printed text will automatically be supported.
+The _mode_ parameter isn't supported in `Read`. Both handwritten and printed text will automatically be supported.
-A new optional _language_ parameter is available in v3.0. If you do not know the language of your document, or it may be multilingual, don't include it.
+A new optional _language_ parameter is available in v3.0. If you don't know the language of your document, or it may be multilingual, don't include it.
### `Get Recognize Text Operation Result`
Note the following changes to the json:
* In v2.x, `Get Read Operation Result` will return the OCR recognition json when the status is `Succeeded`. In v3.x, this field is `succeeded`. * To get the root for page array, change the json hierarchy from `recognitionResult` to `analyzeResult`/`readResults`. The per-page line and words json hierarchy remains unchanged, so no code changes are required.
-The v3.0 API also introduces the following improvements you can optionally leverage. See the API reference for more details:
-* `createdDateTime` and `lastUpdatedDateTime` are added so you can track the duration of processing. See documentation for more details.
+The v3.0 API also introduces the following improvements you can optionally use. See the API reference for more details:
+* `createdDateTime` and `lastUpdatedDateTime` are added so you can track the duration of processing.
* `version` tells you the version of the API used to generate results * A per-word `confidence` has been added. This value is calibrated so that a value 0.95 means that there is a 95% chance the recognition is correct. The confidence score can be used to select which text to send to human review. * `angle` general orientation of the text in clockwise direction, measured in degrees between (-180, 180].
cognitive-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/encrypt-data-at-rest.md
Last updated 08/28/2020 + #Customer intent: As a user of the Face service, I want to learn how encryption at rest works.
cognitive-services Export Delete Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/export-delete-data.md
Last updated 03/21/2019 + # View or delete user data in Custom Vision
cognitive-services Storage Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/storage-integration.md
Last updated 06/25/2021 + # Integrate Azure storage for notifications and backup
cognitive-services How To Migrate Face Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Face/Face-API-How-to-Topics/how-to-migrate-face-data.md
Last updated 02/22/2021 ms.devlang: csharp-+ # Migrate your face data to a different Face subscription
cognitive-services How To Mitigate Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Face/Face-API-How-to-Topics/how-to-mitigate-latency.md
Last updated 1/5/2021 ms.devlang: csharp+ # How to: mitigate latency when using the Face service
cognitive-services Use Persondirectory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Face/Face-API-How-to-Topics/use-persondirectory.md
Last updated 04/22/2021 ms.devlang: csharp-+ # Use the PersonDirectory structure
cognitive-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Face/encrypt-data-at-rest.md
Last updated 08/28/2020 + #Customer intent: As a user of the Face service, I want to learn how encryption at rest works.
cognitive-services How To Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-pronunciation-assessment.md
zone_pivot_groups: programming-languages-speech-services-nomore-variant
# Pronunciation assessment
-Pronunciation assessment evaluates speech pronunciation and gives speakers feedback on the accuracy and fluency of spoken audio. With pronunciation assessment, language learners can practice, get instant feedback, and improve their pronunciation so that they can speak and present with confidence. Educators can use the capability to evaluate pronunciation of multiple speakers in real time.
+Pronunciation assessment evaluates speech pronunciation and gives speakers feedback on the accuracy and fluency of spoken audio. With pronunciation assessment, language learners can practice, get instant feedback, and improve their pronunciation so that they can speak and present with confidence. Educators can use the capability to evaluate pronunciation of multiple speakers in real time. Pronunciation assessment is generally available in US English, while [other languages](language-support.md#pronunciation-assessment) are available in preview.
In this article, you'll learn how to set up `PronunciationAssessmentConfig` and retrieve the `PronunciationAssessmentResult` using the Speech SDK.
-> [!NOTE]
-> Pronunciation assessment for the `en-US` locale is available in all [speech-to-text regions](regions.md#speech-to-text-text-to-speech-and-translation). Support for `en-GB` and `zh-CN` locales is in preview.
- ## Pronunciation assessment with the Speech SDK The following snippet illustrates how to create a `PronunciationAssessmentConfig`, then apply it to a `SpeechRecognizer`.
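
For example, a minimal Python sketch of that flow might look like the following; it assumes the `azure-cognitiveservices-speech` package, and the subscription key, region, reference text, and audio file name are placeholders.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials and audio input (assumptions for this sketch).
speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourServiceRegion")
audio_config = speechsdk.audio.AudioConfig(filename="good_morning.wav")

# Configure the assessment; the parameters mirror the table that follows.
pronunciation_config = speechsdk.PronunciationAssessmentConfig(
    reference_text="good morning",
    grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
    granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme,
    enable_miscue=True)

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
pronunciation_config.apply_to(recognizer)  # attach the assessment to the recognizer

result = recognizer.recognize_once()
assessment = speechsdk.PronunciationAssessmentResult(result)
print(assessment.accuracy_score, assessment.fluency_score, assessment.completeness_score)
```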
This table lists the configuration parameters for pronunciation assessment.
|--|-|| | `ReferenceText` | The text that the pronunciation will be evaluated against. | Required | | `GradingSystem` | The point system for score calibration. The `FivePoint` system gives a 0-5 floating point score, and `HundredMark` gives a 0-100 floating point score. Default: `FivePoint`. | Optional |
-| `Granularity` | The evaluation granularity. Accepted values are `Phoneme`, which shows the score on the full text, word and phoneme level, `Word`, which shows the score on the full text and word level, `FullText`, which shows the score on the full text level only. Default: `Phoneme`. | Optional |
+| `Granularity` | The evaluation granularity. Accepted values are `Phoneme`, which shows the score on the full text, word and phoneme level, `Syllable`, which shows the score on the full text, word and syllable level, `Word`, which shows the score on the full text and word level, and `FullText`, which shows the score on the full text level only. Default: `Phoneme`. | Optional |
| `EnableMiscue` | Enables miscue calculation when the pronounced words are compared to the reference text. If this value is `True`, the `ErrorType` result value can be set to `Omission` or `Insertion` based on the comparison. Accepted values are `False` and `True`. Default: `False`. | Optional | | `ScenarioId` | A GUID indicating a customized point system. | Optional |
This table lists the result parameters of pronunciation assessment.
| Parameter | Description | |--|-|
-| `AccuracyScore` | Pronunciation accuracy of the speech. Accuracy indicates how closely the phonemes match a native speaker's pronunciation. Word and full text accuracy scores are aggregated from phoneme-level accuracy score. |
+| `AccuracyScore` | Pronunciation accuracy of the speech. Accuracy indicates how closely the phonemes match a native speaker's pronunciation. Syllable, word, and full text accuracy scores are aggregated from phoneme-level accuracy score. |
| `FluencyScore` | Fluency of the given speech. Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words. | | `CompletenessScore` | Completeness of the speech, calculated by the ratio of pronounced words to the input reference text. |
-| `PronScore` | Overall score indicating the pronunciation quality of the given speech. This is aggregated from `AccuracyScore`, `FluencyScore`, and `CompletenessScore` with weight. |
+| `PronScore` | Overall score indicating the pronunciation quality of the given speech. `PronScore` is aggregated from `AccuracyScore`, `FluencyScore`, and `CompletenessScore` with weight. |
| `ErrorType` | This value indicates whether a word is omitted, inserted, or mispronounced, compared to the `ReferenceText`. Possible values are `None`, `Omission`, `Insertion`, and `Mispronunciation`. | ### Sample responses
A typical pronunciation assessment result in JSON:
"RecognitionStatus": "Success", "Offset": "400000", "Duration": "11000000",
- "NBest": [
- {
- "Confidence" : "0.87",
- "Lexical" : "good morning",
+ "NBest": [
+ {
+ "Confidence": "0.87",
+ "Lexical": "good morning",
"ITN" : "good morning", "MaskedITN" : "good morning", "Display" : "Good morning.",
- "PronunciationAssessment":
- {
+ "PronunciationAssessment" : {
"PronScore" : 84.4, "AccuracyScore" : 100.0, "FluencyScore" : 74.0, "CompletenessScore" : 100.0, }, "Words": [
- {
- "Word" : "Good",
- "Offset" : 500000,
- "Duration" : 2700000,
- "PronunciationAssessment":
- {
+ {
+ "Word" : "good",
+ "Offset" : 500000,
+ "Duration" : 2700000,
+ "PronunciationAssessment": {
"AccuracyScore" : 100.0, "ErrorType" : "None"
- }
+ },
+ "Syllables" : [
+ {
+ "Syllable" : "ɡʊd",
+ "Offset" : 500000,
+ "Duration" : 2700000,
+ "PronunciationAssessment" : {
+ "AccuracyScore": 100.0
+ }
+ }],
+ "Phonemes": [
+ {
+ "Phoneme" : "ɡ",
+ "Offset" : 500000,
+ "Duration": 1200000,
+ "PronunciationAssessment": {
+ "AccuracyScore": 100.0
+ }
+ },
+ {
+ "Phoneme" : "ʊ",
+ "Offset" : 1800000,
+ "Duration": 500000,
+ "PronunciationAssessment": {
+ "AccuracyScore": 100.0
+ }
}, {
- "Word" : "morning",
- "Offset" : 5300000,
- "Duration" : 900000,
- "PronunciationAssessment":
- {
+ "Phoneme" : "d",
+ "Offset" : 2400000,
+ "Duration": 800000,
+ "PronunciationAssessment": {
+ "AccuracyScore": 100.0
+ }
+ }]
+ },
+ {
+ "Word" : "morning",
+ "Offset" : 3300000,
+ "Duration" : 5500000,
+ "PronunciationAssessment": {
"AccuracyScore" : 100.0, "ErrorType" : "None"
- }
- }
- ]
- }
- ]
+ },
+ "Syllables": [
+ {
+ "Syllable" : "mɔr",
+ "Offset" : 3300000,
+ "Duration": 2300000,
+ "PronunciationAssessment": {
+ "AccuracyScore": 100.0
+ }
+ },
+ {
+ "Syllable" : "nɪŋ",
+ "Offset" : 5700000,
+ "Duration": 3100000,
+ "PronunciationAssessment": {
+ "AccuracyScore": 100.0
+ }
+ }],
+ "Phonemes": [
+ ... // omitted phonemes
+ ]
+ }]
+ }]
} ``` ## Next steps -
-* To learn more about released use cases, read the [Azure tech blog](https://techcommunity.microsoft.com/t5/azure-ai-blog/speech-service-update-pronunciation-assessment-is-generally/ba-p/2505501).
+* Learn more about released [use cases](https://techcommunity.microsoft.com/t5/azure-ai-blog/speech-service-update-pronunciation-assessment-is-generally/ba-p/2505501)
* Try out the [pronunciation assessment demo](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/PronunciationAssessment/BrowserJS) and watch the [video tutorial](https://www.youtube.com/watch?v=zFlwm7N4Awc) of pronunciation assessment.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
Arabic|`ar-DZ`<br/>`ar-BH`<br/>`ar-EG`<br/>`ar-IQ`<br/>`ar-OM`<br/>`ar-SY`|
## Pronunciation assessment
-The [pronunciation assessment](how-to-pronunciation-assessment.md) feature currently supports the `en-US` locale, which is available with all speech-to-text regions. Support for `en-GB` and `zh-CN` languages is in preview.
+The following table lists the released languages and public preview languages.
+
+| Language | Locale |
+|--|--|
+|Chinese (Mandarin, Simplified)|`zh-CN`<sup>Public preview</sup> |
+|English (Australia)|`en-AU`<sup>Public preview</sup> |
+|English (United Kingdom)|`en-GB`<sup>Public preview</sup> |
+|English (United States)|`en-US`<sup>Generally available</sup>|
+|French (France)|`fr-FR`<sup>Public preview</sup> |
+|Spanish (Spain)|`es-ES`<sup>Public preview</sup> |
+
+> [!NOTE]
+> If you want to use languages that aren't listed here, please contact us by email at [mspafeedback@microsoft.com](mailto:mspafeedback@microsoft.com).
+>
+> For the pronunciation assessment feature, the released `en-US` language is available in all [Speech-to-Text regions](regions.md#speech-to-text-pronunciation-assessment-text-to-speech-and-translation), and preview languages are available in one region: West US.
## Speech translation
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/overview.md
To add a Speech service resource to your Azure account by using the free or paid
1. Give a unique name for your new resource. The name helps you distinguish among multiple subscriptions tied to the same service. 1. Choose the Azure subscription that the new resource is associated with to determine how the fees are billed. Here's the introduction for [how to create an Azure subscription](../../cost-management-billing/manage/create-subscription.md#create-a-subscription-in-the-azure-portal) in the Azure portal.
- 1. Choose the [region](regions.md) where the resource will be used. Azure is a global cloud platform that's generally available in many regions worldwide. To get the best performance, select a region that's closest to you or where your application runs. The Speech service availabilities vary among different regions. Make sure that you create your resource in a supported region. For more information, see [region support for Speech services](./regions.md#speech-to-text-text-to-speech-and-translation).
+ 1. Choose the [region](regions.md) where the resource will be used. Azure is a global cloud platform that's generally available in many regions worldwide. To get the best performance, select a region that's closest to you or where your application runs. The Speech service availabilities vary among different regions. Make sure that you create your resource in a supported region. For more information, see [region support for Speech services](./regions.md#speech-to-text-pronunciation-assessment-text-to-speech-and-translation).
1. Choose either a free (F0) or paid (S0) pricing tier. For complete information about pricing and usage quotas for each tier, select **View full pricing details** or see [Speech services pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). For limits on resources, see [Azure Cognitive Services limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#azure-cognitive-services-limits). 1. Create a new resource group for this Speech subscription or assign the subscription to an existing resource group. Resource groups help you keep your various Azure subscriptions organized. 1. Select **Create**. This action takes you to the deployment overview and displays deployment progress messages.
cognitive-services Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/regions.md
Keep in mind the following points:
In the [Speech SDK](speech-sdk.md), you specify the region as a parameter (for example, in the Speech SDK for C#, you specify the region as a parameter to `SpeechConfig.FromSubscription`).
-### Speech-to-text, text-to-speech, and translation
+### Speech-to-text, pronunciation assessment, text-to-speech, and translation
-The Speech service is available in these regions for speech-to-text, text-to-speech, and translation:
+The Speech service is available in these regions for speech-to-text, pronunciation assessment, text-to-speech, and translation:
[!INCLUDE [](../../../includes/cognitive-services-speech-service-region-identifier.md)] If you plan to train a custom model with audio data, use one of the [regions with dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for faster training. You can use the [REST API](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) to copy the fully trained model to another region later.
+> [!TIP]
+> For the pronunciation assessment feature, the released `en-US` language is available in all speech-to-text regions, and [preview languages](language-support.md#pronunciation-assessment) are available in one region: West US.
+ ### Intent recognition Available regions for intent recognition via the Speech SDK are in the following table.
cognitive-services V3 0 Translate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/reference/v3-0-translate.md
Previously updated : 05/12/2021 Last updated : 05/09/2022
Request parameters passed on the query string are:
| Query parameter | Description | | | |
-| from | _Optional parameter_. <br>Specifies the language of the input text. Find which languages are available to translate from by looking up [supported languages](../reference/v3-0-languages.md) using the `translation` scope. If the `from` parameter is not specified, automatic language detection is applied to determine the source language. <br> <br>You must use the `from` parameter rather than autodetection when using the [dynamic dictionary](../dynamic-dictionary.md) feature. |
+| from | _Optional parameter_. <br>Specifies the language of the input text. Find which languages are available to translate from by looking up [supported languages](../reference/v3-0-languages.md) using the `translation` scope. If the `from` parameter isn't specified, automatic language detection is applied to determine the source language. <br> <br>You must use the `from` parameter rather than autodetection when using the [dynamic dictionary](../dynamic-dictionary.md) feature. |
| textType | _Optional parameter_. <br>Defines whether the text being translated is plain text or HTML text. Any HTML needs to be a well-formed, complete element. Possible values are: `plain` (default) or `html`. | | category | _Optional parameter_. <br>A string specifying the category (domain) of the translation. This parameter is used to get translations from a customized system built with [Custom Translator](../customization.md). Add the Category ID from your Custom Translator [project details](../custom-translator/how-to-create-project.md#view-project-details) to this parameter to use your deployed customized system. Default value is: `general`. | | profanityAction | _Optional parameter_. <br>Specifies how profanities should be treated in translations. Possible values are: `NoAction` (default), `Marked` or `Deleted`. To understand ways to treat profanity, see [Profanity handling](#handle-profanity). |
Request parameters passed on the query string are:
| suggestedFrom | _Optional parameter_. <br>Specifies a fallback language if the language of the input text can't be identified. Language autodetection is applied when the `from` parameter is omitted. If detection fails, the `suggestedFrom` language will be assumed. | | fromScript | _Optional parameter_. <br>Specifies the script of the input text. | | toScript | _Optional parameter_. <br>Specifies the script of the translated text. |
-| allowFallback | _Optional parameter_. <br>Specifies that the service is allowed to fall back to a general system when a custom system does not exist. Possible values are: `true` (default) or `false`. <br> <br>`allowFallback=false` specifies that the translation should only use systems trained for the `category` specified by the request. If a translation for language X to language Y requires chaining through a pivot language E, then all the systems in the chain (X->E and E->Y) will need to be custom and have the same category. If no system is found with the specific category, the request will return a 400 status code. `allowFallback=true` specifies that the service is allowed to fall back to a general system when a custom system does not exist. |
+| allowFallback | _Optional parameter_. <br>Specifies that the service is allowed to fall back to a general system when a custom system doesn't exist. Possible values are: `true` (default) or `false`. <br> <br>`allowFallback=false` specifies that the translation should only use systems trained for the `category` specified by the request. If a translation for language X to language Y requires chaining through a pivot language E, then all the systems in the chain (X->E and E->Y) will need to be custom and have the same category. If no system is found with the specific category, the request will return a 400 status code. `allowFallback=true` specifies that the service is allowed to fall back to a general system when a custom system doesn't exist. |
Request headers include:
The body of the request is a JSON array. Each array element is a JSON object wit
The following limitations apply: * The array can have at most 100 elements.
-* The entire text included in the request cannot exceed 10,000 characters including spaces.
+* The entire text included in the request can't exceed 10,000 characters including spaces.
## Response body A successful response is a JSON array with one result for each string in the input array. A result object includes the following properties:
- * `detectedLanguage`: An object describing the detected language through the following properties:
+* `detectedLanguage`: An object describing the detected language through the following properties:
- * `language`: A string representing the code of the detected language.
+ * `language`: A string representing the code of the detected language.
- * `score`: A float value indicating the confidence in the result. The score is between zero and one and a low score indicates a low confidence.
+ * `score`: A float value indicating the confidence in the result. The score is between zero and one and a low score indicates a low confidence.
The `detectedLanguage` property is only present in the result object when language autodetection is requested.
- * `translations`: An array of translation results. The size of the array matches the number of target languages specified through the `to` query parameter. Each element in the array includes:
+* `translations`: An array of translation results. The size of the array matches the number of target languages specified through the `to` query parameter. Each element in the array includes:
- * `to`: A string representing the language code of the target language.
+ * `to`: A string representing the language code of the target language.
- * `text`: A string giving the translated text.
+ * `text`: A string giving the translated text.
- * `transliteration`: An object giving the translated text in the script specified by the `toScript` parameter.
+* `transliteration`: An object giving the translated text in the script specified by the `toScript` parameter.
- * `script`: A string specifying the target script.
+ * `script`: A string specifying the target script.
- * `text`: A string giving the translated text in the target script.
+ * `text`: A string giving the translated text in the target script.
- The `transliteration` object is not included if transliteration does not take place.
+ The `transliteration` object isn't included if transliteration doesn't take place.
* `alignment`: An object with a single string property named `proj`, which maps input text to translated text. The alignment information is only provided when the request parameter `includeAlignment` is `true`. Alignment is returned as a string value of the following format: `[[SourceTextStartIndex]:[SourceTextEndIndex]-[TgtTextStartIndex]:[TgtTextEndIndex]]`. The colon separates start and end index, the dash separates the languages, and space separates the words. One word may align with zero, one, or multiple words in the other language, and the aligned words may be non-contiguous. When no alignment information is available, the alignment element will be empty. See [Obtain alignment information](#obtain-alignment-information) for an example and restrictions, and the parsing sketch after this list.
- * `sentLen`: An object returning sentence boundaries in the input and output texts.
+* `sentLen`: An object returning sentence boundaries in the input and output texts.
- * `srcSentLen`: An integer array representing the lengths of the sentences in the input text. The length of the array is the number of sentences, and the values are the length of each sentence.
+ * `srcSentLen`: An integer array representing the lengths of the sentences in the input text. The length of the array is the number of sentences, and the values are the length of each sentence.
- * `transSentLen`: An integer array representing the lengths of the sentences in the translated text. The length of the array is the number of sentences, and the values are the length of each sentence.
+ * `transSentLen`: An integer array representing the lengths of the sentences in the translated text. The length of the array is the number of sentences, and the values are the length of each sentence.
Sentence boundaries are only included when the request parameter `includeSentenceLength` is `true`.
- * `sourceText`: An object with a single string property named `text`, which gives the input text in the default script of the source language. `sourceText` property is present only when the input is expressed in a script that's not the usual script for the language. For example, if the input were Arabic written in Latin script, then `sourceText.text` would be the same Arabic text converted into Arab script.
+* `sourceText`: An object with a single string property named `text`, which gives the input text in the default script of the source language. `sourceText` property is present only when the input is expressed in a script that's not the usual script for the language. For example, if the input were Arabic written in Latin script, then `sourceText.text` would be the same Arabic text converted into Arab script.
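
As a small, illustrative Python sketch (not part of the official reference), the following parses a `proj` alignment string into index pairs, assuming the format described in the `alignment` property above:

```python
def parse_alignment(proj: str):
    """Parse a 'proj' string such as '0:2-0:1 4:6-3:9' into
    ((src_start, src_end), (tgt_start, tgt_end)) index pairs."""
    pairs = []
    for mapping in proj.split():          # space separates word mappings
        src, tgt = mapping.split("-")     # the dash separates the languages
        src_start, src_end = (int(i) for i in src.split(":"))
        tgt_start, tgt_end = (int(i) for i in tgt.split(":"))
        pairs.append(((src_start, src_end), (tgt_start, tgt_end)))
    return pairs

print(parse_alignment("0:2-0:1"))  # [((0, 2), (0, 1))]
```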
Examples of JSON responses are provided in the [examples](#examples) section.
| Headers | Description | | | |
-| X-RequestId | Value generated by the service to identify the request. It is used for troubleshooting purposes. |
-| X-MT-System | Specifies the system type that was used for translation for each ΓÇÿtoΓÇÖ language requested for translation. The value is a comma-separated list of strings. Each string indicates a type: <br><br>* Custom - Request includes a custom system and at least one custom system was used during translation.<br>* Team - All other requests |
+| X-RequestId | Value generated by the service to identify the request. It's used for troubleshooting purposes. |
+| X-MT-System | Specifies the system type that was used for translation for each 'to' language requested for translation. The value is a comma-separated list of strings. Each string indicates a type: <br><br>* Custom - Request includes a custom system and at least one custom system was used during translation.<br>* Team - All other requests |
## Response status codes The following are the possible HTTP status codes that a request returns.
-| ProfanityAction | Action |
+|Status code | Description |
| | |
-| `NoAction` |NoAction is the default behavior. Profanity will pass from source to target. <br> <br>**Example Source (Japanese)**: 彼はジャッカスです。 <br>**Example Translation (English)**: He is a jackass. |
-| `Deleted` | Profane words will be removed from the output without replacement. <br> <br>**Example Source (Japanese)**: 彼はジャッカスです。 <br>**Example Translation (English)**: He is |
-| `Marked` | Profane words are replaced by a marker in the output. The marker depends on the `ProfanityMarker` parameter. <br> <br>For `ProfanityMarker=Asterisk`, profane words are replaced with `***`: <br>**Example Source (Japanese)**: 彼はジャッカスです。 <br>**Example Translation (English)**: He is a \\*\\*\\*. <br> <br>For `ProfanityMarker=Tag`, profane words are surrounded by XML tags &lt;profanity&gt; and &lt;/profanity&gt;: <br>**Example Source (Japanese)**: 彼はジャッカスです。 <br>**Example Translation (English)**: He is a &lt;profanity&gt;jackass&lt;/profanity&gt;. |
+|200 | Success. |
+|400 |One of the query parameters is missing or not valid. Correct request parameters before retrying. |
+|401 | The request couldn't be authenticated. Check that credentials are specified and valid. |
+|403 | The request isn't authorized. Check the detailed error message. This status code often indicates that all free translations provided with a trial subscription have been used up. |
+|408 | The request couldn't be fulfilled because a resource is missing. Check the detailed error message. When the request includes a custom category, this status code often indicates that the custom translation system isn't yet available to serve requests. The request should be retried after a waiting period (for example, 1 minute). |
+|429 | The server rejected the request because the client has exceeded request limits. |
+|500 | An unexpected error occurred. If the error persists, report it with: date and time of the failure, request identifier from response header X-RequestId, and client identifier from request header X-ClientTraceId. |
+|503 |Server temporarily unavailable. Retry the request. If the error persists, report it with: date and time of the failure, request identifier from response header X-RequestId, and client identifier from request header X-ClientTraceId. |
If an error occurs, the request will also return a JSON error response. The error code is a 6-digit number combining the 3-digit HTTP status code followed by a 3-digit number to further categorize the error. Common error codes can be found on the [v3 Translator reference page](./v3-0-reference.md#errors).
The `translations` array includes one element, which provides the translation of
### Translate a single input with language autodetection
-This example shows how to translate a single sentence from English to Simplified Chinese. The request does not specify the input language. Autodetection of the source language is used instead.
+This example shows how to translate a single sentence from English to Simplified Chinese. The request doesn't specify the input language. Autodetection of the source language is used instead.
```curl curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=zh-Hans" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'Hello, what is your name?'}]"
If you want to avoid getting profanity in the translation, regardless of the pre
| ProfanityAction | Action | | | |
-| `NoAction` | NoAction is the default behavior. Profanity will pass from source to target. <br> <br>**Example Source (Japanese)**: 彼はジャッカスです。 <br>**Example Translation (English)**: He is a jackass. |
-| `Deleted` | Profane words will be removed from the output without replacement. <br> <br>**Example Source (Japanese)**: 彼はジャッカスです。 <br>**Example Translation (English)**: He is a. |
-| `Marked` | Profane words are replaced by a marker in the output. The marker depends on the `ProfanityMarker` parameter. <br> <br>For `ProfanityMarker=Asterisk`, profane words are replaced with `***`: <br>**Example Source (Japanese)**: 彼はジャッカスです。 <br>**Example Translation (English)**: He is a \\*\\*\\*. <br> <br>For `ProfanityMarker=Tag`, profane words are surrounded by XML tags &lt;profanity&gt; and &lt;/profanity&gt;: <br>**Example Source (Japanese)**: 彼はジャッカスです。 <br>**Example Translation (English)**: He is a &lt;profanity&gt;jackass&lt;/profanity&gt;. |
+| `NoAction` | NoAction is the default behavior. Profanity will pass from source to target. <br> <br>**Example Source (Japanese)**: 彼はジャッカスです。 <br>**Example Translation (English)**: He's a jack. |
+| `Deleted` | Profane words will be removed from the output without replacement. <br> <br>**Example Source (Japanese)**: 彼はジャッカスです。 <br>**Example Translation (English)**: He's a. |
+| `Marked` | Profane words are replaced by a marker in the output. The marker depends on the `ProfanityMarker` parameter. <br> <br>For `ProfanityMarker=Asterisk`, profane words are replaced with `***`: <br>**Example Source (Japanese)**: 彼はジャッカスです。 <br>**Example Translation (English)**: He's a \\*\\*\\*. <br> <br>For `ProfanityMarker=Tag`, profane words are surrounded by XML tags &lt;profanity&gt; and &lt;/profanity&gt;: <br>**Example Source (Japanese)**: 彼はジャッカスです。 <br>**Example Translation (English)**: He's a &lt;profanity&gt;jack&lt;/profanity&gt;. |
For example:
That last request returns:
### Translate content with markup and decide what's translated
-It's common to translate content that includes markup such as content from an HTML page or content from an XML document. Include query parameter `textType=html` when translating content with tags. In addition, it's sometimes useful to exclude specific content from translation. You can use the attribute `class=notranslate` to specify content that should remain in its original language. In the following example, the content inside the first `div` element will not be translated, while the content in the second `div` element will be translated.
+It's common to translate content that includes markup such as content from an HTML page or content from an XML document. Include query parameter `textType=html` when translating content with tags. In addition, it's sometimes useful to exclude specific content from translation. You can use the attribute `class=notranslate` to specify content that should remain in its original language. In the following example, the content inside the first `div` element won't be translated, while the content in the second `div` element will be translated.
``` <div class="notranslate">This will not be translated.</div> <div>This will be translated. </div> ```
-Here is a sample request to illustrate.
+Here's a sample request to illustrate.
```curl curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=zh-Hans&textType=html" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'<div class=\"notranslate\">This will not be translated.</div><div>This will be translated.</div>'}]"
The response is:
The alignment information starts with `0:2-0:1`, which means that the first three characters in the source text (`The`) map to the first two characters in the translated text (`La`). #### Limitations
-Obtaining alignment information is an experimental feature that we have enabled for prototyping research and experiences with potential phrase mappings. We may choose to stop supporting this in the future. Here are some of the notable restrictions where alignments are not supported:
+Obtaining alignment information is an experimental feature that we've enabled for prototyping research and experiences with potential phrase mappings. We may choose to stop supporting this feature in the future. Here are some of the notable restrictions where alignments aren't supported:
-* Alignment is not available for text in HTML format i.e., textType=html
+* Alignment isn't available for text in HTML format, that is, `textType=html`
* Alignment is only returned for a subset of the language pairs: - English to/from any other language except Chinese Traditional, Cantonese (Traditional) or Serbian (Cyrillic). - from Japanese to Korean or from Korean to Japanese. - from Japanese to Chinese Simplified and Chinese Simplified to Japanese. - from Chinese Simplified to Chinese Traditional and Chinese Traditional to Chinese Simplified.
-* You will not receive alignment if the sentence is a canned translation. Example of a canned translation is "This is a test", "I love you" and other high frequency sentences.
-* Alignment is not available when you apply any of the approaches to prevent translation as described [here](../prevent-translation.md)
+* You won't receive alignment if the sentence is a canned translation. Examples of canned translations are "This is a test", "I love you", and other high-frequency sentences.
+* Alignment isn't available when you apply any of the approaches to prevent translation as described [here](../prevent-translation.md)
### Obtain sentence boundaries
The response is:
{ "translations":[ {
- "text":"La réponse se trouve dans la traduction automatique. La meilleure technologie de traduction automatique ne peut pas toujours fournir des traductions adaptées à un site ou des utilisateurs comme un être humain. Il suffit de copier et coller un extrait de code n’importe où.",
+ "text":"La réponse se trouve dans la traduction automatique. La meilleure technologie de traduction automatique ne peut pas toujours fournir des traductions adaptées à un site ou des utilisateurs comme un être humain. Il suffit de copier et coller un extrait de code n'importe où.",
"to":"fr", "sentLen":{"srcSentLen":[40,117,46],"transSentLen":[53,157,62]} }
The result is:
] ```
-This feature works the same way with `textType=text` or with `textType=html`. The feature should be used sparingly. The appropriate and far better way of customizing translation is by using Custom Translator. Custom Translator makes full use of context and statistical probabilities. If you have or can afford to create training data that shows your work or phrase in context, you get much better results. [Learn more about Custom Translator](../customization.md).
+This feature works the same way with `textType=text` or with `textType=html`. The feature should be used sparingly. The appropriate and far better way of customizing translation is by using Custom Translator. Custom Translator makes full use of context and statistical probabilities. If you have, or can afford to create, training data that shows your word or phrase in context, you get much better results. [Learn more about Custom Translator](../customization.md).
cognitive-services Request Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/request-limits.md
Title: Request limits - Translator
-description: This article lists request limits for the Translator. Charges are incurred based on character count, not request frequency with a limit of 5,000 characters per request. Character limits are subscription-based, with F0 limited to 2 million characters per hour.
+description: This article lists request limits for the Translator. Charges are incurred based on character count, not request frequency with a limit of 50,000 characters per request. Character limits are subscription-based, with F0 limited to 2 million characters per hour.
This article provides throttling limits for the Translator translation, translit
## Character and array limits per request
-Each translate request is limited to 10,000 characters, across all the target languages you are translating to. For example, sending a translate request of 3,000 characters to translate to three different languages results in a request size of 3000x3 = 9,000 characters, which satisfy the request limit. You're charged per character, not by the number of requests. It's recommended to send shorter requests.
+Each translate request is limited to 50,000 characters, across all the target languages you are translating to. For example, sending a translate request of 3,000 characters to translate to three different languages results in a request size of 3,000 x 3 = 9,000 characters, which satisfies the request limit. You're charged per character, not by the number of requests. It's recommended to send shorter requests.
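
As a quick illustration of the arithmetic (the values are illustrative only), the request size is the character count of the request body multiplied by the number of target languages:

```python
texts = ["First sentence to translate.", "Second sentence."]
target_languages = ["de", "fr", "it"]

# Characters are counted across all target languages: 44 characters x 3 languages.
request_size = sum(len(text) for text in texts) * len(target_languages)
print(request_size)           # 132
print(request_size <= 50000)  # True: well within the per-request limit
```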
The following table lists array element and character limits for each operation of the Translator. | Operation | Maximum Size of Array Element | Maximum Number of Array Elements | Maximum Request Size (characters) | |:-|:-|:-|:-|
-| Translate | 10,000| 100| 10,000 |
+| Translate | 50,000| 1,000| 50,000 |
| Transliterate | 5,000| 10| 5,000 | | Detect | 50,000 |100 |50,000 | | BreakSentence | 50,000| 100 |50,000 |
When using the [BreakSentence](./reference/v3-0-break-sentence.md) function, sen
* [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/translator-text-api/) * [Regional availability](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services)
-* [v3 Translator reference](./reference/v3-0-reference.md)
+* [v3 Translator reference](./reference/v3-0-reference.md)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/language-support.md
Use this article to learn about the languages and regions currently supported by
With custom NER, you can train a model in one language and test in another language. This feature is powerful because it helps you save time and effort. Instead of building separate projects for every language, you can handle a multi-lingual dataset in one project. Your dataset doesn't have to be entirely in the same language, but you have to specify this option at project creation. If you notice your model performing poorly in certain languages during the evaluation process, consider adding more data in that language to your training set. > [!NOTE]
-> To enable support for multiple languages, you need to enable this option when [creating your project](how-to/create-project.md) or you can enbale it later form the project settings page.
+> To enable support for multiple languages, you need to enable this option when [creating your project](how-to/create-project.md), or you can enable it later from the project settings page.
## Language support
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/concepts.md
An exception policy controls the behavior of a Job based on a trigger and execut
- [How jobs are matched to workers](matching-concepts.md) - [Router Rule concepts](router-rule-concepts.md) - [Classification concepts](classification-concepts.md)
+- [Distribution modes](distribution-concepts.md)
- [Exception Policies](exception-policy.md) - [Quickstart guide](../../quickstarts/router/get-started-router.md) - [Manage queues](../../how-tos/router-sdk/manage-queue.md)
communication-services Distribution Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/distribution-concepts.md
+
+ Title: Distribution mode concepts for Azure Communication Services
+
+description: Learn about the Azure Communication Services Job Router distribution mode concepts.
+++++ Last updated : 05/06/2022++++
+# Distribution modes
++
+When creating a distribution policy, we specify one of the following distribution modes to define the strategy to use when distributing jobs to workers:
+
+## Round robin mode
+Jobs will be distributed in a circular fashion such that each available worker will receive jobs in sequence.
+
+## Longest idle mode
+Jobs will be distributed to the worker that is least utilized first. If there's a tie, we'll pick the worker that has been available for the longest time. Utilization is calculated as a `Load Ratio` by the following algorithm:
+
+Load Ratio = Aggregate of capacity consumed by all jobs assigned to the worker / Total capacity of the worker
+
+### Example
+Assume that each `chat` job has been configured to consume one capacity for a worker. A new chat job is queued into Job Router and the following workers are available to take the job:
+
+```
+Worker A:
+TotalCapacity = 5
+ConsumedScore = 3 (Currently handling 3 chats)
+LoadRatio = 3 / 5 = 0.6
+LastAvailable: 5 mins ago
+
+Worker B:
+TotalCapacity = 4
+ConsumedScore = 3 (Currently handling 3 chats)
+LoadRatio = 3 / 4 = 0.75
+LastAvailable: 3 min ago
+
+Worker C:
+TotalCapacity = 5
+ConsumedScore = 3 (Currently handling 3 chats)
+LoadRatio = 3 / 5 = 0.6
+LastAvailable: 7 min ago
+
+Worker D:
+TotalCapacity = 3
+ConsumedScore = 0 (Currently idle)
+LoadRatio = 0 / 3 = 0
+LastAvailable: 2 min ago
+
+Workers would be matched in order: D, C, A, B
+```
+
+Worker D has the lowest load ratio (0), so Worker D will be offered the job first. Workers A and C are tied with the same load ratio (0.6). However, Worker C has been available for a longer time (7 minutes ago) than Worker A (5 minutes ago), so Worker C will be matched before Worker A. Finally, Worker B will be matched last since Worker B has the highest load ratio (0.75).
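+
+A minimal Python sketch (illustrative only, not the service implementation) that reproduces this ordering: sort by load ratio, and break ties by the longer time since the worker last became available.
+
+```python
+# Each worker: (name, consumed capacity, total capacity, minutes since last available).
+workers = [
+    ("A", 3, 5, 5),
+    ("B", 3, 4, 3),
+    ("C", 3, 5, 7),
+    ("D", 0, 3, 2),
+]
+
+def sort_key(worker):
+    name, consumed, total, idle_minutes = worker
+    load_ratio = consumed / total
+    # Lower load ratio wins; on a tie, the worker idle for longer wins.
+    return (load_ratio, -idle_minutes)
+
+print([name for name, *_ in sorted(workers, key=sort_key)])  # ['D', 'C', 'A', 'B']
+```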
+
+## Best worker mode
+The workers that are best able to handle the job are picked first. The logic used to rank workers can be customized by specifying a scoring rule, either an expression or an Azure function that compares two workers. [See example][worker-scoring]
+
+When a Scoring Rule isn't provided, this distribution mode will use the default scoring method instead, which evaluates workers based on how the job's labels and selectors match with the worker's labels. The algorithms are outlined below.
+
+### Default label matching
+For calculating a score based on the job's labels, we increment the `Match Score` by 1 for every worker label that matches a corresponding label on the job and then divide by the total number of labels on the job. Therefore, the more labels that matched, the higher a worker's `Match Score`. The final `Match Score` will always be a value between 0 and 1.
+
+#### Example
+Job 1:
+```json
+{
+ "labels": {
+    "language": "english",
+    "department": "sales"
+ }
+}
+```
+
+Worker A:
+```json
+{
+ "labels": {
+    "language": "english",
+    "department": "sales"
+ }
+}
+```
+
+Worker B:
+```json
+{
+ "labels": {
+    "language": "english"
+ }
+}
+```
+
+Worker C:
+```json
+{
+ "labels": {
+    "language": "english",
+    "department": "support"
+ }
+}
+```
+
+Calculation:
+```
+Worker A's match score = (1 (for matching english language label) + 1 (for matching department sales label)) / 2 (total number of labels) = 1
+Worker B's match score = 1 (for matching english language label) / 2 (total number of labels) = 0.5
+Worker C's match score = 1 (for matching english language label) / 2 (total number of labels) = 0.5
+```
+
+Worker A would be matched first. Next, Worker B or Worker C would be matched, depending on who was available for a longer time, since the match score is tied.
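+
+A minimal Python sketch of this default label-matching calculation, using the job and workers above (illustrative only, not the service implementation):
+
+```python
+def match_score(job_labels: dict, worker_labels: dict) -> float:
+    # 1 point for every job label the worker matches, divided by the number of job labels.
+    matched = sum(1 for key, value in job_labels.items() if worker_labels.get(key) == value)
+    return matched / len(job_labels)
+
+job = {"language": "english", "department": "sales"}
+print(match_score(job, {"language": "english", "department": "sales"}))    # Worker A: 1.0
+print(match_score(job, {"language": "english"}))                           # Worker B: 0.5
+print(match_score(job, {"language": "english", "department": "support"}))  # Worker C: 0.5
+```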
+
+### Default worker selector matching
+In the case where the job also contains worker selectors, we'll calculate the `Match Score` based on the `LabelOperator` of that worker selector.
+
+#### Equal/notEqual label operators
+If the worker selector has the `LabelOperator` `Equal` or `NotEqual`, we increment the score by 1 for each job label that matches that worker selector, in a similar manner as the `Label Matching` above.
+
+##### Example
+Job 2:
+```json
+{
+ "workerSelectors": [
+ { "key": "department", "labelOperator": "equals", "value": "billing" },
+    { "key": "segment", "labelOperator": "notEquals", "value": "vip" }
+ ]
+}
+```
+
+Worker D:
+```json
+{
+ "labels": {
+    "department": "billing",
+    "segment": "vip"
+ }
+}
+```
+
+Worker E:
+```json
+{
+ "labels": {
+    "department": "billing"
+ }
+}
+```
+
+Worker F:
+```json
+{
+ "labels": {
+    "department": "sales",
+    "segment": "new"
+ }
+}
+```
+
+Calculation:
+```
+Worker D's match score = 1 (for matching department selector) / 2 (total number of worker selectors) = 0.5
+Worker E's match score = (1 (for matching department selector) + 1 (for matching segment not equal to vip)) / 2 (total number of worker selectors) = 1
+Worker F's match score = 1 (for segment not equal to vip) / 2 (total number of worker selectors) = 0.5
+```
+
+Worker E would be matched first. Next, Worker D or Worker F would be matched, depending on who was available for a longer time, since the match score is tied.
+
+#### Other label operators
+For worker selectors using operators that compare by magnitude (`GreaterThan`/`GreaterThanEqual`/`LessThan`/`LessThanEqual`), we'll increment the worker's `Match Score` by an amount calculated using the logistic function (See Fig 1). The calculation is based on how much the worker's label value exceeds the worker selector's value or a lesser amount if it doesn't exceed the worker selector's value. Therefore, the more worker selector values the worker exceeds, and the greater the degree to which it does so, the higher a worker's score will be.
++
+Fig 1. Logistic function
+
+The following function is used for GreaterThan or GreaterThanEqual operators:
+```
+MatchScore(x) = 1 / (1 + e^(-x)) where x = (labelValue - selectorValue) / selectorValue
+```
+
+The following function is used for LessThan or LessThanEqual operators:
+
+```
+MatchScore(x) = 1 / (1 + e^(-x)) where x = (selectorValue - labelValue) / selectorValue
+```
+
+##### Example
+Job 3:
+```json
+{
+ "workerSelectors": [
+    { "key": "language", "labelOperator": "equals", "value": "french" },
+    { "key": "sales", "labelOperator": "greaterThanEqual", "value": 10 },
+    { "key": "cost", "labelOperator": "lessThanEqual", "value": 10 }
+ ]
+}
+```
+
+Worker G:
+```json
+{
+ "labels": {
+    "language": "french",
+    "sales": 10,
+    "cost": 10
+ }
+}
+```
+
+Worker H:
+```json
+{
+ "labels": {
+    "language": "french",
+    "sales": 15,
+    "cost": 10
+ }
+}
+```
+
+Worker I:
+```json
+{
+ "labels": {
+    "language": "french",
+    "sales": 10,
+    "cost": 9
+ }
+}
+```
+
+Calculation:
+```
+Worker G's match score = (1 + 1 / (1 + e^-((10 - 10) / 10)) + 1 / (1 + e^-((10 - 10) / 10))) / 3 = 0.667
+Worker H's match score = (1 + 1 / (1 + e^-((15 - 10) / 10)) + 1 / (1 + e^-((10 - 10) / 10))) / 3 = 0.707
+Worker I's match score = (1 + 1 / (1 + e^-((10 - 10) / 10)) + 1 / (1 + e^-((10 - 9) / 10))) / 3 = 0.675
+```
+
+All three workers match the worker selectors on the job and are eligible to work on it. However, we can see that Worker H exceeds the "sales" worker selector's value by a margin of 5. Meanwhile, Worker I only exceeds the cost worker selector's value by a margin of 1. Worker G doesn't exceed any of the worker selector's values at all. Therefore, Worker H would be matched first, followed by Worker I and finally Worker G would be matched last.
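+
+A minimal Python sketch that reproduces these numbers from the formulas above (illustrative only, not the service implementation):
+
+```python
+import math
+
+def magnitude_score(label_value, selector_value, greater=True):
+    # Logistic function applied to the relative margin by which the label exceeds the selector.
+    x = (label_value - selector_value) / selector_value
+    if not greater:  # lessThan / lessThanEqual flips the direction
+        x = -x
+    return 1 / (1 + math.exp(-x))
+
+def worker_score(language_matches, sales, cost):
+    scores = [
+        1.0 if language_matches else 0.0,          # equals selector on "language"
+        magnitude_score(sales, 10, greater=True),  # sales greaterThanEqual 10
+        magnitude_score(cost, 10, greater=False),  # cost lessThanEqual 10
+    ]
+    return sum(scores) / len(scores)
+
+print(round(worker_score(True, 10, 10), 3))  # Worker G: 0.667
+print(round(worker_score(True, 15, 10), 3))  # Worker H: 0.707
+print(round(worker_score(True, 10, 9), 3))   # Worker I: 0.675
+```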
communication-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/service-limits.md
Title: Service limits for Azure Communication Services description: Learn how to-+ -+ Last updated 11/01/2021
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
The following timeouts apply to the Communication Services Calling SDKs:
| PSTN call establishment timeout | 115 | | Promote 1:1 call to a group call timeout | 115 |
+## Maximum call duration
+The maximum call duration is 30 hours. Participants who reach the maximum call duration of 30 hours will be disconnected from the call.
++ ## JavaScript Calling SDK support by OS and browser The following table represents the set of supported browsers which are currently available. **We support the most recent three versions of the browser** unless otherwise indicated.
communication-services Quick Create Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/identity/quick-create-identity.md
Title: Quickstart - Quickly create Azure Communication Services identities for testing description: Learn how to use the Identities & Access Tokens tool in the Azure portal to use with samples and for troubleshooting.-+ -+ Last updated 07/19/2021
container-apps Connect Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/connect-apps.md
Previously updated : 04/04/2022 Last updated : 11/02/2021 + # Connect applications in Azure Container Apps Preview
Once you know a container app's domain name, then you can call the location with
A sample solution showing how you can call between containers using either the FQDN location or Dapr can be found on [Azure Samples](https://github.com/Azure-Samples/container-apps-connect-multiple-apps).
-For more details about connecting Dapr applications, refer to [Invoke services using HTTP](https://docs.dapr.io/developing-applications/building-blocks/service-invocation/howto-invoke-discover-services/).
- ## Location A container app's location is composed of values associated with its environment, name, and region. Available through the `azurecontainerapps.io` top-level domain, the fully qualified domain name (FQDN) uses:
Developing microservices often requires you to implement patterns common to dist
A microservice that uses Dapr is available through the following URL pattern:
-```text
-http://localhost:3500/v1.0/invoke/<YOUR_APP_NAME>/method
-```
- :::image type="content" source="media/connect-apps/azure-container-apps-location-dapr.png" alt-text="Azure Container Apps container app location with Dapr."::: ## Next steps
container-apps Dapr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-overview.md
+
+ Title: Dapr integration with Azure Container Apps
+description: Learn more about using Dapr on your Azure Container App service to develop applications.
++++ Last updated : 05/05/2022++
+# Dapr integration with Azure Container Apps
+
+The Distributed Application Runtime ([Dapr][dapr-concepts]) is a set of incrementally adoptable APIs that simplify the authoring of distributed, microservice-based applications. For example, Dapr provides capabilities for enabling application intercommunication, whether through messaging via pub/sub or reliable and secure service-to-service calls. Once enabled in Container Apps, Dapr exposes its HTTP and gRPC APIs via a sidecar: a process that runs in tandem with each of your Container Apps.
+
+Dapr APIs, also referred to as building blocks, are built on best-practice industry standards that:
+
+- Seamlessly fit with your preferred language or framework
+- Are incrementally adoptable; you can use one, several, or all of the building blocks depending on your needs
+
+## Dapr building blocks
++
+| Building block | Description |
+| -- | -- |
+| [**Service-to-service invocation**][dapr-serviceinvo] | Discover services and perform reliable, direct service-to-service calls with automatic mTLS authentication and encryption. |
+| [**State management**][dapr-statemgmt] | Provides state management capabilities for transactions and CRUD operations. |
+| [**Pub/sub**][dapr-pubsub] | Allows publisher and subscriber container apps to intercommunicate via an intermediary message broker. |
+| [**Bindings**][dapr-bindings] | Trigger your application with incoming or outgoing events, without SDK or library dependencies. |
+| [**Actors**][dapr-actors] | Dapr actors apply the scalability and reliability that the underlying platform provides. |
+| [**Observability**](./observability.md) | Send tracing information to an Application Insights backend. |
+
+## Dapr settings
+
+The following Pub/sub example demonstrates how Dapr works alongside your container app:
++
+| Label | Dapr settings | Description |
+| -- | - | -- |
+| 1 | Container Apps with Dapr enabled | Dapr is enabled at the container app level by configuring Dapr settings. Dapr settings exist at the app-level, meaning they apply across revisions. |
+| 2 | Dapr sidecar | Fully managed Dapr APIs are exposed to your container app via the Dapr sidecar. These APIs are available through HTTP and gRPC protocols. By default, the sidecar runs on port 3500 in Container Apps. |
+| 3 | Dapr component | Dapr components can be shared by multiple container apps. Using scopes, the Dapr sidecar will determine which components to load for a given container app at runtime. |
+
+### Enable Dapr
+
+You can define the Dapr configuration for a container app through the Azure CLI or by using Infrastructure as Code templates like Bicep or ARM. You enable Dapr on your app with the following settings:
+
+| Field | Description |
+| -- | -- |
+| `--enable-dapr` / `enabled` | Enables Dapr on the container app. |
+| `--dapr-app-port` / `appPort` | Identifies which port your application is listening on. |
+| `--dapr-app-protocol` / `appProtocol` | Tells Dapr which protocol your application is using. Valid options are `http` or `grpc`. Default is `http`. |
+| `--dapr-app-id` / `appId` | The unique ID of the application. Used for service discovery, state encapsulation, and the pub/sub consumer ID. |
+
+Since Dapr settings are considered application-scope changes, new revisions aren't created when you change Dapr settings. However, when changing a Dapr setting, the container app instance and revisions are automatically restarted.
+
+### Configure Dapr components
+
+Once Dapr is enabled on your container app, you're able to plug in and use the [Dapr APIs](#dapr-building-blocks) as needed. You can also create **Dapr components**, which are specific implementations of a given building block. Dapr components are environment-level resources, meaning they can be shared across Dapr-enabled container apps. Components are pluggable modules that:
+
+- Allow you to use the individual Dapr building block APIs.
+- Can be scoped to specific container apps.
+- Can be easily modified to point to any one of the component implementations.
+- Can reference secure configuration values using Container Apps secrets.
+
+Based on your needs, you can "plug in" certain Dapr component types like state stores, pub/sub brokers, and more. In the examples below, you'll find the various schemas available for defining a Dapr component in Azure Container Apps. The Container Apps manifests differ slightly from the Dapr OSS manifests in order to simplify the component creation experience.
+
+> [!NOTE]
+> By default, all Dapr-enabled container apps within the same environment will load the full set of deployed components. By adding scopes to a component, you tell the Dapr sidecars for each respective container app which components to load at runtime. Using scopes is recommended for production workloads.
+
+# [YAML](#tab/yaml)
+
+When defining a Dapr component via YAML, you will pass your component manifest into the Azure CLI. For example, deploy a `pubsub.yaml` component using the following command:
+
+```azurecli
+az containerapp env dapr-component set --name ENVIRONMENT_NAME --resource-group RESOURCE_GROUP_NAME --dapr-component-name pubsub --yaml "./pubsub.yaml"
+```
+
+The `pubsub.yaml` spec will be scoped to the Dapr-enabled container apps with app ids `publisher-app` and `subscriber-app`.
+
+```yaml
+# pubsub.yaml for Azure Service Bus component
+- name: dapr-pubsub
+ type: pubsub.azure.servicebus
+ version: v1
+ metadata:
+ - name: connectionString
+ secretRef: sb-root-connectionstring
+ secrets:
+ - name: sb-root-connectionstring
+ value: "value"
+ # Application scopes
+ scopes:
+ - publisher-app
+ - subscriber-app
+```
+
+# [Bicep](#tab/bicep)
+
+This resource defines a Dapr component called `dapr-pubsub` via Bicep. The Dapr component is defined as a child resource of your Container Apps environment. The `dapr-pubsub` component is scoped to the Dapr-enabled container apps with app ids `publisher-app` and `subscriber-app`:
+
+```bicep
+resource daprComponent 'daprComponents@2022-01-01-preview' = {
+ name: 'dapr-pubsub'
+ properties: {
+ componentType: 'pubsub.azure.servicebus'
+ version: 'v1'
+ secrets: [
+ {
+ name: 'sb-root-connectionstring'
+ value: 'value'
+ }
+ ]
+ metadata: [
+ {
+ name: 'connectionString'
+ secretRef: 'sb-root-connectionstring'
+ }
+ ]
+ // Application scopes
+ scopes: [
+ 'publisher-app'
+ 'subscriber-app'
+ ]
+ }
+}
+```
+
+# [ARM](#tab/arm)
+
+This resource defines a Dapr component called `dapr-pubsub` via ARM. The Dapr component is defined as a child resource of your Container Apps environment. The `dapr-pubsub` component will be scoped to the Dapr-enabled container apps with app ids `publisher-app` and `subscriber-app`:
+
+```json
+{
+ "resources": [
+ {
+ "type": "daprComponents",
+ "name": "dapr-pubsub",
+ "properties": {
+ "componentType": "pubsub.azure.servicebus",
+ "version": "v1",
+ "secrets": [
+ {
+ "name": "sb-root-connectionstring",
+ "value": "value"
+ }
+ ],
+ "metadata": [
+ {
+ "name": "connectionString",
+ "secretRef": "sb-root-connectionstring"
+ }
+ ],
+ // Application scopes
+ "scopes": ["publisher-app", "subscriber-app"]
+
+ }
+ }
+ ]
+}
+```
+++
+For comparison, a Dapr OSS `pubsub.yaml` file would include:
+
+```yml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+ name: dapr-pubsub
+spec:
+ type: pubsub.azure.servicebus
+ version: v1
+ metadata:
+ - name: connectionString
+ secretKeyRef:
+ name: sb-root-connectionstring
+ key: "value"
+# Application scopes
+scopes:
+- publisher-app
+- subscriber-app
+```
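+
+Once a pub/sub component such as `dapr-pubsub` is deployed and scoped to an app, the publisher container app talks to its Dapr sidecar over HTTP. The following Python sketch is illustrative only: it assumes the default sidecar port, the Dapr HTTP publish API, and a hypothetical `orders` topic.
+
+```python
+import json
+import urllib.request
+
+DAPR_PORT = 3500  # default Dapr sidecar port in Container Apps
+url = f"http://localhost:{DAPR_PORT}/v1.0/publish/dapr-pubsub/orders"
+
+event = {"orderId": 42, "status": "created"}
+request = urllib.request.Request(
+    url,
+    data=json.dumps(event).encode("utf-8"),
+    headers={"Content-Type": "application/json"},
+    method="POST",
+)
+with urllib.request.urlopen(request) as response:
+    print(response.status)  # 204 means the sidecar accepted the event
+```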
+
+## Current supported Dapr version
+
+Azure Container Apps supports Dapr version 1.4.2.
+
+Version upgrades are handled transparently by Azure Container Apps. You can find the current version via the Azure portal and the CLI. See [known limitations](#limitations) around versioning.
+
+## Limitations
+
+### Unsupported Dapr capabilities
+
+- **Dapr Secrets Management API**: Use [Container Apps secret mechanism][aca-secrets] as an alternative.
+- **Custom configuration for Dapr Observability**: Instrument your environment with Application Insights to visualize distributed tracing.
+- **Dapr Configuration spec**: Any capabilities that require use of the Dapr configuration spec, which includes preview features.
+- **Advanced Dapr sidecar configurations**: Container Apps allows you to specify sidecar settings including `app-protocol`, `app-port`, and `app-id`. For a list of unsupported configuration options, see [the Dapr documentation](https://docs.dapr.io/reference/arguments-annotations-overview/).
+- **Dapr APIs in Preview state**
+
+### Known limitations
+
+- **Declarative pub/sub subscriptions**
+- **Actor reminders**: Require a minReplicas of 1+ to ensure reminders will always be active and fire correctly.
+
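+As a minimal sketch (the resource names, container image, and API version are assumptions, not values from this article), a Dapr-enabled container app that keeps at least one replica running so reminders keep firing might look like this in Bicep:
+
+```bicep
+// Sketch only: names, image, and API version are illustrative assumptions.
+param environmentId string // resource ID of an existing Container Apps environment
+param location string = resourceGroup().location
+
+resource reminderApp 'Microsoft.App/containerApps@2022-03-01' = {
+  name: 'reminder-app'
+  location: location
+  properties: {
+    managedEnvironmentId: environmentId
+    configuration: {
+      dapr: {
+        enabled: true
+        appId: 'reminder-app' // Dapr app id used by the actor runtime
+      }
+    }
+    template: {
+      containers: [
+        {
+          name: 'reminder-app'
+          image: 'myregistry.azurecr.io/reminder-app:latest'
+        }
+      ]
+      scale: {
+        // Keep at least one replica running so actor reminders stay active.
+        minReplicas: 1
+        maxReplicas: 3
+      }
+    }
+  }
+}
+```
+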
+## Next steps
+
+Now that you've learned about Dapr and some of the challenges it solves, try [Deploying a Dapr application to Azure Container Apps using the Azure CLI][dapr-quickstart] or [Azure Resource Manager][dapr-arm-quickstart].
+
+<!-- Links Internal -->
+[dapr-quickstart]: ./microservices-dapr.md
+[dapr-arm-quickstart]: ./microservices-dapr-azure-resource-manager.md
+[aca-secrets]: ./manage-secrets.md
+
+<!-- Links External -->
+[dapr-concepts]: https://docs.dapr.io/concepts/overview/
+[dapr-pubsub]: https://docs.dapr.io/developing-applications/building-blocks/pubsub/pubsub-overview
+[dapr-statemgmt]: https://docs.dapr.io/developing-applications/building-blocks/state-management/state-management-overview/
+[dapr-serviceinvo]: https://docs.dapr.io/developing-applications/building-blocks/service-invocation/service-invocation-overview/
+[dapr-bindings]: https://docs.dapr.io/developing-applications/building-blocks/bindings/bindings-overview/
+[dapr-actors]: https://docs.dapr.io/developing-applications/building-blocks/actors/actors-overview/
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
https://docs.microsoft.com/azure/azure-functions/functions-networking-options
https://techcommunity.microsoft.com/t5/apps-on-azure-blog/azure-container-apps-virtual-network-integration/ba-p/3096932 -->
-### HTTP edge proxy behavior
+## HTTP edge proxy behavior
Azure Container Apps uses [Envoy proxy](https://www.envoyproxy.io/) as an edge HTTP proxy. TLS is terminated at the edge, and requests are routed to the correct application based on their traffic splitting rules and routes.
cosmos-db How To Configure Cosmos Db Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-configure-cosmos-db-trigger.md
description: Learn how to configure logging and connection policy used by Azure
Previously updated : 10/04/2021 Last updated : 05/09/2022
This article describes advanced configuration options you can set when using the
The Azure Functions trigger for Cosmos DB uses the [Change Feed Processor Library](change-feed-processor.md) internally, and the library generates a set of health logs that can be used to monitor internal operations for [troubleshooting purposes](./troubleshoot-changefeed-functions.md).
-The health logs describe how the Azure Functions trigger for Cosmos DB behaves when attempting operations during load-balancing scenarios or initialization.
+The health logs describe how the Azure Functions trigger for Cosmos DB behaves when attempting operations during load-balancing, initialization, and processing scenarios.
### Enabling logging
To enable logging when using Azure Functions trigger for Cosmos DB, locate the `
"logging": { "fileLoggingMode": "always", "logLevel": {
- "Host.Triggers.CosmosDB": "Trace"
+ "Host.Triggers.CosmosDB": "Warning"
} } } ```
-After the Azure Function is deployed with the updated configuration, you will see the Azure Functions trigger for Cosmos DB logs as part of your traces. You can view the logs in your configured logging provider under the *Category* `Host.Triggers.CosmosDB`.
+After the Azure Function is deployed with the updated configuration, you'll see the Azure Functions trigger for Cosmos DB logs as part of your traces. You can view the logs in your configured logging provider under the *Category* `Host.Triggers.CosmosDB`.
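+
+For reference, a minimal `host.json` with this logging configuration might look like the following sketch. Only the `logging` section shown above is included; keep your other `host.json` settings as they are, and raise the level to `Debug` if you want the more verbose events described below:
+
+```json
+{
+  "version": "2.0",
+  "logging": {
+    "fileLoggingMode": "always",
+    "logLevel": {
+      "Host.Triggers.CosmosDB": "Warning"
+    }
+  }
+}
+```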
+
+### Which types of logs are emitted?
+
+Once logging is enabled, three levels of log events are emitted:
+
+* Error:
+ * When there's an unknown or critical error in the Change Feed processing that affects correct trigger functionality.
+
+* Warning:
+ * When your Function code throws an unhandled exception - there's a gap in your Function code and the Function isn't [resilient to errors](../../azure-functions/performance-reliability.md#write-defensive-functions), or there's a serialization error (for C# Functions, the raw JSON can't be deserialized to the selected C# type).
+ * When there are transient connectivity issues preventing the trigger from interacting with the Cosmos DB account. The trigger retries these [transient connectivity errors](troubleshoot-dot-net-sdk-request-timeout.md), but if they persist for a long period of time, there could be a network problem. You can enable Debug level traces to obtain the Diagnostics from the underlying Cosmos DB SDK.
+
+* Debug:
+ * When a lease is acquired by an instance - The current instance will start processing the Change Feed for the lease.
+ * When a lease is released by an instance - The current instance has stopped processing the Change Feed for the lease.
+ * When new changes are delivered from the trigger to your Function code - Helps debug situations when your Function code might be having errors and you aren't sure if you're receiving changes or not.
+ * For traces that are Warning and Error, adds the Diagnostics information from the underlying Cosmos DB SDK for troubleshooting purposes.
+
+You can also [refer to the source code](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/dev/src/WebJobs.Extensions.CosmosDB/Trigger/CosmosDBTriggerHealthMonitor.cs) to see the full details.
### Query the logs
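
Assuming the logs flow to Application Insights, a query along the following lines surfaces the trigger's health logs by category. The `traces` table and its columns are the standard Application Insights names; treat them as assumptions if you use a different logging provider:

```kusto
traces
| where tostring(customDimensions.Category) == "Host.Triggers.CosmosDB"
| where timestamp > ago(1h)
| project timestamp, message, severityLevel
| order by timestamp desc
```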
cosmos-db Find Request Unit Charge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/find-request-unit-charge.md
This article presents the different ways you can find the [request unit](../requ
## Use the .NET SDK
-Currently, the only SDK that returns the RU charge for table operations is the legacy [Microsoft.Azure.Cosmos.Table .NET Standard SDK](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table). The `TableResult` object exposes a `RequestCharge` property that is populated by the SDK when you use it against the Azure Cosmos DB Table API:
+Currently, the only SDK that returns the RU charge for table operations is the [.NET Standard SDK](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table). The `TableResult` object exposes a `RequestCharge` property that is populated by the SDK when you use it against the Azure Cosmos DB Table API:
```csharp
CloudTable tableReference = client.GetTableReference("table");
```
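
As a rough sketch (the table operation and entity keys are illustrative assumptions), reading the charge after executing an operation might look like this:

```csharp
// Execute any table operation; the SDK populates RequestCharge when the
// request is served by the Azure Cosmos DB Table API.
TableResult result = tableReference.Execute(
    TableOperation.Retrieve("examplePartitionKey", "exampleRowKey"));

if (result.RequestCharge.HasValue)
{
    Console.WriteLine($"Request charge: {result.RequestCharge.Value} RUs");
}
```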
To learn about optimizing your RU consumption, see these articles:
* [Request units and throughput in Azure Cosmos DB](../request-units.md) * [Optimize provisioned throughput cost in Azure Cosmos DB](../optimize-cost-throughput.md)
-* [Optimize query cost in Azure Cosmos DB](../optimize-cost-reads-writes.md)
+* [Optimize query cost in Azure Cosmos DB](../optimize-cost-reads-writes.md)
cost-management-billing View Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/view-reservations.md
By default, the following users can view and manage reservations:
- A Reservation administrator for reservations in their Azure Active Directory (Azure AD) tenant (directory) - A Reservation reader has read-only access to reservations in their Azure Active Directory tenant (directory)
-Currently, the reservation administrator and reservation reader roles are are only available to assign using PowerShell. They can't be viewed or assigned in the Azure portal. For more information, see [Grant access with PowerShell](#grant-access-with-powershell).
+Currently, the reservation administrator and reservation reader roles are only available to assign using PowerShell. They can't be viewed or assigned in the Azure portal. For more information, see [Grant access with PowerShell](#grant-access-with-powershell).
The reservation lifecycle is independent of an Azure subscription, so the reservation isn't a resource under the Azure subscription. Instead, it's a tenant-level resource with its own Azure RBAC permission separate from subscriptions. Reservations don't inherit permissions from subscriptions after the purchase.
data-factory Concepts Data Flow Udf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-udf.md
Last updated 04/20/2022
-# User defined functions in mapping data flow
+# User defined functions (Preview) in mapping data flow
++ A user defined function is a customized expression you can define to reuse logic across multiple mapping data flows. User defined functions live in a collection called a data flow library so that you can easily group common sets of customized functions. Whenever you find yourself building the same logic in an expression across multiple mapping data flows, that's a good opportunity to turn it into a user defined function.
+> [!IMPORTANT]
+> User defined functions and mapping data flow libraries are currently in public preview.
+ ## Getting started To get started with user defined functions, you must first create a data flow library. Navigate to the management page and then find data flow libraries under the author section.
data-factory Connector Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-rest.md
The following sections provide details about properties you can use to define Da
## Linked service properties
+> [!Important]
+> Due to Azure service security and compliance requirements, system-assigned managed identity authentication is no longer available in the REST connector for both Copy and Mapping data flow. We recommend that you migrate existing linked services that use system-assigned managed identity authentication to user-assigned managed identity authentication or other authentication types. Make sure the migration is completed by **September 15, 2022**. For detailed steps about how to create and manage user-assigned managed identities, see [this article](data-factory-service-identity.md#user-assigned-managed-identity).
+ The following properties are supported for the REST linked service: | Property | Description | Required |
data-factory Solution Template Extract Data From Pdf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-extract-data-from-pdf.md
This template defines 4 parameters:
2. Create a **New** connection to your destination storage store or choose an existing connection. :::image type="content" source="media/solution-template-extract-data-from-pdf/extract-data-from-pdf-2.png" alt-text="Screenshot of how to create a new connection or select existing connection from a drop down menu to Form Recognizer in template set up.":::-
+
+ In your connection to Form Recognizer, make sure to add a **Linked service Parameter**. You will need to use this parameter as your dynamic **Base URL**.
+
+ :::image type="content" source="media/solution-template-extract-data-from-pdf/extract-data-from-pdf-9.png" alt-text="Screenshot of where to add your Form Recognizer linked service parameter.":::
+
+ :::image type="content" source="media/solution-template-extract-data-from-pdf/extract-data-from-pdf-8.png" alt-text="Screenshot of the linked service base URL that references the linked service parameter.":::
+
3. Select **Use this template**. :::image type="content" source="media/solution-template-extract-data-from-pdf/extract-data-from-pdf-3.png" alt-text="Screenshot of how to complete the template by clicking use this template at the bottom of the screen.":::
data-factory Tutorial Managed Virtual Network Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-managed-virtual-network-migrate.md
+
+ Title: Move existing Azure integration runtime to an Azure integration runtime in a managed virtual network
+description: This tutorial provides steps to move existing Azure integration runtime to an Azure integration runtime in a managed virtual network.
+++++ Last updated : 05/08/2022++
+# Tutorial: How to move existing Azure integration runtime to an Azure integration runtime in a managed virtual network
++
+Managed virtual network provides a secure and manageable data integration solution. With managed virtual network, you can create the Azure integration runtime as part of a managed virtual network and use private endpoints to securely connect to supported data stores. Data traffic goes through Azure Private Link, which provides secured connectivity to the data source. In addition, it prevents data exfiltration to the public internet.
+This tutorial provides steps to move existing Azure integration runtime to an Azure integration runtime in a managed virtual network.
+
+## Steps to move existing Azure integration runtime to an Azure integration runtime in a managed virtual network
+1. Enable managed virtual network on your Azure integration runtime. You can enable it either on a new Azure integration runtime or an existing one.
++
+> [!NOTE]
+> You can't enable managed virtual network on the default auto-resolve integration runtime.
+
+2. Modify all the integration runtime references in the linked service to the newly created Azure integration runtime in the managed virtual network.
+++
+## Next steps
+
+Advance to the following tutorial to learn about managed virtual network:
+
+> [!div class="nextstepaction"]
+> [Managed virtual network](managed-virtual-network-private-endpoint.md)
ddos-protection Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/alerts.md
Title: View and configure DDoS protection alerts for Azure DDoS Protection Stand
description: Learn how to view and configure DDoS protection alerts for Azure DDoS Protection Standard. documentationcenter: na-+ na Last updated 3/11/2022-+ # View and configure DDoS protection alerts
ddos-protection Ddos Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-disaster-recovery-guidance.md
Title: Azure DDoS Protection Standard business continuity | Microsoft Docs
description: Learn what to do in the event of an Azure service disruption impacting Azure DDoS Protection Standard. documentationcenter: na-+ na Last updated 04/16/2021-+ # Azure DDoS Protection Standard ΓÇô business continuity
ddos-protection Ddos Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-overview.md
Title: Azure DDoS Protection Standard Overview
description: Learn how the Azure DDoS Protection Standard, when combined with application design best practices, provides defense against DDoS attacks. documentationcenter: na-+ na Last updated 09/9/2020-+ # Azure DDoS Protection Standard overview
ddos-protection Ddos Protection Partner Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-partner-onboarding.md
Title: Partnering with Azure DDoS Protection Standard
description: "Understand partnering opportunities enabled by Azure DDoS Protection Standard." documentationcenter: na-+ Last updated 08/28/2020-+ # Partnering with Azure DDoS Protection Standard This article describes partnering opportunities enabled by the Azure DDoS Protection Standard. This article is designed to help product managers and business development roles understand the investment paths and provide insight into the partnering value propositions.
ddos-protection Ddos Protection Reference Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-reference-architectures.md
Title: Azure DDoS Protection reference architectures description: Learn Azure DDoS protection reference architectures. -+ Last updated 04/29/2022-+
ddos-protection Ddos Protection Standard Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-standard-features.md
Title: Azure DDoS Protection features
description: Learn Azure DDoS Protection features documentationcenter: na-+ na Last updated 09/08/2020-+ # Azure DDoS Protection Standard features
ddos-protection Ddos Rapid Response https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-rapid-response.md
Title: Azure DDoS Rapid Response
description: Learn how to engage DDoS experts during an active attack for specialized support. documentationcenter: na-+ na Last updated 08/28/2020-+ # Azure DDoS Rapid Response
ddos-protection Ddos Response Strategy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-response-strategy.md
Title: Components of a DDoS response strategy
description: Learn what how to use Azure DDoS Protection Standard to respond to DDoS attacks. documentationcenter: na-+ na Last updated 09/08/2020-+ # Components of a DDoS response strategy
ddos-protection Diagnostic Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/diagnostic-logging.md
Title: Azure DDoS Protection Standard reports and flow logs
description: Learn how to configure reports and flow logs. documentationcenter: na-+ na Last updated 12/28/2020-+
ddos-protection Fundamental Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/fundamental-best-practices.md
Title: Azure DDoS Protection fundamental best practices
description: Learn the best security practices using DDoS protection. documentationcenter: na-+ na Last updated 09/08/2020-+ # Fundamental best practices
ddos-protection Inline Protection Glb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/inline-protection-glb.md
Title: Inline L7 DDoS protection with Gateway Load Balancer and partner NVAs
description: Learn how to create and enable inline L7 DDoS Protection with Gateway Load Balancer and Partner NVAs documentationcenter: na-+ na -+ Last updated 10/21/2021
ddos-protection Manage Ddos Protection Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-bicep.md
Title: Create and enable an Azure DDoS Protection plan using Bicep.
description: Learn how to create and enable an Azure DDoS Protection plan using Bicep. documentationcenter: na-+ na
ddos-protection Manage Ddos Protection Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-cli.md
Title: Create and configure an Azure DDoS Protection plan using Azure CLI
description: Learn how to create a DDoS Protection Plan using Azure CLI documentationcenter: na-+ na Last updated 04/18/2022-+ # Quickstart: Create and configure Azure DDoS Protection Standard using Azure CLI
ddos-protection Manage Ddos Protection Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-powershell.md
Title: Create and configure an Azure DDoS Protection plan using Azure PowerShell
description: Learn how to create a DDoS Protection Plan using Azure PowerShell documentationcenter: na-+ na Last updated 04/18/2022-+
ddos-protection Manage Ddos Protection Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-template.md
Title: Create and enable an Azure DDoS Protection plan using an Azure Resource M
description: Learn how to create and enable an Azure DDoS Protection plan using an Azure Resource Manager template (ARM template). documentationcenter: na-+ na
ddos-protection Manage Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection.md
Title: Manage Azure DDoS Protection Standard using the Azure portal
description: Learn how to use Azure DDoS Protection Standard to mitigate an attack. documentationcenter: na-+ editor: '' tags: azure-resource-manager
na
Last updated 04/13/2022-+
ddos-protection Manage Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-permissions.md
Title: Azure DDoS Protection Plan permissions
description: Learn how to manage permission in a protection plan. documentationcenter: na-+ na Last updated 09/08/2020-+
ddos-protection Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/policy-reference.md
Title: Built-in policy definitions for Azure DDoS Protection Standard
description: Lists Azure Policy built-in policy definitions for Azure DDoS Protection Standard. These built-in policy definitions provide common approaches to managing your Azure resources. documentationcenter: na-+ na Last updated 03/08/2022-+
ddos-protection Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/telemetry.md
Title: View and configure DDoS protection telemetry for Azure DDoS Protection St
description: Learn how to view and configure DDoS protection telemetry for Azure DDoS Protection Standard. documentationcenter: na-+ na Last updated 12/28/2020-+ # View and configure DDoS protection telemetry
ddos-protection Test Through Simulations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/test-through-simulations.md
Title: Azure DDoS Protection simulation testing
description: Learn about how to test through simulations documentationcenter: na-+ na Last updated 04/21/2022-+
ddos-protection Types Of Attacks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/types-of-attacks.md
Title: Types of attacks Azure DDoS Protection Standard mitigates
description: Learn what types of attacks Azure DDoS Protection Standard protects against. documentationcenter: na-+ na Last updated 09/08/2020-+ # Types of DDoS attacks overview
defender-for-cloud Enhanced Security Features Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enhanced-security-features-overview.md
Yes. If you've configured your Log Analytics agent to send data to two or more d
Yes. If you've configured your Log Analytics agent to send data to two or more different Log Analytics workspaces (multi-homing), you'll get 500 MB free data ingestion. It's calculated per node, per reported workspace, per day, and available for every workspace that has a 'Security' or 'AntiMalware' solution installed. You'll be charged for any data ingested over the 500 MB limit. ### Is the 500 MB free data ingestion calculated for an entire workspace or strictly per machine?
-You'll get 500 MB free data ingestion per day, for every Windows machine connected to the workspace. Specifically for security data types directly collected by Defender for Cloud.
+You'll get 500 MB free data ingestion per day, for every machine connected to the workspace. Specifically for security data types directly collected by Defender for Cloud.
This data is a daily rate averaged across all nodes. So even if some machines send 100-MB and others send 800-MB, if the total doesn't exceed the **[number of machines] x 500 MB** free limit, you won't be charged extra. ### What data types are included in the 500 MB data daily allowance?
-Defender for Cloud's billing is closely tied to the billing for Log Analytics. [Microsoft Defender for Servers](defender-for-servers-introduction.md) provides a 500 MB/node/day allocation for Windows machines against the following subset of [security data types](/azure/azure-monitor/reference/tables/tables-category#security):
+Defender for Cloud's billing is closely tied to the billing for Log Analytics. [Microsoft Defender for Servers](defender-for-servers-introduction.md) provides a 500 MB/node/day allocation for machines against the following subset of [security data types](/azure/azure-monitor/reference/tables/tables-category#security):
- SecurityAlert - SecurityBaseline - SecurityBaselineSummary
defender-for-cloud Overview Page https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/overview-page.md
The **top menu bar** offers:
In the center of the page are the **feature tiles**, each linking to a high profile feature or dedicated dashboard: -- **Secure score** - Defender for Cloud continually assesses your resources, subscriptions, and organization for security issues. It then aggregates all the findings into a single score so that you can tell, at a glance, your current security situation: the higher the score, the lower the identified risk level. [Learn more](secure-score-security-controls.md).
+- **Security posture** - Defender for Cloud continually assesses your resources, subscriptions, and organization for security issues. It then aggregates all the findings into a single score so that you can tell, at a glance, your current security situation: the higher the score, the lower the identified risk level. [Learn more](secure-score-security-controls.md).
- **Workload protections** - This is the cloud workload protection platform (CWPP) integrated within Defender for Cloud for advanced, intelligent protection of your workloads running on Azure, on-premises machines, or other cloud providers. For each resource type, there's a corresponding Microsoft Defender plan. The tile shows the coverage of your connected resources (for the currently selected subscriptions) and the recent alerts, color-coded by severity. Learn more about [the enhanced security features](enhanced-security-features-overview.md). - **Regulatory compliance** - Defender for Cloud provides insights into your compliance posture based on continuous assessments of your Azure environment. Defender for Cloud analyzes risk factors in your environment according to security best practices. These assessments are mapped to compliance controls from a supported set of standards. [Learn more](regulatory-compliance-dashboard.md). - **Firewall Manager** - This tile shows the status of your hubs and networks from [Azure Firewall Manager](../firewall-manager/overview.md).
defender-for-iot Ot Virtual Appliances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-virtual-appliances.md
For all deployments, bandwidth results for virtual machines may vary, depending
|**Enterprise** | **Max bandwidth**: 800 Mb/sec <br>**Max monitored assets**: 10,000 | **vCPU**: 8 <br>**Memory**: 32 GB <br>**Storage**: 1.8 TB (300 IOPS) |
|**SMB** | **Max bandwidth**: 160 Mb/sec <br>**Max monitored assets**: 1,000 | **vCPU**: 4 <br>**Memory**: 8 GB <br>**Storage**: 500 GB (150 IOPS) |
|**Office** | **Max bandwidth**: 100 Mb/sec <br>**Max monitored assets**: 800 | **vCPU**: 4 <br>**Memory**: 8 GB <br>**Storage**: 100 GB (150 IOPS) |
-|**Rugged** | **Max bandwidth**: 10 Mb/sec <br>**Max monitored assets**: 100 | **vCPU**: 4 <br>**Memory**: 8 GB <br>**Storage**: 64 GB (150 IOPS) |
+|**Rugged** | **Max bandwidth**: 10 Mb/sec <br>**Max monitored assets**: 100 | **vCPU**: 4 <br>**Memory**: 8 GB <br>**Storage**: 60 GB (150 IOPS) |
## On-premises management console VM requirements
dms Resource Custom Roles Sql Db Managed Instance Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-db-managed-instance-ads.md
The AssignableScopes section of the role definition json string allows you to co
} } ```
-You can use either the Azure portal, AZ PowerShell, Azure CLI or Azure Rest API to create the roles.
+You can use the Azure portal, Azure PowerShell, the Azure CLI, or the Azure REST API to create the roles.
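
For example, assuming the role definition JSON is saved to a local file named `custom-role.json` (the file name is an assumption), the Azure CLI can create the custom role:

```azurecli
az role definition create --role-definition @custom-role.json
```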
For more information, see the articles [Create custom roles using the Azure portal](../role-based-access-control/custom-roles-portal.md) and [Azure custom roles](../role-based-access-control/custom-roles.md).
dms Resource Custom Roles Sql Db Virtual Machine Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-db-virtual-machine-ads.md
The AssignableScopes section of the role definition json string allows you to co
} } ```
-You can use either the Azure portal, AZ PowerShell, Azure CLI or Azure Rest API to create the roles.
+You can use the Azure portal, Azure PowerShell, the Azure CLI, or the Azure REST API to create the roles.
For more information, see the articles [Create custom roles using the Azure portal](../role-based-access-control/custom-roles-portal.md) and [Azure custom roles](../role-based-access-control/custom-roles.md).
expressroute Designing For Disaster Recovery With Expressroute Privatepeering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/designing-for-disaster-recovery-with-expressroute-privatepeering.md
Previously updated : 03/22/2021 Last updated : 05/09/2022 # Designing for disaster recovery with ExpressRoute private peering
-ExpressRoute is designed for high availability to provide carrier grade private network connectivity to Microsoft resources. In other words, there's no single point of failure in the ExpressRoute path within Microsoft network. For design considerations to maximize the availability of an ExpressRoute circuit, see [Designing for high availability with ExpressRoute][HA].
+ExpressRoute is designed for high availability to provide carrier-grade private network connectivity to Microsoft resources. In other words, there's no single point of failure in the ExpressRoute path within the Microsoft network. For design considerations to maximize the availability of an ExpressRoute circuit, see [Designing for high availability with ExpressRoute][HA] and the [Well-Architected Framework](/azure/architecture/framework/services/networking/expressroute/reliability).
However, taking Murphy's popular adage--*if anything can go wrong, it will*--into consideration, in this article let us focus on solutions that go beyond failures that can be addressed using a single ExpressRoute circuit. We'll be looking into network architecture considerations for building robust backend network connectivity for disaster recovery using geo-redundant ExpressRoute circuits.
frontdoor How To Configure Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/how-to-configure-endpoints.md
To create an Azure Front Door profile, see [create a Azure Front Door](create-fr
:::image type="content" source="./media/how-to-configure-endpoints/associated-security-policy.png" alt-text="Screenshot of security policy associated with an endpoint." lightbox="./media/how-to-configure-endpoints/associated-security-policy-expanded.png":::
+## Configure origin timeout
+
+Origin timeout is the amount of time that Azure Front Door waits before it considers the connection to the origin to have timed out. You can set this value on the overview page of the Azure Front Door profile. The value applies to all endpoints in the profile.
++ ## Clean up resources In order to remove an endpoint, you first have to remove any security policies associated with the endpoint. Then select **Delete endpoint** to remove the endpoint from the Azure Front Door profile.
frontdoor How To Configure Endpoint Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-configure-endpoint-manager.md
- Title: Configure Azure Front Door Standard/Premium endpoint with Endpoint Manager
-description: This article shows how to configure an endpoint with Endpoint Manager.
---- Previously updated : 02/18/2021---
-# Configure an Azure Front Door Standard/Premium (Preview) endpoint with Endpoint Manager
-
-> [!NOTE]
-> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View **[Azure Front Door Docs](../front-door-overview.md)**.
-
-This article shows you how to create an endpoint for an existing Azure Front Door Standard/Premium profile with Endpoint Manager.
-
-> [!IMPORTANT]
-> Azure Front Door Standard/Premium (Preview) is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-## Prerequisites
-
-Before you can create an Azure Front Door Standard/Premium endpoint with Endpoint Manager, you must have created at least one Azure Front Door profile created. The profile has to have at least one or more Azure Front Door Standard/Premium endpoints. To organize your Azure Front Door Standard/Premium endpoints by internet domain, web application, or other criteria, you can use multiple profiles.
-
-To create an Azure Front Door profile, see [Create a new Azure Front Door Standard/Premium profile](create-front-door-portal.md).
-
-## Create a new Azure Front Door Standard/Premium Endpoint
-
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Front Door Standard/Premium profile.
-
-1. Select **Endpoint Manager**. Then select **Add an Endpoint** to create a new Endpoint.
-
- :::image type="content" source="../media/how-to-configure-endpoints/select-create-endpoint.png" alt-text="Screenshot of add an endpoint through Endpoint Manager.":::
-
-1. On the **Add an endpoint** page, enter, and select the following settings.
-
- :::image type="content" source="../media/how-to-configure-endpoints/create-endpoint-page.png" alt-text="Screenshot of add an endpoint page.":::
-
- | Settings | Value |
- | -- | -- |
- | Name | Enter a unique name for the new Azure Front Door Standard/Premium endpoint. This name is used to access your cached resources at the domain `<endpointname>.az01.azurefd.net` |
- | Origin Response timeout (secs) | Enter a timeout value in seconds that Azure Front Door will wait before considering the connection with origin has timeout. |
- | Status | Select the checkbox to enable this endpoint. |
-
-## Add Domains, Origin Group, Routes, and Security
-
-1. Select **Edit Endpoint** at the endpoint to configure the route.
-
-1. On the **Edit Endpoint** page, select **+ Add** under Domains.
-
- :::image type="content" source="../media/how-to-configure-endpoints/select-add-domain.png" alt-text="Screenshot of select domain on Edit Endpoint page.":::
-
-### Add Domain
-
-1. On the **Add Domain** page, choose to associate a domain *from your Azure Front Door profile* or *add a new domain*. For information about how to create a brand new domain, see [Create a new Azure Front Door Standard/Premium custom domain](how-to-add-custom-domain.md).
-
- :::image type="content" source="../media/how-to-configure-endpoints/add-domain-page.png" alt-text="Screenshot of Add a domain page.":::
-
-1. Select **Add** to add the domain to current endpoint. The selected domain should appear within the Domain panel.
-
- :::image type="content" source="../media/how-to-configure-endpoints/domain-in-domainview.png" alt-text="Screenshot of domains in domain view.":::
-
-### Add Origin Group
-
-1. Select **Add** at the Origin groups view. The **Add an origin group** page appears
-
- :::image type="content" source="../media/how-to-configure-endpoints/add-origin-group-view.png" alt-text="Screenshot of add an origin group page":::
-
-1. For **Name**, enter a unique name for the new origin group
-
-1. Select **Add an Origin** to add a new origin to current group.
-
-#### Health Probes
-Front Door sends periodic HTTP/HTTPS probe requests to each of your origin. Probe requests determine the proximity and health of each origin to load balance your end-user requests. Health probe settings for an origin group define how we poll the health status of app origin. The following settings are available for load-balancing configuration:
-
-> [!WARNING]
-> Since Front Door has many edge environments globally, health probe volume for your origin can be quite high - ranging from 25 requests every minute to as high as 1200 requests per minute, depending on the health probe frequency configured. With the default probe frequency of 30 seconds, the probe volume on your origin should be about 200 requests per minute.
-
-* **Status**: Specify whether to turn on the health probing. If you have a single origin in your origin group, you can choose to disable the health probes reducing the load on your application backend. Even if you have multiple origins in the group but only one of them is in enabled state, you can disable health probes.
-
-* **Path**: The URL used for probe requests for all the origin in this origin group. For example, if one of your origins is contoso-westus.azurewebsites.net and the path is set to /probe/test.aspx, then Front Door environments, assuming the protocol is set to HTTP, will send health probe requests to `http://contoso-westus.azurewebsites.net/probe/test.aspx`.
-
-* **Protocol**: Defines whether to send the health probe requests from Front Door to your origin with HTTP or HTTPS protocol.
-
-* **Probe Method**: The HTTP method to be used for sending health probes. Options include GET or HEAD (default).
-
- > [!NOTE]
- > For lower load and cost on your origin, Front Door recommends using HEAD requests for health probes.
-
-* **Interval(in seconds)**: Defines the frequency of health probes to your origin, or the intervals in which each of the Front Door environments sends a probe.
-
- >[!NOTE]
- >For faster failovers, set the interval to a lower value. The lower the value, the higher the health probe volume your origin receive. For example, if the interval is set to 30 seconds with say, 100 Front Door POPs globally, each backend will receive about 200 probe requests per minute.
-
-#### Load balancing
-Load-balancing settings for the origin group define how we evaluate health probes. These settings determine if the backend is healthy or unhealthy. They also check how to load-balance traffic between different origins in the origin group. The following settings are available for load-balancing configuration:
--- **Sample size**. Identifies how many samples of health probes we need to consider for origin health evaluation.--- **Successful sample size**. Defines the sample size as previously mentioned, the number of successful samples needed to call the origin healthy. For example, assume a Front Door health probe interval is 30 seconds, sample size is 5, and successful sample size is 3. Each time we evaluate the health probes for your origin, we look at the last five samples over 150 seconds (5 x 30). At least three successful probes are required to declare the backend as healthy.--- **Latency sensitivity (extra latency)**. Defines whether you want Front Door to send the request to origin within the latency measurement sensitivity range or forward the request to the closest backend.-
-Select **Add** to add the origin group to current endpoint. The origin group should appear within the Origin group panel
--
-### Add Route
-
-Select **Add** at the Routes view, the **Add a route** page appears. For information how to associate the domain and origin group, see [Create a new Azure Front Door route](how-to-configure-route.md)
-
-### Add Security
-
-1. Select **Add** at the Security view, The **Add a WAF policy** page appears
-
- :::image type="content" source="../media/how-to-configure-endpoints/add-waf-policy-page.png" alt-text="Screenshot of add a WAF policy page.":::
-
-1. **WAF Policy**: select a WAF policy you like apply for the selected domain within this endpoint.
-
- Select **Create New** to create a brand new WAF policy.
-
- :::image type="content" source="../media/how-to-configure-endpoints/create-new-waf-policy.png" alt-text="Screenshot of create a new WAF policy.":::
-
- **Name**: enter a unique name for the new WAF policy. You could edit this policy with more configuration from the Web Application Firewall page.
-
- **Domains**: select the domain to apply the WAF policy.
-
-1. Select **Add** button. The WAF policy should appear within the Security panel
-
- :::image type="content" source="../media/how-to-configure-endpoints/waf-in-security-view.png" alt-text="Screenshot of WAF policy in security view.":::
-
-## Clean up resources
-
-To delete an endpoint when it's no longer needed, select **Delete Endpoint** at the end of the endpoint row
--
-## Next steps
-
-To learn about custom domains, continue to [Adding a custom domain](how-to-add-custom-domain.md).
frontdoor Troubleshoot Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/troubleshoot-issues.md
The cause of this problem can be one of three things:
* Send the request to your backend directly without going through Azure Front Door. See how long your backend usually takes to respond. * Send the request via Azure Front Door and see if you're getting any 503 responses. If not, the problem might not be a timeout issue. Contact support.
-* If requests going through Azure Front Door result in a 503 error response code, configure **Origin response timeout (in seconds)** for the endpoint. You can extend the default timeout to up to 4 minutes, which is 240 seconds. To configure the setting, go to **Endpoint manager** and select **Edit endpoint**.
+* If requests going through Azure Front Door result in a 503 error response code, configure the **Origin response timeout (in seconds)** setting for Azure Front Door. You can extend the default timeout up to 4 minutes (240 seconds). To configure the setting, go to the overview page of the Front Door profile. Select **Origin response timeout** and enter a value between *16* and *240* seconds.
- :::image type="content" source="./media/troubleshoot-issues/origin-response-timeout-1.png" alt-text="Screenshot that shows selecting Edit endpoint from Endpoint manager.":::
-
- Then select **Endpoint properties** to configure **Origin response timeout**.
-
- :::image type="content" source="./media/troubleshoot-issues/origin-response-timeout-2.png" alt-text="Screenshot that shows selecting Endpoint properties and the Origin response timeout field." lightbox="./media/troubleshoot-issues/origin-response-timeout-2-expanded.png":::
+ :::image type="content" source="./media/how-to-configure-endpoints/origin-timeout.png" alt-text="Screenshot of the origin timeout settings on the overview page of the Azure Front Door profile.":::
* If the timeout doesn't resolve the issue, use a tool like Fiddler or your browser's developer tool to check if the client is sending byte range requests with **Accept-Encoding** headers. Using this option leads to the origin responding with different content lengths.
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/overview.md
need to be evaluated as true.
If you're doing the move action, you need: -- Management group write and Role Assignment write permissions on the child subscription or
+- Management group write and role assignment write permissions on the child subscription or
management group. - Built-in role example: **Owner** - Management group write access on the target parent management group.
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/overview.md
Azure Policy has several permissions, known as operations, in two Resource Provi
- [Microsoft.Authorization](../../role-based-access-control/resource-provider-operations.md#microsoftauthorization) - [Microsoft.PolicyInsights](../../role-based-access-control/resource-provider-operations.md#microsoftpolicyinsights)
-Many Built-in roles grant permission to Azure Policy resources. The **Resource Policy Contributor**
+Many built-in roles grant permission to Azure Policy resources. The **Resource Policy Contributor**
role includes most Azure Policy operations. **Owner** has full rights. Both **Contributor** and **Reader** have access to all _read_ Azure Policy operations.
necessary to grant the managed identity on **deployIfNotExists** or **modify** a
permissions. > [!NOTE]
-> All Policy objects, including definitions, initatives, and assignments, will be readable to all
-> roles over its scope. For example, a Policy assignment scoped to an Azure subscription will be readable
+> All Policy objects, including definitions, initiatives, and assignments, will be readable to all
+> roles over its scope. For example, a Policy assignment scoped to an Azure subscription will be readable
> by all role holders at the subscription scope and below.
-If none of the Built-in roles have the permissions required, create a
+If none of the built-in roles have the permissions required, create a
[custom role](../../role-based-access-control/custom-roles.md).
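
As a minimal sketch, a custom role definition covering common Azure Policy operations might look like the following. The role name, the exact set of actions, and the subscription ID are assumptions to adapt to your own requirements:

```json
{
  "Name": "Policy Operator (example)",
  "Description": "Example custom role for managing Azure Policy objects.",
  "Actions": [
    "Microsoft.Authorization/policyAssignments/*",
    "Microsoft.Authorization/policyDefinitions/*",
    "Microsoft.Authorization/policySetDefinitions/*",
    "Microsoft.PolicyInsights/*"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/00000000-0000-0000-0000-000000000000"
  ]
}
```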
-Azure Policy operations can have a significant impact on your Azure environment. Only the minimum set of
+Azure Policy operations can have a significant impact on your Azure environment. Only the minimum set of
permissions necessary to perform a task should be assigned and these permissions should not be granted to users who do not need them.
Here are a few pointers and tips to keep in mind:
- Once you've created an initiative assignment, policy definitions added to the initiative also become part of that initiative's assignments.
-
+ - When an initiative assignment is evaluated, all policies within the initiative are also evaluated. If you need to evaluate a policy individually, it's better to not include it in an initiative.
iot-central Concepts Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-architecture.md
Reasons to export data include:
### Storage and analysis
-For long-term storage and control over archiving and retention policies, you can [continuously export your data](howto-export-data.md) to other storage destinations. Use of separate storage also lets you use other analytics tools to derive insights and view the data in your solution.
+For long-term storage and control over archiving and retention policies, you can [continuously export your data](howto-export-to-blob-storage.md) to other storage destinations. Use of separate storage also lets you use other analytics tools to derive insights and view the data in your solution.
### Business automation
iot-central Concepts Telemetry Properties Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-telemetry-properties-commands.md
The article doesn't describe every possible type of telemetry, property, and com
Each example shows a snippet from the device model that defines the type and example JSON payloads to illustrate how the device should interact with the IoT Central application. > [!NOTE]
-> IoT Central accepts any valid JSON but it can only be used for visualizations if it matches a definition in the device model. You can export data that doesn't match a definition, see [Export IoT data to destinations in Azure](howto-export-data.md).
+> IoT Central accepts any valid JSON but it can only be used for visualizations if it matches a definition in the device model. You can export data that doesn't match a definition, see [Export IoT data to cloud destinations using Blob Storage](howto-export-to-blob-storage.md).
The JSON file that defines the device model uses the [Digital Twin Definition Language (DTDL) v2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md).
iot-central Howto Connect Secure Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-connect-secure-vnet.md
Currently, it's not possible to connect an IoT Central application directly to V
- An IoT Central application. To learn more, see [Create an IoT Central application](howto-create-iot-central-application.md). -- Data export configured in your IoT Central application to send device data to a destination such as Azure Blob Storage, Azure Event Hubs, or Azure Service Bus. The destination is configured to use a managed identity. To learn more, see [Export IoT data to cloud destinations using data export](howto-export-data.md).
+- Data export configured in your IoT Central application to send device data to a destination such as Azure Blob Storage, Azure Event Hubs, or Azure Service Bus. The destination is configured to use a managed identity. To learn more, see [Export IoT data to cloud destinations using Blob Storage](howto-export-to-blob-storage.md).
## Configure the destination service
iot-central Howto Create Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-analytics.md
Select the ellipsis, for more chart controls:
## Next steps
-Now that you've learned how to visualize your data with the built-in analytics capabilities, a suggested next step is to learn how to [Export IoT data to cloud destinations using data export](howto-export-data.md).
+Now that you've learned how to visualize your data with the built-in analytics capabilities, a suggested next step is to learn how to [Export IoT data to cloud destinations using Blob Storage](howto-export-to-blob-storage.md).
iot-central Howto Create Organizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-organizations.md
Organizations let you define a hierarchy that you use to manage which users can see which devices in your IoT Central application. The user's role determines their permissions over the devices they see, and the experiences they can access. Use organizations to implement a multi-tenanted application.
-Organizations is an optional feature that gives you more control over the [users and roles](howto-manage-users-roles.md) in your application.
+Organizations are an optional feature that gives you more control over the [users and roles](howto-manage-users-roles.md) in your application.
Organizations are hierarchical:
When you create a new device in your application, assign it to an organization i
To assign or reassign an existing device to an organization, select the device in the device list and then select **Organization**: > [!TIP] > You can see which organization a device belongs to in the device list. Use the filter tool in the device list to show devices in a particular organization.
When you start adding organizations, all existing devices, users, and experience
The following limits apply to organizations: - The hierarchy can be no more than five levels deep.-- The total number of organization cannot be more than 200. Each node in the hierarchy counts as an organization.
+- The total number of organizations can't be more than 200. Each node in the hierarchy counts as an organization.
## Next steps
-Now that you've learned how to manage Azure IoT Central organizations, the suggested next step is learn how to [Export IoT data to cloud destinations using data export](howto-export-data.md).
+Now that you've learned how to manage Azure IoT Central organizations, the suggested next step is to learn how to [Export IoT data to cloud destinations using Blob Storage](howto-export-to-blob-storage.md).
+
iot-central Howto Export Data Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-data-legacy.md
# Export IoT data to cloud destinations using data export (legacy)
-The legacy data export (classic) feature is now deprecated and you should plan to migrate to the new [data export feature](howto-export-data.md). The legacy data export lacks important capabilities such as the availability of different data types, filtering, and message transformation. See the following table for a comparison of legacy data export with new data export:
+The legacy data export (classic) feature is now deprecated and you should plan to migrate to the new [data export feature](howto-export-to-blob-storage.md). The legacy data export lacks important capabilities such as the availability of different data types, filtering, and message transformation. See the following table for a comparison of legacy data export with new data export:
| Capability | Legacy data export (classic) | New data export | | :- | :- | :-- |
In the new data export, you can create a destination and reuse it across differe
> > - Legacy data exports (classic) are scheduled to be retired. Migrate any legacy data exports to new exports >
-> - For information about the latest data export features, see [Export IoT data to cloud destinations using data export](./howto-export-data.md).
+> - For information about the latest data export features, see [Export IoT data to cloud destinations using Blob Storage](howto-export-to-blob-storage.md).
This article describes how to use the data export feature in Azure IoT Central. This feature lets you export your data continuously to **Azure Event Hubs**, **Azure Service Bus**, or **Azure Blob storage** instances. Data export uses the JSON format and can include telemetry, device information, and device template information. Use the exported data for:
iot-central Howto Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-data.md
- Title: Export data from Azure IoT Central | Microsoft Docs
-description: How to use the new data export to export your IoT data to Azure and custom cloud destinations.
--- Previously updated : 01/31/2022-----
-# Export IoT data to cloud destinations using data export
-
-This article describes how to use data export in Azure IoT Central. Use this feature to continuously export filtered and enriched IoT data from your IoT Central application. Data export pushes changes in near real time to other parts of your cloud solution for warm-path insights, analytics, and storage.
-
-For example, you can:
--- Continuously export telemetry, property changes, device connectivity, device lifecycle, and device template lifecycle data in JSON format in near real time.-- Filter the data streams to export data that matches custom conditions.-- Enrich the data streams with custom values and property values from the device.-- Transform the data streams to modify their shape and content.-- Send the data to destinations such as Azure Event Hubs, Azure Data Explorer, Azure Service Bus, Azure Blob Storage, and webhook endpoints.-
-> [!Tip]
-> When you turn on data export, you get only the data from that moment onward. Currently, data can't be retrieved for a time when data export was off. To retain more historical data, turn on data export early.
-
-## Prerequisites
-
-To use data export features, you must have the [Data export](howto-manage-users-roles.md) permission.
-
-## Set up an export destination
-
-Your export destination must exist before you configure your data export. Choose from the following destination types:
-
-# [Blob Storage](#tab/blob-storage)
-
-IoT Central exports data once per minute, with each file containing the batch of changes since the previous export. Exported data is saved in JSON format. The default paths to the exported data in your storage account are:
--- Telemetry: _{container}/{app-id}/{partition_id}/{YYYY}/{MM}/{dd}/{hh}/{mm}/{filename}_-- Property changes: _{container}/{app-id}/{partition_id}/{YYYY}/{MM}/{dd}/{hh}/{mm}/{filename}_-
-To browse the exported files in the Azure portal, navigate to the file and select **Edit blob**.
-
-### Connection options
-
-Blob Storage destinations let you configure the connection with a *connection string* or a [managed identity](../../active-directory/managed-identities-azure-resources/overview.md).
--
-This article shows how to create a managed identity in the Azure portal. You can also use the Azure CLI to create a manged identity. To learn more, see [Assign a managed identity access to a resource using Azure CLI](../../active-directory/managed-identities-azure-resources/howto-assign-access-cli.md).
-
-# [Service Bus](#tab/service-bus)
-
-Both queues and topics are supported for Azure Service Bus destinations.
-
-IoT Central exports data in near real time. The data is in the message body and is in JSON format encoded as UTF-8.
-
-The annotations or system properties bag of the message contains the `iotcentral-device-id`, `iotcentral-application-id`, `iotcentral-message-source`, and `iotcentral-message-type` fields that have the same values as the corresponding fields in the message body.
-
-### Connection options
-
-Service Bus destinations let you configure the connection with a *connection string* or a [managed identity](../../active-directory/managed-identities-azure-resources/overview.md).
--
-This article shows how to create a managed identity in the Azure portal. You can also use the Azure CLI to create a manged identity. To learn more, see [Assign a managed identity access to a resource using Azure CLI](../../active-directory/managed-identities-azure-resources/howto-assign-access-cli.md).
-
-# [Event Hubs](#tab/event-hubs)
-
-IoT Central exports data in near real time. The data is in the message body and is in JSON format encoded as UTF-8.
-
-The annotations or system properties bag of the message contains the `iotcentral-device-id`, `iotcentral-application-id`, `iotcentral-message-source`, and `iotcentral-message-type` fields that have the same values as the corresponding fields in the message body.
-
-### Connection options
-
-Event Hubs destinations let you configure the connection with a *connection string* or a [managed identity](../../active-directory/managed-identities-azure-resources/overview.md).
--
-This article shows how to create a managed identity in the Azure portal. You can also use the Azure CLI to create a managed identity. To learn more, see [Assign a managed identity access to a resource using Azure CLI](../../active-directory/managed-identities-azure-resources/howto-assign-access-cli.md).
-
-# [Azure Data Explorer](#tab/data-explorer)
-
-You can use an [Azure Data Explorer cluster](/azure/data-explorer/data-explorer-overview) or an [Azure Synapse Data Explorer pool](../../synapse-analytics/data-explorer/data-explorer-overview.md). To learn more, see [What is the difference between Azure Synapse Data Explorer and Azure Data Explorer?](../..//synapse-analytics/data-explorer/data-explorer-compare.md).
-
-IoT Central exports data in near real time to a database table in the Azure Data Explorer cluster. The data is in the message body and is in JSON format encoded as UTF-8. You can add a [Transform](howto-transform-data-internally.md) in IoT Central to export data that matches the table schema.
-
-To query the exported data in the Azure Data Explorer portal, navigate to the database and select **Query**.
-
-The following video walks you through exporting data to Azure Data Explorer:
-
-> [!VIDEO https://aka.ms/docs/player?id=9e0c0e58-2753-42f5-a353-8ae602173d9b]
-
-### Connection options
-
-Azure Data Explorer destinations let you configure the connection with a *service principal* or a [managed identity](../../active-directory/managed-identities-azure-resources/overview.md).
--
-This article shows how to create a managed identity using the Azure CLI. You can also use the Azure portal to create a managed identity.
-
-# [Webhook](#tab/webhook)
-
-For webhook destinations, IoT Central exports data in near real time. The data in the message body is in the same format as for Event Hubs and Service Bus.
-
-### Create a webhook destination
-
-You can export data to a publicly available HTTP webhook endpoint. You can create a test webhook endpoint using [RequestBin](https://requestbin.net/). RequestBin throttles requests when the request limit is reached:
-
-1. Open [RequestBin](https://requestbin.net/).
-1. Create a new RequestBin and copy the **Bin URL**. You use this URL when you test your data export.
-
-To create the webhook destination in IoT Central on the **Data export** page:
-
-1. Select **+ New destination**.
-
-1. Select **Webhook** as the destination type.
-
-1. Paste the callback URL for your webhook endpoint. You can optionally configure webhook authorization and add custom headers.
-
- - For **OAuth2.0**, only the client credentials flow is supported. When you save the destination, IoT Central communicates with your OAuth provider to retrieve an authorization token. This token is attached to the `Authorization` header for every message sent to this destination.
- - For **Authorization token**, you can specify a token value that's directly attached to the `Authorization` header for every message sent to this destination.
-
-1. Select **Save**.
---
-# [Service principal](#tab/service-principal/data-explorer)
-
-### Create an Azure Data Explorer destination
-
-If you don't have an existing Azure Data Explorer database to export to, follow these steps:
-
-1. You have two choices to create an Azure Data Explorer database:
-
- - Create a new Azure Data Explorer cluster and database. To learn more, see the [Azure Data Explorer quickstart](/azure/data-explorer/create-cluster-database-portal). Make a note of the cluster URI and the name of the database you create, you need these values in the following steps.
- - Create a new Azure Synapse Data Explorer pool and database. To learn more, see the [Azure Data Explorer quickstart](../../synapse-analytics/get-started-analyze-data-explorer.md). Make a note of the pool URI and the name of the database you create, you need these values in the following steps.
-
-1. Create a service principal that you can use to connect your IoT Central application to Azure Data Explorer. Use the Azure Cloud Shell to run the following command:
-
- ```azurecli
- az ad sp create-for-rbac --skip-assignment --name "My SP for IoT Central" --scopes /subscriptions/<SubscriptionId>
- ```
-
- Make a note of the `appId`, `password`, and `tenant` values in the command output, you need them in the following steps.
-
-1. To add the service principal to the database, navigate to the Azure Data Explorer portal and run the following query on your database. Replace the placeholders with the values you made a note of previously:
-
- ```kusto
- .add database ['<YourDatabaseName>'] admins ('aadapp=<YourAppId>;<YourTenant>');
- ```
-
-1. Create a table in your database with a suitable schema for the data you're exporting. The following example query creates a table called `smartvitalspatch`. To learn more, see [Transform data inside your IoT Central application for export](howto-transform-data-internally.md):
-
- ```kusto
- .create table smartvitalspatch (
- EnqueuedTime:datetime,
- Message:string,
- Application:string,
- Device:string,
- Simulated:boolean,
- Template:string,
- Module:string,
- Component:string,
- Capability:string,
- Value:dynamic
- )
- ```
-
-1. (Optional) To speed up ingesting data into your Azure Data Explorer database:
-
- 1. Navigate to the **Configurations** page for your Azure Data Explorer cluster. Then enable the **Streaming ingestion** option.
- 1. Run the following query to alter the table policy to enable streaming ingestion:
-
- ```kusto
- .alter table smartvitalspatch policy streamingingestion enable
- ```
-
-To create the Azure Data Explorer destination in IoT Central on the **Data export** page:
-
-1. Select **+ New destination**.
-
-1. Select **Azure Data Explorer** as the destination type.
-
-1. Enter your Azure Data Explorer cluster or pool URL, database name, and table name. The following table shows the service principal values to use for the authorization:
-
- | Service principal value | Destination configuration |
- | -- | - |
- | appId | ClientID |
- | tenant | Tenant ID |
- | password | Client secret |
-
- > [!TIP]
- > The cluster URL for a standalone Azure Data Explorer looks like `https://<ClusterName>.<AzureRegion>.kusto.windows.net`. The cluster URL for an Azure Synapse Data Explorer pool looks like `https://<DataExplorerPoolName>.<SynapseWorkspaceName>.kusto.azuresynapse.net`.
-
- :::image type="content" source="media/howto-export-data/export-destination.png" alt-text="Screenshot of Azure Data Explorer export destination.":::
-
-# [Managed identity](#tab/managed-identity/data-explorer)
-
-### Create an Azure Data Explorer destination
-
-If you don't have an existing Azure Data Explorer database to export to, follow these steps. You have two choices to create an Azure Data Explorer database:
-- Create a new Azure Data Explorer cluster and database. To learn more, see the [Azure Data Explorer quickstart](/azure/data-explorer/create-cluster-database-portal). Make a note of the cluster URI and the name of the database you create, you need these values in the following steps.
-- Create a new Azure Synapse Data Explorer pool and database. To learn more, see the [Azure Data Explorer quickstart](../../synapse-analytics/get-started-analyze-data-explorer.md). Make a note of the pool URI and the name of the database you create, you need these values in the following steps.
-
-To configure the managed identity that enables your IoT Central application to securely export data to your Azure resource:
-
-1. Create a managed identity for your IoT Central application to use to connect to your database. Use the Azure Cloud Shell to run the following command:
-
- ```azurecli
- az iot central app identity assign --name {your IoT Central app name} \
- --resource-group {resource group name} \
- --system-assigned
- ```
-
- Make a note of the `principalId` and `tenantId` output by the command. You use these values in the following step.
-
-1. Configure the database permissions to allow connections from your IoT Central application. Use the Azure Cloud Shell to run the following command:
-
- ```azurecli
- az kusto database-principal-assignment create --cluster-name {name of your cluster} \
- --database-name {name of your database} \
- --resource-group {resource group name} \
- --principal-assignment-name {name of your IoT Central application} \
- --principal-id {principal id from the previous step} \
- --principal-type App --role Admin \
- --tenant-id {tenant id from the previous step}
- ```
-
- > [!TIP]
- > If you're using Azure Synapse, see [`az synapse kusto database-principal-assignment`](/cli/azure/synapse/kusto/database-principal-assignment).
-
-1. Create a table in your database with a suitable schema for the data you're exporting. The following example query creates a table called `smartvitalspatch`. To learn more, see [Transform data inside your IoT Central application for export](howto-transform-data-internally.md):
-
- ```kusto
- .create table smartvitalspatch (
- EnqueuedTime:datetime,
- Message:string,
- Application:string,
- Device:string,
- Simulated:boolean,
- Template:string,
- Module:string,
- Component:string,
- Capability:string,
- Value:dynamic
- )
- ```
-
-1. (Optional) To speed up ingesting data into your Azure Data Explorer database:
-
- 1. Navigate to the **Configurations** page for your Azure Data Explorer cluster. Then enable the **Streaming ingestion** option.
- 1. Run the following query to alter the table policy to enable streaming ingestion:
-
- ```kusto
- .alter table smartvitalspatch policy streamingingestion enable
- ```
-
-To create the Azure Data Explorer destination in IoT Central on the **Data export** page:
-
-1. Select **+ New destination**.
-
-1. Select **Azure Data Explorer** as the destination type.
-
-1. Enter your Azure Data Explorer cluster or pool URL, database name, and table name. Select **System-assigned managed identity** as the authorization type.
-
- > [!TIP]
- > The cluster URL for a standalone Azure Data Explorer looks like `https://<ClusterName>.<AzureRegion>.kusto.windows.net`. The cluster URL for an Azure Synapse Data Explorer pool looks like `https://<DataExplorerPoolName>.<SynapseWorkspaceName>.kusto.azuresynapse.net`.
-
- :::image type="content" source="media/howto-export-data/export-destination-managed.png" alt-text="Screenshot of Azure Data Explorer export destination.":::
-
-# [Connection string](#tab/connection-string/event-hubs)
-
-### Create an Event Hubs destination
-
-If you don't have an existing Event Hubs namespace to export to, follow these steps:
-
-1. Create a [new Event Hubs namespace in the Azure portal](https://portal.azure.com/#create/Microsoft.EventHub). You can learn more in [Azure Event Hubs docs](../../event-hubs/event-hubs-create.md).
-
-1. Create an event hub in your Event Hubs namespace. Go to your namespace, and select **+ Event Hub** at the top to create an event hub instance.
-
-1. Generate a key to use when you set up your data export in IoT Central:
-
- - Select the event hub instance you created.
- - Select **Settings > Shared access policies**.
- - Create a new key or choose an existing key that has **Send** permissions.
- - Copy either the primary or secondary connection string. You use this connection string to set up a new destination in IoT Central.
- - Alternatively, you can generate a connection string for the entire Event Hubs namespace:
- 1. Go to your Event Hubs namespace in the Azure portal.
- 2. Under **Settings**, select **Shared Access Policies**.
- 3. Create a new key or choose an existing key that has **Send** permissions.
- 4. Copy either the primary or secondary connection string.
-
-To create the Event Hubs destination in IoT Central on the **Data export** page:
-
-1. Select **+ New destination**.
-
-1. Select **Azure Event Hubs** as the destination type.
-
-1. Select **Connection string** as the authorization type.
-
-1. Paste in the connection string for your Event Hubs resource, and enter the case-sensitive event hub name if necessary.
-
-1. Select **Save**.
-
-# [Managed identity](#tab/managed-identity/event-hubs)
-
-### Create an Event Hubs destination
-
-If you don't have an existing Event Hubs namespace to export to, follow these steps:
-
-1. Create a [new Event Hubs namespace in the Azure portal](https://portal.azure.com/#create/Microsoft.EventHub). You can learn more in [Azure Event Hubs docs](../../event-hubs/event-hubs-create.md).
-
-1. Create an event hub in your Event Hubs namespace. Go to your namespace, and select **+ Event Hub** at the top to create an event hub instance.
--
-To configure the permissions:
-
-1. On the **Add role assignment** page, select the scope and subscription you want to use.
-
- > [!TIP]
- > If your IoT Central application and event hub are in the same resource group, you can choose **Resource group** as the scope and then select the resource group.
-
-1. Select **Azure Event Hubs Data Sender** as the **Role**.
-
-1. Select **Save**. The managed identity for your IoT Central application is now configured.
-
-To further secure your event hub and only allow access from trusted services with managed identities, see [Export data to a secure destination on an Azure Virtual Network](howto-connect-secure-vnet.md).
-
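If you prefer to script the role assignment instead of using the portal, the following Azure CLI commands are a minimal sketch of one possible approach. It assumes you've already enabled the system-assigned identity (for example with `az iot central app identity assign`), and all resource names are placeholders.

```azurecli
# Look up the principal ID of the IoT Central application's system-assigned identity.
principalId=$(az iot central app identity show \
  --name {your IoT Central app name} \
  --resource-group {resource group name} \
  --query principalId --output tsv)

# Grant the identity send access to the event hub.
az role assignment create \
  --assignee-object-id $principalId \
  --assignee-principal-type ServicePrincipal \
  --role "Azure Event Hubs Data Sender" \
  --scope $(az eventhubs eventhub show \
      --name {event hub name} \
      --namespace-name {Event Hubs namespace} \
      --resource-group {resource group name} \
      --query id --output tsv)
```

A similar assignment with the **Azure Service Bus Data Sender** role works for Service Bus queue and topic destinations.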
-To create the Event Hubs destination in IoT Central on the **Data export** page:
-
-1. Select **+ New destination**.
-
-1. Select **Azure Event Hubs** as the destination type.
-
-1. Select **System-assigned managed identity** as the authorization type.
-
-1. Enter the host name of your Event Hubs resource. Then enter the case-sensitive event hub name. A host name looks like: `contoso-waste.servicebus.windows.net`.
-
-1. Select **Save**.
-
-# [Connection string](#tab/connection-string/service-bus)
-
-### Create a Service Bus queue or topic destination
-
-If you don't have an existing Service Bus namespace to export to, follow these steps:
-
-1. Create a [new Service Bus namespace in the Azure portal](https://portal.azure.com/#create/Microsoft.ServiceBus.1.0.5). You can learn more in [Azure Service Bus docs](../../service-bus-messaging/service-bus-create-namespace-portal.md).
-
-1. To create a queue or topic to export to, go to your Service Bus namespace, and select **+ Queue** or **+ Topic**.
-
-1. Generate a key to use when you set up your data export in IoT Central (an equivalent Azure CLI sketch follows these steps):
-
- - Select the queue or topic you created.
- - Select **Settings/Shared access policies**.
- - Create a new key or choose an existing key that has **Send** permissions.
- - Copy either the primary or secondary connection string. You use this connection string to set up a new destination in IoT Central.
- - Alternatively, you can generate a connection string for the entire Service Bus namespace:
- 1. Go to your Service Bus namespace in the Azure portal.
- 2. Under **Settings**, select **Shared Access Policies**.
- 3. Create a new key or choose an existing key that has **Send** permissions.
- 4. Copy either the primary or secondary connection string.
-
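As an alternative to the portal steps above, the following Azure CLI commands are a minimal sketch that creates a Send-only authorization rule on a queue and reads back its connection string. The rule name `IoTCentralExport` and the other names are placeholders; for a topic, the equivalent `az servicebus topic authorization-rule` commands apply.

```azurecli
# Create a Send-only authorization rule on the queue.
az servicebus queue authorization-rule create \
  --name IoTCentralExport \
  --rights Send \
  --queue-name {queue name} \
  --namespace-name {Service Bus namespace} \
  --resource-group {resource group name}

# Read back the primary connection string to paste into the IoT Central destination.
az servicebus queue authorization-rule keys list \
  --name IoTCentralExport \
  --queue-name {queue name} \
  --namespace-name {Service Bus namespace} \
  --resource-group {resource group name} \
  --query primaryConnectionString --output tsv
```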
-To create the Service Bus destination in IoT Central on the **Data export** page:
-
-1. Select **+ New destination**.
-
-1. Select **Azure Service Bus Queue** or **Azure Service Bus Topic** as the destination type.
-
-1. Select **Connection string** as the authorization type.
-
-1. Paste in the connection string for your Service Bus resource, and enter the case-sensitive queue or topic name if necessary.
-
-1. Select **Save**.
-
-# [Managed identity](#tab/managed-identity/service-bus)
-
-### Create a Service Bus queue or topic destination
-
-If you don't have an existing Service Bus namespace to export to, follow these steps:
-
-1. Create a [new Service Bus namespace in the Azure portal](https://portal.azure.com/#create/Microsoft.ServiceBus.1.0.5). You can learn more in [Azure Service Bus docs](../../service-bus-messaging/service-bus-create-namespace-portal.md).
-
-1. To create a queue or topic to export to, go to your Service Bus namespace, and select **+ Queue** or **+ Topic**.
--
-To configure the permissions:
-
-1. On the **Add role assignment** page, select the scope and subscription you want to use.
-
- > [!TIP]
- > If your IoT Central application and queue or topic are in the same resource group, you can choose **Resource group** as the scope and then select the resource group.
-
-1. Select **Azure Service Bus Data Sender** as the **Role**.
-
-1. Select **Save**. The managed identity for your IoT Central application is now configured.
-
-To further secure your queue or topic and only allow access from trusted services with managed identities, see [Export data to a secure destination on an Azure Virtual Network](howto-connect-secure-vnet.md).
-
-To create the Service Bus destination in IoT Central on the **Data export** page:
-
-1. Select **+ New destination**.
-
-1. Select **Azure Service Bus Queue** or **Azure Service Bus Topic** as the destination type.
-
-1. Select **System-assigned managed identity** as the authorization type.
-
-1. Enter the host name of your Service Bus resource. Then enter the case-sensitive queue or topic name. A host name looks like: `contoso-waste.servicebus.windows.net`.
-
-1. Select **Save**.
-
-# [Connection string](#tab/connection-string/blob-storage)
-
-### Create an Azure Blob Storage destination
-
-If you don't have an existing Azure storage account to export to, follow these steps:
-
-1. Create a [new storage account in the Azure portal](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You can learn more about creating new [Azure Blob storage accounts](../../storage/blobs/storage-quickstart-blobs-portal.md) or [Azure Data Lake Storage v2 storage accounts](../../storage/common/storage-account-create.md). Data export can only write data to storage accounts that support block blobs. The following list shows the known compatible storage account types:
-
- |Performance Tier|Account Type|
- |-|-|
- |Standard|General Purpose V2|
- |Standard|General Purpose V1|
- |Standard|Blob storage|
- |Premium|Block Blob storage|
-
-1. To create a container in your storage account, go to your storage account. Under **Blob Service**, select **Browse Blobs**. Select **+ Container** at the top to create a new container.
-
-1. Generate a connection string for your storage account by going to **Settings > Access keys**. Copy one of the two connection strings.
-
-To create the Blob Storage destination in IoT Central on the **Data export** page:
-
-1. Select **+ New destination**.
-
-1. Select **Azure Blob Storage** as the destination type.
-
-1. Select **Connection string** as the authorization type.
-
-1. Paste in the connection string for your Blob Storage resource, and enter the case-sensitive container name if necessary.
-
-1. Select **Save**.
-
-# [Managed identity](#tab/managed-identity/blob-storage)
-
-### Create an Azure Blob Storage destination
-
-If you don't have an existing Azure storage account to export to, follow these steps:
-
-1. Create a [new storage account in the Azure portal](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You can learn more about creating new [Azure Blob storage accounts](../../storage/blobs/storage-quickstart-blobs-portal.md) or [Azure Data Lake Storage v2 storage accounts](../../storage/common/storage-account-create.md). Data export can only write data to storage accounts that support block blobs. The following list shows the known compatible storage account types:
-
- |Performance Tier|Account Type|
- |-|-|
- |Standard|General Purpose V2|
- |Standard|General Purpose V1|
- |Standard|Blob storage|
- |Premium|Block Blob storage|
-
-1. To create a container in your storage account, go to your storage account. Under **Blob Service**, select **Browse Blobs**. Select **+ Container** at the top to create a new container.
--
-To configure the permissions:
-
-1. On the **Add role assignment** page, select the subscription you want to use and **Storage** as the scope. Then select your storage account as the resource.
-
-1. Select **Storage Blob Data Contributor** as the **Role**.
-
-1. Select **Save**. The managed identity for your IoT Central application is now configured.
-
- > [!TIP]
- > This role assignment isn't visible in the list on the **Azure role assignments** page.
-
-To further secure your blob container and only allow access from trusted services with managed identities, see [Export data to a secure destination on an Azure Virtual Network](howto-connect-secure-vnet.md).
-
-To create the Blob Storage destination in IoT Central on the **Data export** page:
-
-1. Select **+ New destination**.
-
-1. Select **Azure Blob Storage** as the destination type.
-
-1. Select **System-assigned managed identity** as the authorization type.
-
-1. Enter the endpoint URI for your storage account and the case-sensitive container name. An endpoint URI looks like: `https://contosowaste.blob.core.windows.net`.
-
-1. Select **Save**.
---
-## Set up a data export
-
-Now that you have a destination to export your data to, set up data export in your IoT Central application:
-
-1. Sign in to your IoT Central application.
-
-1. In the left pane, select **Data export**.
-
- > [!Tip]
- > If you don't see **Data export** in the left pane, then you don't have permissions to configure data export in your app. Talk to an administrator to set up data export.
-
-1. Select **+ New export**.
-
-1. Enter a display name for your new export, and make sure the data export is **Enabled**.
-
-1. Choose the type of data to export. The following table lists the supported data export types:
-
- | Data type | Description | Data format |
- | :- | :- | :-- |
- | Telemetry | Export telemetry messages from devices in near-real time. Each exported message contains the full contents of the original device message, normalized. | [Telemetry message format](#telemetry-format) |
- | Property changes | Export changes to device and cloud properties in near-real time. For read-only device properties, changes to the reported values are exported. For read-write properties, both reported and desired values are exported. | [Property change message format](#property-changes-format) |
- | Device connectivity | Export device connected and disconnected events. | [Device connectivity message format](#device-connectivity-changes-format) |
- | Device lifecycle | Export device registered, deleted, provisioned, enabled, disabled, displayNameChanged, and deviceTemplateChanged events. | [Device lifecycle changes message format](#device-lifecycle-changes-format) |
- | Device template lifecycle | Export published device template changes including created, updated, and deleted. | [Device template lifecycle changes message format](#device-template-lifecycle-changes-format) |
-
-1. Optionally, add filters to reduce the amount of data exported. There are different types of filter available for each data export type:
- <a name="DataExportFilters"></a>
-
- | Type of data | Available filters|
- |--||
- |Telemetry|<ul><li>Filter by device name, device ID, device template, and if the device is simulated</li><li>Filter stream to only contain telemetry that meets the filter conditions</li><li>Filter stream to only contain telemetry from devices with properties matching the filter conditions</li><li>Filter stream to only contain telemetry that has *message properties* meeting the filter condition. *Message properties* (also known as *application properties*) are sent in a bag of key-value pairs on each telemetry message optionally sent by devices that use the device SDKs. To create a message property filter, enter the message property key you're looking for, and specify a condition. Only telemetry messages with properties that match the specified filter condition are exported. [Learn more about application properties from IoT Hub docs](../../iot-hub/iot-hub-devguide-messages-construct.md) </li></ul>|
- |Property changes|<ul><li>Filter by device name, device ID, device template, and if the device is simulated</li><li>Filter stream to only contain property changes that meet the filter conditions</li></ul>|
- |Device connectivity|<ul><li>Filter by device name, device ID, device template, organizations, and if the device is simulated</li><li>Filter stream to only contain changes from devices with properties matching the filter conditions</li></ul>|
- |Device lifecycle|<ul><li>Filter by device name, device ID, device template, and if the device is provisioned, enabled, or simulated</li><li>Filter stream to only contain changes from devices with properties matching the filter conditions</li></ul>|
- |Device template lifecycle|<ul><li>Filter by device template</li></ul>|
-
-1. Optionally, enrich exported messages with extra key-value pair metadata. The following enrichments are available for the telemetry, property changes, device connectivity, and device lifecycle data export types:
-<a name="DataExportEnrichmnents"></a>
- - **Custom string**: Adds a custom static string to each message. Enter any key, and enter any string value.
- - **Property**, which adds to each message:
- - Device metadata such as device name, device template name, enabled, organizations, provisioned, and simulated.
- - The current device reported property or cloud property value to each message. If the exported message is from a device that doesn't have the specified property, the exported message doesn't get the enrichment.
-
-Configure the export destination:
-
-1. Select **+ Destination** to add a destination that you've already created or select **Create a new one**.
-
-1. To transform your data before it's exported, select **+ Transform**. To learn more, see [Transform data inside your IoT Central application for export](howto-transform-data-internally.md).
-
-1. Select **+ Destination** to add up to five destinations to a single export.
-
-1. When you've finished setting up your export, select **Save**. After a few minutes, your data appears in your destinations.
-
-## Monitor your export
-
-In IoT Central, the **Data export** page lets you check the status of your exports. You can also use [Azure Monitor](../../azure-monitor/overview.md) to see how much data you're exporting and any export errors. You can access export and device health metrics in charts in the Azure portal, with a REST API, or with queries in PowerShell or the Azure CLI. Currently, you can monitor the following data export metrics in Azure Monitor:
-
-- Number of messages incoming to export before filters are applied.
-- Number of messages that pass through filters.
-- Number of messages successfully exported to destinations.
-- Number of errors found.
-
-To learn more, see [Monitor application health](howto-manage-iot-central-from-portal.md#monitor-application-health).
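If you'd rather query these metrics from the command line than from the portal charts, the following Azure CLI commands are a minimal sketch; the application and resource group names are placeholders, and the exact metric names are best discovered from the list-definitions output.

```azurecli
# Get the resource ID of the IoT Central application.
appId=$(az iot central app show \
  --name {your IoT Central app name} \
  --resource-group {resource group name} \
  --query id --output tsv)

# Discover the available metric definitions, including the data export metrics.
az monitor metrics list-definitions --resource $appId --output table

# Retrieve values for a specific metric once you know its name.
az monitor metrics list --resource $appId --metric "{metric name}"
```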
-
-## Data formats
-
-The following sections describe the formats of the exported data:
-### Telemetry format
-
-Each exported message contains a normalized form of the full message the device sent in the message body. The message is in JSON format and encoded as UTF-8. Information in each message includes:
-
-- `applicationId`: The ID of the IoT Central application.
-- `messageSource`: The source for the message - `telemetry`.
-- `deviceId`: The ID of the device that sent the telemetry message.
-- `schema`: The name and version of the payload schema.
-- `templateId`: The ID of the device template assigned to the device.
-- `enqueuedTime`: The time at which this message was received by IoT Central.
-- `enrichments`: Any enrichments set up on the export.
-- `module`: The IoT Edge module that sent this message. This field only appears if the message came from an IoT Edge module.
-- `component`: The component that sent this message. This field only appears if the capabilities sent in the message were modeled as a component in the device template.
-- `messageProperties`: Other properties that the device sent with the message. These properties are sometimes referred to as *application properties*. [Learn more from IoT Hub docs](../../iot-hub/iot-hub-devguide-messages-construct.md).
-
-For Event Hubs and Service Bus, IoT Central exports a new message quickly after it receives the message from a device. In the user properties (also referred to as application properties) of each message, the `iotcentral-device-id`, `iotcentral-application-id`, and `iotcentral-message-source` are included automatically.
-
-For Blob storage, messages are batched and exported once per minute.
-
-The following example shows an exported telemetry message:
-
-```json
-
-{
- "applicationId": "1dffa667-9bee-4f16-b243-25ad4151475e",
- "messageSource": "telemetry",
- "deviceId": "1vzb5ghlsg1",
- "schema": "default@v1",
- "templateId": "urn:qugj6vbw5:___qbj_27r",
- "enqueuedTime": "2020-08-05T22:26:55.455Z",
- "telemetry": {
- "Activity": "running",
- "BloodPressure": {
- "Diastolic": 7,
- "Systolic": 71
- },
- "BodyTemperature": 98.73447010562934,
- "HeartRate": 88,
- "HeartRateVariability": 17,
- "RespiratoryRate": 13
- },
- "enrichments": {
- "userSpecifiedKey": "sampleValue"
- },
- "module": "VitalsModule",
- "component": "DeviceComponent",
- "messageProperties": {
- "messageProp": "value"
- }
-}
-```
-
-#### Message properties
-
-Telemetry messages have properties for metadata as well as the telemetry payload. The previous snippet shows examples of system messages such as `deviceId` and `enqueuedTime`. To learn more about the system message properties, see [System Properties of D2C IoT Hub messages](../../iot-hub/iot-hub-devguide-messages-construct.md#system-properties-of-d2c-iot-hub-messages).
-
-You can add properties to telemetry messages if you need to include custom metadata. For example, you might need to add a timestamp that records when the device created the message.
-
-The following code snippet shows how to add the `iothub-creation-time-utc` property to the message when you create it on the device:
-
-> [!IMPORTANT]
-> The format of this timestamp must be UTC with no timezone information. For example, `2021-04-21T11:30:16Z` is valid, `2021-04-21T11:30:16-07:00` is invalid.
-
-# [JavaScript](#tab/javascript)
-
-```javascript
-async function sendTelemetry(deviceClient, index) {
- console.log('Sending telemetry message %d...', index);
- const msg = new Message(
- JSON.stringify(
- deviceTemperatureSensor.updateSensor().getCurrentTemperatureObject()
- )
- );
- msg.properties.add("iothub-creation-time-utc", new Date().toISOString());
- msg.contentType = 'application/json';
- msg.contentEncoding = 'utf-8';
- await deviceClient.sendEvent(msg);
-}
-```
-
-# [Java](#tab/java)
-
-```java
-private static void sendTemperatureTelemetry() {
- String telemetryName = "temperature";
- String telemetryPayload = String.format("{\"%s\": %f}", telemetryName, temperature);
-
- Message message = new Message(telemetryPayload);
- message.setContentEncoding(StandardCharsets.UTF_8.name());
- message.setContentTypeFinal("application/json");
- message.setProperty("iothub-creation-time-utc", Instant.now().toString());
-
- deviceClient.sendEventAsync(message, new MessageIotHubEventCallback(), message);
- log.debug("My Telemetry: Sent - {\"{}\": {}°C} with message Id {}.", telemetryName, temperature, message.getMessageId());
- temperatureReadings.put(new Date(), temperature);
-}
-```
-
-# [C#](#tab/csharp)
-
-```csharp
-private async Task SendTemperatureTelemetryAsync()
-{
- const string telemetryName = "temperature";
-
- string telemetryPayload = $"{{ \"{telemetryName}\": {_temperature} }}";
- using var message = new Message(Encoding.UTF8.GetBytes(telemetryPayload))
- {
- ContentEncoding = "utf-8",
- ContentType = "application/json",
- };
- message.Properties.Add("iothub-creation-time-utc", DateTime.UtcNow.ToString("yyyy-MM-ddTHH:mm:ssZ"));
- await _deviceClient.SendEventAsync(message);
- _logger.LogDebug($"Telemetry: Sent - {{ \"{telemetryName}\": {_temperature}°C }}.");
-}
-```
-
-# [Python](#tab/python)
-
-```python
-async def send_telemetry_from_thermostat(device_client, telemetry_msg):
- msg = Message(json.dumps(telemetry_msg))
- msg.custom_properties["iothub-creation-time-utc"] = datetime.now(timezone.utc).isoformat()
- msg.content_encoding = "utf-8"
- msg.content_type = "application/json"
- print("Sent message")
- await device_client.send_message(msg)
-```
---
-The following snippet shows this property in the message exported to Blob storage:
-
-```json
-{
- "applicationId":"5782ed70-b703-4f13-bda3-1f5f0f5c678e",
- "messageSource":"telemetry",
- "deviceId":"sample-device-01",
- "schema":"default@v1",
- "templateId":"urn:modelDefinition:mkuyqxzgea:e14m1ukpn",
- "enqueuedTime":"2021-01-29T16:45:39.143Z",
- "telemetry":{
- "temperature":8.341033560421833
- },
- "messageProperties":{
- "iothub-creation-time-utc":"2021-01-29T16:45:39.021Z"
- },
- "enrichments":{}
-}
-```
-
-### Property changes format
-
-Each message or record represents changes to device and cloud properties. Information in the exported message includes:
-
-- `applicationId`: The ID of the IoT Central application.
-- `messageSource`: The source for the message - `properties`.
-- `messageType`: Either `cloudPropertyChange`, `devicePropertyDesiredChange`, or `devicePropertyReportedChange`.
-- `deviceId`: The ID of the device that sent the telemetry message.
-- `schema`: The name and version of the payload schema.
-- `enqueuedTime`: The time at which this change was detected by IoT Central.
-- `templateId`: The ID of the device template assigned to the device.
-- `properties`: An array of properties that changed, including the names of the properties and values that changed. The component and module information is included if the property is modeled within a component or an IoT Edge module.
-- `enrichments`: Any enrichments set up on the export.
-
-For Event Hubs and Service Bus, IoT Central exports new messages to your event hub or Service Bus queue or topic in near real time. In the user properties (also referred to as application properties) of each message, the `iotcentral-device-id`, `iotcentral-application-id`, `iotcentral-message-source`, and `iotcentral-message-type` are included automatically.
-
-For Blob storage, messages are batched and exported once per minute.
-
-The following example shows an exported property change message received in Azure Blob Storage.
-
-```json
-{
- "applicationId": "1dffa667-9bee-4f16-b243-25ad4151475e",
- "messageSource": "properties",
- "messageType": "cloudPropertyChange",
- "deviceId": "18a985g1fta",
- "schema": "default@v1",
- "templateId": "urn:qugj6vbw5:___qbj_27r",
- "enqueuedTime": "2020-08-05T22:37:32.942Z",
- "properties": [{
- "name": "MachineSerialNumber",
- "value": "abc",
- "module": "VitalsModule",
- "component": "DeviceComponent"
- }],
- "enrichments": {
- "userSpecifiedKey" : "sampleValue"
- }
-}
-```
-
-### Device connectivity changes format
-
-Each message or record represents a connectivity event from a single device. Information in the exported message includes:
-
-- `applicationId`: The ID of the IoT Central application.
-- `messageSource`: The source for the message - `deviceConnectivity`.
-- `messageType`: Either `connected` or `disconnected`.
-- `deviceId`: The ID of the device that was changed.
-- `schema`: The name and version of the payload schema.
-- `templateId`: The ID of the device template assigned to the device.
-- `enqueuedTime`: The time at which this change occurred in IoT Central.
-- `enrichments`: Any enrichments set up on the export.
-
-For Event Hubs and Service Bus, IoT Central exports new messages to your event hub or Service Bus queue or topic in near real time. In the user properties (also referred to as application properties) of each message, the `iotcentral-device-id`, `iotcentral-application-id`, `iotcentral-message-source`, and `iotcentral-message-type` are included automatically.
-
-For Blob storage, messages are batched and exported once per minute.
-
-The following example shows an exported device connectivity message received in Azure Blob Storage.
-
-```json
-{
- "applicationId": "1dffa667-9bee-4f16-b243-25ad4151475e",
- "messageSource": "deviceConnectivity",
- "messageType": "connected",
- "deviceId": "1vzb5ghlsg1",
- "schema": "default@v1",
- "templateId": "urn:qugj6vbw5:___qbj_27r",
- "enqueuedTime": "2021-04-05T22:26:55.455Z",
- "enrichments": {
- "userSpecifiedKey": "sampleValue"
- }
-}
-
-```
-
-### Device lifecycle changes format
-
-Each message or record represents one change to a single device. Information in the exported message includes:
-
-- `applicationId`: The ID of the IoT Central application.
-- `messageSource`: The source for the message - `deviceLifecycle`.
-- `messageType`: The type of change that occurred. One of: `registered`, `deleted`, `provisioned`, `enabled`, `disabled`, `displayNameChanged`, and `deviceTemplateChanged`.
-- `deviceId`: The ID of the device that was changed.
-- `schema`: The name and version of the payload schema.
-- `templateId`: The ID of the device template assigned to the device.
-- `enqueuedTime`: The time at which this change occurred in IoT Central.
-- `enrichments`: Any enrichments set up on the export.
-
-For Event Hubs and Service Bus, IoT Central exports new messages to your event hub or Service Bus queue or topic in near real time. In the user properties (also referred to as application properties) of each message, the `iotcentral-device-id`, `iotcentral-application-id`, `iotcentral-message-source`, and `iotcentral-message-type` are included automatically.
-
-For Blob storage, messages are batched and exported once per minute.
-
-The following example shows an exported device lifecycle message received in Azure Blob Storage.
-
-```json
-{
- "applicationId": "1dffa667-9bee-4f16-b243-25ad4151475e",
- "messageSource": "deviceLifecycle",
- "messageType": "registered",
- "deviceId": "1vzb5ghlsg1",
- "schema": "default@v1",
- "templateId": "urn:qugj6vbw5:___qbj_27r",
- "enqueuedTime": "2021-01-01T22:26:55.455Z",
- "enrichments": {
- "userSpecifiedKey": "sampleValue"
- }
-}
-```
-
-### Device template lifecycle changes format
-
-Each message or record represents one change to a single published device template. Information in the exported message includes:
-
-- `applicationId`: The ID of the IoT Central application.
-- `messageSource`: The source for the message - `deviceTemplateLifecycle`.
-- `messageType`: Either `created`, `updated`, or `deleted`.
-- `schema`: The name and version of the payload schema.
-- `templateId`: The ID of the device template assigned to the device.
-- `enqueuedTime`: The time at which this change occurred in IoT Central.
-- `enrichments`: Any enrichments set up on the export.
-
-For Event Hubs and Service Bus, IoT Central exports new messages to your event hub or Service Bus queue or topic in near real time. In the user properties (also referred to as application properties) of each message, the `iotcentral-device-id`, `iotcentral-application-id`, `iotcentral-message-source`, and `iotcentral-message-type` are included automatically.
-
-For Blob storage, messages are batched and exported once per minute.
-
-The following example shows an exported device template lifecycle message received in Azure Blob Storage.
-
-```json
-{
- "applicationId": "1dffa667-9bee-4f16-b243-25ad4151475e",
- "messageSource": "deviceTemplateLifecycle",
- "messageType": "created",
- "schema": "default@v1",
- "templateId": "urn:qugj6vbw5:___qbj_27r",
- "enqueuedTime": "2021-01-01T22:26:55.455Z",
- "enrichments": {
- "userSpecifiedKey": "sampleValue"
- }
-}
-```
-
-## Next steps
-
-Now that you know how to configure data export, a suggested next step is to learn [Transform data inside your IoT Central application for export](howto-transform-data-internally.md).
iot-central Howto Export To Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-azure-data-explorer.md
+
+ Title: Export data to Azure Data Explorer IoT Central | Microsoft Docs
+description: How to use the new data export to export your IoT data to Azure Data Explorer
+++ Last updated : 04/28/2022++++
+# Export IoT data to Azure Data Explorer
+
+This article describes how to configure data export to send data to Azure Data Explorer.
++
+## Set up an Azure Data Explorer export destination
+
+You can use an [Azure Data Explorer cluster](/azure/data-explorer/data-explorer-overview) or an [Azure Synapse Data Explorer pool](../../synapse-analytics/data-explorer/data-explorer-overview.md). To learn more, see [What is the difference between Azure Synapse Data Explorer and Azure Data Explorer?](../..//synapse-analytics/data-explorer/data-explorer-compare.md).
+
+IoT Central exports data in near real time to a database table in the Azure Data Explorer cluster. The data is in the message body and is in JSON format encoded as UTF-8. You can add a [Transform](howto-transform-data-internally.md) in IoT Central to export data that matches the table schema.
+
+To query the exported data in the Azure Data Explorer portal, navigate to the database and select **Query**.
+
+The following video walks you through exporting data to Azure Data Explorer:
+
+> [!VIDEO https://aka.ms/docs/player?id=9e0c0e58-2753-42f5-a353-8ae602173d9b]
+
+## Connection options
+
+Azure Data Explorer destinations let you configure the connection with a *service principal* or a [managed identity](../../active-directory/managed-identities-azure-resources/overview.md).
++
+This article shows how to create a managed identity using the Azure CLI. You can also use the Azure portal to create a managed identity.
+++
+# [Service principal](#tab/service-principal)
+
+### Create an Azure Data Explorer destination
+
+If you don't have an existing Azure Data Explorer database to export to, follow these steps:
+
+1. You have two choices to create an Azure Data Explorer database:
+
+ - Create a new Azure Data Explorer cluster and database. To learn more, see the [Azure Data Explorer quickstart](/azure/data-explorer/create-cluster-database-portal). Make a note of the cluster URI and the name of the database you create, you need these values in the following steps.
+ - Create a new Azure Synapse Data Explorer pool and database. To learn more, see the [Azure Data Explorer quickstart](../../synapse-analytics/get-started-analyze-data-explorer.md). Make a note of the pool URI and the name of the database you create, you need these values in the following steps.
+
+1. Create a service principal that you can use to connect your IoT Central application to Azure Data Explorer. Use the Azure Cloud Shell to run the following command:
+
+ ```azurecli
+ az ad sp create-for-rbac --skip-assignment --name "My SP for IoT Central" --scopes /subscriptions/<SubscriptionId>
+ ```
+
+ Make a note of the `appId`, `password`, and `tenant` values in the command output, you need them in the following steps.
+
+1. To add the service principal to the database, navigate to the Azure Data Explorer portal and run the following query on your database. Replace the placeholders with the values you made a note of previously:
+
+ ```kusto
+ .add database ['<YourDatabaseName>'] admins ('aadapp=<YourAppId>;<YourTenant>');
+ ```
+
+1. Create a table in your database with a suitable schema for the data you're exporting. The following example query creates a table called `smartvitalspatch`. To learn more, see [Transform data inside your IoT Central application for export](howto-transform-data-internally.md):
+
+ ```kusto
+ .create table smartvitalspatch (
+ EnqueuedTime:datetime,
+ Message:string,
+ Application:string,
+ Device:string,
+ Simulated:boolean,
+ Template:string,
+ Module:string,
+ Component:string,
+ Capability:string,
+ Value:dynamic
+ )
+ ```
+
+1. (Optional) To speed up ingesting data into your Azure Data Explorer database:
+
+ 1. Navigate to the **Configurations** page for your Azure Data Explorer cluster. Then enable the **Streaming ingestion** option.
+ 1. Run the following query to alter the table policy to enable streaming ingestion:
+
+ ```kusto
+ .alter table smartvitalspatch policy streamingingestion enable
+ ```
+
+To create the Azure Data Explorer destination in IoT Central on the **Data export** page:
+
+1. Select **+ New destination**.
+
+1. Select **Azure Data Explorer** as the destination type.
+
+1. Enter your Azure Data Explorer cluster or pool URL, database name, and table name. The following table shows the service principal values to use for the authorization:
+
+ | Service principal value | Destination configuration |
+ | -- | - |
+ | appId | ClientID |
+ | tenant | Tenant ID |
+ | password | Client secret |
+
+ > [!TIP]
+ > The cluster URL for a standalone Azure Data Explorer looks like `https://<ClusterName>.<AzureRegion>.kusto.windows.net`. The cluster URL for an Azure Synapse Data Explorer pool looks like `https://<DataExplorerPoolName>.<SynapseWorkspaceName>.kusto.azuresynapse.net`.
+
+ :::image type="content" source="media/howto-export-data/export-destination.png" alt-text="Screenshot of Azure Data Explorer export destination.":::
+
+# [Managed identity](#tab/managed-identity)
+
+### Create an Azure Data Explorer destination
+
+If you don't have an existing Azure Data Explorer database to export to, follow these steps. You have two choices to create an Azure Data Explorer database:
+
+- Create a new Azure Data Explorer cluster and database. To learn more, see the [Azure Data Explorer quickstart](/azure/data-explorer/create-cluster-database-portal). Make a note of the cluster URI and the name of the database you create, you need these values in the following steps.
+- Create a new Azure Synapse Data Explorer pool and database. To learn more, see the [Azure Data Explorer quickstart](../../synapse-analytics/get-started-analyze-data-explorer.md). Make a note of the pool URI and the name of the database you create, you need these values in the following steps.
+
+To configure the managed identity that enables your IoT Central application to securely export data to your Azure resource:
+
+1. Create a managed identity for your IoT Central application to use to connect to your database. Use the Azure Cloud Shell to run the following command:
+
+ ```azurecli
+ az iot central app identity assign --name {your IoT Central app name} \
+ --resource-group {resource group name} \
+ --system-assigned
+ ```
+
+ Make a note of the `principalId` and `tenantId` output by the command. You use these values in the following step.
+
+1. Configure the database permissions to allow connections from your IoT Central application. Use the Azure Cloud Shell to run the following command:
+
+ ```azurecli
+ az kusto database-principal-assignment create --cluster-name {name of your cluster} \
+ --database-name {name of your database} \
+ --resource-group {resource group name} \
+ --principal-assignment-name {name of your IoT Central application} \
+ --principal-id {principal id from the previous step} \
+ --principal-type App --role Admin \
+ --tenant-id {tenant id from the previous step}
+ ```
+
+ > [!TIP]
+ > If you're using Azure Synapse, see [`az synapse kusto database-principal-assignment`](/cli/azure/synapse/kusto/database-principal-assignment).
+
+1. Create a table in your database with a suitable schema for the data you're exporting. The following example query creates a table called `smartvitalspatch`. To learn more, see [Transform data inside your IoT Central application for export](howto-transform-data-internally.md):
+
+ ```kusto
+ .create table smartvitalspatch (
+ EnqueuedTime:datetime,
+ Message:string,
+ Application:string,
+ Device:string,
+ Simulated:boolean,
+ Template:string,
+ Module:string,
+ Component:string,
+ Capability:string,
+ Value:dynamic
+ )
+ ```
+
+1. (Optional) To speed up ingesting data into your Azure Data Explorer database:
+
+ 1. Navigate to the **Configurations** page for your Azure Data Explorer cluster. Then enable the **Streaming ingestion** option.
+ 1. Run the following query to alter the table policy to enable streaming ingestion:
+
+ ```kusto
+ .alter table smartvitalspatch policy streamingingestion enable
+ ```
+
+To create the Azure Data Explorer destination in IoT Central on the **Data export** page:
+
+1. Select **+ New destination**.
+
+1. Select **Azure Data Explorer** as the destination type.
+
+1. Enter your Azure Data Explorer cluster or pool URL, database name, and table name. Select **System-assigned managed identity** as the authorization type.
+
+ > [!TIP]
+ > The cluster URL for a standalone Azure Data Explorer looks like `https://<ClusterName>.<AzureRegion>.kusto.windows.net`. The cluster URL for an Azure Synapse Data Explorer pool looks like `https://<DataExplorerPoolName>.<SynapseWorkspaceName>.kusto.azuresynapse.net`.
+
+ :::image type="content" source="media/howto-export-data/export-destination-managed.png" alt-text="Azure Data Explorer export destination.":::
++++++++
+## Next steps
+
+Now that you know how to export to Azure Data Explorer, a suggested next step is to learn [Export to Webhook](howto-export-to-webhook.md).
iot-central Howto Export To Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-blob-storage.md
+
+ Title: Export data to Blob Storage IoT Central | Microsoft Docs
+description: How to use the new data export to export your IoT data to Blob Storage
+++ Last updated : 04/28/2022++++
+# Export IoT data to Blob Storage
+
+This article describes how to configure data export to send data to the Blob Storage service.
++
+## Set up a Blob Storage export destination
++
+IoT Central exports data once per minute, with each file containing the batch of changes since the previous export. Exported data is saved in JSON format. The default paths to the exported data in your storage account are:
+
+- Telemetry: _{container}/{app-id}/{partition_id}/{YYYY}/{MM}/{dd}/{hh}/{mm}/{filename}_
+- Property changes: _{container}/{app-id}/{partition_id}/{YYYY}/{MM}/{dd}/{hh}/{mm}/{filename}_
+
+To browse the exported files in the Azure portal, navigate to the file and select **Edit blob**.
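To browse the exported files from the command line instead, the following Azure CLI sketch lists the blobs that IoT Central has written; the account, container, and app ID values are placeholders, and `--auth-mode login` assumes your signed-in account has a data-plane role such as **Storage Blob Data Reader**.

```azurecli
# List exported blobs under the application's folder in the container.
az storage blob list \
  --account-name {storage account name} \
  --container-name {container name} \
  --prefix {app-id}/ \
  --auth-mode login \
  --query "[].name" --output tsv
```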
+
+## Connection options
+
+Blob Storage destinations let you configure the connection with a *connection string* or a [managed identity](../../active-directory/managed-identities-azure-resources/overview.md).
++
+This article shows how to create a managed identity in the Azure portal. You can also use the Azure CLI to create a managed identity. To learn more, see [Assign a managed identity access to a resource using Azure CLI](../../active-directory/managed-identities-azure-resources/howto-assign-access-cli.md).
+++
+# [Connection string](#tab/connection-string)
+
+### Create an Azure Blob Storage destination
+
+If you don't have an existing Azure storage account to export to, follow these steps:
+
+1. Create a [new storage account in the Azure portal](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You can learn more about creating new [Azure Blob Storage accounts](../../storage/blobs/storage-quickstart-blobs-portal.md) or [Azure Data Lake Storage v2 storage accounts](../../storage/common/storage-account-create.md). Data export can only write data to storage accounts that support block blobs. The following list shows the known compatible storage account types:
+
+ |Performance Tier|Account Type|
+ |-|-|
+ |Standard|General Purpose V2|
+ |Standard|General Purpose V1|
+ |Standard|Blob storage|
+ |Premium|Block Blob storage|
+
+1. To create a container in your storage account, go to your storage account. Under **Blob Service**, select **Browse Blobs**. Select **+ Container** at the top to create a new container.
+
+1. Generate a connection string for your storage account by going to **Settings > Access keys**. Copy one of the two connection strings.
+
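The same setup can be scripted. The following Azure CLI commands are a minimal sketch that creates a compatible general-purpose v2 account, adds a container, and retrieves a connection string; all names are placeholders.

```azurecli
# Create a storage account that supports block blobs.
az storage account create \
  --name {storage account name} \
  --resource-group {resource group name} \
  --location {region} \
  --kind StorageV2 \
  --sku Standard_LRS

# Create a container for the exported data.
az storage container create \
  --name {container name} \
  --account-name {storage account name}

# Retrieve a connection string to paste into the IoT Central destination.
az storage account show-connection-string \
  --name {storage account name} \
  --resource-group {resource group name} \
  --query connectionString --output tsv
```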
+To create the Blob Storage destination in IoT Central on the **Data export** page:
+
+1. Select **+ New destination**.
+
+1. Select **Azure Blob Storage** as the destination type.
+
+1. Select **Connection string** as the authorization type.
+
+1. Paste in the connection string for your Blob Storage resource, and enter the case-sensitive container name if necessary.
+
+1. Select **Save**.
+
+# [Managed identity](#tab/managed-identity)
+
+### Create an Azure Blob Storage destination
+
+If you don't have an existing Azure storage account to export to, follow these steps:
+
+1. Create a [new storage account in the Azure portal](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You can learn more about creating new [Azure Blob Storage accounts](../../storage/blobs/storage-quickstart-blobs-portal.md) or [Azure Data Lake Storage v2 storage accounts](../../storage/common/storage-account-create.md). Data export can only write data to storage accounts that support block blobs. The following list shows the known compatible storage account types:
+
+ |Performance Tier|Account Type|
+ |-|-|
+ |Standard|General Purpose V2|
+ |Standard|General Purpose V1|
+ |Standard|Blob storage|
+ |Premium|Block Blob storage|
+
+1. To create a container in your storage account, go to your storage account. Under **Blob Service**, select **Browse Blobs**. Select **+ Container** at the top to create a new container.
++
+To configure the permissions:
+
+1. On the **Add role assignment** page, select the subscription you want to use and **Storage** as the scope. Then select your storage account as the resource.
+
+1. Select **Storage Blob Data Contributor** as the **Role**.
+
+1. Select **Save**. The managed identity for your IoT Central application is now configured.
+
+ > [!TIP]
+ > This role assignment isn't visible in the list on the **Azure role assignments** page.
+
+To further secure your blob container and only allow access from trusted services with managed identities, see [Export data to a secure destination on an Azure Virtual Network](howto-connect-secure-vnet.md).
+
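If you prefer scripting to the portal role assignment, the following Azure CLI command is a minimal sketch; it assumes the system-assigned identity is already enabled and that you have its principal ID, and the storage account and resource group names are placeholders.

```azurecli
# Assign the Storage Blob Data Contributor role to the IoT Central application's managed identity.
az role assignment create \
  --assignee-object-id {principal id of the IoT Central identity} \
  --assignee-principal-type ServicePrincipal \
  --role "Storage Blob Data Contributor" \
  --scope $(az storage account show \
      --name {storage account name} \
      --resource-group {resource group name} \
      --query id --output tsv)
```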
+To create the Blob Storage destination in IoT Central on the **Data export** page:
+
+1. Select **+ New destination**.
+
+1. Select **Azure Blob Storage** as the destination type.
+
+1. Select **System-assigned managed identity** as the authorization type.
+
+1. Enter the endpoint URI for your storage account and the case-sensitive container name. An endpoint URI looks like: `https://contosowaste.blob.core.windows.net`.
+
+1. Select **Save**.
++++
+For Blob Storage, messages are batched and exported once per minute.
+
+The following example shows an exported telemetry message:
+
+```json
+
+{
+ "applicationId": "1dffa667-9bee-4f16-b243-25ad4151475e",
+ "messageSource": "telemetry",
+ "deviceId": "1vzb5ghlsg1",
+ "schema": "default@v1",
+ "templateId": "urn:qugj6vbw5:___qbj_27r",
+ "enqueuedTime": "2020-08-05T22:26:55.455Z",
+ "telemetry": {
+ "Activity": "running",
+ "BloodPressure": {
+ "Diastolic": 7,
+ "Systolic": 71
+ },
+ "BodyTemperature": 98.73447010562934,
+ "HeartRate": 88,
+ "HeartRateVariability": 17,
+ "RespiratoryRate": 13
+ },
+ "enrichments": {
+ "userSpecifiedKey": "sampleValue"
+ },
+ "module": "VitalsModule",
+ "component": "DeviceComponent",
+ "messageProperties": {
+ "messageProp": "value"
+ }
+}
+```
++++
+For Blob Storage, messages are batched and exported once per minute.
+
+The following snippet shows this property in the message exported to Blob Storage:
+
+```json
+{
+ "applicationId":"5782ed70-b703-4f13-bda3-1f5f0f5c678e",
+ "messageSource":"telemetry",
+ "deviceId":"sample-device-01",
+ "schema":"default@v1",
+ "templateId":"urn:modelDefinition:mkuyqxzgea:e14m1ukpn",
+ "enqueuedTime":"2021-01-29T16:45:39.143Z",
+ "telemetry":{
+ "temperature":8.341033560421833
+ },
+ "messageProperties":{
+ "iothub-creation-time-utc":"2021-01-29T16:45:39.021Z"
+ },
+ "enrichments":{}
+}
+```
++
+For Blob Storage, messages are batched and exported once per minute.
+
+The following example shows an exported device connectivity message received in Azure Blob Storage.
+
+```json
+{
+ "applicationId": "1dffa667-9bee-4f16-b243-25ad4151475e",
+ "messageSource": "deviceConnectivity",
+ "messageType": "connected",
+ "deviceId": "1vzb5ghlsg1",
+ "schema": "default@v1",
+ "templateId": "urn:qugj6vbw5:___qbj_27r",
+ "enqueuedTime": "2021-04-05T22:26:55.455Z",
+ "enrichments": {
+ "userSpecifiedKey": "sampleValue"
+ }
+}
+
+```
++
+For Blob Storage, messages are batched and exported once per minute.
+
+The following example shows an exported device lifecycle message received in Azure Blob Storage.
+
+```json
+{
+ "applicationId": "1dffa667-9bee-4f16-b243-25ad4151475e",
+ "messageSource": "deviceLifecycle",
+ "messageType": "registered",
+ "deviceId": "1vzb5ghlsg1",
+ "schema": "default@v1",
+ "templateId": "urn:qugj6vbw5:___qbj_27r",
+ "enqueuedTime": "2021-01-01T22:26:55.455Z",
+ "enrichments": {
+ "userSpecifiedKey": "sampleValue"
+ }
+}
+```
++
+For Blob Storage, messages are batched and exported once per minute.
+
+The following example shows an exported device template lifecycle message received in Azure Blob Storage.
+
+```json
+{
+ "applicationId": "1dffa667-9bee-4f16-b243-25ad4151475e",
+ "messageSource": "deviceTemplateLifecycle",
+ "messageType": "created",
+ "schema": "default@v1",
+ "templateId": "urn:qugj6vbw5:___qbj_27r",
+ "enqueuedTime": "2021-01-01T22:26:55.455Z",
+ "enrichments": {
+ "userSpecifiedKey": "sampleValue"
+ }
+}
+```
+
+## Next steps
+
+Now that you know how to export to Blob Storage, a suggested next step is to learn how to [export to Service Bus](howto-export-to-service-bus.md).
iot-central Howto Export To Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-event-hubs.md
+
+ Title: Export data to Event Hubs IoT Central | Microsoft Docs
+description: How to use the new data export to export your IoT data to Event Hubs
+++ Last updated : 04/28/2022++++
+# Export IoT data to Event Hubs
+
+This article describes how to configure data export to send data to Event Hubs.
++
+## Set up an Event Hubs export destination
+
+IoT Central exports data in near real time. The data is in the message body and is in JSON format encoded as UTF-8.
+
+The annotations or system properties bag of the message contains the `iotcentral-device-id`, `iotcentral-application-id`, `iotcentral-message-source`, and `iotcentral-message-type` fields that have the same values as the corresponding fields in the message body.
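As a minimal consumer sketch, assuming the `azure-eventhub` Python SDK and a connection string with listen rights, the following prints the message body and the property bags for each exported message; the connection string and event hub name are placeholders.

```python
from azure.eventhub import EventHubConsumerClient

# Placeholder values -- substitute your own namespace connection string and event hub name.
client = EventHubConsumerClient.from_connection_string(
    conn_str="<event-hubs-namespace-connection-string>",
    consumer_group="$Default",
    eventhub_name="<event-hub-name>",
)

def on_event(partition_context, event):
    print(event.body_as_json())     # message body: UTF-8 encoded JSON
    print(event.properties)         # application properties, including the iotcentral-* fields
    print(event.system_properties)  # system properties/annotations

with client:
    client.receive(on_event=on_event, starting_position="-1")  # blocks until interrupted
```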
+
+## Connection options
+
+Event Hubs destinations let you configure the connection with a *connection string* or a [managed identity](../../active-directory/managed-identities-azure-resources/overview.md).
++
+This article shows how to create a managed identity in the Azure portal. You can also use the Azure CLI to create a managed identity. To learn more, see [Assign a managed identity access to a resource using Azure CLI](../../active-directory/managed-identities-azure-resources/howto-assign-access-cli.md).
++
+# [Connection string](#tab/connection-string)
+
+### Create an Event Hubs destination
+
+If you don't have an existing Event Hubs namespace to export to, follow these steps:
+
+1. Create a [new Event Hubs namespace in the Azure portal](https://portal.azure.com/#create/Microsoft.EventHub). You can learn more in [Azure Event Hubs docs](../../event-hubs/event-hubs-create.md).
+
+1. Create an event hub in your Event Hubs namespace. Go to your namespace, and select **+ Event Hub** at the top to create an event hub instance.
+
+1. Generate a key to use when you set up your data export in IoT Central:
+
+ - Select the event hub instance you created.
+ - Select **Settings > Shared access policies**.
+ - Create a new key or choose an existing key that has **Send** permissions.
+ - Copy either the primary or secondary connection string. You use this connection string to set up a new destination in IoT Central.
+ - Alternatively, you can generate a connection string for the entire Event Hubs namespace:
+ 1. Go to your Event Hubs namespace in the Azure portal.
+ 2. Under **Settings**, select **Shared Access Policies**.
+ 3. Create a new key or choose an existing key that has **Send** permissions.
+ 4. Copy either the primary or secondary connection string.
+
+To create the Event Hubs destination in IoT Central on the **Data export** page:
+
+1. Select **+ New destination**.
+
+1. Select **Azure Event Hubs** as the destination type.
+
+1. Select **Connection string** as the authorization type.
+
+1. Paste in the connection string for your Event Hubs resource, and enter the case-sensitive event hub name if necessary.
+
+1. Select **Save**.
+
+# [Managed identity](#tab/managed-identity)
+
+### Create an Event Hubs destination
+
+If you don't have an existing Event Hubs namespace to export to, follow these steps:
+
+1. Create a [new Event Hubs namespace in the Azure portal](https://portal.azure.com/#create/Microsoft.EventHub). You can learn more in [Azure Event Hubs docs](../../event-hubs/event-hubs-create.md).
+
+1. Create an event hub in your Event Hubs namespace. Go to your namespace, and select **+ Event Hub** at the top to create an event hub instance.
++
+To configure the permissions:
+
+1. On the **Add role assignment** page, select the scope and subscription you want to use.
+
+ > [!TIP]
+ > If your IoT Central application and event hub are in the same resource group, you can choose **Resource group** as the scope and then select the resource group.
+
+1. Select **Azure Event Hubs Data Sender** as the **Role**.
+
+1. Select **Save**. The managed identity for your IoT Central application is now configured.
+
+To further secure your event hub and only allow access from trusted services with managed identities, see [Export data to a secure destination on an Azure Virtual Network](howto-connect-secure-vnet.md).
+
+To create the Event Hubs destination in IoT Central on the **Data export** page:
+
+1. Select **+ New destination**.
+
+1. Select **Azure Event Hubs** as the destination type.
+
+1. Select **System-assigned managed identity** as the authorization type.
+
+1. Enter the host name of your Event Hubs resource. Then enter the case-sensitive event hub name. A host name looks like: `contoso-waste.servicebus.windows.net`.
+
+1. Select **Save**.
++++++++
+For Event Hubs, IoT Central exports new messages to your event hub in near real time. In the user properties (also referred to as application properties) of each message, the `iotcentral-device-id`, `iotcentral-application-id`, `iotcentral-message-source`, and `iotcentral-message-type` fields are included automatically.
+
+## Next steps
+
+Now that you know how to export to Event Hubs, a suggested next step is to learn how to [export to Azure Data Explorer](howto-export-to-azure-data-explorer.md).
iot-central Howto Export To Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-service-bus.md
+
+ Title: Export data to Service Bus IoT Central | Microsoft Docs
+description: How to use the new data export to export your IoT data to Service Bus
+++ Last updated : 04/28/2022++++
+# Export IoT data to Service Bus
+
+This article describes how to configure data export to send data to Service Bus.
++
+## Set up a Service Bus export destination
+
+Both queues and topics are supported for Azure Service Bus destinations.
+
+IoT Central exports data in near real time. The data is in the message body and is in JSON format encoded as UTF-8.
+
+The annotations or system properties bag of the message contains the `iotcentral-device-id`, `iotcentral-application-id`, `iotcentral-message-source`, and `iotcentral-message-type` fields that have the same values as the corresponding fields in the message body.
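As a minimal receiver sketch, assuming the `azure-servicebus` Python SDK, a queue destination, and a connection string with listen rights, the following prints each exported message and its properties; the connection string and queue name are placeholders.

```python
from azure.servicebus import ServiceBusClient

# Placeholder values -- substitute your own namespace connection string and queue name.
with ServiceBusClient.from_connection_string(
    conn_str="<service-bus-namespace-connection-string>"
) as client:
    with client.get_queue_receiver(queue_name="<queue-name>") as receiver:
        for message in receiver.receive_messages(max_message_count=10, max_wait_time=5):
            print(str(message))                    # message body: UTF-8 encoded JSON
            print(message.application_properties)  # includes the iotcentral-* fields
            receiver.complete_message(message)
```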
+
+## Connection options
+
+Service Bus destinations let you configure the connection with a *connection string* or a [managed identity](../../active-directory/managed-identities-azure-resources/overview.md).
++
+This article shows how to create a managed identity in the Azure portal. You can also use the Azure CLI to create a managed identity. To learn more, see [Assign a managed identity access to a resource using Azure CLI](../../active-directory/managed-identities-azure-resources/howto-assign-access-cli.md).
+
+# [Connection string](#tab/connection-string)
+
+### Create a Service Bus queue or topic destination
+
+If you don't have an existing Service Bus namespace to export to, follow these steps:
+
+1. Create a [new Service Bus namespace in the Azure portal](https://portal.azure.com/#create/Microsoft.ServiceBus.1.0.5). You can learn more in [Azure Service Bus docs](../../service-bus-messaging/service-bus-create-namespace-portal.md).
+
+1. To create a queue or topic to export to, go to your Service Bus namespace, and select **+ Queue** or **+ Topic**.
+
+1. Generate a key to use when you set up your data export in IoT Central:
+
+ - Select the queue or topic you created.
+ - Select **Settings/Shared access policies**.
+ - Create a new key or choose an existing key that has **Send** permissions.
+ - Copy either the primary or secondary connection string. You use this connection string to set up a new destination in IoT Central.
+ - Alternatively, you can generate a connection string for the entire Service Bus namespace:
+ 1. Go to your Service Bus namespace in the Azure portal.
+ 2. Under **Settings**, select **Shared Access Policies**.
+ 3. Create a new key or choose an existing key that has **Send** permissions.
+ 4. Copy either the primary or secondary connection string.
+
+To create the Service Bus destination in IoT Central on the **Data export** page:
+
+1. Select **+ New destination**.
+
+1. Select **Azure Service Bus Queue** or **Azure Service Bus Topic** as the destination type.
+
+1. Select **Connection string** as the authorization type.
+
+1. Paste in the connection string for your Service Bus resource, and enter the case-sensitive queue or topic name if necessary.
+
+1. Select **Save**.
+
+# [Managed identity](#tab/managed-identity)
+
+### Create a Service Bus queue or topic destination
+
+If you don't have an existing Service Bus namespace to export to, follow these steps:
+
+1. Create a [new Service Bus namespace in the Azure portal](https://portal.azure.com/#create/Microsoft.ServiceBus.1.0.5). You can learn more in [Azure Service Bus docs](../../service-bus-messaging/service-bus-create-namespace-portal.md).
+
+1. To create a queue or topic to export to, go to your Service Bus namespace, and select **+ Queue** or **+ Topic**.
++
+To configure the permissions:
+
+1. On the **Add role assignment** page, select the scope and subscription you want to use.
+
+ > [!TIP]
+ > If your IoT Central application and queue or topic are in the same resource group, you can choose **Resource group** as the scope and then select the resource group.
+
+1. Select **Azure Service Bus Data Sender** as the **Role**.
+
+1. Select **Save**. The managed identity for your IoT Central application is now configured.
+
+To further secure your queue or topic and only allow access from trusted services with managed identities, see [Export data to a secure destination on an Azure Virtual Network](howto-connect-secure-vnet.md).
+
+To create the Service Bus destination in IoT Central on the **Data export** page:
+
+1. Select **+ New destination**.
+
+1. Select **Azure Service Bus Queue** or **Azure Service Bus Topic** as the destination type.
+
+1. Select **System-assigned managed identity** as the authorization type.
+
+1. Enter the host name of your Service Bus resource. Then enter the case-sensitive queue or topic name. A host name looks like: `contoso-waste.servicebus.windows.net`.
+
+1. Select **Save**.
++++++++
+For Service Bus, IoT Central exports new messages to your Service Bus queue or topic in near real time. In the user properties (also referred to as application properties) of each message, the `iotcentral-device-id`, `iotcentral-application-id`, `iotcentral-message-source`, and `iotcentral-message-type` fields are included automatically.
+
+## Next steps
+
+Now that you know how to export to Service Bus, a suggested next step is to learn how to [export to Event Hubs](howto-export-to-event-hubs.md).
iot-central Howto Export To Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-webhook.md
+
+ Title: Export data to Webhook IoT Central | Microsoft Docs
+description: How to use the new data export to export your IoT data to Webhook
+++ Last updated : 04/28/2022++++
+# Export IoT data to Webhook
+
+This article describes how to configure data export to send data to a Webhook.
++
+## Set up a Webhook export destination
+
+For Webhook destinations, IoT Central exports data in near real time. The data in the message body is in the same format as for Event Hubs and Service Bus.
+
+## Create a Webhook destination
+
+You can export data to a publicly available HTTP Webhook endpoint. You can create a test Webhook endpoint using [RequestBin](https://requestbin.net/). RequestBin throttles requests when the request limit is reached:
+
+1. Open [RequestBin](https://requestbin.net/).
+1. Create a new RequestBin and copy the **Bin URL**. You use this URL when you test your data export.
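If you prefer to host your own test endpoint instead of RequestBin, the following is a minimal sketch that uses only the Python standard library; the port is a placeholder, and the endpoint still has to be publicly reachable for IoT Central to call it.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class ExportHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length).decode("utf-8")
        print(self.headers.get("Authorization"))  # token or OAuth header, if configured
        print(body)                               # exported messages as JSON
        self.send_response(200)
        self.end_headers()

# Placeholder port -- expose it publicly (for example, through a tunneling tool) before testing.
HTTPServer(("0.0.0.0", 8080), ExportHandler).serve_forever()
```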
+
+To create the Webhook destination in IoT Central on the **Data export** page:
+
+1. Select **+ New destination**.
+
+1. Select **Webhook** as the destination type.
+
+1. Paste the callback URL for your Webhook endpoint. You can optionally configure Webhook authorization and add custom headers.
+
+ - For **OAuth2.0**, only the client credentials flow is supported. When you save the destination, IoT Central communicates with your OAuth provider to retrieve an authorization token. This token is attached to the `Authorization` header for every message sent to this destination.
+ - For **Authorization token**, you can specify a token value that's directly attached to the `Authorization` header for every message sent to this destination.
+
+1. Select **Save**.
++++++
iot-central Howto Manage Data Export With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-data-export-with-rest-api.md
# How to use the IoT Central REST API to manage data exports
-The IoT Central REST API lets you develop client applications that integrate with IoT Central applications. You can use the REST API to create and manage [data exports](howto-export-data.md) in your IoT Central application.
+The IoT Central REST API lets you develop client applications that integrate with IoT Central applications. You can use the REST API to create and manage [data exports](howto-export-to-blob-storage.md) in your IoT Central application.
Every IoT Central REST API call requires an authorization header. To learn more, see [How to authenticate and authorize IoT Central REST API calls](howto-authorize-rest-api.md).
The request body has some required fields:
* `displayName`: Display name of the destination.
* `type`: Type of destination object which can be one of: `blobstorage@v1`, `dataexplorer@v1`, `eventhubs@v1`, `servicebusqueue@v1`, `servicebustopic@v1`, `webhook@v1`.
-* `connectionString`:The connection string for accessing the destination resource.
+* `connectionString`: The connection string for accessing the destination resource.
* `containerName`: For a blob storage destination, the name of the container where data should be written.

The response to this request looks like the following example:
iot-central Howto Manage Devices Individually https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-individually.md
Watch the following video to learn more about how to monitor device connection s
> [!VIDEO https://www.youtube.com/embed/EUZH_6Ihtto]
-You can include connection and disconnection events in [exports from IoT Central](howto-export-data.md#set-up-a-data-export). To learn more, see [React to IoT Hub events > Limitations for device connected and device disconnected events](../../iot-hub/iot-hub-event-grid.md#limitations-for-device-connected-and-device-disconnected-events).
+You can include connection and disconnection events in [data exports from IoT Central](howto-export-to-blob-storage.md). To learn more, see [React to IoT Hub events > Limitations for device connected and device disconnected events](../../iot-hub/iot-hub-event-grid.md#limitations-for-device-connected-and-device-disconnected-events).
## Add a device
iot-central Howto Manage Iot Central From Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-iot-central-from-cli.md
Title: Manage IoT Central from Azure CLI or PowerShell | Microsoft Docs
-description: This article describes how to create and manage your IoT Central application using the Azure CLI or PowerShell. You can view, modify, and remove the application using these tools. You can also configure a managed system identity that can you can use to setup secure data export.
+description: This article describes how to create and manage your IoT Central application using the Azure CLI or PowerShell. You can view, modify, and remove the application using these tools. You can also configure a managed system identity that you can use to set up secure data export.
Remove-AzIotCentralApp -ResourceGroupName "MyIoTCentralResourceGroup" `
## Configure a managed identity
-An IoT Central application can use a system assigned [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) to secure the connection to a [data export destination](howto-export-data.md#connection-options).
+An IoT Central application can use a system assigned [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) to secure the connection to a [data export destination](howto-export-to-blob-storage.md#connection-options).
To enable the managed identity, use either the [Azure portal - Configure a managed identity](howto-manage-iot-central-from-portal.md#configure-a-managed-identity) or the [REST API](howto-manage-iot-central-with-rest-api.md):
iot-central Howto Manage Iot Central From Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-iot-central-from-portal.md
When you configure a managed identity, the configuration includes a *scope* and
You can configure role assignments in the Azure portal or use the Azure CLI:
-* To learn more about to configure role assignments in the Azure portal for specific destinations, see [Export IoT data to cloud destinations using data export](howto-export-data.md).
+* To learn more about how to configure role assignments in the Azure portal for specific destinations, see [Export IoT data to cloud destinations using Blob Storage](howto-export-to-blob-storage.md).
* To learn more about how to configure role assignments using the Azure CLI, see [Manage IoT Central from Azure CLI or PowerShell](howto-manage-iot-central-from-cli.md).

## Monitor application health
Metrics are enabled by default for your IoT Central application and you access t
### View metrics in the Azure portal
-The following steps assume you have an [IoT Central application](./howto-create-iot-central-application.md) with some [connected devices](./tutorial-connect-device.md) or a running [data export](howto-export-data.md).
+The following steps assume you have an [IoT Central application](./howto-create-iot-central-application.md) with some [connected devices](./tutorial-connect-device.md) or a running [data export](howto-export-to-blob-storage.md).
+ To view IoT Central metrics in the portal:
Access to metrics in the Azure portal is managed by [Azure role based access con
### IoT Central metrics
-For a list of of the metrics that are currently available for IoT Central, see [Supported metrics with Azure Monitor](../../azure-monitor/essentials/metrics-supported.md#microsoftiotcentraliotapps).
+For a list of the metrics that are currently available for IoT Central, see [Supported metrics with Azure Monitor](../../azure-monitor/essentials/metrics-supported.md#microsoftiotcentraliotapps).
### Metrics and invoices
iot-central Howto Transform Data Internally https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-transform-data-internally.md
JSON output:
} ```
-To learn more about how to add an Azure Data Explorer cluster and database as an export destination, see [Create an Azure Data Explorer destination](howto-export-data.md#create-an-azure-data-explorer-destination).
+To learn more about how to add an Azure Data Explorer cluster and database as an export destination, see [Create an Azure Data Explorer destination](howto-export-to-azure-data-explorer.md).
### Scenario 2: Breaking apart a telemetry array
JSON output:
### Scenario 4: Export data to Azure Data Explorer and visualize it in Power BI
-In this scenario, you export data to Azure Data Explorer and then a use a connector to visualize the data in Power BI. To learn more about how to add an Azure Data Explorer cluster and database as an export destination, see [Create an Azure Data Explorer destination](howto-export-data.md#create-an-azure-data-explorer-destination).
+In this scenario, you export data to Azure Data Explorer and then use a connector to visualize the data in Power BI. To learn more about how to add an Azure Data Explorer cluster and database as an export destination, see [Create an Azure Data Explorer destination](howto-export-to-azure-data-explorer.md).
This scenario uses an Azure Data Explorer table with the following schema:
iot-central Howto Transform Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-transform-data.md
The following table shows three example transformation types:
||-|-|-|
| Message Format | Convert to or manipulate JSON messages. | CSV to JSON | At ingress. IoT Central only accepts valid JSON messages. To learn more, see [Telemetry, property, and command payloads](concepts-telemetry-properties-commands.md). |
| Computations | Math functions that [Azure Functions](../../azure-functions/index.yml) can execute. | Unit conversion from Fahrenheit to Celsius. | Transform using the egress pattern to take advantage of scalable device ingress through direct connection to IoT Central. Transforming the data lets you use IoT Central features such as visualizations and jobs. |
-| Message Enrichment | Enrichments from external data sources not found in device properties or telemetry. To learn more about internal enrichments, see [Export IoT data to cloud destinations using data export](howto-export-data.md) | Add weather information to messages using [location data](howto-use-location-data.md) from devices. | Transform using the egress pattern to take advantage of scalable device ingress through direct connection to IoT Central. |
+| Message Enrichment | Enrichments from external data sources not found in device properties or telemetry. To learn more about internal enrichments, see [Export IoT data to cloud destinations using Blob Storage](howto-export-to-blob-storage.md). | Add weather information to messages using [location data](howto-use-location-data.md) from devices. | Transform using the egress pattern to take advantage of scalable device ingress through direct connection to IoT Central. |
## Prerequisites
iot-central Overview Iot Central Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-security.md
Managed identities are more secure because:
To learn more, see:

-- [Export IoT data to cloud destinations using data export](howto-export-data.md)
+- [Export IoT data to cloud destinations using blob storage](howto-export-to-blob-storage.md)
- [Configure a managed identity in the Azure portal](howto-manage-iot-central-from-portal.md#configure-a-managed-identity)
- [Configure a managed identity using the Azure CLI](howto-manage-iot-central-from-cli.md#configure-a-managed-identity)
iot-central Overview Iot Central Solution Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-solution-builder.md
To learn more, see [Transform data for IoT Central](howto-transform-data.md). Fo
You can use the data export and rules capabilities in IoT Central to integrate with other services. To learn more, see:

-- [Export IoT data to cloud destinations using data export](howto-export-data.md)
+- [Export IoT data to cloud destinations using Blob Storage](howto-export-to-blob-storage.md).
- [Transform data for IoT Central](howto-transform-data.md)
- [Use workflows to integrate your Azure IoT Central application with other cloud services](howto-configure-rules-advanced.md)
- [Extend Azure IoT Central with custom rules using Stream Analytics, Azure Functions, and SendGrid](howto-create-custom-rules.md)
iot-central Overview Iot Central Tour https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-tour.md
This page lets you view and create rules based on device data. When a rule fires
:::image type="content" source="Media/overview-iot-central-tour/export.png" alt-text="Data Export":::
-Data export enables you to set up streams of data to external systems. To learn more, see the [Export your data in Azure IoT Central](./howto-export-data.md) article.
+Data export enables you to set up streams of data to external systems. To learn more, see the [Export your data in Azure IoT Central](./howto-export-to-blob-storage.md) article.
### Permissions
iot-central Overview Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central.md
Build [custom rules](tutorial-create-telemetry-rules.md) based on device state a
## Integrate with other services
-As an application platform, IoT Central lets you transform your IoT data into the business insights that drive actionable outcomes. [Rules](./tutorial-create-telemetry-rules.md), [data export](./howto-export-data.md), and the [public REST API](/learn/modules/manage-iot-central-apps-with-rest-api/) are examples of how you can integrate IoT Central with line-of-business applications:
+As an application platform, IoT Central lets you transform your IoT data into the business insights that drive actionable outcomes. [Rules](./tutorial-create-telemetry-rules.md), [data export](./howto-export-to-blob-storage.md), and the [public REST API](/learn/modules/manage-iot-central-apps-with-rest-api/) are examples of how you can integrate IoT Central with line-of-business applications:
![How IoT Central can transform your IoT data](media/overview-iot-central/transform.png)
iot-central Quick Deploy Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/quick-deploy-iot-central.md
Last updated 01/13/2022
-+ # Quickstart - Use your smartphone as a device to send telemetry to an IoT Central application
lighthouse Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/overview.md
Azure Lighthouse includes multiple ways to help streamline engagement and manage
- **Managed Service offers in Azure Marketplace**: [Offer your services to customers](concepts/managed-services-offers.md) through private or public offers, and automatically onboard them to Azure Lighthouse.

> [!TIP]
-> A similar offering, [Microsoft 365 Lighthouse](/microsoft-365/lighthouse/m365-lighthouse-overview), helps service providers onboard, monitor, and manage their Microsoft 365 customers at scale. Microsoft 365 Lighthouse is currently in preview.
+> A similar offering, [Microsoft 365 Lighthouse](/microsoft-365/lighthouse/m365-lighthouse-overview), helps service providers onboard, monitor, and manage their Microsoft 365 customers at scale.
## Pricing and availability
logic-apps Logic Apps Securing A Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-securing-a-logic-app.md
ms.suite: integration Previously updated : 03/15/2022 Last updated : 05/01/2022
logic-apps Logic Apps Using File Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-using-file-connector.md
Title: Connect to file systems on premises
-description: Connect to on-premises file systems with the File System connector through the on-premises data gateway in Azure Logic Apps.
+description: Connect to on-premises file systems from Azure Logic Apps with the File System connector.
ms.suite: integration
Last updated 03/11/2022
-# Connect to on-premises file systems with Azure Logic Apps
+# Connect to on-premises file systems from Azure Logic Apps
-With Azure Logic Apps and the File System connector, you can create automated tasks and workflows that create and manage files on an on-premises file share, for example:
+With the File System connector, you can create automated integration workflows in Azure Logic Apps that manage files on an on-premises file share, for example:
- Create, get, append, update, and delete files.
- List files in folders or root folders.
- Get file content and metadata.
- > [!IMPORTANT]
- > - The File System connector currently supports only Windows file systems on Windows operating systems.
- > - The gateway machine and the file server must exist in the same Windows domain.
- > - Mapped network drives aren't supported.
+This article shows how to connect to an on-premises file system through an example scenario where you copy a file from a Dropbox account to a file share, and then send an email. If you're new to logic apps, review [What is Azure Logic Apps?](../logic-apps/logic-apps-overview.md).
+
+## Limitations
+
+- The File System connector currently supports only Windows file systems on Windows operating systems.
+- Mapped network drives aren't supported.
+- If you have to use the on-premises data gateway, your gateway installation and file system server must exist in the same Windows domain. For more information, review [Install on-premises data gateway for Azure Logic Apps](logic-apps-gateway-install.md) and [Connect to on-premises data sources from Azure Logic Apps](logic-apps-gateway-connection.md).
+
+## Connector reference
-This article shows how you can connect to an on-premises file system as described by this example scenario: copy a file that's uploaded to Dropbox to a file share, and then send an email. To securely connect and access on-premises systems, logic apps use the [on-premises data gateway](../logic-apps/logic-apps-gateway-connection.md). If you're new to logic apps, review [What is Azure Logic Apps?](../logic-apps/logic-apps-overview.md). For connector-specific technical information, see the [File System connector reference](/connectors/filesystem/).
+For connector-specific technical information, such as triggers, actions, and limits as described by the connector's Swagger file, see the [connector's reference page](/connectors/filesystem/).
+
+> [!NOTE]
+>
+> If your logic app runs in an integration service environment (ISE), and you use this connector's ISE version,
+> review [ISE message limits](logic-apps-limits-and-config.md#message-size-limits) and
+> [Access to Azure virtual networks with an integration service environment](connect-virtual-network-vnet-isolated-environment-overview.md).
## Prerequisites
-* An Azure subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+* To create the connection to your file system, different requirements apply based on your logic app and the hosting environment:
+
+ - For Consumption logic app workflows in multi-tenant Azure Logic Apps, the *managed* File System connector requires that you use the on-premises data gateway resource in Azure to securely connect and access on-premises systems. After you install the on-premises data gateway and create the data gateway resource in Azure, you can select the data gateway resource when you create the connection to your file system from your workflow. For more information, review the following documentation:
-* Before you can connect logic apps to on-premises systems such as your file system server, you need to [install and set up an on-premises data gateway](../logic-apps/logic-apps-gateway-install.md). That way, you can specify to use your gateway installation when you create the file system connection from your logic app.
+ - [Managed connectors in Azure Logic Apps](../connectors/managed.md)
+ - [Install on-premises data gateway for Azure Logic Apps](logic-apps-gateway-install.md)
+ - [Connect to on-premises data sources from Azure Logic Apps](logic-apps-gateway-connection.md)
-* A [Dropbox account](https://www.dropbox.com/), which you can sign up for free. Your account credentials are necessary for creating a connection between your logic app and your Dropbox account.
+ - For logic app workflows in an integration service environment (ISE), you can use the connector's ISE version, which doesn't require the data gateway resource.
* Access to the computer that has the file system you want to use. For example, if you install the data gateway on the same computer as your file system, you need the account credentials for that computer.
-* An email account from a provider that's supported by Logic Apps, such as Office 365 Outlook, Outlook.com, or Gmail. For other providers, [review the connectors list here](/connectors/). This logic app uses a work or school account. If you use another email account, the overall steps are the same, but your UI might slightly differ.
+* For the example scenarios in this article, you need an email account from a provider that's supported by Azure Logic Apps, such as Office 365 Outlook, Outlook.com, or Gmail. For other providers, [review other supported email connectors](/connectors/connector-reference/connector-reference-logicapps-connectors). This logic app workflow uses the Office 365 Outlook connector with a work or school account. If you use another email account, the overall steps are the same, but your UI might slightly differ.
> [!IMPORTANT]
> If you want to use the Gmail connector, only G-Suite business accounts can use this connector without restriction in logic apps.
This article shows how you can connect to an on-premises file system as describe
> [create a Google client app to use for authentication with your Gmail connector](/connectors/gmail/#authentication-and-bring-your-own-application).
> For more information, see [Data security and privacy policies for Google connectors in Azure Logic Apps](../connectors/connectors-google-data-security-privacy-policy.md).
-* Basic knowledge about [how to create logic apps](../logic-apps/quickstart-create-first-logic-app-workflow.md). For this example, you need a blank logic app.
+* For the example File System action scenario, you need a [Dropbox account](https://www.dropbox.com/), which you can sign up for free.
+
+* Basic knowledge about [how to create logic apps](../logic-apps/quickstart-create-first-logic-app-workflow.md). To add any trigger, you have to start with a blank workflow.
+
+<a name="add-file-system-trigger"></a>
+
+## Add a File System trigger
+
+1. In the [Azure portal](https://portal.azure.com), open your logic app in the workflow designer.
+
+1. On the designer, under the search box, select **All**. In the search box, enter **file system**. From the triggers list, select the File System trigger that you want. This example continues with the trigger named **When a file is created**.
+
+ ![Screenshot showing Azure portal, designer for Consumption logic app, search box with "file system", and File System trigger selected.](media/logic-apps-using-file-connector/select-file-system-trigger-consumption.png)
+
+1. If you're prompted to create your file system server connection, provide the following information as required:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Connection name** | Yes | <*connection-name*> | The name to use for your connection |
+ | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. |
+ | **Authentication Type** | No | <*auth-type*> | The type of authentication that your file system server uses, which is **Windows** |
+ | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. <br><br>- For the ISE-based File System connector, use the forward slash instead (**`/`**): <br><br>- **<*domain*>/<*username*>** <br>- **<*local-computer*>/<*username*>** |
+ | **Password** | Yes | <*password*> | The password for the computer where you have your file system |
+ | **gateway** | No | - <*Azure-subscription*> <br>- <*gateway-resource-name*> | This section applies only to the managed File System connector: <br><br>- **Subscription**: The Azure subscription associated with the data gateway resource <br>- **Connection Gateway**: The data gateway resource |
+ |||||
+
+ The following example shows the connection information for the managed File System trigger:
+
+ ![Screenshot showing connection information for managed File System trigger.](media/logic-apps-using-file-connector/file-system-connection-consumption.png)
+
+ The following example shows the connection information for the ISE-based File System trigger:
+
+ ![Screenshot showing connection information for ISE-based File System trigger.](media/logic-apps-using-file-connector/file-system-connection-ise.png)
+
+1. After you provide the required information for your connection, select **Create**.
+
+ Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected trigger.
+
+1. Continue building your workflow.
+
+ 1. Provide the required information for your trigger.
-## Add trigger
+ For this example, select the folder path on your file system server to check for a newly created file. Specify the number of files to return and how often you want to check.
+ ![Screenshot showing the "When a file is created" trigger, which checks for a newly created file on the file system server.](media/logic-apps-using-file-connector/file-system-trigger-when-file-created.png)
-1. Sign in to the [Azure portal](https://portal.azure.com), and open your logic app in Logic App Designer, if not open already.
+ 1. To test your workflow, add an Outlook action that sends you an email when a file is created on the file system in the specified folder. Enter the email recipients, subject, and body. For testing, you can use your own email address.
-1. In the search box, enter "dropbox" as your filter. From the triggers list, select this trigger: **When a file is created**
+ ![Screenshot showing an action that sends email when a new file is created on the file system server.](media/logic-apps-using-file-connector/file-system-trigger-send-email.png)
- ![Select Dropbox trigger](media/logic-apps-using-file-connector/select-dropbox-trigger.png)
+ > [!TIP]
+ >
+ > To add outputs from previous steps in the workflow, click inside the trigger's edit boxes.
+ > When the dynamic content list appears, select from the available outputs.
-1. Sign in with your Dropbox account credentials, and authorize access to your Dropbox data for Azure Logic Apps.
+1. Save your logic app. To test your workflow, add a file to the folder that your trigger monitors.
-1. Provide the required information for your trigger.
+ If successful, your workflow sends an email about the new file.
- ![Dropbox trigger](media/logic-apps-using-file-connector/dropbox-trigger.png)
+<a name="add-file-system-action"></a>
-## Add actions
+## Add a File System action
-1. Under the trigger, choose **Next step**. In the search box, enter "file system" as your filter. From the actions list, select this action: **Create file**
+1. In the [Azure portal](https://portal.azure.com), open your logic app in the workflow designer, if not already open.
- ![Find File System connector](media/logic-apps-using-file-connector/find-file-system-action.png)
+1. After the last step or between steps in your workflow, add a new step or action.
-1. If you don't already have a connection to your file system, you're prompted to create a connection.
+ This example uses a Dropbox trigger and follows that step with a File System action.
- ![Create connection](media/logic-apps-using-file-connector/file-system-connection.png)
+1. Under the **Choose an operation** search box, select **All**. In the search box, enter **file system**.
+
+1. From the actions list, select the File System action that you want. This example continues with the action named **Create file**.
+
+ ![Screenshot showing Azure portal, designer for Consumption logic app, search box with "file system", and File System action selected.](media/logic-apps-using-file-connector/select-file-system-action-consumption.png)
+
+1. If you're prompted to create your file system server connection, provide the following information as required:
| Property | Required | Value | Description |
- | -- | -- | -- | -- |
- | **Connection Name** | Yes | <*connection-name*> | The name you want for your connection |
- | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, for example, if you installed your on-premises data gateway such as a local folder on the computer where the on-premises data gateway is installed, or the folder for a network share that the computer can access. <p>For example: `\\PublicShare\\DropboxFiles` <p>The root folder is the main parent folder, which is used for relative paths for all file-related actions. |
- | **Authentication Type** | No | <*auth-type*> | The type of authentication that your file system uses: **Windows** |
- | **Username** | Yes | <*domain*>\\<*username*> <p>-or- <p><*local-computer*>\\<*username*> | The username for the computer where you have your file system folder. <p>If your file system folder is on the same computer as the on-premises data gateway, you can use <*local-computer*>\\<*username*>. |
- | **Password** | Yes | <*your-password*> | The password for the computer where you have your file system |
- | **gateway** | Yes | <*installed-gateway-name*> | The name for your previously installed gateway |
+ |-|-|-|-|
+ | **Connection name** | Yes | <*connection-name*> | The name to use for your connection |
+ | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. |
+ | **Authentication Type** | No | <*auth-type*> | The type of authentication that your file system server uses, which is **Windows** |
+ | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. <br><br>- For the ISE-based File System connector, use the forward slash instead (**`/`**): <br><br>- **<*domain*>/<*username*>** <br>- **<*local-computer*>/<*username*>** |
+ | **Password** | Yes | <*password*> | The password for the computer where you have your file system |
+ | **gateway** | No | - <*Azure-subscription*> <br>- <*gateway-resource-name*> | This section applies only to the managed File System connector: <br><br>- **Subscription**: The Azure subscription associated with the data gateway resource <br>- **Connection Gateway**: The data gateway resource |
|||||
-1. When you're done, choose **Create**.
+ The following example shows the connection information for the managed File System action:
- Logic Apps configures and tests your connection, making sure that the connection works properly. If the connection is set up correctly, options appear for the action that you previously selected.
+ ![Screenshot showing connection information for managed File System action.](media/logic-apps-using-file-connector/file-system-connection-consumption.png)
-1. In the **Create file** action, provide the details for copying files from Dropbox to the root folder in your on-premises file share. To add outputs from previous steps, click inside the boxes, and select from available fields when the dynamic content list appears.
+ The following example shows the connection information for the ISE-based File System action:
- ![Create file action](media/logic-apps-using-file-connector/create-file-filled.png)
+ ![Screenshot showing connection information for ISE-based File System action.](media/logic-apps-using-file-connector/file-system-connection-ise.png)
-1. Now, add an Outlook action that sends an email so the appropriate users know about the new file. Enter the recipients, title, and body of the email. For testing, you can use your own email address.
+1. After you provide the required information for your connection, select **Create**.
- ![Send email action](media/logic-apps-using-file-connector/send-email.png)
+ Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected action.
-1. Save your logic app. Test your app by uploading a file to Dropbox.
+1. Continue building your workflow.
- Your logic app should copy the file to your on-premises file share, and send the recipients an email about the copied file.
+ 1. Provide the required information for your action.
-## Connector reference
+ For this example, select the folder path on your file system server to use, which is the root folder here. Enter the file name and content, based on the file uploaded to Dropbox.
-For more technical details about this connector, such as triggers, actions, and limits as described by the connector's Swagger file, see the [connector's reference page](/connectors/fileconnector/).
+ ![Screenshot showing the "Create file" action, which creates a file on the file system server, based on a file uploaded to Dropbox.](media/logic-apps-using-file-connector/file-system-action-create-file.png)
-> [!NOTE]
-> For logic apps in an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md),
-> this connector's ISE-labeled version uses the [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) instead.
+ > [!TIP]
+ >
+ > To add outputs from previous steps in the workflow, click inside the trigger's edit boxes.
+ > When the dynamic content list appears, select from the available outputs.
+
+ 1. To test your workflow, add an Outlook action that sends you an email when the File System action creates a file. Enter the email recipients, subject, and body. For testing, you can use your own email address.
+
+ ![Screenshot showing an action that sends email after a new file is created on the file system server.](media/logic-apps-using-file-connector/file-system-action-send-email.png)
+
+1. Save your logic app. Test your workflow by uploading a file to Dropbox.
+
+ If successful, your workflow creates a file on your file system server, based on the uploaded file in Dropbox, and sends an email about the created file.
## Next steps
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-management-and-deployment.md
Title: 'MLOps: ML model management'
+ Title: 'MLOps: Machine learning model management'
-description: 'Learn about model management (MLOps) with Azure Machine Learning. Deploy, manage, track lineage and monitor your models to continuously improve them. '
+description: 'Learn about model management (MLOps) with Azure Machine Learning. Deploy, manage, track lineage, and monitor your models to continuously improve them.'
Last updated 11/04/2021
# MLOps: Model management, deployment, lineage, and monitoring with Azure Machine Learning
-In this article, learn about how do Machine Learning Operations (MLOps) in Azure Machine Learning to manage the lifecycle of your models. MLOps improves the quality and consistency of your machine learning solutions.
+In this article, you'll learn how to use machine learning operations (MLOps) in Azure Machine Learning to manage the lifecycle of your models. MLOps improves the quality and consistency of your machine learning solutions.
## What is MLOps?
-Machine Learning Operations (MLOps) is based on [DevOps](https://azure.microsoft.com/overview/what-is-devops/) principles and practices that increase the efficiency of workflows. For example, continuous integration, delivery, and deployment. MLOps applies these principles to the machine learning process, with the goal of:
+MLOps is based on [DevOps](https://azure.microsoft.com/overview/what-is-devops/) principles and practices that increase the efficiency of workflows. Examples include continuous integration, delivery, and deployment. MLOps applies these principles to the machine learning process, with the goal of:
-* Faster experimentation and development of models
-* Faster deployment of models into production
-* Quality assurance and end-to-end lineage tracking
+* Faster experimentation and development of models.
+* Faster deployment of models into production.
+* Quality assurance and end-to-end lineage tracking.
-## MLOps in Azure Machine Learning
+## MLOps in Machine Learning
-Azure Machine Learning provides the following MLOps capabilities:
+Machine Learning provides the following MLOps capabilities:
-- **Create reproducible ML pipelines**. Machine Learning pipelines allow you to define repeatable and reusable steps for your data preparation, training, and scoring processes.
-- **Create reusable software environments** for training and deploying models.
-- **Register, package, and deploy models from anywhere**. You can also track associated metadata required to use the model.
-- **Capture the governance data for the end-to-end ML lifecycle**. The logged lineage information can include who is publishing models, why changes were made, and when models were deployed or used in production.
-- **Notify and alert on events in the ML lifecycle**. For example, experiment completion, model registration, model deployment, and data drift detection.
-- **Monitor ML applications for operational and ML-related issues**. Compare model inputs between training and inference, explore model-specific metrics, and provide monitoring and alerts on your ML infrastructure.
-- **Automate the end-to-end ML lifecycle with Azure Machine Learning and Azure Pipelines**. Using pipelines allows you to frequently update models, test new models, and continuously roll out new ML models alongside your other applications and services.
+- **Create reproducible machine learning pipelines.** Use machine learning pipelines to define repeatable and reusable steps for your data preparation, training, and scoring processes.
+- **Create reusable software environments.** Use these environments for training and deploying models.
+- **Register, package, and deploy models from anywhere.** You can also track associated metadata required to use the model.
+- **Capture the governance data for the end-to-end machine learning lifecycle.** The logged lineage information can include who is publishing models and why changes were made. It can also include when models were deployed or used in production.
+- **Notify and alert on events in the machine learning lifecycle.** Event examples include experiment completion, model registration, model deployment, and data drift detection.
+- **Monitor machine learning applications for operational and machine learning-related issues.** Compare model inputs between training and inference. Explore model-specific metrics. Provide monitoring and alerts on your machine learning infrastructure.
+- **Automate the end-to-end machine learning lifecycle with Machine Learning and Azure Pipelines.** By using pipelines, you can frequently update models. You can also test new models. You can continually roll out new machine learning models alongside your other applications and services.
-For more information on MLOps, see [Machine Learning DevOps (MLOps)](/azure/cloud-adoption-framework/ready/azure-best-practices/ai-machine-learning-mlops).
+For more information on MLOps, see [Machine learning DevOps](/azure/cloud-adoption-framework/ready/azure-best-practices/ai-machine-learning-mlops).
-## Create reproducible ML pipelines
+## Create reproducible machine learning pipelines
-Use ML pipelines from Azure Machine Learning to stitch together all of the steps involved in your model training process.
+Use machine learning pipelines from Machine Learning to stitch together all the steps in your model training process.
-An ML pipeline can contain steps from data preparation to feature extraction to hyperparameter tuning to model evaluation. For more information, see [ML pipelines](concept-ml-pipelines.md).
+A machine learning pipeline can contain steps from data preparation to feature extraction to hyperparameter tuning to model evaluation. For more information, see [Machine learning pipelines](concept-ml-pipelines.md).
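As a rough sketch of such a pipeline with the Azure Machine Learning Python SDK (v1), the following assumes a workspace configuration file, a compute target named `cpu-cluster`, and placeholder `prep.py` and `train.py` scripts.

```python
from azureml.core import Experiment, Workspace
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()

# Each step is a repeatable, reusable unit of work.
prep_step = PythonScriptStep(
    name="prepare-data", script_name="prep.py",
    source_directory="./src", compute_target="cpu-cluster")
train_step = PythonScriptStep(
    name="train-model", script_name="train.py",
    source_directory="./src", compute_target="cpu-cluster")
train_step.run_after(prep_step)  # run training only after data preparation completes

pipeline = Pipeline(workspace=ws, steps=[prep_step, train_step])
Experiment(ws, "mlops-demo").submit(pipeline)
```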
-If you use the [Designer](concept-designer.md) to create your ML pipelines, you may at any time click the **"..."** at the top-right of the Designer page and then select **Clone**. Cloning your pipeline allows you to iterate your pipeline design without losing your old versions.
+If you use the [designer](concept-designer.md) to create your machine learning pipelines, you can at any time select the **...** icon in the upper-right corner of the designer page. Then select **Clone**. When you clone your pipeline, you can iterate your pipeline design without losing your old versions.
## Create reusable software environments
-Azure Machine Learning environments allow you to track and reproduce your projects' software dependencies as they evolve. Environments allow you to ensure that builds are reproducible without manual software configurations.
+By using Machine Learning environments, you can track and reproduce your projects' software dependencies as they evolve. You can use environments to ensure that builds are reproducible without manual software configurations.
-Environments describe the pip and Conda dependencies for your projects, and can be used for both training and deployment of models. For more information, see [What are Azure Machine Learning environments](concept-environments.md).
+Environments describe the pip and conda dependencies for your projects. You can use them for training and deployment of models. For more information, see [What are Machine Learning environments?](concept-environments.md).
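A minimal sketch of defining and registering such an environment with the Python SDK (v1) might look like the following; the environment name and package pins are placeholders.

```python
from azureml.core import Environment, Workspace
from azureml.core.conda_dependencies import CondaDependencies

ws = Workspace.from_config()

# Declare the pip and conda dependencies once, then reuse the environment for training and deployment.
env = Environment(name="training-env")
env.python.conda_dependencies = CondaDependencies.create(
    python_version="3.8",
    pip_packages=["scikit-learn==1.0.2", "pandas"],
)
env.register(workspace=ws)  # registered environments are versioned in the workspace
```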
## Register, package, and deploy models from anywhere
-### Register and track ML models
+The following sections discuss how to register, package, and deploy models.
-Model registration allows you to store and version your models in the Azure cloud, in your workspace. The model registry makes it easy to organize and keep track of your trained models.
+### Register and track machine learning models
+
+With model registration, you can store and version your models in the Azure cloud, in your workspace. The model registry makes it easy to organize and keep track of your trained models.
> [!TIP]
-> A registered model is a logical container for one or more files that make up your model. For example, if you have a model that is stored in multiple files, you can register them as a single model in your Azure Machine Learning workspace. After registration, you can then download or deploy the registered model and receive all the files that were registered.
+> A registered model is a logical container for one or more files that make up your model. For example, if you have a model that's stored in multiple files, you can register them as a single model in your Machine Learning workspace. After registration, you can then download or deploy the registered model and receive all the files that were registered.
-Registered models are identified by name and version. Each time you register a model with the same name as an existing one, the registry increments the version. Additional metadata tags can be provided during registration. These tags are then used when searching for a model. Azure Machine Learning supports any model that can be loaded using Python 3.5.2 or higher.
+Registered models are identified by name and version. Each time you register a model with the same name as an existing one, the registry increments the version. More metadata tags can be provided during registration. These tags are then used when you search for a model. Machine Learning supports any model that can be loaded by using Python 3.5.2 or higher.
> [!TIP]
-> You can also register models trained outside Azure Machine Learning.
+> You can also register models trained outside Machine Learning.
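A minimal sketch of registering a locally trained model with the v1 Python SDK is shown below; the file path, model name, and tags are placeholders.

```python
from azureml.core import Workspace
from azureml.core.model import Model

ws = Workspace.from_config()

# Registering the same model name again increments the version automatically.
model = Model.register(
    workspace=ws,
    model_path="outputs/model.pkl",        # local file or folder that makes up the model
    model_name="sklearn-regression",       # placeholder registry name
    tags={"framework": "scikit-learn", "stage": "dev"},
    description="Model trained outside Azure Machine Learning",
)
print(model.name, model.version)
```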
-You can't delete a registered model that is being used in an active deployment.
-For more information, see the register model section of [Deploy models](how-to-deploy-and-where.md#registermodel).
+You can't delete a registered model that's being used in an active deployment.
+For more information, see the "Register model" section of [Deploy models](how-to-deploy-and-where.md#registermodel).
> [!IMPORTANT]
-> When using Filter by `Tags` option on the Models page of Azure Machine Learning Studio, instead of using `TagName : TagValue` customers should use `TagName=TagValue` (without space)
+> When you use the **Filter by** `Tags` option on the **Models** page of Azure Machine Learning Studio, instead of using `TagName : TagValue`, use `TagName=TagValue` without spaces.
-Azure Machine Learning can help you understand the CPU and memory requirements of the service that will be created when you deploy your model. Profiling tests the service that runs your model and returns information such as the CPU usage, memory usage, and response latency. It also provides a CPU and memory recommendation based on the resource usage.
-For more information, see the profiling section of [Deploy models](how-to-deploy-profile-model.md).
+Machine Learning can help you understand the CPU and memory requirements of the service that's created when you deploy your model. Profiling tests the service that runs your model and returns information like CPU usage, memory usage, and response latency. It also provides a CPU and memory recommendation based on the resource usage.
+
+For more information, see [Profile your model to determine resource utilization](how-to-deploy-profile-model.md).
### Package and debug models
-Before deploying a model into production, it is packaged into a Docker image. In most cases, image creation happens automatically in the background during deployment. You can manually specify the image.
+Before you deploy a model into production, it's packaged into a Docker image. In most cases, image creation happens automatically in the background during deployment. You can manually specify the image.
If you run into problems with the deployment, you can deploy on your local development environment for troubleshooting and debugging.
For more information, see [Deploy models](how-to-deploy-and-where.md#registermod
### Convert and optimize models
-Converting your model to [Open Neural Network Exchange](https://onnx.ai) (ONNX) may improve performance. On average, converting to ONNX can yield a 2x performance increase.
+Converting your model to [Open Neural Network Exchange](https://onnx.ai) (ONNX) might improve performance. On average, converting to ONNX can double performance.
-For more information on ONNX with Azure Machine Learning, see the [Create and accelerate ML models](concept-onnx.md) article.
+For more information on ONNX with Machine Learning, see [Create and accelerate machine learning models](concept-onnx.md).
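For example, a scikit-learn model can be converted with the `skl2onnx` package. This is only one of several available converters, and the model and declared input shape below are illustrative.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

X, y = load_iris(return_X_y=True)
clf = RandomForestClassifier(n_estimators=10).fit(X, y)

# Declare the expected input shape (here, batches of 4 float features) and convert.
onnx_model = convert_sklearn(clf, initial_types=[("input", FloatTensorType([None, 4]))])

with open("model.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())
```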
### Use models
-Trained machine learning models are deployed as web services in the cloud or locally. Deployments use CPU, GPU, or field-programmable gate arrays (FPGA) for inferencing. You can also use models from Power BI.
+Trained machine learning models are deployed as web services in the cloud or locally. Deployments use CPU, GPU, or field-programmable gate arrays for inferencing. You can also use models from Power BI.
-When using a model as a web service, you provide the following items:
+When you use a model as a web service, you provide the following items:
-* The model(s) that are used to score data submitted to the service/device.
-* An entry script. This script accepts requests, uses the model(s) to score the data, and return a response.
-* An Azure Machine Learning environment that describes the pip and Conda dependencies required by the model(s) and entry script.
-* Any additional assets such as text, data, etc. that are required by the model(s) and entry script.
+* The models that are used to score data submitted to the service or device.
+* An entry script. This script accepts requests, uses the models to score the data, and returns a response.
+* A Machine Learning environment that describes the pip and conda dependencies required by the models and entry script.
+* Any other assets such as text and data that are required by the models and entry script.
-You also provide the configuration of the target deployment platform. For example, the VM family type, available memory, and number of cores when deploying to Azure Kubernetes Service.
+You also provide the configuration of the target deployment platform. Examples include the VM family type, available memory, and the number of cores when you deploy to Azure Kubernetes Service.
-When the image is created, components required by Azure Machine Learning are also added. For example, assets needed to run the web service.
+When the image is created, components required by Machine Learning are also added. An example is the assets needed to run the web service.
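The entry script mentioned above typically follows the `init()`/`run()` contract shown in this sketch; the model file name and request payload shape are assumptions for illustration.

```python
# score.py - entry script loaded by the web service.
import json
import os

import joblib
import numpy as np


def init():
    # Called once when the service starts: load the registered model from disk.
    global model
    model_path = os.path.join(os.getenv("AZUREML_MODEL_DIR"), "model.pkl")
    model = joblib.load(model_path)


def run(raw_data):
    # Called for every request: parse the JSON payload, score it, and return predictions.
    data = np.array(json.loads(raw_data)["data"])
    predictions = model.predict(data)
    return predictions.tolist()
```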
#### Batch scoring
-Batch scoring is supported through ML pipelines. For more information, see [Batch predictions on big data](./tutorial-pipeline-batch-scoring-classification.md).
+
+Batch scoring is supported through machine learning pipelines. For more information, see [Batch predictions on big data](./tutorial-pipeline-batch-scoring-classification.md).
#### Real-time web services
-You can use your models in **web services** with the following compute targets:
+You can use your models in web services with the following compute targets:
-* Azure Container Instance
+* Azure Container Instances
* Azure Kubernetes Service
* Local development environment

To deploy the model as a web service, you must provide the following items:

* The model or ensemble of models.
-* Dependencies required to use the model. For example, a script that accepts requests and invokes the model, conda dependencies, etc.
+* Dependencies required to use the model. Examples are a script that accepts requests and invokes the model and conda dependencies.
* Deployment configuration that describes how and where to deploy the model.

For more information, see [Deploy models](how-to-deploy-and-where.md).

#### Controlled rollout
-When deploying to Azure Kubernetes Service, you can use controlled rollout to enable the following scenarios:
+When you deploy to Azure Kubernetes Service, you can use controlled rollout to enable the following scenarios:
-* Create multiple versions of an endpoint for a deployment
+* Create multiple versions of an endpoint for a deployment.
* Perform A/B testing by routing traffic to different versions of the endpoint.
* Switch between endpoint versions by updating the traffic percentage in endpoint configuration.
-For more information, see [Controlled rollout of ML models](./how-to-safely-rollout-managed-endpoints.md).
+For more information, see [Controlled rollout of machine learning models](./how-to-safely-rollout-managed-endpoints.md).
### Analytics
-Microsoft Power BI supports using machine learning models for data analytics. For more information, see [Azure Machine Learning integration in Power BI (preview)](/power-bi/service-machine-learning-integration).
+Microsoft Power BI supports using machine learning models for data analytics. For more information, see [Machine Learning integration in Power BI (preview)](/power-bi/service-machine-learning-integration).
## Capture the governance data required for MLOps
-Azure ML gives you the capability to track the end-to-end audit trail of all of your ML assets by using metadata.
+Machine Learning gives you the capability to track the end-to-end audit trail of all your machine learning assets by using metadata. For example:
-- Azure ML [integrates with Git](how-to-set-up-training-targets.md#gitintegration) to track information on which repository / branch / commit your code came from.
-- [Azure ML Datasets](how-to-create-register-datasets.md) help you track, profile, and version data.
-- [Interpretability](how-to-machine-learning-interpretability.md) allows you to explain your models, meet regulatory compliance, and understand how models arrive at a result for given input.
-- Azure ML Run history stores a snapshot of the code, data, and computes used to train a model.
-- The Azure ML Model Registry captures all of the metadata associated with your model (which experiment trained it, where it is being deployed, if its deployments are healthy).
-- [Integration with Azure](how-to-use-event-grid.md) allows you to act on events in the ML lifecycle. For example, model registration, deployment, data drift, and training (run) events.
+- Machine Learning [integrates with Git](how-to-set-up-training-targets.md#gitintegration) to track information on which repository, branch, and commit your code came from.
+- [Machine Learning datasets](how-to-create-register-datasets.md) help you track, profile, and version data.
+- [Interpretability](how-to-machine-learning-interpretability.md) allows you to explain your models, meet regulatory compliance, and understand how models arrive at a result for specific input.
+- Machine Learning Run history stores a snapshot of the code, data, and computes used to train a model.
+- The Machine Learning Model Registry captures all the metadata associated with your model. For example, metadata includes which experiment trained it, where it's being deployed, and if its deployments are healthy.
+- [Integration with Azure](how-to-use-event-grid.md) allows you to act on events in the machine learning lifecycle. Examples are model registration, deployment, data drift, and training (run) events.
> [!TIP]
-> While some information on models and datasets is automatically captured, you can add additional information by using __tags__. When looking for registered models and datasets in your workspace, you can use tags as a filter.
+> While some information on models and datasets is automatically captured, you can add more information by using _tags_. When you look for registered models and datasets in your workspace, you can use tags as a filter.
>
-> Associating a dataset with a registered model is an optional step. For information on referencing a dataset when registering a model, see the [Model](/python/api/azureml-core/azureml.core.model%28class%29) class reference.
+> Associating a dataset with a registered model is an optional step. For information on how to reference a dataset when you register a model, see the [Model](/python/api/azureml-core/azureml.core.model%28class%29) class reference.
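For example, tag-based filtering with the v1 Python SDK might look like the following sketch; the tag key and value are placeholders.

```python
from azureml.core import Workspace
from azureml.core.model import Model

ws = Workspace.from_config()

# List registered models that carry a specific tag; each filter entry can be a
# key or a [key, value] pair.
for model in Model.list(ws, tags=[["stage", "production"]]):
    print(model.name, model.version, model.tags)
```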
+## Notify, automate, and alert on events in the machine learning lifecycle
-## Notify, automate, and alert on events in the ML lifecycle
-Azure ML publishes key events to Azure Event Grid, which can be used to notify and automate on events in the ML lifecycle. For more information, please see [this document](how-to-use-event-grid.md).
+Machine Learning publishes key events to Azure Event Grid, which can be used to notify and automate on events in the machine learning lifecycle. For more information, see [Use Event Grid](how-to-use-event-grid.md).
-
-## Monitor for operational & ML issues
+## Monitor for operational and machine learning issues
Monitoring enables you to understand what data is being sent to your model, and the predictions that it returns.
-This information helps you understand how your model is being used. The collected input data may also be useful in training future versions of the model.
+This information helps you understand how your model is being used. The collected input data might also be useful in training future versions of the model.
-For more information, see [How to enable model data collection](how-to-enable-data-collection.md).
+For more information, see [Enable model data collection](how-to-enable-data-collection.md).
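A rough sketch of what collection can look like inside an entry script is shown below. It assumes the `azureml-monitoring` package is installed in the service environment; the model name and feature names are placeholders, and data only lands in storage when collection is enabled on the deployed service.

```python
from azureml.monitoring import ModelDataCollector
import numpy as np

# Normally created once in the entry script's init().
inputs_dc = ModelDataCollector("my-model", designation="inputs",
                               feature_names=["feature_1", "feature_2"])
predictions_dc = ModelDataCollector("my-model", designation="predictions",
                                    feature_names=["prediction"])

# In run(), pass each scored batch and its predictions to the collectors.
inputs_dc.collect(np.array([[0.1, 0.2]]))
predictions_dc.collect(np.array([1]))
```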
## Retrain your model on new data
-Often, you'll want to validate your model, update it, or even retrain it from scratch, as you receive new information. Sometimes, receiving new data is an expected part of the domain. Other times, as discussed in [Detect data drift (preview) on datasets](how-to-monitor-datasets.md), model performance can degrade in the face of such things as changes to a particular sensor, natural data changes such as seasonal effects, or features shifting in their relation to other features.
+Often, you'll want to validate your model, update it, or even retrain it from scratch, as you receive new information. Sometimes, receiving new data is an expected part of the domain. Other times, as discussed in [Detect data drift (preview) on datasets](how-to-monitor-datasets.md), model performance can degrade because of:
+
+- Changes to a particular sensor.
+- Natural data changes such as seasonal effects.
+- Features shifting in their relation to other features.
-There is no universal answer to "How do I know if I should retrain?" but Azure ML event and monitoring tools previously discussed are good starting points for automation. Once you have decided to retrain, you should:
+There's no universal answer to "How do I know if I should retrain?" The Machine Learning event and monitoring tools previously discussed are good starting points for automation. After you've decided to retrain, you should:
-- Preprocess your data using a repeatable, automated process
-- Train your new model
-- Compare the outputs of your new model to those of your old model
-- Use predefined criteria to choose whether to replace your old model
+- Preprocess your data by using a repeatable, automated process.
+- Train your new model.
+- Compare the outputs of your new model to the outputs of your old model.
+- Use predefined criteria to choose whether to replace your old model.
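The comparison step is usually a small, predefined check. The following sketch is framework agnostic; the metric name, values, and improvement threshold are placeholders for your own evaluation output.

```python
def should_replace(new_metrics: dict, old_metrics: dict, min_improvement: float = 0.02) -> bool:
    """Replace the old model only if the new one lowers RMSE by a predefined margin."""
    return new_metrics["rmse"] < old_metrics["rmse"] * (1 - min_improvement)


# Metrics produced by your own evaluation step (values are illustrative).
old_metrics = {"rmse": 4.10}
new_metrics = {"rmse": 3.85}

if should_replace(new_metrics, old_metrics):
    print("Register the new model and roll it out.")
else:
    print("Keep the current model.")
```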
-A theme of the above steps is that your retraining should be automated, not ad hoc. [Azure Machine Learning pipelines](concept-ml-pipelines.md) are a good answer for creating workflows relating to data preparation, training, validation, and deployment. Read [Retrain models with Azure Machine Learning designer](how-to-retrain-designer.md) to see how pipelines and the Azure Machine Learning designer fit into a retraining scenario.
+A theme of the preceding steps is that your retraining should be automated, not improvised. [Machine Learning pipelines](concept-ml-pipelines.md) are a good answer for creating workflows that relate to data preparation, training, validation, and deployment. Read [Retrain models with Machine Learning designer](how-to-retrain-designer.md) to see how pipelines and the Machine Learning designer fit into a retraining scenario.
-## Automate the ML lifecycle
+## Automate the machine learning lifecycle
-You can use GitHub and Azure Pipelines to create a continuous integration process that trains a model. In a typical scenario, when a Data Scientist checks a change into the Git repo for a project, the Azure Pipeline will start a training run. The results of the run can then be inspected to see the performance characteristics of the trained model. You can also create a pipeline that deploys the model as a web service.
+You can use GitHub and Azure Pipelines to create a continuous integration process that trains a model. In a typical scenario, when a data scientist checks a change into the Git repo for a project, Azure Pipelines starts a training run. The results of the run can then be inspected to see the performance characteristics of the trained model. You can also create a pipeline that deploys the model as a web service.
-The [Azure Machine Learning extension](https://marketplace.visualstudio.com/items?itemName=ms-air-aiagility.vss-services-azureml) makes it easier to work with Azure Pipelines. It provides the following enhancements to Azure Pipelines:
+The [Machine Learning extension](https://marketplace.visualstudio.com/items?itemName=ms-air-aiagility.vss-services-azureml) makes it easier to work with Azure Pipelines. It provides the following enhancements to Azure Pipelines:
-* Enables workspace selection when defining a service connection.
+* Enables workspace selection when you define a service connection.
* Enables release pipelines to be triggered by trained models created in a training pipeline.
-For more information on using Azure Pipelines with Azure Machine Learning, see the following links:
+For more information on using Azure Pipelines with Machine Learning, see:
-* [Continuous integration and deployment of ML models with Azure Pipelines](/azure/devops/pipelines/targets/azure-machine-learning)
-* [Azure Machine Learning MLOps](https://aka.ms/mlops) repository
-* [Azure Machine Learning MLOpsPython](https://github.com/Microsoft/MLOpspython) repository
+* [Continuous integration and deployment of machine learning models with Azure Pipelines](/azure/devops/pipelines/targets/azure-machine-learning)
+* [Machine Learning MLOps](https://aka.ms/mlops) repository
+* [Machine Learning MLOpsPython](https://github.com/Microsoft/MLOpspython) repository
You can also use Azure Data Factory to create a data ingestion pipeline that prepares data for use with training. For more information, see [Data ingestion pipeline](how-to-cicd-data-ingestion.md).
You can also use Azure Data Factory to create a data ingestion pipeline that pre
Learn more by reading and exploring the following resources:
-+ [How & where to deploy models](how-to-deploy-and-where.md) with Azure Machine Learning
-
-+ [Tutorial: Train and deploy a model](tutorial-train-deploy-notebook.md).
-
++ [How and where to deploy models](how-to-deploy-and-where.md) with Machine Learning
++ [Tutorial: Train and deploy a model](tutorial-train-deploy-notebook.md)
+ [End-to-end MLOps examples repo](https://github.com/microsoft/MLOps)
-
-+ [CI/CD of ML models with Azure Pipelines](/azure/devops/pipelines/targets/azure-machine-learning)
-
++ [CI/CD of machine learning models with Azure Pipelines](/azure/devops/pipelines/targets/azure-machine-learning)
+ Create clients that [consume a deployed model](how-to-consume-web-service.md)
-
+ [Machine learning at scale](/azure/architecture/data-guide/big-data/machine-learning-at-scale)
-
-+ [Azure AI reference architectures & best practices rep](https://github.com/microsoft/AI)
++ [Azure AI reference architectures and best practices repo](https://github.com/microsoft/AI)
machine-learning How To Manage Resources Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-resources-vscode.md
The quickest way to create resources is using the extension's toolbar.
1. Select **+** in the activity bar.
1. Choose your resource from the dropdown list.
1. Configure the specification file. The information required depends on the type of resource you want to create.
-1. Right-click the specification file and select **Azure ML: Create Resource**.
+1. Right-click the specification file and select **Azure ML: Execute YAML**.
Alternatively, you can create a resource by using the command palette:
To version a resource:
1. Use the existing specification file that created the resource or follow the create resources process to create a new specification file.
1. Increment the version number in the template.
-1. Right-click the specification file and select **Azure ML: Create Resource**.
+1. Right-click the specification file and select **Azure ML: Execute YAML**.
As long as the name of the updated resource is the same as the previous version, Azure Machine Learning picks up the changes and creates a new version.
For more information, see [workspaces](concept-workspace.md).
1. In the Azure Machine Learning view, right-click your subscription node and select **Create Workspace**.
1. A specification file appears. Configure the specification file.
-1. Right-click the specification file and select **Azure ML: Create Resource**.
+1. Right-click the specification file and select **Azure ML: Execute YAML**.
Alternatively, use the `> Azure ML: Create Workspace` command in the command palette.
For more information, see [datastores](concept-data.md#datastores).
1. Right-click the **Datastores** node and select **Create Datastore**.
1. Choose the datastore type.
1. A specification file appears. Configure the specification file.
-1. Right-click the specification file and select **Azure ML: Create Resource**.
+1. Right-click the specification file and select **Azure ML: Execute YAML**.
Alternatively, use the `> Azure ML: Create Datastore` command in the command palette.
For more information, see [datasets](concept-data.md#datasets)
1. Expand the workspace node you want to create the dataset under.
1. Right-click the **Datasets** node and select **Create Dataset**.
1. A specification file appears. Configure the specification file.
-1. Right-click the specification file and select **Azure ML: Create Resource**.
+1. Right-click the specification file and select **Azure ML: Execute YAML**.
Alternatively, use the `> Azure ML: Create Dataset` command in the command palette.
For more information, see [environments](concept-environments.md).
1. Expand the workspace node you want to create the datastore under.
1. Right-click the **Environments** node and select **Create Environment**.
1. A specification file appears. Configure the specification file.
-1. Right-click the specification file and select **Azure ML: Create Resource**.
+1. Right-click the specification file and select **Azure ML: Execute YAML**.
Alternatively, use the `> Azure ML: Create Environment` command in the command palette.
Using the resource nodes in the Azure Machine Learning view:
1. Right-click the **Experiments** node in your workspace and select **Create Job**.
1. Choose your job type.
1. A specification file appears. Configure the specification file.
-1. Right-click the specification file and select **Azure ML: Create Resource**.
+1. Right-click the specification file and select **Azure ML: Execute YAML**.
Alternatively, use the `> Azure ML: Create Job` command in the command palette.
For more information, see [compute instances](concept-compute-instance.md).
1. Expand the **Compute** node.
1. Right-click the **Compute instances** node in your workspace and select **Create Compute**.
1. A specification file appears. Configure the specification file.
-1. Right-click the specification file and select **Azure ML: Create Resource**.
+1. Right-click the specification file and select **Azure ML: Execute YAML**.
Alternatively, use the `> Azure ML: Create Compute` command in the command palette.
For more information, see [training compute targets](concept-compute-target.md#t
1. Expand the **Compute** node.
1. Right-click the **Compute clusters** node in your workspace and select **Create Compute**.
1. A specification file appears. Configure the specification file.
-1. Right-click the specification file and select **Azure ML: Create Resource**.
+1. Right-click the specification file and select **Azure ML: Execute YAML**.
Alternatively, use the `> Azure ML: Create Compute` command in the command palette.
For more information, see [models](concept-azure-machine-learning-architecture.m
1. Expand your workspace node.
1. Right-click the **Models** node in your workspace and select **Create Model**.
1. A specification file appears. Configure the specification file.
-1. Right-click the specification file and select **Azure ML: Create Resource**.
+1. Right-click the specification file and select **Azure ML: Execute YAML**.
Alternatively, use the `> Azure ML: Create Model` command in the command palette.
For more information, see [endpoints](concept-azure-machine-learning-architectur
1. Right-click the **Models** node in your workspace and select **Create Endpoint**.
1. Choose your endpoint type.
1. A specification file appears. Configure the specification file.
-1. Right-click the specification file and select **Azure ML: Create Resource**.
+1. Right-click the specification file and select **Azure ML: Execute YAML**.
Alternatively, use the `> Azure ML: Create Endpoint` command in the command palette.
machine-learning How To Use Automlstep In Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automlstep-in-pipelines.md
Finally, the actual metrics and model are downloaded to your local machine, as w
- Run this Jupyter notebook showing a [complete example of automated ML in a pipeline](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines/nyc-taxi-data-regression-model-building/nyc-taxi-data-regression-model-building.ipynb) that uses regression to predict taxi fares
- [Create automated ML experiments without writing code](how-to-use-automated-ml-for-ml-models.md)
- Explore a variety of [Jupyter notebooks demonstrating automated ML](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning)
-- Read about integrating your pipeline in to [End-to-end MLOps](./concept-model-management-and-deployment.md#automate-the-ml-lifecycle) or investigate the [MLOps GitHub repository](https://github.com/Microsoft/MLOpspython)
+- Read about integrating your pipeline into [End-to-end MLOps](./concept-model-management-and-deployment.md#automate-the-machine-learning-lifecycle) or investigate the [MLOps GitHub repository](https://github.com/Microsoft/MLOpspython)
machine-learning How To Use Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-event-grid.md
Azure Machine Learning provides events in the various points of machine learning
| `Microsoft.MachineLearningServices.ModelRegistered` | Raised when a machine learning model is registered in the workspace |
| `Microsoft.MachineLearningServices.ModelDeployed` | Raised when a deployment of inference service with one or more models is completed |
| `Microsoft.MachineLearningServices.DatasetDriftDetected` | Raised when a data drift detection job for two datasets is completed |
-| `Microsoft.MachineLearningServices.RunStatusChanged` | Raised when a run status changed, currently only raised when a run status is 'failed' |
+| `Microsoft.MachineLearningServices.RunStatusChanged` | Raised when a run status is changed |
### Filter & subscribe to events
machine-learning Tutorial Train Deploy Image Classification Model Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-train-deploy-image-classification-model-vscode.md
The first thing you have to do to build an application in Azure Machine Learning
The specification file creates a workspace called `TeamWorkspace` in the `WestUS2` region. The rest of the options defined in the specification file provide friendly naming, descriptions, and tags for the workspace.
-1. Right-click the specification file and select **Azure ML: Create Resource**. Creating a resource uses the configuration options defined in the YAML specification file and submits a job using the CLI (v2). At this point, a request to Azure is made to create a new workspace and dependent resources in your account. After a few minutes, the new workspace appears in your subscription node.
+1. Right-click the specification file and select **Azure ML: Execute YAML**. Creating a resource uses the configuration options defined in the YAML specification file and submits a job using the CLI (v2). At this point, a request to Azure is made to create a new workspace and dependent resources in your account. After a few minutes, the new workspace appears in your subscription node.
1. Set `TeamWorkspace` as your default workspace. Doing so places resources and jobs you create in the workspace by default. Select the **Set Azure ML Workspace** button on the Visual Studio Code status bar and follow the prompts to set `TeamWorkspace` as your default workspace. For more information on workspaces, see [how to manage resources in VS Code](how-to-manage-resources-vscode.md).
A compute target is the computing resource or environment where you run training
For more information on VM sizes, see [sizes for Linux virtual machines in Azure](../virtual-machines/sizes.md).
-1. Right-click the specification file and select **Azure ML: Create Resource**.
+1. Right-click the specification file and select **Azure ML: Execute YAML**.
After a few minutes, the new compute target appears in the *Compute > Compute clusters* node of your workspace.
This specification file submits a training job called `tensorflow-mnist-example`
To submit the training job:

1. Open the *job.yml* file.
-1. Right-click the file in the text editor and select **Azure ML: Create Resource**.
-
-> [!div class="mx-imgBorder"]
-> ![Run experiment](./media/tutorial-train-deploy-image-classification-model-vscode/run-experiment.png)
+1. Right-click the file in the text editor and select **Azure ML: Execute YAML**.
At this point, a request is sent to Azure to run your experiment on the selected compute target in your workspace. This process takes several minutes. The amount of time to run the training job is impacted by several factors like the compute type and training data size. To track the progress of your experiment, right-click the current run node and select **View Run in Azure portal**.
migrate Troubleshoot Network Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-network-connectivity.md
This issue can occur if the services running on the appliance are not running on
:::image type="content" source="./media/troubleshoot-network-connectivity/view-appliance-services.png" alt-text="Snapshot of View appliance services.":::
+### Failed to save configuration: 504 gateway timeout
+
+#### Possible causes:
+This issue can occur if the Azure Migrate appliance cannot reach the service endpoint provided in the error message.
+
+#### Remediation:
+
+To validate the private link connection, perform a DNS resolution of the Azure Migrate service endpoints (private link resource FQDNs) from the on-premises server hosting the Migrate appliance and ensure that they resolve to private IP addresses.
+
+**To obtain the private endpoint details to verify DNS resolution:**
+
+The private endpoint details and private link resource FQDN information are available in the Discovery and Assessment and Server Migration properties pages. Select **Download DNS settings** on both the properties pages to view the full list.
+
+Next, refer to [this guidance](#verify-dns-resolution) to verify the DNS resolution.
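If you prefer to script the check, the following sketch resolves each FQDN from the downloaded DNS settings file and flags any that resolve to a public address. It assumes Python is available on the server that hosts the appliance; `nslookup` or `Resolve-DnsName` works equally well.

```python
"""Check that the private link FQDNs resolve to private IP addresses.

Usage: python check_dns.py <fqdn1> <fqdn2> ...
(Pass the FQDNs listed in the downloaded DNS settings file.)
"""
import ipaddress
import socket
import sys

for fqdn in sys.argv[1:]:
    try:
        ip = socket.gethostbyname(fqdn)
    except socket.gaierror as err:
        print(f"{fqdn}: resolution failed ({err})")
        continue
    kind = "private" if ipaddress.ip_address(ip).is_private else "PUBLIC - review DNS configuration"
    print(f"{fqdn} -> {ip} ({kind})")
```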
mysql App Development Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/app-development-best-practices.md
description: Learn about best practices for building an app by using Azure Datab
+ Last updated 08/11/2020
mysql Concept Monitoring Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concept-monitoring-best-practices.md
description: This article describes the best practices to monitor your Azure Dat
+ Last updated 11/23/2020
mysql Concept Operation Excellence Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concept-operation-excellence-best-practices.md
description: This article describes the best practices to operate your MySQL dat
+ Last updated 11/23/2020
mysql Concept Performance Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concept-performance-best-practices.md
description: This article describes some recommendations to monitor and tune per
+ Last updated 1/28/2021
mysql Concept Reserved Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concept-reserved-pricing.md
description: Prepay for Azure Database for MySQL compute resources with reserved
+ Last updated 10/06/2021
mysql Concepts Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-aks.md
description: Learn about connecting Azure Kubernetes Service with Azure Database
+ Last updated 07/14/2020
mysql Concepts Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-audit-logs.md
description: Describes the audit logs available in Azure Database for MySQL, and
+ Last updated 6/24/2020
mysql Concepts Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-azure-ad-authentication.md
description: Learn about the concepts of Azure Active Directory for authenticati
+ Last updated 07/23/2020
mysql Concepts Azure Advisor Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-azure-advisor-recommendations.md
description: Learn about Azure Advisor recommendations for MySQL.
+ Last updated 04/08/2021
mysql Concepts Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-backup.md
description: Learn about automatic backups and restoring your Azure Database for
+ Last updated 3/27/2020
mysql Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-business-continuity.md
description: Learn about business continuity (point-in-time restore, data center
+ Last updated 7/7/2020
mysql Concepts Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-certificate-rotation.md
description: Learn about the upcoming changes of root certificate changes that w
+ Last updated 04/08/2021
mysql Concepts Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-compatibility.md
description: This article describes the MySQL drivers and management tools that
+ Last updated 11/4/2021
mysql Concepts Connection Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-connection-libraries.md
description: This article lists each library or driver that client programs can
+ Last updated 8/3/2020
mysql Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-connectivity-architecture.md
description: Describes the connectivity architecture for your Azure Database for
+ Last updated 10/15/2021
Last updated 10/15/2021
[!INCLUDE[applies-to-mysql-single-server](includes/applies-to-mysql-single-server.md)]
-This article explains the Azure Database for MySQL connectivity architecture as well as how the traffic is directed to your Azure Database for MySQL instance from clients both within and outside Azure.
+This article explains the Azure Database for MySQL connectivity architecture and how the traffic is directed to your Azure Database for MySQL instance from clients both within and outside Azure.
## Connectivity architecture

Connection to your Azure Database for MySQL is established through a gateway that is responsible for routing incoming connections to the physical location of your server in our clusters. The following diagram illustrates the traffic flow.

:::image type="content" source="./media/concepts-connectivity-architecture/connectivity-architecture-overview-proxy.png" alt-text="Overview of the connectivity architecture":::
-As client connects to the database, the connection string to the server resolves to the gateway IP address. The gateway listens on the IP address on port 3306. Inside the database cluster, traffic is forwarded to appropriate Azure Database for MySQL. Therefore, in order to connect to your server, such as from corporate networks, it is necessary to open up the **client-side firewall to allow outbound traffic to be able to reach our gateways**. Below you can find a complete list of the IP addresses used by our gateways per region.
+When a client connects to the database, the connection string to the server resolves to the gateway IP address. The gateway listens on the IP address on port 3306. Inside the database cluster, traffic is forwarded to the appropriate Azure Database for MySQL server. Therefore, to connect to your server, such as from corporate networks, it's necessary to open up the **client-side firewall to allow outbound traffic to reach our gateways**. A complete list of the IP addresses used by our gateways per region is provided below.
## Azure Database for MySQL gateway IP addresses

The gateway service is hosted on a group of stateless compute nodes sitting behind an IP address, which your client reaches first when trying to connect to an Azure Database for MySQL server.
-As part of ongoing service maintenance, we will periodically refresh compute hardware hosting the gateways to ensure we provide the most secure and performant experience. When the gateway hardware is refreshed, a new ring of the compute nodes is built out first. This new ring serves the traffic for all the newly created Azure Database for MySQL servers and it will have a different IP address from older gateway rings in the same region to differentiate the traffic. Once the new ring is fully functional, the older gateway hardware serving existing servers are planned for decommissioning. Before decommissioning a gateway hardware, customers running their servers and connecting to older gateway rings will be notified via email and in the Azure portal, three months in advance before decommissioning. The decommissioning of gateways can impact the connectivity to your servers if
+As part of ongoing service maintenance, we'll periodically refresh the compute hardware that hosts the gateways to ensure we provide the most secure and performant experience. When the gateway hardware is refreshed, a new ring of compute nodes is built out first. This new ring serves the traffic for all newly created Azure Database for MySQL servers, and it has a different IP address from older gateway rings in the same region to differentiate the traffic. After the new ring is fully functional, the older gateway hardware serving existing servers is planned for decommissioning. Before gateway hardware is decommissioned, customers whose servers connect to the older gateway rings are notified via email and in the Azure portal, three months in advance. The decommissioning of gateways can impact the connectivity to your servers if:
* You hard-code the gateway IP addresses in the connection string of your application. This is **not recommended**. You should use the fully qualified domain name (FQDN) of your server, in the format `<servername>.mysql.database.azure.com`, in the connection string for your application.
-* You do not update the newer gateway IP addresses in the client-side firewall to allow outbound traffic to be able to reach our new gateway rings.
+* You don't update the newer gateway IP addresses in the client-side firewall to allow outbound traffic to be able to reach our new gateway rings.
The following table lists the gateway IP addresses of the Azure Database for MySQL gateway for all data regions. The most up-to-date gateway IP addresses for each region are maintained in this table. The columns represent the following:
-* **Gateway IP addresses:** This column lists the current IP addresses of the gateways hosted on the latest generation of hardware. If you are provisioning a new server, we recommend that you open the client-side firewall to allow outbound traffic for the IP addresses listed in this column.
-* **Gateway IP addresses (decommissioning):** This column lists the IP addresses of the gateways hosted on an older generation of hardware that is being decommissioned right now. If you are provisioning a new server, you can ignore these IP addresses. If you have an existing server, continue to retain the outbound rule for the firewall for these IP addresses as we have not decommissioned it yet. If you drop the firewall rules for these IP addresses, you may get connectivity errors. Instead, you are expected to proactively add the new IP addresses listed in Gateway IP addresses column to the outbound firewall rule as soon as you receive the notification for decommissioning. This will ensure when your server is migrated to latest gateway hardware, there is no interruptions in connectivity to your server.
-* **Gateway IP addresses (decommissioned):** This columns lists the IP addresses of the gateway rings, which are decommissioned and are no longer in operations. You can safely remove these IP addresses from your outbound firewall rule.
+* **Gateway IP addresses:** This column lists the current IP addresses of the gateways hosted on the latest generation of hardware. If you're provisioning a new server, we recommend that you open the client-side firewall to allow outbound traffic for the IP addresses listed in this column.
+* **Gateway IP addresses (decommissioning):** This column lists the IP addresses of the gateways hosted on an older generation of hardware that is being decommissioned right now. If you're provisioning a new server, you can ignore these IP addresses. If you have an existing server, continue to retain the outbound firewall rule for these IP addresses because we haven't decommissioned them yet. If you drop the firewall rules for these IP addresses, you might get connectivity errors. Instead, proactively add the new IP addresses listed in the Gateway IP addresses column to the outbound firewall rule as soon as you receive the decommissioning notification. This ensures that when your server is migrated to the latest gateway hardware, there are no interruptions in connectivity to your server.
+* **Gateway IP addresses (decommissioned):** This column lists the IP addresses of the gateway rings, which are decommissioned and are no longer in operations. You can safely remove these IP addresses from your outbound firewall rule.
| **Region name** | **Gateway IP addresses** | **Gateway IP addresses (decommissioning)** | **Gateway IP addresses (decommissioned)** |
|---|---|---|---|
Support for redirection is available in the PHP [mysqlnd_azure](https://github.c
## Frequently asked questions

### What you need to know about this planned maintenance?
-This is a DNS change only which makes it transparent to clients. While the IP address for FQDN is changed in the DNS server, the local DNS cache will be refreshed within 5 minutes, and it is automatically done by the operating systems. After the local DNS refresh, all the new connections will connect to the new IP address, all existing connections will remain connected to the old IP address with no interruption until the old IP addresses are fully decommissioned. The old IP address will roughly take three to four weeks before getting decommissioned; therefore, it should have no effect on the client applications.
+This is a DNS change only, which makes it transparent to clients. While the IP address for the FQDN is changed in the DNS server, the local DNS cache is refreshed within 5 minutes, and this is done automatically by the operating system. After the local DNS refresh, all new connections connect to the new IP address, and all existing connections remain connected to the old IP address with no interruption until the old IP addresses are fully decommissioned. Decommissioning the old IP address takes roughly three to four weeks, so this change should have no effect on client applications.
### What are we decommissioning?
-Only Gateway nodes will be decommissioned. When users connect to their servers, the first stop of the connection is to gateway node, before connection is forwarded to server. We are decommissioning old gateway rings (not tenant rings where the server is running) refer to the [connectivity architecture](#connectivity-architecture) for more clarification.
+Only gateway nodes will be decommissioned. When users connect to their servers, the first stop of the connection is the gateway node, before the connection is forwarded to the server. We're decommissioning old gateway rings, not the tenant rings where the server is running. For more clarification, see the [connectivity architecture](#connectivity-architecture).
### How can you validate if your connections are going to old gateway nodes or new gateway nodes?

Ping your server's FQDN, for example ``ping xxx.mysql.database.azure.com``. If the returned IP address is one of the IPs listed under Gateway IP addresses (decommissioning) in the table above, it means your connection is going through the old gateway. Conversely, if the returned IP address is one of the IPs listed under Gateway IP addresses, it means your connection is going through the new gateway.
Ping your server's FQDN, for example ``ping xxx.mysql.database.azure.com``. If
You may also test by using [PSPing](/sysinternals/downloads/psping) or TCPPing to connect to the database server from your client application on port 3306, and ensure that the returned IP address isn't one of the decommissioning IP addresses.

### How do I know when the maintenance is over, and will I get another notification when old IP addresses are decommissioned?
-You will receive an email to inform you when we will start the maintenance work. The maintenance can take up to one month depending on the number of servers we need to migrate in al regions. Please prepare your client to connect to the database server using the FQDN or using the new IP address from the table above.
+You'll receive an email to inform you when we'll start the maintenance work. The maintenance can take up to one month, depending on the number of servers we need to migrate in all regions. Prepare your client to connect to the database server by using the FQDN or the new IP address from the table above.
-### What do I do if my client applications are still connecting to old gateway server ?
+### What do I do if my client applications are still connecting to old gateway server?
This indicates that your applications connect to the server by using a static IP address instead of the FQDN. Review connection strings, connection pooling settings, AKS settings, or even the source code.

### Is there any impact for my application connections?
-This maintenance is just a DNS change, so it is transparent to the client. Once the DNS cache is refreshed in the client (automatically done by operation system), all the new connection will connect to the new IP address and all the existing connection will still working fine until the old IP address fully get decommissioned, which usually several weeks later. And the retry logic is not required for this case, but it is good to see the application have retry logic configured. Please either use FQDN to connect to the database server or enable list the new 'Gateway IP addresses' in your application connection string.
-This maintenance operation will not drop the existing connections. It only makes the new connection requests go to new gateway ring.
+This maintenance is just a DNS change, so it's transparent to the client. Once the DNS cache is refreshed in the client (done automatically by the operating system), all new connections connect to the new IP address, and all existing connections keep working until the old IP address is fully decommissioned, which usually happens several weeks later. Retry logic isn't required for this case, but it's good practice to have retry logic configured in the application. Either use the FQDN to connect to the database server or update your application connection string to use the new 'Gateway IP addresses'.
+This maintenance operation won't drop the existing connections. It only makes the new connection requests go to the new gateway ring.
### Can I request for a specific time window for the maintenance?
-As the migration should be transparent and no impact to customer's connectivity, we expect there will be no issue for majority of users. Review your application proactively and ensure that you either use FQDN to connect to the database server or enable list the new 'Gateway IP addresses' in your application connection string.
+Because the migration should be transparent and have no impact on customers' connectivity, we expect there will be no issue for most users. Review your application proactively and ensure that you either use the FQDN to connect to the database server or update your application connection string to use the new 'Gateway IP addresses'.
-### I am using private link, will my connections get affected?
+### I'm using private link, will my connections get affected?
No. This is a gateway hardware decommission and has no relation to private link or private IP addresses. It only affects the public IP addresses mentioned under the decommissioning IP addresses.
mysql Concepts Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-connectivity.md
keywords: mysql connection,connection string,connectivity issues,transient error
+ Last updated 3/18/2020
mysql Concepts Data Access And Security Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-data-access-and-security-vnet.md
description: 'Describes how VNet service endpoints work for your Azure Database
+ Last updated 7/17/2020
mysql Concepts Data Access Security Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-data-access-security-private-link.md
description: Learn how Private link works for Azure Database for MySQL.
+ Last updated 03/10/2020
mysql Concepts Data Encryption Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-data-encryption-mysql.md
description: Azure Database for MySQL data encryption with a customer-managed ke
+ Last updated 01/13/2020
mysql Concepts Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-data-in-replication.md
description: Learn about using Data-in Replication to synchronize from an extern
+ Last updated 04/08/2021
mysql Concepts Database Application Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-database-application-development.md
description: Introduces design considerations that a developer should follow whe
+ Last updated 3/18/2020
mysql Concepts Firewall Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-firewall-rules.md
description: Learn about using firewall rules to enable connections to your Azur
+ Last updated 07/17/2020
mysql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-high-availability.md
description: This article provides information on high availability in Azure Dat
+ Last updated 7/7/2020
mysql Concepts Infrastructure Double Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-infrastructure-double-encryption.md
description: Learn about using Infrastructure double encryption to add a second
+ Last updated 6/30/2020
mysql Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-limits.md
description: This article describes limitations in Azure Database for MySQL, suc
+ Last updated 10/1/2020
mysql Concepts Migrate Dbforge Studio For Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-migrate-dbforge-studio-for-mysql.md
description: The article demonstrates how to migrate to Azure Database for MySQL
+ Last updated 03/03/2021
mysql Concepts Migrate Dump Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-migrate-dump-restore.md
description: This article explains two common ways to back up and restore databa
+ Last updated 10/30/2020
mysql Concepts Migrate Mydumper Myloader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-migrate-mydumper-myloader.md
description: This article explains two common ways to back up and restore databa
+ Last updated 06/18/2021
This command uses the following variables:
>[!Note]
>For more information on other options you can use with mydumper, run the following command: **mydumper --help**. For more details, see the [mydumper\myloader documentation](https://centminmod.com/mydumper.html)<br>
- >To dump multiple databases in parallel, you can modiffy regex variable as shown in the example: **regex ΓÇÖ^(DbName1\.|DbName2\.)**
+ >To dump multiple databases in parallel, you can modify the regex variable as shown in the example: **regex '^(DbName1\.|DbName2\.)**
## Restore your database using myloader
After the database is restored, it's always recommended to validate the data c
* Learn [How to migrate large MySQL databases](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/best-practices-for-migrating-large-databases-to-azure-database/ba-p/1362699).
* [Tutorial: Minimal Downtime Migration of Azure Database for MySQL – Single Server to Azure Database for MySQL – Flexible Server](howto-migrate-single-flexible-minimum-downtime.md)
* Learn more about Data-in replication [Replicate data into Azure Database for MySQL Flexible Server](flexible-server/concepts-data-in-replication.md) and [Configure Azure Database for MySQL Flexible Server Data-in replication](./flexible-server/how-to-data-in-replication.md)
-* Commonly encountered [migration errors](./howto-troubleshoot-common-errors.md)
+* Commonly encountered [migration errors](./howto-troubleshoot-common-errors.md)
mysql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-monitoring.md
description: This article describes the metrics for monitoring and alerting for
+ Last updated 10/21/2020
mysql Concepts Performance Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-performance-recommendations.md
description: This article describes the Performance Recommendation feature in Az
+ Last updated 6/3/2020
mysql Concepts Planned Maintenance Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-planned-maintenance-notification.md
description: This article describes the Planned maintenance notification feature
+ Last updated 10/21/2020
mysql Concepts Pricing Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-pricing-tiers.md
description: Learn about the various pricing tiers for Azure Database for MySQL
+ Last updated 02/07/2022
mysql Concepts Query Performance Insight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-query-performance-insight.md
description: This article describes the Query Performance Insight feature in Azu
+ Last updated 01/12/2022
mysql Concepts Query Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-query-store.md
description: Learn about the Query Store feature in Azure Database for MySQL to
+ Last updated 5/12/2020
mysql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-read-replicas.md
description: 'Learn about read replicas in Azure Database for MySQL: choosing re
+ Last updated 06/17/2021
mysql Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-security.md
description: An overview of the security features in Azure Database for MySQL.
+ Last updated 3/18/2020
mysql Concepts Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-server-logs.md
description: Describes the slow query logs available in Azure Database for MySQL
+ Last updated 11/6/2020
mysql Concepts Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-server-parameters.md
description: This topic provides guidelines for configuring server parameters in
+ Last updated 1/26/2021
mysql Concepts Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-servers.md
description: This topic provides considerations and guidelines for working with
+ Last updated 3/18/2020
mysql Concepts Ssl Connection Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-ssl-connection-security.md
description: Information for configuring Azure Database for MySQL and associated
+ Last updated 07/09/2020
mysql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-supported-versions.md
description: Learn which versions of the MySQL server are supported in the Azure
+ Last updated 11/4/2021
mysql Concepts Version Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-version-policy.md
description: Describes the policy around MySQL major and minor versions in Azure
+ Last updated 11/03/2020
mysql Connect Cpp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/connect-cpp.md
description: This quickstart provides a C++ code sample you can use to connect a
+ ms.devlang: cpp
mysql Connect Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/connect-csharp.md
description: "This quickstart provides a C# (.NET) code sample you can use to co
+ ms.devlang: csharp
mysql Connect Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/connect-go.md
description: This quickstart provides several Go code samples you can use to con
+ ms.devlang: golang
mysql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/connect-java.md
description: Learn how to use Java and JDBC with an Azure Database for MySQL dat
+ ms.devlang: java
mysql Connect Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/connect-nodejs.md
description: This quickstart provides several Node.js code samples you can use t
+ ms.devlang: javascript
mysql Connect Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/connect-php.md
description: This quickstart provides several PHP code samples you can use to co
+ Last updated 10/28/2020
mysql Connect Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/connect-python.md
description: This quickstart provides several Python code samples you can use to
+ ms.devlang: python
mysql Connect Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/connect-ruby.md
description: This quickstart provides several Ruby code samples you can use to c
+ ms.devlang: ruby
mysql Connect Workbench https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/connect-workbench.md
description: This Quickstart provides the steps to use MySQL Workbench to connec
+ Last updated 5/26/2020
mysql Azure Pipelines Deploy Database Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/azure-pipelines-deploy-database-task.md
Title: Azure Pipelines task for Azure Database for MySQL Flexible Server
description: Enable Azure Database for MySQL Flexible Server CLI task for use with Azure Pipelines + Previously updated : 08/09/2021 Last updated : 08/09/2021 # Azure Pipelines for Azure Database for MySQL Flexible Server
mysql Concept Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concept-servers.md
description: This topic provides considerations and guidelines for working with
+ Last updated 09/21/2020
mysql Concepts Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-audit-logs.md
description: Describes the audit logs available in Azure Database for MySQL Flex
+ Last updated 9/21/2020
mysql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-backup-restore.md
description: Learn about the concepts of backup and restore with Azure Database
+ Last updated 09/21/2020
mysql Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-business-continuity.md
description: Learn about the concepts of business continuity with Azure Database
+ Last updated 09/21/2020
mysql Concepts Compute Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-compute-storage.md
description: This article describes the compute and storage options in Azure Dat
+ Last updated 1/28/2021
mysql Concepts Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-data-in-replication.md
description: Learn about using Data-in replication to synchronize from an extern
+ Last updated 06/08/2021
mysql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-high-availability.md
description: Get a conceptual overview of zone-redundant high availability in Az
+ Last updated 08/26/2021
mysql Concepts Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-limitations.md
description: This article describes Limitations in Azure Database for MySQL - Fl
+ Last updated 10/1/2020
mysql Concepts Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-maintenance.md
description: This article describes the scheduled maintenance feature in Azure D
+ Last updated 09/21/2020
mysql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-monitoring.md
description: This article describes the metrics for monitoring and alerting for
+ Last updated 9/21/2020
mysql Concepts Networking Public https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-networking-public.md
description: Learn about public access networking option in the Flexible Server
+ Last updated 8/6/2021
mysql Concepts Networking Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-networking-vnet.md
description: Learn about private access networking option in the Flexible Server
+ Last updated 8/6/2021
mysql Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-networking.md
description: Learn about connectivity and networking options in the Flexible Ser
+ Last updated 9/23/2020
mysql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-read-replicas.md
description: 'Learn about read replicas in Azure Database for MySQL Flexible Ser
+ Last updated 06/17/2021
mysql Concepts Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-server-parameters.md
description: This topic provides guidelines for configuring server parameters in
+ Last updated 11/10/2020
mysql Concepts Slow Query Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-slow-query-logs.md
description: Describes the slow query logs available in Azure Database for MySQL
+ Last updated 9/21/2020
mysql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-supported-versions.md
description: Learn which versions of the MySQL server are supported in the Azure
+ Last updated 09/21/2020
mysql Concepts Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-workbooks.md
description: This article describes how you can monitor Azure Database for MySQL
+ Last updated 10/01/2021
mysql Connect Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-azure-cli.md
description: This quickstart provides several ways to connect with Azure CLI wit
+ Last updated 03/01/2021
mysql Connect Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-csharp.md
description: "This quickstart provides a C# (.NET) code sample you can use to co
+ ms.devlang: csharp
mysql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-java.md
description: Learn how to use Java and JDBC with an Azure Database for MySQL Fle
+ ms.devlang: java
mysql Connect Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-nodejs.md
description: This quickstart provides several Node.js code samples you can use t
+ ms.devlang: javascript
mysql Connect Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-php.md
description: This quickstart provides several PHP code samples you can use to co
+ Last updated 9/21/2020
mysql Connect Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-python.md
description: This quickstart provides several Python code samples you can use to
+ ms.devlang: python
mysql Connect Workbench https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-workbench.md
description: This Quickstart provides the steps to use MySQL Workbench to connec
+ Last updated 9/21/2020
mysql How To Alert On Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-alert-on-metric.md
description: This article describes how to configure and access metric alerts fo
+ Last updated 05/06/2022
mysql How To Configure High Availability Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-configure-high-availability-cli.md
description: This article describes how to configure zone redundant high availab
+ Last updated 04/1/2021
mysql How To Configure High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-configure-high-availability.md
description: This article describes how to enable or disable zone redundant high
+ Last updated 09/21/2020
mysql How To Configure Server Parameters Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-configure-server-parameters-cli.md
description: This article describes how to configure the service parameters in A
+ ms.devlang: azurecli Last updated 11/10/2020
mysql How To Configure Server Parameters Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-configure-server-parameters-portal.md
description: This article describes how to configure MySQL server parameters in
+ Last updated 11/10/2020
mysql How To Connect Tls Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-connect-tls-ssl.md
description: Instructions and information on how to connect using TLS/SSL in Azu
+ Last updated 09/21/2020 ms.devlang: csharp, golang, java, javascript, php, python, ruby
mysql How To Create Manage Databases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-create-manage-databases.md
description: This article describes how to create and manage databases on Azure
+ Last updated 02/17/2022
mysql How To Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-data-in-replication.md
description: This article describes how to set up Data-in replication for Azure
+ Last updated 06/08/2021
mysql How To Deploy On Azure Free Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-deploy-on-azure-free-account.md
Title: Use an Azure free account to try Azure Database for MySQL - Flexible Serv
description: Guidance on how to deploy an Azure Database for MySQL - Flexible Server for free using an Azure Free Account. -++ Last updated 08/16/2021
mysql How To Maintenance Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-maintenance-portal.md
description: Learn how to configure scheduled maintenance settings for an Azure
+ Last updated 9/21/2020
mysql How To Manage Firewall Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-manage-firewall-cli.md
description: Create and manage firewall rules for Azure Database for MySQL - Fle
+ ms.devlang: azurecli
mysql How To Manage Firewall Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-manage-firewall-portal.md
description: Create and manage firewall rules for Azure Database for MySQL - Fle
+ Last updated 9/21/2020
mysql How To Manage Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-manage-server-cli.md
description: Learn how to manage an Azure Database for MySQL Flexible server fro
+ Last updated 9/21/2020
mysql How To Manage Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-manage-server-portal.md
description: Learn how to manage an Azure Database for MySQL Flexible server fro
+ Last updated 9/21/2020
mysql How To Manage Virtual Network Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-manage-virtual-network-cli.md
description: Create and manage virtual networks for Azure Database for MySQL - F
+ Last updated 9/21/2020
mysql How To Manage Virtual Network Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-manage-virtual-network-portal.md
description: Create and manage virtual networks for Azure Database for MySQL - F
+ Last updated 9/21/2020
mysql How To Move Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-move-regions.md
description: Move an Azure Database for MySQL Flexible server from one Azure reg
+ Last updated 04/08/2022
mysql How To Read Replicas Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-read-replicas-cli.md
description: Learn how to set up and manage read replicas in Azure Database for
+ Last updated 10/23/2021
mysql How To Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-read-replicas-portal.md
description: Learn how to set up and manage read replicas in Azure Database for
+ Last updated 06/17/2021
mysql How To Restart Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-restart-server-portal.md
description: This article describes how you can restart an Azure Database for My
+ Last updated 10/26/2020
mysql How To Restart Stop Start Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-restart-stop-start-server-cli.md
description: This article describes how to restart/stop/start operations in Azur
+ Last updated 03/30/2021
mysql How To Restore Dropped Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-restore-dropped-server.md
description: This article describes how to restore a deleted server in Azure Dat
+ Last updated 11/10/2021
mysql How To Restore Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-restore-server-cli.md
description: This article describes how to perform restore operations in Azure D
+ Last updated 04/01/2021
mysql How To Restore Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-restore-server-portal.md
description: This article describes how to perform restore operations in Azure D
+ Last updated 04/01/2021
mysql How To Stop Start Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-stop-start-server-portal.md
description: This article describes how to stop/start operations in Azure Databa
+ Last updated 09/29/2020
mysql How To Troubleshoot Cli Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-troubleshoot-cli-errors.md
description: This topic gives guidance on troubleshooting common issues with Azu
+ Last updated 08/24/2021
mysql How To Troubleshoot Common Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-troubleshoot-common-connection-issues.md
keywords: mysql connection,connection string,connectivity issues,persistent erro
+ Last updated 9/21/2020
mysql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/overview.md
Title: Overview - Azure Database for MySQL - Flexible Server description: Learn about the Azure Database for MySQL Flexible server, a relational database service in the Microsoft cloud based on the MySQL Community Edition. +
mysql Quickstart Create Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-arm-template.md
Title: 'Quickstart: Create an Azure DB for MySQL - Flexible Server - ARM templat
description: In this Quickstart, learn how to create an Azure Database for MySQL - Flexible Server using ARM template. +
mysql Quickstart Create Connect Server Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-connect-server-vnet.md
description: This article walks you through using the Azure portal to create and
+ Last updated 04/18/2021
mysql Quickstart Create Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-server-cli.md
description: This quickstart describes how to use the Azure CLI to create an Azu
+ ms.devlang: azurecli Last updated 9/21/2020
mysql Quickstart Create Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-server-portal.md
description: This article walks you through using the Azure portal to create an
+ Last updated 10/22/2020
mysql Sample Scripts Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/sample-scripts-azure-cli.md
description: This article lists the Azure CLI code samples available for interac
+ ms.devlang: azurecli
mysql Sample Cli Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-audit-logs.md
description: This Azure CLI sample script shows how to configure audit logs on a
+ ms.devlang: azurecli
mysql Sample Cli Change Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-change-server-parameters.md
description: This Azure CLI sample script shows how to list and change server pa
+ ms.devlang: azurecli
mysql Sample Cli Create Connect Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-create-connect-private-access.md
description: This Azure CLI sample script shows how to create an Azure Database f
+ ms.devlang: azurecli
mysql Sample Cli Create Connect Public Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-create-connect-public-access.md
description: This Azure CLI sample script shows how to create an Azure Database f
+ ms.devlang: azurecli
mysql Sample Cli Monitor And Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-monitor-and-scale.md
description: This Azure CLI sample script shows how to monitor and scale a singl
+ ms.devlang: azurecli
mysql Sample Cli Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-read-replicas.md
description: This Azure CLI sample script shows how to create and manage read re
+ ms.devlang: azurecli
mysql Sample Cli Restart Stop Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-restart-stop-start.md
description: This Azure CLI sample script shows how to Restart/Stop/Start an Azu
+ ms.devlang: azurecli
mysql Sample Cli Restore Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-restore-server.md
description: This Azure CLI sample script shows how to restore a single Azure Da
+ ms.devlang: azurecli
mysql Sample Cli Same Zone Ha https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-same-zone-ha.md
description: This Azure CLI sample script shows how to configure Same-Zone high
+ ms.devlang: azurecli
mysql Sample Cli Slow Query Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-slow-query-logs.md
description: This Azure CLI sample script shows how to configure slow query logs
+ ms.devlang: azurecli
mysql Sample Cli Zone Redundant Ha https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-zone-redundant-ha.md
description: This Azure CLI sample script shows how to configure Zone-Redundant
+ ms.devlang: azurecli
mysql Tutorial Configure Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-configure-audit.md
description: 'This tutorial shows you how to configure audit logs by using Azure
+ Last updated 10/01/2021
mysql Tutorial Deploy Springboot On Aks Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-deploy-springboot-on-aks-vnet.md
Title: 'Tutorial: Deploy Spring Boot Application on AKS cluster with MySQL Flexible Server within a VNet' description: Learn how to quickly build and deploy a Spring Boot Application on AKS with Azure Database for MySQL - Flexible Server, with secure connectivity within a VNet. +
mysql Tutorial Deploy Wordpress On Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-deploy-wordpress-on-aks.md
Title: 'Tutorial: Deploy WordPress on AKS cluster with MySQL Flexible Server by using Azure CLI' description: Learn how to quickly build and deploy WordPress on AKS with Azure Database for MySQL - Flexible Server. +
mysql Tutorial Php Database App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-php-database-app.md
description: This tutorial explains how to build a PHP app with flexible server.
+ ms.devlang: php Last updated 9/21/2020
mysql Tutorial Query Performance Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-query-performance-insights.md
description: 'This article shows you the tools to help visualize Query Performan
+ Last updated 10/01/2021
mysql Tutorial Webapp Server Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-webapp-server-vnet.md
description: Quickstart guide to create Azure Database for MySQL Flexible Server
+ ms.devlang: azurecli Last updated 03/18/2021
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
Title: What's new in Azure Database for MySQL - Flexible Server
description: Learn about recent updates to Azure Database for MySQL - Flexible Server, a relational database service in the Microsoft cloud based on the MySQL Community Edition. +
mysql How To Connect Overview Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/how-to-connect-overview-single-server.md
Title: Connect and query - Single Server MySQL
description: Links to quickstarts showing how to connect to your Azure MySQL Database Single Server and run queries. +
mysql How To Decide On Right Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/how-to-decide-on-right-migration-tools.md
Title: "Select the right tools for migration to Azure Database for MySQL" description: "This topic provides a decision table which helps customers in picking the right tools for migrating into Azure Database for MySQL" +
mysql How To Fix Corrupt Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/how-to-fix-corrupt-database.md
description: In this article, you'll learn about how to fix database corruption
+ Last updated 09/21/2020
mysql How To Major Version Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/how-to-major-version-upgrade.md
description: This article describes how you can upgrade major version for Azure
+ Last updated 1/28/2021
mysql How To Manage Single Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/how-to-manage-single-server-cli.md
description: Learn how to manage an Azure Database for MySQL server from the Azu
+ Last updated 9/22/2020
mysql How To Migrate Rds Mysql Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/how-to-migrate-rds-mysql-data-in-replication.md
description: This article describes how to migrate Amazon RDS for MySQL to Azure
+ Last updated 09/24/2021
mysql How To Migrate Rds Mysql Workbench https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/how-to-migrate-rds-mysql-workbench.md
description: This article describes how to migrate Amazon RDS for MySQL to Azure
+ Last updated 05/21/2021
mysql How To Stop Start Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/how-to-stop-start-server.md
description: This article describes how to stop/start operations in Azure Databa
+ Last updated 09/21/2020
mysql Howto Alert On Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-alert-on-metric.md
description: This article describes how to configure and access metric alerts fo
+ Last updated 3/18/2020
mysql Howto Auto Grow Storage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-auto-grow-storage-cli.md
description: This article describes how you can enable auto grow storage using t
+ Last updated 3/18/2020
mysql Howto Auto Grow Storage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-auto-grow-storage-portal.md
description: This article describes how you can enable auto grow storage for Azu
+ Last updated 3/18/2020
mysql Howto Auto Grow Storage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-auto-grow-storage-powershell.md
description: This article describes how you can enable auto grow storage using P
+ Last updated 4/28/2020
mysql Howto Configure Audit Logs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-configure-audit-logs-cli.md
description: This article describes how to configure and access the audit logs i
+ Last updated 6/24/2020
mysql Howto Configure Audit Logs Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-configure-audit-logs-portal.md
description: This article describes how to configure and access the audit logs i
+ Last updated 9/29/2020
mysql Howto Configure Privatelink Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-configure-privatelink-cli.md
description: Learn how to configure private link for Azure Database for MySQL fr
+ Last updated 01/09/2020
mysql Howto Configure Privatelink Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-configure-privatelink-portal.md
description: Learn how to configure private link for Azure Database for MySQL fr
+ Last updated 01/09/2020
mysql Howto Configure Server Logs In Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-configure-server-logs-in-cli.md
description: This article describes how to access the slow query logs in Azure D
+ ms.devlang: azurecli Last updated 4/13/2020
mysql Howto Configure Server Logs In Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-configure-server-logs-in-portal.md
description: This article describes how to configure and access the slow logs in
+ Last updated 3/15/2021
mysql Howto Configure Server Parameters Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-configure-server-parameters-using-cli.md
description: This article describes how to configure the service parameters in A
+ ms.devlang: azurecli Last updated 10/1/2020
mysql Howto Configure Server Parameters Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-configure-server-parameters-using-powershell.md
description: This article describes how to configure the service parameters in A
+ ms.devlang: azurepowershell Last updated 10/1/2020
mysql Howto Configure Sign In Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-configure-sign-in-azure-ad-authentication.md
description: Learn about how to set up Azure Active Directory (Azure AD) for aut
+ Last updated 07/23/2020
mysql Howto Configure Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-configure-ssl.md
description: Instructions for how to properly configure Azure Database for MySQL
+ ms.devlang: csharp, golang, java, javascript, php, python, ruby
mysql Howto Connect Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-connect-webapp.md
description: Instructions for how to properly connect an existing Azure App Serv
+ Last updated 3/18/2020
mysql Howto Connect With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-connect-with-managed-identity.md
description: Learn about how to connect and authenticate using Managed Identity
+ Last updated 05/19/2020
mysql Howto Connection String Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-connection-string-powershell.md
description: This article provides an Azure PowerShell example to generate a con
+ Last updated 8/5/2020
mysql Howto Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-connection-string.md
description: This document lists the currently supported connection strings for
+ Last updated 3/18/2020
mysql Howto Create Manage Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-create-manage-server-portal.md
description: Learn how to manage an Azure Database for MySQL server from the Azu
+ Last updated 1/26/2021
mysql Howto Create Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-create-users.md
description: This article describes how to create new user accounts to interact
+ Last updated 02/17/2022
mysql Howto Data Encryption Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-data-encryption-cli.md
description: Learn how to set up and manage data encryption for your Azure Datab
+ Last updated 03/30/2020
mysql Howto Data Encryption Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-data-encryption-portal.md
description: Learn how to set up and manage data encryption for your Azure Datab
+ Last updated 01/13/2020
mysql Howto Data Encryption Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-data-encryption-troubleshoot.md
description: Learn how to troubleshoot data encryption in Azure Database for MyS
+ Last updated 02/13/2020
mysql Howto Data Encryption Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-data-encryption-validation.md
description: Learn how to validate the encryption of the Azure Database for MySQ
+ Last updated 04/28/2020
mysql Howto Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-data-in-replication.md
description: This article describes how to set up Data-in Replication for Azure
+ Last updated 04/08/2021
mysql Howto Deny Public Network Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-deny-public-network-access.md
description: Learn how to configure Deny Public Network Access using Azure porta
+ Last updated 03/10/2020
mysql Howto Double Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-double-encryption.md
description: Learn how to set up and manage Infrastructure double encryption for
+ Last updated 06/30/2020
mysql Howto Manage Firewall Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-manage-firewall-using-cli.md
description: This article describes how to create and manage Azure Database for
+ ms.devlang: azurecli Last updated 3/18/2020
mysql Howto Manage Firewall Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-manage-firewall-using-portal.md
description: Create and manage Azure Database for MySQL firewall rules using the
+ Last updated 3/18/2020
mysql Howto Manage Vnet Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-manage-vnet-using-cli.md
description: This article describes how to create and manage Azure Database for
+ ms.devlang: azurecli Last updated 02/10/2022
mysql Howto Manage Vnet Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-manage-vnet-using-portal.md
description: Create and manage Azure Database for MySQL VNet service endpoints a
+ Last updated 3/18/2020
mysql Howto Migrate Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-migrate-online.md
description: This article describes how to perform a minimal-downtime migration
+ Last updated 6/19/2021
mysql Howto Migrate Single Flexible Minimum Downtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-migrate-single-flexible-minimum-downtime.md
description: This article describes how to perform a minimal-downtime migration
+ Last updated 06/18/2021
mysql Howto Move Regions Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-move-regions-portal.md
description: Move an Azure Database for MySQL server from one Azure region to an
+ Last updated 06/26/2020
mysql Howto Read Replicas Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-read-replicas-cli.md
description: Learn how to set up and manage read replicas in Azure Database for
+ Last updated 06/17/2020
mysql Howto Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-read-replicas-portal.md
description: Learn how to set up and manage read replicas in Azure Database for
+ Last updated 06/17/2020
mysql Howto Read Replicas Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-read-replicas-powershell.md
description: Learn how to set up and manage read replicas in Azure Database for
+ Last updated 06/17/2020
mysql Howto Redirection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-redirection.md
description: This article describes how you can configure your application to con
+ Last updated 6/8/2020
mysql Howto Restart Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-restart-server-cli.md
description: This article describes how you can restart an Azure Database for My
+ Last updated 3/18/2020
mysql Howto Restart Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-restart-server-portal.md
description: This article describes how you can restart an Azure Database for My
+ Last updated 3/18/2020
mysql Howto Restart Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-restart-server-powershell.md
description: This article describes how you can restart an Azure Database for My
+ Last updated 4/28/2020
mysql Howto Restore Dropped Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-restore-dropped-server.md
description: This article describes how to restore a deleted server in Azure Dat
+ Last updated 10/09/2020
mysql Howto Restore Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-restore-server-cli.md
description: Learn how to backup and restore a server in Azure Database for MySQ
+ ms.devlang: azurecli Last updated 3/27/2020
mysql Howto Restore Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-restore-server-portal.md
description: This article describes how to restore a server in Azure Database fo
+ Last updated 6/30/2020
mysql Howto Restore Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-restore-server-powershell.md
description: Learn how to backup and restore a server in Azure Database for MySQ
+ ms.devlang: azurepowershell Last updated 4/28/2020
mysql Howto Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-server-parameters.md
description: This article describes how to configure MySQL server parameters in
+ Last updated 10/1/2020
mysql Howto Tls Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-tls-configurations.md
description: Learn how to set TLS configuration using Azure portal for your Azur
+ Last updated 06/02/2020
mysql Howto Troubleshoot Common Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-troubleshoot-common-connection-issues.md
keywords: mysql connection,connection string,connectivity issues,transient error
+ Last updated 3/18/2020
mysql Howto Troubleshoot Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-troubleshoot-common-errors.md
Title: Troubleshoot common errors - Azure Database for MySQL
description: Learn how to troubleshoot common migration errors encountered by users new to the Azure Database for MySQL service +
mysql Howto Troubleshoot High Cpu Utilization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-troubleshoot-high-cpu-utilization.md
description: Learn how to troubleshoot high CPU utilization in Azure Database fo
+ Last updated 4/27/2022
mysql Howto Troubleshoot Low Memory Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-troubleshoot-low-memory-issues.md
description: Learn how to troubleshoot low memory issues in Azure Database for M
+ Last updated 4/22/2022
mysql Howto Troubleshoot Query Performance New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-troubleshoot-query-performance-new.md
description: Learn how to troubleshoot query performance in Azure Database for M
+ Last updated 4/22/2022
mysql Howto Troubleshoot Query Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-troubleshoot-query-performance.md
description: Learn how to profile query performance in Azure Database for MySQL
+ Last updated 3/30/2022
mysql Howto Troubleshoot Replication Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-troubleshoot-replication-latency.md
keywords: mysql, troubleshoot, replication latency in seconds
+ Last updated 01/13/2021
mysql Howto Troubleshoot Sys Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-troubleshoot-sys-schema.md
description: Learn how to use the sys_schema to find performance issues and main
+ Last updated 3/10/2022
mysql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/overview.md
Title: Overview - Azure Database for MySQL
description: Learn about the Azure Database for MySQL service, a relational database service in the Microsoft cloud based on the MySQL Community Edition. +
mysql Partners Migration Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/partners-migration-mysql.md
description: Lists of third-party migration partners with solutions that support
+ Last updated 08/18/2021
mysql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/policy-reference.md
+ # Azure Policy built-in definitions for Azure Database for MySQL
mysql Quickstart Create Mysql Server Database Using Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/quickstart-create-mysql-server-database-using-arm-template.md
description: In this Quickstart, learn how to create an Azure Database for MySQL
+ Last updated 05/19/2020
mysql Quickstart Create Mysql Server Database Using Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/quickstart-create-mysql-server-database-using-azure-cli.md
description: This quickstart describes how to use the Azure CLI to create an Azu
+ ms.devlang: azurecli Last updated 07/15/2020
mysql Quickstart Create Mysql Server Database Using Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/quickstart-create-mysql-server-database-using-azure-portal.md
description: This article walks you through using the Azure portal to create a s
+ Last updated 11/04/2020
mysql Quickstart Create Mysql Server Database Using Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/quickstart-create-mysql-server-database-using-azure-powershell.md
description: This quickstart describes how to use PowerShell to create an Azure
+ ms.devlang: azurepowershell Last updated 04/28/2020
mysql Quickstart Create Mysql Server Database Using Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/quickstart-create-mysql-server-database-using-bicep.md
+
+ Title: 'Quickstart: Create an Azure DB for MySQL - Bicep'
+description: In this Quickstart, learn how to create an Azure Database for MySQL server with virtual network integration using Bicep.
+++++ Last updated : 05/02/2022++
+# Quickstart: Use Bicep to create an Azure Database for MySQL server
++
+Azure Database for MySQL is a managed service that you use to run, manage, and scale highly available MySQL databases in the cloud. In this quickstart, you use Bicep to create an Azure Database for MySQL server with virtual network integration. You can deploy the Bicep file by using the Azure CLI or Azure PowerShell.
++
+## Prerequisites
+
+You need an Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+
+# [PowerShell](#tab/PowerShell)
+
+* If you want to run the code locally, [Azure PowerShell](/powershell/azure/).
+
+# [CLI](#tab/CLI)
+
+* If you want to run the code locally, [Azure CLI](/cli/azure/).
+++
+## Review the Bicep file
+
+You create an Azure Database for MySQL server with a defined set of compute and storage resources. To learn more, see [Azure Database for MySQL pricing tiers](concepts-pricing-tiers.md). You create the server within an [Azure resource group](../azure-resource-manager/management/overview.md).
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/managed-mysql-with-vnet/).
++
+The Bicep file defines five Azure resources:
+
+* [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks)
+* [**Microsoft.Network/virtualNetworks/subnets**](/azure/templates/microsoft.network/virtualnetworks/subnets)
+* [**Microsoft.DBforMySQL/servers**](/azure/templates/microsoft.dbformysql/servers)
+* [**Microsoft.DBforMySQL/servers/virtualNetworkRules**](/azure/templates/microsoft.dbformysql/servers/virtualnetworkrules)
+* [**Microsoft.DBforMySQL/servers/firewallRules**](/azure/templates/microsoft.dbformysql/servers/firewallrules)
+
+## Deploy the Bicep file
++
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters serverName=<server-name> administratorLogin=<admin-login>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -serverName "<server-name>" -administratorLogin "<admin-login>"
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<server-name\>** with the server name for Azure Database for MySQL. Replace **\<admin-login\>** with the database administrator login name. You'll also be prompted to enter **administratorLoginPassword**. The minimum password length is eight characters.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+For a step-by-step tutorial that guides you through the process of creating a Bicep file with Visual Studio Code, see:
+
+> [!div class="nextstepaction"]
+> [Quickstart: Create Bicep files with Visual Studio Code](../azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio-code.md)
mysql Quickstart Create Server Up Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/quickstart-create-server-up-azure-cli.md
description: Quickstart guide to create Azure Database for MySQL server using Az
+ ms.devlang: azurecli Last updated 3/18/2020
mysql Quickstart Mysql Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/quickstart-mysql-github-actions.md
Title: 'Quickstart: Connect to Azure MySQL with GitHub Actions'
description: Use Azure MySQL from a GitHub Actions workflow + Last updated 02/14/2022
mysql Reference Stored Procedures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/reference-stored-procedures.md
description: Learn which stored procedures in Azure Database for MySQL are usefu
+ Last updated 3/18/2020
mysql Sample Scripts Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/sample-scripts-azure-cli.md
description: This article lists the Azure CLI code samples available for interac
+ ms.devlang: azurecli
mysql Sample Scripts Java Connection Pooling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/sample-scripts-java-connection-pooling.md
+ Last updated 02/28/2018
mysql Sample Change Server Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-change-server-configuration.md
description: This sample CLI script lists all available server configurations an
+ ms.devlang: azurecli
mysql Sample Create Server And Firewall Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-create-server-and-firewall-rule.md
description: This sample CLI script creates an Azure Database for MySQL server a
+ ms.devlang: azurecli
mysql Sample Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-point-in-time-restore.md
description: This sample Azure CLI script shows how to restore an Azure Database
+ ms.devlang: azurecli
mysql Sample Scale Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-scale-server.md
description: This sample CLI script scales Azure Database for MySQL server to a
+ ms.devlang: azurecli
mysql Sample Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-server-logs.md
description: This sample Azure CLI script shows how to enable and download the s
+ ms.devlang: azurecli
mysql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/security-controls-policy.md
+ # Azure Policy Regulatory Compliance controls for Azure Database for MySQL
mysql Select Right Deployment Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/select-right-deployment-type.md
description: This article describes what factors to consider before you deploy A
+ Last updated 08/26/2020
mysql Single Server Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server-overview.md
Title: Overview - Azure Database for MySQL Single Server
description: Learn about the Azure Database for MySQL Single server, a relational database service in the Microsoft cloud based on the MySQL Community Edition. +
mysql Single Server Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server-whats-new.md
Title: What's new in Azure Database for MySQL Single Server
description: Learn about recent updates to Azure Database for MySQL - Single server, a relational database service in the Microsoft cloud based on the MySQL Community Edition. +
Azure Database for MySQL is a relational database service in the Microsoft cloud
This article summarizes new releases and features in Azure Database for MySQL - Single Server beginning in January 2021. Listings appear in reverse chronological order, with the most recent updates first.
+## May 2022
+
+Enabled the ability to change the innodb_ft_server_stopword_table server parameter from the Azure portal and the Azure CLI.
+Users can now change the value of the innodb_ft_server_stopword_table parameter by using the Azure portal and the Azure CLI. This parameter lets you configure your own InnoDB FULLTEXT index stopword list for all InnoDB tables. For more information, see [innodb_ft_server_stopword_table](https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_ft_server_stopword_table).
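As a minimal sketch of the CLI path (the resource group, server name, and stopword table names below are placeholders, not values from this release note), the parameter could be updated and verified like this:

```azurecli
# Sketch: point a single server at a custom InnoDB FULLTEXT stopword table.
# "myresourcegroup", "mydemoserver", and "mydb/my_stopwords" are placeholder names.
az mysql server configuration set \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name innodb_ft_server_stopword_table \
  --value "mydb/my_stopwords"

# Confirm the new value.
az mysql server configuration show \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name innodb_ft_server_stopword_table
```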
+ ## March 2022 This release of Azure Database for MySQL - Single Server includes the following updates.
mysql Tutorial Design Database Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/tutorial-design-database-using-cli.md
description: This tutorial explains how to create and manage Azure Database for
+ ms.devlang: azurecli Last updated 12/02/2019
mysql Tutorial Design Database Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/tutorial-design-database-using-portal.md
description: This tutorial explains how to create and manage Azure Database for
+ Last updated 3/20/2020
mysql Tutorial Design Database Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/tutorial-design-database-using-powershell.md
description: This tutorial explains how to create and manage Azure Database for
+ ms.devlang: azurepowershell Last updated 04/29/2020
mysql Tutorial Provision Mysql Server Using Azure Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/tutorial-provision-mysql-server-using-Azure-Resource-Manager-templates.md
description: This tutorial explains how to provision and automate Azure Database
+ Last updated 12/02/2019
mysql Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/videos.md
description: This page lists video content relevant for learning Azure Database
+ Last updated 02/28/2018
open-datasets Dataset 1000 Genomes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-1000-genomes.md
Phase 3 Analysis: A global reference for human genetic variation Nature 526, 68-
For details on data formats refer to http://www.internationalgenome.org/formats
+**[NEW]** The dataset is also available in [parquet format](https://github.com/microsoft/genomicsnotebook/tree/main/vcf2parquet-conversion/1000genomes)
+ [!INCLUDE [Open Dataset usage notice](../../includes/open-datasets-usage-note.md)] ## Data source
West Central US: 'https://dataset1000genomes-secondary.blob.core.windows.net/dat
[SAS Token](../storage/common/storage-sas-overview.md): sv=2019-10-10&si=prod&sr=c&sig=9nzcxaQn0NprMPlSh4RhFQHcXedLQIcFgbERiooHEqM%3D
+## Data Access: Curated 1000 genomes dataset in parquet format
+
+East US: https://curated1000genomes.blob.core.windows.net/dataset
+
+SAS Token: sv=2018-03-28&si=prod&sr=c&sig=BgIomQanB355O4FhxqBL9xUgKzwpcVlRZdBewO5%2FM4E%3D
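
As a minimal sketch (the storage account and container names are read from the East US URL above, the SAS token is the one listed, and `<blob-path>` is a placeholder for a real blob name from the listing), the curated parquet data could be browsed and downloaded with the Azure CLI:

```azurecli
# Sketch: list the first blobs in the curated 1000 Genomes parquet container using the read-only SAS token.
az storage blob list \
  --account-name curated1000genomes \
  --container-name dataset \
  --sas-token "sv=2018-03-28&si=prod&sr=c&sig=BgIomQanB355O4FhxqBL9xUgKzwpcVlRZdBewO5%2FM4E%3D" \
  --num-results 20 \
  --output table

# Download a single blob once you know its name; replace <blob-path> with a path from the listing.
az storage blob download \
  --account-name curated1000genomes \
  --container-name dataset \
  --name "<blob-path>" \
  --file ./local-copy.parquet \
  --sas-token "sv=2018-03-28&si=prod&sr=c&sig=BgIomQanB355O4FhxqBL9xUgKzwpcVlRZdBewO5%2FM4E%3D"
```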
+ ## Use Terms Following the final publications, data from the 1000 Genomes Project is publicly available without embargo to anyone for use under the terms provided by the dataset source ([http://www.internationalgenome.org/data](http://www.internationalgenome.org/data)). Cite use of the data per the details available in the [FAQs]() from the 1000 Genomes Project.
purview Apply Classifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/apply-classifications.md
This article discusses how to apply classifications on assets.
## Introduction
-Classifications can be system or custom types. System classifications are present in Microsoft Purview by default. Custom classifications can be created based on a regular expression pattern. Classifications can be applied to assets either automatically or manually.
+Classifications can be system or custom types. System classifications are present in Microsoft Purview by default. Custom classifications can be created based on a regular expression pattern and keyword lists. Classifications can be applied to assets either automatically via scanning or manually.
This document explains how to apply classifications to your data.
This document explains how to apply classifications to your data.
In Microsoft Purview, you can apply system or custom classifications on a file, table, or column asset. This article describes the steps to manually apply classifications on your assets. ### Apply classification to a file asset
-Microsoft Purview can scan and automatically classify documentation files. For example, if you have a file named *multiple.docx* and it has a National ID number in its content, Microsoft Purview adds the classification **EU National Identification Number** to the file asset's detail page.
+Microsoft Purview can scan and automatically classify documents. For example, if you have a file named *multiple.docx* and it has a National ID number in its content, Microsoft Purview adds the classification **EU National Identification Number** to the file asset's detail page.
-In some scenarios, you might want to manually add classifications to your file asset. If you have multiple files that are grouped into a resource set, add classifications at the resource set level.
+In some scenarios, you might want to manually add classifications to your file asset, or, if you have multiple files that are grouped into a resource set, to add classifications at the resource set level.
Follow these steps to add a custom or system classification to a partition resource set:
To add a classification to a column:
:::image type="content" source="./media/apply-classifications/confirm-classification-added.png" alt-text="Screenshot showing how to confirm that a classification was added to a column schema.":::
+## View classification details
+Microsoft Purview captures important details like who applied a classification and when it was applied. To view the details, hover over the classification to reveal the Classification details card. The classification details card shows the following information:
+- Classification name - Name of the classification applied on the asset or column.
+- Applied by - Who applied the classification. Possible values are scan and user name.
+- Applied time - Local timestamp when the classification was applied via scan or manually.
+- Classification type - System or custom.
+
+Users with the *Data Curator* role will see additional details for classifications that were applied automatically via scan. These details include the sample count that the scanner read to classify the data and the distinct data count that the scanner found in the sample.
++ ## Impact of rescanning on existing classifications Classifications are first applied based on a sample-set check of your data that matches it against the configured regex pattern. At the time of rescan, if new classifications apply, the column gets the additional classifications. Existing classifications stay on the column and must be removed manually.
purview Catalog Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-permissions.md
Title: Understand access and permissions
-description: This article gives an overview permission, access control, and collections in Microsoft Purview. Role-based access control is managed within Microsoft Purview itself, so this guide will cover the basics to secure your information.
+ Title: Understand access and permissions in the Microsoft Purview Data Map
+description: This article gives an overview of permissions, access control, and collections in the Microsoft Purview Data Map. Role-based access control is managed within the Microsoft Purview Data Map itself, so this guide will cover the basics to secure your information.
Last updated 03/09/2022
-# Access control in Microsoft Purview
+# Access control in the Microsoft Purview Data Map
-Microsoft Purview uses **Collections** to organize and manage access across its sources, assets, and other artifacts. This article describes collections and access management in your Microsoft Purview account.
+The Microsoft Purview Data Map uses **Collections** to organize and manage access across its sources, assets, and other artifacts. This article describes collections and access management in your Microsoft Purview Data Map.
> [!IMPORTANT]
-> This article refers to permissions required for the Microsoft Purview governance portal. If you are looking for permissions information for the Microsoft Purview compliance center, follow [the article for permissions in the Microsoft Purview compliance portal](/microsoft-365/compliance/microsoft-365-compliance-center-permissions).
+> This article refers to permissions required for the Microsoft Purview governance portal, and applications like the Microsoft Purview Data Map, Data Catalog, Data Estate Insights, etc. If you are looking for permissions information for the Microsoft Purview compliance center, follow [the article for permissions in the Microsoft Purview compliance portal](/microsoft-365/compliance/microsoft-365-compliance-center-permissions).
## Collections
purview Concept Resource Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-resource-sets.md
Previously updated : 09/24/2021 Last updated : 05/09/2022

# Understanding resource sets
When Microsoft Purview detects resources that it thinks are part of a resource s
## Advanced resource sets
-By default, Microsoft Purview determines the schema and classifications for resource sets based upon the [resource set file sampling rules](sources-and-scans.md#resource-set-file-sampling). Microsoft Purview can customize and further enrich your resource set assets through the **Advanced Resource Sets** capability. When Advanced Resource Sets are enabled, Microsoft Purview run extra aggregations to compute the following information about resource set assets:
+Microsoft Purview can customize and further enrich your resource set assets through the **Advanced Resource Sets** capability. Advanced resource sets allow Microsoft Purview to understand the underlying partitions of the ingested data and enable the creation of [resource set pattern rules](how-to-resource-set-pattern-rules.md) that customize how Microsoft Purview groups resource sets during scanning.
+
+When Advanced Resource Sets are enabled, Microsoft Purview runs extra aggregations to compute the following information about resource set assets:
-- Most up-to-date schema and classifications to accurately reflect schema drift from changing metadata.
- A sample path from a file that comprises the resource set.
- A partition count that shows how many files make up the resource set.
-- A schema count that shows how many unique schemas were found. This value is either a number between 1–5, or for values greater than 5, 5+.
-- A list of partition types when more than a single partition type is included in the resource set. For example, an IoT sensor might output both XML and JSON files, although both are logically part of the same resource set.
- The total size of all files that comprise the resource set.

These properties can be found on the asset details page of the resource set.

:::image type="content" source="media/concept-resource-sets/resource-set-properties.png" alt-text="The properties computed when advanced resource sets is on" border="true":::
-Enabling advanced resource sets also allows for the creation of [resource set pattern rules](how-to-resource-set-pattern-rules.md) that customize how Microsoft Purview groups resource sets during scanning.
-
### Turning on advanced resource sets

Advanced resource sets is off by default in all new Microsoft Purview instances. Advanced resource sets can be enabled from **Account information** in the management hub.
When scanning a storage account, Microsoft Purview uses a set of defined pattern
To customize or override how Microsoft Purview detects which assets are grouped as resource sets and how they are displayed within the catalog, you can define pattern rules in the management center. For step-by-step instructions and syntax, please see [resource set pattern rules](how-to-resource-set-pattern-rules.md).
+## Known limitations with resource sets
+
+- By default, resource set assets will only be deleted by a scan if [Advanced Resource sets](#advanced-resource-sets) are enabled. If this capability is off, resource set assets can only be deleted manually or via API.
+- Currently, resource set assets will apply the first schema and classification discovered by the scan. Subsequent scans won't update the schema.
+ ## Next steps To get started with Microsoft Purview, see [Quickstart: Create a Microsoft Purview account](create-catalog-portal.md).
purview How To Manage Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-manage-quotas.md
Title: Manage resources and quotas description: Learn about the quotas and limits on resources for Microsoft Purview and how to request quota increases.--++ Last updated 03/21/2022
purview How To Search Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-search-catalog.md
The following table contains the operators that can be used to compose a search
### Known limitations
-* Searching for classifications only matches on the formal classification name. For example, the keywords "World Cities" don't match classification "MICROSOFT.GOVERNMENT.CITY_NAME".
* Grouping isn't supported within a field search. Customers should use operators to connect field searches. For example, `name:(alice AND bob)` is invalid search syntax, but `name:alice AND name:bob` is supported.

## Next steps
purview Manage Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-credentials.md
Previously updated : 02/16/2022 Last updated : 05/09/2022
This article describes how you can create credentials in Microsoft Purview. Thes
A credential is authentication information that Microsoft Purview can use to authenticate to your registered data sources. A credential object can be created for various types of authentication scenarios, such as Basic Authentication requiring a username and password. Credentials capture the specific information required to authenticate, based on the chosen type of authentication method. Credentials use your existing Azure Key Vault secrets for retrieving sensitive authentication information during the credential creation process.
-In Microsoft Purview, there are few options to use as authentication method to scan data sources such as the following options:
+In Microsoft Purview, there are several options you can use as the authentication method to scan data sources, such as the following. See each [data source article](azure-purview-connector-overview.md) for its supported authentication methods.
- [Microsoft Purview system-assigned managed identity](#use-microsoft-purview-system-assigned-managed-identity-to-set-up-scans) - [User-assigned managed identity](#create-a-user-assigned-managed-identity) (preview)
In Microsoft Purview, there are few options to use as authentication method to s
- SQL Authentication (using [Key Vault](#create-azure-key-vaults-connections-in-your-microsoft-purview-account)) - Service Principal (using [Key Vault](#create-azure-key-vaults-connections-in-your-microsoft-purview-account)) - Consumer Key (using [Key Vault](#create-azure-key-vaults-connections-in-your-microsoft-purview-account))
+- And more
Before creating any credentials, consider your data source types and networking requirements to decide which authentication method you need for your scenario.

## Use Microsoft Purview system-assigned managed identity to set up scans

If you're using the Microsoft Purview system-assigned managed identity (SAMI) to set up scans, you won't need to create a credential and link your key vault to Microsoft Purview to store them. For detailed instructions on adding the Microsoft Purview SAMI to have access to scan your data sources, refer to the data source-specific authentication sections below:
At the bottom of the page, under Exception, enable the **Allow trusted Microsoft
To connect to Azure Key Vault with private endpoints, follow [Azure Key Vault's private endpoint documentation](../key-vault/general/private-link-service.md).
+> [!NOTE]
+> Private endpoint connection option is supported when using Azure integration runtime in [managed virtual network](catalog-managed-vnet.md) to scan the data sources. For self-hosted integration runtime, you need to enable [trusted Microsoft services](#trusted-microsoft-services).
+
### Microsoft Purview permissions on the Azure Key Vault

Currently Azure Key Vault supports two permission models:
purview Manage Integration Runtimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-integration-runtimes.md
Previously updated : 04/13/2022 Last updated : 05/09/2022

# Create and manage a self-hosted integration runtime
Your self-hosted integration runtime machine needs to connect to several resourc
* The Microsoft Purview services used to manage the self-hosted integration runtime. * The data sources you want to scan using the self-hosted integration runtime. * The managed Storage account and Event Hubs resource created by Microsoft Purview. Microsoft Purview uses these resources to ingest the results of the scan, among many other things, so the self-hosted integration runtime need to be able to connect with these resources.
-* The Azure Key Vault used to store credentials.
There are two firewalls to consider:
Depending on the sources you want to scan, you also need to allow other domains
| Domain names | Outbound ports | Description |
| -- | -- | - |
-| `<your_key_vault_name>.vault.azure.net` | 443 | Required if any credentials are stored in Azure Key Vault. |
| `<your_storage_account>.dfs.core.windows.net` | 443 | When scanning Azure Data Lake Store Gen 2. |
| `<your_storage_account>.blob.core.windows.net` | 443 | When scanning Azure Blob storage. |
| `<your_sql_server>.database.windows.net` | 1433 | When scanning Azure SQL Database. |
purview Register Scan Power Bi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant.md
Use any of the following deployment checklists during the setup or for troublesh
3. If user is recently created, login with the user at least once to make sure password is reset successfully and user can successfully initiate the session. 4. There is no MFA or Conditional Access Policies are enforced on the user. 9. Validate App registration settings to make sure:
- 1. App registration exists in your Azure Active Directory tenant.
- 2. Under **API permissions**, the following **delegated permissions** and **grant admin consent for the tenant** is set up with read for the following APIs:
+ 5. App registration exists in your Azure Active Directory tenant.
+ 6. Under **API permissions**, the following **delegated permissions** and **grant admin consent for the tenant** is set up with read for the following APIs:
1. Power BI Service Tenant.Read.All 2. Microsoft Graph openid 3. Microsoft Graph User.Read
purview Register Scan Synapse Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-synapse-workspace.md
GRANT REFERENCES ON DATABASE SCOPED CREDENTIAL::[scoped_credential] TO [PurviewA
> [!IMPORTANT] > Currently, we do not support setting up scans for an Azure Synapse workspace from the Microsoft Purview governance portal, if you cannot enable **Allow Azure services and resources to access this workspace** on your Azure Synapse workspaces. In this case:
-> - You can use [Microsoft Purview Rest API - Scans - Create Or Update](/rest/api/purview/scanningdataplane/scans/create-or-update/) to create a new scan for your Synapse workspaces including dedicated and serverless pools.
+> - You can use [Microsoft Purview REST API - Scans - Create Or Update](/rest/api/purview/scanningdataplane/scans/create-or-update/) to create a new scan for your Synapse workspaces including dedicated and serverless pools.
> - You must use **SQL Auth** as authentication mechanism. ### Create and run scan
resource-mover Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/common-questions.md
This article answers common questions about [Azure Resource Mover](overview.md).
### Can I move resources across any regions?
-Currently, you can move resources from any source public region to any target public region, depending on the [resource types available in that region](https://azure.microsoft.com/global-infrastructure/services/). Moving resources in Azure Government regions isn't currently supported.
+Currently, you can move resources from any source public region to any target public region and within regions in China, depending on the [resource types available in that region](https://azure.microsoft.com/global-infrastructure/services/). Moving resources within Azure Gov is also supported (US DoD Central, US DoD East, US Gov Arizona, US Gov Texas, US Gov Virginia). US Sec East/West/West Central are not currently supported.
+ ### What regions are currently supported?
Azure Resource Mover is currently available as follows:
**Support** | **Details** |
-Move support | Azure resources that are supported for move with Resource Mover can be moved from any public region to another public region.
+Move support | Azure resources that are supported for move with Resource Mover can be moved from any public region to another public region and within regions in China. Moving resources within Azure Gov is also supported (US DoD Central, US DoD East, US Gov Arizona, US Gov Texas, US Gov Virginia). US Sec East/West/West Central are not currently supported.
Metadata support | Supported regions for storing metadata about machines to be moved include East US2, North Europe, Southeast Asia, Japan East, UK South, and Australia East as metadata regions. <br/><br/> Moving resources within the Azure China region is also supported with the metadata region China North2. ### What resources can I move across regions using Resource Mover?
security Threat Modeling Tool Input Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-input-validation.md
In the preceding code example, the input value cannot be longer than 11 characte
| **Applicable Technologies** | Generic | | **Attributes** | N/A | | **References** | N/A |
-| **Steps** | Many javascript functions don't do encoding by default. When assigning untrusted input to DOM elements via such functions, may result in cross site script (XSS) executions.|
+| **Steps** | Many JavaScript functions don't do encoding by default. Assigning untrusted input to DOM elements via such functions may result in cross-site scripting (XSS) execution.|
### Example Following are insecure examples:
sentinel Normalization About Parsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-about-parsers.md
_Im_Dns
> When using the ASIM parsers in the **Logs** page, the time range selector is set to `custom`. You can still set the time range yourself. Alternatively, specify the time range using parser parameters. >
-The following table lists unifying parsers available:
+The following table lists the available unifying parsers:
| Schema | Unifying parser | | | - |
Learn more about ASIM parsers:
- [ASIM parsers overview](normalization-parsers-overview.md) - [Manage ASIM parsers](normalization-manage-parsers.md) - [Develop custom ASIM parsers](normalization-develop-parsers.md)
+- [The ASIM parsers list](normalization-parsers-list.md)
Learn more about the ASIM in general:
sentinel Normalization About Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-about-schemas.md
Each schema field has a type. Some have built-in, Log Analytics types, such as `
|**IP address** |String | Microsoft Sentinel schemas don't have separate IPv4 and IPv6 addresses. Any IP address field might include either an IPv4 address or an IPv6 address, as follows: <br><br>- **IPv4** in a dot-decimal notation.<br>- **IPv6** in 8-hextets notation, allowing for the short form.<br><br>For example:<br>- **IPv4**: `192.168.10.10` <br>- **IPv6**: `FEDC:BA98:7654:3210:FEDC:BA98:7654:3210`<br>- **IPv6 short form**: `1080::8:800:200C:417A` | |**FQDN** | String | A fully qualified domain name using a dot notation, for example, `docs.microsoft.com`. For more information, see [The Device entity](#the-device-entity). | |<a name="hostname"></a>**Hostname** | String | A hostname which is not an FQDN, includes up to 63 characters including letters, numbers and hyphens. For more information, see [The Device entity](#the-device-entity).|
-|<a name="domaintype"></a>**DomainType** | Enumerated | The type of domain stored in domain and FQDN fields. Supported values include `FQDN` and `Windows`. For more information, see [The Device entity](#the-device-entity). |
-|<a name="dvcidtype"></a>**DvcIdType** | Enumerated | The type of the device ID stored in DvcId fields. Supported values include `AzureResourceId`, `MDEid`, `MD4IoTid`, `VMConnectionId`, `AwsVpcId`, `VectraId`, and `Other`. For more information, see [The Device entity](#the-device-entity). |
+| **DomainType** | Enumerated | The type of domain stored in domain and FQDN fields. For a list of values and more information, see [The Device entity](#the-device-entity). |
+| **DvcIdType** | Enumerated | The type of the device ID stored in DvcId fields. For a list of allowed values and further information, refer to [DvcIdType](#dvcidtype). |
|<a name="devicetype"></a>**DeviceType** | Enumerated | The type of the device stored in DeviceType fields. Possible values include:<br>- `Computer`<br>- `Mobile Device`<br>- `IOT Device`<br>- `Other`. For more information, see [The Device entity](#the-device-entity). | |<a name="username"></a>**Username** | String | A valid username in one of the supported [types](#usernametype). For more information, see [The User entity](#the-user-entity). |
-|<a name="usernametype"></a>**UsernameType** | Enumerated | The type of username stored in username fields. Supported values include `UPN`, `Windows`, `DN`, `Simple`, and `Unknown`. For more information, see [The User entity](#the-user-entity). |
+|<a name="usernametype"></a>**UsernameType** | Enumerated | The type of username stored in username fields. For more information and list of supported values, see [The User entity](#the-user-entity). |
|<a name="useridtype"></a>**UserIdType** | Enumerated | The type of the ID stored in user ID fields. <br><br>Supported values are `SID`, `UID`, `AADID`, `OktaId`, and `AWSId`. For more information, see [The User entity](#the-user-entity). |
-|<a name="usertype"></a>**UserType** | Enumerated | The type of a user. Supported values include: `Regular`, `Machine`, `Admin`, `System`, `Application`, `Service Principal`, and `Other`<br><br>. For more information, see [The User entity](#the-user-entity). |
+|<a name="usertype"></a>**UserType** | Enumerated | The type of a user. For more information and list of allowed values, see [The User entity](#the-user-entity). |
|<a name="apptype"></a>**AppType** | Enumerated | The type of an application. Supported values include: `Process`<br>, `Service`, `Resource`, `URL`, `SaaS application`, `CloudService`, and `Other`. | |**Country** | String | A string using [ISO 3166-1](https://www.iso.org/iso-3166-country-codes.html), according to the following priority: <br><br> - Alpha-2 codes, such as `US` for the United States. <br> - Alpha-3 codes, such as `USA` for the United States. <br>- Short name.<br><br>The list of codes can be found on the [International Standards Organization (ISO) website](https://www.iso.org/obp/ui/#search).| |**Region** | String | The country subdivision name, using ISO 3166-2.<br><br>The list of codes can be found on the [International Standards Organization (ISO) website](https://www.iso.org/obp/ui/#search).|
Each schema explicitly defines the central entities and entity fields. The follo
### The User entity
-The descriptors used for a user are Actor, Target User, and Updated User, as described in the following scenarios:
+Users are central to activities reported by events. The fields listed in this section are used to describe the users involved in the action. Prefixes are used to designate the role of the user in the activity. The prefixes `Src` and `Dst` are used to designate the user role in network-related events, in which a source system and a destination system communicate. The prefixes `Actor` and `Target` are used for system-oriented events such as process events.
-|Activity |Full scenario |Single entity scenario used for aliasing |
-||||
-|**Create user** | An Actor created or modified a Target User. | The (Target) User was created. |
-|**Modify user** | An Actor renamed Target User to Updated User. The Updated User usually doesn't have all the information associated with a user and has some overlap with the Target User. | |
-|**Network connection** | A process running as Actor on the source host, communicating with a process running as Target User on the destination host. | |
-|**DNS request** | An Actor initiated a DNS query. | |
-|**Sign-in** | An Actor signed in to a system as a Target User. |A (Target) User signed in. |
-|**Process creation** | An Actor (the user associated with the initiating process) has initiated process creation. The process created runs under the credentials of a Target User (the user related to the target process). | The process created runs under the credentials of a (Target) User. |
-|**Email** | An Actor sends an email to a Target User. | |
+#### The user ID
+| Field | Class | Type | Description |
+|-|-||-|
+| <a name="userid"></a>**UserId** | Optional | String | A machine-readable, alphanumeric, unique representation of the user. |
+| <a name="useridtype"></a>**UserIdType** | Optional | UserIdType | The type of the ID stored in the [UserId](#userid) field. |
+| **SID**, **UID**, **AADID**, **OktaId**, **AWSId** | Optional | String | Fields used to store additional user IDs, if the original event includes multiple user IDs. Select the ID most associated with the event as the primary ID stored in [UserId](#userid). |
-The following table describes the supported identifiers for a user:
+The allowed values for a user ID type are:
-|Normalized field |Type |Format and supported types |
-||||
-|**UserId** | String | A machine-readable, alphanumeric, unique representation of a user in a system. <br><br>Format and supported types include:<br> - **SID** (Windows): `S-1-5-21-1377283216-344919071-3415362939-500`<br> - **UID** (Linux): `4578`<br> - **AADID** (Azure Active Directory): `9267d02c-5f76-40a9-a9eb-b686f3ca47aa`<br> - **OktaId**: `00urjk4znu3BcncfY0h7`<br> - **AWSId**: `72643944673`<br><br> Store the ID type in the **UserIdType** field. If other IDs are available, we recommend that you normalize the field names to **UserSid**, **UserUid**, **UserAADID**, **UserOktaId**, and **UserAwsId**, respectively. |
-|**Username** | String | A username, including domain information when available, in one of the following formats and in the following order of priority: <br> - **Upn/Email**: `johndow@contoso.com` <br> - **Windows**: `Contoso\johndow` <br> - **DN**: `CN=Jeff Smith,OU=Sales,DC=Fabrikam,DC=COM` <br> - **Simple**: `johndow`. Use this form only if domain information is not available. <br><br> Store the Username type in the **UsernameType** field. |
+| Type | Description | Example |
+| - | - | - |
+| **SID** | A Windows user ID. | `S-1-5-21-1377283216-344919071-3415362939-500` |
+| **UID** | A Linux user ID. | `4578` |
+| **AADID**| An Azure Active Directory user ID.| `9267d02c-5f76-40a9-a9eb-b686f3ca47aa` |
+| **OktaId** | An Okta user ID. | `00urjk4znu3BcncfY0h7` |
+| **AWSId** | An AWS user ID. | `72643944673` |
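For example, a minimal normalization sketch (not taken from the source article), assuming a hypothetical source column `AccountSid` that holds a Windows SID for the acting user:

```KQL
// Hypothetical mapping: AccountSid is an assumed source field, and ActorUserSid
// is used as an illustrative name for the type-specific field that keeps the raw value.
| extend
    ActorUserId     = AccountSid,
    ActorUserIdType = 'SID',
    ActorUserSid    = AccountSid
```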
-### The Process entity
+#### The user name
-The descriptors used for a user are Acting Process, Target Process, and Parent Process, as described in the following scenarios:
+| Field | Class | Type | Description |
+|-|-||-|
+| <a name="username"></a>**Username** | Optional | String | The source username, including domain information when available. Use the simple form only if domain information isn't available. Store the Username type in the [UsernameType](#usernametype) field. |
+| <a name="usernametype"></a>**UsernameType** | Optional | UsernameType | Specifies the type of the username stored in the [Username](#username) field. |
+| **UPN**, **WindowsUsername**, **DNUsername**, **SimpleUsername** | Optional | String | Fields used to store additional usernames, if the original event includes multiple usernames. Select the username most associated with the event as the primary username stored in [Username](#username). |
-- **Network connection**: An Acting Process initiated a network connection to communicate with Target Process on a remote system.-- **DNS request**: An Acting Process initiated a DNS query.-- **Sign-in**: An Acting Process initiated a signing into a remote system that ran a Target Process on its behalf.-- **Process creation**: An Acting Process has initiated a Target Process creation. The Parent Process is the parent of the acting process.
+The allowed values for a username type are:
-The following table describes the supported identifiers for processes:
+| Type | Description | Example |
+| - | - | - |
+| **UPN** | A UPN or Email address username designator. | `johndow@contoso.com` |
+| **Windows** | A Windows username including a domain. | `Contoso\johndow` |
+| **DN**| An LDAP distinguished name designator.| `CN=Jeff Smith,OU=Sales,DC=Fabrikam,DC=COM` |
+| **Simple** | A simple user name without a domain designator. | `johndow` |
+| **AWSId** | An AWS user ID. | `72643944673` |
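As an illustrative sketch (an assumption, not taken from the source), a parser might derive the username type from the format of the value itself:

```KQL
// Derive the username type from the format of a hypothetical ActorUsername field.
| extend ActorUsernameType = case(
    ActorUsername contains '@',     'UPN',      // user@domain form
    ActorUsername contains '\\',    'Windows',  // DOMAIN\user form
    ActorUsername startswith 'CN=', 'DN',       // LDAP distinguished name
    'Simple')                                   // no domain information available
```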
-|Normalized field |Type |Format and supported types |
-||||
-|**Id** | String | The OS-assigned process ID. |
-|**Guid** | String | The OS-assigned process GUID. The GUID is commonly unique across system restarts, while the ID is often reused. |
-|**Path** | String | The full pathname of the process, including directory and file name. |
-|**Name** | Alias | The process name is an alias to the path. |
+#### Additional user fields
-For more information, see [Microsoft Sentinel Process Event normalization schema reference (preview)](process-events-normalization-schema.md).
+| Field | Class | Type | Description |
+|-|-||-|
+| <a name="usertype"></a>**UserType** | Optional | UserType | The type of source user. Supported values include: `Regular`, `Machine`, `Admin`, `System`, `Application`, `Service Principal`, and `Other`. The value might be provided in the source record by using different terms, which should be normalized to these values. Store the original value in the [OriginalUserType](#originalusertype) field. |
+| <a name="originalusertype"></a>**OriginalUserType** | Optional | String | The original destination user type, if provided by the reporting device. |
-### The Device entity
-The normalization schemas attempt to follow user intuition as much as possible. They handle devices in different ways, depending on the scenario:
+### The device entity
-- When the event context implies a source and target device, the **Src** and **Target** descriptors are used. In such cases, the **Dvc** descriptor is used for the reporting device.-- For single device events, such as local OS events, the **Dvc** descriptor is used.-- If another gateway device is referenced in the event, and the value is different from the reporting device, the **Gateway** descriptor is used.
+Devices, or hosts, are the common terms used for the systems that take part in the event. The `Dvc` prefix is used to designate the primary device on which the event occurs. Some events, such as network sessions, have source and destination devices, designated by the prefix `Src` and `Dst`. In such a case, the `Dvc` prefix is used for the device reporting the event, which might be the source, destination, or a monitoring device.
-Device handling guidelines are further clarified as follows:
+### The device alias
-- **Network connection**: A connection was established from a Source Device (**Src**) to a Target Device (**Target**): The connection was reported by a (reporting) Device (**Dvc**).-- **Proxied network connection**: A connection was established from a Source Device (**Src**) to a Target Device (**Target**) through a Gateway Device (**Gateway**). A (reporting) Device reported the connection.-- **DNS request**: A DNS query was initiated from a Source Device (**Src**).-- **Sign-in**: A sign-in was initiated from a Source Device (**Src**) to a remote system on a Target Device (**Target**).-- **Process**: A process was initiated on a Device (**Dvc**).
+| Field | Class | Type | Description |
+||-||--|
+| <a name="dvc"></a>**Dvc**, <a name="src"></a>**Src**, <a name="dst"></a>**Dst** | Mandatory | String | The `Dvc`, `Src`, or `Dst` fields are used as a unique identifier of the device. The field is set to the best available identifier for the device. These fields can alias the [FQDN](#fqdn), [DvcId](#dvcid), [Hostname](#hostname), or [IpAddr](#ipaddr) fields. For cloud sources, for which there is no apparent device, use the same value as the [Event Product](normalization-common-fields.md#eventproduct) field. |
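For example, a minimal sketch of populating the alias from the best available identifier (an assumption for illustration, presuming the `Dvc`-prefixed normalized fields have already been set):

```KQL
// Prefer the FQDN, then the hostname, then the IP address, then the device ID.
| extend Dvc = coalesce(DvcFQDN, DvcHostname, DvcIpAddr, DvcId)
```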
-The following table describes the supported identifiers for devices:
-|Normalized field |Type |Format and supported types |
-||||
-|**Hostname** | String | |
-|**FQDN** | String | A fully qualified domain name. |
-|**IpAddr** | IP address | While devices might have multiple IP addresses, events usually have a single identifying IP address. The exception is a gateway device that might have two relevant IP addresses. For a gateway device, use `UpstreamIpAddr` and `DownstreamIpAddr`. |
-|**HostId** | String | |
+#### The device name
+Reported device names may include a hostname only, or a fully qualified domain name (FQDN), which includes both a hostname and a domain name. The FQDN might be expressed using several formats. The following fields support the different variants in which the device name might be provided.
+| Field | Class | Type | Description |
+||-||--|
+| <a name ="hostname"></a>**Hostname** | Recommended | Hostname | The short hostname of the device. |
+| <a name="domain"></a>**Domain** | Recommended | String | The domain of the device on which the event occurred, without the hostname. |
+| <a name="domaintype"></a>**DomainType** | Recommended | Enumerated | The type of [Domain](#domain). Supported values include `FQDN` and `Windows`. This field is required if the [Domain](#domain) field is used. |
+| <a name="fqdn"></a>**FQDN** | Optional | String | The FQDN of the device, including both [Hostname](#hostname) and [Domain](#domain). This field supports both traditional FQDN format and Windows domain\hostname format. The [DomainType](#domaintype) field reflects the format used. |
-> [!NOTE]
-> `Domain` is a typical attribute of a device, but it isn't a complete identifier.
->
+For example:
+
+| Field | Value for input `appserver.contoso.com` | Value for input `appserver` |
+| -- | | |
+| **Hostname** | `appserver` | `appserver` |
+| **Domain** | `contoso.com` | \<empty\> |
+| **DomainType** | `FQDN` | \<empty\> |
+| **FQDN** | `appserver.contoso.com` | \<empty\> |
++
+When the value provided by the source is an FQDN, or when the value may be either an FQDN or a short hostname, the parser should calculate the four values. The following code snippet performs this calculation, in this case setting the `Dvc` fields based on an input in the `Host` field:
+
+``` KQL
+ | extend SplitHostname = split(Host,".")
+ | extend
+ DvcDomain = tostring(strcat_array(array_slice(SplitHostname, 1, -1), '.')),
+ DvcFQDN = iif (array_length(SplitHostname) > 1, Host, ''),
+ DvcDomainType = iif (array_length(SplitHostname) > 1, 'FQDN', '')
+ | extend
+ DvcHostname = tostring(SplitHostname[0])
+ | project-away SplitHostname
+```
++
+#### The device ID
++
+| Field | Class | Type | Description |
+||-||--|
+| <a name ="dvcid"></a>**DvcId** | Optional | String | The unique ID of the device. For example: `41502da5-21b7-48ec-81c9-baeea8d7d669` |
+| <a name="dvcidtype"></a>**DvcIdType** | Optional | Enumerated | The type of [DvcId](#dvcid). This field is required if the [DvcId](#dvcid) field is used. |
+| **DvcAzureResourceId**, **DvcMDEid**, **DvcMD4IoTid**, **DvcVMConnectionId**, **DvcVectraId**, **DvcAwsVpcId** | Optional | String | Fields used to store additional device IDs, if the original event includes multiple device IDs. Select the device ID most associated with the event as the primary ID stored in [DvcId](#dvcid). |
+
+Note that when these fields are used in the `Src` or `Dst` role, prepend the role prefix, such as `Src` or `Dst`, but do not prepend a second `Dvc` prefix.
+
+The allowed values for a device ID type are:
+
+| Type | Description |
+| - | - |
+| **MDEid** | The system ID assigned by Microsoft Defender for Endpoint. |
+| **AzureResourceId** | The Azure resource ID. |
+| **MD4IoTid**| The Microsoft Defender for IoT resource ID.|
+| **VMConnectionId** | The Azure Monitor VM Insights solution resource ID. |
+| **AwsVpcId** | An AWS VPC ID. |
+| **VectraId** | A Vectra AI assigned resource ID.|
+| **Other** | An ID type not listed above.|
+
+For example, the Azure Monitor [VM Insights solution](/azure/azure-monitor/vm/vminsights-log-search) provides network session information in the `VMConnection` table. The table provides an Azure resource ID in the `_ResourceId` field and a VM Insights-specific device ID in the `Machine` field. Use the following mapping to represent those IDs:
+
+| Field | Map to |
+| -- | -- |
+| **DvcId** | The `Machine` field in the `VMConnection` table. |
+| **DvcIdType** | The value `VMConnectionId` |
+| **DvcAzureResourceId** | The `_ResourceId` field in the `VMConnection` table. |
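A minimal KQL sketch of this mapping might look as follows (a sketch only, using the `VMConnection` fields described above):

```KQL
VMConnection
| extend
    DvcId              = Machine,
    DvcIdType          = 'VMConnectionId',
    DvcAzureResourceId = _ResourceId
```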
++
+#### Additional device fields
++
+| Field | Class | Type | Description |
+||-||--|
+| <a name ="ipaddr"></a>**IpAddr** | Recommended | IP address | The IP address of the device. <br><br>Example: `45.21.42.12` |
+| <a name = "dvcdescription"></a>**DvcDescription** | Optional | String | A descriptive text associated with the device. For example: `Primary Domain Controller`. |
+| <a name="macaddr"></a>**MacAddr** | Optional | MAC | The MAC address of the device on which the event occurred or which reported the event. <br><br>Example: `00:1B:44:11:3A:B7` |
+| <a name="zone"></a>**Zone** | Optional | String | The network on which the event occurred or which reported the event, depending on the schema. The zone is defined by the reporting device.<br><br>Example: `Dmz` |
+| <a name="dvcos"></a>**DvcOs** | Optional | String | The operating system running on the device on which the event occurred or which reported the event. <br><br>Example: `Windows` |
+| <a name="dvcosversion"></a>**DvcOsVersion** | Optional | String | The version of the operating system on the device on which the event occurred or which reported the event. <br><br>Example: `10` |
+| <a name="dvcaction"></a>**DvcAction** | Optional | String | For reporting security systems, the action taken by the system, if applicable. <br><br>Example: `Blocked` |
+| <a name="dvcoriginalaction"></a>**DvcOriginalAction** | Optional | String | The original [DvcAction](#dvcaction) as provided by the reporting device. |
+| <a name="interface"></a>**Interface** | Optional | String | The network interface on which data was captured. This field is typically relevant to network related activity which is captured by an intermediate or tap device. |
+| <a name="subscription"></a>**SubscriptionId** | Optional | String | The cloud platform subscription ID the device belongs to. **DvcSubscriptionId** maps to a subscription ID on Azure and to an account ID on AWS. |
+
+Note that when the fields in this list that carry the `Dvc` prefix are used in the `Src` or `Dst` role, prepend the role prefix, such as `Src` or `Dst`, but do not prepend a second `Dvc` prefix.
-For more information, see [Microsoft Sentinel Authentication normalization schema reference (preview)](authentication-normalization-schema.md).
### Sample entity mapping
sentinel Normalization Develop Parsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-develop-parsers.md
When each source value maps to a target value, define the mapping using the `dat
Notice that lookup is useful and efficient also when the mapping has only two possible values.
-When the mapping conditiond are more complex use the `iff` or `case` functions. The `iff` function enables mapping two values:
+When the mapping conditions are more complex, use the `iff` or `case` functions. The `iff` function enables mapping two values:
```KQL | extend EventResult =
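For example, a minimal sketch of an `iff`-based mapping, assuming the raw outcome is held in the `EventOriginalResultDetails` field:

```KQL
// Map a two-valued raw result to the normalized EventResult field.
| extend EventResult = iff(EventOriginalResultDetails == 'Accepted', 'Success', 'Failure')
```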
To make sure that your parser produces valid values, use the ASIM data tester by
<parser name> | limit <X> | invoke ASimDataTester('<schema>') ```
-This test is resource intensive and may not work on your entire data set. Set X to the largest number for which the query will not timeout, or set the time range for the query using the time range picker.
+This test is resource intensive and may not work on your entire data set. Set X to the largest number for which the query will not time out, or set the time range for the query using the time range picker.
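For example, to test a sample of normalized DNS events (assuming the built-in `_Im_Dns` unifying parser and the `Dns` schema name):

```KQL
_Im_Dns
| limit 1000
| invoke ASimDataTester('Dns')
```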
Handle the results as follows:
Learn more about ASIM parsers:
- [ASIM parsers overview](normalization-parsers-overview.md) - [Use ASIM parsers](normalization-about-parsers.md) - [Manage ASIM parsers](normalization-manage-parsers.md)
+- [The ASIM parsers list](normalization-parsers-list.md)
Learn more about the ASIM in general:
sentinel Normalization Manage Parsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-manage-parsers.md
Learn more about ASIM parsers:
- [ASIM parsers overview](normalization-parsers-overview.md) - [Use ASIM parsers](normalization-about-parsers.md) - [Develop custom ASIM parsers](normalization-develop-parsers.md)
+- [The ASIM parsers list](normalization-parsers-list.md)
Learn more about the ASIM in general:
sentinel Normalization Parsers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-parsers-overview.md
Learn more about ASIM parsers:
- [Use ASIM parsers](normalization-about-parsers.md) - [Develop custom ASIM parsers](normalization-develop-parsers.md) - [Manage ASIM parsers](normalization-manage-parsers.md)
+- [The ASIM parsers list](normalization-parsers-list.md)
For more about ASIM, in general, see:
sentinel Process Events Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/process-events-normalization-schema.md
For more information, see [ASIM parsers overview](normalization-parsers-overview
## Add your own normalized parsers
-When implementing custom parsers for the [Process Event](normalization-about-schemas.md#the-process-entity) information model, name your KQL functions using the following syntax: `imProcessCreate<vendor><Product>` and `imProcessTerminate<vendor><Product>`. Replace `im` with `ASim` for the parameter-less version
+When implementing custom process event parsers, name your KQL functions using the following syntax: `imProcessCreate<vendor><Product>` and `imProcessTerminate<vendor><Product>`. Replace `im` with `ASim` for the parameter-less version.
-Add your KQL function to the `imProcess<Type>` and `imProcess` unifying parsers to ensure that any content using the [Process Event](normalization-about-schemas.md#the-process-entity) model also uses your new parser.
+Add your KQL function to the unifying parsers as described in [Managing ASIM parsers](normalization-manage-parsers.md).
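As a hedged illustration, a hypothetical product-specific parser might be saved as a function named according to this convention (the vendor `Contoso`, product `ProcMon`, and source table `Contoso_ProcMon_CL` are placeholders, not real assets):

```KQL
// Hypothetical function body, saved as imProcessCreateContosoProcMon
// (or ASimProcessCreateContosoProcMon for the parameter-less version).
Contoso_ProcMon_CL
| extend
    EventVendor  = 'Contoso',
    EventProduct = 'ProcMon'
```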
### Filtering parser parameters
service-bus-messaging Service Bus Tutorial Topics Subscriptions Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-tutorial-topics-subscriptions-portal.md
private async Task ReceiveMessages(string subscription)
// to the broker for the specified amount of seconds and the broker returns messages as soon as they arrive. The client then initiates // a new connection. So in reality you would not want to break out of the loop. // Also note that the code shows how to batch receive, which you would do for performance reasons. For convenience you can also always
- // use the regular receive pump which we show in our Quick Start and in other github samples.
+ // use the regular receive pump which we show in our Quick Start and in other GitHub samples.
while (true) { try
spring-cloud How To Migrate Standard Tier To Enterprise Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-migrate-standard-tier-to-enterprise-tier.md
+
+ Title: How to migrate an Azure Spring Cloud Basic or Standard tier instance to Enterprise tier
+
+description: How to migrate an Azure Spring Cloud Basic or Standard tier instance to Enterprise tier
++++ Last updated : 05/09/2022+++
+# Migrate an Azure Spring Cloud Basic or Standard tier instance to Enterprise tier
+
+**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+
+This article shows you how to migrate an existing application in Basic or Standard tier to Enterprise tier. When you migrate from Basic or Standard tier to Enterprise tier, VMware Tanzu components will replace the OSS Spring Cloud components to provide more feature support.
+
+## Prerequisites
+
+- An already provisioned Azure Spring Cloud Enterprise tier service instance with Spring Cloud Gateway for Tanzu enabled. For more information, see [Quickstart: Provision an Azure Spring Cloud service instance using Enterprise tier](./quickstart-provision-service-instance-enterprise.md). However, you won't need to change any code in your applications.
+- [Azure CLI version 2.0.67 or later](/cli/azure/install-azure-cli).
+
+## Using Application Configuration Service for configuration
+
+In Enterprise tier, Application Configuration Service provides external configuration support for your apps. Managed Spring Cloud Config Server is only available in Basic and Standard tiers and is not available in Enterprise tier.
+
+## Configure Application Configuration Service for Tanzu settings
+
+Follow these steps to use Application Configuration Service for Tanzu as a centralized configuration service.
+
+# [Azure portal](#tab/azure-portal)
+
+1. Select **Application Configuration Service**.
+1. Select **Overview** to view the running state and resources allocated to Application Configuration Service for Tanzu.
+
+ :::image type="content" source="./media/enterprise/getting-started-enterprise/config-service-overview.png" alt-text="Application Configuration Service Overview screen" lightbox="./media/enterprise/getting-started-enterprise/config-service-overview.png":::
+
+1. Select **Settings**, then add a new entry in the **Repositories** section with the Git backend information.
+
+1. Select **Validate** to validate access to the target URI. After validation completes successfully, select **Apply** to update the configuration settings.
+
+ :::image type="content" source="./media/enterprise/getting-started-enterprise/config-service-settings.png" alt-text="Application Configuration Service Settings overview" lightbox="./media/enterprise/getting-started-enterprise/config-service-settings.png":::
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az spring-cloud application-configuration-service git repo add \
+ --name <entry-name> \
+ --patterns <patterns> \
+ --uri <git-backend-uri> \
+ --label <git-branch-name>
+```
+++
+### Bind application to Application Configuration Service for Tanzu and configure patterns
+
+When you use Application Configuration Service for Tanzu with a Git backend, you must bind the app to Application Configuration Service for Tanzu. After binding the app, you'll need to configure which pattern will be used by the app. Follow these steps to bind and configure the pattern for the app.
+
+# [Azure portal](#tab/azure-portal)
+
+1. Open the **App binding** tab.
+
+1. Select **Bind app** and choose one app in the dropdown, then select **Apply** to bind.
+
+ :::image type="content" source="./media/enterprise/how-to-enterprise-application-configuration-service/config-service-app-bind-dropdown.png" alt-text="How to bind Application Configuration Service screenshot":::
+
+ > [!NOTE]
+ > When you change the bind/unbind status, you must restart or redeploy the app for the binding to take effect.
+
+1. Select **Apps**, then select the [pattern(s)](./how-to-enterprise-application-configuration-service.md#pattern) to be used by the apps.
+
+ 1. In the left navigation menu, select **Apps** to view the list of apps.
+
+ 1. Select the target app to configure patterns for from the `name` column.
+
+ 1. In the left navigation pane, select **Configuration**, then select **General settings**.
+
+ 1. In the **Config file patterns** dropdown, choose one or more patterns from the list.
+
+ :::image type="content" source="./media/enterprise/how-to-enterprise-application-configuration-service/config-service-pattern.png" alt-text="Bind Application Configuration Service in deployment screenshot":::
+
+ 1. Select **Save**.
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az spring-cloud application-configuration-service bind --app <app-name>
+az spring-cloud app deploy \
+ --name <app-name> \
+ --artifact-path <path-to-your-JAR-file> \
+ --config-file-pattern <config-file-pattern>
+```
+++
+For more information, see [Use Application Configuration Service for Tanzu](./how-to-enterprise-application-configuration-service.md).
+
+## Bind an application to Tanzu Service Registry
+
+[Service Registry](https://docs.pivotal.io/spring-cloud-services/2-1/common/service-registry/index.html) is one of the proprietary VMware Tanzu components. It provides your apps with an implementation of the Service Discovery pattern, one of the key concepts of a microservice-based architecture.
+
+Use the following steps to bind an application to Tanzu Service Registry.
+
+1. Open the **App binding** tab.
+
+1. Select **Bind app** and choose one app in the dropdown, then select **Apply** to bind.
+
+ :::image type="content" source="./media/enterprise/how-to-enterprise-service-registry/service-reg-app-bind-dropdown.png" alt-text="Bind Service Registry dropdown screenshot":::
+
+ > [!NOTE]
+ > When you change the bind/unbind status, you must restart or redeploy the app to make the change take effect.
+
+For more information, see [Use Tanzu Service Registry](./how-to-enterprise-service-registry.md).
+
+## Create and configure an application using Spring Cloud Gateway for Tanzu
+
+[Spring Cloud Gateway for Tanzu](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/index.html) is one of the VMware Tanzu components. It's based on the open-source Spring Cloud Gateway project. Spring Cloud Gateway for Tanzu handles cross-cutting concerns for API development teams, such as Single Sign-On (SSO), access control, rate-limiting, resiliency, security, and more.
+
+Use the following steps to create and configure an application using Spring Cloud Gateway for Tanzu.
+
+### Create an app for Spring Cloud Gateway to route traffic to
+
+1. Create an app which Spring Cloud Gateway for Tanzu will route traffic to by following the instructions in [Quickstart: Build and deploy apps to Azure Spring Cloud using the Enterprise tier](quickstart-deploy-apps-enterprise.md).
+
+1. Assign a public endpoint to the gateway to access it.
+
+ # [Azure portal](#tab/azure-portal)
+
+ 1. Select the **Spring Cloud Gateway** section, then select **Overview** to view the running state and resources given to Spring Cloud Gateway and its operator.
+
+ 1. Select **Yes** next to *Assign endpoint* to assign a public endpoint. You'll get a URL in a few minutes. Save the URL to use later.
+
+ :::image type="content" source="./media/enterprise/getting-started-enterprise/gateway-overview.png" alt-text="Gateway overview screenshot showing assigning endpoint" lightbox="./media/enterprise/getting-started-enterprise/gateway-overview.png":::
+
+ # [Azure CLI](#tab/azure-cli)
+
+ ```azurecli
+ az spring-cloud gateway update --assign-endpoint
+ ```
+
+
+### Configure Spring Cloud Gateway
+
+1. Configure Spring Cloud Gateway for Tanzu properties using the CLI:
+
+ ```azurecli
+ az spring-cloud gateway update \
+ --api-description "<api-description>" \
+ --api-title "<api-title>" \
+ --api-version "v0.1" \
+ --server-url "<endpoint-in-the-previous-step>" \
+ --allowed-origins "*"
+ ```
+
+ You can view the properties in the portal.
+
+ :::image type="content" source="./media/enterprise/how-to-use-enterprise-spring-cloud-gateway/gateway-configuration.png" alt-text="Gateway Configuration settings screenshot" lightbox="./media/enterprise/how-to-use-enterprise-spring-cloud-gateway/gateway-configuration.png":::
+
+1. Configure routing rules to apps.
+
+ Create rules to access apps deployed in the above steps through Spring Cloud Gateway for Tanzu.
+
+ Save the following content to your application's JSON file, changing the placeholders to your application's information.
+
+ ```json
+ [
+ {
+ "title": "<your-title>",
+ "description": "Route to <your-app-name>",
+ "predicates": [
+ "Path=/api/<your-app-name>/owners"
+ ],
+ "filters": [
+ "StripPrefix=2"
+ ],
+ "tags": [
+ "<your-tags>"
+ ]
+ }
+ ]
+ ```
+
+1. Apply the rule to your application using the following command:
+
+ ```azurecli
+ az spring-cloud gateway route-config create \
+ --name <your-app-name-rule> \
+ --app-name <your-app-name> \
+ --routes-file <your-app-name>.json
+ ```
+
+ You can view the routes in the portal.
+
+ :::image type="content" source="media/enterprise/how-to-use-enterprise-spring-cloud-gateway/gateway-route.png" alt-text="Example screenshot of gateway routing configuration" lightbox="media/enterprise/how-to-use-enterprise-spring-cloud-gateway/gateway-route.png":::
+
+## Access application APIs through the gateway endpoint
+
+1. Access the application APIs through the gateway endpoint using the following command:
+
+ ```bash
+ curl https://<endpoint-url>/api/<your-app-name>
+ ```
+
+1. Query the routing rules using the following commands:
+
+ ```azurecli
+ az configure --defaults group=<resource group name> spring-cloud=<service name>
+ az spring-cloud gateway route-config show \
+ --name <your-app-rule> \
+ --query '{appResourceId:properties.appResourceId, routes:properties.routes}'
+ az spring-cloud gateway route-config list \
+ --query '[].{name:name, appResourceId:properties.appResourceId, routes:properties.routes}'
+ ```
+
+For more information, see [Use Spring Cloud Gateway for Tanzu](./how-to-use-enterprise-spring-cloud-gateway.md).
+
+## Next steps
+
+- [Azure Spring Cloud](index.yml)
static-web-apps Get Started Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/get-started-cli.md
Now that the repository is created, you can create a static web app from the Azu
As you execute this command, the CLI starts GitHub interactive login experience. Look for a line in your console that resembles the following message.
- > Please navigate to `https://github.com/login/device` and enter the user code 329B-3945 to activate and retrieve your github personal access token.
+ > Please navigate to `https://github.com/login/device` and enter the user code 329B-3945 to activate and retrieve your GitHub personal access token.
1. Navigate to **https://github.com/login/device**.
storage Customer Managed Keys Configure Key Vault Hsm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-key-vault-hsm.md
Previously updated : 03/30/2021 Last updated : 05/05/2022
Finally, configure Azure Storage encryption with customer-managed keys to use a
Install Azure CLI 2.12.0 or later to configure encryption to use a customer-managed key in a managed HSM. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-To automatically update the key version for a customer-managed key, omit the key version when you configure encryption with customer-managed keys for the storage account. Call [az storage account update](/cli/azure/storage/account#az-storage-account-update) to update the storage account's encryption settings, as shown in the following example. Include the `--encryption-key-source parameter` and set it to `Microsoft.Keyvault` to enable customer-managed keys for the account. Remember to replace the placeholder values in brackets with your own values.
+To automatically update the key version for a customer-managed key, omit the key version when you configure encryption with customer-managed keys for the storage account. For more information about configuring encryption for automatic key rotation, see [Update the key version](customer-managed-keys-overview.md#update-the-key-version).
+
+Next, call [az storage account update](/cli/azure/storage/account#az-storage-account-update) to update the storage account's encryption settings, as shown in the following example. Include the `--encryption-key-source` parameter and set it to `Microsoft.Keyvault` to enable customer-managed keys for the account. Remember to replace the placeholder values in brackets with your own values.
```azurecli hsmurl = $(az keyvault show \
storage Customer Managed Keys Configure Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-key-vault.md
Previously updated : 03/07/2022 Last updated : 05/05/2022
When you configure encryption with customer-managed keys for an existing storage
You can use either a system-assigned or user-assigned managed identity to authorize access to the key vault when you configure customer-managed keys for an existing storage account.

> [!NOTE]
-> To rotate a key, create a new version of the key in Azure Key Vault. Azure Storage does not handle the rotation of the key in Azure Key Vault, so you will need to rotate your key manually or create a function to rotate it on a schedule.
+> To rotate a key, create a new version of the key in Azure Key Vault. Azure Storage does not handle key rotation, so you will need to manage rotation of the key in the key vault. You can [configure key auto-rotation in Azure Key Vault](../../key-vault/keys/how-to-configure-key-rotation.md) or rotate your key manually.
### Configure encryption for automatic updating of key versions
-Azure Storage can automatically update the customer-managed key that is used for encryption to use the latest key version. When the customer-managed key is rotated in Azure Key Vault, Azure Storage will automatically begin using the latest version of the key for encryption.
+Azure Storage can automatically update the customer-managed key that is used for encryption to use the latest key version from the key vault. Azure Storage checks the key vault daily for a new version of the key. When a new version becomes available, Azure Storage automatically begins using the latest version of the key for encryption.
+
+> [!IMPORTANT]
+> Azure Storage checks the key vault for a new key version only once daily. When you rotate a key, be sure to wait 24 hours before disabling the older version.
### [Azure portal](#tab/portal)
storage Customer Managed Keys Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-overview.md
Previously updated : 01/24/2022 Last updated : 05/05/2022
Using a key vault or managed HSM has associated costs. For more information, see
When you configure encryption with customer-managed keys, you have two options for updating the key version: -- **Automatically update the key version:** To automatically update a customer-managed key when a new version is available, omit the key version when you enable encryption with customer-managed keys for the storage account. If the key version is omitted, then Azure Storage checks the key vault or managed HSM daily for a new version of a customer-managed key. Azure Storage automatically uses the latest version of the key.
+- **Automatically update the key version:** To automatically update a customer-managed key when a new version is available, omit the key version when you enable encryption with customer-managed keys for the storage account. If the key version is omitted, then Azure Storage checks the key vault or managed HSM daily for a new version of a customer-managed key. If a new key version is available, then Azure Storage automatically uses the latest version of the key.
+
+ Azure Storage checks the key vault for a new key version only once daily. When you rotate a key, be sure to wait 24 hours before disabling the older version.
+ - **Manually update the key version:** To use a specific version of a key for Azure Storage encryption, specify that key version when you enable encryption with customer-managed keys for the storage account. If you specify the key version, then Azure Storage uses that version for encryption until you manually update the key version. When the key version is explicitly specified, you must manually update the storage account to use the new key version URI when a new version is created. To learn how to update the storage account to use a new version of the key, see [Configure encryption with customer-managed keys stored in Azure Key Vault](customer-managed-keys-configure-key-vault.md) or [Configure encryption with customer-managed keys stored in Azure Key Vault Managed HSM](customer-managed-keys-configure-key-vault-hsm.md). A PowerShell sketch of this manual update follows this list.
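
As an illustration only, the following PowerShell sketch pins a storage account to a specific key version; the resource group, account, vault URI, key name, and key version are placeholder values, and `Set-AzStorageAccount` with `-KeyvaultEncryption` is one way to apply them:

```powershell
# Point the storage account at an explicit key version (manual update model).
# Re-run this with the new -KeyVersion value after each rotation.
Set-AzStorageAccount -ResourceGroupName "<resource-group>" -Name "<storage-account>" `
    -KeyvaultEncryption `
    -KeyVaultUri "https://<vault-name>.vault.azure.net" `
    -KeyName "<key-name>" `
    -KeyVersion "<key-version>"
```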
storage Storage Use Azurite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azurite.md
There are several different ways to install and run Azurite on your local system
### [Visual Studio](#tab/visual-studio)
-Azurite is automatically available with [Visual Studio 2022](https://visualstudio.microsoft.com/vs/). If you are running an earlier version of Visual Studio, you'll need to install Azurite by using either Node Package Manager, DockerHub, or by cloning the Azurite github repository.
+Azurite is automatically available with [Visual Studio 2022](https://visualstudio.microsoft.com/vs/). If you are running an earlier version of Visual Studio, you'll need to install Azurite by using either Node Package Manager, DockerHub, or by cloning the Azurite GitHub repository.
### [Visual Studio Code](#tab/visual-studio-code)
storage Files Smb Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-smb-protocol.md
description: Learn about file shares hosted in Azure Files using the Server Mess
Previously updated : 09/10/2021 Last updated : 05/09/2022
To view the status of SMB Multichannel, navigate to the storage account containi
To enable or disable SMB Multichannel, select the current status (**Enabled** or **Disabled** depending on the status). The resulting dialog provides a toggle to enable or disable SMB Multichannel. Select the desired state and select **Save**.

# [PowerShell](#tab/azure-powershell)

To get the status of SMB Multichannel, use the `Get-AzStorageFileServiceProperty` cmdlet. Remember to replace `<resource-group>` and `<storage-account>` with the appropriate values for your environment before running these PowerShell commands.
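
The following is a minimal sketch of that check and the corresponding update; it assumes the Az.Storage module is installed and that the multichannel state is exposed under `ProtocolSettings.Smb.Multichannel.Enabled` (verify the property path against your module version):

```powershell
# Read the file service properties and inspect the SMB Multichannel state.
$fileService = Get-AzStorageFileServiceProperty -ResourceGroupName "<resource-group>" -StorageAccountName "<storage-account>"
$fileService.ProtocolSettings.Smb.Multichannel.Enabled

# Enable SMB Multichannel on the storage account.
Update-AzStorageFileServiceProperty -ResourceGroupName "<resource-group>" -StorageAccountName "<storage-account>" -EnableSmbMultichannel $true
```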
Azure Files exposes settings that let you toggle the SMB protocol to be more com
Azure Files exposes the following settings: -- **SMB versions**: Which versions of SMB are allowed. Supported protocol versions are SMB 3.1.1, SMB 3.0, and SMB 2.1. By default, all SMB versions are allowed, although SMB 2.1 is disallowed if "require secure transit" is enabled, since SMB 2.1 does not support encryption in transit.
+- **SMB versions**: Which versions of SMB are allowed. Supported protocol versions are SMB 3.1.1, SMB 3.0, and SMB 2.1. By default, all SMB versions are allowed, although SMB 2.1 is disallowed if "require secure transfer" is enabled, because SMB 2.1 does not support encryption in transit.
- **Authentication methods**: Which SMB authentication methods are allowed. Supported authentication methods are NTLMv2 and Kerberos. By default, all authentication methods are allowed. Removing NTLMv2 disallows using the storage account key to mount the Azure file share.
- **Kerberos ticket encryption**: Which encryption algorithms are allowed. Supported encryption algorithms are AES-256 (recommended) and RC4-HMAC.
- **SMB channel encryption**: Which SMB channel encryption algorithms are allowed. Supported encryption algorithms are AES-256-GCM, AES-128-GCM, and AES-128-CCM.
+The SMB security settings can be viewed and changed using the Azure portal, PowerShell, or CLI. Please select the desired tab to see the steps on how to get and set the SMB security settings.
+ # [Portal](#tab/azure-portal)
-The SMB security settings can be viewed and changed using PowerShell or CLI. Please select the desired tab to see the steps on how to get and set the SMB security settings.
+To view or change the SMB security settings using the Azure portal, follow these steps:
+
+1. Search for **Storage accounts** and select the storage account for which you want to view the security settings.
+
+1. Select **Data storage** > **File shares**.
+
+1. Under **File share settings**, select the value associated with **Security**. If you haven't modified the security settings, this value defaults to **Maximum compatibility**.
+
+ :::image type="content" source="media/files-smb-protocol/file-share-settings.png" alt-text="A screenshot showing where to change SMB security settings.":::
+
+1. Under **Profile**, select **Maximum compatibility**, **Maximum security**, or **Custom**. Selecting **Custom** allows you to create a custom profile for SMB protocol versions, SMB channel encryption, authentication mechanisms, and Kerberos ticket encryption.
+
+ :::image type="content" source="media/files-smb-protocol/file-share-security-settings.png" alt-text="A screenshot showing the dialog to change the SMB security settings for SMB protocol versions, SMB channel encryption, authentication mechanisms, and Kerberos ticket encryption.":::
+
+After you've entered the desired security settings, select **Save**.
# [PowerShell](#tab/azure-powershell)

To get the SMB protocol settings, use the `Get-AzStorageFileServiceProperty` cmdlet. Remember to replace `<resource-group>` and `<storage-account>` with the appropriate values for your environment before running these PowerShell commands.
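
As a sketch under the assumption that the current Az.Storage module exposes the `-Smb*` parameters shown below for these settings (verify against your module version), you could read the current settings and then restrict them to the most secure options:

```powershell
# Read the current SMB security settings.
$fileService = Get-AzStorageFileServiceProperty -ResourceGroupName "<resource-group>" -StorageAccountName "<storage-account>"
$fileService.ProtocolSettings.Smb

# Allow only the most secure protocol version, authentication method, and encryption algorithms.
Update-AzStorageFileServiceProperty -ResourceGroupName "<resource-group>" -StorageAccountName "<storage-account>" `
    -SmbProtocolVersion SMB3.1.1 `
    -SmbAuthenticationMethod Kerberos `
    -SmbKerberosTicketEncryption AES-256 `
    -SmbChannelEncryption AES-256-GCM
```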
storage Storage Files Migration Linux Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-linux-hybrid.md
Title: Linux migration to Azure File Sync description: Learn how to migrate files from a Linux server location to a hybrid cloud deployment with Azure File Sync and Azure file shares.-+ Last updated 03/19/2020-+
storage Storage Files Migration Nas Cloud Databox https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-nas-cloud-databox.md
Title: On-premises NAS migration to Azure file shares description: Learn how to migrate files from an on-premises Network Attached Storage (NAS) location to Azure file shares with Azure DataBox.-+ Last updated 04/02/2021-+
storage Storage Files Migration Nas Hybrid Databox https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-nas-hybrid-databox.md
Title: On-premises NAS migration to Azure File Sync via Data Box description: Learn how to migrate files from an on-premises Network Attached Storage (NAS) location to a hybrid cloud deployment by using Azure File Sync via Azure Data Box.-+ Last updated 03/5/2021-+
storage Storage Files Migration Nas Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-nas-hybrid.md
Title: On-premises NAS migration to Azure File Sync description: Learn how to migrate files from an on-premises Network Attached Storage (NAS) location to a hybrid cloud deployment with Azure File Sync and Azure file shares.-+ Last updated 03/19/2020-+
storage Storage Files Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-overview.md
Title: Migrate to Azure file shares description: Learn about migrations to Azure file shares and find your migration guide.-+ Last updated 3/18/2020-+
storage Storage Files Migration Robocopy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-robocopy.md
Title: Migrate to Azure file shares using RoboCopy description: Learn how to migrate files from several locations Azure file shares with RoboCopy.-+ Last updated 04/12/2021-+
storage Storage Files Migration Server Hybrid Databox https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-server-hybrid-databox.md
Title: Migrate data into Azure File Sync with Azure Data Box description: Migrate bulk data offline that's compatible with Azure File Sync. Avoid file conflicts, and catch up your file share with the latest changes on the server for a zero downtime cloud migration.-+ Last updated 06/01/2021-+
storage Storage Files Migration Storsimple 1200 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-storsimple-1200.md
Title: StorSimple 1200 migration to Azure File Sync description: Learn how to migrate a StorSimple 1200 series virtual appliance to Azure File Sync.-+ Last updated 03/09/2020-+
storage Storage Files Migration Storsimple 8000 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-storsimple-8000.md
Title: StorSimple 8000 series migration to Azure File Sync description: Learn how to migrate a StorSimple 8100 or 8600 appliance to Azure File Sync.-+ Last updated 10/22/2021-+
storage Storage How To Create File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-create-file-share.md
To create an Azure file share, you need to answer three questions about how you
Premium file shares are available with local redundancy and zone redundancy in a subset of regions. To find out if premium file shares are currently available in your region, see the [products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=storage) page for Azure. For information about regions that support ZRS, see [Azure Storage redundancy](../common/storage-redundancy.md?toc=%2fazure%2fstorage%2ffiles%2ftoc.json).
- **What size file share do you need?**
- In local and zone redundant storage accounts, Azure file shares can span up to 100 TiB, however in geo- and geo-zone redundant storage accounts, Azure file shares can span only up to 5 TiB.
+ In local and zone redundant storage accounts, Azure file shares can span up to 100 TiB. However, in geo- and geo-zone redundant storage accounts, Azure file shares can span only up to 5 TiB.
For more information on these three choices, see [Planning for an Azure Files deployment](storage-files-planning.md).
To create a FileStorage storage account, ensure the **Performance** radio button
:::image type="content" source="media/storage-how-to-create-file-share/files-create-smb-share-performance-premium.png" alt-text="A screenshot of the performance radio button with premium selected and account kind with FileStorage selected."::: The other basics fields are independent from the choice of storage account:-- **Storage account name**: The name of the storage account resource to be created. This name must be globally unique, but otherwise can any name you desire. The storage account name will be used as the server name when you mount an Azure file share via SMB.
+- **Storage account name**: The name of the storage account resource to be created. This name must be globally unique. The storage account name will be used as the server name when you mount an Azure file share via SMB. Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only.
- **Location**: The region for the storage account to be deployed into. This can be the region associated with the resource group, or any other available region.
- **Replication**: Although this is labeled replication, this field actually means **redundancy**; this is the desired redundancy level: local redundancy (LRS), zone redundancy (ZRS), geo-redundancy (GRS), and geo-zone-redundancy (GZRS). This drop-down list also contains read-access geo-redundancy (RA-GRS) and read-access geo-zone redundancy (RA-GZRS), which do not apply to Azure file shares; any file share created in a storage account with these selected will actually be either geo-redundant or geo-zone-redundant, respectively.
az storage account create \
### Enable large files shares on an existing account
-Before you create an Azure file share on an existing account, you may want to enable it for large file shares if you haven't already. Standard storage accounts using either LRS or ZRS can be upgraded to support large file shares. If you have a GRS, GZRS, RA-GRS, or RA-GZRS account, you will need to convert it to an LRS account before proceeding.
+Before you create an Azure file share on an existing storage account, you may want to enable it for large file shares (up to 100 TiB) if you haven't already. Standard storage accounts using either LRS or ZRS can be upgraded to support large file shares without causing downtime for existing file shares on the storage account. If you have a GRS, GZRS, RA-GRS, or RA-GZRS account, you will need to convert it to an LRS account before proceeding.
# [Portal](#tab/azure-portal) 1. Open the [Azure portal](https://portal.azure.com), and navigate to the storage account where you want to enable large file shares.
Before you create an Azure file share on an existing account, you may want to en
:::image type="content" source="media/storage-files-how-to-create-large-file-share/files-enable-large-file-share-existing-account.png" alt-text="Screenshot of the storage account, file shares blade with 100 TiB shares highlighted."::: # [PowerShell](#tab/azure-powershell)
-To enable large file shares on your existing account, use the following command. Replace `<yourStorageAccountName>` and `<yourResourceGroup>` with your information.
+To enable large file shares on your existing storage account, use the following command. Replace `<yourStorageAccountName>` and `<yourResourceGroup>` with your information.
```powershell
Set-AzStorageAccount `
    -ResourceGroupName <yourResourceGroup> `
    -Name <yourStorageAccountName> `
    -EnableLargeFileShare
```
# [Azure CLI](#tab/azure-cli)
-To enable large file shares on your existing account, use the following command. Replace `<yourStorageAccountName>` and `<yourResourceGroup>` with your information.
+To enable large file shares on your existing storage account, use the following command. Replace `<yourStorageAccountName>` and `<yourResourceGroup>` with your information.
```azurecli-interactive
az storage account update --name <yourStorageAccountName> -g <yourResourceGroup> --enable-large-file-share
```
## Create a file share
-Once you've created your storage account, all that is left is to create your file share. This process is mostly the same regardless of whether you're using a premium file share or a standard file share. You should consider the following differences.
+Once you've created your storage account, you can create your file share. This process is mostly the same regardless of whether you're using a premium file share or a standard file share. You should consider the following differences:
Standard file shares may be deployed into one of the standard tiers: transaction optimized (default), hot, or cool. This is a per file share tier that is not affected by the **blob access tier** of the storage account (this property only relates to Azure Blob storage - it does not relate to Azure Files at all). You can change the tier of the share at any time after it has been deployed. Premium file shares cannot be directly converted to any standard tier.
Standard file shares may be deployed into one of the standard tiers: transaction
The **quota** property means something slightly different between premium and standard file shares: -- For standard file shares, it's an upper boundary of the Azure file share, beyond which end-users cannot go. If a quota is not specified, standard file share can span up to 100 TiB or 5 TiB if the large file shares property is not set for a storage account. If you did not create your storage account with large file shares enabled, see [Enable large files shares on an existing account](#enable-large-files-shares-on-an-existing-account) for how to enable 100 TiB file shares.
+- For standard file shares, it's an upper boundary of the Azure file share, beyond which end-users cannot go. If a quota is not specified, standard file shares can span up to 100 TiB (or 5 TiB if the large file shares property is not set for a storage account). If you did not create your storage account with large file shares enabled, see [Enable large files shares on an existing account](#enable-large-files-shares-on-an-existing-account) for how to enable 100 TiB file shares.
- For premium file shares, quota means **provisioned size**. The provisioned size is the amount that you will be billed for, regardless of actual usage. The IOPS and throughput available on a premium file share is based on the provisioned size. For more information on how to plan for a premium file share, see [provisioning premium file shares](understanding-billing.md#provisioned-model).
synapse-analytics Security White Paper Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/security-white-paper-introduction.md
Last updated 01/14/2022
- [Dedicated SQL pool](../sql-data-warehouse/sql-data-warehouse-overview-what-is.md?context=/azure/synapse-analytics/context/context) (formerly SQL DW) for enterprise data warehousing. - Deep integration with [Power BI](https://powerbi.microsoft.com/), [Azure Cosmos DB](../../cosmos-db/synapse-link.md?context=/azure/synapse-analytics/context/context), and [Azure Machine Learning](../machine-learning/what-is-machine-learning.md).
-Azure Synapse data security and privacy are non-negotiable. The purpose of this white paper, then, is to provide a comprehensive overview of Azure Synapse security features, which are enterprise-grade and industry-leading. The white paper comprises a series of articles that cover the following five layers of security:
+Azure Synapse data security and privacy are non-negotiable. The purpose of this white paper is to provide a comprehensive overview of Azure Synapse security features, which are enterprise-grade and industry-leading. The white paper comprises a series of articles that cover the following five layers of security:
- Data protection - Access control
Azure Synapse data security and privacy are non-negotiable. The purpose of this
This white paper targets all enterprise security stakeholders. They include security administrators, network administrators, Azure administrators, workspace administrators, and database administrators.
-**Writers:** Vengatesh Parasuraman, Fretz Nuson, Ron Dunn, Khendr'a Reid, John Hoang, Nithesh Krishnappa, Mykola Kovalenko, Brad Schacht, Pedro Matinez, Mark Pryce-Maher, and Arshad Ali.
+**Writers:** Vengatesh Parasuraman, Fretz Nuson, Ron Dunn, Khendr'a Reid, John Hoang, Nithesh Krishnappa, Mykola Kovalenko, Brad Schacht, Pedro Martinez, Mark Pryce-Maher, and Arshad Ali.
-**Technical Reviewers:** Nandita Valsan, Rony Thomas, Daniel Crawford, and Tammy Richter Jones.
+**Technical Reviewers:** Nandita Valsan, Rony Thomas, Abhishek Narain, Daniel Crawford, and Tammy Richter Jones.
**Applies to:** Azure Synapse Analytics, dedicated SQL pool (formerly SQL DW), serverless SQL pool, and Apache Spark pool.
Some common security questions include:
The purpose of this white paper is to provide answers to these common security questions, and many others.
+## Component architecture
+
+Azure Synapse is a Platform-as-a-service (PaaS) analytics service that brings together multiple independent components such as dedicated SQL pools, serverless SQL pools, Apache Spark pools, and data integration pipelines. These components are designed to work together to provide a seamless analytical platform experience.
+
+[Dedicated SQL pools](../sql/overview-architecture.md) are provisioned clusters that provide enterprise data warehousing capabilities for SQL workloads. Data is ingested into managed storage powered by Azure Storage, which is also a PaaS service. Compute is isolated from storage enabling customers to scale compute independently of their data. Dedicated SQL pools also provide the ability to query data files directly over customer-managed Azure Storage accounts by using external tables.
+
+[Serverless SQL pools](../sql/on-demand-workspace-overview.md) are on-demand clusters that provide a SQL interface to query and analyze data directly over customer-managed Azure Storage accounts. Since they're serverless, there's no managed storage, and the compute nodes scale automatically in response to the query workload.
+
+[Apache Spark](../spark/apache-spark-overview.md) in Azure Synapse is one of Microsoft's implementations of open-source Apache Spark in the cloud. Spark instances are provisioned on-demand based on the metadata configurations defined in the Spark pools. Each user gets their own dedicated Spark instance for running their jobs. The data files processed by the Spark instances are managed by the customer in their own Azure Storage accounts.
+
+ [Pipelines](../../data-factory/concepts-pipelines-activities.md) are a logical grouping of activities that perform data movement and data transformation at scale. [Data flow](../../data-factory/concepts-data-flow-overview.md) is a transformation activity in a pipeline that's developed by using a low-code user interface. It can execute data transformations at scale. Behind the scenes, data flows use Apache Spark clusters of Azure Synapse to execute automatically generated code. Pipelines and data flows are compute-only services, and they don't have any managed storage associated with them.
+
+Pipelines use the Integration Runtime (IR) as the scalable compute infrastructure for performing data movement and dispatch activities. Data movement activities run on the IR whereas the dispatch activities run on a variety of other compute engines, including Azure SQL Database, Azure HDInsight, Azure Databricks, Apache Spark clusters of Azure Synapse, and others. Azure Synapse supports two types of IR: Azure Integration Runtime and Self-hosted Integration Runtime. The [Azure IR](/azure/data-factory/concepts-integration-runtime.md#azure-integration-runtime) provides a fully managed, scalable, and on-demand compute infrastructure. The [Self-hosted IR](/azure/data-factory/concepts-integration-runtime.md#self-hosted-integration-runtime) is installed and configured by the customer in their own network, either in on-premises machines or in Azure cloud virtual machines.
+
+Customers can choose to associate their Synapse workspace with a [managed workspace virtual network](../security/synapse-workspace-managed-vnet.md). When associated with a managed workspace virtual network, Azure IRs and Apache Spark clusters that are used by pipelines, data flows, and the Apache Spark pools are deployed inside the managed workspace virtual network. This setup ensures network isolation between the workspaces for pipelines and Apache Spark workloads.
+
+The following diagram depicts the various components of Azure Synapse.
++
+## Component isolation
+
+Each individual component of Azure Synapse depicted in the diagram provides its own security features. Security features provide data protection, access control, authentication, network security, and threat protection for securing the compute and the associated data that's processed. Additionally, Azure Storage, being a PaaS service, provides additional security of its own, that's set up and managed by the customer in their own storage accounts. This level of component isolation limits and minimizes the exposure if there were a security vulnerability in any one of its components.
+ ## Security layers Azure Synapse implements a multi-layered security architecture for end-to-end protection of your data. There are five layers:
synapse-analytics Develop Openrowset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-openrowset.md
The `OPENROWSET` function can optionally contain a `DATA_SOURCE` parameter to sp
```sql
SELECT * FROM OPENROWSET(BULK 'http://<storage account>.dfs.core.windows.net/container/folder/*.parquet',
- FORMAT = 'PARQUET') AS file
+ FORMAT = 'PARQUET') AS [file]
``` This is a quick and easy way to read the content of the files without pre-configuration. This option enables you to use the basic authentication option to access the storage (Azure AD passthrough for Azure AD logins and SAS token for SQL logins).
This is a quick and easy way to read the content of the files without pre-config
SELECT * FROM OPENROWSET(BULK '/folder/*.parquet', DATA_SOURCE='storage', --> Root URL is in LOCATION of DATA SOURCE
- FORMAT = 'PARQUET') AS file
+ FORMAT = 'PARQUET') AS [file]
```
virtual-machines Dasv5 Dadsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dasv5-dadsv5-series.md
Title: 'Dasv5 and Dadsv5-series - Azure Virtual Machines' description: Specifications for the Dasv5 and Dadsv5-series VMs. --++
virtual-machines Easv5 Eadsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/easv5-eadsv5-series.md
Title: 'Easv5 and Eadsv5-series - Azure Virtual Machines' description: Specifications for the Easv5 and Eadsv5-series VMs.--++
virtual-machines Image Builder Api Update Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-builder-api-update-release-notes.md
+
+ Title: What's new in Azure Image Builder
+description: Learn what's new with Azure Image Builder, such as the latest release notes, known issues, bug fixes, deprecated functionality, and upcoming changes.
++++ Last updated : 04/04/2022+++++++
+# What's new in Azure Image Builder
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
+
+This document contains all major API changes and feature updates for the Azure Image Builder service.
+
+## API Releases
++++
+### 2021-10-01
+
+**Breaking Change**:
+
+Our 2021-10-01 API introduces a change to the error schema that will be part of every future API release. Any Azure Image Builder automations you may have will need to take the new error output into account when switching to 2021-10-01 or newer API versions (new schema shown below). We recommend that once customers switch to the new API version (2021-10-01 and beyond), they don't revert to older versions as they'll have to change their automation again to expect the older error schema. We don't anticipate changing the error schema again in future releases.
+
+For API versions 2020-02-14 and older, the error output will look like the following messages:
+
+```
+{
+"error": {
+ "code": "ValidationFailed",
+ "message": "Validation failed: 'ImageTemplate.properties.source': Field 'imageId' has a bad value: '/subscriptions/subscriptionID/resourceGroups/resourceGroupName/providers/Microsoft.Compute//images//imageName'. Please review http://aka.ms/azvmimagebuildertmplref for details on fields requirements in the Image Builder Template."
+ }
+}
+```
++
+For API versions 2021-10-01 and newer, the error output will look like the following messages:
+
+```
+{
+ "error": {
+ "code": "ValidationFailed",
+ "message": "Validation failed: 'ImageTemplate.properties.source': Field 'imageId' has a bad value: '/subscriptions/subscriptionID/resourceGroups/resourceGroupName/providers/Microsoft.Compute//images//imageName'. Please review http://aka.ms/azvmimagebuildertmplref for details on fields requirements in the Image Builder Template."
+ }
+}
+```
+
+**Improvements**:
+
+- Added support for [Build VM MSIs](linux/image-builder-json.md#user-assigned-identity-for-the-image-builder-build-vm).
+- Added support for Proxy VM size customization.
+
+### 2020-02-14
+++
+**Improvements:**
+
+- Added support for creating images from the following sources:
+ - Managed Image
+ - Azure Compute Gallery
+ - Platform Image Repository (including Platform Image Purchase Plan)
+- Added support for the following customizations:
+ - Shell (Linux) - Script or Inline
+ - PowerShell (Windows) - Script or Inline, run elevated, run as system
+ - File (Linux and Windows)
+ - Windows Restart (Windows)
+ - Windows Update (Windows) (with search criteria, filters, and update limit)
+- Added support for the following distribution types:
+ - VHD
+ - Managed Image
+ - Azure Compute Gallery
+- **Other Features**
+ - Added support for customers to use their own VNet.
+ - Added support for customers to customize the build VM (VM size, OS disk size).
+ - Added support for user assigned MSI (for customize/distribute steps).
+ - Added support for [Gen2 images](image-builder-overview.md#hyper-v-generation).
+
+### Preview APIs
+
+ The following APIs are deprecated, but still supported:
+- 2019-05-01-preview
++
+## Next steps
+Learn more about [Image Builder](image-builder-overview.md).
virtual-machines N Series Driver Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/n-series-driver-setup.md
Then run installation commands specific for your distribution.
wget -O /tmp/${CUDA_REPO_PKG} https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/${CUDA_REPO_PKG} sudo dpkg -i /tmp/${CUDA_REPO_PKG}
- sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub
+ sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/3bf863cc.pub
rm -f /tmp/${CUDA_REPO_PKG} sudo apt-get update
virtual-machines Sizes General https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes-general.md
Title: Azure VM sizes - General purpose | Microsoft Docs description: Lists the different general purpose sizes available for virtual machines in Azure. Lists information about the number of vCPUs, data disks, and NICs as well as storage throughput and network bandwidth for sizes in this series.-+ Last updated 10/20/2021-+ # General purpose virtual machine sizes
virtual-machines Trusted Launch Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch-portal.md
$vm = Set-AzVMOSDisk -VM $vm `
-StorageAccountType "StandardSSD_LRS" `
-CreateOption "FromImage"
-$vm = Set-AzVmSecurityProfile -VM $vm `
+$vm = Set-AzVmSecurityType -VM $vm `
-SecurityType "TrustedLaunch" $vm = Set-AzVmUefi -VM $vm `
virtual-machines Change Availability Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/change-availability-set.md
This article was last tested on 2/12/2019 using the [Azure Cloud Shell](https://
## Change the availability set The following script provides an example of gathering the required information, deleting the original VM and then recreating it in a new availability set.
-The below scenario also covers an optional portion where we create a snapshot of the VM's OS disk in order to create disk from the snapshot to have a backup because when the VM gets deleted, the OS disk will also be deleted along with it.
```powershell
# Set variables
$resourceGroup = "myResourceGroup"
$vmName = "myVM"
$newAvailSetName = "myAvailabilitySet"
- $snapshotName = "MySnapShot"
# Get the details of the VM to be moved to the Availability Set
 $originalVM = Get-AzVM `
The below scenario also covers an optional portion where we create a snapshot of
-PlatformUpdateDomainCount 2 ` -Sku Aligned }
-
-# Get Current VM OS Disk metadata
- $osDiskid = $originalVM.StorageProfile.OsDisk.ManagedDisk.Id
- $osDiskName = $originalVM.StorageProfile.OsDisk.Name
-
-# Create Disk Snapshot (optional)
- $snapshot = New-AzSnapshotConfig -SourceUri $osDiskid `
- -Location $originalVM.Location `
- -CreateOption copy
-
- $newsnap = New-AzSnapshot `
- -Snapshot $snapshot `
- -SnapshotName $snapshotName `
- -ResourceGroupName $resourceGroup
-
+ # Remove the original VM Remove-AzVM -ResourceGroupName $resourceGroup -Name $vmName
-# Create disk out of snapshot (optional)
- $osDisk = New-AzDisk -DiskName $osDiskName -Disk `
- (New-AzDiskConfig -Location $originalVM.Location -CreateOption Copy `
- -SourceResourceId $newsnap.Id) `
- -ResourceGroupName $resourceGroup
- # Create the basic configuration for the replacement VM. $newVM = New-AzVMConfig ` -VMName $originalVM.Name `
The below scenario also covers an optional portion where we create a snapshot of
-Location $originalVM.Location ` -VM $newVM ` -DisableBginfoExtension
-
-# Delete Snapshot (optional)
- Remove-AzSnapshot -ResourceGroupName $resourceGroup -SnapshotName $snapshotName -Force
``` ## Next steps
virtual-machines Tutorial Availability Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/tutorial-availability-sets.md
If you look at the availability set in the portal by going to **Resource Groups*
![Availability set in the portal](./media/tutorial-availability-sets/fd-ud.png) > [!NOTE]
-> Under certain circumstances, 2 VMs in the same AvailabilitySet could share the same FaultDomain. This can be confirmed by going into your availability set and checking the Fault Domain column. This can be causeed by the following sequence of events while deploying the VMs:
+> Under certain circumstances, 2 VMs in the same AvailabilitySet could share the same FaultDomain. This can be confirmed by going into your availability set and checking the Fault Domain column. This can be caused by the following sequence of events while deploying the VMs:
> 1. The 1st VM is Deployed > 1. The 1st VM is Stopped/Deallocated > 1. The 2nd VM is Deployed.
virtual-machines High Availability Guide Rhel Glusterfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-rhel-glusterfs.md
To achieve high availability, SAP NetWeaver requires shared storage. GlusterFS i
## Set up GlusterFS
-You can either use an Azure Template from github to deploy all required Azure resources, including the virtual machines, availability set and network interfaces or you can deploy the resources manually.
+You can either use an Azure Template from GitHub to deploy all required Azure resources, including the virtual machines, availability set and network interfaces or you can deploy the resources manually.
### Deploy Linux via Azure Template The Azure Marketplace contains an image for Red Hat Enterprise Linux that you can use to deploy new virtual machines.
-You can use one of the quickstart templates on github to deploy all required resources. The template deploys the virtual machines, availability set etc.
+You can use one of the quickstart templates on GitHub to deploy all required resources. The template deploys the virtual machines, availability set etc.
Follow these steps to deploy the template: 1. Open the [SAP file server template][template-file-server] in the Azure portal
virtual-network-manager How To Create Hub And Spoke Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-create-hub-and-spoke-powershell.md
Deploy-AzNetworkManagerCommit @deployment
## Confirm deployment
-1. Go to one of the virtual networks in the portal and select **Peerings** under *Settings*. You should see a new peering connection create between the hub and the spokes virtual network with *ANM* in the name.
+1. Go to one of the virtual networks in the portal and select **Peerings** under *Settings*. You should see a new peering connection created between the hub and the spoke virtual networks with *AVNM* in the name.
1. To test *direct connectivity* between spokes, deploy a virtual machine into each spoke's virtual network. Then start an ICMP request from one virtual machine to the other, as in the sketch below.
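
For example, from a PowerShell session on one of the test virtual machines (the address below is a placeholder for the other VM's private IP):

```powershell
# Send ICMP echo requests across the spoke-to-spoke peering.
Test-Connection -ComputerName 10.1.0.4 -Count 4
```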
virtual-network-manager How To Create Hub And Spoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-create-hub-and-spoke.md
Title: 'Create a hub and spoke topology with Azure Virtual Network Manager (Preview)' description: Learn how to create a hub and spoke network topology with Azure Virtual Network Manager.--++ Previously updated : 05/03/2022 Last updated : 11/02/2021
This section will help you create a network group containing the virtual network
1. Go to your Azure Virtual Network Manager instance. This how-to guide assumes you've created one using the [quickstart](create-virtual-network-manager-portal.md) guide.
-1. Select **Network groups** under *Settings*, and then select **+ Create** to create a new network group.
+1. Select **Network groups** under *Settings*, and then select **+ Add** to create a new network group.
- :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/add-network-group.png" alt-text="Screenshot of Create a network group button.":::
+ :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/add-network-group.png" alt-text="Screenshot of add a network group button.":::
-1. On the *Create a network group* page, enter a **Name** and a **Description** for the network group. Then select **Add** to create the network group.
+1. On the *Basics* tab, enter a **Name** and a **Description** for the network group.
- :::image type="content" source="./media/create-virtual-network-manager-portal/network-group-basics.png" alt-text="Screenshot of create a network group page.":::
+ :::image type="content" source="./media/how-to-create-hub-and-spoke/basics.png" alt-text="Screenshot of basics tab for add a network group.":::
-1. You'll see the new network group added to the *Network Groups* page.
- :::image type="content" source="./media/create-virtual-network-manager-portal/network-groups-list.png" alt-text="Screenshot of network group page with list of network groups.":::
+1. To add virtual networks manually, select the **Static group members** tab. For more information, see [static members](concept-network-groups.md#static-membership).
-1. From the list of network groups, select **myNetworkGroup** to manage the network group memberships.
+ :::image type="content" source="./media/how-to-create-hub-and-spoke/static-group.png" alt-text="Screenshot of static group members tab.":::
- :::image type="content" source="media/how-to-create-mesh-network/manage-group-membership.png" alt-text="Screenshot of manage group memberships page.":::
+1. To add virtual networks dynamically, select the **Conditional statements** tab. For more information, see [dynamic membership](concept-network-groups.md#dynamic-membership).
-1. To add a virtual network manually, select the **Add** button under *Static membership*, and select the virtual networks to add. Then select **Add** to save the static membership. For more information, see [static members](concept-network-groups.md#static-membership).
+ :::image type="content" source="./media/how-to-create-hub-and-spoke/conditional-statements.png" alt-text="Screenshot of conditional statements tab.":::
- :::image type="content" source="./media/how-to-create-hub-and-spoke/add-static-members.png" alt-text="Screenshot of add virtual networks to network group page.":::
-
-1. To add virtual networks dynamically, select the **Define** button under *Define dynamic membership*, and then enter the conditional statements for membership. Select **Save** to save the dynamic membership conditions. For more information, see [dynamic membership](concept-network-groups.md#dynamic-membership).
-
- :::image type="content" source="media/how-to-create-mesh-network/define-dynamic-members.png" alt-text="Screenshot of Define dynamic membership page.":::
+1. Once you're satisfied with the virtual networks selected for the network group, select **Review + create**. Then select **Create** once validation has passed.
## Create a hub and spoke connectivity configuration This section will guide you through how to create a hub-and-spoke configuration with the network group you created in the previous section.
-1. Select **Configuration** under *Settings*, then select **+ Create**.
+1. Select **Configuration** under *Settings*, then select **+ Add a configuration**.
- :::image type="content" source="./media/create-virtual-network-manager-portal/add-configuration.png" alt-text="Screenshot of the configurations list.":::
+ :::image type="content" source="./media/how-to-create-hub-and-spoke/configuration-list.png" alt-text="Screenshot of the configurations list.":::
-1. Select **Connectivity configuration** from the drop-down menu.
+1. Select **Connectivity** from the drop-down menu.
:::image type="content" source="./media/create-virtual-network-manager-portal/configuration-menu.png" alt-text="Screenshot of configuration drop-down menu.":::
-1. On the *Add a connectivity configuration* page, enter the following information:
+1. On the *Add a connectivity configuration* page, enter, or select the following information:
- :::image type="content" source="media/how-to-create-mesh-network/add-config-name.png" alt-text="Screenshot of add a connectivity configuration page.":::
+ :::image type="content" source="./media/how-to-create-hub-and-spoke/connectivity-configuration.png" alt-text="Screenshot of add a connectivity configuration page.":::
| Setting | Value | | - | -- | | Name | Enter a *name* for this configuration. | | Description | *Optional* Enter a description about what this configuration will do. |
+ | Topology | Select the **Hub and spoke** topology. |
+ | Hub | Select a virtual network that will act as the hub virtual network. |
+ | Existing peerings | Select this checkbox if you want to remove all previously created VNet peering between virtual networks in the network group defined in this configuration. |
-1. Select **Next: Topology >**. Select **Hub and Spoke** under the **Topology** setting. This selection will reveal more settings.
-
- :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/hub-configuration.png" alt-text="Screenshot of selecting a hub for the connectivity configuration.":::
-
-1. Select **Select a hub** under **Hub** setting. Then, select the virtual network to serve as your network hub and click **Select**.
+1. Then select **+ Add network groups**.
- :::image type="content" source="media/tutorial-create-secured-hub-and-spoke/select-hub.png" alt-text="Screenshot of Select a hub configuration.":::
-
-1. Under **Spoke network groups**, select **+ add**. Then, select your network group and click **Select**.
-
- :::image type="content" source="media/how-to-create-hub-and-spoke/add-network-group.png" alt-text="Screenshot of Add network groups page.":::
-
-1. You'll see the following three options appear next to the network group name under **Spoke network groups**:
+1. On the *Add network groups* page, select the network groups you want to add to this configuration. Then select **Add** to save.
+1. You'll see the following three options appear next to the network group name under *Spoke network groups*:
+
:::image type="content" source="./media/how-to-create-hub-and-spoke/spokes-settings.png" alt-text="Screenshot of spoke network groups settings." lightbox="./media/how-to-create-hub-and-spoke/spokes-settings-expanded.png":::
- | Setting | Value |
- | - | -- |
- | Direct connectivity | Select **Enable peering within network group** if you want to establish VNet peering between virtual networks in the network group of the same region. |
- | Gateway | Select **Hub as a gateway** if you have a virtual network gateway in the hub virtual network that you want this network group to use to pass traffic to on-premises. This option won't be available unless a virtual network gateway is deployed in the hub virtual network. |
- | Global Mesh | Select **Enable mesh connectivity across regions** if you want to establish VNet peering for all virtual networks in the network group across regions. This option requires you select **Enable peering within network group** first. |
+ * *Direct connectivity*: Select **Enable peering within network group** if you want to establish VNet peering between virtual networks in the network group of the same region.
+ * *Global Mesh*: Select **Enable mesh connectivity across regions** if you want to establish VNet peering for all virtual networks in the network group across regions.
+ * *Gateway*: Select **Use hub as a gateway** if you have a virtual network gateway in the hub virtual network that you want this network group to use to pass traffic to on-premises.
Select the settings you want to enable for each network group.
-1. Finally, Select **Next: Review + create >** and then **Create** to create the hub-and-spoke connectivity configuration.
+1. Finally, select **Add** to create the hub-and-spoke connectivity configuration.
## Deploy the hub and spoke configuration
-To have this configuration take effect in your environment, you'll need to deploy the configuration to the regions where your selected virtual networks are created.
-
-> [!NOTE]
-> Make sure the virtual network gateway has been successfully deployed before deploying the connectivity configuration. If you deploy a hub and spoke configuration with **Use the hub as a gateway** enabled and there's no gateway, the deployment will fail. For more information, see [use hub as a gateway](concept-connectivity-configuration.md#use-hub-as-a-gateway).
->
-
-1. Select **Deployments** under *Settings*, then select **Deploy configuration**.
-
- :::image type="content" source="./media/create-virtual-network-manager-portal/deployments.png" alt-text="Screenshot of deployments page in Network Manager.":::
+To have this configuration take effect in your environment, you'll need to deploy the configuration to the regions where your selected virtual networks are created.
+1. Select **Deployments** under *Settings*, then select **Deploy a configuration**.
1. On the *Deploy a configuration* select the following settings:
To have this configuration take effect in your environment, you'll need to deplo
| Setting | Value | | - | -- |
- | Configurations | Select elect **Include connectivity configurations in your goal state**. This will reveal more options. |
- | Connectivity Configurations | Select the name of the connectivity configuration you created in the previous section. |
- | Target regions | Select all the regions that include virtual networks you need configuration applied to. |
+ | Configuration type | Select **Connectivity**. |
+ | Configurations | Select the name of the configuration you created in the previous section. |
+ | Target regions | Select all the regions that apply to virtual networks you select for the configuration. |
-1. Select **Deploy**. You'll see the deployment shows up in the list for those regions. The deployment of the configuration can take several minutes to complete. You can select the **Refresh** button to check on the status of the deployment.
+1. Select **Deploy** and then select **OK** to commit the configuration to the selected regions.
- :::image type="content" source="./media/how-to-create-hub-and-spoke/deploy-status.png" alt-text="Screenshot of deployment status screen." lightbox="./media/how-to-create-hub-and-spoke/deploy-status-expanded.png":::
+1. The deployment of the configuration can take up to 15-20 minutes. Select the **Refresh** button to check on the status of the deployment.
## Confirm deployment
-1. Go to one of the virtual networks in the portal and select **Peerings** under *Settings*. You should see a new peering connection created between the hub and the spokes virtual network with *ANM* in the name.
+1. See [view applied configuration](how-to-view-applied-configurations.md).
1. To test *direct connectivity* between spokes, deploy a virtual machine into each spoke's virtual network. Then initiate an ICMP request from one virtual machine to the other.
-1. See [view applied configuration](how-to-view-applied-configurations.md).
- ## Next steps - Learn about [Security admin rules](concept-security-admins.md)
virtual-network-manager Tutorial Create Secured Hub And Spoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/tutorial-create-secured-hub-and-spoke.md
Deploy a virtual network gateway into the hub virtual network. This virtual netw
1. Under **Spoke network groups**, select **+ add**. Then, select **myNetworkGroupB** for the network group and click **Select**.
- :::image type="content" source="media/how-to-create-hub-and-spoke/add-network-group.png" alt-text="Screenshot of Add network groups page.":::
+ :::image type="content" source="media/tutorial-create-secured-hub-and-spoke/select-network-group.png" alt-text="Screenshot of Add network groups page.":::
1. After you've added the network group, select the following options. Then select add to create the connectivity configuration.
Deploy a virtual network gateway into the hub virtual network. This virtual netw
| Hub as gateway | Select the checkbox for **Use hub as a gateway**. | | Global Mesh | Leave this option **unchecked**. Since both spokes are in the same region this setting is not required. |
-1. Select **Next: Review + create >** and then **Create** to create the connectivity configuration.
+1. Select **Next: Review + create >** and then create the connectivity configuration.
## Deploy the connectivity configuration
Make sure the virtual network gateway has been successfully deployed before depl
:::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/deploy-configuration.png" alt-text="Screenshot of deploy a configuration page.":::
-1. Select **Deploy**. You should now see the deployment show up in the list for those regions. The deployment of the configuration can take several minutes to complete. You can select the **Refresh** button to check on the status of the deployment.
+1. Select **Deploy**. You should now see the deployment show up in the list for those regions. The deployment of the configuration can take several minutes to complete.
:::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/deployment-in-progress.png" alt-text="Screenshot of deployment in progress in deployment list."::: ## Create security configuration
-1. Select **Configuration** under *Settings* again, then select **+ Create**, and select **SecurityAdmin** from the menu to begin creating a SecurityAdmin configuration.
+1. Select **Configuration** under *Settings* again, then select **+ Create**, and select **SecurityAdmin** from the menu to begin creating a SecurityAdmin configuration.
1. Enter the name **mySecurityConfig** for the configuration, then select **Next: Rule collections**.
Make sure the virtual network gateway has been successfully deployed before depl
:::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/deploy-security.png" alt-text="Screenshot of deploying a security configuration.":::
-1. Select **Next** and then **Deploy**. You should now see the deployment show up in the list for the selected region. The deployment of the configuration can take about 15-20 minutes to complete.
+1. Select **Next** and then **Deploy**. You should now see the deployment show up in the list for the selected region. The deployment of the configuration can take about 15-20 minutes to complete.
## Verify deployment of configurations
virtual-network Virtual Network Test Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-test-latency.md
Run the following commands:
sudo yum install ncurses-devel -y sudo yum install -y automake sudo yum install -y autoconf
+ sudo yum install -y libtool
``` #### For Ubuntu
Run the following commands:
sudo apt-get install -y automake sudo apt-get install -y autoconf sudo apt-get install -y libtool
+ sudo apt-get update
``` #### For all distros
virtual-wan About Nva Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/about-nva-hub.md
Title: 'About Network Virtual Appliances - Virtual WAN hub'
description: Learn about Network Virtual Appliances in a Virtual WAN hub. -+ Last updated 06/02/2021-+ # Customer intent: As someone with a networking background, I want to learn about Network Virtual Appliances in a Virtual WAN hub.
NVA Partners may create different resources depending on their appliance deploym
:::image type="content" source="./media/about-nva-hub/managed-app.png" alt-text="Managed Application resource groups"::: +
+### Managed resource group permissions
+
+By default, all managed resource groups have a deny-all Azure Active Directory assignment. Deny-all assignments prevent customers from calling write operations on any resources in the managed resource group, including Network Virtual Appliance resources.
+
+However, partners may create exceptions for specific actions that customers are allowed to perform on resources deployed in managed resource groups.
+
+Permissions on resources in existing managed resource groups are not dynamically updated as new permitted actions are added by partners and require a manual refresh.
+
+To refresh permissions on the managed resource groups, customers can leverage the [Refresh Permissions REST API](/rest/api/managedapplications/applications/refresh-permissions).
+
+> [!NOTE]
+> To properly apply new permissions, the refresh permissions API must be called with an additional query parameter **targetVersion**. The value for targetVersion is provider-specific. Please reference your provider's documentation for the latest version number.
+
+```http-interactive
+POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Solutions/applications/{applicationName}/refreshPermissions?api-version=2019-07-01&targetVersion={targetVersion}
+```
+ ### <a name="units"></a>NVA Infrastructure Units When you create an NVA in a Virtual WAN hub, you must choose the number of NVA Infrastructure Units you want to deploy it with. An **NVA Infrastructure Unit** is a unit of aggregate bandwidth capacity for an NVA in a Virtual WAN hub. An **NVA Infrastructure Unit** is similar to a VPN [Scale Unit](pricing-concepts.md#scale-unit) in terms of the way you think about capacity and sizing.
vpn-gateway About Vpn Profile Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/about-vpn-profile-download.md
Title: 'About Point-to-Site VPN client profiles for Azure AD authentication'
+ Title: 'P2S VPN client profile config files - Azure AD authentication'
-description: Learn about P2S VPN client profile files for Azure AD authentication.
+description: Learn how to generate P2S VPN client profile configuration files for Azure AD authentication.
Last updated 05/04/2022
-# Generate P2S Azure VPN client profile files - Azure AD authentication
+# Generate P2S Azure VPN Client profile config files - Azure AD authentication
-After you install the Azure VPN Client, you configure the VPN client profile. Client profile files contain information that's necessary to configure a VPN connection. This article helps you obtain and understand the information needed to configure an Azure VPN Client profile.
+After you install the Azure VPN Client, you configure the VPN client profile. Client profile config files contain information that's necessary to configure a VPN connection. This article helps you obtain and understand the information needed to configure an Azure VPN Client profile for Azure VPN Gateway point-to-site configurations that use Azure AD authentication.
## <a name="generate"></a>Generate profile files
You can generate VPN client profile configuration files using PowerShell, or by
1. In the Azure portal, navigate to the virtual network gateway for the virtual network that you want to connect to. 1. On the virtual network gateway page, select **Point-to-site configuration**.
-1. At the top of the Point-to-site configuration page, select **Download VPN client**. It takes a few minutes for the client configuration package to generate.
+1. At the top of the point-to-site configuration page, select **Download VPN client**. It takes a few minutes for the client configuration package to generate.
1. Your browser indicates that a client configuration zip file is available. It has the same name as your gateway. Unzip the file to view the folders.

### PowerShell
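As a minimal sketch of the PowerShell route, the following commands generate the profile configuration package and download the resulting zip file. The resource group and gateway names are hypothetical placeholders, and the cmdlet returns a SAS URL that points to the same zip file described above.

```azurepowershell-interactive
# Generate the VPN client profile configuration package for the gateway
# (resource group and gateway names are hypothetical placeholders).
$vpnProfile = New-AzVpnClientConfiguration -ResourceGroupName "TestRG1" -Name "VNet1GW" -AuthenticationMethod "EapTls"

# Download the zip file from the returned SAS URL.
Invoke-WebRequest -Uri $vpnProfile.VpnProfileSASUrl -OutFile ".\vpnclientconfiguration.zip"
```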
vpn-gateway About Zone Redundant Vnet Gateways https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/about-zone-redundant-vnet-gateways.md
For information about gateway SKUs, see [VPN gateway SKUs](vpn-gateway-about-vpn
Zone-redundant gateways and zonal gateways both rely on the Azure public IP resource *Standard* SKU. The configuration of the Azure public IP resource determines whether the gateway that you deploy is zone-redundant or zonal. If you create a public IP resource with a *Basic* SKU, the gateway won't have any zone redundancy, and the gateway resources will be regional.
-> [!IMPORTANT]
-> *Standard* public IP resources with Tier = Global cannot be attached to a Gateway. Only *Standard* public IP resources with Tier = Regional can be used.
- ### <a name="pipzrg"></a>Zone-redundant gateways

When you create a public IP address using the **Standard** public IP SKU without specifying a zone, the behavior differs depending on whether the gateway is a VPN gateway or an ExpressRoute gateway.
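To illustrate the dependency on the *Standard* public IP SKU, here's a minimal Azure PowerShell sketch that creates such a public IP resource; the resource group, location, and name are hypothetical placeholders. Passing `-Zone 1,2,3` explicitly requests a zone-spanning IP, while omitting the parameter leaves the zone behavior to the gateway-type defaults described above.

```azurepowershell-interactive
# Create a Standard SKU, static public IP address for the gateway.
# Resource group, location, and name are hypothetical placeholders.
$pip = New-AzPublicIpAddress -Name "VNet1GWpip" -ResourceGroupName "TestRG1" `
    -Location "EastUS" -AllocationMethod Static -Sku Standard -Zone 1,2,3
```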
vpn-gateway Openvpn Azure Ad Client Mac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-client-mac.md
Title: 'Configure VPN clients for P2S OpenVPN protocol connections: Azure AD authentication: macOS'
+ Title: 'Configure Azure VPN Client - Azure AD authentication - macOS'
description: 'Learn how to configure a macOS VPN client to connect to a virtual network using VPN Gateway Point-to-Site and Azure Active Directory authentication.'
Last updated 09/30/2021
-# Configure a VPN client for P2S OpenVPN protocol connections - Azure AD authentication - macOS
+# Configure an Azure VPN Client - Azure AD authentication - macOS
This article helps you configure a VPN client for a computer running macOS 10.15 and later to connect to a virtual network using Point-to-Site VPN and Azure Active Directory authentication. Before you can connect and authenticate using Azure AD, you must first configure your Azure AD tenant. For more information, see [Configure an Azure AD tenant](openvpn-azure-ad-tenant.md). For more information about Point-to-Site connections, see [About Point-to-Site connections](point-to-site-about.md).
vpn-gateway Openvpn Azure Ad Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-client.md
Title: 'Configure Azure VPN Client for P2S OpenVPN protocol connections: Azure AD authentication: Windows'
+ Title: 'Configure Azure VPN Client - Azure AD authentication - Windows'
description: Learn how to configure the Azure VPN Client to connect to a VNet using VPN Gateway point-to-site VPN, OpenVPN protocol connections, and Azure AD authentication from a Windows computer.
Last updated 05/05/2022
-# Configure Azure VPN Client for P2S OpenVPN protocol connections - Azure AD authentication - Windows
+# Configure an Azure VPN Client - Azure AD authentication - Windows
This article helps you configure the Azure VPN Client on a Windows computer to connect to a virtual network using a VPN Gateway point-to-site VPN and Azure Active Directory authentication. Before you can connect and authenticate using Azure AD, you must first configure your Azure AD tenant. For more information, see [Configure an Azure AD tenant](openvpn-azure-ad-tenant.md). For more information about point-to-site, see [About point-to-site VPN](point-to-site-about.md).
vpn-gateway Vpn Gateway Howto Openvpn Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-openvpn-clients.md
Last updated 05/05/2022
-# Configure OpenVPN clients for Azure VPN Gateway
+# Configure an OpenVPN client for Azure VPN Gateway P2S connections
-This article helps you configure **OpenVPN &reg; Protocol** clients for Azure VPN Gateway point-to-site configurations that use OpenVPN.
+This article helps you configure the **OpenVPN &reg; Protocol** client for Azure VPN Gateway point-to-site configurations. This article pertains specifically to OpenVPN clients, not the Azure VPN Client or native VPN clients.
-This article contains general instructions. For the following point-to-site authentication types, see the associated articles instead:
+For these authentication types, see the following articles instead:
* Azure AD authentication
  * [Windows clients](openvpn-azure-ad-client.md)