Updates from: 09/09/2021 03:05:58
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/azure-monitor.md
In this article, you learn how to transfer the logs to an Azure Log Analytics workspace.
> When you plan to transfer Azure AD B2C logs to a different monitoring solution or repository, consider the following: Azure AD B2C logs contain personal data. Such data should be processed in a manner that ensures appropriate security of the personal data, including protection against unauthorized or unlawful processing, using appropriate technical or organizational measures.

Watch this video to learn how to configure monitoring for Azure AD B2C using Azure Monitor.
-[!Video https://www.youtube.com/embed/tF2JS6TGc3g]
+
+>[!Video https://www.youtube.com/embed/tF2JS6TGc3g]
## Deployment overview
active-directory-b2c Azure Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/azure-sentinel.md
In this tutorial, you'll learn to:
Enable **Diagnostic settings** in Azure AD within your Azure AD B2C tenant to define where logs and metrics for a resource should be sent.
-Then after, [configure Azure AD B2C to send logs to Azure Monitor](https://docs.microsoft.com/azure/active-directory-b2c/azure-monitor).
+Then, [configure Azure AD B2C to send logs to Azure Monitor](./azure-monitor.md).
## Deploy an Azure Sentinel instance
In the following example, we explain the scenario where you receive a notification
1. From the Azure Sentinel navigation menu, select **Analytics**.
2. In the action bar at the top, select **+ Create** and select
- **Scheduled query rule**. It will open the **Analytics rule wizard**.
+ **Scheduled query rule**. This will open the **Analytics rule wizard**.
![image shows select create scheduled query rule](./media/azure-sentinel/create-scheduled-rule.png)
active-directory-b2c Enable Authentication Web Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/enable-authentication-web-api.md
Try to call the protected web API endpoint without an access token. Open a browser
Continue to configure your app to call the web API. For guidance, see the [Prerequisites](#prerequisites) section.
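As a quick, hedged illustration of this flow (the endpoint URL and token value below are placeholders, not from the article), a .NET client call with and without a bearer token might look like this:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class ApiCaller
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // Placeholder endpoint; replace with your protected web API's URL.
        var endpoint = "https://localhost:5001/hello";

        // Without a bearer token, the protected endpoint should return 401 Unauthorized.
        var anonymous = await client.GetAsync(endpoint);
        Console.WriteLine($"No token: {(int)anonymous.StatusCode}");

        // With a valid access token (acquired through MSAL), the call should succeed.
        var accessToken = "<access-token-from-msal>"; // placeholder
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);
        var authorized = await client.GetAsync(endpoint);
        Console.WriteLine($"With token: {(int)authorized.StatusCode}");
    }
}
```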
+Watch this video to learn about some best practices when you integrate Azure AD B2C with an API.
+
+>[!Video https://www.youtube.com/embed/wuUu71RcsIo]
+ ## Next steps

Get the complete example on GitHub:
active-directory-b2c Enable Authentication Web Application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/enable-authentication-web-application.md
The required information is described in the [Configure authentication in a samp
After you're successfully authenticated, you'll see your display name in the navigation bar. To view the claims that the Azure AD B2C token returns to your app, select **Claims**.

## Next steps
* Learn how to [customize and enhance the Azure AD B2C authentication experience for your web app](enable-authentication-web-application-options.md).
active-directory-b2c Language Customization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/language-customization.md
You might not need that level of control over what languages your customer sees.
> [!NOTE]
> If you're using custom user attributes, you need to provide your own translations. For more information, see [Customize your strings](#customize-your-strings).
+Watch this video to learn how to localize or customize language using Azure AD B2C.
+
+>[!Video https://www.youtube.com/embed/yqrX5_tA7Ms]
+ ::: zone pivot="b2c-custom-policy"

Localization requires three steps:
active-directory-b2c Microsoft Graph Operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/microsoft-graph-operations.md
Microsoft Graph allows you to manage resources in your Azure AD B2C directory. T
> [!NOTE]
> You can also programmatically create an Azure AD B2C directory itself, along with the corresponding Azure resource linked to an Azure subscription. This functionality isn't exposed through the Microsoft Graph API, but through the Azure REST API. For more information, see [B2C Tenants - Create](/rest/api/activedirectory/b2ctenants/create).
+Watch this video to learn about Azure AD B2C user migration using Microsoft Graph API.
+
+>[!Video https://www.youtube.com/embed/9BRXBtkBzL4]
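As a rough sketch of what such operations look like in code (assuming the Microsoft Graph .NET SDK v5 and client credentials; the tenant, client ID, and secret values are placeholders, not from the article), listing the users in a B2C directory might be:

```csharp
using System;
using Azure.Identity;
using Microsoft.Graph;

// Placeholders: substitute your B2C tenant and the app registration described in the prerequisites.
var credential = new ClientSecretCredential(
    "contoso.onmicrosoft.com",              // tenant
    "00000000-0000-0000-0000-000000000000", // client (application) ID
    "<client-secret>");

var graphClient = new GraphServiceClient(
    credential, new[] { "https://graph.microsoft.com/.default" });

// List users in the directory; requires the User.Read.All application permission.
var users = await graphClient.Users.GetAsync();
if (users?.Value != null)
{
    foreach (var user in users.Value)
    {
        Console.WriteLine($"{user.DisplayName} ({user.Id})");
    }
}
```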
+ ## Prerequisites

To use the MS Graph API and interact with resources in your Azure AD B2C tenant, you need an application registration that grants the permissions to do so. Follow the steps in the [Manage Azure AD B2C with Microsoft Graph](microsoft-graph-get-started.md) article to create an application registration that your management application can use.
active-directory-b2c Saml Service Provider https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/saml-service-provider.md
Organizations that use Azure AD B2C as their customer identity and access manage
3. If the user signs in by using a federated identity provider, a token response is sent to Azure AD B2C.
4. Azure AD B2C generates a SAML assertion and sends it to the application.
+Watch this video to learn how to integrate SAML applications with Azure AD B2C.
+
+>[!Video https://www.youtube.com/embed/r2TIVBCm7v4]
+ ## Prerequisites

For the scenario in this article, you need:
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/whats-new-docs.md
Title: "What's new in Azure Active Directory business-to-customer (B2C)" description: "New and updated documentation for the Azure Active Directory business-to-customer (B2C)." Previously updated : 08/03/2021 Last updated : 09/08/2021
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Azure Active Directory](../active-directory/fundamentals/whats-new.md).
+## August 2021
+
+### New articles
+
+- [Deploy custom policies with GitHub Actions](deploy-custom-policies-github-action.md)
+- [Configure authentication in a sample WPF desktop app by using Azure AD B2C](configure-authentication-sample-wpf-desktop-app.md)
+- [Enable authentication options in a WPF desktop app by using Azure AD B2C](enable-authentication-wpf-desktop-app-options.md)
+- [Add AD FS as a SAML identity provider using custom policies in Azure Active Directory B2C](identity-provider-adfs-saml.md)
+- [Configure authentication in a sample Python web application using Azure Active Directory B2C](configure-authentication-sample-python-web-app.md)
+- [Configure authentication options in a Python web application using Azure Active Directory B2C](enable-authentication-python-web-app-options.md)
+- [Tutorial: How to perform security analytics for Azure AD B2C data with Azure Sentinel](azure-sentinel.md)
+- [Enrich tokens with claims from external sources using API connectors](add-api-connector-token-enrichment.md)
+
+### Updated articles
+
+- [Customize the user interface with HTML templates in Azure Active Directory B2C](customize-ui-with-html.md)
+- [Configure authentication in a sample WPF desktop app by using Azure AD B2C](configure-authentication-sample-wpf-desktop-app.md)
+- [Enable authentication options in a WPF desktop app by using Azure AD B2C](enable-authentication-wpf-desktop-app-options.md)
+- [Configure authentication in a sample iOS Swift app by using Azure AD B2C](configure-authentication-sample-ios-app.md)
+- [Enable authentication options in an iOS Swift app by using Azure AD B2C](enable-authentication-ios-app-options.md)
+- [Enable authentication in your own iOS Swift app by using Azure AD B2C](enable-authentication-ios-app.md)
+- [Add a web API application to your Azure Active Directory B2C tenant](add-web-api-application.md)
+- [Configure authentication in a sample Android app by using Azure AD B2C](configure-authentication-sample-android-app.md)
+- [Configure authentication options in an Android app by using Azure AD B2C](enable-authentication-android-app-options.md)
+- [Enable authentication in your own Android app by using Azure AD B2C](enable-authentication-android-app.md)
+- [Configure authentication in a sample web app by using Azure AD B2C](configure-authentication-sample-web-app.md)
+- [Enable authentication options in a web app by using Azure AD B2C](enable-authentication-web-application-options.md)
+- [Enable authentication in your own web app by using Azure AD B2C](enable-authentication-web-application.md)
+- [Configure authentication options in a single-page application by using Azure AD B2C](enable-authentication-spa-app-options.md)
+- [Enable custom domains for Azure Active Directory B2C](custom-domain.md)
+- [Add AD FS as an OpenID Connect identity provider using custom policies in Azure Active Directory B2C](identity-provider-adfs.md)
+- [Configure SAML identity provider options with Azure Active Directory B2C](identity-provider-generic-saml-options.md)
+- [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](tutorial-create-user-flows.md)
+- [Tutorial: Configure Azure Active Directory B2C with BlokSec for passwordless authentication](partner-bloksec.md)
+- [Add an API connector to a sign-up user flow](add-api-connector.md)
+- [Use API connectors to customize and extend sign-up user flows](api-connectors-overview.md)
+- [Set up phone sign-up and sign-in for user flows](phone-authentication-user-flows.md)
## July 2021

### New articles
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/whats-new-docs.md
Title: "What's new in Azure Active Directory application provisioning" description: "New and updated documentation for the Azure Active Directory application provisioning." Previously updated : 08/03/2021 Last updated : 09/08/2021
Welcome to what's new in Azure Active Directory application provisioning documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the provisioning service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## August 2021
+
+### Updated articles
+
+- [Reference for writing expressions for attribute mappings in Azure Active Directory](functions-for-customizing-application-data.md)
+- [Known issues and resolutions with SCIM 2.0 protocol compliance of the Azure AD User Provisioning service](application-provisioning-config-problem-scim-compatibility.md)
+- [Tutorial - Customize user provisioning attribute-mappings for SaaS applications in Azure Active Directory](customize-application-attributes.md)
## July 2021

### Updated articles
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/whats-new-docs.md
Title: "What's new in Azure Active Directory application proxy" description: "New and updated documentation for the Azure Active Directory application proxy." Previously updated : 08/03/2021 Last updated : 09/08/2021
# Azure Active Directory application proxy: What's new

Welcome to what's new in Azure Active Directory application proxy documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## August 2021
+
+### Updated articles
+
+- [Configure custom domains with Azure AD Application Proxy](application-proxy-configure-custom-domain.md)
## July 2021

### Updated articles
active-directory Howto Password Ban Bad On Premises Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-password-ban-bad-on-premises-deploy.md
The following core requirements apply:
|`https://login.microsoftonline.com`|Authentication requests|
|`https://enterpriseregistration.windows.net`|Azure AD Password Protection functionality|
+> [!NOTE]
+> Some endpoints, such as the CRL endpoint, are not addressed in this article. For a list of all supported endpoints, see [Microsoft 365 URLs and IP address ranges](/microsoft-365/enterprise/urls-and-ip-address-ranges?view=o365-worldwide#microsoft-365-common-and-office-online).
+ ### Azure AD Password Protection DC agent

The following requirements apply to the Azure AD Password Protection DC agent:
active-directory Concept Condition Filters For Devices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-condition-filters-for-devices.md
Filters for devices (preview) condition in Conditional Access evaluates policy b
| Include/exclude mode with negative operators (NotEquals, NotStartsWith, NotEndsWith, NotContains, NotIn) and use of any attributes including extensionAttributes1-15 | Registered device managed by Intune | Yes, if criteria are met |
| Include/exclude mode with negative operators (NotEquals, NotStartsWith, NotEndsWith, NotContains, NotIn) and use of any attributes including extensionAttributes1-15 | Registered device not managed by Intune | Yes, if criteria are met and if device is compliant or Hybrid Azure AD joined |
+> [!IMPORTANT]
+> For unregistered devices, the only device information passed is the Operating System, Operating System Version, and Browser. This means that for unregistered devices evaluated by Conditional Access policies that use negative operators in filters for devices, any attribute outside of these is evaluated as a blank value. For example, suppose an unregistered device is evaluated against **device.displayName -notContains *Example***. Because the unregistered device passes a blank display name, which doesn't contain *Example*, the resulting condition is true.
+ ## Next steps

- [Conditional Access: Conditions](concept-conditional-access-conditions.md)
active-directory Msal Net Client Assertions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-client-assertions.md
string signedClientAssertion = ComputeAssertion();
// Supply the signed client assertion synchronously:
app = ConfidentialClientApplicationBuilder.Create(config.ClientId)
    .WithClientAssertion(() => { return GetSignedClientAssertion(); })
    .Build();
+
+// Or supply the assertion asynchronously:
+
+app = ConfidentialClientApplicationBuilder.Create(config.ClientId)
+ .WithClientAssertion(async cancellationToken => { return await GetClientAssertionAsync(cancellationToken); })
+ .Build();
```

The [claims expected by Azure AD](active-directory-certificate-credentials.md) in the signed assertion are:
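As a hedged sketch of what a `GetSignedClientAssertion`-style helper could look like (this is illustrative, not the article's sample; it assumes the `Microsoft.IdentityModel.JsonWebTokens` package and a certificate whose private key you hold):

```csharp
using System;
using System.Collections.Generic;
using System.Security.Cryptography.X509Certificates;
using Microsoft.IdentityModel.JsonWebTokens;
using Microsoft.IdentityModel.Tokens;

static string GetSignedClientAssertion(X509Certificate2 certificate, string tenantId, string clientId)
{
    // The audience is the tenant's v2.0 token endpoint; iss and sub are both the client ID.
    string aud = $"https://login.microsoftonline.com/{tenantId}/v2.0";

    var descriptor = new SecurityTokenDescriptor
    {
        Claims = new Dictionary<string, object>
        {
            { "aud", aud },
            { "iss", clientId },
            { "sub", clientId },
            { "jti", Guid.NewGuid().ToString() } // unique identifier for this assertion
        },
        // Sign with the certificate's private key.
        SigningCredentials = new X509SigningCredentials(certificate)
    };

    // The handler fills in exp, nbf, and iat with sensible defaults.
    return new JsonWebTokenHandler().CreateToken(descriptor);
}
```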
active-directory Msal Net Instantiate Public Client Config Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-instantiate-public-client-config-options.md
Before initializing an application, you first need to [register](quickstart-regi
- The tenant ID if you are writing a line-of-business application solely for your organization (also called a single-tenant application).
- For web apps, and sometimes for public client apps (in particular when your app needs to use a broker), you'll also have set the redirect URI where the identity provider will return the security tokens to your application.
+## Default Reply URI
+
+In MSAL.NET 4.1+, the default redirect URI (reply URI) can be set with the `public PublicClientApplicationBuilder WithDefaultRedirectUri()` method. This method sets the redirect URI property of the public client application to the recommended default.
+
+This method's behavior depends on the platform you're using. The following table describes the redirect URI that's set on each platform:
+
| Platform | Redirect URI |
| -------- | ------------ |
| Desktop app (.NET FW) | `https://login.microsoftonline.com/common/oauth2/nativeclient` |
| UWP | value of `WebAuthenticationBroker.GetCurrentApplicationCallbackUri()` |
| .NET Core | `http://localhost` |
+
+For the UWP platform, the experience is enhanced by enabling SSO with the browser, which is done by setting the value to the result of `WebAuthenticationBroker.GetCurrentApplicationCallbackUri()`.
+
+For .NET Core, MSAL.NET sets the value to `http://localhost` to enable the user to use the system browser for interactive authentication.
+
+> [!NOTE]
+> For embedded browsers in desktop scenarios, the redirect URI is intercepted by MSAL to detect when a response comes back from the identity provider with an auth code. This URI can therefore be used in any cloud without an actual redirect to it ever occurring. This means you can, and should, use `https://login.microsoftonline.com/common/oauth2/nativeclient` in any cloud. If you prefer, you can also use any other URI, as long as you configure the redirect URI correctly with MSAL and in the app registration. Specifying the default URI in the application registration means the least amount of setup in MSAL.
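To make this concrete, here's a minimal sketch (with a hypothetical client ID) of opting into the default redirect URI:

```csharp
using Microsoft.Identity.Client;

// Hypothetical client ID; WithDefaultRedirectUri() picks the
// platform-appropriate default from the table above.
var app = PublicClientApplicationBuilder
    .Create("00000000-0000-0000-0000-000000000000")
    .WithDefaultRedirectUri()
    .Build();
```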
+ A .NET Core console application could have the following *appsettings.json* configuration file:
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/whats-new-docs.md
Title: "What's new in Azure Active Directory external identities" description: "New and updated documentation for the Azure Active Directory external identities." Previously updated : 08/03/2021 Last updated : 09/08/2021
Welcome to what's new in Azure Active Directory external identities documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the external identities service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## August 2021
+
+### Updated articles
+
+- [Identity Providers for External Identities](identity-providers.md)
+- [Enable B2B external collaboration and manage who can invite guests](delegate-invitations.md)
+- [Properties of an Azure Active Directory B2B collaboration user](user-properties.md)
+- [Add Google as an identity provider for B2B guest users](google-federation.md)
+- [Azure Active Directory (Azure AD) identity provider for External Identities](azure-ad-account.md)
+- [Microsoft account (MSA) identity provider for External Identities](microsoft-account.md)
+- [Add an API connector to a user flow](self-service-sign-up-add-api-connector.md)
## July 2021

### New articles
active-directory Resilience With Monitoring Alerting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/resilience-with-monitoring-alerting.md
Monitoring maximizes the availability and performance of your applications and services. It delivers a comprehensive solution for collecting, analyzing, and acting on telemetry from your infrastructure and applications. Alerts proactively notify you when issues are found with your service or applications. They allow you to identify and address issues before the end users of your service notice them. [Azure AD Log Analytics](https://azure.microsoft.com/services/monitor/?OCID=AID2100131_SEM_6d16332c03501fc9c1f46c94726d2264:G:s&ef_id=6d16332c03501fc9c1f46c94726d2264:G:s&msclkid=6d16332c03501fc9c1f46c94726d2264#features) helps you analyze, search the audit logs and sign-in logs, and build custom views.
+Watch this video to learn how to set up monitoring and reporting in Azure AD B2C using Azure Monitor.
+
+>[!Video https://www.youtube.com/embed/Mu9GQy-CbXI]
+ ## Monitor and get notified through alerts

Monitoring your system and infrastructure is critical to ensure the overall health of your services. It starts with the definition of business metrics, such as new user arrivals, end-user authentication rates, and conversion. Configure such indicators to monitor. If you're planning for an upcoming surge because of promotion or holiday traffic, revise your estimates specifically for the event and the corresponding benchmark for the business metrics. After the event, fall back to the previous benchmark.
For example, track the following metrics, since a sudden drop in either will lea
- [Resilient interfaces with external processes](resilient-external-processes.md)
- [Resilience through developer best practices](resilience-b2c-developer-best-practices.md)
- [Build resilience in your authentication infrastructure](resilience-in-infrastructure.md)
- [Increase resilience of authentication and authorization in your applications](resilience-app-development-overview.md)
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new-archive.md
Previously updated: 7/30/2021
Last updated: 9/7/2021
The What's new in Azure Active Directory? release notes provide information abou
+## February 2021
+
+### Email one-time passcode authentication on by default starting October 2021
+
+**Type:** Plan for change
+**Service category:** B2B
+**Product capability:** B2B/B2C
+
+Starting October 31, 2021, Microsoft Azure Active Directory [email one-time passcode authentication](../external-identities/one-time-passcode.md) will become the default method for inviting accounts and tenants for B2B collaboration scenarios. At this time, Microsoft will no longer allow the redemption of invitations using unmanaged Azure Active Directory accounts.
+++
+### Unrequested but consented permissions will no longer be added to tokens if they would trigger Conditional Access
+
+**Type:** Plan for change
+**Service category:** Authentications (Logins)
+**Product capability:** Platform
+
+Currently, applications using [dynamic permissions](../develop/v2-permissions-and-consent.md#requesting-individual-user-consent) are given all of the permissions they're consented to access, including permissions that weren't requested and even permissions that trigger Conditional Access. For example, this can cause an app requesting only `user.read`, but that also has consent for `files.read`, to be forced to pass the Conditional Access policy assigned for the `files.read` permission.
+
+To reduce the number of unnecessary Conditional Access prompts, Azure AD is changing the way that unrequested scopes are provided to applications. Apps will only trigger conditional access for permission they explicitly request. For more information, read [What's new in authentication](../develop/reference-breaking-changes.md#conditional-access-will-only-trigger-for-explicitly-requested-scopes).
+
+
+
+### Public preview - Use a Temporary Access Pass to register Passwordless credentials
+
+**Type:** New feature
+**Service category:** MFA
+**Product capability:** Identity Security & Protection
+
+Temporary Access Pass is a time-limited passcode that serves as a strong credential and allows onboarding of passwordless credentials, as well as recovery when a user has lost or forgotten their strong authentication factor (for example, a FIDO2 security key or the Microsoft Authenticator app) and needs to sign in to register new strong authentication methods. [Learn more](../authentication/howto-authentication-temporary-access-pass.md).
+++
+### Public preview - Keep me signed in (KMSI) in next generation of user flows
+
+**Type:** New feature
+**Service category:** B2C - Consumer Identity Management
+**Product capability:** B2B/B2C
+
+The next generation of B2C user flows now supports the [keep me signed in (KMSI)](../../active-directory-b2c/session-behavior.md?pivots=b2c-custom-policy#enable-keep-me-signed-in-kmsi) functionality, which allows customers to extend the session lifetime for the users of their web and native applications by using a persistent cookie. The feature keeps the session active even when the user closes and reopens the browser, and the session is revoked when the user signs out.
+++
+### Public preview - Reset redemption status for a guest user
+
+**Type:** New feature
+**Service category:** B2B
+**Product capability:** B2B/B2C
+
+Customers can now reinvite existing external guest users to reset their redemption status, which allows the guest user account to remain without them losing any access. [Learn more](../external-identities/reset-redemption-status.md).
+
++
+### Public preview - /synchronization (provisioning) APIs now support application permissions
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** Identity Lifecycle Management
+
+Customers can now use `Application.ReadWrite.OwnedBy` as an application permission to call the synchronization APIs. Note this is only supported for provisioning from Azure AD out into third-party applications (for example, AWS, Databricks). It is currently not supported for HR provisioning (Workday/SuccessFactors) or Cloud Sync (AD to Azure AD). [Learn more](/graph/api/resources/provisioningobjectsummary?view=graph-rest-beta&preserve-view=true).
+
++
+### General availability - Authentication Policy Administrator built-in role
+
+**Type:** New feature
+**Service category:** RBAC
+**Product capability:** Access Control
+
+Users with this role can configure the authentication methods policy, tenant-wide MFA settings, and password protection policy. This role grants permission to manage Password Protection settings: smart lockout configurations and updating the custom banned passwords list. [Learn more](../roles/permissions-reference.md#authentication-policy-administrator).
+++
+### General availability - User collections on My Apps are available now!
+
+**Type:** New feature
+**Service category:** My Apps
+**Product capability:** End User Experiences
+
+Users can now create their own groupings of apps on the My Apps app launcher. They can also reorder and hide collections shared with them by their administrator. [Learn more](../user-help/my-apps-portal-user-collections.md).
+++
+### General availability - Autofill in Authenticator
+
+**Type:** New feature
+**Service category:** Microsoft Authenticator App
+**Product capability:** Identity Security & Protection
+
+Microsoft Authenticator provides multifactor authentication (MFA) and account management capabilities, and now will also autofill passwords on sites and apps users visit on their mobile devices (iOS and Android).
+
+To use autofill on Authenticator, users need to add their personal Microsoft account to Authenticator and use it to sync their passwords. Work or school accounts cannot be used to sync passwords at this time. [Learn more](../user-help/user-help-auth-app-faq.md#autofill-for-it-admins).
+++
+### General availability - Invite internal users to B2B collaboration
+
+**Type:** New feature
+**Service category:** B2B
+**Product capability:** B2B/B2C
+
+Customers can now invite internal guests to use B2B collaboration instead of sending an invitation to an existing internal account. This allows customers to keep that user's object ID, UPN, group memberships, and app assignments. [Learn more](../external-identities/invite-internal-users.md).
+++
+### General availability - Domain Name Administrator built-in role
+
+**Type:** New feature
+**Service category:** RBAC
+**Product capability:** Access Control
+
+Users with this role can manage (read, add, verify, update, and delete) domain names. They can also read directory information about users, groups, and applications, as these objects have domain dependencies.
+
+For on-premises environments, users with this role can configure domain names for federation so that associated users are always authenticated on-premises. These users can then sign into Azure AD-based services with their on-premises passwords via single sign-on. Federation settings need to be synced via Azure AD Connect, so users also have permissions to manage Azure AD Connect. [Learn more](../roles/permissions-reference.md#domain-name-administrator).
+
++
+### New Federated Apps available in Azure AD Application gallery - February 2021
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** 3rd Party Integration
+
+In February 2021, we added the following 37 new applications to our App gallery with Federation support:
+
+[Loop Messenger Extension](https://loopworks.com/loop-flow-messenger/), [Silverfort Azure AD Adapter](http://www.silverfort.com/), [Interplay Learning](https://skilledtrades.interplaylearning.com/#login), [Nura Space](https://dashboard.nuraspace.com/login), [Yooz EU](https://eu1.getyooz.com/?kc_idp_hint=microsoft), [UXPressia](https://uxpressia.com/users/sign-in), [introDus Pre- and Onboarding Platform](http://app.introdus.dk/login), [Happybot](https://login.microsoftonline.com/organizations/oauth2/v2.0/authorize?client_id=34353e1e-dfe5-4d2f-bb09-2a5e376270c8&response_type=code&redirect_uri=https://api.happyteams.io/microsoft/integrate&response_mode=query&scope=offline_access%20User.Read%20User.Read.All), [LeaksID](https://app.leaksid.com/), [ShiftWizard](http://www.shiftwizard.com/), [PingFlow SSO](https://app.pingview.io/), [Swiftlane](https://admin.swiftlane.com/login), [Quasydoc SSO](https://www.quasydoc.eu/login), [Fenwick Gold Account](https://businesscentral.dynamics.com/), [SeamlessDesk](https://www.seamlessdesk.com/login), [Learnsoft LMS & TMS](http://www.learnsoft.com/), [P-TH+](https://p-th.jp/), [myViewBoard](https://api.myviewboard.com/auth/microsoft/), [Tartabit IoT Bridge](https://bridge-us.tartabit.com/), [AKASHI](../saas-apps/akashi-tutorial.md), [Rewatch](../saas-apps/rewatch-tutorial.md), [Zuddl](../saas-apps/zuddl-tutorial.md), [Parkalot - Car park management](../saas-apps/parkalot-car-park-management-tutorial.md), [HSB ThoughtSpot](../saas-apps/hsb-thoughtspot-tutorial.md), [IBMid](../saas-apps/ibmid-tutorial.md), [SharingCloud](../saas-apps/sharingcloud-tutorial.md), [PoolParty Semantic Suite](../saas-apps/poolparty-semantic-suite-tutorial.md), [GlobeSmart](../saas-apps/globesmart-tutorial.md), [Samsung Knox and Business Services](../saas-apps/samsung-knox-and-business-services-tutorial.md), [Penji](../saas-apps/penji-tutorial.md), [Kendis- Scaling Agile Platform](../saas-apps/kendis-scaling-agile-platform-tutorial.md), [Maptician](../saas-apps/maptician-tutorial.md), [Olfeo SAAS](../saas-apps/olfeo-saas-tutorial.md), [Sigma Computing](../saas-apps/sigma-computing-tutorial.md), [CloudKnox Permissions Management Platform](../saas-apps/cloudknox-permissions-management-platform-tutorial.md), [Klaxoon SAML](../saas-apps/klaxoon-saml-tutorial.md), [Enablon](../saas-apps/enablon-tutorial.md)
+
+You can also find the documentation of all the applications here: https://aka.ms/AppsTutorial
+
+For listing your application in the Azure AD app gallery, read the details here: https://aka.ms/AzureADAppRequest
+
+
+
+### New provisioning connectors in the Azure AD Application Gallery - February 2021
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** 3rd Party Integration
+
+
+You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
+
+- [Atea](../saas-apps/atea-provisioning-tutorial.md)
+- [Getabstract](../saas-apps/getabstract-provisioning-tutorial.md)
+- [HelloID](../saas-apps/helloid-provisioning-tutorial.md)
+- [Hoxhunt](../saas-apps/hoxhunt-provisioning-tutorial.md)
+- [Iris Intranet](../saas-apps/iris-intranet-provisioning-tutorial.md)
+- [Preciate](../saas-apps/preciate-provisioning-tutorial.md)
+
+For more information, read [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
+++
+### General availability - 10 Azure Active Directory roles now renamed
+
+**Type:** Changed feature
+**Service category:** RBAC
+**Product capability:** Access Control
+
+10 Azure AD built-in roles have been renamed so that they're aligned across the [Microsoft 365 admin center](/microsoft-365/admin/microsoft-365-admin-center-preview), [Azure AD portal](https://portal.azure.com/), and [Microsoft Graph](https://developer.microsoft.com/graph/). To learn more about the new roles, refer to [Administrator role permissions in Azure Active Directory](../roles/permissions-reference.md#all-roles).
+
+![Table showing role names in MS Graph API and the Azure portal, and the proposed final name across API, Azure portal, and Mac.](media/whats-new/roles-table-rbac.png)
+++
+### New Company Branding in MFA/SSPR Combined Registration
+
+**Type:** Changed feature
+**Service category:** User Experience and Management
+**Product capability:** End User Experiences
+
+In the past, company logos weren't used on Azure Active Directory sign-in pages. Company branding is now located at the top left of the MFA/SSPR combined registration pages.
+++
+### General availability - Second level manager can be set as alternate approver
+
+**Type:** Changed feature
+**Service category:** User Access Management
+**Product capability:** Entitlement Management
+
+An extra option when you select approvers is now available in Entitlement Management. If you select "Manager as approver" for the First Approver, you will have another option, "Second level manager as alternate approver", available to choose in the alternate approver field. If you select this option, you need to add a fallback approver to forward the request to in case the system can't find the second level manager. [Learn more](../governance/entitlement-management-access-package-approval-policy.md#alternate-approvers).
+
++
+### Authentication Methods Activity Dashboard
+
+**Type:** Changed feature
+**Service category:** Reporting
+**Product capability:** Monitoring & Reporting
+
+
+The refreshed Authentication Methods Activity dashboard gives admins an overview of authentication method registration and usage activity in their tenant. The report summarizes the number of users registered for each method, and also which methods are used during sign-in and password reset. [Learn more](../authentication/howto-authentication-methods-activity.md).
+
++
+### Refresh and session token lifetimes configurability in Configurable Token Lifetime (CTL) are retired
+
+**Type:** Deprecated
+**Service category:** Other
+**Product capability:** User Authentication
+
+Refresh and session token lifetimes configurability in CTL are retired. Azure Active Directory no longer honors refresh and session token configuration in existing policies. [Learn more](../develop/active-directory-configurable-token-lifetimes.md#token-lifetime-policies-for-refresh-tokens-and-session-tokens).
+
## January 2021

### Secret token will be a mandatory field when configuring provisioning
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new.md
Previously updated: 7/30/2021
Last updated: 9/7/2021
Azure AD receives improvements on an ongoing basis. To stay up to date with the
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Active Directory](whats-new-archive.md).
+## August 2021
+
+### New major version of AADConnect available
+
+**Type:** Fixed
+**Service category:** AD Connect
+**Product capability:** Identity Lifecycle Management
+
+We've released a new major version of Azure Active Directory Connect. This version contains several updates of foundational components to the latest versions and is recommended for all customers using Azure AD Connect. [Learn more](../hybrid/whatis-azure-ad-connect-v2.md).
+
++
+### Public Preview - Azure AD single sign-on and device-based Conditional Access support in Firefox on Windows 10
+
+**Type:** New feature
+**Service category:** Authentications (Logins)
+**Product capability:** SSO
+
+
We now support native single sign-on (SSO) and device-based Conditional Access in the Firefox browser on Windows 10 and Windows Server 2019. Support is available in Firefox version 91. [Learn more](../conditional-access/require-managed-devices.md#prerequisites).
+
++
+### Public preview - beta MS Graph APIs for Azure AD access reviews returns list of contacted reviewer names
+
+**Type:** New feature
+**Service category:** Access Reviews
+**Product capability:** Identity Governance
+
+
+We've released beta MS Graph API for Azure AD access reviews. The API has methods to return a list of contacted reviewer names in addition to the reviewer type. [Learn more](/graph/api/resources/accessreviewinstance?view=graph-rest-beta).
+
++
+### General Availability - "Register or join devices" user action in Conditional Access
+
+**Type:** New feature
+**Service category:** Conditional Access
+**Product capability:** Identity Security & Protection
+
+
+The "Register or join devices" user action is generally available in Conditional access. This user action allows you to control multifactor authentication (MFA) policies for Azure Active Directory (AD) device registration. Currently, this user action only allows you to enable MFA as a control when users register or join devices to Azure AD. Other controls that are dependent on or not applicable to Azure AD device registration continue to be disabled with this user action. [Learn more](../conditional-access/concept-conditional-access-cloud-apps.md#user-actions).
+++
+### General Availability - customers can scope reviews of privileged roles to eligible or permanent assignments
+
+**Type:** New feature
+**Service category:** Access Reviews
+**Product capability:** Identity Governance
+
+Administrators can now create access reviews of only permanent or eligible assignments to privileged Azure AD or Azure resource roles. [Learn more](../privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md).
+
+
+
+### General availability - assign roles to Azure Active Directory (AD) groups
+
+**Type:** New feature
+**Service category:** RBAC
+**Product capability:** Access Control
+
+
+Assigning roles to Azure AD groups is now generally available. This feature can simplify the management of role assignments in Azure AD for Global Administrators and Privileged Role Administrators. [Learn more](../roles/groups-concept.md).
+
++
+### New Federated Apps available in Azure AD Application gallery - Aug 2021
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** 3rd Party Integration
+
+In August 2021, we added the following 46 new applications to our App gallery with Federation support:
+
+[Siriux Customer Dashboard](https://portal.siriux.tech/login), [STRUXI](https://struxi.app/), [Autodesk Construction Cloud - Meetings](https://acc.autodesk.com/), [Eccentex AppBase for Azure](../saas-apps/eccentex-appbase-for-azure-tutorial.md), [Bookado](https://adminportal.bookado.io/), [FilingRamp](https://app.filingramp.com/login), [BenQ IAM](../saas-apps/benq-iam-tutorial.md), [Rhombus Systems](../saas-apps/rhombus-systems-tutorial.md), [CorporateExperience](../saas-apps/corporateexperience-tutorial.md), [TutorOcean](../saas-apps/tutorocean-tutorial.md), [Bookado Device](https://adminportal.bookado.io/), [HiFives-AD-SSO](https://app.hifives.in/login/azure), [Darzin](https://au.darzin.com/), [Simply Stakeholders](https://au.simplystakeholders.com/), [KACTUS HCM - Smart People](https://kactusspc.digitalware.co/), [Five9 UC Adapter for Microsoft Teams V2](https://uc.five9.net/?vendor=msteams), [Automation Center](https://automationcenter.cognizantgoc.com/portal/boot/signon), [Cirrus Identity Bridge for Azure AD](../saas-apps/cirrus-identity-bridge-for-azure-ad-tutorial.md), [ShiftWizard SAML](../saas-apps/shiftwizard-saml-tutorial.md), [Safesend Returns](https://www.safesendwebsites.com/), [Brushup](../saas-apps/brushup-tutorial.md), [directprint.io Cloud Print Administration](../saas-apps/directprint-io-cloud-print-administration-tutorial.md), [plain-x](https://app.plain-x.com/#/login),[X-point Cloud](../saas-apps/x-point-cloud-tutorial.md), [SmartHub INFER](../saas-apps/smarthub-infer-tutorial.md), [Fresh Relevance](../saas-apps/fresh-relevance-tutorial.md), [FluentPro G.A. Suite](https://gas.fluentpro.com/Account/SSOLogin?provider=Microsoft), [Clockwork Recruiting](../saas-apps/clockwork-recruiting-tutorial.md), [WalkMe SAML2.0](../saas-apps/walkme-saml-tutorial.md), [Sideways 6](https://app.sideways6.com/account/login?ReturnUrl=/), [Kronos Workforce Dimensions](../saas-apps/kronos-workforce-dimensions-tutorial.md), [SysTrack Cloud Edition](https://cloud.lakesidesoftware.com/Cloud/Account/Login), [mailworx Dynamics CRM Connector](https://www.mailworx.info/), [Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service](../saas-apps/palo-alto-networks-cloud-identity-enginecloud-authentication-service-tutorial.md), [Peripass](https://accounts.peripass.app/v1/sso/challenge), [JobDiva](https://www.jobssos.com/index_azad.jsp?SSO=AZURE&ID=1), [Sanebox For Office365](https://sanebox.com/login), [Tulip](../saas-apps/tulip-tutorial.md), [HP Wolf Security](https://bec-pocda37b439.bromium-online.com/gui/), [Genesys Engage cloud Email](https://login.microsoftonline.com/common/oauth2/authorize?prompt=consent&accessType=offline&state=07e035a7-6fb0-4411-afd9-efa46c9602f9&resource=https://graph.microsoft.com/&response_type=code&redirect_uri=https://iwd.api01-westus2.dev.genazure.com/iwd/v3/emails/oauth2/microsoft/callback&client_id=36cd21ab-862f-47c8-abb6-79facad09dda), [Meta Wiki](https://meta.dunkel.eu/), [Palo Alto Networks Cloud Identity Engine Directory Sync](https://directory-sync.us.paloaltonetworks.com/directory?instance=L2qoLVONpBHgdJp1M5K9S08Z7NBXlpi54pW1y3DDu2gQqdwKbyUGA11EgeaDfZ1dGwn397S8eP7EwQW3uyE4XL), [Valarea](https://www.valarea.com/en/download), [LanSchool Air](../saas-apps/lanschool-air-tutorial.md), [Catalyst](https://www.catalyst.org/sso-login/), [Webcargo](../saas-apps/webcargo-tutorial.md)
+
+You can also find the documentation of all the applications here: https://aka.ms/AppsTutorial
+
+For listing your application in the Azure AD app gallery, read the details here: https://aka.ms/AzureADAppRequest
+++
+### New provisioning connectors in the Azure AD Application Gallery - August 2021
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** 3rd Party Integration
+
+You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
+
+- [Chatwork](../saas-apps/chatwork-provisioning-tutorial.md)
+- [Freshservice](../saas-apps/freshservice-provisioning-tutorial.md)
+- [InviteDesk](../saas-apps/invitedesk-provisioning-tutorial.md)
+- [Maptician](../saas-apps/maptician-provisioning-tutorial.md)
+
+For more information about how to better secure your organization by using automated user account provisioning, see [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
+
++
+### Multi-factor (MFA) fraud report - new audit event
+
+**Type:** Changed feature
+**Service category:** MFA
+**Product capability:** Identity Security & Protection
+
+
+To help administrators understand when their users are blocked for MFA as a result of a fraud report, we've added a new audit event. This audit event is tracked when the user reports fraud. The audit log is available in addition to the existing information in the sign-in logs about the fraud report. To learn how to get the audit report, see [multifactor authentication Fraud alert](../authentication/howto-mfa-mfasettings.md#fraud-alert).
+++
+### Improved Low-Risk Detections
+
+**Type:** Changed feature
+**Service category:** Identity Protection
+**Product capability:** Identity Security & Protection
+
+To improve the quality of low-risk alerts that Identity Protection issues, we've modified the algorithm to issue fewer low-risk risky sign-in detections. Organizations may see a significant reduction in low-risk sign-ins in their environment. [Learn more](../identity-protection/concept-identity-protection-risks.md).
+
++
+### Non-interactive risky sign-ins
+
+**Type:** Changed feature
+**Service category:** Identity Protection
+**Product capability:** Identity Security & Protection
+
+Identity Protection now emits risky sign-ins on non-interactive sign-ins. Admins can find these risky sign-ins using the **sign-in type** filter in the risky sign-ins report. [Learn more](../identity-protection/howto-identity-protection-investigate-risk.md).
+
++
+### Change from User Administrator to Identity Governance Administrator in Entitlement Management
+
+**Type:** Changed feature
+**Service category:** Roles
+**Product capability:** Identity Governance
+
+The permission assignments to manage access packages and other resources in Entitlement Management are moving from the User Administrator role to the Identity Governance Administrator role.
+
+Users that have been assigned the User Administrator role can no longer create catalogs or manage access packages in a catalog they don't own. If users in your organization have been assigned the User Administrator role to configure catalogs, access packages, or policies in entitlement management, they will need a new assignment. You should instead assign these users the Identity Governance Administrator role. [Learn more](../governance/entitlement-management-delegate.md).
+++
+### Windows Azure Active Directory connector is deprecated
+
+**Type:** Deprecated
+**Service category:** Microsoft Identity Manager
+**Product capability:** Identity Lifecycle Management
+
+The Windows Azure AD Connector for FIM is at feature freeze and deprecated. The solution of using FIM and the Azure AD Connector has been replaced. Existing deployments should migrate to [Azure AD Connect](../hybrid/whatis-hybrid-identity.md), Azure AD Connect Sync, or the [Microsoft Graph Connector](https://docs.microsoft.com/microsoft-identity-manager/microsoft-identity-manager-2016-connector-graph), as the internal interfaces used by the Azure AD Connector for FIM are being removed from Azure AD. [Learn more](https://docs.microsoft.com/microsoft-identity-manager/microsoft-identity-manager-2016-deprecated-features).
+++
+### Retirement of older Azure AD Connect versions
+
+**Type:** Deprecated
+**Service category:** AD Connect
+**Product capability:** User Management
+
+Starting August 31, 2022, all V1 versions of Azure AD Connect will be retired. If you haven't already done so, you need to update your server to Azure AD Connect V2.0. Make sure you're running a recent version of Azure AD Connect to receive an optimal support experience.
+
+If you run a retired version of Azure AD Connect, it may unexpectedly stop working. You may also lack the latest security fixes, performance improvements, troubleshooting and diagnostic tools, and service enhancements. Also, if you require support, we can't provide you with the level of service your organization needs.
+
+See [Azure Active Directory Connect V2.0](../hybrid/whatis-azure-ad-connect-v2.md) to learn what has changed in V2.0 and how this change impacts you.
+++
+### Retirement of support for installing MIM on Windows Server 2008 R2 or SQL Server 2008 R2
+
+**Type:** Deprecated
+**Service category:** Microsoft Identity Manager
+**Product capability:** Identity Lifecycle Management
+
+Deploying MIM Sync, Service, Portal or CM on Windows Server 2008 R2, or using SQL Server 2008 R2 as the underlying database, is deprecated as these platforms are no longer in mainstream support. Installing MIM Sync and other components on Windows Server 2016 or later, and with SQL Server 2016 or later, is recommended.
+
+Deploying MIM for Privileged Access Management with a Windows Server 2012 R2 domain controller in the PRIV forest is deprecated. Use Windows Server 2016 or later Active Directory, with Windows Server 2016 functional level, for your PRIV forest domain. The Windows Server 2012 R2 functional level is still permitted for a CORP forest's domain. [Learn more](https://docs.microsoft.com/microsoft-identity-manager/microsoft-identity-manager-2016-supported-platforms).
## July 2021

### New Google sign-in integration for Azure AD B2C and B2B self-service sign-up and invited external users will stop working starting July 12, 2021
Two-way SMS for MFA Server was originally deprecated in 2018, and will not be su
Email notifications and Azure portal Service Health notifications were sent to affected admins on December 8, 2020 and January 28, 2021. The alerts went to the Owner, Co-Owner, Admin, and Service Admin RBAC roles tied to the subscriptions. [Learn more](../authentication/how-to-authentication-two-way-sms-unsupported.md).
-
-## February 2021
-
-### Email one-time passcode authentication on by default starting October 2021
-
-**Type:** Plan for change
-**Service category:** B2B
-**Product capability:** B2B/B2C
-
-
-Starting October 31, 2021, Microsoft Azure Active Directory [email one-time passcode authentication](../external-identities/one-time-passcode.md) will become the default method for inviting accounts and tenants for B2B collaboration scenarios. At this time, Microsoft will no longer allow the redemption of invitations using unmanaged Azure Active Directory accounts.
---
-### Unrequested but consented permissions will no longer be added to tokens if they would trigger Conditional Access
-**Type:** Plan for change
-**Service category:** Authentications (Logins)
-**Product capability:** Platform
-
-Currently, applications using [dynamic permissions](../develop/v2-permissions-and-consent.md#requesting-individual-user-consent) are given all of the permissions they're consented to access. This includes applications that are unrequested and even if they trigger conditional access. For example, this can cause an app requesting only `user.read` that also has consent for `files.read`, to be forced to pass the Conditional Access assigned for the `files.read` permission.
-
-To reduce the number of unnecessary Conditional Access prompts, Azure AD is changing the way that unrequested scopes are provided to applications. Apps will only trigger conditional access for permission they explicitly request. For more information, read [What's new in authentication](../develop/reference-breaking-changes.md#conditional-access-will-only-trigger-for-explicitly-requested-scopes).
-
-
-
-### Public preview - Use a Temporary Access Pass to register Passwordless credentials
-
-**Type:** New feature
-**Service category:** MFA
-**Product capability:** Identity Security & Protection
-
-Temporary Access Pass is a time-limited passcode that serves as strong credentials and allows onboarding of Passwordless credentials and recovery when a user has lost or forgotten their strong authentication factor (for example, FIDO2 security key or Microsoft Authenticator) app and needs to sign in to register new strong authentication methods. [Learn more](../authentication/howto-authentication-temporary-access-pass.md).
---
-### Public preview - Keep me signed in (KMSI) in next generation of user flows
-
-**Type:** New feature
-**Service category:** B2C - Consumer Identity Management
-**Product capability:** B2B/B2C
-
-The next generation of B2C user flows now supports the [keep me signed in (KMSI)](../../active-directory-b2c/session-behavior.md?pivots=b2c-custom-policy#enable-keep-me-signed-in-kmsi) functionality that allows customers to extend the session lifetime for the users of their web and native applications by using a persistent cookie. feature keeps the session active even when the user closes and reopens the browser, and is revoked when the user signs out.
---
-### Public preview - Reset redemption status for a guest user
-
-**Type:** New feature
-**Service category:** B2B
-**Product capability:** B2B/B2C
-
-Customers can now reinvite existing external guest users to reset their redemption status, which allows the guest user account to remain without them losing any access. [Learn more](../external-identities/reset-redemption-status.md).
-
--
-### Public preview - /synchronization (provisioning) APIs now support application permissions
-
-**Type:** New feature
-**Service category:** App Provisioning
-**Product capability:** Identity Lifecycle Management
-
-Customers can now use application.readwrite.ownedby as an application permission to call the synchronization APIs. Note this is only supported for provisioning from Azure AD out into third-party applications (for example, AWS, Data Bricks, etc.). It is currently not supported for HR-provisioning (Workday / Successfactors) or Cloud Sync (AD to Azure AD). [Learn more](/graph/api/resources/provisioningobjectsummary?view=graph-rest-beta&preserve-view=true).
-
--
-### General availability - Authentication Policy Administrator built-in role
-
-**Type:** New feature
-**Service category:** RBAC
-**Product capability:** Access Control
-
-Users with this role can configure the authentication methods policy, tenant-wide MFA settings, and password protection policy. This role grants permission to manage Password Protection settings: smart lockout configurations and updating the custom banned passwords list. [Learn more](../roles/permissions-reference.md#authentication-policy-administrator).
---
-### General availability - User collections on My Apps are available now!
-
-**Type:** New feature
-**Service category:** My Apps
-**Product capability:** End User Experiences
-
-Users can now create their own groupings of apps on the My Apps app launcher. They can also reorder and hide collections shared with them by their administrator. [Learn more](../user-help/my-apps-portal-user-collections.md).
---
-### General availability - Autofill in Authenticator
-
-**Type:** New feature
-**Service category:** Microsoft Authenticator App
-**Product capability:** Identity Security & Protection
-
-Microsoft Authenticator provides Multi-factor Authentication (MFA) and account management capabilities, and now also will autofill passwords on sites and apps users visit on their mobile (iOS and Android).
-
-To use autofill on Authenticator, users need to add their personal Microsoft account to Authenticator and use it to sync their passwords. Work or school accounts cannot be used to sync passwords at this time. [Learn more](../user-help/user-help-auth-app-faq.md#autofill-for-it-admins).
---
-### General availability - Invite internal users to B2B collaboration
-
-**Type:** New feature
-**Service category:** B2B
-**Product capability:** B2B/B2C
-
-Customers can now invite internal guests to use B2B collaboration instead of sending an invitation to an existing internal account. This allows customers to keep that user's object ID, UPN, group memberships, and app assignments. [Learn more](../external-identities/invite-internal-users.md).
---
-### General availability - Domain Name Administrator built-in role
-
-**Type:** New feature
-**Service category:** RBAC
-**Product capability:** Access Control
-
-Users with this role can manage (read, add, verify, update, and delete) domain names. They can also read directory information about users, groups, and applications, as these objects have domain dependencies.
-
-For on-premises environments, users with this role can configure domain names for federation so that associated users are always authenticated on-premises. These users can then sign into Azure AD-based services with their on-premises passwords via single sign-on. Federation settings need to be synced via Azure AD Connect, so users also have permissions to manage Azure AD Connect. [Learn more](../roles/permissions-reference.md#domain-name-administrator).
-
--
-### New Federated Apps available in Azure AD Application gallery - February 2021
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** 3rd Party Integration
-
-In February 2021 we have added following 37 new applications in our App gallery with Federation support:
-
-[Loop Messenger Extension](https://loopworks.com/loop-flow-messenger/), [Silverfort Azure AD Adapter](http://www.silverfort.com/), [Interplay Learning](https://skilledtrades.interplaylearning.com/#login), [Nura Space](https://dashboard.nuraspace.com/login), [Yooz EU](https://eu1.getyooz.com/?kc_idp_hint=microsoft), [UXPressia](https://uxpressia.com/users/sign-in), [introDus Pre- and Onboarding Platform](http://app.introdus.dk/login), [Happybot](https://login.microsoftonline.com/organizations/oauth2/v2.0/authorize?client_id=34353e1e-dfe5-4d2f-bb09-2a5e376270c8&response_type=code&redirect_uri=https://api.happyteams.io/microsoft/integrate&response_mode=query&scope=offline_access%20User.Read%20User.Read.All), [LeaksID](https://app.leaksid.com/), [ShiftWizard](http://www.shiftwizard.com/), [PingFlow SSO](https://app.pingview.io/), [Swiftlane](https://admin.swiftlane.com/login), [Quasydoc SSO](https://www.quasydoc.eu/login), [Fenwick Gold Account](https://businesscentral.dynamics.com/), [SeamlessDesk](https://www.seamlessdesk.com/login), [Learnsoft LMS & TMS](http://www.learnsoft.com/), [P-TH+](https://p-th.jp/), [myViewBoard](https://api.myviewboard.com/auth/microsoft/), [Tartabit IoT Bridge](https://bridge-us.tartabit.com/), [AKASHI](../saas-apps/akashi-tutorial.md), [Rewatch](../saas-apps/rewatch-tutorial.md), [Zuddl](../saas-apps/zuddl-tutorial.md), [Parkalot - Car park management](../saas-apps/parkalot-car-park-management-tutorial.md), [HSB ThoughtSpot](../saas-apps/hsb-thoughtspot-tutorial.md), [IBMid](../saas-apps/ibmid-tutorial.md), [SharingCloud](../saas-apps/sharingcloud-tutorial.md), [PoolParty Semantic Suite](../saas-apps/poolparty-semantic-suite-tutorial.md), [GlobeSmart](../saas-apps/globesmart-tutorial.md), [Samsung Knox and Business Services](../saas-apps/samsung-knox-and-business-services-tutorial.md), [Penji](../saas-apps/penji-tutorial.md), [Kendis- Scaling Agile Platform](../saas-apps/kendis-scaling-agile-platform-tutorial.md), [Maptician](../saas-apps/maptician-tutorial.md), [Olfeo SAAS](../saas-apps/olfeo-saas-tutorial.md), [Sigma Computing](../saas-apps/sigma-computing-tutorial.md), [CloudKnox Permissions Management Platform](../saas-apps/cloudknox-permissions-management-platform-tutorial.md), [Klaxoon SAML](../saas-apps/klaxoon-saml-tutorial.md), [Enablon](../saas-apps/enablon-tutorial.md)
-
-You can also find the documentation of all the applications here: https://aka.ms/AppsTutorial
-
-For listing your application in the Azure AD app gallery, read the details here: https://aka.ms/AzureADAppRequest
-
-
-
-### New provisioning connectors in the Azure AD Application Gallery - February 2021
-
-**Type:** New feature
-**Service category:** App Provisioning
-**Product capability:** 3rd Party Integration
-
-
-You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
-- [Atea](../saas-apps/atea-provisioning-tutorial.md)
-- [Getabstract](../saas-apps/getabstract-provisioning-tutorial.md)
-- [HelloID](../saas-apps/helloid-provisioning-tutorial.md)
-- [Hoxhunt](../saas-apps/hoxhunt-provisioning-tutorial.md)
-- [Iris Intranet](../saas-apps/iris-intranet-provisioning-tutorial.md)
-- [Preciate](../saas-apps/preciate-provisioning-tutorial.md)
-
-For more information, read [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
---
-### General availability - 10 Azure Active Directory roles now renamed
-
-**Type:** Changed feature
-**Service category:** RBAC
-**Product capability:** Access Control
-
-10 Azure AD built-in roles have been renamed so that they're aligned across the [Microsoft 365 admin center](/microsoft-365/admin/microsoft-365-admin-center-preview), [Azure AD portal](https://portal.azure.com/), and [Microsoft Graph](https://developer.microsoft.com/graph/). To learn more about the new roles, refer to [Administrator role permissions in Azure Active Directory](../roles/permissions-reference.md#all-roles).
-
-![Table showing role names in MS Graph API and the Azure portal, and the proposed final name across API, Azure portal, and Mac.](media/whats-new/roles-table-rbac.png)
---
-### New Company Branding in MFA/SSPR Combined Registration
-
-**Type:** Changed feature
-**Service category:** User Experience and Management
-**Product capability:** End User Experiences
-
-In the past, company logos weren't used on Azure Active Directory sign-in pages. Company branding is now located at the top left of the MFA/SSPR combined registration experience.
---
-### General availability - Second level manager can be set as alternate approver
-
-**Type:** Changed feature
-**Service category:** User Access Management
-**Product capability:** Entitlement Management
-
-An extra option when you select approvers is now available in Entitlement Management. If you select "Manager as approver" for the First Approver, you will have another option, "Second level manager as alternate approver", available to choose in the alternate approver field. If you select this option, you need to add a fallback approver to forward the request to in case the system can't find the second level manager. [Learn more](../governance/entitlement-management-access-package-approval-policy.md#alternate-approvers).
-
--
-### Authentication Methods Activity Dashboard
-
-**Type:** Changed feature
-**Service category:** Reporting
-**Product capability:** Monitoring & Reporting
-
-
-The refreshed Authentication Methods Activity dashboard gives admins an overview of authentication method registration and usage activity in their tenant. The report summarizes the number of users registered for each method, and also which methods are used during sign-in and password reset. [Learn more](../authentication/howto-authentication-methods-activity.md).
-
--
-### Refresh and session token lifetimes configurability in Configurable Token Lifetime (CTL) are retired
-
-**Type:** Deprecated
-**Service category:** Other
-**Product capability:** User Authentication
-
-Refresh and session token lifetimes configurability in CTL are retired. Azure Active Directory no longer honors refresh and session token configuration in existing policies. [Learn more](../develop/active-directory-configurable-token-lifetimes.md#token-lifetime-policies-for-refresh-tokens-and-session-tokens).
-
-
active-directory Entitlement Management Access Reviews Review Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-reviews-review-access.md
description: Learn how to complete an access review of entitlement management ac
documentationCenter: '' -+ editor:
active-directory Entitlement Management Catalog Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-catalog-create.md
Title: Create & manage a catalog of resources in entitlement management - Azure AD
+ Title: Create and manage a catalog of resources in entitlement management - Azure AD
description: Learn how to create a new container of resources and access packages in Azure Active Directory entitlement management. documentationCenter: ''
-#Customer intent: As an administrator, I want detailed information about the options available when creating and manage catalog so that I most effectively use catalogs in my organization.
+#Customer intent: As an administrator, I want detailed information about the options available for creating and managing a catalog so that I can most effectively use catalogs in my organization.
# Create and manage a catalog of resources in Azure AD entitlement management
+This article shows you how to create and manage a catalog of resources and access packages in Azure Active Directory (Azure AD) entitlement management.
+ ## Create a catalog
-A catalog is a container of resources and access packages. You create a catalog when you want to group related resources and access packages. Whoever creates the catalog becomes the first catalog owner. A catalog owner can add additional catalog owners.
+A catalog is a container of resources and access packages. You create a catalog when you want to group related resources and access packages. Whoever creates the catalog becomes the first catalog owner. A catalog owner can add more catalog owners.
-**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, or Catalog creator
+**Prerequisite roles:** Global administrator, Identity Governance administrator, User administrator, or Catalog creator
> [!NOTE]
-> Users that have been assigned the User administrator role will no longer be able to create catalogs or manage access packages in a catalog they do not own. If users in your organization have been assigned the User administrator role to configure catalogs, access packages, or policies in entitlement management, you should instead assign these users the **Identity Governance administrator** role.
+> Users who were assigned the User administrator role will no longer be able to create catalogs or manage access packages in a catalog they don't own. If users in your organization were assigned the User administrator role to configure catalogs, access packages, or policies in entitlement management, you should instead assign these users the Identity Governance administrator role.
+
+To create a catalog:
-1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
+1. In the Azure portal, select **Azure Active Directory** > **Identity Governance**.
-1. In the left menu, click **Catalogs**.
+1. On the left menu, select **Catalogs**.
- ![Entitlement management catalogs in the Azure portal](./media/entitlement-management-catalog-create/catalogs.png)
+ ![Screenshot that shows entitlement management catalogs in the Azure portal.](./media/entitlement-management-catalog-create/catalogs.png)
-1. Click **New catalog**.
+1. Select **New catalog**.
1. Enter a unique name for the catalog and provide a description. Users will see this information in an access package's details.
-1. If you want the access packages in this catalog to be available for users to request as soon as they are created, set **Enabled** to **Yes**.
+1. If you want the access packages in this catalog to be available for users to request as soon as they're created, set **Enabled** to **Yes**.
1. If you want to allow users in selected external directories to be able to request access packages in this catalog, set **Enabled for external users** to **Yes**.
- ![New catalog pane](./media/entitlement-management-shared/new-catalog.png)
+ ![Screenshot that shows the New catalog pane.](./media/entitlement-management-shared/new-catalog.png)
-1. Click **Create** to create the catalog.
+1. Select **Create** to create the catalog.
## Create a catalog programmatically
+
+There are two ways to create a catalog programmatically.
+ ### Create a catalog with Microsoft Graph
-You can also create a catalog using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission, or an application with that application permission, can call the API to [create an accessPackageCatalog](/graph/api/accesspackagecatalog-post?view=graph-rest-beta&preserve-view=true).
+You can create a catalog by using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission, or an application with that application permission, can call the API to [create an accessPackageCatalog](/graph/api/accesspackagecatalog-post?view=graph-rest-beta&preserve-view=true).
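For illustration, here's a minimal sketch of that Graph call made from PowerShell with `Invoke-MgGraphRequest`. The payload fields follow the `accessPackageCatalog` resource referenced above; the display name and description values are placeholders, not values from this article.

```powershell
# Sketch: create a catalog by POSTing to the beta entitlement management endpoint.
# Assumes the Microsoft Graph PowerShell SDK is installed.
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"

$payload = @{
    displayName         = "Marketing"                     # placeholder
    description         = "Marketing division resources"  # placeholder
    isExternallyVisible = $false                          # hide from external users
} | ConvertTo-Json

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackageCatalogs" `
    -Body $payload
```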
-### Create a catalog with PowerShell
+### Create a catalog with PowerShell
-You can create a catalog in PowerShell with the `New-MgEntitlementManagementAccessPackageCatalog` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.6.0 or later.
+You can also create a catalog in PowerShell with the `New-MgEntitlementManagementAccessPackageCatalog` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.6.0 or later.
```powershell
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"
$catalog = New-MgEntitlementManagementAccessPackageCatalog -DisplayName "Marketi
```
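A fuller, hedged sketch of the same call follows. The `-Description` and `-IsExternallyVisible` parameter names are assumptions based on the module's naming conventions, and the display name is a placeholder:

```powershell
# Sketch: create a catalog and read back its ID for later use.
$catalog = New-MgEntitlementManagementAccessPackageCatalog `
    -DisplayName "Marketing" `
    -Description "Resources for the marketing division" `
    -IsExternallyVisible:$false

# The catalog ID is what you pass when adding resources or access packages later.
$catalog.Id
```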
## Add resources to a catalog
-To include resources in an access package, the resources must exist in a catalog. The types of resources you can add are groups, applications, and SharePoint Online sites.
+To include resources in an access package, the resources must exist in a catalog. The types of resources you can add are groups, applications, and SharePoint Online sites. For example:
+
+* Groups can be cloud-created Microsoft 365 Groups or cloud-created Azure AD security groups. Groups that originate in an on-premises Active Directory can't be assigned as resources because their owner or member attributes can't be changed in Azure AD. Groups that originate in Exchange Online as Distribution groups can't be modified in Azure AD either.
+* Applications can be Azure AD enterprise applications, which include both software as a service (SaaS) applications and your own applications integrated with Azure AD. For more information on how to select appropriate resources for applications with multiple roles, see [Add resource roles](entitlement-management-access-package-resources.md#add-resource-roles).
+* Sites can be SharePoint Online sites or SharePoint Online site collections.
-* The groups can be cloud-created Microsoft 365 Groups or cloud-created Azure AD security groups. Groups that originate in an on-premises Active Directory cannot be assigned as resources because their owner or member attributes cannot be changed in Azure AD. Groups that originate in Exchange Online as Distribution groups cannot be modified in Azure AD either.
-* The applications can be Azure AD enterprise applications, including both SaaS applications and your own applications integrated with Azure AD. For more information on selecting appropriate resources for applications with multiple roles, see [add resource roles](entitlement-management-access-package-resources.md#add-resource-roles).
-* The sites can be SharePoint Online sites or SharePoint Online site collections.
+**Prerequisite roles:** See [Required roles to add resources to a catalog](entitlement-management-delegate.md#required-roles-to-add-resources-to-a-catalog).
-**Prerequisite role:** See [Required roles to add resources to a catalog](entitlement-management-delegate.md#required-roles-to-add-resources-to-a-catalog)
+To add resources to a catalog:
-1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
+1. In the Azure portal, select **Azure Active Directory** > **Identity Governance**.
-1. In the left menu, click **Catalogs** and then open the catalog you want to add resources to.
+1. On the left menu, select **Catalogs** and then open the catalog you want to add resources to.
-1. In the left menu, click **Resources**.
+1. On the left menu, select **Resources**.
-1. Click **Add resources**.
+1. Select **Add resources**.
-1. Click a resource type: **Groups and Teams**, **Applications**, or **SharePoint sites**.
+1. Select the resource type **Groups and Teams**, **Applications**, or **SharePoint sites**.
If you don't see a resource that you want to add or you're unable to add a resource, make sure you have the required Azure AD directory role and entitlement management role. You might need to have someone with the required roles add the resource to your catalog. For more information, see [Required roles to add resources to a catalog](entitlement-management-delegate.md#required-roles-to-add-resources-to-a-catalog).
-1. Select one or more resources of the type that you would like to add to the catalog.
+1. Select one or more resources of the type that you want to add to the catalog.
- ![Add resources to a catalog](./media/entitlement-management-catalog-create/catalog-add-resources.png)
+ ![Screenshot that shows the Add resources to a catalog pane.](./media/entitlement-management-catalog-create/catalog-add-resources.png)
-1. When finished, click **Add**.
+1. When you're finished, select **Add**.
These resources can now be included in access packages within the catalog.
-### Add resource attributes (Preview) in the catalog
+### Add resource attributes (preview) in the catalog
-Attributes are required fields that requestors will be asked to answer before submitting their access request. Their answers for these attributes will be shown to approvers and also stamped on the user object in Azure Active Directory.
+Attributes are required fields that requestors will be asked to answer before they submit their access request. Their answers for these attributes will be shown to approvers and also stamped on the user object in Azure AD.
> [!NOTE]
->All attributes set up on a resource would require an answer before a request for an access package containing that resource can be submitted. If requestors don't provide an answer, there request won't be processed.
+>All attributes set up on a resource require an answer before a request for an access package containing that resource can be submitted. If requestors don't provide an answer, their request won't be processed.
-To require attributes for access requests, use the following steps:
+To require attributes for access requests:
-1. Click **Resources** in the left menu, and a list of resources in the catalog will appear.
+1. Select **Resources** on the left menu, and a list of resources in the catalog appears.
-1. Click on the ellipses next to the resource you want to add attributes, then select **Require attributes (Preview)**.
+1. Select the ellipsis next to the resource where you want to add attributes, and then select **Require attributes (Preview)**.
- ![Add resources - select require attributes](./media/entitlement-management-catalog-create/resources-require-attributes.png)
+ ![Screenshot that shows selecting Require attributes (Preview).](./media/entitlement-management-catalog-create/resources-require-attributes.png)
1. Select the attribute type:
- 1. **Built-in**: includes Azure Active Directory user profile attributes.
- 1. **Directory schema extension**: provides a way to store additional data in Azure Active Directory on user objects and other directory objects. This includes groups, tenant details, and service principals. Only extension attributes on user objects can be used to send out claims to applications.
- 1. If you choose **Built-in**, you can choose an attribute from the dropdown list. If you choose **Directory schema extension**, you can enter the attribute name in the textbox.
+ 1. **Built-in** includes Azure AD user profile attributes.
+ 1. **Directory schema extension** provides a way to store more data in Azure AD on user objects and other directory objects. This includes groups, tenant details, and service principals. Only extension attributes on user objects can be used to send out claims to applications.
+1. If you chose **Built-in**, select an attribute from the dropdown list. If you chose **Directory schema extension**, enter the attribute name in the text box.
> [!NOTE]
- > The User.mobilePhone attribute can be updated only for non-administrator users. Learn more [here](/graph/permissions-reference#remarks-5).
+ > The User.mobilePhone attribute can be updated only for non-administrator users. Learn more at [this website](/graph/permissions-reference#remarks-5).
-1. Select the Answer format in which you would like requestors to answer. Answer formats include: **short text**, **multiple choice**, and **long text**.
+1. Select the answer format you want requestors to use for their answer. Answer formats include **short text**, **multiple choice**, and **long text**.
-1. If selecting multiple choice, click on the **Edit and localize** button to configure the answer options.
- 1. After selecting Edit and localize, the **View/edit question** pane will open.
- 1. Type in the response options you wish to give the requestor when answering the question in the **Answer values** boxes.
- 1. Select the language the for the response option. You can localize response options if you choose additional languages.
- 1. Type in as many responses as you need then click **Save**.
+1. If you select multiple choice, select **Edit and localize** to configure the answer options.
+ 1. In the **View/edit question** pane that appears, enter the response options you want to give the requestor when they answer the question in the **Answer values** boxes.
+ 1. Select the language for the response option. You can localize response options if you choose more languages.
+ 1. Enter as many responses as you need, and then select **Save**.
1. If you want the attribute value to be editable during direct assignments and self-service requests, select **Yes**.

> [!NOTE]
- > ![Add resources - add attributes - make attributes editable](./media/entitlement-management-catalog-create/attributes-are-editable.png)
- > - If you select **No** in Attribute value is editable field, and the attribute value **is empty**, users will have the ability to enter the value of that attribute. Once saved, the value will no longer be editable.
- > - If you select **No** in Attribute value is editable field, and the attribute value **is not empty**, then users will not be able to edit the pre-existing value, both during direct assignments and during self-service requests.
+ > ![Screenshot that shows making attributes editable.](./media/entitlement-management-catalog-create/attributes-are-editable.png)
+ > - If you select **No** in the **Attribute value is editable** box and the attribute value *is empty*, users can enter the value of that attribute. After saving, the value can't be edited.
+ > - If you select **No** in the **Attribute value is editable** box and the attribute value *isn't empty*, users can't edit the preexisting value during direct assignments and self-service requests.
- ![Add resources - add attributes - questions](./media/entitlement-management-catalog-create/add-attributes-questions.png)
+ ![Screenshot that shows adding localizations.](./media/entitlement-management-catalog-create/add-attributes-questions.png)
-1. If you would like to add localization, click **Add localization**.
+1. If you want to add localization, select **Add localization**.
- 1. Once in the **Add localizations for questions** pane, select the language code for the language in which you want to localize the question related to the selected attribute.
- 1. In the language you configured, type the question in the **Localized Text** box.
- 1. Once you've added all of the localizations needed, click **Save**.
+ 1. In the **Add localizations for question** pane, select the language code for the language in which you want to localize the question related to the selected attribute.
+ 1. In the language you configured, enter the question in the **Localized Text** box.
+ 1. After you add all the localizations you need, select **Save**.
- ![Add resources - add attributes - localization](./media/entitlement-management-catalog-create/attributes-add-localization.png)
+ ![Screenshot that shows saving the localizations.](./media/entitlement-management-catalog-create/attributes-add-localization.png)
-1. Once all attribute information is completed on the **Require attributes (Preview)** page, click **Save**.
+1. After all attribute information is completed on the **Require attributes (Preview)** page, select **Save**.
-### Add a Multi-geo SharePoint Site
+### Add a Multi-Geo SharePoint site
-1. If you have [Multi-Geo](/microsoft-365/enterprise/multi-geo-capabilities-in-onedrive-and-sharepoint-online-in-microsoft-365) enabled for SharePoint, select the environment you would like to select sites from.
+1. If you have [Multi-Geo](/microsoft-365/enterprise/multi-geo-capabilities-in-onedrive-and-sharepoint-online-in-microsoft-365) enabled for SharePoint, select the environment you want to select sites from.
- :::image type="content" source="media/entitlement-management-catalog-create/sharepoint-multi-geo-select.png" alt-text="Access package - Add resource roles - Select SharePoint Multi-geo sites":::
+ :::image type="content" source="media/entitlement-management-catalog-create/sharepoint-multi-geo-select.png" alt-text="Screenshot that shows the Select SharePoint Online sites pane.":::
-1. Then select the sites you would like to be added to the catalog.
+1. Then select the sites you want to be added to the catalog.
-### Adding a resource to a catalog programmatically
+### Add a resource to a catalog programmatically
-You can also add a resource to a catalog using Microsoft Graph. A user in an appropriate role, or a catalog and resource owner, with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission can call the API to [create an accessPackageResourceRequest](/graph/api/accesspackageresourcerequest-post?view=graph-rest-beta&preserve-view=true). An application with application permissions cannot yet programmatically add a resource without a user context at the time of the request, however.
+You can also add a resource to a catalog by using Microsoft Graph. A user in an appropriate role, or a catalog and resource owner, with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission can call the API to [create an accessPackageResourceRequest](/graph/api/accesspackageresourcerequest-post?view=graph-rest-beta&preserve-view=true). An application with application permissions can't yet programmatically add a resource without a user context at the time of the request, however.
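As a sketch of what that request can look like when sent from PowerShell (the catalog and group IDs are placeholders, and the payload shape follows the beta `accessPackageResourceRequest` resource linked above):

```powershell
# Sketch: ask entitlement management to add an Azure AD group to a catalog.
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"

$payload = @{
    catalogId   = "<catalog-id>"              # placeholder
    requestType = "AdminAdd"
    accessPackageResource = @{
        originId     = "<group-object-id>"    # placeholder
        originSystem = "AadGroup"
    }
} | ConvertTo-Json -Depth 5

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackageResourceRequests" `
    -Body $payload
```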
## Remove resources from a catalog
-You can remove resources from a catalog. A resource can only be removed from a catalog if it isn't being used in any of the catalog's access packages.
+You can remove resources from a catalog. A resource can be removed from a catalog only if it isn't being used in any of the catalog's access packages.
-**Prerequisite role:** See [Required roles to add resources to a catalog](entitlement-management-delegate.md#required-roles-to-add-resources-to-a-catalog)
+**Prerequisite roles:** See [Required roles to add resources to a catalog](entitlement-management-delegate.md#required-roles-to-add-resources-to-a-catalog).
-1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
+To remove resources from a catalog:
-1. In the left menu, click **Catalogs** and then open the catalog you want to remove resources from.
+1. In the Azure portal, select **Azure Active Directory** > **Identity Governance**.
-1. In the left menu, click **Resources**.
+1. On the left menu, select **Catalogs** and then open the catalog you want to remove resources from.
-1. Select the resources you want to remove.
+1. On the left menu, select **Resources**.
-1. Click **Remove** (or click the ellipsis (**...**) and then click **Remove resource**).
+1. Select the resources you want to remove.
+1. Select **Remove**. Optionally, select the ellipsis (**...**) and then select **Remove resource**.
-## Add additional catalog owners
+## Add more catalog owners
-The user that created a catalog becomes the first catalog owner. To delegate management of a catalog, you add users to the catalog owner role. This helps share the catalog management responsibilities.
+The user who created a catalog becomes the first catalog owner. To delegate management of a catalog, add users to the catalog owner role. Adding more catalog owners helps to share the catalog management responsibilities.
-Follow these steps to assign a user to the catalog owner role:
+**Prerequisite roles:** Global administrator, Identity Governance administrator, User administrator, or Catalog owner
-**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, or Catalog owner
+To assign a user to the catalog owner role:
-1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
+1. In the Azure portal, select **Azure Active Directory** > **Identity Governance**.
-1. In the left menu, click **Catalogs** and then open the catalog you want to add administrators to.
+1. On the left menu, select **Catalogs** and then open the catalog you want to add administrators to.
-1. In the left menu, click **Roles and administrators**.
+1. On the left menu, select **Roles and administrators**.
- ![Catalogs roles and administrators](./media/entitlement-management-shared/catalog-roles-administrators.png)
+ ![Screenshot that shows catalog roles and administrators.](./media/entitlement-management-shared/catalog-roles-administrators.png)
-1. Click **Add owners** to select the members for these roles.
+1. Select **Add owners** to select the members for these roles.
1. Click **Select** to add these members.
Follow these steps to assign a user to the catalog owner role:
You can edit the name and description for a catalog. Users see this information in an access package's details.
-**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, or Catalog owner
+**Prerequisite roles:** Global administrator, Identity Governance administrator, User administrator, or Catalog owner
-1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
+To edit a catalog:
-1. In the left menu, click **Catalogs** and then open the catalog you want to edit.
+1. In the Azure portal, select **Azure Active Directory** > **Identity Governance**.
-1. On the catalog's **Overview** page, click **Edit**.
+1. On the left menu, select **Catalogs** and then open the catalog you want to edit.
+
+1. On the catalog's **Overview** page, select **Edit**.
1. Edit the catalog's name, description, or enabled settings.
- ![Edit catalog settings](./media/entitlement-management-shared/catalog-edit.png)
+ ![Screenshot that shows editing catalog settings.](./media/entitlement-management-shared/catalog-edit.png)
-1. Click **Save**.
+1. Select **Save**.
## Delete a catalog

You can delete a catalog, but only if it doesn't have any access packages.
-**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, or Catalog owner
+**Prerequisite roles:** Global administrator, Identity Governance administrator, User administrator, or Catalog owner
+
+To delete a catalog:
-1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
+1. In the Azure portal, select **Azure Active Directory** > **Identity Governance**.
-1. In the left menu, click **Catalogs** and then open the catalog you want to delete.
+1. On the left menu, select **Catalogs** and then open the catalog you want to delete.
-1. On the catalog's **Overview**, click **Delete**.
+1. On the catalog's **Overview** page, select **Delete**.
-1. In the message box that appears, click **Yes**.
+1. On the message box that appears, select **Yes**.
-### Deleting a catalog programmatically
+### Delete a catalog programmatically
-You can also delete a catalog using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission can call the API to [delete an accessPackageCatalog](/graph/api/accesspackagecatalog-delete?view=graph-rest-beta&preserve-view=true).
+You can also delete a catalog by using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission can call the API to [delete an accessPackageCatalog](/graph/api/accesspackagecatalog-delete?view=graph-rest-beta&preserve-view=true).
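A minimal sketch of that call from PowerShell, with a placeholder catalog ID:

```powershell
# Sketch: delete a catalog by ID through the beta API.
# The call fails if the catalog still contains access packages.
Invoke-MgGraphRequest -Method DELETE `
    -Uri "https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackageCatalogs/<catalog-id>"
```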
## Next steps

-- [Delegate access governance to access package managers](entitlement-management-delegate-managers.md)
+[Delegate access governance to access package managers](entitlement-management-delegate-managers.md)
active-directory Entitlement Management Delegate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-delegate.md
The following table lists the tasks that the entitlement management roles can do
| [Add a connected organization](entitlement-management-organization.md) | :heavy_check_mark: | | | | |
| [Create a new catalog](entitlement-management-catalog-create.md) | :heavy_check_mark: | :heavy_check_mark: | | | |
| [Add a resource to a catalog](entitlement-management-catalog-create.md#add-resources-to-a-catalog) | :heavy_check_mark: | | :heavy_check_mark: | | |
-| [Add a catalog owner](entitlement-management-catalog-create.md#add-additional-catalog-owners) | :heavy_check_mark: | | :heavy_check_mark: | | |
+| [Add a catalog owner](entitlement-management-catalog-create.md#add-more-catalog-owners) | :heavy_check_mark: | | :heavy_check_mark: | | |
| [Edit a catalog](entitlement-management-catalog-create.md#edit-a-catalog) | :heavy_check_mark: | | :heavy_check_mark: | | |
| [Delete a catalog](entitlement-management-catalog-create.md#delete-a-catalog) | :heavy_check_mark: | | :heavy_check_mark: | | |
| [Delegate to an access package manager](entitlement-management-delegate-managers.md) | :heavy_check_mark: | | :heavy_check_mark: | | |
active-directory Entitlement Management Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-scenarios.md
There are several ways that you can configure entitlement management for your or
### Catalog owner: Delegate management of resources
-1. [Add co-owners to the catalog](entitlement-management-catalog-create.md#add-additional-catalog-owners)
+1. [Add co-owners to the catalog](entitlement-management-catalog-create.md#add-more-catalog-owners)
1. [Add resources to the catalog](entitlement-management-catalog-create.md#add-resources-to-a-catalog)

### Catalog owner: Delegate management of access packages
active-directory How To Connect Sync Feature Directory Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sync-feature-directory-extensions.md
The installation shows the following attributes, which are valid candidates:
* Single-valued attributes: String, Boolean, Integer, Binary
* Multi-valued attributes: String, Binary
-
->[!NOTE]
-> After Azure AD Connect synchronized multi-valued Active Directory attribute to Azure AD as a multi-valued attribute extension, it is possible to include attribute to the SAML claim. But, it is not possible to consume this data through API call.
+> [!NOTE]
+> Not all features in Azure Active Directory support multi-valued extension attributes. Refer to the documentation of the feature in which you plan to use these attributes to confirm they're supported.
The list of attributes is read from the schema cache that's created during installation of Azure AD Connect. If you have extended the Active Directory schema with additional attributes, you must [refresh the schema](how-to-connect-installation-wizard.md#refresh-directory-schema) before these new attributes are visible.
active-directory Secure Hybrid Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/secure-hybrid-access.md
Last updated 8/17/2021
-
# Secure hybrid access: Secure legacy apps with Azure Active Directory

You can now protect your on-premises and cloud legacy authentication applications by connecting them to Azure Active Directory (AD) with:
You can now protect your on-premises and cloud legacy authentication application
- [Secure hybrid access partners](#secure-hybrid-access-through-azure-ad-partner-integrations)
-You can bridge the gap and strengthen your security posture across all applications with Azure AD capabilities like [Azure AD Conditional Access](https://docs.microsoft.com/azure/active-directory/conditional-access/overview) and [Azure AD Identity Protection](https://docs.microsoft.com/azure/active-directory/identity-protection/overview-identity-protection). By having Azure AD as an Identity provider (IDP), you can use modern authentication and authorization methods like [single sign-on (SSO)](https://docs.microsoft.com/azure/active-directory/manage-apps/what-is-single-sign-on) and [multifactor authentication (MFA)](https://docs.microsoft.com/azure/active-directory/authentication/concept-mfa-howitworks) to secure your on-premises legacy applications.
+You can bridge the gap and strengthen your security posture across all applications with Azure AD capabilities like [Azure AD Conditional Access](../conditional-access/overview.md) and [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md). By having Azure AD as an Identity provider (IDP), you can use modern authentication and authorization methods like [single sign-on (SSO)](what-is-single-sign-on.md) and [multifactor authentication (MFA)](../authentication/concept-mfa-howitworks.md) to secure your on-premises legacy applications.
## Secure hybrid access through Azure AD Application Proxy
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/whats-new-docs.md
Title: "What's new in Azure Active Directory application management" description: "New and updated documentation for the Azure Active Directory application management." Previously updated : 08/03/2021 Last updated : 09/08/2021
reviewer: napuri
Welcome to what's new in Azure Active Directory application management documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the application management service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## August 2021
+
+### New articles
+
+- [Protecting against consent phishing](protect-against-consent-phishing.md)
+
+### Updated articles
+
+- [Configure permission classifications](configure-permission-classifications.md)
+- [Configure group owner consent to apps accessing group data](configure-user-consent-groups.md)
+- [Take action on over privileged or suspicious applications in Azure Active Directory](manage-application-permissions.md)
+- [Managing consent to applications and evaluating consent requests](manage-consent-requests.md)
+- [Grant tenant-wide admin consent to an application](grant-admin-consent.md)
+- [Quickstart: Add an application to your tenant](add-application-portal.md)
+- [Assign users and groups to an enterprise application](assign-user-or-group-access-portal.md)
+- [Managing access to apps](what-is-access-management.md)
+- [Azure Active Directory application management: What's new](whats-new-docs.md)
+- [Plan Azure Active Directory My Apps configuration](my-apps-deployment-plan.md)
+- [Advanced certificate signing options in a SAML token](certificate-signing-options.md)
+- [Create collections on the My Apps portal](access-panel-collections.md)
+
+
## July 2021

### Updated articles
active-directory Groups Role Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/groups-role-settings.md
Role settings are the default settings that are applied to group owner and group
Follow these steps to open the settings for an Azure privileged access group role.
-1. Sign in to [Azure portal](https://portal.azure.com/) with a user in the [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator) role.
+1. Sign in to [Azure portal](https://portal.azure.com/) with a user in the [Global Administrator](../roles/permissions-reference.md#global-administrator) role or who is assigned as the group owner.
1. Open **Azure AD Privileged Identity Management**.
active-directory Pim How To Activate Role https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-how-to-activate-role.md
When you need to assume an Azure AD role, you can request activation by opening
![Azure AD roles - activation page contains duration and scope](./media/pim-how-to-activate-role/activate-page.png)
-1. Select **Additional verification required**"** and follow the instructions to provide additional security verification. You are required to authenticate only once per session.
+1. Select **Additional verification required** and follow the instructions to provide security verification. You are required to authenticate only once per session.
![Screen to provide security verification such as a PIN code](./media/pim-resource-roles-activate-your-roles/resources-mfa-enter-code.png)
If you do not require activation of a role that requires approval, you can cance
![My request list with Cancel action highlighted](./media/pim-resource-roles-activate-your-roles/resources-my-requests-cancel.png)
-## Troubleshoot for new version
+## Troubleshoot portal delay
-### Permissions are not granted after activating a role
+### Permissions aren't granted after activating a role
When you activate a role in Privileged Identity Management, the activation may not instantly propagate to all portals that require the privileged role. Sometimes, even if the change is propagated, web caching in a portal may result in the change not taking effect immediately. If your activation is delayed, sign out of the portal you are trying to perform the action and then sign back in. In the Azure portal, PIM signs you out and back in automatically.
active-directory Pim How To Add Role To User https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-how-to-add-role-to-user.md
Previously updated : 06/03/2021 Last updated : 09/01/2021
Follow these steps to make a user eligible for an Azure AD admin role.
1. To specify a specific assignment duration, use the start and end date and time boxes. When finished, select **Assign** to create the new role assignment.
+ - **Permanent** assignments have no expiration date. Use this option for permanent workers who frequently need the role permissions.
+
+ - **Time-bound** assignments will expire at the end of a specified period. Use this option with temporary or contract workers, for example, whose project end date and time are known.
+
![Memberships settings - date and time](./media/pim-how-to-add-role-to-user/start-and-end-dates.png)

1. After the role is assigned, an assignment status notification is displayed.
For certain roles, the scope of the granted permissions can be restricted to a s
For more information about creating administrative units, see [Add and remove administrative units](../roles/admin-units-manage.md).
+## Assign a role using Graph API
+
+For permissions required to use the PIM API, see [Understand the Privileged Identity Management APIs](pim-apis.md).
+
+### Eligible with no end date
+
+The following is a sample HTTP request to create an eligible assignment with no end date. For details on the API commands including samples such as C# and JavaScript, see [Create unifiedRoleEligibilityScheduleRequest](/graph/api/unifiedroleeligibilityschedulerequest-post-unifiedroleeligibilityschedulerequests?view=graph-rest-beta&tabs=http&preserve-view=true).
+
+#### HTTP request
+
+````HTTP
+POST https://graph.microsoft.com/beta/rolemanagement/directory/roleEligibilityScheduleRequests
+
+{
+  "action": "AdminAssign",
+  "justification": "abcde",
+  "directoryScopeId": "/",
+  "principalId": "d96ea738-3b95-4ae7-9e19-78a083066d5b",
+  "roleDefinitionId": "88d8e3e3-8f55-4a1e-953a-9b9898b8876b",
+  "scheduleInfo": {
+    "startDateTime": "2021-07-15T19:15:08.941Z",
+    "expiration": {
+      "type": "NoExpiration"
+    }
+  }
+}
+````
+
+#### HTTP response
+
+The following is an example of the response. The response object shown here might be shortened for readability.
+
+````HTTP
+{
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#roleManagement/directory/roleEligibilityScheduleRequests/$entity",
+ "id": "bd3cb7fc-cf0e-4590-a668-de8d0cc8c7e6",
+ "status": "Provisioned",
+ "createdDateTime": "2021-07-15T19:47:41.0939004Z",
+ "completedDateTime": "2021-07-15T19:47:42.4376681Z",
+ "approvalId": null,
+ "customData": null,
+ "action": "AdminAssign",
+ "principalId": "d96ea738-3b95-4ae7-9e19-78a083066d5b",
+ "roleDefinitionId": "88d8e3e3-8f55-4a1e-953a-9b9898b8876b",
+ "directoryScopeId": "/",
+ "appScopeId": null,
+ "isValidationOnly": false,
+ "targetScheduleId": "bd3cb7fc-cf0e-4590-a668-de8d0cc8c7e6",
+ "justification": "test",
+ "createdBy": {
+ "application": null,
+ "device": null,
+ "user": {
+ "displayName": null,
+ "id": "5d851eeb-b593-4d43-a78d-c8bd2f5144d2"
+ }
+ },
+ "scheduleInfo": {
+ "startDateTime": "2021-07-15T19:47:42.4376681Z",
+ "recurrence": null,
+ "expiration": {
+ "type": "noExpiration",
+ "endDateTime": null,
+ "duration": null
+ }
+ },
+ "ticketInfo": {
+ "ticketNumber": null,
+ "ticketSystem": null
+ }
+}
+````
+
+### Active and time-bound
+
+The following is a sample HTTP request to create an active assignment that's time-bound. For details on the API commands, including samples such as C# and JavaScript, see [Create unifiedRoleAssignmentScheduleRequest](/graph/api/unifiedroleassignmentschedulerequest-post-unifiedroleassignmentschedulerequests?view=graph-rest-beta&tabs=http&preserve-view=true).
+
+#### HTTP request
+
+````HTTP
+POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignmentScheduleRequests
+
+{
+ "action": "AdminAssign",
+ "justification": "abcde",
+ "directoryScopeId": "/",
+ "principalId": "d96ea738-3b95-4ae7-9e19-78a083066d5b",
+ "roleDefinitionId": "cf1c38e5-3621-4004-a7cb-879624dced7c",
+ "scheduleInfo": {
+ "startDateTime": "2021-07-15T19:15:08.941Z",
+ "expiration": {
+ "type": "AfterDuration",
+ "duration": "PT3H"
+ }
+ }
+}
+````
+
+#### HTTP response
+
+The following is an example of the response. The response object shown here might be shortened for readability.
+
+````HTTP
+{
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#roleManagement/directory/roleAssignmentScheduleRequests/$entity",
+ "id": "5ea884f1-8a4d-4c75-b085-c509b93cd582",
+ "status": "Provisioned",
+ "createdDateTime": "2021-07-15T19:15:09.7093491Z",
+ "completedDateTime": "2021-07-15T19:15:11.4437343Z",
+ "approvalId": null,
+ "customData": null,
+ "action": "AdminAssign",
+ "principalId": "d96ea738-3b95-4ae7-9e19-78a083066d5b",
+ "roleDefinitionId": "cf1c38e5-3621-4004-a7cb-879624dced7c",
+ "directoryScopeId": "/",
+ "appScopeId": null,
+ "isValidationOnly": false,
+ "targetScheduleId": "5ea884f1-8a4d-4c75-b085-c509b93cd582",
+ "justification": "test",
+ "createdBy": {
+ "application": null,
+ "device": null,
+ "user": {
+ "displayName": null,
+ "id": "5d851eeb-b593-4d43-a78d-c8bd2f5144d2"
+ }
+ },
+ "scheduleInfo": {
+ "startDateTime": "2021-07-15T19:15:11.4437343Z",
+ "recurrence": null,
+ "expiration": {
+ "type": "afterDuration",
+ "endDateTime": null,
+ "duration": "PT3H"
+ }
+ },
+ "ticketInfo": {
+ "ticketNumber": null,
+ "ticketSystem": null
+ }
+}
+````
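If you prefer a script over raw HTTP, the same time-bound request can be sent with `Invoke-MgGraphRequest` from the Microsoft Graph PowerShell SDK. A sketch with placeholder IDs; the `RoleManagement.ReadWrite.Directory` scope is an assumption based on the permissions the role management APIs normally require:

```powershell
# Sketch: create the same active, time-bound assignment via the Graph SDK.
Connect-MgGraph -Scopes "RoleManagement.ReadWrite.Directory"

$body = @{
    action           = "AdminAssign"
    justification    = "abcde"
    directoryScopeId = "/"
    principalId      = "<principal-object-id>"   # placeholder
    roleDefinitionId = "<role-definition-id>"    # placeholder
    scheduleInfo     = @{
        startDateTime = (Get-Date).ToUniversalTime().ToString("o")
        expiration    = @{ type = "AfterDuration"; duration = "PT3H" }
    }
} | ConvertTo-Json -Depth 5

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/beta/roleManagement/directory/roleAssignmentScheduleRequests" `
    -Body $body
```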
## Update or remove an existing role assignment

Follow these steps to update or remove an existing role assignment. **Azure AD P2 licensed customers only**: Don't assign a group as Active to a role through both Azure AD and Privileged Identity Management (PIM). For a detailed explanation, see [Known issues](../roles/groups-concept.md#known-issues).
Follow these steps to update or remove an existing role assignment. **Azure AD P
1. Select **Update** or **Remove** to update or remove the role assignment.
+## Remove eligible assignment via API
+
+### Request
+
+````HTTP
+POST https://graph.microsoft.com/beta/roleManagement/directory/roleEligibilityScheduleRequests
+
+{
+ "action": "AdminRemove",
+ "justification": "abcde",
+ "directoryScopeId": "/",
+ "principalId": "d96ea738-3b95-4ae7-9e19-78a083066d5b",
+ "roleDefinitionId": "88d8e3e3-8f55-4a1e-953a-9b9898b8876b"
+}
+````
+
+### Response
+
+````HTTP
+{
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#roleManagement/directory/roleEligibilityScheduleRequests/$entity",
+ "id": "fc7bb2ca-b505-4ca7-ad2a-576d152633de",
+ "status": "Revoked",
+ "createdDateTime": "2021-07-15T20:23:23.85453Z",
+ "completedDateTime": null,
+ "approvalId": null,
+ "customData": null,
+ "action": "AdminRemove",
+ "principalId": "d96ea738-3b95-4ae7-9e19-78a083066d5b",
+ "roleDefinitionId": "88d8e3e3-8f55-4a1e-953a-9b9898b8876b",
+ "directoryScopeId": "/",
+ "appScopeId": null,
+ "isValidationOnly": false,
+ "targetScheduleId": null,
+ "justification": "test",
+ "scheduleInfo": null,
+ "createdBy": {
+ "application": null,
+ "device": null,
+ "user": {
+ "displayName": null,
+ "id": "5d851eeb-b593-4d43-a78d-c8bd2f5144d2"
+ }
+ },
+ "ticketInfo": {
+ "ticketNumber": null,
+ "ticketSystem": null
+ }
+}
+````
+ ## Next steps - [Configure Azure AD admin role settings in Privileged Identity Management](pim-how-to-change-default-settings.md)
active-directory Pim Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-roles.md
Previously updated : 05/11/2020 Last updated : 09/01/2021
For more information about the classic subscription administrator roles, see [Cl
We support all Microsoft 365 roles in the Azure AD Roles and Administrators portal experience, such as Exchange Administrator and SharePoint Administrator, but we don't support specific roles within Exchange RBAC or SharePoint RBAC. For more information about these Microsoft 365 services, see [Microsoft 365 admin roles](/office365/admin/add-users/about-admin-roles).

> [!NOTE]
-> Eligible users for the SharePoint administrator role, the Device administrator role, and any roles trying to access the Microsoft Security and Compliance Center might experience delays of up to a few hours after activating their role. We are working with those teams to fix the issues.
+> - Eligible users for the SharePoint administrator role, the Device administrator role, and any roles trying to access the Microsoft Security and Compliance Center might experience delays of up to a few hours after activating their role. We are working with those teams to fix the issues.
+> - For information about delays activating the Azure AD Joined Device Local Administrator role, see [How to manage the local administrators group on Azure AD joined devices](/azure/active-directory/devices/assign-local-admin#manage-the-device-administrator-role).
## Next steps
active-directory Quickstart Access Log With Graph Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/quickstart-access-log-with-graph-api.md
The goal of this step is to create a record of a failed sign-in in the Azure AD
This section provides you with the steps to get information about your sign-in using the Graph API.
- ![Graph explorer query](./media/quickstart-access-log-with-graph-api/graph-explorer-query.png)
+ ![Microsoft Graph Explorer query](./media/quickstart-access-log-with-graph-api/graph-explorer-query.png)
**To review the failed sign-in:**
-1. Navigate to the [Microsoft Graph explorer](https://developer.microsoft.com/en-us/graph/graph-explorer).
+1. Navigate to [Microsoft Graph Explorer](https://developer.microsoft.com/en-us/graph/graph-explorer).
2. Sign in to your tenant as a global administrator.
- ![Microsoft Graph explorer authentication](./media/quickstart-access-log-with-graph-api/graph-explorer-authentication.png)
+ ![Microsoft Graph Explorer authentication](./media/quickstart-access-log-with-graph-api/graph-explorer-authentication.png)
3. In the **HTTP verb drop-down list**, select **GET**.
This section provides you with the steps to get information about your sign-in u
Review the outcome of your query.
- ![Microsoft Graph explorer response preview](./media/quickstart-access-log-with-graph-api/response-preview.png)
+ ![Microsoft Graph Explorer response preview](./media/quickstart-access-log-with-graph-api/response-preview.png)
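The same sign-in query can also be run outside Graph Explorer. A sketch in PowerShell, assuming the Microsoft Graph PowerShell SDK and the `AuditLog.Read.All` permission; the display name filter value is a placeholder:

```powershell
# Sketch: pull recent sign-in events for one user from the Azure AD sign-in log.
Connect-MgGraph -Scopes "AuditLog.Read.All"

Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/auditLogs/signIns?`$filter=userDisplayName eq '<user-display-name>'&`$top=10"
```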
## Clean up resources
active-directory Custom Assign Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/custom-assign-powershell.md
Previously updated : 05/14/2021 Last updated : 09/07/2021
$roleDefinition = Get-AzureADMSRoleDefinition -Filter "displayName eq 'Applicati
# Get app registration and construct resource scope for assignment. $appRegistration = Get-AzureADApplication -Filter "displayName eq 'f/128 Filter Photos'"
-$resourceScope = '/' + $appRegistration.objectId
+$directoryScope = '/' + $appRegistration.objectId
# Create a scoped role assignment
-$roleAssignment = New-AzureADMSRoleAssignment -ResourceScope $resourceScope -RoleDefinitionId $roleDefinition.Id -PrincipalId $user.objectId
+$roleAssignment = New-AzureADMSRoleAssignment -DirectoryScopeId $directoryScope -RoleDefinitionId $roleDefinition.Id -PrincipalId $user.objectId
```

To assign the role to a service principal instead of a user, use the [Get-AzureADMSServicePrincipal](/powershell/module/azuread/get-azureadserviceprincipal) cmdlet.
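A sketch of that variant, reusing the `$roleDefinition` and `$directoryScope` values from the block above. The service principal display name is hypothetical, and the `.Id` property is an assumption (some AzureAD cmdlet versions expose `ObjectId` instead):

```powershell
# Sketch: assign the custom role to a service principal rather than a user.
$servicePrincipal = Get-AzureADMSServicePrincipal -Filter "displayName eq 'My Automation App'"

$roleAssignment = New-AzureADMSRoleAssignment -DirectoryScopeId $directoryScope `
    -RoleDefinitionId $roleDefinition.Id -PrincipalId $servicePrincipal.Id
```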
$roleDefinition = Get-AzureADMSRoleDefinition -Filter "displayName eq 'Applicati
# Get app registration and construct resource scope for assignment. $appRegistration = Get-AzureADApplication -Filter "displayName eq 'f/128 Filter Photos'"
-$resourceScope = '/' + $appRegistration.objectId
+$directoryScope = '/' + $appRegistration.objectId
# Create a scoped role assignment
-$roleAssignment = New-AzureADMSRoleAssignment -ResourceScope $resourceScope -RoleDefinitionId $roleDefinition.Id -PrincipalId $user.objectId
+$roleAssignment = New-AzureADMSRoleAssignment -DirectoryScopeId $directoryScope -RoleDefinitionId $roleDefinition.Id -PrincipalId $user.objectId
```

### Read and list role assignments
active-directory Delegate By Task https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/delegate-by-task.md
In this article, you can find the information needed to restrict a user's admini
> [!div class="mx-tableFixed"]
> | Task | Least privileged role | Additional roles |
> | - | - | - |
-> | Add resources to a catalog | Identity Governance Administrator | With entitlement management, you can delegate this task to the catalog owner ([see documentation](../governance/entitlement-management-catalog-create.md#add-additional-catalog-owners)) |
+> | Add resources to a catalog | Identity Governance Administrator | With entitlement management, you can delegate this task to the catalog owner ([see documentation](../governance/entitlement-management-catalog-create.md#add-more-catalog-owners)) |
> | Add SharePoint Online sites to catalog | SharePoint Administrator | |

## Groups
active-directory View Assignments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/view-assignments.md
Previously updated : 05/14/2021 Last updated : 09/07/2021
This section describes viewing assignments of a role with organization-wide scop
Example of listing the role assignments. ``` PowerShell
-# Fetch list of all directory roles with object ID
-Get-AzureADDirectoryRole
+# Fetch list of all directory roles with template ID
+Get-AzureADMSRoleDefinition
# Fetch a specific directory role by ID
-$role = Get-AzureADDirectoryRole -ObjectId "5b3fe201-fa8b-4144-b6f1-875829ff7543"
+$role = Get-AzureADMSRoleDefinition -Id "5b3fe201-fa8b-4144-b6f1-875829ff7543"
-# Fetch role membership for a role
-Get-AzureADDirectoryRoleMember -ObjectId $role.ObjectId | Get-AzureADUser
+# Fetch membership for a role
+Get-AzureADMSRoleAssignment -Filter "roleDefinitionId eq '$($role.Id)'"
```

## Microsoft Graph API
HTTP request to get a role assignment for a given role definition.
GET

``` HTTP
-https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments&$filter=roleDefinitionId eq '<object-id-or-template-id-of-role-definition>'
+https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments&$filter=roleDefinitionId eq '<template-id-of-role-definition>'
```

Response
HTTP/1.1 200 OK
"id":"CtRxNqwabEKgwaOCHr2CGJIiSDKQoTVJrLE9etXyrY0-1", "principalId":"ab2e1023-bddc-4038-9ac1-ad4843e7e539", "roleDefinitionId":"3671d40a-1aac-426c-a0c1-a3821ebd8218",
- "resourceScopes":["/"]
+ "directoryScopeId":"/"
}
```
active-directory Nist Authenticator Assurance Level 3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/standards/nist-authenticator-assurance-level-3.md
Microsoft offers authentication methods that enable you to meet required NIST au
| FIDO2 security key<br>or<br> Smart card (Active Directory Federation Services [AD FS])<br>or<br>Windows Hello for Business with hardware TPM| Multifactor cryptographic hardware | | **Additional methods**| | | Password<br> and<br>(Hybrid Azure AD joined with hardware TPM <br>or <br> Azure AD joined with hardware TPM)| Memorized secret<br>and<br> Single-factor cryptographic hardware |
-| Password <br>and<br>(Single-factor one-time password hardware (from an OTP manufacturer) <br>or<br>Hybrid Azure AD joined with software TPM <br>or <br> Azure AD joined with software TPM <br>or<br> Compliant managed device)| Memorized secret <br>and<br>Single-factor one-time password hardware<br> and<br>Single-factor cryptographic software |
+| Password <br>and<br>Single-factor one-time password hardware (from an OTP manufacturer) <br>and<br>(Hybrid Azure AD joined with software TPM <br>or <br> Azure AD joined with software TPM <br>or<br> Compliant managed device)| Memorized secret <br>and<br>Single-factor one-time password hardware<br> and<br>Single-factor cryptographic software |
### Our recommendations
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/private-clusters.md
az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --lo
az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --load-balancer-sku standard --enable-private-cluster --enable-managed-identity --assign-identity <ResourceId> --private-dns-zone <custom private dns zone ResourceId> --fqdn-subdomain <subdomain-name> ```
-## Create a private AKS cluster with a Public DNS address
+## Create a private AKS cluster with a Public FQDN
Prerequisites:
-* Azure CLI with aks-preview extension 0.5.29 or later.
+* Azure CLI >= 2.28.0 or Azure CLI with aks-preview extension 0.5.29 or later.
* If using ARM or the rest API, the AKS API version must be 2021-05-01 or later. The Public DNS option can be leveraged to simplify routing options for your Private Cluster. ![Public DNS](https://user-images.githubusercontent.com/50749048/124776520-82629600-df0d-11eb-8f6b-71c473b6bd01.png)
-1. By specifying `--enable-public-fqdn` when you provision a private AKS cluster, AKS creates an additional A record for its FQDN in Azure public DNS. The agent nodes still use the A record in the private DNS zone to resolve the private IP address of the private endpoint for communication to the API server.
+1. When you provision a private AKS cluster, AKS by default creates an additional public FQDN and corresponding A record in Azure public DNS. The agent nodes still use the A record in the private DNS zone to resolve the private IP address of the private endpoint for communication to the API server.
-2. If you use both `--enable-public-fqdn` and `--private-dns-zone none`, the cluster will only have a public FQDN. When using this option, no Private DNS Zone is created or used for the name resolution of the FQDN of the API Server. The IP of the API is still private and not publicly routable.
+2. If you use `--private-dns-zone none`, the cluster will only have a public FQDN. When using this option, no Private DNS Zone is created or used for the name resolution of the FQDN of the API Server. The IP of the API is still private and not publicly routable.
+
+3. If you don't want the public FQDN, you can use `--disable-public-fqdn` to disable it (a cluster created with the "none" private DNS zone can't disable the public FQDN).
```azurecli-interactive
-az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --load-balancer-sku standard --enable-private-cluster --enable-managed-identity --assign-identity <ResourceId> --private-dns-zone <private-dns-zone-mode> --enable-public-fqdn
+az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --load-balancer-sku standard --enable-private-cluster --enable-managed-identity --assign-identity <ResourceId> --private-dns-zone <private-dns-zone-mode> --disable-public-fqdn
+az aks update -n <private-cluster-name> -g <private-cluster-resource-group> --disable-public-fqdn
``` ## Options for connecting to the private cluster
api-management Api Management Howto Manage Protocols Ciphers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-manage-protocols-ciphers.md
na Previously updated : 05/29/2019 Last updated : 09/07/2021 # Manage protocols and ciphers in Azure API Management
-Azure API Management supports multiple versions of TLS protocol for both client and backend sides as well as the 3DES cipher.
+Azure API Management supports multiple versions of the Transport Layer Security (TLS) protocol for:
+* Client side
+* Backend side
+
+It also supports the 3DES cipher.
This guide shows you how to manage protocols and ciphers configuration for an Azure API Management instance.
This guide shows you how to manage protocols and ciphers configuration for an Az
## Prerequisites
-To follow the steps in this article, you must have:
-
-* An API Management instance
+* An API Management instance. [Create one if you haven't already](get-started-create-service-instance.md).
## How to manage TLS protocols and 3DES cipher 1. Navigate to your **API Management instance** in the Azure portal.
-2. Select **Protocol settings** from the menu.
-3. Enable or disable desired protocols or ciphers.
-4. Click **Save**. Changes will be applied within an hour.
+1. Scroll to the **Security** section in the side menu.
+1. Under the Security section, select **Protocols + ciphers**.
+1. Enable or disable desired protocols or ciphers.
+1. Click **Save**. Changes will be applied within an hour.
+
+> [!NOTE]
+> Some protocols or cipher suites (like backend-side TLS 1.2) can't be enabled or disabled from the Azure portal. Instead, you'll need to call the REST API. Use the `properties.customProperties` structure in the [Create/Update API Management Service REST API](https://docs.microsoft.com/rest/api/apimanagement/2020-06-01-preview/api-management-service/create-or-update#request-body) article.
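If you need to change one of those portal-hidden settings, a `PATCH` through `az rest` is one way to do it. The following is a sketch, not the documented procedure: the subscription, resource group, and service name are placeholders, and the `customProperties` key shown is illustrative, so take the exact key names from the REST API reference linked above.

```bash
# Sketch: disable the 3DES cipher through the management REST API.
az rest --method patch \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<service-name>?api-version=2020-06-01-preview" \
  --body '{
    "properties": {
      "customProperties": {
        "Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Ciphers.TripleDes168": "false"
      }
    }
  }'
```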
## Next steps
-* Learn more about [TLS (Transport Layer Security)](/dotnet/framework/network-programming/tls).
+* Learn more about [TLS](/dotnet/framework/network-programming/tls).
* Check out more [videos](https://azure.microsoft.com/documentation/videos/index/?services=api-management) about API Management.
app-service Networking Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/networking-features.md
For any given use case, there might be a few ways to solve the problem. Choosing
| Support IP-based SSL needs for your app | App-assigned address | | Support unshared dedicated inbound address for your app | App-assigned address | | Restrict access to your app from a set of well-defined addresses | Access restrictions |
-| Restrict access to your app from resources in a virtual network | Service endpoints </br> ILB ASE </br> Private endpoints |
+| Restrict access to your app from resources in a virtual network | Service endpoints </br> Internal Load Balancer (ILB) ASE </br> Private endpoints |
| Expose your app on a private IP in your virtual network | ILB ASE </br> Private endpoints </br> Private IP for inbound traffic on an Application Gateway instance with service endpoints | | Protect your app with a web application firewall (WAF) | Application Gateway and ILB ASE </br> Application Gateway with private endpoints </br> Application Gateway with service endpoints </br> Azure Front Door with access restrictions | | Load balance traffic to your apps in different regions | Azure Front Door with access restrictions |
An App Service Environment (ASE) is a single-tenant deployment of the Azure App
* Access resources across ExpressRoute. * Expose your apps with a private address in your virtual network. * Access resources across service endpoints.
+* Access resources across private endpoints.
-With an ASE, you don't need to use features like VNet Integration or service endpoints because the ASE is already in your virtual network. If you want to access resources like SQL or Azure Storage over service endpoints, enable service endpoints on the ASE subnet. If you want to access resources in the virtual network, you don't need to do any additional configuration. If you want to access resources across ExpressRoute, you're already in the virtual network and don't need to configure anything on the ASE or the apps in it.
+With an ASE, you don't need to use VNet Integration because the ASE is already in your virtual network. If you want to access resources like SQL or Azure Storage over service endpoints, enable service endpoints on the ASE subnet. If you want to access resources in the virtual network or private endpoints in the virtual network, you don't need to do any additional configuration. If you want to access resources across ExpressRoute, you're already in the virtual network and don't need to configure anything on the ASE or the apps in it.
Because the apps in an ILB ASE can be exposed on a private IP address, you can easily add WAF devices to expose just the apps that you want to the internet and help keep the rest secure. This feature can help make the development of multitier applications easier. Some things aren't currently possible from the multitenant service but are possible from an ASE. Here are some examples:
-* Expose your apps on a private IP address.
-* Help secure all outbound traffic with network controls that aren't a part of your app.
* Host your apps in a single-tenant service. * Scale up to many more instances than are possible in the multitenant service. * Load private CA client certificates for use by your apps with private CA-secured endpoints. * Force TLS 1.1 across all apps hosted in the system without any ability to disable it at the app level.
-* Provide a dedicated outbound address for all the apps in your ASE that aren't shared with customers.
![Diagram that illustrates an ASE in a virtual network.](media/networking-features/app-service-environment.png)
If you scan App Service, you'll find several ports that are exposed for inbound
|-|-| | HTTP/HTTPS | 80, 443 | | Management | 454, 455 |
-| FTP/FTPS | 21, 990, 10001-10020 |
+| FTP/FTPS | 21, 990, 10001-10300 |
| Visual Studio remote debugging | 4020, 4022, 4024 | | Web Deploy service | 8172 | | Infrastructure use | 7654, 1221 |
application-gateway Configuration Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/configuration-infrastructure.md
For this scenario, use NSGs on the Application Gateway subnet. Put the following
**Scenario 1**: UDR for Virtual Appliances
- Any scenario where 0.0.0.0/0 needs to be redirected through any virtual appliance, a hub/spoke virtual network, or on-premise (forced tunneling) isn't supported for V2.
+ Any scenario where 0.0.0.0/0 needs to be redirected through any virtual appliance, a hub/spoke virtual network, or on-premises (forced tunneling) isn't supported for V2.
## Next steps
avere-vfxt Avere Vfxt Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/avere-vfxt/avere-vfxt-deploy.md
The second page of the deployment template allows you to set the cluster size, n
![Second page of the deployment template](media/avere-vfxt-deploy-2.png)
-* **Avere vFXT cluster node count** - Choose the number of nodes in the cluster. The minimum is three nodes and the maximum is twelve.
+* **Avere vFXT cluster node count** - Choose the number of nodes in the cluster. The minimum is three nodes and the maximum is 20.
* **Cluster administration password** - Create the password for cluster administration. This password is used with the username ```admin``` to sign in to the cluster control panel, where you can monitor the cluster and configure cluster settings.
azure-arc Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/connectivity.md
Previously updated : 07/30/2021 Last updated : 09/08/2021
Some Azure-attached services are only available when they can be directly reache
|**Billing telemetry data**|Customer environment -> Azure|Required|No|Indirect or direct|Utilization of database instances must be sent to Azure for billing purposes. | |**Monitoring data and logs**|Customer environment -> Azure|Optional|Maybe depending on data volume (see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/))|Indirect or direct|You may want to send the locally collected monitoring data and logs to Azure Monitor for aggregating data across multiple environments into one place and also to use Azure Monitor services like alerts, using the data in Azure Machine Learning, etc.| |**Azure Role-based Access Control (Azure RBAC)**|Customer environment -> Azure -> Customer Environment|Optional|No|Direct only|If you want to use Azure RBAC, then connectivity must be established with Azure at all times. If you don't want to use Azure RBAC then local Kubernetes RBAC can be used.|
-|**Azure Active Directory (AAD) (Future)**|Customer environment -> Azure -> Customer environment|Optional|Maybe, but you may already be paying for Azure AD|Direct only|If you want to use Azure AD for authentication, then connectivity must be established with Azure at all times. If you don't want to use Azure AD for authentication, you can us Active Directory Federation Services (ADFS) over Active Directory. **Pending availability in directly connected mode**|
+|**Azure Active Directory (AAD) (Future)**|Customer environment -> Azure -> Customer environment|Optional|Maybe, but you may already be paying for Azure AD|Direct only|If you want to use Azure AD for authentication, then connectivity must be established with Azure at all times. If you don't want to use Azure AD for authentication, you can use Active Directory Federation Services (ADFS) over Active Directory. **Pending availability in directly connected mode**|
|**Backup and restore**|Customer environment -> Customer environment|Required|No|Direct or indirect|The backup and restore service can be configured to point to local storage classes. **Pending availability in directly connected mode**| |**Azure backup - long term retention (Future)**| Customer environment -> Azure | Optional| Yes for Azure storage | Direct only |You may want to send backups that are taken locally to Azure Backup for long-term, off-site retention of backups and bring them back to the local environment for restore. **Pending availability in directly connected mode**| |**Azure Defender security services (Future)**|Customer environment -> Azure -> Customer environment|Optional|Yes|Direct only|**Pending availability in directly connected mode**|
Yes
None
+### Helm chart used to create data controller in direct connected mode
+
+The helm chart used to provision the Azure Arc data controller bootstrapper and cluster level objects, such as custom resource definitions, cluster roles, and cluster role bindings, is pulled from an Azure Container Registry.
+
+#### Connection source
+
+The Kubernetes kubelet on each of the Kubernetes nodes pulling the container images.
+
+#### Connection target
+
+`arcdataservicesrow1.azurecr.io`
+
+#### Protocol
+
+HTTPS
+
+#### Port
+
+443
+
+#### Can use proxy
+
+Yes
+
+#### Authentication
+
+None
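To confirm that your nodes can actually reach this registry through your firewall or proxy, a simple probe helps. This is a sketch, not part of the product: an OCI-compliant registry answers on `/v2/`, so an HTTP `401` response proves the endpoint is reachable (the `401` is expected because no credentials are sent).

```bash
# Probe the registry endpoint over HTTPS; expect "HTTP/1.1 401 Unauthorized"
# (or the HTTP/2 equivalent) if the network path is open.
curl -sI https://arcdataservicesrow1.azurecr.io/v2/ | head -n 1
```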
+ ### Azure Resource Manager APIs Azure Data Studio, and Azure CLI connect to the Azure Resource Manager APIs to send and retrieve data to and from Azure for some features.
azure-cache-for-redis Cache Best Practices Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-best-practices-scale.md
Title: Best practices for scaling your Azure Cache for Redis
+ Title: Best practices for scaling
description: Learn how to scale your Azure Cache for Redis.
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
description: This article tracks FedRAMP and DoD compliance scope for Azure, Dyn
Previously updated : 08/27/2021 Last updated : 09/07/2021 # Azure, Dynamics 365, Microsoft 365, and Power Platform services compliance scope
For current Azure Government regions and available services, see [Products avail
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and Power Platform cloud services in scope for FedRAMP High, DoD IL2, DoD IL4, DoD IL5, and DoD IL6 authorizations across Azure, Azure Government, and Azure Government Secret cloud environments. For other authorization details in Azure Government Secret and Azure Government Top Secret, contact your Microsoft account representative. ## Azure public services by audit scope
-*Last updated: August 2021*
+*Last updated: September 2021*
### Terminology used
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| **Service** | **FedRAMP High** | **DoD IL2** | **Planned 2021** | | [Azure Healthcare APIs](https://azure.microsoft.com/services/healthcare-apis/) (formerly Azure API for FHIR) | &#x2705; | &#x2705; | | | [Azure HPC Cache](https://azure.microsoft.com/services/hpc-cache/) | &#x2705; | &#x2705; | |
+| [Azure Immersive Reader](https://azure.microsoft.com/services/immersive-reader/) | &#x2705; | &#x2705; | |
| [Azure Information Protection](https://azure.microsoft.com/services/information-protection/) | &#x2705; | &#x2705; | | | [Azure Internet Analyzer](https://azure.microsoft.com/services/internet-analyzer/) | &#x2705; | &#x2705; | | | [Azure IoT Central](https://azure.microsoft.com/services/iot-central/) | | | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/) (formerly Data Box Edge) **&ast;** | &#x2705; | &#x2705; | | | [Azure Stream Analytics](https://azure.microsoft.com/services/stream-analytics/) | &#x2705; | &#x2705; | | | [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/) | &#x2705; | &#x2705; | |
+| [Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/export-to-data-lake) | &#x2705; | &#x2705; | |
| [Azure Time Series Insights](https://azure.microsoft.com/services/time-series-insights/) | &#x2705; | &#x2705; | | | [Azure Video Analyzer](https://azure.microsoft.com/products/video-analyzer/) | &#x2705; | &#x2705; | | | [Azure Virtual Desktop](https://azure.microsoft.com/services/virtual-desktop/) (formerly Windows Virtual Desktop) | &#x2705; | &#x2705; | |
-| [Azure VMware Solution](https://azure.microsoft.com/services/azure-vmware/) | | | &#x2705; |
+| [Azure VMware Solution](https://azure.microsoft.com/services/azure-vmware/) | &#x2705; | &#x2705; | |
| [Azure Web Application Firewall](https://azure.microsoft.com/services/web-application-firewall/) | &#x2705; | &#x2705; | | | [Batch](https://azure.microsoft.com/services/batch/) | &#x2705; | &#x2705; | | | **Service** | **FedRAMP High** | **DoD IL2** | **Planned 2021** |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Dynamics 365 Field Service](https://dynamics.microsoft.com/field-service/overview/)| &#x2705; | &#x2705; | | | [Dynamics 365 Finance](https://dynamics.microsoft.com/finance/overview/)| &#x2705; | &#x2705; | | | [Dynamics 365 Guides](https://dynamics.microsoft.com/mixed-reality/guides/)| &#x2705; | &#x2705; | |
-| [Dynamics 365 Sales](https://dynamics.microsoft.com/sales/overview/) | | | &#x2705; |
-| [Dynamics 365 Sales Professional](https://dynamics.microsoft.com/sales/professional/) | | | &#x2705; |
+| [Dynamics 365 Sales](https://dynamics.microsoft.com/sales/overview/) | &#x2705; | &#x2705; | |
+| [Dynamics 365 Sales Professional](https://dynamics.microsoft.com/sales/professional/) | &#x2705; | &#x2705; | |
| [Dynamics 365 Supply Chain Management](https://dynamics.microsoft.com/supply-chain-management/overview/)| &#x2705; | &#x2705; | | | [Event Grid](https://azure.microsoft.com/services/event-grid/) | &#x2705; | &#x2705; | | | [Event Hubs](https://azure.microsoft.com/services/event-hubs/) | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
**&ast;&ast;** FedRAMP High authorization for Azure Databricks is applicable to limited regions in Azure. To configure Azure Databricks for FedRAMP High use, contact your Microsoft or Databricks representative. ## Azure Government services by audit scope
-*Last updated: August 2021*
+*Last updated: September 2021*
### Terminology used
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure ExpressRoute](https://azure.microsoft.com/services/expressroute/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure File Sync](../../storage/file-sync/file-sync-introduction.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Firewall](https://azure.microsoft.com/services/azure-firewall/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Firewall Manager](https://azure.microsoft.com/services/firewall-manager/#overview) | &#x2705; | &#x2705; | | | |
| [Azure Form Recognizer](https://azure.microsoft.com/services/form-recognizer/) | &#x2705; | &#x2705; | | | | | [Azure Front Door](https://azure.microsoft.com/services/frontdoor/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Functions](https://azure.microsoft.com/services/functions/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
azure-monitor Alerts Common Schema Definitions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-common-schema-definitions.md
Any alert instance describes the resource that was affected and the cause of the
"LinkToFilteredSearchResultsAPI": "https://api.applicationinsights.io/v1/apps/0MyAppId0/metrics/requests/count", "SearchIntervalDurationMin": "15", "SearchIntervalInMinutes": "15",
- "Threshold": 10000,
+ "Threshold": 10000.0,
"Operator": "Less Than", "ApplicationId": "8e20151d-75b2-4d66-b965-153fb69d65a6", "Dimensions": [
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/whats-new.md
Title: "What's new in Azure Monitor documentation" description: "What's new in Azure Monitor documentation" Previously updated : 08/15/2021 Last updated : 09/07/2021 # What's new in Azure Monitor documentation This article lists significant changes to Azure Monitor documentation.
+## August, 2021
+
+### Agents
+
+**Updated articles**
+
+- [Migrate from Log Analytics agents](agents/azure-monitor-agent-migration.md)
+- [Azure Monitor agent overview](agents/azure-monitor-agent-overview.md)
+
+### Alerts
+
+**Updated articles**
+
+- [Troubleshooting problems in Azure Monitor metric alerts](alerts/alerts-troubleshoot-metric.md)
+- [Create metric alert monitors in Azure CLI](azure-cli-metrics-alert-sample.md)
+- [Create, view, and manage activity log alerts by using Azure Monitor](alerts/alerts-activity-log.md)
++
+### Application Insights
+
+**Updated articles**
+
+- [Monitoring Azure Functions with Azure Monitor Application Insights](app/monitor-functions.md)
+- [Application Monitoring for Azure App Service](app/azure-web-apps.md)
+- [Configure Application Insights for your ASP.NET website](app/asp-net.md)
+- [Application Insights availability tests](app/availability-overview.md)
+- [Application Insights logging with .NET](app/ilogger.md)
+- [Geolocation and IP address handling](app/ip-collection.md)
+- [Monitor availability with URL ping tests](app/monitor-web-app-availability.md)
+
+### Essentials
+
+**Updated articles**
+
+- [Supported metrics with Azure Monitor](essentials/metrics-supported.md)
+- [Supported categories for Azure Resource Logs](essentials/resource-logs-categories.md)
+- [Collect custom metrics for a Linux VM with the InfluxData Telegraf agent](essentials/collect-custom-metrics-linux-telegraf.md)
+
+### Insights
+
+**Updated articles**
+
+- [Azure Monitor Network Insights](insights/network-insights-overview.md)
+
+### Logs
+
+**New articles**
+
+- [Azure AD authentication for Logs](logs/azure-ad-authentication-logs.md)
+- [Move Log Analytics workspace to another region using the Azure portal](logs/move-workspace-region.md)
+- [Availability zones in Azure Monitor](logs/availability-zones.md)
+- [Managing Azure Monitor Logs in Azure CLI](logs/azure-cli-log-analytics-workspace-sample.md)
+
+**Updated articles**
+
+- [Design your Private Link setup](logs/private-link-design.md)
+- [Azure Monitor Logs Dedicated Clusters](logs/logs-dedicated-clusters.md)
+- [Move Log Analytics workspace to another region using the Azure portal](logs/move-workspace-region.md)
+- [Configure your Private Link](logs/private-link-configure.md)
+- [Use Azure Private Link to connect networks to Azure Monitor](logs/private-link-security.md)
+- [Standard columns in Azure Monitor Logs](logs/log-standard-columns.md)
+- [Azure Monitor customer-managed key](logs/customer-managed-keys.md)
+- [Azure Monitor Logs data security](logs/data-security.md)
+- [Send log data to Azure Monitor by using the HTTP Data Collector API (preview)](logs/data-collector-api.md)
+- [Get started with log queries in Azure Monitor](logs/get-started-queries.md)
+- [Azure Monitor Logs overview](logs/data-platform-logs.md)
+- [Log Analytics tutorial](logs/log-analytics-tutorial.md)
+
+### Virtual Machines
+
+**Updated articles**
+
+- [Monitor virtual machines with Azure Monitor: Alerts](vm/monitor-virtual-machine-alerts.md)
+ ## July, 2021 ### General
azure-netapp-files Azacsnap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azacsnap-get-started.md
The following guidance is provided to illustrate the usage of the snapshot tools
- [What are the prerequisites for the storage snapshot](azacsnap-installation.md#prerequisites-for-installation) - [Enable communication with storage](azacsnap-installation.md#enable-communication-with-storage)
- - [Enable communication with SAP HANA](azacsnap-installation.md#enable-communication-with-sap-hana)
+ - [Enable communication with database](azacsnap-installation.md#enable-communication-with-database)
- [How to take snapshots manually](azacsnap-tips.md#take-snapshots-manually) - [How to set up automatic snapshot backup](azacsnap-tips.md#setup-automatic-snapshot-backup) - [How to monitor the snapshots](azacsnap-tips.md#monitor-the-snapshots)
azure-netapp-files Azacsnap Installation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azacsnap-installation.md
na ms.devlang: na Previously updated : 08/03/2021 Last updated : 09/08/2021
tools.
1. **Time Synchronization is set up**. The customer will need to provide an NTP-compatible time server, and configure the OS accordingly. 1. **HANA is installed**: See HANA installation instructions in [SAP NetWeaver Installation on HANA database](/archive/blogs/saponsqlserver/sap-netweaver-installation-on-hana-database).
-1. **[Enable communication with storage](#enable-communication-with-storage)** (refer separate section for more details): Customer must
- set up SSH with a private/public key pair, and provide the public key for each node where the
- snapshot tools are planned to be executed to Microsoft Operations for setup on the storage
- back-end.
+1. **[Enable communication with storage](#enable-communication-with-storage)** (refer separate section for more details): Select the storage back-end you are using for your deployment.
+
+ # [Azure NetApp Files](#tab/azure-netapp-files)
+
1. **For Azure NetApp Files (refer separate section for details)**: Customer must generate the service principal authentication file. > [!IMPORTANT]
tools.
> - (https://)management.azure.com:443 > - (https://)login.microsoftonline.com:443
+ # [Azure Large Instance (Bare Metal)](#tab/azure-large-instance)
+
1. **For Azure Large Instance (refer separate section for details)**: Customer must set up SSH with a private/public key pair, and provide the public key for each node where the snapshot tools are planned to be executed to Microsoft Operations for setup on the storage back-end.
tools.
Type `exit` to logout of the storage prompt. Microsoft operations will provide the storage user and storage IP at the time of provisioning.
-
-1. **[Enable communication with SAP HANA](#enable-communication-with-sap-hana)** (refer separate section for more details): Customer must
- set up an appropriate SAP HANA user with the required privileges to perform the snapshot.
+
+
+
+
+1. **[Enable communication with database](#enable-communication-with-database)** (refer separate section for more details):
+
+ # [SAP HANA](#tab/sap-hana)
+
+ Customer must set up an appropriate SAP HANA user with the required privileges to perform the snapshot.
+ 1. This setting can be tested from the command line as follows using the text in `grey` 1. HANAv1
tools.
`hdbsql -n <HANA IP address> -i <HANA instance> -d SYSTEMDB -U <HANA user> "\s"` - The examples above are for non-SSL communication to SAP HANA.
+
+
-## Enable communication with storage
-This section explains how to enable communication with storage.
+## Enable communication with storage
-Follow the instructions to configure storage for your configuration, either:
-1. [Azure NetApp Files (with Virtual Machine)](#azure-netapp-files-with-virtual-machine)
-1. [Azure Large Instance (Bare Metal)](#azure-large-instance-bare-metal)
+This section explains how to enable communication with storage. Ensure the storage back-end you are using is correctly selected.
-### Azure NetApp Files (with Virtual Machine)
+# [Azure NetApp Files (with Virtual Machine)](#tab/azure-netapp-files)
Create RBAC Service Principal
Create RBAC Service Principal
1. Cut and Paste the output content into a file called `azureauth.json` stored on the same system as the `azacsnap` command and secure the file with appropriate system permissions.
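As a sketch of how that output can be produced (the name, role, and scope below are placeholder choices, not requirements), the Azure CLI can emit the JSON directly:

```bash
# Sketch: create a service principal and write its credentials to azureauth.json.
# --sdk-auth emits the JSON document shape referenced above; adjust the role
# and scope to your own security requirements.
az ad sp create-for-rbac \
  --name "AzAcSnap" \
  --role Contributor \
  --scopes "/subscriptions/<subscription-id>" \
  --sdk-auth > azureauth.json

# Restrict access to the credentials file.
chmod 600 azureauth.json
```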
-### Azure Large Instance (Bare Metal)
+# [Azure Large Instance (Bare Metal)](#tab/azure-large-instance)
Communication with the storage back-end executes over an encrypted SSH channel. The following example steps are to provide guidance on setup of SSH for this communication.
example steps are to provide guidance on setup of SSH for this communication.
wKGAIilSg7s6Bq/2lAPDN1TqwIF8wQhAg2C7yeZHyE/ckaw/eQYuJtN+RNBD ```
-## Enable communication with SAP HANA
++
+## Enable communication with database
+
+This section explains how to enable communication with the database. Ensure the database you are using is correctly selected.
+
+# [SAP HANA](#tab/sap-hana)
The snapshot tools communicate with SAP HANA and need a user with appropriate permissions to initiate and release the database save-point. The following example shows the setup of the SAP
hdbsql \
> The `\` character is a command-line line-continuation to improve clarity of the multiple parameters passed on the command line. ++ ## Installing the snapshot tools The downloadable self-installer is designed to make the snapshot tools easy to set up and run with
As the root superuser, a manual installation can be achieved as follows:
```bash echo "export LD_LIBRARY_PATH=\"\$LD_LIBRARY_PATH:$NEW_LIB_PATH\"" >> /home/azacsnap/.profile ```
+
+1. Take the following actions, depending on your storage back-end:
-1. On Azure Large Instances
- 1. Copy the SSH keys for back-end storage for azacsnap from the "root" user (the user running
- the install). This assumes the "root" user has already configured connectivity to the storage
- > see section "[Enable communication with storage](#enable-communication-with-storage)".
+ # [Azure NetApp Files (with VM)](#tab/azure-netapp-files)
- ```bash
- cp -pr ~/.ssh /home/azacsnap/.
- ```
+ 1. On Azure NetApp Files
+ 1. Configure the user's `DOTNET_BUNDLE_EXTRACT_BASE_DIR` path per the .NET Core single-file extract
+ guidance.
+ 1. SUSE Linux
- 1. Set the user permissions correctly for the SSH files
+ ```bash
+ echo "export DOTNET_BUNDLE_EXTRACT_BASE_DIR=\$HOME/.net" >> /home/azacsnap/.profile
+ echo "[ -d $DOTNET_BUNDLE_EXTRACT_BASE_DIR] && chmod 700 $DOTNET_BUNDLE_EXTRACT_BASE_DIR" >> /home/azacsnap/.profile
+ ```
- ```bash
- chown -R azacsnap.sapsys /home/azacsnap/.ssh
- ```
+ 1. RHEL
-1. On Azure NetApp Files
- 1. Configure the user's `DOTNET_BUNDLE_EXTRACT_BASE_DIR` path per the .NET Core single-file extract
- guidance.
- 1. SUSE Linux
+ ```bash
+ echo "export DOTNET_BUNDLE_EXTRACT_BASE_DIR=\$HOME/.net" >> /home/azacsnap/.bash_profile
+ echo "[ -d $DOTNET_BUNDLE_EXTRACT_BASE_DIR] && chmod 700 $DOTNET_BUNDLE_EXTRACT_BASE_DIR" >> /home/azacsnap/.bash_profile
+ ```
+
+ # [Azure Large Instance (Bare Metal)](#tab/azure-large-instance)
+
+ 1. On Azure Large Instances
+ 1. Copy the SSH keys for back-end storage for azacsnap from the "root" user (the user running
+ the install). This assumes the "root" user has already configured connectivity to the storage
+ > see section "[Enable communication with storage](#enable-communication-with-storage)".
```bash
- echo "export DOTNET_BUNDLE_EXTRACT_BASE_DIR=\$HOME/.net" >> /home/azacsnap/.profile
- echo "[ -d $DOTNET_BUNDLE_EXTRACT_BASE_DIR] && chmod 700 $DOTNET_BUNDLE_EXTRACT_BASE_DIR" >> /home/azacsnap/.profile
+ cp -pr ~/.ssh /home/azacsnap/.
```
- 1. RHEL
+ 1. Set the user permissions correctly for the SSH files
```bash
- echo "export DOTNET_BUNDLE_EXTRACT_BASE_DIR=\$HOME/.net" >> /home/azacsnap/.bash_profile
- echo "[ -d $DOTNET_BUNDLE_EXTRACT_BASE_DIR] && chmod 700 $DOTNET_BUNDLE_EXTRACT_BASE_DIR" >> /home/azacsnap/.bash_profile
+ chown -R azacsnap.sapsys /home/azacsnap/.ssh
```
+
+ 1. Copy the SAP HANA connection secure user store for the target user, azacsnap. This assumes the "root" user has already configured the secure user store.
- > see section "[Enable communication with SAP HANA](#enable-communication-with-sap-hana)".
+ > see section "[Enable communication with database](#enable-communication-with-database)".
```bash cp -pr ~/.hdb /home/azacsnap/.
The following output shows the steps to complete after running the installer wit
1. Run your first snapshot backup 1. `azacsnap -c backup --volume data --prefix=hana_test --retention=1`
-Step 2 will be necessary if "[Enable communication with SAP HANA](#enable-communication-with-sap-hana)" was not done before the
+Step 2 will be necessary if "[Enable communication with database](#enable-communication-with-database)" was not done before the
installation. > [!NOTE]
installation.
This section explains how to configure the database.
+# [SAP HANA](#tab/sap-hana)
### SAP HANA Configuration There are some recommended changes to be applied to SAP HANA to ensure protection of the log backups and catalog. By default, `basepath_logbackup` and `basepath_catalogbackup` output their files to the `$(DIR_INSTANCE)/backup/log` directory. It's unlikely this path is on a volume that `azacsnap` is configured to snapshot, so these files won't be protected with storage snapshots.
global.ini,DEFAULT,,,persistence,log_backup_timeout_s,900
global.ini,SYSTEM,,,persistence,log_backup_timeout_s,300 ``` ++ ## Next steps - [Configure Azure Application Consistent Snapshot tool](azacsnap-cmd-ref-configure.md)
azure-netapp-files Azacsnap Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azacsnap-release-notes.md
AzAcSnap v5.0 (Build: 20210421.6349) has been made Generally Available and for t
AzAcSnap v5.0 Preview (Build:20210318.30771) has been released with the following fixes and improvements: -- Removed the need to add the AZACSNAP user into the SAP HANA Tenant DBs, see the [Enable communication with SAP HANA](azacsnap-installation.md#enable-communication-with-sap-hana) section.
+- Removed the need to add the AZACSNAP user into the SAP HANA Tenant DBs, see the [Enable communication with database](azacsnap-installation.md#enable-communication-with-database) section.
- Fix to allow a [restore](azacsnap-cmd-ref-restore.md) with volumes configured with Manual QOS. - Added mutex control to throttle SSH connections for Azure Large Instance. - Fix installer for handling path names with spaces and other related issues.
azure-netapp-files Azacsnap Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azacsnap-troubleshoot.md
Cannot get SAP HANA version, exiting with error: 127
### Insufficient privilege
-If running `azacsnap` presents an error such as `* 258: insufficient privilege`, check to ensure the appropriate privilege has been asssigned to the "AZACSNAP" database user (assuming this is the user created per the [installation guide](azacsnap-installation.md#enable-communication-with-sap-hana)). Verify the user's current privilege with the following command:
+If running `azacsnap` presents an error such as `* 258: insufficient privilege`, check to ensure the appropriate privilege has been assigned to the "AZACSNAP" database user (assuming this is the user created per the [installation guide](azacsnap-installation.md#enable-communication-with-database)). Verify the user's current privilege with the following command:
```bash hdbsql -U AZACSNAP "select GRANTEE,GRANTEE_TYPE,PRIVILEGE,IS_VALID,IS_GRANTABLE from sys.granted_privileges "' | grep -i -e GRANTEE -e azacsnap
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
na ms.devlang: na Previously updated : 08/25/2021 Last updated : 09/08/2021 # Solution architectures using Azure NetApp Files
This section provides references for solutions for Linux OSS applications and da
### Oracle
+* [Oracle Database with Azure NetApp Files - Azure Example Scenarios](/azure/architecture/example-scenario/file-storage/oracle-azure-netapp-files)
* [Oracle Databases on Microsoft Azure Using Azure NetApp Files](https://www.netapp.com/media/17105-tr4780.pdf) * [Oracle VM images and their deployment on Microsoft Azure: Shared storage configuration options](../virtual-machines/workloads/oracle/oracle-vm-solutions.md#shared-storage-configuration-options) * [Oracle database performance on Azure NetApp Files single volumes](performance-oracle-single-volumes.md)
This section provides references for Windows applications and SQL Server solutio
### SQL Server
+* [SQL Server on Azure Virtual Machines with Azure NetApp Files - Azure Example Scenarios](/azure/architecture/example-scenario/file-storage/sql-server-azure-netapp-files)
* [SQL Server on Azure Deployment Guide Using Azure NetApp Files](https://www.netapp.com/pdf.html?item=/media/27154-tr-4888.pdf) * [Benefits of using Azure NetApp Files for SQL Server deployment](solutions-benefits-azure-netapp-files-sql-server.md) * [Deploy SQL Server Over SMB with Azure NetApp Files](https://www.youtube.com/watch?v=x7udfcYbibs)
This section provides solutions for Azure platform services.
* [Trident - Storage Orchestrator for Containers](https://netapp-trident.readthedocs.io/en/stable-v20.04/kubernetes/operations/tasks/backends/anf.html) * [Magento e-commerce platform in Azure Kubernetes Service (AKS)](/azure/architecture/example-scenario/magento/magento-azure)
+### Azure Red Hat Openshift
+
+* [Using Trident to Automate Azure NetApp Files from OpenShift](https://techcommunity.microsoft.com/t5/fasttrack-for-azure/using-trident-to-automate-azure-netapp-files-from-openshift/ba-p/2367351)
++ ### Azure Batch * [Run MPI workloads with Azure Batch and Azure NetApp Files](https://azure.microsoft.com/resources/run-mpi-workloads-with-azure-batch-and-azure-netapp-files/)
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/create-active-directory-connections.md
na ms.devlang: na Previously updated : 08/17/2021 Last updated : 09/07/2021 # Create and manage Active Directory connections for Azure NetApp Files
This setting is configured in the **Active Directory Connections** under **NetAp
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
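As a sketch, the two commands look like the following; the feature name is a placeholder, so substitute the one given for this preview.

```bash
# Register the preview feature (feature name is a placeholder).
az feature register --namespace Microsoft.NetApp --name <feature-name>

# Check the registration status; it changes from "Registering" to "Registered".
az feature show --namespace Microsoft.NetApp --name <feature-name> \
  --query properties.state --output tsv
```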
+ * **Administrators**
+
+ You can specify users or groups that will be given administrator privileges on the volume.
+
+ ![Screenshot that shows the Administrators box of Active Directory connections window.](../media/azure-netapp-files/active-directory-administrators.png)
+ * Credentials, including your **username** and **password** ![Active Directory credentials](../media/azure-netapp-files/active-directory-credentials.png)
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/whats-new.md
na ms.devlang: na Previously updated : 08/18/2021 Last updated : 09/07/2021
Azure NetApp Files is updated regularly. This article provides a summary about the latest new features and enhancements.
+## September 2021
+
+* [**Administrators**](create-active-directory-connections.md#create-an-active-directory-connection) option in Active Directory connections
+
+ The Active Directory connections page now includes an **Administrators** field. You can specify users or groups that will be given administrator privileges on the volume.
+ ## August 2021 * Support for [enabling Continuous Availability on existing SMB volumes](enable-continuous-availability-existing-SMB.md)
azure-resource-manager Bicep Functions Logical https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/bicep-functions-logical.md
Last updated 07/29/2021
# Logical functions for Bicep
-Resource Manager provides a `bool` function for Bicep.
+Resource Manager provides a `bool` function for Bicep.
Most of the logical functions in Azure Resource Manager templates are replaced with [logical operators](./operators-logical.md) in Bicep.
A boolean of the converted value.
### Examples
-The following [example template](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/functions/bool.json) shows how to use bool with a string or integer.
+The following example shows how to use bool with a string or integer.
```bicep output trueString bool = bool('true')
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/move-support-resources.md
Title: Move operation support by resource type description: Lists the Azure resource types that can be moved to a new resource group, subscription, or region. Previously updated : 08/30/2021 Last updated : 09/08/2021 # Move operation support for resources
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | - |
-> | backupvaults | No | No | No |
+> | backupvaults | [Yes](../../backup/backup-vault-overview.md#use-azure-portal-to-move-backup-vault-to-a-different-resource-group) | [Yes](../../backup/backup-vault-overview.md#use-azure-portal-to-move-backup-vault-to-a-different-subscription) | No |
## Microsoft.DataShare
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | - |
-> | accounts | **pending** | **pending** | No |
+> | accounts | Yes | Yes | No |
## Microsoft.ProviderHub
Third-party services currently don't support the move operation.
- For commands to move resources, see [Move resources to new resource group or subscription](move-resource-group-and-subscription.md). - [Learn more](../../resource-mover/overview.md) about the Resource Mover service.-- To get the same data as a file of comma-separated values, download [move-support-resources.csv](https://github.com/tfitzmac/resource-capabilities/blob/master/move-support-resources.csv) for resource group and subscription move support. If you want those properties and region move support, download [move-support-resources-with-regions.csv](https://github.com/tfitzmac/resource-capabilities/blob/master/move-support-resources-with-regions.csv).
+- To get the same data as a file of comma-separated values, download [move-support-resources.csv](https://github.com/tfitzmac/resource-capabilities/blob/master/move-support-resources.csv) for resource group and subscription move support. If you want those properties and region move support, download [move-support-resources-with-regions.csv](https://github.com/tfitzmac/resource-capabilities/blob/master/move-support-resources-with-regions.csv).
azure-resource-manager Template Functions Array https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-functions-array.md
Title: Template functions - arrays description: Describes the functions to use in an Azure Resource Manager template (ARM template) for working with arrays. Previously updated : 05/11/2021 Last updated : 09/08/2021 # Array functions for ARM templates
An array.
### Example
-The following [example template](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/functions/array.json) shows how to use the array function with different types.
+The following example shows how to use the array function with different types.
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "intToConvert": {
- "type": "int",
- "defaultValue": 1
- },
- "stringToConvert": {
- "type": "string",
- "defaultValue": "efgh"
- },
- "objectToConvert": {
- "type": "object",
- "defaultValue": {
- "a": "b",
- "c": "d"
- }
- }
- },
- "resources": [
- ],
- "outputs": {
- "intOutput": {
- "type": "array",
- "value": "[array(parameters('intToConvert'))]"
- },
- "stringOutput": {
- "type": "array",
- "value": "[array(parameters('stringToConvert'))]"
- },
- "objectOutput": {
- "type": "array",
- "value": "[array(parameters('objectToConvert'))]"
- }
- }
-}
-```
The output from the preceding example with the default values is:
Combines multiple arrays and returns the concatenated array, or combines multipl
| Parameter | Required | Type | Description | |: |: |: |: | | arg1 |Yes |array or string |The first array or string for concatenation. |
-| additional arguments |No |array or string |Additional arrays or strings in sequential order for concatenation. |
+| more arguments |No |array or string |More arrays or strings in sequential order for concatenation. |
This function can take any number of arguments, and can accept either strings or arrays for the parameters. However, you can't provide both arrays and strings for parameters. Arrays are only concatenated with other arrays.
A string or array of concatenated values.
The following example shows how to combine two arrays.
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "firstArray": {
- "type": "array",
- "defaultValue": [
- "1-1",
- "1-2",
- "1-3"
- ]
- },
- "secondArray": {
- "type": "array",
- "defaultValue": [
- "2-1",
- "2-2",
- "2-3"
- ]
- }
- },
- "resources": [
- ],
- "outputs": {
- "return": {
- "type": "array",
- "value": "[concat(parameters('firstArray'), parameters('secondArray'))]"
- }
- }
-}
-```
The output from the preceding example with the default values is:
The output from the preceding example with the default values is:
| - | - | -- | | return | Array | ["1-1", "1-2", "1-3", "2-1", "2-2", "2-3"] |
-The following [example template](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/functions/concat-string.json) shows how to combine two string values and return a concatenated string.
+The following example shows how to combine two string values and return a concatenated string.
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "prefix": {
- "type": "string",
- "defaultValue": "prefix"
- }
- },
- "resources": [],
- "outputs": {
- "concatOutput": {
- "type": "string",
- "value": "[concat(parameters('prefix'), '-', uniqueString(resourceGroup().id))]"
- }
- }
-}
-```
The output from the preceding example with the default values is:
Checks whether an array contains a value, an object contains a key, or a string
The following example shows how to use contains with different types:
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "stringToTest": {
- "type": "string",
- "defaultValue": "OneTwoThree"
- },
- "objectToTest": {
- "type": "object",
- "defaultValue": {
- "one": "a",
- "two": "b",
- "three": "c"
- }
- },
- "arrayToTest": {
- "type": "array",
- "defaultValue": [ "one", "two", "three" ]
- }
- },
- "resources": [
- ],
- "outputs": {
- "stringTrue": {
- "type": "bool",
- "value": "[contains(parameters('stringToTest'), 'e')]"
- },
- "stringFalse": {
- "type": "bool",
- "value": "[contains(parameters('stringToTest'), 'z')]"
- },
- "objectTrue": {
- "type": "bool",
- "value": "[contains(parameters('objectToTest'), 'one')]"
- },
- "objectFalse": {
- "type": "bool",
- "value": "[contains(parameters('objectToTest'), 'a')]"
- },
- "arrayTrue": {
- "type": "bool",
- "value": "[contains(parameters('arrayToTest'), 'three')]"
- },
- "arrayFalse": {
- "type": "bool",
- "value": "[contains(parameters('arrayToTest'), 'four')]"
- }
- }
-}
-```
The output from the preceding example with the default values is:
An array. When no parameters are provided, it returns an empty array.
### Example
-The following [example template](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/functions/createarray.json) shows how to use createArray with different types:
+The following example shows how to use createArray with different types:
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "objectToTest": {
- "type": "object",
- "defaultValue": {
- "one": "a",
- "two": "b",
- "three": "c"
- }
- },
- "arrayToTest": {
- "type": "array",
- "defaultValue": [ "one", "two", "three" ]
- }
- },
- "resources": [
- ],
- "outputs": {
- "stringArray": {
- "type": "array",
- "value": "[createArray('a', 'b', 'c')]"
- },
- "intArray": {
- "type": "array",
- "value": "[createArray(1, 2, 3)]"
- },
- "objectArray": {
- "type": "array",
- "value": "[createArray(parameters('objectToTest'))]"
- },
- "arrayArray": {
- "type": "array",
- "value": "[createArray(parameters('arrayToTest'))]"
- },
- "emptyArray": {
- "type": "array",
- "value": "[createArray()]"
- }
- }
-}
-```
The output from the preceding example with the default values is:
Determines if an array, object, or string is empty.
| Parameter | Required | Type | Description | |: |: |: |: |
-| itemToTest |Yes |array, object, or string |The value to check if it is empty. |
+| itemToTest |Yes |array, object, or string |The value to check if it's empty. |
### Return value
Returns **True** if the value is empty; otherwise, **False**.
The following example checks whether an array, object, and string are empty.
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "testArray": {
- "type": "array",
- "defaultValue": []
- },
- "testObject": {
- "type": "object",
- "defaultValue": {}
- },
- "testString": {
- "type": "string",
- "defaultValue": ""
- }
- },
- "resources": [
- ],
- "outputs": {
- "arrayEmpty": {
- "type": "bool",
- "value": "[empty(parameters('testArray'))]"
- },
- "objectEmpty": {
- "type": "bool",
- "value": "[empty(parameters('testObject'))]"
- },
- "stringEmpty": {
- "type": "bool",
- "value": "[empty(parameters('testString'))]"
- }
- }
-}
-```
The output from the preceding example with the default values is:
The type (string, int, array, or object) of the first element in an array, or th
The following example shows how to use the first function with an array and string.
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "arrayToTest": {
- "type": "array",
- "defaultValue": [ "one", "two", "three" ]
- }
- },
- "resources": [
- ],
- "outputs": {
- "arrayOutput": {
- "type": "string",
- "value": "[first(parameters('arrayToTest'))]"
- },
- "stringOutput": {
- "type": "string",
- "value": "[first('One Two Three')]"
- }
- }
-}
-```
The output from the preceding example with the default values is:
Returns a single array or object with the common elements from the parameters.
|: |: |: |: | | arg1 |Yes |array or object |The first value to use for finding common elements. | | arg2 |Yes |array or object |The second value to use for finding common elements. |
-| additional arguments |No |array or object |Additional values to use for finding common elements. |
+| more arguments |No |array or object |More values to use for finding common elements. |
### Return value
An array or object with the common elements.
### Example
-The following example shows how to use intersection with arrays and objects:
+The following example shows how to use intersection with arrays and objects.
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "firstObject": {
- "type": "object",
- "defaultValue": {
- "one": "a",
- "two": "b",
- "three": "c"
- }
- },
- "secondObject": {
- "type": "object",
- "defaultValue": {
- "one": "a",
- "two": "z",
- "three": "c"
- }
- },
- "firstArray": {
- "type": "array",
- "defaultValue": [ "one", "two", "three" ]
- },
- "secondArray": {
- "type": "array",
- "defaultValue": [ "two", "three" ]
- }
- },
- "resources": [
- ],
- "outputs": {
- "objectOutput": {
- "type": "object",
- "value": "[intersection(parameters('firstObject'), parameters('secondObject'))]"
- },
- "arrayOutput": {
- "type": "array",
- "value": "[intersection(parameters('firstArray'), parameters('secondArray'))]"
- }
- }
-}
-```
The output from the preceding example with the default values is:
The type (string, int, array, or object) of the last element in an array, or the
The following example shows how to use the last function with an array and string.
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "arrayToTest": {
- "type": "array",
- "defaultValue": [ "one", "two", "three" ]
- }
- },
- "resources": [
- ],
- "outputs": {
- "arrayOutput": {
- "type": "string",
- "value": "[last(parameters('arrayToTest'))]"
- },
- "stringOutput": {
- "type": "string",
- "value": "[last('One Two Three')]"
- }
- }
-}
-```
The output from the preceding example with the default values is:
An int.
### Example
-The following example shows how to use length with an array and string:
+The following example shows how to use length with an array and string.
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "arrayToTest": {
- "type": "array",
- "defaultValue": [
- "one",
- "two",
- "three"
- ]
- },
- "stringToTest": {
- "type": "string",
- "defaultValue": "One Two Three"
- },
- "objectToTest": {
- "type": "object",
- "defaultValue": {
- "propA": "one",
- "propB": "two",
- "propC": "three",
- "propD": {
- "propD-1": "sub",
- "propD-2": "sub"
- }
- }
- }
- },
- "resources": [],
- "outputs": {
- "arrayLength": {
- "type": "int",
- "value": "[length(parameters('arrayToTest'))]"
- },
- "stringLength": {
- "type": "int",
- "value": "[length(parameters('stringToTest'))]"
- },
- "objectLength": {
- "type": "int",
- "value": "[length(parameters('objectToTest'))]"
- }
- }
-}
-```
The output from the preceding example with the default values is:
An int representing the maximum value.
### Example
-The following example shows how to use max with an array and a list of integers:
+The following example shows how to use max with an array and a list of integers.
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "arrayToTest": {
- "type": "array",
- "defaultValue": [ 0, 3, 2, 5, 4 ]
- }
- },
- "resources": [],
- "outputs": {
- "arrayOutput": {
- "type": "int",
- "value": "[max(parameters('arrayToTest'))]"
- },
- "intOutput": {
- "type": "int",
- "value": "[max(0,3,2,5,4)]"
- }
- }
-}
-```
The output from the preceding example with the default values is:
An int representing the minimum value.
### Example
-The following example shows how to use min with an array and a list of integers:
+The following example shows how to use min with an array and a list of integers.
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "arrayToTest": {
- "type": "array",
- "defaultValue": [ 0, 3, 2, 5, 4 ]
- }
- },
- "resources": [],
- "outputs": {
- "arrayOutput": {
- "type": "int",
- "value": "[min(parameters('arrayToTest'))]"
- },
- "intOutput": {
- "type": "int",
- "value": "[min(0,3,2,5,4)]"
- }
- }
-}
-```
The output from the preceding example with the default values is:
An array of integers.
### Example
-The following [example template](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/functions/range.json) shows how to use the range function:
+The following example shows how to use the range function.
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "startingInt": {
- "type": "int",
- "defaultValue": 5
- },
- "numberOfElements": {
- "type": "int",
- "defaultValue": 3
- }
- },
- "resources": [],
- "outputs": {
- "rangeOutput": {
- "type": "array",
- "value": "[range(parameters('startingInt'),parameters('numberOfElements'))]"
- }
- }
-}
-```
The output from the preceding example with the default values is:
Returns an array with all the elements after the specified number in the array,
| Parameter | Required | Type | Description | |: |: |: |: | | originalValue |Yes |array or string |The array or string to use for skipping. |
-| numberToSkip |Yes |int |The number of elements or characters to skip. If this value is 0 or less, all the elements or characters in the value are returned. If it is larger than the length of the array or string, an empty array or string is returned. |
+| numberToSkip |Yes |int |The number of elements or characters to skip. If this value is 0 or less, all the elements or characters in the value are returned. If it's larger than the length of the array or string, an empty array or string is returned. |
### Return value
An array or string.
The following example skips the specified number of elements in the array, and the specified number of characters in a string.
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "testArray": {
- "type": "array",
- "defaultValue": [
- "one",
- "two",
- "three"
- ]
- },
- "elementsToSkip": {
- "type": "int",
- "defaultValue": 2
- },
- "testString": {
- "type": "string",
- "defaultValue": "one two three"
- },
- "charactersToSkip": {
- "type": "int",
- "defaultValue": 4
- }
- },
- "resources": [],
- "outputs": {
- "arrayOutput": {
- "type": "array",
- "value": "[skip(parameters('testArray'),parameters('elementsToSkip'))]"
- },
- "stringOutput": {
- "type": "string",
- "value": "[skip(parameters('testString'),parameters('charactersToSkip'))]"
- }
- }
-}
-```
The output from the preceding example with the default values is:
`take(originalValue, numberToTake)`
-Returns an array with the specified number of elements from the start of the array, or a string with the specified number of characters from the start of the string.
+Returns an array or string. An array has the specified number of elements from the start of the array. A string has the specified number of characters from the start of the string.
### Parameters

| Parameter | Required | Type | Description |
|: |: |: |: |
| originalValue |Yes |array or string |The array or string to take the elements from. |
-| numberToTake |Yes |int |The number of elements or characters to take. If this value is 0 or less, an empty array or string is returned. If it is larger than the length of the given array or string, all the elements in the array or string are returned. |
+| numberToTake |Yes |int |The number of elements or characters to take. If this value is 0 or less, an empty array or string is returned. If it's larger than the length of the given array or string, all the elements in the array or string are returned. |
### Return value
An array or string.
The following example takes the specified number of elements from the array, and characters from a string.
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "testArray": {
- "type": "array",
- "defaultValue": [
- "one",
- "two",
- "three"
- ]
- },
- "elementsToTake": {
- "type": "int",
- "defaultValue": 2
- },
- "testString": {
- "type": "string",
- "defaultValue": "one two three"
- },
- "charactersToTake": {
- "type": "int",
- "defaultValue": 2
- }
- },
- "resources": [],
- "outputs": {
- "arrayOutput": {
- "type": "array",
- "value": "[take(parameters('testArray'),parameters('elementsToTake'))]"
- },
- "stringOutput": {
- "type": "string",
- "value": "[take(parameters('testString'),parameters('charactersToTake'))]"
- }
- }
-}
-```
The output from the preceding example with the default values is:
Returns a single array or object with all elements from the parameters. Duplicat
|: |: |: |: |
| arg1 |Yes |array or object |The first value to use for joining elements. |
| arg2 |Yes |array or object |The second value to use for joining elements. |
-| additional arguments |No |array or object |Additional values to use for joining elements. |
+| more arguments |No |array or object |More values to use for joining elements. |
### Return value
An array or object.
### Example
-The following example shows how to use union with arrays and objects:
+The following example shows how to use union with arrays and objects.
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "firstObject": {
- "type": "object",
- "defaultValue": {
- "one": "a",
- "two": "b",
- "three": "c1"
- }
- },
- "secondObject": {
- "type": "object",
- "defaultValue": {
- "three": "c2",
- "four": "d",
- "five": "e"
- }
- },
- "firstArray": {
- "type": "array",
- "defaultValue": [ "one", "two", "three" ]
- },
- "secondArray": {
- "type": "array",
- "defaultValue": [ "three", "four" ]
- }
- },
- "resources": [
- ],
- "outputs": {
- "objectOutput": {
- "type": "object",
- "value": "[union(parameters('firstObject'), parameters('secondObject'))]"
- },
- "arrayOutput": {
- "type": "array",
- "value": "[union(parameters('firstArray'), parameters('secondArray'))]"
- }
- }
-}
-```
The output from the preceding example with the default values is:
azure-resource-manager Template Functions Numeric https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-functions-numeric.md
Title: Template functions - numeric
description: Describes the functions to use in an Azure Resource Manager template (ARM template) to work with numbers.
Previously updated : 05/13/2021
Last updated : 09/08/2021

# Numeric functions for ARM templates
The output from the preceding example with the default values is:
Converts the value to a floating point number. You only use this function when passing custom parameters to an application, such as a Logic App.
-The `float` function is not supported in Bicep.
+The `float` function isn't supported in Bicep.
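A minimal sketch of typical usage (the parameter names `custom1` and `custom2` are illustrative), for example in the `parameters` section of a nested deployment or Logic App call:

```json
"parameters": {
    "custom1": {
        "value": "[float('3.0')]"
    },
    "custom2": {
        "value": "[float(3)]"
    }
}
```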
### Parameters
An integer representing the maximum value from the collection.
### Example
-The following [example template](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/functions/max.json) shows how to use max with an array and a list of integers:
+The following example shows how to use max with an array and a list of integers.
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "arrayToTest": {
- "type": "array",
- "defaultValue": [ 0, 3, 2, 5, 4 ]
- }
- },
- "resources": [],
- "outputs": {
- "arrayOutput": {
- "type": "int",
- "value": "[max(parameters('arrayToTest'))]"
- },
- "intOutput": {
- "type": "int",
- "value": "[max(0,3,2,5,4)]"
- }
- }
-}
-```
The output from the preceding example with the default values is:
An integer representing the minimum value from the collection.
### Example
-The following [example template](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/functions/min.json) shows how to use min with an array and a list of integers:
+The following example shows how to use min with an array and a list of integers.
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "arrayToTest": {
- "type": "array",
- "defaultValue": [ 0, 3, 2, 5, 4 ]
- }
- },
- "resources": [],
- "outputs": {
- "arrayOutput": {
- "type": "int",
- "value": "[min(parameters('arrayToTest'))]"
- },
- "intOutput": {
- "type": "int",
- "value": "[min(0,3,2,5,4)]"
- }
- }
-}
-```
The output from the preceding example with the default values is:
Returns the remainder of the integer division using the two provided integers.
-The `mod` function is not supported in Bicep. Use the [% operator](../bicep/operators-numeric.md#modulo-) instead.
+The `mod` function isn't supported in Bicep. Use the [% operator](../bicep/operators-numeric.md#modulo-) instead.
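As a quick illustration (a minimal sketch; the output name `modResult` is made up), `mod(7, 3)` evaluates to `1`:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [],
  "outputs": {
    "modResult": {
      "type": "int",
      "value": "[mod(7, 3)]"
    }
  }
}
```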
### Parameters
The output from the preceding example with the default values is:
Returns the multiplication of the two provided integers.
-The `mul` function is not supported in Bicep. Use the [* operator](../bicep/operators-numeric.md#multiply-) instead.
+The `mul` function isn't supported in Bicep. Use the [* operator](../bicep/operators-numeric.md#multiply-) instead.
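Similarly, a minimal sketch (the output name `mulResult` is made up) where `mul(5, 3)` evaluates to `15`:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [],
  "outputs": {
    "mulResult": {
      "type": "int",
      "value": "[mul(5, 3)]"
    }
  }
}
```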
### Parameters
The output from the preceding example with the default values is:
## Next steps
* For a description of the sections in an ARM template, see [Understand the structure and syntax of ARM templates](./syntax.md).
-* To iterate a specified number of times when creating a type of resource, see [Resource iteration in ARM templates](copy-resources.md).
+* To iterate a specified number of times when creating a type of resource, see [Resource iteration in ARM templates](copy-resources.md).
azure-resource-manager Template Functions Object https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-functions-object.md
Title: Template functions - objects
description: Describes the functions to use in an Azure Resource Manager template (ARM template) for working with objects.
Previously updated : 05/13/2021
Last updated : 09/08/2021

# Object functions for ARM templates
Checks whether an array contains a value, an object contains a key, or a string
### Example
-The following [example template](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/functions/contains.json) shows how to use contains with different types:
+The following example shows how to use contains with different types:
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "stringToTest": {
- "type": "string",
- "defaultValue": "OneTwoThree"
- },
- "objectToTest": {
- "type": "object",
- "defaultValue": {
- "one": "a",
- "two": "b",
- "three": "c"
- }
- },
- "arrayToTest": {
- "type": "array",
- "defaultValue": [ "one", "two", "three" ]
- }
- },
- "resources": [
- ],
- "outputs": {
- "stringTrue": {
- "type": "bool",
- "value": "[contains(parameters('stringToTest'), 'e')]"
- },
- "stringFalse": {
- "type": "bool",
- "value": "[contains(parameters('stringToTest'), 'z')]"
- },
- "objectTrue": {
- "type": "bool",
- "value": "[contains(parameters('objectToTest'), 'one')]"
- },
- "objectFalse": {
- "type": "bool",
- "value": "[contains(parameters('objectToTest'), 'a')]"
- },
- "arrayTrue": {
- "type": "bool",
- "value": "[contains(parameters('arrayToTest'), 'three')]"
- },
- "arrayFalse": {
- "type": "bool",
- "value": "[contains(parameters('arrayToTest'), 'four')]"
- }
- }
-}
-```
The output from the preceding example with the default values is:
Creates an object from the keys and values.
-The `createObject` function is not supported by Bicep. Construct an object by using `{}`. See [Objects](../bicep/data-types.md#objects).
+The `createObject` function isn't supported by Bicep. Construct an object by using `{}`. See [Objects](../bicep/data-types.md#objects).
### Parameters
The `createObject` function is not supported by Bicep. Construct an object by u
|: |: |: |: |
| key1 |No |string |The name of the key. |
| value1 |No |int, boolean, string, object, or array |The value for the key. |
-| additional keys |No |string |Additional names of the keys. |
-| additional values |No |int, boolean, string, object, or array |Additional values for the keys. |
+| more keys |No |string |More names of the keys. |
+| more values |No |int, boolean, string, object, or array |More values for the keys. |
The function only accepts an even number of parameters. Each key must have a matching value.
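To make the key/value pairing concrete, here's a minimal sketch (the output name `newObject` is illustrative):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [],
  "outputs": {
    "newObject": {
      "type": "object",
      "value": "[createObject('intProp', 1, 'stringProp', 'example', 'boolProp', true())]"
    }
  }
}
```

With these values the output is the object `{"intProp": 1, "stringProp": "example", "boolProp": true}`.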
Returns **True** if the value is empty; otherwise, **False**.
### Example
-The following [example template](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/functions/empty.json) checks whether an array, object, and string are empty.
+The following example checks whether an array, object, and string are empty.
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "testArray": {
- "type": "array",
- "defaultValue": []
- },
- "testObject": {
- "type": "object",
- "defaultValue": {}
- },
- "testString": {
- "type": "string",
- "defaultValue": ""
- }
- },
- "resources": [
- ],
- "outputs": {
- "arrayEmpty": {
- "type": "bool",
- "value": "[empty(parameters('testArray'))]"
- },
- "objectEmpty": {
- "type": "bool",
- "value": "[empty(parameters('testObject'))]"
- },
- "stringEmpty": {
- "type": "bool",
- "value": "[empty(parameters('testString'))]"
- }
- }
-}
-```
The output from the preceding example with the default values is:
Returns a single array or object with the common elements from the parameters.
|: |: |: |: |
| arg1 |Yes |array or object |The first value to use for finding common elements. |
| arg2 |Yes |array or object |The second value to use for finding common elements. |
-| additional arguments |No |array or object |Additional values to use for finding common elements. |
+| more arguments |No |array or object |More values to use for finding common elements. |
### Return value
An array or object with the common elements.
### Example
-The following [example template](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/functions/intersection.json) shows how to use intersection with arrays and objects:
+The following example shows how to use intersection with arrays and objects.
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "firstObject": {
- "type": "object",
- "defaultValue": {
- "one": "a",
- "two": "b",
- "three": "c"
- }
- },
- "secondObject": {
- "type": "object",
- "defaultValue": {
- "one": "a",
- "two": "z",
- "three": "c"
- }
- },
- "firstArray": {
- "type": "array",
- "defaultValue": [ "one", "two", "three" ]
- },
- "secondArray": {
- "type": "array",
- "defaultValue": [ "two", "three" ]
- }
- },
- "resources": [
- ],
- "outputs": {
- "objectOutput": {
- "type": "object",
- "value": "[intersection(parameters('firstObject'), parameters('secondObject'))]"
- },
- "arrayOutput": {
- "type": "array",
- "value": "[intersection(parameters('firstArray'), parameters('secondArray'))]"
- }
- }
-}
-```
The output from the preceding example with the default values is:
An int.
### Example
-The following [example template](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/functions/length.json) shows how to use length with an array and string:
+The following example shows how to use length with an array and string:
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "arrayToTest": {
- "type": "array",
- "defaultValue": [
- "one",
- "two",
- "three"
- ]
- },
- "stringToTest": {
- "type": "string",
- "defaultValue": "One Two Three"
- },
- "objectToTest": {
- "type": "object",
- "defaultValue": {
- "propA": "one",
- "propB": "two",
- "propC": "three",
- "propD": {
- "propD-1": "sub",
- "propD-2": "sub"
- }
- }
- }
- },
- "resources": [],
- "outputs": {
- "arrayLength": {
- "type": "int",
- "value": "[length(parameters('arrayToTest'))]"
- },
- "stringLength": {
- "type": "int",
- "value": "[length(parameters('stringToTest'))]"
- },
- "objectLength": {
- "type": "int",
- "value": "[length(parameters('objectToTest'))]"
- }
- }
-}
-```
The output from the preceding example with the default values is:
Returns null.
-The `null` function is not available in Bicep. Use the `null` keyword instead.
+The `null` function isn't available in Bicep. Use the `null` keyword instead.
### Parameters
Returns a single array or object with all elements from the parameters. Duplicat
|: |: |: |: |
| arg1 |Yes |array or object |The first value to use for joining elements. |
| arg2 |Yes |array or object |The second value to use for joining elements. |
-| additional arguments |No |array or object |Additional values to use for joining elements. |
+| more arguments |No |array or object |More values to use for joining elements. |
### Return value
An array or object.
### Example
-The following [example template](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/functions/union.json) shows how to use union with arrays and objects:
+The following example shows how to use union with arrays and objects:
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "firstObject": {
- "type": "object",
- "defaultValue": {
- "one": "a",
- "two": "b",
- "three": "c1"
- }
- },
- "secondObject": {
- "type": "object",
- "defaultValue": {
- "three": "c2",
- "four": "d",
- "five": "e"
- }
- },
- "firstArray": {
- "type": "array",
- "defaultValue": [ "one", "two", "three" ]
- },
- "secondArray": {
- "type": "array",
- "defaultValue": [ "three", "four" ]
- }
- },
- "resources": [
- ],
- "outputs": {
- "objectOutput": {
- "type": "object",
- "value": "[union(parameters('firstObject'), parameters('secondObject'))]"
- },
- "arrayOutput": {
- "type": "array",
- "value": "[union(parameters('firstArray'), parameters('secondArray'))]"
- }
- }
-}
-```
The output from the preceding example with the default values is:
## Next steps
-* For a description of the sections in an ARM template, see [Understand the structure and syntax of ARM templates](./syntax.md).
+* For a description of the sections in an ARM template, see [Understand the structure and syntax of ARM templates](./syntax.md).
azure-resource-manager Template Functions String https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-functions-string.md
Title: Template functions - string
description: Describes the functions to use in an Azure Resource Manager template (ARM template) to work with strings.
Previously updated : 05/14/2021
Last updated : 09/08/2021

# String functions for ARM templates
Combines multiple string values and returns the concatenated string, or combines
| Parameter | Required | Type | Description |
|: |: |: |: |
| arg1 |Yes |string or array |The first string or array for concatenation. |
-| additional arguments |No |string or array |Additional strings or arrays in sequential order for concatenation. |
+| more arguments |No |string or array |More strings or arrays in sequential order for concatenation. |
This function can take any number of arguments, and can accept either strings or arrays for the parameters. However, you can't provide both arrays and strings for parameters. Strings are only concatenated with other strings.
A string or array of concatenated values.
### Examples
-The following [example template](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/functions/concat-string.json) shows how to combine two string values and return a concatenated string.
+The following example shows how to combine two string values and return a concatenated string.
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "prefix": {
- "type": "string",
- "defaultValue": "prefix"
- }
- },
- "resources": [],
- "outputs": {
- "concatOutput": {
- "type": "string",
- "value": "[concat(parameters('prefix'), '-', uniqueString(resourceGroup().id))]"
- }
- }
-}
-```
The output from the preceding example with the default values is:
| - | - | -- |
| concatOutput | String | prefix-5yj4yjf5mbg72 |
-The following [example template](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/functions/concat-array.json) shows how to combine two arrays.
+The following example shows how to combine two arrays.
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "firstArray": {
- "type": "array",
- "defaultValue": [
- "1-1",
- "1-2",
- "1-3"
- ]
- },
- "secondArray": {
- "type": "array",
- "defaultValue": [
- "2-1",
- "2-2",
- "2-3"
- ]
- }
- },
- "resources": [
- ],
- "outputs": {
- "return": {
- "type": "array",
- "value": "[concat(parameters('firstArray'), parameters('secondArray'))]"
- }
- }
-}
-```
The output from the preceding example with the default values is:
Checks whether an array contains a value, an object contains a key, or a string
### Examples
-The following [example template](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/functions/contains.json) shows how to use contains with different types:
+The following example shows how to use contains with different types:
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "stringToTest": {
- "type": "string",
- "defaultValue": "OneTwoThree"
- },
- "objectToTest": {
- "type": "object",
- "defaultValue": {
- "one": "a",
- "two": "b",
- "three": "c"
- }
- },
- "arrayToTest": {
- "type": "array",
- "defaultValue": [ "one", "two", "three" ]
- }
- },
- "resources": [
- ],
- "outputs": {
- "stringTrue": {
- "type": "bool",
- "value": "[contains(parameters('stringToTest'), 'e')]"
- },
- "stringFalse": {
- "type": "bool",
- "value": "[contains(parameters('stringToTest'), 'z')]"
- },
- "objectTrue": {
- "type": "bool",
- "value": "[contains(parameters('objectToTest'), 'one')]"
- },
- "objectFalse": {
- "type": "bool",
- "value": "[contains(parameters('objectToTest'), 'a')]"
- },
- "arrayTrue": {
- "type": "bool",
- "value": "[contains(parameters('arrayToTest'), 'three')]"
- },
- "arrayFalse": {
- "type": "bool",
- "value": "[contains(parameters('arrayToTest'), 'four')]"
- }
- }
-}
-```
The output from the preceding example with the default values is:
Returns **True** if the value is empty; otherwise, **False**.
### Examples
-The following [example template](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/functions/empty.json) checks whether an array, object, and string are empty.
+The following example checks whether an array, object, and string are empty.
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "testArray": {
- "type": "array",
- "defaultValue": []
- },
- "testObject": {
- "type": "object",
- "defaultValue": {}
- },
- "testString": {
- "type": "string",
- "defaultValue": ""
- }
- },
- "resources": [
- ],
- "outputs": {
- "arrayEmpty": {
- "type": "bool",
- "value": "[empty(parameters('testArray'))]"
- },
- "objectEmpty": {
- "type": "bool",
- "value": "[empty(parameters('testObject'))]"
- },
- "stringEmpty": {
- "type": "bool",
- "value": "[empty(parameters('testString'))]"
- }
- }
-}
-```
The output from the preceding example with the default values is:
A string of the first character, or the type (string, int, array, or object) of
### Examples
-The following [example template](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/functions/first.json) shows how to use the first function with an array and string.
+The following example shows how to use the first function with an array and string.
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "arrayToTest": {
- "type": "array",
- "defaultValue": [ "one", "two", "three" ]
- }
- },
- "resources": [
- ],
- "outputs": {
- "arrayOutput": {
- "type": "string",
- "value": "[first(parameters('arrayToTest'))]"
- },
- "stringOutput": {
- "type": "string",
- "value": "[first('One Two Three')]"
- }
- }
-}
-```
The output from the preceding example with the default values is:
Creates a formatted string from input values.
|: |: |: |: |
| formatString | Yes | string | The composite format string. |
| arg1 | Yes | string, integer, or boolean | The value to include in the formatted string. |
-| additional arguments | No | string, integer, or boolean | Additional values to include in the formatted string. |
+| more arguments | No | string, integer, or boolean | More values to include in the formatted string. |
### Remarks
Use this function to format a string in your template. It uses the same formatti
### Examples
-The following example template shows how to use the format function.
+The following example shows how to use the format function.
```json {
Creates a value in the format of a globally unique identifier based on the value
| Parameter | Required | Type | Description |
|: |: |: |: |
| baseString |Yes |string |The value used in the hash function to create the GUID. |
-| additional parameters as needed |No |string |You can add as many strings as needed to create the value that specifies the level of uniqueness. |
+| more parameters as needed |No |string |You can add as many strings as needed to create the value that specifies the level of uniqueness. |
### Remarks
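As a sketch of typical usage (the output names are illustrative), guid can scope a value to a subscription or to a deployment by varying the seed strings:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [],
  "outputs": {
    "guidPerSubscription": {
      "type": "string",
      "value": "[guid(subscription().subscriptionId)]"
    },
    "guidPerDeployment": {
      "type": "string",
      "value": "[guid(subscription().subscriptionId, resourceGroup().id, deployment().name)]"
    }
  }
}
```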
A string of the last character, or the type (string, int, array, or object) of t
### Examples
-The following [example template](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/functions/last.json) shows how to use the last function with an array and string.
+The following example shows how to use the last function with an array and string.
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "arrayToTest": {
- "type": "array",
- "defaultValue": [ "one", "two", "three" ]
- }
- },
- "resources": [
- ],
- "outputs": {
- "arrayOutput": {
- "type": "string",
- "value": "[last(parameters('arrayToTest'))]"
- },
- "stringOutput": {
- "type": "string",
- "value": "[last('One Two Three')]"
- }
- }
-}
-```
The output from the preceding example with the default values is:
An int.
### Examples
-The following [example template](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/functions/length.json) shows how to use length with an array and string:
+The following example shows how to use length with an array and string:
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "arrayToTest": {
- "type": "array",
- "defaultValue": [
- "one",
- "two",
- "three"
- ]
- },
- "stringToTest": {
- "type": "string",
- "defaultValue": "One Two Three"
- },
- "objectToTest": {
- "type": "object",
- "defaultValue": {
- "propA": "one",
- "propB": "two",
- "propC": "three",
- "propD": {
- "propD-1": "sub",
- "propD-2": "sub"
- }
- }
- }
- },
- "resources": [],
- "outputs": {
- "arrayLength": {
- "type": "int",
- "value": "[length(parameters('arrayToTest'))]"
- },
- "stringLength": {
- "type": "int",
- "value": "[length(parameters('stringToTest'))]"
- },
- "objectLength": {
- "type": "int",
- "value": "[length(parameters('objectToTest'))]"
- }
- }
-}
-```
The output from the preceding example with the default values is:
A string containing 36 characters in the format of a globally unique identifier.
### Examples
-The following example template shows a parameter with a new identifier.
+The following example shows a parameter with a new identifier.
```json {
An array or string.
### Examples
-The following [example template](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/functions/skip.json) skips the specified number of elements in the array, and the specified number of characters in a string.
+The following example skips the specified number of elements in the array, and the specified number of characters in a string.
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "testArray": {
- "type": "array",
- "defaultValue": [
- "one",
- "two",
- "three"
- ]
- },
- "elementsToSkip": {
- "type": "int",
- "defaultValue": 2
- },
- "testString": {
- "type": "string",
- "defaultValue": "one two three"
- },
- "charactersToSkip": {
- "type": "int",
- "defaultValue": 4
- }
- },
- "resources": [],
- "outputs": {
- "arrayOutput": {
- "type": "array",
- "value": "[skip(parameters('testArray'),parameters('elementsToSkip'))]"
- },
- "stringOutput": {
- "type": "string",
- "value": "[skip(parameters('testString'),parameters('charactersToSkip'))]"
- }
- }
-}
-```
The output from the preceding example with the default values is:
`take(originalValue, numberToTake)`
-Returns a string with the specified number of characters from the start of the string, or an array with the specified number of elements from the start of the array.
+Returns an array or string. An array has the specified number of elements from the start of the array. A string has the specified number of characters from the start of the string.
### Parameters
An array or string.
### Examples
-The following [example template](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/functions/take.json) takes the specified number of elements from the array, and characters from a string.
+The following example takes the specified number of elements from the array, and characters from a string.
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "testArray": {
- "type": "array",
- "defaultValue": [
- "one",
- "two",
- "three"
- ]
- },
- "elementsToTake": {
- "type": "int",
- "defaultValue": 2
- },
- "testString": {
- "type": "string",
- "defaultValue": "one two three"
- },
- "charactersToTake": {
- "type": "int",
- "defaultValue": 2
- }
- },
- "resources": [],
- "outputs": {
- "arrayOutput": {
- "type": "array",
- "value": "[take(parameters('testArray'),parameters('elementsToTake'))]"
- },
- "stringOutput": {
- "type": "string",
- "value": "[take(parameters('testString'),parameters('charactersToTake'))]"
- }
- }
-}
-```
The output from the preceding example with the default values is:
Creates a deterministic hash string based on the values provided as parameters.
| Parameter | Required | Type | Description |
|: |: |: |: |
| baseString |Yes |string |The value used in the hash function to create a unique string. |
-| additional parameters as needed |No |string |You can add as many strings as needed to create the value that specifies the level of uniqueness. |
+| more parameters as needed |No |string |You can add as many strings as needed to create the value that specifies the level of uniqueness. |
### Remarks
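For example, a common pattern (sketched here with an illustrative output name) is to seed the hash with the resource group ID, so the same resource group always yields the same 13-character value:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [],
  "outputs": {
    "uniquePerResourceGroup": {
      "type": "string",
      "value": "[uniqueString(resourceGroup().id)]"
    }
  }
}
```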
Creates an absolute URI by combining the baseUri and the relativeUri string.
| Parameter | Required | Type | Description |
|: |: |: |: |
-| baseUri |Yes |string |The base uri string. Take care to observe the behavior regarding the handling of the trailing slash (`/`), as described following this table. |
+| baseUri |Yes |string |The base URI string. Take care to observe how the trailing slash (`/`) is handled, as described following this table. |
| relativeUri |Yes |string |The relative URI string to add to the base URI string. |
-* If **baseUri** ends in a trailing slash, the result is simply
- **baseUri** followed by **relativeUri**.
+* If **baseUri** ends in a trailing slash, the result is **baseUri** followed by **relativeUri**.
-* If **baseUri** does not end in a trailing slash one of two things
+* If **baseUri** doesn't end in a trailing slash, one of two things
  happens.
  * If **baseUri** has no slashes at all (aside from the `//` near
- the front) the result is simply **baseUri** followed by **relativeUri**.
+ the front) the result is **baseUri** followed by **relativeUri**.
* If **baseUri** has some slashes, but doesn't end with a slash, everything from the last slash onward is removed from **baseUri**
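To make those trailing-slash rules concrete, here's a minimal sketch (the contoso.org URIs and output names are placeholders):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [],
  "outputs": {
    "withTrailingSlash": {
      "type": "string",
      "value": "[uri('http://contoso.org/firstpath/', 'myscript.sh')]"
    },
    "withoutTrailingSlash": {
      "type": "string",
      "value": "[uri('http://contoso.org/firstpath', 'myscript.sh')]"
    }
  }
}
```

Per the rules above, the first output is `http://contoso.org/firstpath/myscript.sh` and the second is `http://contoso.org/myscript.sh`.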
The output from the preceding example with the default values is:
* For a description of the sections in an ARM template, see [Understand the structure and syntax of ARM templates](./syntax.md).
* To merge multiple templates, see [Using linked and nested templates when deploying Azure resources](linked-templates.md).
* To iterate a specified number of times when creating a type of resource, see [Resource iteration in ARM templates](copy-resources.md).
-* To see how to deploy the template you've created, see [Deploy resources with ARM templates and Azure PowerShell](deploy-powershell.md).
+* To see how to deploy the template you've created, see [Deploy resources with ARM templates and Azure PowerShell](deploy-powershell.md).
azure-sql Transparent Data Encryption Byok Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/transparent-data-encryption-byok-overview.md
Auditors can use Azure Monitor to review key vault AuditEvent logs, if logging i
### Requirements for configuring TDE protector

-- TDE protector can be only asymmetric, RSA or RSA HSM key. The supported key lengths are 2048 bytes and 3072 bytes.
+- The TDE protector can only be an asymmetric RSA or RSA HSM key. The supported key lengths are 2048-bit and 3072-bit.
- The key activation date (if set) must be a date and time in the past. The expiration date (if set) must be a date and time in the future.
You may also want to check the following PowerShell sample scripts for the commo
- [Remove a Transparent Data Encryption (TDE) protector for SQL Database using PowerShell](transparent-data-encryption-byok-remove-tde-protector.md)
-- [Manage Transparent Data Encryption in SQL Managed Instance with your own key using PowerShell](../managed-instance/scripts/transparent-data-encryption-byok-powershell.md?toc=%2fpowershell%2fmodule%2ftoc.json)
+- [Manage Transparent Data Encryption in SQL Managed Instance with your own key using PowerShell](../managed-instance/scripts/transparent-data-encryption-byok-powershell.md?toc=%2fpowershell%2fmodule%2ftoc.json)
azure-sql Change Sql Server Version https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/change-sql-server-version.md
This article describes how to change the version of Microsoft SQL Server on a Wi
To do an in-place upgrade of SQL Server, the following conditions apply:
- The setup media of the desired version of SQL Server is required. Customers who have [Software Assurance](https://www.microsoft.com/licensing/licensing-programs/software-assurance-default) can obtain their installation media from the [Volume Licensing Center](https://www.microsoft.com/Licensing/servicecenter/default.aspx). Customers who don't have Software Assurance can use the setup media from an Azure Marketplace SQL Server VM image that has a later version of SQL Server (typically located in C:\SQLServerFull).
-- Edition upgrades should follow the [support upgrade paths](/sql/database-engine/install-windows/supported-version-and-edition-upgrades-version-15).
+- Version upgrades should follow the [supported upgrade paths](/sql/database-engine/install-windows/supported-version-and-edition-upgrades-version-15).
## Planning for version change
For more information, see the following articles:
- [Overview of SQL Server on a Windows VM](sql-server-on-azure-vm-iaas-what-is-overview.md)
- [FAQ for SQL Server on a Windows VM](frequently-asked-questions-faq.yml)
- [Pricing guidance for SQL Server on a Windows VM](pricing-guidance.md)
-- [Release notes for SQL Server on a Windows VM](doc-changes-updates-release-notes.md)
+- [Release notes for SQL Server on a Windows VM](doc-changes-updates-release-notes.md)
azure-sql Sql Agent Extension Manually Register Single Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-single-vm.md
To register your SQL Server VM with the extension, you'll need:
- An [Azure subscription](https://azure.microsoft.com/free/).
- An Azure Resource Model [Windows Server 2008 (or greater) virtual machine](../../../virtual-machines/windows/quick-create-portal.md) with [SQL Server 2008 (or greater)](https://www.microsoft.com/sql-server/sql-server-downloads) deployed to the public or Azure Government cloud.
- The latest version of [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell (5.0 minimum)](/powershell/azure/install-az-ps).
+- .NET Framework 4.5.1 or later.
## Register subscription with RP
azure-video-analyzer Embed Player In Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/embed-player-in-power-bi.md
Azure Video Analyzer enables you to [record](detect-motion-record-video-clips-cl
Dashboards are an insightful way to monitor your business and view all your most important metrics at a glance. A Power BI dashboard is a powerful tool to combine video with multiple sources of data, including telemetry from IoT Hub. In this tutorial, you will learn how to add one or more player widgets to a dashboard using the [Microsoft Power BI](https://powerbi.microsoft.com/) web service.
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/power-bi/embed-block-diagram.png" alt-text="Block diagram to embed Azure Video Analyzer player widget in Microsoft Power BI.":::
+
## Suggested pre-reading

- Azure Video Analyzer [player widget](player-widget.md)
Dashboards are an insightful way to monitor your business and view all your most
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) if you don't already have one.
- Complete either [Detect motion and record video](detect-motion-record-video-clips-cloud.md) or [Continuous video recording](continuous-video-recording.md) - a pipeline with video sink is required.
- > [!NOTE]
+ > [!NOTE]
   > Your video analyzer account should have a minimum of one video recorded to proceed. Check the list of videos by logging into your Azure Video Analyzer account > Videos > Video Analyzer section.
- A [Power BI](https://powerbi.microsoft.com/) account.
Dashboards are an insightful way to monitor your business and view all your most
1. Open the [Power BI service](http://app.powerbi.com/) in your browser. From the navigation pane, select **My Workspace**.
- :::image type="content" source="./media/power-bi/power-bi-workspace.png" alt-text="Screenshot of Power BI workspace home page.":::
+ :::image type="content" source="./media/power-bi/powerbi-ws.png" alt-text="Screenshot of Power BI workspace home page.":::
2. Create a new dashboard by clicking **New** > **Dashboard**, or open an existing dashboard. Select the **Edit** drop-down arrow and then **Add a tile**. Select **Web content** > **Next**.
3. In **Add web content tile**, enter your **Embed code** from the previous section. Click **Apply**.
Dashboards are an insightful way to monitor your business and view all your most
4. You will see a player widget pinned to the dashboard with a video.
- :::image type="content" source="./media/power-bi/one-player-added.png" alt-text="Screenshot of one video player widget added.":::
+ :::image type="content" source="./media/power-bi/one-player.png" alt-text="Screenshot of one video player widget added.":::
5. To add more videos from the Azure Video Analyzer Videos section, follow the same steps in this section.
-> [!NOTE]
+> [!NOTE]
> To add multiple videos from the same Video Analyzer account, a single set of access policy and token is sufficient. Here is a sample of multiple videos pinned to a single Power BI dashboard.

> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/power-bi/two-players-added.png" alt-text="Screenshot of two video player widgets added as an example.":::
+> :::image type="content" source="./media/power-bi/two-players.png" alt-text="Screenshot of two video player widgets added as an example.":::
## Next steps

-- Learn more about the [widget API](https://github.com/Azure/video-analyzer/tree/main/widgets)
+- [Real-time visualization of AI inference events in Power BI](visualize-ai-events-power-bi.md)
+- Learn more about the [widget API](https://github.com/Azure/video-analyzer/tree/main/widgets)
azure-video-analyzer Visualize Ai Events Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/visualize-ai-events-power-bi.md
+
+ Title: Real-time visualization of AI inference events with Power BI
+description: You can use Azure Video Analyzer for continuous video recording or event-based recording. This tutorial walks through the steps to visualize AI inference events from IoT Hub in Microsoft Power BI in real time.
++ Last updated : 09/08/2021++
+# Visualize AI inference events with Power BI
+
+Azure Video Analyzer provides the capability to capture, record, and analyze live video, along with publishing the results of video analysis in the form of AI inference events to the [IoT Edge Hub](../../iot-edge/iot-edge-runtime.md?view=iotedge-2020-11&preserve-view=true#iot-edge-hub). These AI inference events can then be routed to other destinations, including Visual Studio Code and Azure services such as Time Series Insights and Event Hubs.
+
+Dashboards are an insightful way to monitor your business and visualize all your important metrics at a glance. You can visualize AI inference events generated by Video Analyzer using [Microsoft Power BI](https://powerbi.microsoft.com/) via [Azure Stream Analytics](https://azure.microsoft.com/services/stream-analytics/#overview) to quickly gain insights and share dashboards with peers in your organization.
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/power-bi/tutorial-block-diagram.png" alt-text="Block diagram to connect Azure Video Analyzer to Microsoft Power BI via Azure Stream Analytics.":::
+
+In this tutorial, you will:
+
+- Create and run a Stream Analytics job to retrieve the necessary data from IoT Hub and send it to Power BI
+- Run a live pipeline that generates inference events
+- Create a Power BI dashboard to visualize the AI inferences
+
+## Suggested pre-reading
+
+- [Monitoring and logging](monitor-log-edge.md) in Video Analyzer
+- Reading [device-to-cloud messages from IoT Hub](../../iot-hub/iot-hub-devguide-messages-read-builtin.md) built-in endpoints
+- Introduction to [Power BI dashboards](/power-bi/create-reports/service-dashboards)
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) if you don't already have one.
+- This tutorial is based on using the [Line Crossing sample](use-line-crossing.md) to detect when objects cross a virtual line in a live video feed. You can choose to create visualizations for other pipelines - **a pipeline with an IoT Hub message sink is required**. Make sure to have the live pipeline created, but activate it only after creating a Stream Analytics job.
+
+ > [!NOTE]
+ > The [Line Crossing sample](use-line-crossing.md) uses a 5-minute video recording. For best results in visualization, use the 60-minute recording of vehicles on a freeway available in [Other dataset](https://github.com/Azure/video-analyzer/tree/main/media#other-dataset). See the Configuration and deployment section of the [FAQ](faq-edge.yml) for how to add sample video files to the RTSP simulator. Once added, edit your operations.json file: set properties -> parameters -> value to `"value": "rtsp://rtspsim:554/media/camera-3600s.mkv"`.
+
+- A [Power BI](https://powerbi.microsoft.com/) account.
+
+## Create and run a Stream Analytics Job
+
+[Stream Analytics](https://azure.microsoft.com/services/stream-analytics/#overview) is a fully managed, real-time analytics service designed to help you analyze and process fast-moving streams of data. In this section, you will create a Stream Analytics job and define the inputs, outputs, and the query used to retrieve the required data.
+
+### Create a Stream Analytics job
+
+1. In the [Azure portal](https://portal.azure.com/), select **Create a resource**. Type _Stream Analytics Job_ in the search box and select it from the drop-down list. On the **Stream Analytics job** overview page, select **Create**.
+2. Enter the following information for the job:
+
+ - **Job name**: The name of the job. The name must be globally unique.
+ - **Subscription**: Choose the same subscription used for setting up the Line Crossing sample.
+ - **Resource group**: Use the same resource group that your IoT Edge hub uses as part of setting up the Line Crossing sample.
+ - **Location**: Use the same location as your resource group.
+
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/power-bi/create-asa-job.png" alt-text="Screenshot to create a new Stream Analytics Job.":::
+
+3. Select **Create**.
+
+### Add an input to the Stream Analytics job
+
+1. Open the Stream Analytics job.
+2. Under Job topology, select **Inputs**.
+3. In the Inputs pane, select **Add stream input**, then select **IoT Hub** from the drop-down list. On the new input pane, enter the following information:
+
+ - **Input alias**: Enter a unique alias for the input, such as "iothubinput".
+ - **Select IoT Hub from your subscription**: Select this radio button.
+ - **Subscription**: Select the Azure subscription you're using for this article.
+ - **IoT Hub**: Select the IoT Hub used in setting up the Line Crossing sample.
+ - **Shared access policy name**: Select the name of the shared access policy you want the Stream Analytics job to use for your IoT hub. For this tutorial, you can select iothubowner. To learn more, see [Access control and permissions](../../iot-hub/iot-hub-dev-guide-sas.md#access-control-and-permissions).
+ - **Shared access policy key**: This field is auto filled based on your selection for the shared access policy name.
+
+Leave all other fields at their defaults.
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/power-bi/add-iothub-input.png" alt-text="Screenshot to add IoT Hub input to Stream Analytics Job.":::
+
+4. Select **Save**.
+
+### Add an output to the Stream Analytics job
+
+1. Under Job topology, select **Outputs**.
+2. In the Outputs pane, select **Add**, and then select **Power BI** from the drop-down list.
+3. On the Power BI - New output pane:
+
+ - **Output alias**: Enter a unique alias for the output, such as "powerbioutput".
+ - **Group workspace**: Select your target group workspace.
+   - **Authentication mode**: Leave the default "User token" for testing purposes.
+
+ > [!NOTE]
+   > For production jobs, we recommend that you [use managed identity to authenticate your Stream Analytics job to Power BI](../../stream-analytics/powerbi-output-managed-identity.md).
+
+ - **Dataset name**: Enter a dataset name.
+ - **Table name**: Enter a table name.
+ - **Authorize**: Select Authorize and follow the prompts to sign into your Power BI account (the token is valid for 90 days).
+
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/power-bi/add-pbi-output.png" alt-text="Screenshot to add Power BI output to Stream Analytics Job.":::
+
+4. Select **Save**.
+
+ > [!NOTE]
+   > For more information about using Power BI as output for a Stream Analytics job, see [Power BI output from Azure Stream Analytics](../../stream-analytics/power-bi-output.md). Learn more about [renewing authorization](../../stream-analytics/stream-analytics-power-bi-dashboard.md#renew-authorization) for your Power BI account.
+
+### Configure the query for Stream Analytics job
+
+1. Under Job topology, select **Query**.
+2. Make the following changes to your query:
+
+```SQL
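+-- Flatten the nested 'inferences' array with CROSS APPLY, keep only the
+-- lineCrossing events, and project the running total plus the processing time.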
+SELECT
+ CAST(InferenceRecords.ArrayValue.event.properties.total AS bigint) as EventTotal,
+ CAST(i.EventProcessedUtcTime AS datetime) as EventProcessedUtcTime
+INTO [YourOutputAlias]
+FROM [YourInputAlias] i
+CROSS APPLY GetArrayElements(inferences) AS InferenceRecords
+WHERE InferenceRecords.ArrayValue.subType = 'lineCrossing'
+```
+
+> [!NOTE]
+> In the above query, the _i_ alias in the FROM clause is syntactically required to fetch the value of EventProcessedUtcTime, which is not nested in the _inferences_ array.
+> The above query is customized to get AI inferences for the [Line Crossing tutorial](use-line-crossing.md).
+> If you're running another pipeline, make sure to customize the query according to that pipeline's AI inference schema. Learn more about [parsing JSON in a Stream Analytics job](../../stream-analytics/stream-analytics-parsing-json.md).
+
+3. Replace [YourOutputAlias] with the output alias used in the step to add an output to the Stream Analytics job, such as "powerbioutput". Note that in the query, the output alias (after INTO) appears before the input alias (after FROM).
+4. Replace [YourInputAlias] with the input alias used in the step to add an input to the Stream Analytics job, such as "iothubinput".
+5. Your query should look similar to the following screenshot. Click **Test query** and verify that the test results show a table of EventTotal and corresponding EventProcessedUtcTime values.
+
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/power-bi/asa-query.png" alt-text="Screenshot to test and save query in Stream Analytics Job.":::
+
+6. Select **Save query**.
+
+### Run the Stream Analytics job
+
+In the Stream Analytics job, select **Overview**, then select **Start** > **Now** > **Start**. Once the job successfully starts, the job status changes from Created to **Running**.
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/power-bi/start-asa.png" alt-text="Screenshot to Start and Run a Stream Analytics Job.":::
+
+> [!NOTE]
+> By default, IoT Hub retains data in the built-in Event Hubs endpoint for one day, up to a maximum of seven days. You can set the retention time during creation of your IoT Hub. Data retention time in IoT Hub depends on your chosen tier and unit type. For more information, see [Read device-to-cloud messages from the built-in endpoint](../../iot-hub/iot-hub-devguide-messages-read-builtin.md). For longer data retention, use [Azure Storage as output](../../stream-analytics/blob-storage-azure-data-lake-gen2-output.md) and then connect Power BI to the files in the storage account.
+
+## Run a sample pipeline
+
+When the Stream Analytics job created in the preceding step is in the **Running** state, go to the [Run the sample program](use-line-crossing.md#run-the-sample-program) section of the Line Crossing tutorial and activate the live pipeline. The live pipeline will start sending AI inference results to IoT Hub, which are then picked up by the Stream Analytics job.
+
+## Create a Power BI dashboard to visualize AI events
+
+In Power BI, you can visualize streaming data in two ways:
+
+1. Create reports from the table created for the streaming dataset, and pin them to a dashboard
+2. Create a dashboard tile with a custom streaming dataset
+
+ > [!NOTE]
+   > In this article, we will use the first method to create reports and then pin them to a dashboard. This method retains data on the visual for a longer duration and aggregates automatically based on incoming data. To learn more about the second method, see [Set up your real-time streaming dataset in Power BI](/power-bi/connect-data/service-real-time-streaming#set-up-your-real-time-streaming-dataset-in-power-bi).
+
+### Create and publish a Power BI report
+
+The following steps show you how to create and publish a report using the Power BI service.
+
+1. Make sure the sample pipeline is activated on your device (you started it in the previous step to run a pipeline).
+2. Sign in to your [Power BI account](https://powerbi.microsoft.com/) and select **Power BI service** from the top menu.
+3. Select the workspace you used from the side menu.
+4. Under the **All** tab or the **Datasets + dataflows** tab, you should see the dataset that you specified in the step to _Add an output to the Stream Analytics job_.
+5. Hover over the dataset name you provided while creating the Stream Analytics output, select the **More options** menu (the three dots to the right of the dataset name), and then select **Create report**.
+
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/power-bi/create-report.png" alt-text="Screenshot to create report in Power BI.":::
+
+6. Create a line chart to show line crossing events in real time.
+
+ - On the **Visualizations** pane of the report creation page, select the **Line chart** icon to add a line chart.
+ - On the **Fields** pane, expand the table that you specified when you created the output for the Stream Analytics job.
+ - Drag **EventProcessedUtcTime** to **Axis** on the **Visualizations** pane.
+ - Drag **EventTotal** to **Values**.
+   - A line chart is created. The x-axis displays date and time in the UTC time zone. The y-axis displays auto-summarized values for **Sum** of EventTotal from the live pipeline. Change the value to **Maximum** of EventTotal to get the most recent value of EventTotal. You can change the aggregation function to show Average, Count (Distinct), and so on.
+
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/power-bi/powerbi-report.png" alt-text="Screenshot of line crossing report in Power BI.":::
+
+7. Save the report by clicking **Save** at the top-right.
+
+### Pin visual to a dashboard
+
+Click **Pin to a dashboard** at the top-right, select where you want to pin this visual (an existing dashboard or a new dashboard), and then follow the prompts accordingly.
+
+## Interpret the dashboard
+
+The line crossing processor node detects when an object crosses a line specified in the topology using the lineCoordinates parameter. When objects cross these coordinates, an event is triggered with:
+
+- EventTotal: The total number of line crossings by any object in any direction (clockwise or counterclockwise) so far since the beginning of the video. Learn more about [line crossing events](use-line-crossing.md#line-crossing-events).
+- Event Processed UTC Time
+
+In the above dashboard, you'll see **EventTotal** increase over time as more objects cross the virtual line. This visualization enables you to quickly identify, for example, that vehicles pass on the freeway at high frequency between 3:52:00 PM and 3:53:30 PM. You can then use these insights to narrow down your analysis of the reasons for traffic congestion at certain times on the freeway.
+
+## Clean up resources
+
+If you want to try other quickstarts or tutorials, keep the resources that you created. Otherwise, go to the Azure portal, go to your resource groups, select the resource group where you ran the prerequisites for this article and created the Stream Analytics job, and then select **Delete resource group**.
+
+You created a dataset **LineCrossingDataset** in Power BI. To remove it, sign in to your [Power BI](https://powerbi.microsoft.com/) account. On the left-hand menu under **Workspaces**, select the workspace you used. In the list of datasets under the **Datasets + dataflows** tab, hover over the dataset. Select the three vertical dots that appear to the right of the dataset name to open the **More options** menu, then select **Delete** and follow the prompts. When you remove the dataset, the report is removed as well.
+
+## Next steps
+
+- Combine video playback with telemetry by [embedding player widget into Power BI](embed-player-in-power-bi.md)
+- Learn more about [Monitoring and Logging events](monitor-log-edge.md)
azure-web-pubsub Howto Websocket Connect https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/howto-websocket-connect.md
Title: How to start WebSocket connection to the Azure Web PubSub service
-description: An instruction on how to start WebSocket connection to the Azure Web PubSub service in different languages
+ Title: How to start a WebSocket connection to Azure Web PubSub
+description: Learn how to start a WebSocket connection to the Azure Web PubSub service in different languages.
Last updated 08/26/2021
-# How to start WebSocket connection to the Azure Web PubSub service
+# Start a WebSocket connection to Azure Web PubSub
-Clients connect to the Azure Web PubSub service using the standard [WebSocket](https://tools.ietf.org/html/rfc6455) protocol. So languages having WebSocket client support can be used to write a client for the service. In below sections, we show several WebSocket client samples in different languages.
+Clients connect to the Azure Web PubSub service by using the standard [WebSocket](https://tools.ietf.org/html/rfc6455) protocol. You can use languages that have WebSocket client support to write a client for the service. In this article, you'll see several WebSocket client samples in different languages.
-## Auth
-The Web PubSub service uses [JWT token](https://tools.ietf.org/html/rfc7519.html) to validate and auth the clients. Clients can either put the token in the `access_token` query parameter, or put it in `Authorization` header when connecting to the service.
+## Authorization
-A typical workflow is the client communicates with its app server first to get the URL of the service and the token. And then the client opens the WebSocket connection to the service using the URL and token it receives.
+Web PubSub uses a [JSON Web Token (JWT)](https://tools.ietf.org/html/rfc7519.html) to validate and authorize clients. Clients can either put the token in the `access_token` query parameter, or put it in the `Authorization` header when connecting to the service.
-The portal also provides a dynamically generated *Client URL* with token for clients to start a quick test:
+Typically, the client communicates with its app server first, to get the URL of the service and the token. Then, the client opens the WebSocket connection to the service by using the URL and token it receives.
+
+The portal also provides a tool to generate the client URL with the token dynamically. This tool can be useful for a quick test.
:::image type="content" source="./media/howto-websocket-connect/generate-client-url.png" alt-text="Screenshot showing where to find the Client URL Generator.":::

> [!NOTE]
-> Make sure to only include necessary roles when generating the token.
+> Make sure to only include necessary roles when you're generating the token.
>
-To simplify the sample workflow, in below sections, we use this temporarily generated URL from portal for the client to connect, using `<Client_URL_From_Portal>` to represent the value. The token generated expires in 50 minutes by default, so don't forget to regenerate one when the token expires.
+In the following sections, to simplify the sample workflow, we use this temporarily generated URL from the portal for the client to connect. We use `<Client_URL_From_Portal>` to represent the value. The token generated expires in 50 minutes by default, so don't forget to regenerate one when the token expires.
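+
+As a hedged sketch (not the full samples that follow), here's how the two token placements might look with the `ws` package in Node.js. `<Client_URL_From_Portal>`, `wss://<host>/client/hubs/<hub>`, and `<token>` are placeholders for values you supply:
+
+```javascript
+const WebSocket = require("ws");
+
+// Option 1: the token travels in the access_token query parameter.
+// The client URL generated from the portal already has this form.
+const byQuery = new WebSocket("<Client_URL_From_Portal>");
+
+// Option 2: the token travels in the Authorization header instead.
+const byHeader = new WebSocket("wss://<host>/client/hubs/<hub>", {
+  headers: { Authorization: "Bearer <token>" },
+});
+```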
-The service supports two types of WebSocket clients, one is the simple WebSocket client, and the other is the PubSub WebSocket client. Here we show how these two kinds of clients connect to the service. Check [WebSocket client protocols for Azure Web PubSub](./concept-client-protocols.md) for the details of these two kinds of clients.
+The service supports two types of WebSocket clients: one is the simple WebSocket client, and the other is the PubSub WebSocket client. Here we show how these two kinds of clients connect to the service. For more information about these clients, see [WebSocket client protocols for Azure Web PubSub](./concept-client-protocols.md).
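+
+To make the distinction concrete, here's a minimal browser sketch. The simple client passes no subprotocol; the PubSub client passes `json.webpubsub.azure.v1`, the subprotocol that also appears in the tutorial samples later in this update:
+
+```javascript
+// Simple WebSocket client: no subprotocol, raw message passing.
+const simpleClient = new WebSocket("<Client_URL_From_Portal>");
+
+// PubSub WebSocket client: negotiates the json.webpubsub.azure.v1 subprotocol,
+// which adds features such as joining groups and publishing from the client.
+const pubsubClient = new WebSocket("<Client_URL_From_Portal>", "json.webpubsub.azure.v1");
+```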
## Dependency
In most modern browsers, the `WebSocket` API is natively supported.
# [Node.js](#tab/javascript)
-* [Node.js 12.x or above](https://nodejs.org)
+* [Node.js 12.x or later](https://nodejs.org)
* `npm install ws`

# [Python](#tab/python)
In most modern browsers, the `WebSocket` API is natively supported.
# [C#](#tab/csharp)
-* [.NET Core 2.1 or above](https://dotnet.microsoft.com/download)
+* [.NET Core 2.1 or later](https://dotnet.microsoft.com/download)
* `dotnet add package Websocket.Client`
- * [Websocket.Client](https://github.com/Marfusios/websocket-client) is a third-party WebSocket client with built-in reconnection and error handling
+ * [Websocket.Client](https://github.com/Marfusios/websocket-client) is a third-party WebSocket client with built-in reconnection and error handling.
# [Java](#tab/java)
-- [Java Development Kit (JDK)](/java/azure/jdk/) version 8 or above.
+- [Java Development Kit (JDK)](/java/azure/jdk/) version 8 or later.
- [Apache Maven](https://maven.apache.org/download.cgi).
-## Simple WebSocket Client
+## Simple WebSocket client
# [In Browser](#tab/browser)
-Inside the `script` block of the html page:
+Inside the `script` block of the HTML page:
```html
<script>
    // Don't forget to replace this <Client_URL_From_Portal> with the value fetched from the portal
namespace subscriber
// Don't forget to replace this <Client_URL_From_Portal> with the value fetched from the portal
using (var client = new WebsocketClient(new Uri("<Client_URL_From_Portal>"))) {
- // Disable the auto disconnect and reconnect because the sample would like the client to stay online even no data comes in
+ // Disable the auto disconnect and reconnect because the sample would like the client to stay online even if no data comes in
client.ReconnectTimeout = null; client.MessageReceived.Subscribe(msg => Console.WriteLine($"Message received: {msg}")); await client.Start();
public final class SimpleClient {
-## PubSub WebSocket Client
+## PubSub WebSocket client
# [In Browser](#tab/browser)
-Inside the `script` block of the html page:
+Inside the `script` block of the HTML page:
```html
<script>
    // Don't forget to replace this <Client_URL_From_Portal> with the value fetched from the portal
namespace subscriber
return inner; })) {
- // Disable the auto disconnect and reconnect because the sample would like the client to stay online even no data comes in
+ // Disable the auto disconnect and reconnect because the sample would like the client to stay online even if no data comes in
client.ReconnectTimeout = null; client.MessageReceived.Subscribe(msg => Console.WriteLine($"Message received: {msg}")); await client.Start();
public final class SubprotocolClient {
-## Next step
+## Next steps
-In this article, we show how to connect to the service using the URL generated from the portal. Check below tutorials to see how the clients communicate with the app server to get the URL in real-world applications.
+In this article, you learned how to connect to the service by using the URL generated from the portal. To see how the clients communicate with the app server to get the URL in real-world applications, read these tutorials and check out the samples.
> [!div class="nextstepaction"]
> [Tutorial: Create a chatroom with Azure Web PubSub](./tutorial-build-chat.md)
azure-web-pubsub Reference Server Sdk Csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/reference-server-sdk-csharp.md
Title: Reference - .NET server SDK for Azure Web PubSub service
-description: The reference describes the .NET server SDK for Azure Web PubSub service
+ Title: Reference - .NET server SDK for Azure Web PubSub
+description: This reference describes the .NET server SDK for the Azure Web PubSub service.
Last updated 08/26/2021
-# .NET server SDK for Azure Web PubSub service
+# .NET server SDK for Azure Web PubSub
-This library can be used to do the following actions. Details about the terms used here are described in [Key concepts](#key-concepts) section.
+You can use this library to:
- Send messages to hubs and groups.
- Send messages to particular users and connections.
- Organize users and connections into groups.
-- Close connections
-- Grant, revoke, and check permissions for an existing connection
+- Close connections.
+- Grant, revoke, and check permissions for an existing connection.
+
+For more information about this terminology, see [Key concepts](#key-concepts).
[Source code][code] | [Package][package] |
This library can be used to do the following actions. Details about the terms us
[Product documentation](https://aka.ms/awps/doc) | [Samples][samples_ref]
-## Getting started
-### Install the package
+## Get started
Install the client library from [NuGet](https://www.nuget.org/):
dotnet add package Azure.Messaging.WebPubSub --prerelease
### Prerequisites

- An [Azure subscription][azure_sub].
-- An existing Azure Web PubSub service instance.
+- An existing instance of the Azure Web PubSub service.
### Authenticate the client
-In order to interact with the service, you'll need to create an instance of the WebPubSubServiceClient class. To make this possible, you'll need the connection string or a key, which you can access in the Azure portal.
+To interact with the service, you'll need to create an instance of the `WebPubSubServiceClient` class. To make this possible, you'll need the connection string or a key, which you can access in the Azure portal.
### Create a `WebPubSubServiceClient`
+Here's how:
+
```csharp
var serviceClient = new WebPubSubServiceClient(new Uri("<endpoint>"), "<hub>", new AzureKeyCredential("<access-key>"));
```

## Key concepts

## Examples
serviceClient.SendToAll(RequestContent.Create(stream), ContentType.ApplicationOc
## Troubleshooting
-### Setting up console logging
-You can also easily [enable console logging](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/core/Azure.Core/samples/Diagnostics.md#logging) if you want to dig deeper into the requests you're making against the service.
+If you want to dig deeper into the requests you're making against the service, you can [enable console logging](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/core/Azure.Core/samples/Diagnostics.md#logging).
[azure_sub]: https://azure.microsoft.com/free/
[samples_ref]: https://github.com/Azure/azure-webpubsub/tree/main/samples/csharp
azure-web-pubsub Reference Server Sdk Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/reference-server-sdk-java.md
Title: Reference - Java server SDK for Azure Web PubSub service
-description: The reference describes the Java server SDK for Azure Web PubSub service
+ Title: Reference - Java server SDK for Azure Web PubSub
+description: This reference describes the Java server SDK for the Azure Web PubSub service.
Last updated 08/26/2021
-# Java server SDK for Azure Web PubSub service
+# Java server SDK for Azure Web PubSub
-Use the library to:
+You can use this library to:
-- Send messages to hubs and groups.
+- Send messages to hubs and groups.
- Send messages to particular users and connections.
- Organize users and connections into groups.
-- Close connections
-- Grant/revoke/check permissions for an existing connection
+- Close connections.
+- Grant, revoke, and check permissions for an existing connection.
[Source code][source_code] | [API Reference Documentation][api] | [Product Documentation][product_documentation] | [Samples][samples_readme]
-## Getting started
+## Get started
-### Prerequisites
+Before you get started, make sure that you have the following prerequisites:
- A [Java Development Kit (JDK)][jdk_link], version 8 or later.
-- [Azure Subscription][azure_subscription]
+- An [Azure subscription][azure_subscription].
-### Include the Package
+### Include the package
[//]: # ({x-version-update-start;com.azure:azure-messaging-webpubsub;current})
Use the library to:
[//]: # ({x-version-update-end})
-### Create a Web PubSub client using connection string
+### Create a Web PubSub client by using a connection string
```java
WebPubSubServiceClient webPubSubServiceClient = new WebPubSubClientBuilder()
WebPubSubServiceClient webPubSubServiceClient = new WebPubSubClientBuilder()
    .buildClient();
```
-### Create a Web PubSub client using access key
+### Create a Web PubSub client by using an access key
```java
WebPubSubServiceClient webPubSubServiceClient = new WebPubSubClientBuilder()
WebPubSubServiceClient webPubSubServiceClient = new WebPubSubClientBuilder()
    .buildClient();
```
-### Create a Web PubSub Group client
+### Create a Web PubSub group client
```java WebPubSubServiceClient webPubSubServiceClient = new WebPubSubClientBuilder() .credential(new AzureKeyCredential("{access-key}"))
WebPubSubGroup javaGroup = webPubSubServiceClient.getGroup("java");
## Examples
-### Broadcast message to entire hub
+### Broadcast a message to an entire hub
```java
webPubSubServiceClient.sendToAll("Hello world!");
```
-### Broadcast message to a group
+### Broadcast a message to a group
```java
WebPubSubGroup javaGroup = webPubSubServiceClient.getGroup("Java");
javaGroup.sendToAll("Hello Java!");
```
-### Send message to a connection
+### Send a message to a connection
```java
webPubSubServiceClient.sendToConnection("myconnectionid", "Hello connection!");
webPubSubServiceClient.sendToUser("Andy", "Hello Andy!");
### Enable client logging

You can set the `AZURE_LOG_LEVEL` environment variable to view logging statements made in the client library. For
-example, setting `AZURE_LOG_LEVEL=2` would show all informational, warning, and error log messages. The log levels can
-be found here: [log levels][log_levels].
+example, setting `AZURE_LOG_LEVEL=2` shows all informational, warning, and error log messages. You can find the log levels here: [log levels][log_levels].
### Default HTTP Client

All client libraries by default use the Netty HTTP client. Adding the above dependency will automatically configure
the client library to use the Netty HTTP client. Configuring or changing the HTT
### Default SSL library

All client libraries, by default, use the Tomcat-native Boring SSL library to enable native-level performance for SSL
-operations. The Boring SSL library is an uber jar containing native libraries for Linux / macOS / Windows, and provides
+operations. The Boring SSL library contains native libraries for Linux, macOS, and Windows, and provides
better performance compared to the default SSL implementation within the JDK. For more information, including how to reduce the dependency size, see the [performance tuning][performance_tuning] section of the wiki.
azure-web-pubsub Reference Server Sdk Js https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/reference-server-sdk-js.md
Title: Reference - JavaScript SDK for the Azure Web PubSub service
-description: The reference describes the JavaScript SDK for the Azure Web PubSub service
+ Title: Reference - JavaScript SDK for Azure Web PubSub
+description: This reference describes the JavaScript SDK for the Azure Web PubSub service.
Last updated 08/26/2021
-# JavaScript SDK for the Azure Web PubSub service
+# JavaScript SDK for Azure Web PubSub
-There are 2 libraries offered for JavaScript:
-- [Service client library](#service-client-library) to
- - Send messages to hubs and groups.
- - Send messages to particular users and connections.
- - Organize users and connections into groups.
- - Close connections
- - Grant/revoke/check permissions for an existing connection
-- [Express middleware](#express) to handle incoming client events
- - Handle abuse validation requests
- - Handle client events requests
+There are two libraries offered for JavaScript: the service client library and express middleware. The following sections contain more information about these libraries.
<a name="service-client-library"></a>

## Azure Web PubSub service client library for JavaScript
-Use the library to:
-- Send messages to hubs and groups.
+You can use this library to:
+- Send messages to hubs and groups.
- Send messages to particular users and connections.
- Organize users and connections into groups.
-- Close connections
-- Grant/revoke/check permissions for an existing connection
+- Close connections.
+- Grant, revoke, and check permissions for an existing connection.
[Source code](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/web-pubsub/web-pubsub) | [Package (NPM)](https://www.npmjs.com/package/@azure/web-pubsub) |
Use the library to:
[Product documentation](https://aka.ms/awps/doc) | [Samples][samples_ref]
-### Getting started
+### Get started
-#### Currently supported environments
--- [Node.js](https://nodejs.org/) version 8.x.x or higher-
-#### Prerequisites
+Use [Node.js](https://nodejs.org/) version 8.x.x or later. Additionally, make sure you have the following prerequisites:
- An [Azure subscription][azure_sub].
- An existing Azure Web PubSub service instance.
-#### 1. Install the `@azure/web-pubsub` package
+#### Install the `@azure/web-pubsub` package
```bash
npm install @azure/web-pubsub
```
-#### 2. Create and authenticate a WebPubSubServiceClient
+#### Create and authenticate `WebPubSubServiceClient`
```js
const { WebPubSubServiceClient } = require("@azure/web-pubsub");
const { WebPubSubServiceClient } = require("@azure/web-pubsub");
const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>");
```
-You can also authenticate the `WebPubSubServiceClient` using an endpoint and an `AzureKeyCredential`:
+You can also authenticate `WebPubSubServiceClient` by using an endpoint and an `AzureKeyCredential`:
```js
const { WebPubSubServiceClient, AzureKeyCredential } = require("@azure/web-pubsub");
await serviceClient.sendToAll(payload.buffer);
### Troubleshooting
-#### Enable logs
-
-You can set the following environment variable to get the debug logs when using this library.
+You can set the following environment variable to get the debug logs when you're using this library:
- Getting debug logs from the SignalR client library
You can set the following environment variable to get the debug logs when using
export AZURE_LOG_LEVEL=verbose
```
-For more detailed instructions on how to enable logs, you can look at the [@azure/logger package docs](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/core/logger).
+For more detailed instructions on how to enable logs, see the [@azure/logger package docs](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/core/logger).
<a name="express"></a>
-## Azure Web PubSub CloudEvents handlers for Express
+## Azure Web PubSub CloudEvents handlers for express
-Use the express library to:
-- Add Web PubSub CloudEvents middleware to handle incoming client events
- - Handle abuse validation requests
- - Handle client events requests
+You can use the express library to:
+- Add Web PubSub CloudEvents middleware to handle incoming client events.
+ - Handle abuse validation requests.
+ - Handle client events requests.
[Source code](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/web-pubsub/web-pubsub-express) | [Package (NPM)](https://www.npmjs.com/package/@azure/web-pubsub-express) |
Use the express library to:
[Product documentation](https://aka.ms/awps/doc) | [Samples][samples_ref]
-### Getting started
-
-#### Currently supported environments
+### Get started
-- [Node.js](https://nodejs.org/) version 8.x.x or higher-- [Express](https://expressjs.com/) version 4.x.x or higher-
-#### Prerequisites
+Use [Node.js](https://nodejs.org/) version 8.x.x or later, or [Express](https://expressjs.com/) version 4.x.x or later. Additionally, make sure you have the following prerequisites:
- An [Azure subscription][azure_sub].
- An existing Azure Web PubSub endpoint.
-#### 1. Install the `@azure/web-pubsub-express` package
+#### Install the `@azure/web-pubsub-express` package
```bash
npm install @azure/web-pubsub-express
```
-#### 2. Create a WebPubSubEventHandler
+#### Create `WebPubSubEventHandler`
```js
const express = require("express");
app.listen(3000, () =>
### Key concepts
-#### Client Events
-
-Events are created during the lifecycle of a client connection. For example, a simple WebSocket client connection creates a `connect` event when it tries to connect to the service, a `connected` event when it successfully connected to the service, a `message` event when it sends messages to the service and a `disconnected` event when it disconnects from the service.
-
-#### Event Handler
+- **Client events:** A client creates events during the lifecycle of a connection. For example, a simple WebSocket client connection creates the following events:
+ - A `connect` event when it tries to connect to the service.
+ - A `connected` event when it successfully connects to the service.
+ - A `message` event when it sends messages to the service.
+ - A `disconnected` event when it disconnects from the service.
-Event handler contains the logic to handle the client events. Event handler needs to be registered and configured in the service through the portal or Azure CLI beforehand. The place to host the event handler logic is generally considered as the server-side.
+- **Event handler:** An event handler contains the logic to handle the client events. The event handler needs to be registered and configured in the service beforehand, through the Azure portal or the Azure CLI. The server generally hosts the event handler logic.
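+
+As a hedged sketch that ties these concepts together (the option names follow the tutorial samples elsewhere in this update; treat the exact shapes as assumptions), a minimal express server registering an event handler might look like this:
+
+```javascript
+const express = require("express");
+const { WebPubSubEventHandler } = require("@azure/web-pubsub-express");
+
+// Handle events for the "chat" hub; ["*"] accepts any allowed endpoint,
+// matching the tutorial sample later in this update.
+const handler = new WebPubSubEventHandler("chat", ["*"], {
+  handleConnect: (req, res) => {
+    // Runs on the connect event; accept the connection.
+    res.success();
+  },
+  handleUserEvent: (req, res) => {
+    // Runs when a client sends a message event.
+    console.log(`Event from ${req.context.userId}: ${req.data}`);
+    res.success();
+  },
+});
+
+const app = express();
+app.use(handler.getMiddleware());
+app.listen(3000, () => console.log("Event handler listening on port 3000"));
+```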
### Troubleshooting
azure-web-pubsub Reference Server Sdk Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/reference-server-sdk-python.md
Title: Reference - Python server SDK for Azure Web PubSub service
-description: The reference describes the Python server SDK for Azure Web PubSub service
+ Title: Reference - Python server SDK for Azure Web PubSub
+description: This reference describes the Python server SDK for the Azure Web PubSub service.
Last updated 08/26/2021
-# Python server SDK for Azure Web PubSub service
+# Python server SDK for Azure Web PubSub
-Use the library to:
+You can use this library to:
-- Send messages to hubs and groups.
+- Send messages to hubs and groups.
- Send messages to particular users and connections.
- Organize users and connections into groups.
-- Close connections
-- Grant/revoke/check permissions for an existing connection
+- Close connections.
+- Grant, revoke, and check permissions for an existing connection.
[Source code](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/webpubsub/azure-messaging-webpubsubservice) | [Package (Pypi)][package] | [API reference documentation](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/webpubsub/azure-messaging-webpubsubservice) | [Product documentation][webpubsubservice_docs] | [Samples][samples_ref]
-## Getting started
+## Get started
-### Installations the package
+Install the package as follows:
```bash
python -m pip install azure-messaging-webpubsubservice
```
-#### Prerequisites
+### Prerequisites
-- Python 2.7, or 3.6 or later is required to use this package.-- You need an [Azure subscription][azure_sub], and an [Azure WebPubSub service instance][webpubsubservice_docs] to use this package.
+- Python 2.7, or 3.6 or later, is required to use this package.
+- An [Azure subscription][azure_sub].
- An existing Azure Web PubSub service instance.
-### Authenticating the client
+### Authenticate the client
-In order to interact with the Azure WebPubSub service, you'll need to create an instance of the [WebPubSubServiceClient][webpubsubservice_client_class] class. In order to authenticate against the service, you need to pass in an AzureKeyCredential instance with endpoint and access key. The endpoint and access key can be found on Azure portal.
+To interact with the Azure WebPubSub service, you need to create an instance of the [`WebPubSubServiceClient`][webpubsubservice_client_class] class. To authenticate against the service, pass in an `AzureKeyCredential` instance with the endpoint and access key. You can find the endpoint and access key on the Azure portal.
```python
>>> from azure.messaging.webpubsubservice import WebPubSubServiceClient
In order to interact with the Azure WebPubSub service, you'll need to create an
## Troubleshooting
-### Logging
-
-This SDK uses Python standard logging library.
-You can configure logging print debugging information to the stdout or anywhere you want.
+This SDK uses the standard logging library of Python. You can configure logging to print debugging information to `stdout`, or anywhere you want.
```python
import logging
import logging
logging.basicConfig(level=logging.DEBUG)
```
-Http request and response details are printed to stdout with this logging config.
+The HTTP request and response details are printed to `stdout` with this logging configuration.
[webpubsubservice_docs]: https://aka.ms/awps/doc
[azure_cli]: /cli/azure
azure-web-pubsub Tutorial Permission https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/tutorial-permission.md
Title: Tutorial - Add authentication and permissions to your application when using Azure Web PubSub service
-description: A tutorial to walk through how to add authentication and permissions to your application when using Azure Web PubSub service
+ Title: Tutorial - Add authentication and permissions to your application when using Azure Web PubSub
+description: Walk through how to add authentication and permissions to your application when using Azure Web PubSub.
Last updated 08/26/2021
-# Tutorial: Add authentication and permissions to your application when using Azure Web PubSub service
+# Tutorial: Add authentication and permissions to your application when using Azure Web PubSub
-In [Build a chat app tutorial](./tutorial-build-chat.md), you've learned how to use WebSocket APIs to send and receive data with Azure Web PubSub. You may have noticed that for simplicity it does not require any authentication. Though Azure Web PubSub requires access token to be connected, the `negotiate` API we used in the tutorial to generate access token doesn't need authentication, so anyone can call this API to get an access token.
+In [Build a chat app](./tutorial-build-chat.md), you learned how to use WebSocket APIs to send and receive data with Azure Web PubSub. You might have noticed that, for simplicity, it doesn't require any authentication. Though Azure Web PubSub requires an access token to be connected, the `negotiate` API used in the tutorial to generate the access token doesn't need authentication. Anyone can call this API to get an access token.
-In a real world application it's common that you want user to log in first before they can use your application to protect it from being abused. In this tutorial, you'll learn how to integrate Azure Web PubSub with the authentication/authorization system of your application to make it more secure.
+In a real-world application, you typically want the user to sign in first, before they can use your application. In this tutorial, you learn how to integrate Web PubSub with the authentication and authorization system of your application, to make it more secure.
-The complete code sample of this tutorial can be found [here][code].
+You can find the complete code sample of this tutorial on [GitHub][code].
In this tutorial, you learn how to:
In this tutorial, you learn how to:
## Add authentication to the chat room app
-This tutorial reuses the chat application created in [Build a chat app tutorial](./tutorial-build-chat.md). You can also clone the complete code sample for the chat app from [here][chat-js].
+This tutorial reuses the chat application created in [Build a chat app](./tutorial-build-chat.md). You can also clone the complete code sample for the chat app from [GitHub][chat-js].
-In this tutorial, we will add authentication to the chat application and integrate it with Azure Web PubSub service.
+In this tutorial, you add authentication to the chat application and integrate it with Web PubSub.
-First let's add GitHub authentication to the chat room so user can use GitHub account to log in.
+First, add GitHub authentication to the chat room so the user can use a GitHub account to sign in.
-1. Install dependencies
+1. Install dependencies.
```bash
npm install --save cookie-parser
First let's add GitHub authentication to the chat room so user can use GitHub ac
npm install --save passport-github2
```
-2. Add the following code to `server.js` to enable GitHub authentication
+1. Enable GitHub authentication by adding the following code to `server.js`:
```javascript
const app = express();
First let's add GitHub authentication to the chat room so user can use GitHub ac
app.get('/auth/github/callback', passport.authenticate('github', { successRedirect: '/' }));
```
- The code above uses [Passport.js](http://www.passportjs.org/) to enable GitHub authentication. Here is a simple illustration of how it works:
+ The preceding code uses [Passport.js](http://www.passportjs.org/) to enable GitHub authentication. Here's a simple illustration of how it works:
- 1. `/auth/github` will redirect to github.com for login
- 2. After login is completed, GitHub will redirect you to `/auth/github/callback` with a code for your application to complete the authentication (see the verify callback in `passport.use()` to see how the profile returned from GitHub is verified and persisted in the server).
- 3. After authentication is completed, you'll be redirected to the homepage (`/`) of the site.
+ 1. `/auth/github` redirects to github.com for sign-in.
+ 1. After you sign in, GitHub redirects you to `/auth/github/callback` with a code for your application to complete the authentication. (To see how the profile returned from GitHub is verified and persisted in the server, see the verify callback in `passport.use()`.)
+ 1. After authentication is completed, you're redirected to the homepage (`/`) of the site.
For more details about GitHub OAuth and Passport.js, see the following articles:
First let's add GitHub authentication to the chat room so user can use GitHub ac
To test this, you need to first create a GitHub OAuth app:
- 1. Go to https://www.github.com, open your profile -> Settings -> Developer settings
- 2. Go to OAuth Apps, click "New OAuth App"
- 3. Fill in application name, homepage URL (can be anything you like), and set Authorization callback URL to `http://localhost:8080/auth/github/callback` (which matches the callback API you exposed in the server)
- 4. After the application is registered, copy the Client ID and click "Generate a new client secret" to generate a new client secret
+ 1. Go to https://www.github.com, open your profile, and select **Settings** > **Developer settings**.
+ 1. Go to OAuth Apps, and select **New OAuth App**.
+ 1. Fill in the application name and homepage URL (the URL can be anything you like), and set **Authorization callback URL** to `http://localhost:8080/auth/github/callback`. This URL matches the callback API you exposed in the server.
+ 1. After the application is registered, copy the client ID and select **Generate a new client secret**.
- Then run `node server <connection-string> <client-id> <client-secret>`, open `http://localhost:8080/auth/github`, you'll be redirected to GitHub to log in. After the login is succeeded, you'll be redirected to the chat application.
+ Then run `node server <connection-string> <client-id> <client-secret>`, and open `http://localhost:8080/auth/github`. You're redirected to GitHub to sign in. After you sign in, you're redirected to the chat application.
-3. Then let's update the chat room to make use of the identity we get from GitHub, instead of popping up a dialog to ask for username.
+1. Update the chat room to make use of the identity you get from GitHub, instead of prompting the user for a username.
Update `public/index.html` to directly call `/negotiate` without passing in a user ID.
First let's add GitHub authentication to the chat room so user can use GitHub ac
let ws = new WebSocket(data.url);
```
- When a user is logged in, the request will automatically carry the user's identity through cookie. So we just need to check whether the user exists in the `req` object and add the username to Web PubSub access token:
+ When a user is signed in, the request automatically carries the user's identity through a cookie. So you just need to check whether the user exists in the `req` object, and add the username to Web PubSub access token:
```javascript
app.get('/negotiate', async (req, res) => {
First let's add GitHub authentication to the chat room so user can use GitHub ac
});
```
- Now rerun the server and you'll see a "not authorized" message for the first time you open the chat room. Click the login link to log in, then you'll see it works as before.
+ Now rerun the server, and you'll see a "not authorized" message for the first time you open the chat room. Select the sign-in link to sign in, and then you'll see it works as before.
-## Working with permissions
+## Work with permissions
-In the previous tutorials, you have learned to use `WebSocket.send()` to directly publish messages to other clients using subprotocol. In a real application, you may not want client to be able to publish/subscribe to any group without permission control. In this section, you'll see how to control clients using the permission system of Azure Web PubSub.
+In the previous tutorials, you learned to use `WebSocket.send()` to directly publish messages to other clients by using subprotocol. In a real application, you might not want the client to be able to publish or subscribe to any group without permission control. In this section, you'll see how to control clients by using the permission system of Web PubSub.
-In Azure Web PubSub there are three types of operations a client can do with subprotocol:
+In Web PubSub, a client can perform the following types of operations with subprotocol:
-- Send events to server-- Publish messages to a group-- Join (subscribe) a group
+- Send events to server.
+- Publish messages to a group.
+- Join (subscribe to) a group.
-Send event to server is the default operation of client even no protocol is used, so it's always allowed. To publish and subscribe to a group, client needs to get permission. There are two ways for server to grant permission to clients:
+Sending an event to the server is the default operation of the client, even when no subprotocol is used, so it's always allowed. To publish and subscribe to a group, the client needs to get permission. There are two ways for the server to grant permission to clients:
-- Specify roles when a client is connected (role is a concept to represent initial permissions when a client is connected)-- Use API to grant permission to a client after it's connected
+- Specify roles when a client is connected (*role* is a concept to represent initial permissions when a client is connected).
+- Use an API to grant permission to a client after it's connected.
-For join group permission, client still needs to join the group using join group message after it gets the permission. Or server can use API to add client to a group even it doesn't have the join permission.
+For permission to join a group, the client still needs to join the group by using the "join group" message after it gets the permission. Alternatively, the server can use an API to add the client to a group, even if it doesn't have the join permission.
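+
+As a hedged sketch of the two approaches (the method, option, and role names are assumptions based on the `@azure/web-pubsub` SDK used in this tutorial):
+
+```javascript
+const { WebPubSubServiceClient } = require("@azure/web-pubsub");
+
+const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "chat");
+
+// Way 1: return an initial role when the client connects, inside handleConnect:
+//   res.success({ roles: ["webpubsub.sendToGroup.system"] });
+
+// Way 2: grant the permission through the API after the client is connected.
+async function allowSystemMessages(connectionId) {
+  await serviceClient.grantPermission(connectionId, "sendToGroup", {
+    targetName: "system",
+  });
+}
+```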
-Now let's use this permission system to add a new feature to the chat room. We will add a new type of user called administrator to the chat room, for administrator, we will allow them to send system message (message starts with "[SYSTEM]") directly from client.
+Now let's use this permission system to add a new feature to the chat room. You'll add a new type of user called *administrator* to the chat room. You'll allow the administrator to send system messages (messages that start with "[SYSTEM]") directly from the client.
-First we need to separate system and user messages into two different groups so their permissions can be controlled separately.
+First, you need to separate system and user messages into two different groups so you can control their permissions separately.
Change `server.js` to send different messages to different groups:
let handler = new WebPubSubEventHandler(hubName, ['*'], {
});
```
-You can see the code above uses `WebPubSubServiceClient.group().sendToAll()` to send message to group instead of the hub.
+The preceding code uses `WebPubSubServiceClient.group().sendToAll()` to send the message to a group instead of the hub.
-Since the message is now sent to groups, we need to add clients to groups so they can continue receiving messages. This is done in the `handleConnect` handler.
+Because the message is now sent to groups, you need to add clients to groups so they can continue receiving messages. Use the `handleConnect` handler to add clients to groups.
> [!Note]
-> `handleConnect` is triggered when a client is trying to connect to Azure Web PubSub. In this handler you can return groups and roles, so service can add connection to groups or grant roles, as soon as the connection is established. It can also `res.fail()` to deny the connection.
+> `handleConnect` is triggered when a client is trying to connect to Web PubSub. In this handler, you can return groups and roles, so the service can add a connection to groups or grant roles, as soon as the connection is established. The service can also use `res.fail()` to deny the connection.
>
-To make `handleConnect` be triggered, go to event handler settings in Azure portal, and check `connect` in system events.
+To trigger `handleConnect`, go to the event handler settings in the Azure portal, and select **connect** in system events.
-We also need to update client HTML since now server sends JSON messages instead of plain text:
+You also need to update the client HTML, because now the server sends JSON messages instead of plain text:
```javascript
let ws = new WebSocket(data.url, 'json.webpubsub.azure.v1');
message.addEventListener('keypress', e => {
});
```
-Then change the client code to send to system group when users click "system message":
+Then change the client code to send to the system group when users select **system message**:
```html
<button id="system">system message</button>
Then change the client code to send to system group when users click "system mes
</script>
```
-By default client doesn't have permission to send to any group, update server code to grant permission for admin user (for simplicity the ID of the admin is provided as a command-line argument).
+By default, the client doesn't have permission to send to any group. Update server code to grant permission for the admin user (for simplicity, the ID of the admin is provided as a command-line argument).
```javascript
app.get('/negotiate', async (req, res) => {
app.get('/negotiate', async (req, res) => {
});
```
-Now run `node server <connection-string> <client-id> <client-secret> <admin-id>`, you'll see you can send a system message to every client when you log in as `<admin-id>`.
+Now run `node server <connection-string> <client-id> <client-secret> <admin-id>`. You'll see that you can send a system message to every client when you sign in as `<admin-id>`.
-But if you log in as a different user, when you click "system message", nothing will happen. You may expect service give you an error to let you know the operation is not allowed. This can be done by setting an `ackId` when publishing the message. Whenever `ackId` is specified, Azure Web PubSub will return an ack message with a matching `ackId` to indicate whether the operation is succeeded or not.
+But if you sign in as a different user, when you select **system message**, nothing happens. You might expect the service to give you an error to let you know the operation isn't allowed. To provide this feedback, you can set `ackId` when you're publishing the message. Whenever `ackId` is specified, Web PubSub returns an ack message with a matching `ackId` to indicate whether the operation has succeeded.
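+
+As a hedged sketch of this exchange under the `json.webpubsub.azure.v1` subprotocol (the exact error shape is an assumption), the client publishes with an `ackId` and the service answers with a matching ack:
+
+```javascript
+// ws is the connected PubSub WebSocket client from earlier in this tutorial.
+ws.send(JSON.stringify({
+  type: "sendToGroup",
+  group: "system",
+  ackId: 1, // any number; the ack echoes it back
+  dataType: "text",
+  data: "[SYSTEM] scheduled maintenance at 10 PM",
+}));
+
+// When the client lacks permission, the service replies with something like:
+// { "type": "ack", "ackId": 1, "success": false, "error": { "name": "Forbidden" } }
+```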
-Change the code of sending system message to the following code:
+Change the code of sending a system message to the following code:
```javascript
let ackId = 0;
system.addEventListener('click', e => {
});
```
-Also change the code of processing messages to handle ack message:
+Also change the code of processing messages to handle an `ack` message:
```javascript
ws.onmessage = event => {
ws.onmessage = event => {
};
```
-Now rerun server and login as a different user, you'll see an error message when trying to send system message.
+Now rerun the server, and sign in as a different user. You'll see an error message when you're trying to send a system message.
-The complete code sample of this tutorial can be found [here][code].
+The complete code sample of this tutorial can be found on [GitHub][code].
## Next steps
-This tutorial provides you a basic idea of how to connect to the Web PubSub service and how to publish messages to the connected clients using subprotocol.
+This tutorial provides you with a basic idea of how to connect to the Web PubSub service, and how to publish messages to connected clients by using subprotocol.
-Check other tutorials to further dive into how to use the service.
+To learn more about using the Web PubSub service, read the other tutorials available in the documentation.
> [!div class="nextstepaction"]
> [Explore more Azure Web PubSub samples](https://aka.ms/awps/samples)
backup Backup Vault Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-vault-overview.md
Title: Overview of Backup vaults
description: An overview of Backup vaults.
Previously updated : 04/19/2021
Last updated : 09/08/2021
+
# Backup vaults overview
In the **Backup Instances** tile, you get a summarized view of all backup instan
![Backup jobs](./media/backup-vault-overview/backup-jobs.png)
+## Move a Backup vault across Azure subscriptions/resource groups (Public Preview)
+
+This section explains how to move a Backup vault (configured for Azure Backup) across Azure subscriptions and resource groups by using the Azure portal.
+
+>[!Note]
+>You can also move Backup vaults to a different resource group or subscription using [PowerShell](/powershell/module/az.resources/move-azresource?view=azps-6.3.0&preserve-view=true) and [CLI](/cli/azure/resource?view=azure-cli-latest&preserve-view=true#az_resource_move).
+
+### Supported regions
+
+The vault move across subscriptions and resource groups is currently supported in the following regions: West US, South Central US, East Asia, Switzerland North, South Africa North, UK West, North Central US, UAE North, Norway East, Australia Southeast, Japan West, Canada East, Korea Central, Australia Central, West Central US, Central India, West India, South India, UAE Central, South Africa West, Norway West, and Switzerland West.
+
+### Use Azure portal to move Backup vault to a different resource group
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Open the list of Backup vaults and select the vault you want to move.
+
+ The vault dashboard displays the vault details.
+
+ :::image type="content" source="./media/backup-vault-overview/vault-dashboard-to-move-to-resource-group-inline.png" alt-text="Screenshot showing the dashboard of the vault to be moved to another resource group." lightbox="./media/backup-vault-overview/vault-dashboard-to-move-to-resource-group-expanded.png":::
+
+1. In the vault **Overview** menu, select **Move**, and then select **Move to another resource group**.
+
+ :::image type="content" source="./media/backup-vault-overview/select-move-to-another-resource-group-inline.png" alt-text="Screenshot showing the option for moving the Backup vault to another resource group." lightbox="./media/backup-vault-overview/select-move-to-another-resource-group-expanded.png":::
+
+1. In the **Resource group** drop-down list, select an existing resource group or select **Create new** to create a new resource group.
+
+ The subscription details to move the vault auto-populate in the **Resource group** drop-down list.
+
+ :::image type="content" source="./media/backup-vault-overview/select-existing-or-create-resource-group-inline.png" alt-text="Screenshot showing the selection of an existing resource group or creation of a new resource group." lightbox="./media/backup-vault-overview/select-existing-or-create-resource-group-expanded.png":::
+
+1. On the **Resources to move** tab, the Backup vault that needs to be moved will undergo validation. This process might take a few minutes. Wait until the validation is complete.
+
+ :::image type="content" source="./media/backup-vault-overview/move-validation-process-to-move-to-resource-group-inline.png" alt-text="Screenshot showing the Backup vault validation status." lightbox="./media/backup-vault-overview/move-validation-process-to-move-to-resource-group-expanded.png":::
+
+1. Select the checkbox _I understand that tools and scripts associated with moved resources will not work until I update them to use new resource IDs_ to confirm, and then select **Move**.
+
+ >[!Note]
+ >The resource path changes after moving vault across resource groups or subscriptions. Ensure that you update the tools and scripts with the new resource path after the move operation completes.
+
+Wait until the move operation is complete before performing any other operations on the vault. Any operations performed on the Backup vault while the move is in progress will fail. When the process is complete, the Backup vault should appear in the target resource group.
+
+>[!Important]
+>If you encounter any error while moving the vault, refer to the [Error codes and troubleshooting section](#error-codes-and-troubleshooting).
+
+### Use Azure portal to move Backup vault to a different subscription
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Open the list of Backup vaults and select the vault you want to move.
+
+ The vault dashboard displays the vault details.
+
+ :::image type="content" source="./media/backup-vault-overview/vault-dashboard-to-move-to-another-subscription-inline.png" alt-text="Screenshot showing the dashboard of the vault to be moved to another Azure subscription." lightbox="./media/backup-vault-overview/vault-dashboard-to-move-to-another-subscription-expanded.png":::
+
+1. In the vault **Overview** menu, select **Move**, and then select **Move to another subscription**.
+
+ :::image type="content" source="./media/backup-vault-overview/select-move-to-another-subscription-inline.png" alt-text="Screenshot showing the option for moving the Backup vault to another Azure subscription." lightbox="./media/backup-vault-overview/select-move-to-another-subscription-expanded.png":::
+
+1. In the **Subscription** drop-down list, select an existing subscription.
+
+ For moving vaults across subscriptions, the target subscription must reside in the same tenant as the source subscription. To move a vault to a different tenant, see [Transfer subscription to a different directory](/azure/role-based-access-control/transfer-subscription).
+
+1. In the **Resource group** drop-down list, select an existing resource group or select **Create new** to create a new resource group.
+
+ :::image type="content" source="./media/backup-vault-overview/select-existing-or-create-resource-group-to-move-to-other-subscription-inline.png" alt-text="Screenshot showing the selection of an existing resource group or creation of a new resource group in another Azure subscription." lightbox="./media/backup-vault-overview/select-existing-or-create-resource-group-to-move-to-other-subscription-expanded.png":::
+
+1. On the **Resources to move** tab, the Backup vault that needs to be moved will undergo validation. This process might take a few minutes. Wait until the validation is complete.
+
+ :::image type="content" source="./media/backup-vault-overview/move-validation-process-to-move-to-another-subscription-inline.png" alt-text="Screenshot showing the validation status of Backup vault to be moved to another Azure subscription." lightbox="./media/backup-vault-overview/move-validation-process-to-move-to-another-subscription-expanded.png":::
+
+1. Select the checkbox _I understand that tools and scripts associated with moved resources will not work until I update them to use new resource IDs_ to confirm, and then select **Move**.
+
+ >[!Note]
+ >The resource path changes after moving vault across resource groups or subscriptions. Ensure that you update the tools and scripts with the new resource path after the move operation completes.
+
+Wait until the move operation is complete before performing any other operations on the vault. Any operations performed on the Backup vault while the move is in progress will fail. When the process is complete, the Backup vault should appear in the target subscription and resource group.
+
+>[!Important]
+>If you encounter any error while moving the vault, refer to the [Error codes and troubleshooting section](#error-codes-and-troubleshooting).
+
+### Error codes and troubleshooting
+
+Troubleshoot the following common issues you might encounter during Backup vault move:
+
+#### BackupVaultMoveResourcesPartiallySucceeded
+
+**Cause**: You may face this error when the Backup vault move succeeds only partially.
+
+**Recommendation**: The issue should get resolved automatically within 36 hours. If it persists, contact Microsoft Support.
+
+#### BackupVaultMoveResourcesCriticalFailure
+
+**Cause**: You may face this error when the Backup vault move fails critically.
+
+**Recommendation**: The issue should get resolved automatically within 36 hours. If it persists, contact Microsoft Support.
+
+#### UserErrorBackupVaultResourceMoveInProgress
+
+**Cause**: You may face this error if you try to perform any operations on the Backup vault while it's being moved.
+
+**Recommendation**: Wait until the move operation is complete, and then retry.
+
+#### UserErrorBackupVaultResourceMoveNotAllowedForMultipleResources
+
+**Cause**: You may face this error if you try to move multiple Backup vaults in a single attempt.
+
+**Recommendation**: Ensure that only one Backup vault is selected for every move operation.
+
+#### UserErrorBackupVaultResourceMoveNotAllowedUntilResourceProvisioned
+
+**Cause**: You may face this error if the vault is not yet provisioned.
+
+**Recommendation**: Retry the operation after some time.
+
+#### BackupVaultResourceMoveIsNotEnabled
+
+**Cause**: Resource move for Backup vault is currently not supported in the selected Azure region.
+ ## Next steps - [Configure backup on Azure PostgreSQL databases](backup-azure-database-postgresql.md#configure-backup-on-azure-postgresql-databases)
batch Batch Docker Container Workloads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-docker-container-workloads.md
These images are only supported for use in Azure Batch pools and are geared for
You can also create custom images from VMs running Docker on one of the Linux distributions that is compatible with Batch. If you choose to provide your own custom Linux image, see the instructions in [Use a managed custom image to create a pool of virtual machines](batch-custom-images.md).
-For Docker support on a custom image, install [Docker Community Edition (CE)](https://www.docker.com/community-edition) or [Docker Enterprise Edition (EE)](https://www.docker.com/enterprise-edition).
+For Docker support on a custom image, install [Docker Community Edition (CE)](https://www.docker.com/community-edition) or [Docker Enterprise Edition (EE)](https://www.docker.com/blog/docker-enterprise-edition/).
Additional considerations for using a custom Linux image:
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/ReleaseNotes.md
Title: What's new in Face service?
+ Title: What's new in Azure Face service?
-description: Release notes for the Face service include a history of release changes for various versions.
+description: Stay up to date on recent releases and updates to the Azure Face service.
Previously updated : 04/26/2021
Last updated : 09/08/2021
-
+
-# What's new in Face service?
+# What's new in Azure Face service?
-The Azure Face service is updated on an ongoing basis. Use this article to stay up to date with feature enhancements, fixes, and documentation updates.
+The Azure Face service is updated on an ongoing basis. Use this article to stay up to date with new features, enhancements, fixes, and documentation updates.
## April 2021
-### PersonDirectory
+### PersonDirectory data structure
* In order to perform face recognition operations such as Identify and Find Similar, Face API customers need to create an assorted list of **Person** objects. The new **PersonDirectory** is a data structure that contains unique IDs, optional name strings, and optional user metadata strings for each **Person** identity added to the directory. Currently, the Face API offers the **LargePersonGroup** structure which has similar functionality but is limited to 1 million identities. The **PersonDirectory** structure can scale up to 75 million identities. Another major difference between **PersonDirectory** and previous data structures is that you'll no longer need to make any Train calls after adding faces to a **Person** object&mdash;the update process happens automatically. For more details see [Use the PersonDirectory structure](Face-API-How-to-Topics/use-persondirectory.md).
cognitive-services Face How To Install Containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/face-how-to-install-containers.md
- Title: Install and run Docker containers for the Face API-
-description: Use the Docker container for the Face API to detect and identify human faces in images.
------- Previously updated : 04/28/2021-
-keywords: on-premises, Docker, container, identify
--
-# Install and run Face containers (Retiring)
-
-> [!IMPORTANT]
-> The Face container preview is no longer accepting applications and the container has been deprecated as of April 29th 2021. The Face container will be fully retired on July 26th 2021.
-
-Azure Cognitive Services Face API provides a Linux Docker container that detects and analyzes human faces in images. It also identifies attributes, which include face landmarks such as noses and eyes, gender, age, and other machine-predicted facial features. In addition to detection, Face can check if two faces in the same image or different images are the same by using a confidence score. Face also can compare faces against a database to see if a similar-looking or identical face already exists. It also can organize similar faces into groups by using shared visual traits.
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-
-## Prerequisites
-
-You must meet the following prerequisites before you use the Face service containers.
-
-|Required|Purpose|
-|--|--|
-|Docker Engine| The Docker Engine must be installed on a [host computer](#the-host-computer). Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms). For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).<br><br> Docker must be configured to allow the containers to connect with and send billing data to Azure. <br><br> On Windows, Docker also must be configured to support Linux containers.<br><br>|
-|Familiarity with Docker | You need a basic understanding of Docker concepts, such as registries, repositories, containers, and container images. You also need knowledge of basic `docker` commands.|
-|Face resource |To use the container, you must have:<br><br>An Azure **Face** resource and the associated API key and the endpoint URI. Both values are available on the **Overview** and **Keys** pages for the resource. They're required to start the container.<br><br>**{API_KEY}**: One of the two available resource keys on the **Keys** page<br><br>**{ENDPOINT_URI}**: The endpoint as provided on the **Overview** page
--
-### The host computer
--
-### Container requirements and recommendations
-
-The following table describes the minimum and recommended CPU cores and memory to allocate for each Face service container.
-
-| Container | Minimum | Recommended | Transactions per second<br>(Minimum, maximum)|
-|--||-|--|
-|Face | 1 core, 2-GB memory | 1 core, 4-GB memory |10, 20|
-
-* Each core must be at least 2.6 GHz or faster.
-* Transactions per second (TPS).
-
-Core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command.
-
-## Get the container image with docker pull
-
-Container images for the Face service are available.
-
-| Container | Repository |
-|--||
-| Face | `containerpreview.azurecr.io/microsoft/cognitive-services-face:latest` |
--
-### Docker pull for the Face container
-
-```
-docker pull containerpreview.azurecr.io/microsoft/cognitive-services-face:latest
-```
-
-## Use the container
-
-After the container is on the [host computer](#the-host-computer), use the following process to work with the container.
-
-1. [Run the container](#run-the-container-with-docker-run) with the required billing settings. More [examples](./face-resource-container-config.md#example-docker-run-commands) of the `docker run` command are available.
-1. [Query the container's prediction endpoint](#query-the-containers-prediction-endpoint).
-
-## Run the container with docker run
-
-Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. Refer to [gathering required parameters](#gathering-required-parameters) for details on how to get the `{ENDPOINT_URI}` and `{API_KEY}` values.
-
-[Examples](face-resource-container-config.md#example-docker-run-commands) of the `docker run` command are available.
-
-```bash
-docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
-containerpreview.azurecr.io/microsoft/cognitive-services-face \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
-
-This command:
-
-* Runs a face container from the container image.
-* Allocates one CPU core and 4 GB of memory.
-* Exposes TCP port 5000 and allocates a pseudo TTY for the container.
-* Automatically removes the container after it exits. The container image is still available on the host computer.
-
-> [!IMPORTANT]
-> The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container or the container won't start. For more information, see [Billing](#billing).
---
-## Query the container's prediction endpoint
-
-The container exposes REST-based prediction endpoint APIs.
-
-Use the host, `http://localhost:5000`, for container APIs.
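
For example, the following `curl` call sends a local image to the container's detection endpoint. This is a minimal sketch: it assumes the container exposes the same `/face/v1.0/detect` route and query parameters as the public Face API, and `face.jpg` is a placeholder for an image file of your own.

```bash
# Post a local image as binary data and request a couple of face attributes.
# Adjust the route or query parameters if your container version differs.
curl -X POST "http://localhost:5000/face/v1.0/detect?returnFaceAttributes=age,gender" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @face.jpg
```

If the call succeeds, the response is a JSON array with one entry per detected face.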
--
-<!-- ## Validate container is running -->
--
-## Stop the container
--
-## Troubleshooting
-
-If you run the container with an output [mount](./face-resource-container-config.md#mount-settings) and logging is enabled, the container generates log files that are helpful to troubleshoot issues that happen while you start or run the container.
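
For example, the following command is a sketch that writes JSON-formatted log files to the default output mount. It assumes a Windows host where `c:\output` exists; the bind-mount syntax varies by host operating system, and `{ENDPOINT_URI}` and `{API_KEY}` are your own values.

```bash
docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 --mount type=bind,src=c:\output,target=/output containerpreview.azurecr.io/microsoft/cognitive-services-face Eula=accept Billing={ENDPOINT_URI} ApiKey={API_KEY} Logging:Disk:Format=json
```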
--
-## Billing
-
-The Face service containers send billing information to Azure by using a Face resource on your Azure account.
--
-For more information about these options, see [Configure containers](./face-resource-container-config.md).
-
-## Summary
-
-In this article, you learned concepts and workflow for how to download, install, and run Face service containers. In summary:
-
-* Container images are downloaded from the Azure Container Registry.
-* Container images run in Docker.
-* You can use either the REST API or the SDK to call operations in Face service containers by specifying the host URI of the container.
-* You must specify billing information when you instantiate a container.
-
-> [!IMPORTANT]
-> Cognitive Services containers aren't licensed to run without being connected to Azure for metering. Customers must enable the containers to communicate billing information with the metering service at all times. Cognitive Services containers don't send customer data, such as the image or text that's being analyzed, to Microsoft.
-
-## Next steps
-
-* For configuration settings, see [Configure containers](face-resource-container-config.md).
-* To learn more about how to detect and identify faces, see [Face overview](Overview.md).
-* For information about the methods supported by the container, see the [Face API](//westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236).
-* To use more Cognitive Services containers, see [Cognitive Services containers](../cognitive-services-container-support.md).
cognitive-services Face Resource Container Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/face-resource-container-config.md
- Title: Configure containers - Face
-description: The Face container runtime environment is configured using the `docker run` command arguments. There are both required and optional settings.
- Previously updated: 04/29/2021
-# Configure Face Docker containers (Retiring)
-
-> [!IMPORTANT]
-> The Face container preview is no longer accepting applications, and the container has been deprecated as of April 29, 2021. The Face container will be fully retired on July 26, 2021.
-
-The **Face** container runtime environment is configured using the `docker run` command arguments. This container has several required settings, along with a few optional settings. Several [examples](#example-docker-run-commands) of the command are available. The container-specific settings are the billing settings.
-
-## Configuration settings
--
-> [!IMPORTANT]
-> The [`ApiKey`](#apikey-configuration-setting), [`Billing`](#billing-configuration-setting), and [`Eula`](#eula-setting) settings are used together, and you must provide valid values for all three of them; otherwise your container won't start. For more information about using these configuration settings to instantiate a container, see [Billing](face-how-to-install-containers.md#billing).
-
-## ApiKey configuration setting
-
-The `ApiKey` setting specifies the Azure resource key that's used to track billing information for the container. You must specify a value for `ApiKey`, and the value must be a valid key for the _Cognitive Services_ resource specified for the [`Billing`](#billing-configuration-setting) configuration setting.
-
-This setting can be found in the following place:
-
-* Azure portal: **Cognitive Services** Resource Management, under **Keys**
-
-## ApplicationInsights setting
--
-## Billing configuration setting
-
-The `Billing` setting specifies the endpoint URI of the _Cognitive Services_ resource on Azure used to meter billing information for the container. You must specify a value for this configuration setting, and the value must be a valid endpoint URI for a _Cognitive Services_ resource on Azure. The container reports usage about every 10 to 15 minutes.
-
-This setting can be found in the following place:
-
-* Azure portal: **Cognitive Services** Overview, labeled `Endpoint`
-
-Remember to add the _Face_ routing to the endpoint URI as shown in the example.
-
-|Required| Name | Data type | Description |
-|--|--|--|--|
-|Yes| `Billing` | String | Billing endpoint URI. For more information on obtaining the billing URI, see [gathering required parameters](face-how-to-install-containers.md#gathering-required-parameters). For more information and a complete list of regional endpoints, see [Custom subdomain names for Cognitive Services](../cognitive-services-custom-subdomains.md). |
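
For example, for a resource created in the West Central US region, the billing value with the _Face_ routing appended looks like `https://westcentralus.api.cognitive.microsoft.com/face/v1.0`; the regional prefix depends on where your resource was created.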
-
-<!-- specific to face only -->
-
-## CloudAI configuration settings
-
-The configuration settings in the `CloudAI` section provide options specific to your container. The following settings and objects are supported for the Face container in the `CloudAI` section:
-
-| Name | Data type | Description |
-|--|--|--|
-| `Storage` | Object | The storage scenario used by the Face container. For more information about storage scenarios and associated settings for the `Storage` object, see [Storage scenario settings](#storage-scenario-settings). |
-
-### Storage scenario settings
-
-The Face container stores blob, cache, metadata, and queue data, depending on what's being stored. For example, training indexes and results for a **LargePersonGroup** are stored as blob data. The Face container provides two different storage scenarios when interacting with and storing these types of data:
-
-* Memory
- All four types of data are stored in memory. They're not distributed, nor are they persistent. If the Face container is stopped or removed, all of the data in storage for that container is destroyed.
- This is the default storage scenario for the Face container.
-* Azure
- The Face container uses Azure Storage and Azure Cosmos DB to distribute these four types of data across persistent storage. Blob and queue data is handled by Azure Storage. Metadata and cache data is handled by Azure Cosmos DB. If the Face container is stopped or removed, all of the data in storage for that container remains stored in Azure Storage and Azure Cosmos DB.
- The resources used by the Azure storage scenario have the following additional requirements:
- * The Azure Storage resource must use the StorageV2 account kind.
- * The Azure Cosmos DB resource must use Azure Cosmos DB's API for MongoDB.
-
-The storage scenarios and associated configuration settings are managed by the `Storage` object, under the `CloudAI` configuration section. The following configuration settings are available in the `Storage` object:
-
-| Name | Data type | Description |
-|--|--|--|
-| `StorageScenario` | String | The storage scenario supported by the container. The following values are available:<br/>`Memory` - Default value. The container uses non-persistent, non-distributed, in-memory storage for single-node, temporary usage. If the container is stopped or removed, the storage for that container is destroyed.<br/>`Azure` - The container uses Azure resources for storage. If the container is stopped or removed, the storage for that container is persisted.|
-| `ConnectionStringOfAzureStorage` | String | The connection string for the Azure Storage resource used by the container.<br/>This setting applies only if `Azure` is specified for the `StorageScenario` configuration setting. |
-| `ConnectionStringOfCosmosMongo` | String | The MongoDB connection string for the Azure Cosmos DB resource used by the container.<br/>This setting applies only if `Azure` is specified for the `StorageScenario` configuration setting. |
-
-For example, the following command specifies the Azure storage scenario and provides sample connection strings for the Azure Storage and Cosmos DB resources used to store data for the Face container.
-
- ```Docker
- docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 containerpreview.azurecr.io/microsoft/cognitive-services-face Eula=accept Billing=https://westcentralus.api.cognitive.microsoft.com/face/v1.0 ApiKey=0123456789 CloudAI:Storage:StorageScenario=Azure CloudAI:Storage:ConnectionStringOfCosmosMongo="mongodb://samplecosmosdb:0123456789@samplecosmosdb.documents.azure.com:10255/?ssl=true&replicaSet=globaldb" CloudAI:Storage:ConnectionStringOfAzureStorage="DefaultEndpointsProtocol=https;AccountName=sampleazurestorage;AccountKey=0123456789;EndpointSuffix=core.windows.net"
- ```
-
-The storage scenario is handled separately from input mounts and output mounts. You can specify a combination of those features for a single container. For example, the following command defines a Docker bind mount to the `D:\Output` folder on the host machine as the output mount, then instantiates a container from the Face container image, saving log files in JSON format to the output mount. The command also specifies the Azure storage scenario and provides sample connection strings for the Azure Storage and Cosmos DB resources used to store data for the Face container.
-
- ```Docker
- docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 --mount type=bind,source=D:\Output,destination=/output containerpreview.azurecr.io/microsoft/cognitive-services-face Eula=accept Billing=https://westcentralus.api.cognitive.microsoft.com/face/v1.0 ApiKey=0123456789 Logging:Disk:Format=json CloudAI:Storage:StorageScenario=Azure CloudAI:Storage:ConnectionStringOfCosmosMongo="mongodb://samplecosmosdb:0123456789@samplecosmosdb.documents.azure.com:10255/?ssl=true&replicaSet=globaldb" CloudAI:Storage:ConnectionStringOfAzureStorage="DefaultEndpointsProtocol=https;AccountName=sampleazurestorage;AccountKey=0123456789;EndpointSuffix=core.windows.net"
- ```
-
-## Eula setting
--
-## Fluentd settings
--
-## Http proxy credentials settings
--
-## Logging settings
-
-
-## Mount settings
-
-Use bind mounts to read and write data to and from the container. You can specify an input mount or output mount by specifying the `--mount` option in the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command.
-
-The Face containers don't use input or output mounts to store training or service data.
-
-The exact syntax of the host mount location varies depending on the host operating system. Additionally, the [host computer](face-how-to-install-containers.md#the-host-computer)'s mount location may not be accessible due to a conflict between permissions used by the Docker service account and the host mount location permissions.
-
-|Optional| Name | Data type | Description |
-|--|--|--|--|
-|Not allowed| `Input` | String | Face containers do not use this.|
-|Optional| `Output` | String | The target of the output mount. The default value is `/output`. This is the location of the logs. This includes container logs. <br><br>Example:<br>`--mount type=bind,src=c:\output,target=/output`|
-
-## Example docker run commands
-
-The following examples use the configuration settings to illustrate how to write and use `docker run` commands. Once running, the container continues to run until you [stop](face-how-to-install-containers.md#stop-the-container) it.
-
-* **Line-continuation character**: The Docker commands in the following sections use the backslash, `\`, as a line-continuation character. Replace or remove it based on your host operating system's requirements.
-* **Argument order**: Do not change the order of the arguments unless you are very familiar with Docker containers.
-
-Replace {_argument_name_} with your own values:
-
-| Placeholder | Value | Format or example |
-|--|--|--|
-| **{API_KEY}** | The endpoint key of the `Face` resource on the Azure `Face` Keys page. | `xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx` |
-| **{ENDPOINT_URI}** | The billing endpoint value is available on the Azure `Face` Overview page.| See [gathering required parameters](face-how-to-install-containers.md#gathering-required-parameters) for explicit examples. |
--
-> [!IMPORTANT]
-> The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](face-how-to-install-containers.md#billing).
-> The ApiKey value is the **Key** from the Azure `Cognitive Services` Resource keys page.
-
-## Face container Docker examples
-
-The following Docker examples are for the face container.
-
-### Basic example
-
- ```Docker
- docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
- containerpreview.azurecr.io/microsoft/cognitive-services-face \
- Eula=accept \
- Billing={ENDPOINT_URI} \
- ApiKey={API_KEY}
- ```
-
-### Logging example
-
- ```Docker
- docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 containerpreview.azurecr.io/microsoft/cognitive-services-face \
- Eula=accept \
- Billing={ENDPOINT_URI} ApiKey={API_KEY} \
- Logging:Console:LogLevel:Default=Information
- ```
-
-## Next steps
-
-* Review [How to install and run containers](face-how-to-install-containers.md)
cognitive-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-container-howto.md
Speech containers enable customers to build a speech application architecture th
| Container | Features | Latest | Release status |
|--|--|--|--|
-| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | 2.13.0 | Generally Available |
-| Custom Speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | 2.13.0 | Generally Available |
-| Text-to-speech | Converts text to natural-sounding speech with plain text input or Speech Synthesis Markup Language (SSML). | 1.14.1 | Generally Available |
+| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | 2.14.0 | Generally Available |
+| Custom Speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | 2.14.0 | Generally Available |
+| Text-to-speech | Converts text to natural-sounding speech with plain text input or Speech Synthesis Markup Language (SSML). | 1.15.0 | Generally Available |
| Speech Language Identification | Detects the language spoken in audio files. | 1.3.0 | Preview |
-| Neural Text-to-speech | Converts text to natural-sounding speech using deep neural network technology, allowing for more natural synthesized speech. | 1.8.0 | Generally Available |
+| Neural Text-to-speech | Converts text to natural-sounding speech using deep neural network technology, allowing for more natural synthesized speech. | 1.9.0 | Generally Available |
## Prerequisites
cognitive-services Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/containers/container-image-tags.md
The [Custom Speech-to-text][sp-cstt] container image can be found on the `mcr.mi
# [Latest version](#tab/current)
-Release note for `2.13.0-amd64`:
+Release note for `2.14.0-amd64`:
Regular monthly release
Note that due to the phrase lists feature, the size of this container image has
| Image Tags | Notes | Digest |
|--|:--|:--|
-| `latest` | | `sha256:55ff552d0c593a4ddbed0ae0dede758f93011a165f1afd6738ba906a7e24eeee`|
-| `2.13.0-amd64` | | `sha256:55ff552d0c593a4ddbed0ae0dede758f93011a165f1afd6738ba906a7e24eeee`|
+| `latest` | | `sha256:c83c4691f89dfcad9c92d8c73e24b23946706936e2c8a76b1cb278260448ebb9`|
+| `2.14.0-amd64` | | `sha256:c83c4691f89dfcad9c92d8c73e24b23946706936e2c8a76b1cb278260448ebb9`|
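
To pin a deployment to this release, you can pull by version tag or, for an immutable reference, by digest. The following is a sketch that assumes the `mcr.microsoft.com/azurecognitiveservices/speechservices/custom-speech-to-text` repository path:

```bash
# Pull by version tag (a tag can be repointed over time).
docker pull mcr.microsoft.com/azurecognitiveservices/speechservices/custom-speech-to-text:2.14.0-amd64

# Pull by digest for an immutable, reproducible reference.
docker pull mcr.microsoft.com/azurecognitiveservices/speechservices/custom-speech-to-text@sha256:c83c4691f89dfcad9c92d8c73e24b23946706936e2c8a76b1cb278260448ebb9
```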
# [Previous version](#tab/previous)
+Release note for `2.13.0-amd64`:
+
+Regular monthly release
+
Release note for `2.12.1-amd64`:

Regular monthly release
The [Custom Text-to-speech][sp-ctts] container image can be found on the `mcr.mi
# [Latest version](#tab/current)
-Release note for `1.14.1-amd64`:
+Release note for `1.15.0-amd64`:
Regular monthly release

| Image Tags | Notes | Digest |
|--|:--|:--|
-| `latest` | | `sha256:1db1eea50b96fd56cf4e63ff22878a8da1130f8bfa497c9ce70fbe9db40e3d2c` |
-| `1.14.1-amd64` | | `sha256:1db1eea50b96fd56cf4e63ff22878a8da1130f8bfa497c9ce70fbe9db40e3d2c` |
+| `latest` | | `sha256:06eef68482a917a5c405b61146dc159cff6aef0bd8e13cfd8f669a79c6b1a071` |
+| `1.15.0-amd64` | | `sha256:06eef68482a917a5c405b61146dc159cff6aef0bd8e13cfd8f669a79c6b1a071` |
# [Previous version](#tab/previous)
+Release note for `1.14.1-amd64`:
+
+Regular monthly release
+
Release note for `1.13.0-amd64`:

**Fixes**
Since Speech-to-text v2.5.0, images are supported in the *US Government Virginia
# [Latest version](#tab/current)
-Release note for `2.13.0-amd64-<locale>`:
+Release note for `2.14.0-amd64-<locale>`:
Regular monthly release
Note that due to the phrase lists feature, the size of this container image has
| Image Tags | Notes |
|--|:--|
| `latest` | Container image with the `en-US` locale. |
-| `2.13.0-amd64-<locale>` | Replace `<locale>` with one of the available locales, listed below. For example `2.13.0-amd64-en-us`.|
+| `2.14.0-amd64-<locale>` | Replace `<locale>` with one of the available locales, listed below. For example `2.14.0-amd64-en-us`.|
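
For example, assuming the `mcr.microsoft.com/azurecognitiveservices/speechservices/speech-to-text` repository path, pulling the `en-US` image for this release looks like the following sketch:

```bash
docker pull mcr.microsoft.com/azurecognitiveservices/speechservices/speech-to-text:2.14.0-amd64-en-us
```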
This container has the following locales available.
-| Locale for v2.13.0 | Notes | Digest |
+| Locale for v2.14.0 | Notes | Digest |
|--|:--|:--|
-| `ar-ae` | Container image with the `ar-AE` locale. | `sha256:9114c6885513cc3ae8d3c9393d3f4f334bb68ff9e444734951f469f8d56fb41c` |
-| `ar-bh` | Container image with the `ar-BH` locale. | `sha256:924dc807076633f4e04f1f604c3db63d908a484c69459bf593d72b58d901cd43` |
-| `ar-eg` | Container image with the `ar-EG` locale. | `sha256:13387db275daf6375e12ce1da5b858493ab71b249a3759e438345ac32119c6b2` |
-| `ar-iq` | Container image with the `ar-IQ` locale. | `sha256:2e8bea90f7a106a94e36d9c90e767c58cd8004a61880af53bd4ffb4292a655fe` |
-| `ar-jo` | Container image with the `ar-JO` locale. | `sha256:23c8529ee0e91fee549523021711a755da4c249f21493a1864a64941b36e2986` |
-| `ar-kw` | Container image with the `ar-KW` locale. | `sha256:9114c6885513cc3ae8d3c9393d3f4f334bb68ff9e444734951f469f8d56fb41c` |
-| `ar-lb` | Container image with the `ar-LB` locale. | `sha256:70bbb43641f22e96e70d3b5723b2599dd83533f33d979ff9dfb04a627799f4d1` |
-| `ar-om` | Container image with the `ar-OM` locale. | `sha256:f6fc1c1bcb7d20f2daa30506a039d16ad0537a60c01e41b399159704a001fe42` |
-| `ar-qa` | Container image with the `ar-QA` locale. | `sha256:9114c6885513cc3ae8d3c9393d3f4f334bb68ff9e444734951f469f8d56fb41c` |
-| `ar-sa` | Container image with the `ar-SA` locale. | `sha256:9114c6885513cc3ae8d3c9393d3f4f334bb68ff9e444734951f469f8d56fb41c` |
-| `ar-sy` | Container image with the `ar-SY` locale. | `sha256:218c1f57623b81770c22c7f871bce58a3227ef5fcbe7581e18a69f77107b5c96` |
-| `bg-bg` | Container image with the `bg-BG` locale. | `sha256:9537460403216802831fa02a6eb3bf7a3f6e1e6669953ab4ae9c98ea6283799a` |
-| `ca-es` | Container image with the `ca-ES` locale. | `sha256:94f68e496546eb3c33cf07b7f88807fa23c3f9d5022c2e630b589e29951f0538` |
-| `cs-cz` | Container image with the `cs-CZ` locale. | `sha256:10de908ebf603c6b3a2a937edc870d5fe1c4dc6bc9bb7e1f0eca9b9ed2b19a88` |
-| `da-dk` | Container image with the `da-DK` locale. | `sha256:cf03effc2a616b8fea8eacf7d45728cd00b9948f4f3e55d692db0125c51881da` |
-| `de-de` | Container image with the `de-DE` locale. | `sha256:9c9a51d595253c54811ba8d7502799b638f6332c0524fca2543f20efb76c7337` |
-| `el-gr` | Container image with the `el-GR` locale. | `sha256:6bb17c45a291f6293970a4de7bfdc9e31fdffedf80e76f66bca3cab118f76252` |
-| `en-au` | Container image with the `en-AU` locale. | `sha256:1e58c2e2416208b658d18fc4bf6374d6032710ff29c09f125c6d19a4d6609e92` |
-| `en-ca` | Container image with the `en-CA` locale. | `sha256:f0c4da3aa11f9eb72adbc7eab0c18047eec5016ec8c2fec2f1132ddceb3b6f3a` |
-| `en-gb` | Container image with the `en-GB` locale. | `sha256:4d0917974effee44ebf1721e9c0d9a3a2ab957613ce3862fe99062add5d5d08a` |
-| `en-hk` | Container image with the `en-HK` locale. | `sha256:b72a01b0cfaa97ea6102b48acb0a546501bb63618ee4ec9b892bdbdc6fd7ce8c` |
-| `en-ie` | Container image with the `en-IE` locale. | `sha256:d26f56f1f4c41b1c035eb47950cb5bc6bd86cbe07ef08c2276275a46ac4c4ad4` |
-| `en-in` | Container image with the `en-IN` locale. | `sha256:0ad933b9b3626d21d8ac0320f7fb4c72bcf6767258e39ac57698ce0269ed7750` |
-| `en-nz` | Container image with the `en-NZ` locale. | `sha256:d6f9344f7cf0b827b63fb91c31e490546732e8a6c93080e925cd922458ae3695` |
-| `en-ph` | Container image with the `en-PH` locale. | `sha256:dbd1fe80e1801b5fa7e468365f469c1b5770b0f27f2e5afb90c25a74702a0a21` |
-| `en-sg` | Container image with the `en-SG` locale. | `sha256:f234725e54af7bda1c6baa7e9f907b703a85118d65249ca0c050c52109397cc6` |
-| `en-us` | Container image with the `en-US` locale. | `sha256:88dd53d975829707f6ef91ad91aec9ed5fd12df8f4ef33e8c3bdf4701eaaca84` |
-| `en-za` | Container image with the `en-ZA` locale. | `sha256:502693715b8b666a9c10084c733848f95201e9882f9bfae7df770bd9dc8bb983` |
-| `es-ar` | Container image with the `es-AR` locale. | `sha256:6aa4f300639f7ee958adced5e7e5867e7f4d4093f2ca953f3ee5da9128bf08f6` |
-| `es-bo` | Container image with the `es-BO` locale. | `sha256:60f01882b393e00743c61c783e98c1cdcf73097c555999f10e5612b06b5afa90` |
-| `es-cl` | Container image with the `es-CL` locale. | `sha256:7b58b3a823c0fff1b92e46dd848610f2c9dcae5be0463845292e810d3efa1b1b` |
-| `es-co` | Container image with the `es-CO` locale. | `sha256:c51291acc65e1a839477f9bdbd042e4c81d2e638f48a00b6ca423023c9fd6c2c` |
-| `es-cr` | Container image with the `es-CR` locale. | `sha256:085b3bf2869fcedb56745e6adc98f2a332d57d0b1ac66cc219cec436a884d7d5` |
-| `es-cu` | Container image with the `es-CU` locale. | `sha256:43e5425cab3f708ed8632152514f4152f45a19953758fb7b5ebe9f4a767bcfdb` |
-| `es-do` | Container image with the `es-DO` locale. | `sha256:249f3165e0347b223ff06e34c309a753965a3df55bda2a78e04d86c946205d06` |
-| `es-ec` | Container image with the `es-EC` locale. | `sha256:624eeed264f25bab59a7723c6e6c3ae760bc63c46ebe3bcd3db171220682c14d` |
-| `es-es` | Container image with the `es-ES` locale. | `sha256:6d2d41e3b78ebba9d5d46fc8bddb90d0d69680a904774f5da1fa01eb4efd68e1` |
-| `es-gt` | Container image with the `es-GT` locale. | `sha256:ce4b4b761d1a2ca2b657b877c46a341a83f0b1a46447007262c051f6785b7312` |
-| `es-hn` | Container image with the `es-HN` locale. | `sha256:d4ecebce65a18763ac1126bf83706e49ebed80b79255e3820a68e97037d2a501` |
-| `es-mx` | Container image with the `es-MX` locale. | `sha256:c3088a60818b85cd0f04445837ea0ddcb6e7ac4f77269471717002166195d6d2` |
-| `es-ni` | Container image with the `es-NI` locale. | `sha256:1d88e66f6fd86ddf6e47596d2e2b9b3fe64ea7e72f6c4c965d3f1c5b98592e1b` |
-| `es-pa` | Container image with the `es-PA` locale. | `sha256:bb07eb832bcd23f302f0a7b6c4e87bf33186a47ed154ac8b42a1f6dea0f35432` |
-| `es-pe` | Container image with the `es-PE` locale. | `sha256:b726f92daf85c8aa6b169767efdb2af1691ddb7b21b8af3e9afcb984f41d8539` |
-| `es-pr` | Container image with the `es-PR` locale. | `sha256:660a5f9e13d62a963c9c92219f8268ad7f7af5ed08890534679e143cff184004` |
-| `es-py` | Container image with the `es-PY` locale. | `sha256:cb708bc008a59ac35e292094eba912af741c49eb7e67c2df3c1023ab41a6d454` |
-| `es-sv` | Container image with the `es-SV` locale. | `sha256:acd788410f8f6f8c269c85e6c70365e751a92976d61b34b7435766c0ae2fd11a` |
-| `es-us` | Container image with the `es-US` locale. | `sha256:f7ef486a64a413f7d69510f25a39ddce9653265852da1b3cc438000f1bbfa368` |
-| `es-uy` | Container image with the `es-UY` locale. | `sha256:7f6975423cbcf201e318bea9865e93a8e4a6a241b472845d90a877400470338b` |
-| `es-ve` | Container image with the `es-VE` locale. | `sha256:e2f498c4a19f88779dfae350e0cefb4f0aa1c518c18f43139d4bec6a4f655f45` |
-| `et-ee` | Container image with the `et-EE` locale. | `sha256:66ec075ea26141d73e07a223f72f10ea8237d0d9675e67d569f026ca6125cd95` |
-| `fi-fi` | Container image with the `fi-FI` locale. | `sha256:34b4ee60880d310aa08f1584c2f8d1a9a0236ac0067b9d8ad8bf5057749f2d9b` |
-| `fr-ca` | Container image with the `fr-CA` locale. | `sha256:709bc27ebd387cc18d3d16136280234f64c4ba28f05383a52e0bbe066574105a` |
-| `fr-fr` | Container image with the `fr-FR` locale. | `sha256:cfd3140a3c7a5234c0273e34b9b124897cff6c2d11403217096616dd34c14e38` |
-| `ga-ie` | Container image with the `ga-IE` locale. | `sha256:f03b3407772d4a5be1642ff0f78c64283c2e8fd9b473f8bab90864a59d4f8a4a` |
-| `gu-in` | Container image with the `gu-IN` locale. | `sha256:c67190092fcf7af406406e5906d9de79a8fb37565e84b2dc0786caee0b5b27e2` |
-| `hi-in` | Container image with the `hi-IN` locale. | `sha256:eea6f9608d9802ac43e755de39d87e95e708d5c642f58de09863363051112540` |
-| `hr-hr` | Container image with the `hr-HR` locale. | `sha256:3943c40ef4696c44887d08a1cb911f535af451b811737b0101a4fa0ef4284d68` |
-| `hu-hu` | Container image with the `hu-HU` locale. | `sha256:52eb41ca6694497356cb23bd02daf4bb2408ffad418696aeb1bdf1f03c2e2845` |
-| `it-it` | Container image with the `it-IT` locale. | `sha256:70aa2b907f114278d839a958dea29c74b64cd1f7a5a0406194d2aa3583c12048` |
-| `ja-jp` | Container image with the `ja-JP` locale. | `sha256:14e222688387847f51fd858c5575e554046796090e41f072d6200d89f5608e4a` |
-| `ko-kr` | Container image with the `ko-KR` locale. | `sha256:8f3ed7b3896b205b5690e5515a5511581715e698cd6fe0704c153d35a4c9af80` |
-| `lt-lt` | Container image with the `lt-LT` locale. | `sha256:806572a1ae31575806062301d22233b753c415388184496ee67589ddbc264d49` |
-| `lv-lv` | Container image with the `lv-LV` locale. | `sha256:780444acc9be4514072926146c36b7ccce003f27577b339cf431fec2ca6d79f5` |
-| `mr-in` | Container image with the `mr-IN` locale. | `sha256:75460753cba8d45babaf859f94dfd1a1c75b312a841eacded099680dc77c2f89` |
-| `mt-mt` | Container image with the `mt-MT` locale. | `sha256:8d92a5f26100d309a11f05ce13e5e5a0f2bbc072df917af158cc251dc75a4d4f` |
-| `nb-no` | Container image with the `nb-NO` locale. | `sha256:d9c75c885591ced0e10cca5594ae5cf92cb1dde73306f8454737b7927aada89a` |
-| `nl-nl` | Container image with the `nl-NL` locale. | `sha256:15cc274d238cae2a1d9cabc3e5a71e4ba90ae6318fea63937c8830bd55da0fc2` |
-| `pl-pl` | Container image with the `pl-PL` locale. | `sha256:a45730afdc6d15060eff8526e1be08f679b25a2be26156d39266a40e6cd82bc9` |
-| `pt-br` | Container image with the `pt-BR` locale. | `sha256:8f578440ae5c9cd81eee18f68c677bb56ced7c6a6a217d98da60dc856fd2e002` |
-| `pt-pt` | Container image with the `pt-PT` locale. | `sha256:99fedeb4acc49fd3185d34532b1a7321931b17f2eda16ab8643312dbf8afcf38` |
-| `ro-ro` | Container image with the `ro-RO` locale. | `sha256:7677c49b2426fb26eff59a97a012d5890aa7fdbc09684ef0fb29fdbe63fac333` |
-| `ru-ru` | Container image with the `ru-RU` locale. | `sha256:452d269e8e12ae1379d4568bc1b15fefdd3679903365adb3a68bc6669c738615` |
-| `sk-sk` | Container image with the `sk-SK` locale. | `sha256:e6fd994a344b5452b4a5b90a499fed0681dd6ef2fab3db161d407cf4f45ff5dd` |
-| `sl-si` | Container image with the `sl-SI` locale. | `sha256:4df5fdc9732c07d479275561522ce34a38c3864098a56e12ec8329e40f4e6f2a` |
-| `sv-se` | Container image with the `sv-SE` locale. | `sha256:49180ac0eccee59a22800f4c1ae870e3a71543e46d2986fc82ec9b77c7de1ea0` |
-| `ta-in` | Container image with the `ta-IN` locale. | `sha256:a0c64efbf2d9d0a111efc79cc7b70e06ac01745de57d9c768f99c54ac5642cee` |
-| `te-in` | Container image with the `te-IN` locale. | `sha256:8811c30c10980a3ddf441f1d4e21240bfb8663af6200c2d666fdeb83f48a79c5` |
-| `th-th` | Container image with the `th-TH` locale. | `sha256:99860f484f52d9665f33d95659daa8aec5071fa5a97534d40ee4941690ce3e96` |
-| `tr-tr` | Container image with the `tr-TR` locale. | `sha256:170b56107ccb22335422c1838e368c0f5cb4518c3309e6259b754ede9e46ff51` |
-| `zh-cn` | Container image with the `zh-CN` locale. | `sha256:d8721f303ca0b24705c42e8c0f5d20dcafb3d00b278b7c363d1a4c129f5e2cbd` |
-| `zh-hk` | Container image with the `zh-HK` locale. | `sha256:12af9f057acec8231dcdeb1e4037ac53a95957796b5e8dbf48f55db6970a4431` |
-| `zh-tw` | Container image with the `zh-TW` locale. | `sha256:b2c1d333b7718c9cc2708287e388c45abcd28a3e8d7fc3c758cc4b73d2697662` |
+| `ar-ae` | Container image with the `ar-AE` locale. | `sha256:1f9fc0564b2ba2bdbeb5a3160e7afe6d867f3ad48cc90825054359f0f129b730` |
+| `ar-bh` | Container image with the `ar-BH` locale. | `sha256:7af3ad10e6095078ee67d0426863117e2c7c861299b3f9323b6f71a87bd7fc1a` |
+| `ar-eg` | Container image with the `ar-EG` locale. | `sha256:ebe36bd9689e12ed888a327de459b3ae26b261ff3371a696924b540ddd8375b9` |
+| `ar-iq` | Container image with the `ar-IQ` locale. | `sha256:dd46d062ba1b7ad03b59c9dd04816a139f976db04739788316431d223e0b5ea8` |
+| `ar-jo` | Container image with the `ar-JO` locale. | `sha256:10284e45719cc5ad1f0783807e5a3b731bb8728fc09e56206788fbb80f0943ee` |
+| `ar-kw` | Container image with the `ar-KW` locale. | `sha256:1f9fc0564b2ba2bdbeb5a3160e7afe6d867f3ad48cc90825054359f0f129b730` |
+| `ar-lb` | Container image with the `ar-LB` locale. | `sha256:1bf6456a34e1ae5f6797741039848020b1c4b7fb68f1816533091abf92be56c1` |
+| `ar-om` | Container image with the `ar-OM` locale. | `sha256:f1389c71a85ea2bc16c9f990bfcd74b0c8d92e576f0dfd5e68bdc9d1e860ba71` |
+| `ar-qa` | Container image with the `ar-QA` locale. | `sha256:1f9fc0564b2ba2bdbeb5a3160e7afe6d867f3ad48cc90825054359f0f129b730` |
+| `ar-sa` | Container image with the `ar-SA` locale. | `sha256:1f9fc0564b2ba2bdbeb5a3160e7afe6d867f3ad48cc90825054359f0f129b730` |
+| `ar-sy` | Container image with the `ar-SY` locale. | `sha256:58ffbf778fa71cacfdddcb6421d9e2514356b75797a3f0f689056c4e7267e527` |
+| `bg-bg` | Container image with the `bg-BG` locale. | `sha256:37baff85bfe5d78b3858c8f7bf921af4c8d73b02fa40b731a0843df908107eba` |
+| `ca-es` | Container image with the `ca-ES` locale. | `sha256:43abb6d9c2a85fb3f7daf757acccbc67058cd5d49d268ef043cf67851fe8d3b7` |
+| `cs-cz` | Container image with the `cs-CZ` locale. | `sha256:db9192414bc542b77670f4a281f0f2b818d23a95cba2751fa43bf60203942b81` |
+| `da-dk` | Container image with the `da-DK` locale. | `sha256:88c502880609a9cd2f35faa7d6d4a527e4e4bc80477deb21977086615677a700` |
+| `de-de` | Container image with the `de-DE` locale. | `sha256:3567cf9cfbc72a0280cc79b561e832c1a3a26d63ddcf41fa2d08e3a80e09b765` |
+| `el-gr` | Container image with the `el-GR` locale. | `sha256:484935e2a676d561c94a2e2a335f5328688e0b71a9683351ef1439a386e92651` |
+| `en-au` | Container image with the `en-AU` locale. | `sha256:b1d18f984bbb86f3cbcc9401608a31c85b9af1c0c6a6cc0f7366bda18e79d5f9` |
+| `en-ca` | Container image with the `en-CA` locale. | `sha256:c04c67628e49557136860cbb64ea350aee8f09ab0664ed00fa09cdeb3fb19726` |
+| `en-gb` | Container image with the `en-GB` locale. | `sha256:dad0620af6f3c4880914b9ca25266c6436d372127a41531f639bf682972197fe` |
+| `en-hk` | Container image with the `en-HK` locale. | `sha256:43dc4ea303c9509f562d50f470b3590beb755aab295b40d9de6a5d2f4ca62c04` |
+| `en-ie` | Container image with the `en-IE` locale. | `sha256:c8370e1398b7ec2b4ca88b4d2e6d62df9e4495c25644231abe59a2baa89287fe` |
+| `en-in` | Container image with the `en-IN` locale. | `sha256:cdf5a3f4dc32113b9fd7e667bddc36820ac359c65b860cc8b94faa6a6c5009b0` |
+| `en-nz` | Container image with the `en-NZ` locale. | `sha256:5d4d5811f02295831b90133aa47ca370a3243ea854ae52971c3439d156eb72c3` |
+| `en-ph` | Container image with the `en-PH` locale. | `sha256:051497eadedd0d9de7a36ce111ea2b82b37e2c98b3e8b06b408d32b791c4b76b` |
+| `en-sg` | Container image with the `en-SG` locale. | `sha256:d2b89396713a1188eef7873f479f8deca9ba2a80c43e19a7d227a6f73509010a` |
+| `en-us` | Container image with the `en-US` locale. | `sha256:5f66867b47fd9fd8d1bc67c05da8ae775f937ab208c192f4199d2de7b30e5aa6` |
+| `en-za` | Container image with the `en-ZA` locale. | `sha256:e08d7cc82725de9ff7aa392a08dc484407f60c950b85525f337c42ff274aff11` |
+| `es-ar` | Container image with the `es-AR` locale. | `sha256:126d73f1cb82c3e2b8995afab07a9d6470ca7b236681ef7aff5194827df52008` |
+| `es-bo` | Container image with the `es-BO` locale. | `sha256:7cfe66dc2bcc9c7b975841954735061e0b287664083f35bb75d226015cd32805` |
+| `es-cl` | Container image with the `es-CL` locale. | `sha256:27c45610f38099a50934e214b75bbb578d3ed61fb982e49427985ac76f7be9d1` |
+| `es-co` | Container image with the `es-CO` locale. | `sha256:b06e4a35f6ad8b195870dfa9816fb81016a9cbdd8adb3c31f30dc09e370f17d4` |
+| `es-cr` | Container image with the `es-CR` locale. | `sha256:8c50b7e3847f095de6bd7599c7a953b82fca9f849411cc7407b20b805c5132a7` |
+| `es-cu` | Container image with the `es-CU` locale. | `sha256:73434492751b1ff9e2a3f141f0c5857d5cf2c5891f1084ce981181cdcb115e69` |
+| `es-do` | Container image with the `es-DO` locale. | `sha256:769db62fab433e1337a8a47db155127e887de4826d035e05345ccd86c9668c72` |
+| `es-ec` | Container image with the `es-EC` locale. | `sha256:e765c40a9b09fc4d9f42a9ef4bd138181c4f4826e63af1787453c10b10cc1554` |
+| `es-es` | Container image with the `es-ES` locale. | `sha256:b589b794035513de33740d5b5b6ecdda04fd059e9efcb1110525d4b076168cf6` |
+| `es-gt` | Container image with the `es-GT` locale. | `sha256:b30b4a330b7e74777e5d2575b1c2bd4dbb7a920d7165dad40b00765a0b04c564` |
+| `es-hn` | Container image with the `es-HN` locale. | `sha256:8d29f96322db11e99200cf14390e225d8c332b3a6848eb6d23d099bfe552dd99` |
+| `es-mx` | Container image with the `es-MX` locale. | `sha256:bf6edf5bd61b57095181546579df3033e26aca7261e822b932137eca6078a947` |
+| `es-ni` | Container image with the `es-NI` locale. | `sha256:a02537bbfd3a4231938a321dbf9a575178018122aa4c387ebc9bbae070f35152` |
+| `es-pa` | Container image with the `es-PA` locale. | `sha256:82f14c7711bcc02b82b75e3be1620b528991e4c5f1859155926bbb58c4941858` |
+| `es-pe` | Container image with the `es-PE` locale. | `sha256:500fa361a26d3da4e1d9c2523b232ab0d6c00ab4a15141bed6086e39022b79d4` |
+| `es-pr` | Container image with the `es-PR` locale. | `sha256:e364eec54c48e2bb5c14d53356ac3942f8bfdd65d7e139c86ce79fa9f85d4634` |
+| `es-py` | Container image with the `es-PY` locale. | `sha256:165bab6f0a5a12c58c8ec04e1b5168228f97031b80ca368529576d507152ccc0` |
+| `es-sv` | Container image with the `es-SV` locale. | `sha256:42855d56f39956d456c5337b022c71352bd54bfd2d7a6a9a9ffeda1b821c938d` |
+| `es-us` | Container image with the `es-US` locale. | `sha256:622193b64874a3169c21862762017c4b9a46590057e388330fba80ab775e7909` |
+| `es-uy` | Container image with the `es-UY` locale. | `sha256:a33ca1aeb6181f6b034ce831aaf3ca1da0df8260b08c87749eceafce6346afd7` |
+| `es-ve` | Container image with the `es-VE` locale. | `sha256:6cf2522c507c77ccf7fa61ef4b54e8dc4a3f3b47b7602aa8306ae867bc53c8c2` |
+| `et-ee` | Container image with the `et-EE` locale. | `sha256:e6672fc9da94245f7d75d91b7d09e4b929d60ef0157d28885e057eef20e347d8` |
+| `fi-fi` | Container image with the `fi-FI` locale. | `sha256:3731f5a8baeebf02300f3d40a3ea3e3fdf343b817122eba47a0188b5568e1666` |
+| `fr-ca` | Container image with the `fr-CA` locale. | `sha256:1567e85ffe2e2660585a40b43e61fa936abf4ccdd8eb89a37294c2bad83f70b3` |
+| `fr-fr` | Container image with the `fr-FR` locale. | `sha256:bea4884eee3382741e1d02d99a530b608d30e48a446ba52a73749bba7ad34fa7` |
+| `ga-ie` | Container image with the `ga-IE` locale. | `sha256:f40151a6519e0678969adb2d492f240e355ac7ac9b4f57f75e0eff878741d33c` |
+| `gu-in` | Container image with the `gu-IN` locale. | `sha256:ab196670c90b23d8a40448431f703483da9605f92e71f6ebc75a72cfc38e1598` |
+| `hi-in` | Container image with the `hi-IN` locale. | `sha256:ec26ed76cde3ae36eae3a5f0b6567ac60ea341e2f95838a84524819829967f4f` |
+| `hr-hr` | Container image with the `hr-HR` locale. | `sha256:054f188fd9be04f57858053a0a6c1146c13c2781eecd732c137807ee94009d22` |
+| `hu-hu` | Container image with the `hu-HU` locale. | `sha256:ea6bf9b3b4bfa1c4a25f1890c90a18fc309ef17afd6044cb66ed0fc31083c957` |
+| `it-it` | Container image with the `it-IT` locale. | `sha256:6869362837c0124964ed75ab8901fcfede3894902018b41b400a55a0b5f20cf0` |
+| `ja-jp` | Container image with the `ja-JP` locale. | `sha256:7f8557bc112fd4ffef29df308bd10c0803cce1f1f6e79e69f88be53d636aeaf6` |
+| `ko-kr` | Container image with the `ko-KR` locale. | `sha256:746602c288d80c0599af276acd50f1434a331f288c95a3cf9ba269386a9a3929` |
+| `lt-lt` | Container image with the `lt-LT` locale. | `sha256:b6f432bc80770f13ca537a2b49a4e89e8f979a8167f2cdaaa8f1242bb3edc97b` |
+| `lv-lv` | Container image with the `lv-LV` locale. | `sha256:f2126bc8886218374550f2f9a941500cf48675abdb70332990e90e456c332f5d` |
+| `mr-in` | Container image with the `mr-IN` locale. | `sha256:82c75a0c741543c2195d271dd82bfd4400901204584d9f7b83d154d418b3eea5` |
+| `mt-mt` | Container image with the `mt-MT` locale. | `sha256:e95f2edc5bc2090e0359c63047c4c5c879522080f8bf7cbc9484d1854b606a12` |
+| `nb-no` | Container image with the `nb-NO` locale. | `sha256:49c5d9b0d9de260d88deda5eeaa09979f44972610b26ddfad7969c91278f055b` |
+| `nl-nl` | Container image with the `nl-NL` locale. | `sha256:3c5789fbb82c62eaa68451d391ec736ae78c298248f3afba027172c477609489` |
+| `pl-pl` | Container image with the `pl-PL` locale. | `sha256:79a2bb077362c29495fdbee7fc6c8fd0990f080718390fb469ae1f01051d597d` |
+| `pt-br` | Container image with the `pt-BR` locale. | `sha256:ef80359958fdf6b07461c3742c3f860c22652ccf9d123693a9947d11626531db` |
+| `pt-pt` | Container image with the `pt-PT` locale. | `sha256:000345b6a1a28cb5970c471e47963f10209acee74cd46afa7d41310a397c9b61` |
+| `ro-ro` | Container image with the `ro-RO` locale. | `sha256:c4a996b483f91f278f42f1696ed1d89d2e4ee8c0ed409e9d21d471303f78bb71` |
+| `ru-ru` | Container image with the `ru-RU` locale. | `sha256:396bab6bcfe341b53b0992fc2aaf4809767d47b8637f6fd21448dee899e5480f` |
+| `sk-sk` | Container image with the `sk-SK` locale. | `sha256:73624839708f88c93645a6a35278d0c2ce9a944c5992e237a4096be9142b74a0` |
+| `sl-si` | Container image with the `sl-SI` locale. | `sha256:86fbdc4e994507b020ff27735b741407d9f7d1e01fce2b17e610dfc9c16d3af9` |
+| `sv-se` | Container image with the `sv-SE` locale. | `sha256:c3ee782b60499ef16127b5829c36fd98c1933890752fdc4af6cc34b7a90747d9` |
+| `ta-in` | Container image with the `ta-IN` locale. | `sha256:d10ced4e32336d4b411e65066dd5486733d19b8d1f4756d60602117bce6557b7` |
+| `te-in` | Container image with the `te-IN` locale. | `sha256:ebcbaec4e3a494099c7edf15b22acbd7b29347fe7cb4825d13504474e72a5900` |
+| `th-th` | Container image with the `th-TH` locale. | `sha256:bb239e9081d9cf4fffeee666346cc3c67ce83b9bda1d2e81a09fdbdf705a7c46` |
+| `tr-tr` | Container image with the `tr-TR` locale. | `sha256:68d554ae90b0a2202be5f09fb03a7d277f7d0cb0336cb0510bb09ecd7f42eb12` |
+| `zh-cn` | Container image with the `zh-CN` locale. | `sha256:2ec742699abb843b91f9516cb863d66ecf5f38d5350c3c23c693dcb2f5804c66` |
+| `zh-hk` | Container image with the `zh-HK` locale. | `sha256:34f21fed7129dbeaef6476b286e5d6741b635f034ea038b8fe467512ee0092e2` |
+| `zh-tw` | Container image with the `zh-TW` locale. | `sha256:f2e2dc638ac2e58177302947df30bea7448563a012deb3e4f48f345c09902bb0` |
# [Previous version](#tab/previous)
+Release note for `2.13.0-amd64-<locale>`:
+
+Regular monthly release
+
Release note for `2.12.1-amd64-<locale>`:

**Feature**
Release note for `2.5.0-amd64-<locale>`:
| Image Tags | Notes |
|--|:--|
+| `2.13.0-amd64-<locale>` | Replace `<locale>` with one of the available locales, listed below. For example `2.13.0-amd64-en-us`.|
| `2.12.1-amd64-<locale>` | Replace `<locale>` with one of the available locales, listed below. For example `2.12.1-amd64-en-us`.|
| `2.11.0-amd64-<locale>` | Replace `<locale>` with one of the available locales, listed below. For example `2.11.0-amd64-en-us`.|
| `2.10.0-amd64-<locale>` | Replace `<locale>` with one of the available locales, listed below. For example `2.10.0-amd64-en-us`.|
Release note for `2.5.0-amd64-<locale>`:
This container has the following locales available.
+| Locale for v2.13.0 | Notes | Digest |
+|--|:--|:--|
+| `ar-ae` | Container image with the `ar-AE` locale. | `sha256:9114c6885513cc3ae8d3c9393d3f4f334bb68ff9e444734951f469f8d56fb41c` |
+| `ar-bh` | Container image with the `ar-BH` locale. | `sha256:924dc807076633f4e04f1f604c3db63d908a484c69459bf593d72b58d901cd43` |
+| `ar-eg` | Container image with the `ar-EG` locale. | `sha256:13387db275daf6375e12ce1da5b858493ab71b249a3759e438345ac32119c6b2` |
+| `ar-iq` | Container image with the `ar-IQ` locale. | `sha256:2e8bea90f7a106a94e36d9c90e767c58cd8004a61880af53bd4ffb4292a655fe` |
+| `ar-jo` | Container image with the `ar-JO` locale. | `sha256:23c8529ee0e91fee549523021711a755da4c249f21493a1864a64941b36e2986` |
+| `ar-kw` | Container image with the `ar-KW` locale. | `sha256:9114c6885513cc3ae8d3c9393d3f4f334bb68ff9e444734951f469f8d56fb41c` |
+| `ar-lb` | Container image with the `ar-LB` locale. | `sha256:70bbb43641f22e96e70d3b5723b2599dd83533f33d979ff9dfb04a627799f4d1` |
+| `ar-om` | Container image with the `ar-OM` locale. | `sha256:f6fc1c1bcb7d20f2daa30506a039d16ad0537a60c01e41b399159704a001fe42` |
+| `ar-qa` | Container image with the `ar-QA` locale. | `sha256:9114c6885513cc3ae8d3c9393d3f4f334bb68ff9e444734951f469f8d56fb41c` |
+| `ar-sa` | Container image with the `ar-SA` locale. | `sha256:9114c6885513cc3ae8d3c9393d3f4f334bb68ff9e444734951f469f8d56fb41c` |
+| `ar-sy` | Container image with the `ar-SY` locale. | `sha256:218c1f57623b81770c22c7f871bce58a3227ef5fcbe7581e18a69f77107b5c96` |
+| `bg-bg` | Container image with the `bg-BG` locale. | `sha256:9537460403216802831fa02a6eb3bf7a3f6e1e6669953ab4ae9c98ea6283799a` |
+| `ca-es` | Container image with the `ca-ES` locale. | `sha256:94f68e496546eb3c33cf07b7f88807fa23c3f9d5022c2e630b589e29951f0538` |
+| `cs-cz` | Container image with the `cs-CZ` locale. | `sha256:10de908ebf603c6b3a2a937edc870d5fe1c4dc6bc9bb7e1f0eca9b9ed2b19a88` |
+| `da-dk` | Container image with the `da-DK` locale. | `sha256:cf03effc2a616b8fea8eacf7d45728cd00b9948f4f3e55d692db0125c51881da` |
+| `de-de` | Container image with the `de-DE` locale. | `sha256:9c9a51d595253c54811ba8d7502799b638f6332c0524fca2543f20efb76c7337` |
+| `el-gr` | Container image with the `el-GR` locale. | `sha256:6bb17c45a291f6293970a4de7bfdc9e31fdffedf80e76f66bca3cab118f76252` |
+| `en-au` | Container image with the `en-AU` locale. | `sha256:1e58c2e2416208b658d18fc4bf6374d6032710ff29c09f125c6d19a4d6609e92` |
+| `en-ca` | Container image with the `en-CA` locale. | `sha256:f0c4da3aa11f9eb72adbc7eab0c18047eec5016ec8c2fec2f1132ddceb3b6f3a` |
+| `en-gb` | Container image with the `en-GB` locale. | `sha256:4d0917974effee44ebf1721e9c0d9a3a2ab957613ce3862fe99062add5d5d08a` |
+| `en-hk` | Container image with the `en-HK` locale. | `sha256:b72a01b0cfaa97ea6102b48acb0a546501bb63618ee4ec9b892bdbdc6fd7ce8c` |
+| `en-ie` | Container image with the `en-IE` locale. | `sha256:d26f56f1f4c41b1c035eb47950cb5bc6bd86cbe07ef08c2276275a46ac4c4ad4` |
+| `en-in` | Container image with the `en-IN` locale. | `sha256:0ad933b9b3626d21d8ac0320f7fb4c72bcf6767258e39ac57698ce0269ed7750` |
+| `en-nz` | Container image with the `en-NZ` locale. | `sha256:d6f9344f7cf0b827b63fb91c31e490546732e8a6c93080e925cd922458ae3695` |
+| `en-ph` | Container image with the `en-PH` locale. | `sha256:dbd1fe80e1801b5fa7e468365f469c1b5770b0f27f2e5afb90c25a74702a0a21` |
+| `en-sg` | Container image with the `en-SG` locale. | `sha256:f234725e54af7bda1c6baa7e9f907b703a85118d65249ca0c050c52109397cc6` |
+| `en-us` | Container image with the `en-US` locale. | `sha256:88dd53d975829707f6ef91ad91aec9ed5fd12df8f4ef33e8c3bdf4701eaaca84` |
+| `en-za` | Container image with the `en-ZA` locale. | `sha256:502693715b8b666a9c10084c733848f95201e9882f9bfae7df770bd9dc8bb983` |
+| `es-ar` | Container image with the `es-AR` locale. | `sha256:6aa4f300639f7ee958adced5e7e5867e7f4d4093f2ca953f3ee5da9128bf08f6` |
+| `es-bo` | Container image with the `es-BO` locale. | `sha256:60f01882b393e00743c61c783e98c1cdcf73097c555999f10e5612b06b5afa90` |
+| `es-cl` | Container image with the `es-CL` locale. | `sha256:7b58b3a823c0fff1b92e46dd848610f2c9dcae5be0463845292e810d3efa1b1b` |
+| `es-co` | Container image with the `es-CO` locale. | `sha256:c51291acc65e1a839477f9bdbd042e4c81d2e638f48a00b6ca423023c9fd6c2c` |
+| `es-cr` | Container image with the `es-CR` locale. | `sha256:085b3bf2869fcedb56745e6adc98f2a332d57d0b1ac66cc219cec436a884d7d5` |
+| `es-cu` | Container image with the `es-CU` locale. | `sha256:43e5425cab3f708ed8632152514f4152f45a19953758fb7b5ebe9f4a767bcfdb` |
+| `es-do` | Container image with the `es-DO` locale. | `sha256:249f3165e0347b223ff06e34c309a753965a3df55bda2a78e04d86c946205d06` |
+| `es-ec` | Container image with the `es-EC` locale. | `sha256:624eeed264f25bab59a7723c6e6c3ae760bc63c46ebe3bcd3db171220682c14d` |
+| `es-es` | Container image with the `es-ES` locale. | `sha256:6d2d41e3b78ebba9d5d46fc8bddb90d0d69680a904774f5da1fa01eb4efd68e1` |
+| `es-gt` | Container image with the `es-GT` locale. | `sha256:ce4b4b761d1a2ca2b657b877c46a341a83f0b1a46447007262c051f6785b7312` |
+| `es-hn` | Container image with the `es-HN` locale. | `sha256:d4ecebce65a18763ac1126bf83706e49ebed80b79255e3820a68e97037d2a501` |
+| `es-mx` | Container image with the `es-MX` locale. | `sha256:c3088a60818b85cd0f04445837ea0ddcb6e7ac4f77269471717002166195d6d2` |
+| `es-ni` | Container image with the `es-NI` locale. | `sha256:1d88e66f6fd86ddf6e47596d2e2b9b3fe64ea7e72f6c4c965d3f1c5b98592e1b` |
+| `es-pa` | Container image with the `es-PA` locale. | `sha256:bb07eb832bcd23f302f0a7b6c4e87bf33186a47ed154ac8b42a1f6dea0f35432` |
+| `es-pe` | Container image with the `es-PE` locale. | `sha256:b726f92daf85c8aa6b169767efdb2af1691ddb7b21b8af3e9afcb984f41d8539` |
+| `es-pr` | Container image with the `es-PR` locale. | `sha256:660a5f9e13d62a963c9c92219f8268ad7f7af5ed08890534679e143cff184004` |
+| `es-py` | Container image with the `es-PY` locale. | `sha256:cb708bc008a59ac35e292094eba912af741c49eb7e67c2df3c1023ab41a6d454` |
+| `es-sv` | Container image with the `es-SV` locale. | `sha256:acd788410f8f6f8c269c85e6c70365e751a92976d61b34b7435766c0ae2fd11a` |
+| `es-us` | Container image with the `es-US` locale. | `sha256:f7ef486a64a413f7d69510f25a39ddce9653265852da1b3cc438000f1bbfa368` |
+| `es-uy` | Container image with the `es-UY` locale. | `sha256:7f6975423cbcf201e318bea9865e93a8e4a6a241b472845d90a877400470338b` |
+| `es-ve` | Container image with the `es-VE` locale. | `sha256:e2f498c4a19f88779dfae350e0cefb4f0aa1c518c18f43139d4bec6a4f655f45` |
+| `et-ee` | Container image with the `et-EE` locale. | `sha256:66ec075ea26141d73e07a223f72f10ea8237d0d9675e67d569f026ca6125cd95` |
+| `fi-fi` | Container image with the `fi-FI` locale. | `sha256:34b4ee60880d310aa08f1584c2f8d1a9a0236ac0067b9d8ad8bf5057749f2d9b` |
+| `fr-ca` | Container image with the `fr-CA` locale. | `sha256:709bc27ebd387cc18d3d16136280234f64c4ba28f05383a52e0bbe066574105a` |
+| `fr-fr` | Container image with the `fr-FR` locale. | `sha256:cfd3140a3c7a5234c0273e34b9b124897cff6c2d11403217096616dd34c14e38` |
+| `ga-ie` | Container image with the `ga-IE` locale. | `sha256:f03b3407772d4a5be1642ff0f78c64283c2e8fd9b473f8bab90864a59d4f8a4a` |
+| `gu-in` | Container image with the `gu-IN` locale. | `sha256:c67190092fcf7af406406e5906d9de79a8fb37565e84b2dc0786caee0b5b27e2` |
+| `hi-in` | Container image with the `hi-IN` locale. | `sha256:eea6f9608d9802ac43e755de39d87e95e708d5c642f58de09863363051112540` |
+| `hr-hr` | Container image with the `hr-HR` locale. | `sha256:3943c40ef4696c44887d08a1cb911f535af451b811737b0101a4fa0ef4284d68` |
+| `hu-hu` | Container image with the `hu-HU` locale. | `sha256:52eb41ca6694497356cb23bd02daf4bb2408ffad418696aeb1bdf1f03c2e2845` |
+| `it-it` | Container image with the `it-IT` locale. | `sha256:70aa2b907f114278d839a958dea29c74b64cd1f7a5a0406194d2aa3583c12048` |
+| `ja-jp` | Container image with the `ja-JP` locale. | `sha256:14e222688387847f51fd858c5575e554046796090e41f072d6200d89f5608e4a` |
+| `ko-kr` | Container image with the `ko-KR` locale. | `sha256:8f3ed7b3896b205b5690e5515a5511581715e698cd6fe0704c153d35a4c9af80` |
+| `lt-lt` | Container image with the `lt-LT` locale. | `sha256:806572a1ae31575806062301d22233b753c415388184496ee67589ddbc264d49` |
+| `lv-lv` | Container image with the `lv-LV` locale. | `sha256:780444acc9be4514072926146c36b7ccce003f27577b339cf431fec2ca6d79f5` |
+| `mr-in` | Container image with the `mr-IN` locale. | `sha256:75460753cba8d45babaf859f94dfd1a1c75b312a841eacded099680dc77c2f89` |
+| `mt-mt` | Container image with the `mt-MT` locale. | `sha256:8d92a5f26100d309a11f05ce13e5e5a0f2bbc072df917af158cc251dc75a4d4f` |
+| `nb-no` | Container image with the `nb-NO` locale. | `sha256:d9c75c885591ced0e10cca5594ae5cf92cb1dde73306f8454737b7927aada89a` |
+| `nl-nl` | Container image with the `nl-NL` locale. | `sha256:15cc274d238cae2a1d9cabc3e5a71e4ba90ae6318fea63937c8830bd55da0fc2` |
+| `pl-pl` | Container image with the `pl-PL` locale. | `sha256:a45730afdc6d15060eff8526e1be08f679b25a2be26156d39266a40e6cd82bc9` |
+| `pt-br` | Container image with the `pt-BR` locale. | `sha256:8f578440ae5c9cd81eee18f68c677bb56ced7c6a6a217d98da60dc856fd2e002` |
+| `pt-pt` | Container image with the `pt-PT` locale. | `sha256:99fedeb4acc49fd3185d34532b1a7321931b17f2eda16ab8643312dbf8afcf38` |
+| `ro-ro` | Container image with the `ro-RO` locale. | `sha256:7677c49b2426fb26eff59a97a012d5890aa7fdbc09684ef0fb29fdbe63fac333` |
+| `ru-ru` | Container image with the `ru-RU` locale. | `sha256:452d269e8e12ae1379d4568bc1b15fefdd3679903365adb3a68bc6669c738615` |
+| `sk-sk` | Container image with the `sk-SK` locale. | `sha256:e6fd994a344b5452b4a5b90a499fed0681dd6ef2fab3db161d407cf4f45ff5dd` |
+| `sl-si` | Container image with the `sl-SI` locale. | `sha256:4df5fdc9732c07d479275561522ce34a38c3864098a56e12ec8329e40f4e6f2a` |
+| `sv-se` | Container image with the `sv-SE` locale. | `sha256:49180ac0eccee59a22800f4c1ae870e3a71543e46d2986fc82ec9b77c7de1ea0` |
+| `ta-in` | Container image with the `ta-IN` locale. | `sha256:a0c64efbf2d9d0a111efc79cc7b70e06ac01745de57d9c768f99c54ac5642cee` |
+| `te-in` | Container image with the `te-IN` locale. | `sha256:8811c30c10980a3ddf441f1d4e21240bfb8663af6200c2d666fdeb83f48a79c5` |
+| `th-th` | Container image with the `th-TH` locale. | `sha256:99860f484f52d9665f33d95659daa8aec5071fa5a97534d40ee4941690ce3e96` |
+| `tr-tr` | Container image with the `tr-TR` locale. | `sha256:170b56107ccb22335422c1838e368c0f5cb4518c3309e6259b754ede9e46ff51` |
+| `zh-cn` | Container image with the `zh-CN` locale. | `sha256:d8721f303ca0b24705c42e8c0f5d20dcafb3d00b278b7c363d1a4c129f5e2cbd` |
+| `zh-hk` | Container image with the `zh-HK` locale. | `sha256:12af9f057acec8231dcdeb1e4037ac53a95957796b5e8dbf48f55db6970a4431` |
+| `zh-tw` | Container image with the `zh-TW` locale. | `sha256:b2c1d333b7718c9cc2708287e388c45abcd28a3e8d7fc3c758cc4b73d2697662` |
| Locale for v2.12.1 | Notes | Digest |
|--|:--|:--|
| `ar-ae` | Container image with the `ar-AE` locale. | `sha256:070b6f390dbe7b81b72845c1c9c83087979e1e330d84d417f39a371298a4d270` |
This container image has the following tags available. You can also find a full
# [Latest version](#tab/current)
-Release note for `1.14.1-amd64-<locale-and-voice>`:
+Release note for `1.15.0-amd64-<locale-and-voice>`:
**Feature**

* Upgrade to latest models.
-| Image Tags | Notes |
-|--|:--|
-| `latest` | Container image with the `en-US` locale and `en-US-AriaRUS` voice. |
-| `1.14.1-amd64-<locale-and-voice>` | Replace `<locale>` with one of the available locales, listed below. For example `1.14.1-amd64-en-us-ariarus`. |
-
-| Locales for v1.14.1 | Notes | Digest |
+| Locales for v1.15.0 | Notes | Digest |
|--|:--|:--|
-| `ar-eg-hoda` | Container image with the `ar-EG` locale and `ar-EG-Hoda` voice. | `sha256:506c4694cb4628aab870d81b53885c4b63f7d167fcc3407dd7a203ab3da6bd9b` |
-| `ar-sa-naayf` | Container image with the `ar-SA` locale and `ar-SA-Naayf` voice. | `sha256:ec6963d01458464eff3ed2be965cbe782c11bd751022ead9d4dad39caa7db4a1` |
-| `bg-bg-ivan` | Container image with the `bg-BG` locale and `bg-BG-Ivan` voice. | `sha256:d296080e707bb20eba7db2473c8caa76c17ded594b8a82e0932a71694ee0f2a9` |
-| `ca-es-herenarus` | Container image with the `ca-ES` locale and `ca-ES-HerenaRUS` voice. | `sha256:80545662ec2dce6949c902351dd29be9778749ee980efc0c78be5074a9e126a8` |
-| `cs-cz-jakub` | Container image with the `cs-CZ` locale and `cs-CZ-Jakub` voice. | `sha256:206773547eadde8e5e396ebac9f7a17e0e20ba6c8a453f7c03c8723689224384` |
-| `da-dk-hellerus` | Container image with the `da-DK` locale and `da-DK-HelleRUS` voice. | `sha256:b5636a23d0d0a9c6f5c93885a1033730bf1f0c12335769fc544bb23f1697ae21` |
-| `de-at-michael` | Container image with the `de-AT` locale and `de-AT-Michael` voice. | `sha256:df6d494145125b1945626834084f8f8d91d7b996edf417e33ec8d9441665cc16` |
-| `de-ch-karsten` | Container image with the `de-CH` locale and `de-CH-Karsten` voice. | `sha256:65a088fa6dc97d60c2d35214af0c90a6e9a33ae2f4082270dcc7961a64e38bfd` |
-| `de-de-heddarus` | Container image with the `de-DE` locale and `de-DE-Hedda` voice. | `sha256:acd5c459d0447aa39e4bf5ed74c7f4fdfa275c3ca0cabc24ee4f110f6500e743` |
-| `de-de-hedda` | Container image with the `de-DE` locale and `de-DE-Hedda` voice. | `sha256:acd5c459d0447aa39e4bf5ed74c7f4fdfa275c3ca0cabc24ee4f110f6500e743` |
-| `de-de-stefan-apollo` | Container image with the `de-DE` locale and `de-DE-Stefan-Apollo` voice. | `sha256:a879c3dff58420b8af5fb955e8cb5727c76f7acddfe89dde298ca0934d72f1aa` |
-| `el-gr-stefanos` | Container image with the `el-GR` locale and `el-GR-Stefanos` voice. | `sha256:50422aa0cd5b58a5e1c4e334e7098f7590f02fbfb392a5d08fde2018577a6cac` |
-| `en-au-catherine` | Container image with the `en-AU` locale and `en-AU-Catherine` voice. | `sha256:68ee93b7e541836fb4df93a6925edc9734a8390765fd10b9541eddb94788128d` |
-| `en-au-hayleyrus` | Container image with the `en-AU` locale and `en-AU-HayleyRUS` voice. | `sha256:b4c6a1580faf6466238060c9e26b2c9bf17da2ee8492f856fceb96e927722c70` |
-| `en-ca-heatherrus` | Container image with the `en-CA` locale and `en-CA-HeatherRUS` voice. | `sha256:1ada3a373ae2e3475c8e1ee9b2a5966ae126376bb5ac0c01e07591b53de5c2e4` |
-| `en-ca-linda` | Container image with the `en-CA` locale and `en-CA-Linda` voice. | `sha256:4989ac096aa8923ef16c823cd3767730dcbea633827d269a1e5dc9206325edcc` |
-| `en-gb-george-apollo` | Container image with the `en-GB` locale and `en-GB-George-Apollo` voice. | `sha256:1fc5a152d99e61823a8d0253ba1c04a79c1a846b5c135e1638695f47d21b936c` |
-| `en-gb-hazelrus` | Container image with the `en-GB` locale and `en-GB-HazelRUS` voice. | `sha256:8814ea674f531e12e0d502cc542afbabf5123107f05792215c81f68a259cd5e8` |
-| `en-gb-susan-apollo` | Container image with the `en-GB` locale and `en-GB-Susan-Apollo` voice. | `sha256:3dd9b566fb592009693159d2c1eeebb034e22124746ee4d20f7b904a04e90a5b` |
-| `en-ie-sean` | Container image with the `en-IE` locale and `en-IE-Sean` voice. | `sha256:a1cddb74a6f14c3f9e3514dbcd64d05406f36e79089ef8217fcb724f8126a3e9` |
-| `en-in-heera-apollo` | Container image with the `en-IN` locale and `en-IN-Heera-Apollo` voice. | `sha256:1f5e27a078dc61d558864b29e060e963fe1cd4e56d5a5c33e943088803f3b3fd` |
-| `en-in-priyarus` | Container image with the `en-IN` locale and `en-IN-PriyaRUS` voice. | `sha256:0f2873c0a80159624960b1d7c3dafa1e60be69f94aa1939bac37bdb941240ba1` |
-| `en-in-ravi-apollo` | Container image with the `en-IN` locale and `en-IN-Ravi-Apollo` voice. | `sha256:338a4c2b0923d44895ebba1d3aed13eef8ec775c911e39ee9acd33b304831db0` |
-| `en-us-aria24krus` | Container image with the `en-US` locale and `en-US-Aria24kRUS` voice. | `sha256:ab856028f3ab7c7af881b4e53fe957bc89d3f8bb1daf7b3376593f845cac1fad` |
-| `en-us-ariarus` | Container image with the `en-US` locale and `en-US-AriaRUS` voice. | `sha256:ab856028f3ab7c7af881b4e53fe957bc89d3f8bb1daf7b3376593f845cac1fad` |
-| `en-us-benjaminrus` | Container image with the `en-US` locale and `en-US-BenjaminRUS` voice. | `sha256:0e4862eb77acb3b3f5c08984ce3605d06e12876b72d5c48dcd86e05461aecff7` |
-| `en-us-guy24krus` | Container image with the `en-US` locale and `en-US-Guy24kRUS` voice. | `sha256:bde0c632722de7093c787c076e73cfcc84ce6afa282fc269a7fb5e3edc5e986a` |
-| `en-us-zirarus` | Container image with the `en-US` locale and `en-US-ZiraRUS` voice. | `sha256:feebe5f990e6713c2a8e3759059553c9b9ec59505449686896bd7ef25d2d4bd8` |
-| `es-es-helenarus` | Container image with the `es-ES` locale and `es-ES-HelenaRUS` voice. | `sha256:84b9517218281c7660f2851e819dc79a003cd2c06adf50341a46293dab3754db` |
-| `es-es-laura-apollo` | Container image with the `es-ES` locale and `es-ES-Laura-Apollo` voice. | `sha256:fbcdd314a1c94b60a338c9a3b352fdb19bc0d64d1e698ae8ca9b30eeb0cc89b0` |
-| `es-es-pablo-apollo` | Container image with the `es-ES` locale and `es-ES-Pablo-Apollo` voice. | `sha256:4d0a3a6f789acbee3cf52e26ce4f2bc7f15a1d51bd4a4187262fbd432a7a0512` |
-| `es-mx-hildarus` | Container image with the `es-MX` locale and `es-MX-HildaRUS` voice. | `sha256:232730b6b1732a6169b024f9513527a01f515b5534ffbe5e6b0ec816c452333b` |
-| `es-mx-raul-apollo` | Container image with the `es-MX` locale and `es-MX-Raul-Apollo` voice. | `sha256:a24417b4e2d2f22c17a6a2ea6ae8acd67386881c1c10e7cb4988a4fc93e06b72` |
-| `fi-fi-heidirus` | Container image with the `fi-FI` locale and `fi-FI-HeidiRUS` voice. | `sha256:24178c994f15ef135453b6417c3866e5cc6e0db4767a0ed70a446fe67d2124de` |
-| `fr-ca-caroline` | Container image with the `fr-CA` locale and `fr-CA-Caroline` voice. | `sha256:3e9b860513a1f0ebfe4280fa7994348305c78fccf00906e1983e1e557b44d455` |
-| `fr-ca-harmonierus` | Container image with the `fr-CA` locale and `fr-CA-HarmonieRUS` voice. | `sha256:3b5a7a1e8a01782e12a1b39f9f2981a3f1798751351251e6d477f4df1b5f4997` |
-| `fr-ch-guillaume` | Container image with the `fr-CH` locale and `fr-CH-Guillaume` voice. | `sha256:b2cbd6b417b42e11d6d64d8a1f26b2f00f398ec2225207dd89043b859712b261` |
-| `fr-fr-hortenserus` | Container image with the `fr-FR` locale and `fr-FR-HortenseRUS` voice. | `sha256:dc2b98bb93526bc95bff551a3dc3869afff041a904022bc3bd2d30b0b7ce1993` |
-| `fr-fr-julie-apollo` | Container image with the `fr-FR` locale and `fr-FR-Julie-Apollo` voice. | `sha256:1af6a1807b4d4d48a1f7229e6e03360d9bb979113bbe4f4590975f9e98f09af1` |
-| `fr-fr-paul-apollo` | Container image with the `fr-FR` locale and `fr-FR-Paul-Apollo` voice. | `sha256:0b5ed83a9a48cba741b5e491926bb5a1e3022eda8660b573e3abb231f3f81b73` |
-| `he-il-asaf` | Container image with the `he-IL` locale and `he-IL-Asaf` voice. | `sha256:5f2307252f16876be05545581f1698c8a8834c4b462db76c151400c538f1aff4` |
-| `hi-in-hemant` | Container image with the `hi-IN` locale and `hi-IN-Hemant` voice. | `sha256:a86d04e0ae19a1ca30ba14a4951e8f8d78c4c27a78378f07e5f37a753e282ea9` |
-| `hi-in-kalpana-apollo` | Container image with the `hi-IN` locale and `hi-IN-Kalpana-Apollo` voice. | `sha256:1e56c468fae9c07c76581a7c7430d9bcc02eeaee5e4657830a2c59649cdfd80c` |
-| `hi-in-kalpana` | Container image with the `hi-IN` locale and `hi-IN-Kalpana` voice. | `sha256:1e56c468fae9c07c76581a7c7430d9bcc02eeaee5e4657830a2c59649cdfd80c` |
-| `hr-hr-matej` | Container image with the `hr-HR` locale and `hr-HR-Matej` voice. | `sha256:7445bc7d1d73c5bb4775de73253b4733fbe53caae93a7bd5093f2cf61dc7f7cd` |
-| `hu-hu-szabolcs` | Container image with the `hu-HU` locale and `hu-HU-Szabolcs` voice. | `sha256:96050684a66cede45f5a757dc6faa45663efcae1739abc820a77a7e171b7733a` |
-| `id-id-andika` | Container image with the `id-ID` locale and `id-ID-Andika` voice. | `sha256:28065b6532a04912cb59104e7d6d1904be3b71b8f45427082825c752c3f1737e` |
-| `it-it-cosimo-apollo` | Container image with the `it-IT` locale and `it-IT-Cosimo-Apollo` voice. | `sha256:ee465ab38a0b9331fdf7a1baeda62b6a368b2aceb10754158e3f14a45b473dfd` |
-| `it-it-luciarus` | Container image with the `it-IT` locale and `it-IT-LuciaRUS` voice. | `sha256:b15a06df122dac510aa9327aa623147435ce2e576ebbe0be1c28ecf19b4f9717` |
-| `ja-jp-ayumi-apollo` | Container image with the `ja-JP` locale and `ja-JP-Ayumi-Apollo` voice. | `sha256:cbced8cfbd556c8a169bfd2da35446787c5f5acd1607083155cf2f8e7ad8b2a2` |
-| `ja-jp-harukarus` | Container image with the `ja-JP` locale and `ja-JP-HarukaRUS` voice. | `sha256:1dda74d78c7227c45720e6aac912053160a65957b43b0b528376dc3f7a8570f6` |
-| `ja-jp-ichiro-apollo` | Container image with the `ja-JP` locale and `ja-JP-Ichiro-Apollo` voice. | `sha256:ffa25c2702b5156e97eb9457085341d035add070d43638e78b0ae9f2f23fe76b` |
-| `ko-kr-heamirus` | Container image with the `ko-KR` locale and `ko-KR-HeamiRUS` voice. | `sha256:f4955991abb31d5814913e49c17535f79b618f3376de75af1feac74ff9430cd5` |
-| `ms-my-rizwan` | Container image with the `ms-MY` locale and `ms-MY-Rizwan` voice. | `sha256:4c4fdfc2c70ae624d69c1435433068efacccd96809e9112a4fcb1f4e52802d00` |
-| `nb-no-huldarus` | Container image with the `nb-NO` locale and `nb-NO-HuldaRUS` voice. | `sha256:080902d1f8f67d018746d3099d2739fc203cf87959912e45352a7525c7b95bb9` |
-| `nl-nl-hannarus` | Container image with the `nl-NL` locale and `nl-NL-HannaRUS` voice. | `sha256:b3c808f060b29485c8a18f5b717f96f4f1d5c724811012cf9ad4654b658b08f6` |
-| `pl-pl-paulinarus` | Container image with the `pl-PL` locale and `pl-PL-PaulinaRUS` voice. | `sha256:f95ded0a8f5dc9bf53f469fcd8c9608fa53ab45b5fdc915f132fff3cb6fcb8e0` |
-| `pt-br-daniel-apollo` | Container image with the `pt-BR` locale and `pt-BR-Daniel-Apollo` voice. | `sha256:da85762763f2a4cf6de112244138aee57235bbfab807e5dd80b76e9fc6703e44` |
-| `pt-br-heloisarus` | Container image with the `pt-BR` locale and `pt-BR-HeloisaRUS` voice. | `sha256:085dd402f070660f2a0a9139b2b09ec7699191533e4b442260364715fd83ff38` |
-| `pt-pt-heliarus` | Container image with the `pt-PT` locale and `pt-PT-HeliaRUS` voice. | `sha256:4cf8270fb836dda947580886891c79d07ccd9cca7cfb19d328fafba9f61d5303` |
-| `ro-ro-andrei` | Container image with the `ro-RO` locale and `ro-RO-Andrei` voice. | `sha256:a11f8da57c87b49145293b1c91e2073f96a70301b839e9d9848fdd1a2a164aed` |
-| `ru-ru-ekaterinarus` | Container image with the `ru-RU` locale and `ru-RU-EkaterinaRUS` voice. | `sha256:e6619b9518029ba9e19d6b98dbe1b79c676c135248c32c9a3c3c2e3edb56efc7` |
-| `ru-ru-irina-apollo` | Container image with the `ru-RU` locale and `ru-RU-Irina-Apollo` voice. | `sha256:04ecb7975978c004fbe2960e74d71b9d1fdfbaea904f1104f519f43351dc77e5` |
-| `ru-ru-pavel-apollo` | Container image with the `ru-RU` locale and `ru-RU-Pavel-Apollo` voice. | `sha256:c7fe3fc2fd40891e51fe00c3bbbf5386b7400cee6091956ad08fa974fe7518d7` |
-| `sk-sk-filip` | Container image with the `sk-SK` locale and `sk-SK-Filip` voice. | `sha256:e7624a3f3521a663bfd96f30904f722b16c6b2523fa2d150a578311c2abfe7b1` |
-| `sl-si-lado` | Container image with the `sl-SI` locale and `sl-SI-Lado` voice. | `sha256:898ab51ca3e6697b39391fdc34d76f79cea6a40dc53f9fb16ae9241e09eeaec1` |
-| `sv-se-hedvigrus` | Container image with the `sv-SE` locale and `sv-SE-HedvigRUS` voice. | `sha256:7aba595a1b4994dfb2002bc7c56e1dc94d92bb3e49ba9024ef2ebd8614deb24d` |
-| `ta-in-valluvar` | Container image with the `ta-IN` locale and `ta-IN-Valluvar` voice. | `sha256:850f8b7e23434c01fd3c901549bf00e541f0e86f96e75ed22531036acc899418` |
-| `te-in-chitra` | Container image with the `te-IN` locale and `te-IN-Chitra` voice. | `sha256:cc155a9aba2e1f4786702b570608c4aa344fddaba9bd6f3d705a2cc8d5990b37` |
-| `th-th-pattara` | Container image with the `th-TH` locale and `th-TH-Pattara` voice. | `sha256:3c0c5b6ea14b697219420730f195553ac691ff69cb65a7aecb3df2e35de2f3b8` |
-| `tr-tr-sedarus` | Container image with the `tr-TR` locale and `tr-TR-SedaRUS` voice. | `sha256:ee98a8a4e5ccd68ca0fe7c485a7595f4b62930ee2a13cc85e3c5486954a18c4c` |
-| `vi-vn-an` | Container image with the `vi-VN` locale and `vi-VN-An` voice. | `sha256:2bfa898d787863b7ec55421b8d21db7b2ba89c904a95705573a02bb43b2226de` |
-| `zh-cn-huihuirus` | Container image with the `zh-CN` locale and `zh-CN-HuihuiRUS` voice. | `sha256:f5afefbd54a45418fbffa6f272e2dc8651fbd06276ce7d4ecf2e50ea1b947b12` |
-| `zh-cn-kangkang-apollo` | Container image with the `zh-CN` locale and `zh-CN-Kangkang-Apollo` voice. | `sha256:fc314d3e4729ec77b2cfdb1408d3aeed7f6d17b7e3c353e4cfc31fc9712eccd3` |
-| `zh-cn-yaoyao-apollo` | Container image with the `zh-CN` locale and `zh-CN-Yaoyao-Apollo` voice. | `sha256:102c47ff3b91c7106cf116f86dad5814a2d893672fa833d082d30ae500df8112` |
-| `zh-hk-danny-apollo` | Container image with the `zh-HK` locale and `zh-HK-Danny-Apollo` voice. | `sha256:75892d547cc35964fe079efd077e83825c38f43179bee4486e672113ff56d612` |
-| `zh-hk-tracy-apollo` | Container image with the `zh-HK` locale and `zh-HK-Tracy-Apollo` voice. | `sha256:e7cf6d4d0d7509c829a39cceac03f1f97e2f0f496bc1193d2291cac6ce08a007` |
-| `zh-hk-tracyrus` | Container image with the `zh-HK` locale and `zh-HK-TracyRUS` voice. | `sha256:e7cf6d4d0d7509c829a39cceac03f1f97e2f0f496bc1193d2291cac6ce08a007` |
-| `zh-tw-hanhanrus` | Container image with the `zh-TW` locale and `zh-TW-HanHanRUS` voice. | `sha256:6d9c790d7a322dd6dc56512d008055e72863b9fa5c01a5bd074de79227d45093` |
-| `zh-tw-yating-apollo` | Container image with the `zh-TW` locale and `zh-TW-Yating-Apollo` voice. | `sha256:acf24aca14e04a4120f9fd71c5eadd9e1f61e61c835e5482249dae2a1546ee02` |
-| `zh-tw-zhiwei-apollo` | Container image with the `zh-TW` locale and `zh-TW-Zhiwei-Apollo` voice. | `sha256:90767a1712dc74a9a3d1c73d5613c088d2d28034a2d8430e4cfd7062478dbd29` |
+| `ar-eg-hoda` | Container image with the `ar-EG` locale and `ar-EG-Hoda` voice. | `sha256:61a154451bfef9766235f85fc7ca3698151244b04bf32cfc5a47a04b9c08f8e4` |
+| `ar-sa-naayf` | Container image with the `ar-SA` locale and `ar-SA-Naayf` voice. | `sha256:13cf045d959ce9362adfad114d8997e628f5e0d08e6e86a86e733967372e5e2d` |
+| `bg-bg-ivan` | Container image with the `bg-BG` locale and `bg-BG-Ivan` voice. | `sha256:19f8c32f6723470c14c4b1731ff256853ee5c441a95a89faff767c2c4e4447a9` |
+| `ca-es-herenarus` | Container image with the `ca-ES` locale and `ca-ES-HerenaRUS` voice. | `sha256:16835388036906af8b35238f05b7f17308b8fae92bf4c89199dcc0b35bb289d6` |
+| `cs-cz-jakub` | Container image with the `cs-CZ` locale and `cs-CZ-Jakub` voice. | `sha256:06af13ede8234c14f8a48b956017cd7858a1c0d984042a9a60309ae9f8f6a25b` |
+| `da-dk-hellerus` | Container image with the `da-DK` locale and `da-DK-HelleRUS` voice. | `sha256:1c6375ee05948ec9a9b2554e2423e2c2d68e7595f58d401bd2f9fc25bd512bde` |
+| `de-at-michael` | Container image with the `de-AT` locale and `de-AT-Michael` voice. | `sha256:27e88c817ab91b2a4dbb5df1f88828708445993c1d657d974b6253f1820e280f` |
+| `de-ch-karsten` | Container image with the `de-CH` locale and `de-CH-Karsten` voice. | `sha256:6b8ce6192783c1b158410a43a8fd9517cfe63c8b4a3cd0f1118acd891e7ebea5` |
+| `de-de-heddarus` | Container image with the `de-DE` locale and `de-DE-Hedda` voice. | `sha256:28e1a6f0860165a4f3750b059334117240e0613ddf44d1e3c41615093bd3e226` |
+| `de-de-hedda` | Container image with the `de-DE` locale and `de-DE-Hedda` voice. | `sha256:28e1a6f0860165a4f3750b059334117240e0613ddf44d1e3c41615093bd3e226` |
+| `de-de-stefan-apollo` | Container image with the `de-DE` locale and `de-DE-Stefan-Apollo` voice. | `sha256:3730dfefb60f3a74df523e790738595b29e3dc694a16506a6deccffec264aa2a` |
+| `el-gr-stefanos` | Container image with the `el-GR` locale and `el-GR-Stefanos` voice. | `sha256:dfce427d7c08bd26d38513fd4b5c85662fe4feeddefa75e1245c37bb5b245b45` |
+| `en-au-catherine` | Container image with the `en-AU` locale and `en-AU-Catherine` voice. | `sha256:71a9a64adc48044e2ce81119bc118056a906db284311fc3761b3cdfe21c6ad18` |
+| `en-au-hayleyrus` | Container image with the `en-AU` locale and `en-AU-HayleyRUS` voice. | `sha256:a42624ebf51afff052a0ed8518f474855d70b4a9245cd8e81492b449e6b765d1` |
+| `en-ca-heatherrus` | Container image with the `en-CA` locale and `en-CA-HeatherRUS` voice. | `sha256:b5f745bbf9de83f57ac4e6e2760049e10a8eaae362018c4d5a4ace02a50710dc` |
+| `en-ca-linda` | Container image with the `en-CA` locale and `en-CA-Linda` voice. | `sha256:6638a92b495c76ca16331c652b123fa52163242cfbd8f8298c9118a0f1261719` |
+| `en-gb-george-apollo` | Container image with the `en-GB` locale and `en-GB-George-Apollo` voice. | `sha256:748c7042dfa3107f387c34ee29269fc2bd96f27af525f2dc7b50275dae106bd1` |
+| `en-gb-hazelrus` | Container image with the `en-GB` locale and `en-GB-HazelRUS` voice. | `sha256:8a439470579f95645bf5831ee5f0643b872d6bdbd7426cf57264bb5a13c12624` |
+| `en-gb-susan-apollo` | Container image with the `en-GB` locale and `en-GB-Susan-Apollo` voice. | `sha256:ad9c5741a2b19fc936ec740fa0bbd2700e09e361d7ce9df0bb5fb204f6c31ec5` |
+| `en-ie-sean` | Container image with the `en-IE` locale and `en-IE-Sean` voice. | `sha256:45d1d07f67c81b11f7b239f0e46bd229694d0e795b01e72e583b2aecf671af3e` |
+| `en-in-heera-apollo` | Container image with the `en-IN` locale and `en-IN-Heera-Apollo` voice. | `sha256:f4cd71fac26b0d1f0693ce91535a0fd14ac90e323c6f9d8239f3eb7a196ff454` |
+| `en-in-priyarus` | Container image with the `en-IN` locale and `en-IN-PriyaRUS` voice. | `sha256:5a228190a5fe62aaa5f8443ab4041d2a7af381e30236333a44c364e990eeaba4` |
+| `en-in-ravi-apollo` | Container image with the `en-IN` locale and `en-IN-Ravi-Apollo` voice. | `sha256:3f477ad93ff643f90adf268775c9b8cd8fb3b2cadf347b3663317184c4e462c6` |
+| `en-us-aria24krus` | Container image with the `en-US` locale and `en-US-Aria24kRUS` voice. | `sha256:d4ece3a336171cd46068831b3203460c86e5cd7f053b56a8a7017a0547580030` |
+| `en-us-ariarus` | Container image with the `en-US` locale and `en-US-AriaRUS` voice. | `sha256:d4ece3a336171cd46068831b3203460c86e5cd7f053b56a8a7017a0547580030` |
+| `en-us-benjaminrus` | Container image with the `en-US` locale and `en-US-BenjaminRUS` voice. | `sha256:f668eb749ee51c01bcadf0df8e1a0b6fc000fb64a93bd12458bcff4e817bd4cf` |
+| `en-us-guy24krus` | Container image with the `en-US` locale and `en-US-Guy24kRUS` voice. | `sha256:50900ece25a078bc4e0a0fec845cc9516e975a7b90621e4fdea135c16b593752` |
+| `en-us-zirarus` | Container image with the `en-US` locale and `en-US-ZiraRUS` voice. | `sha256:772bdc81780a05f7400a88b1cddcef6ef0be153ce873df23918986f72920aa41` |
+| `es-es-helenarus` | Container image with the `es-ES` locale and `es-ES-HelenaRUS` voice. | `sha256:ab25fc60c8a8e095fcf63fe953bd2acf1f0569e6aafb02e90da916f7ef1905ce` |
+| `es-es-laura-apollo` | Container image with the `es-ES` locale and `es-ES-Laura-Apollo` voice. | `sha256:11c144693d62b28e1444378638295e801c07888fd6ff70903bdbb775a8cd4c7a` |
+| `es-es-pablo-apollo` | Container image with the `es-ES` locale and `es-ES-Pablo-Apollo` voice. | `sha256:56db18adc44ee4412fd64f2d9303960525627ecf9b6cd6c201d5260af5340378` |
+| `es-mx-hildarus` | Container image with the `es-MX` locale and `es-MX-HildaRUS` voice. | `sha256:80ad68c2ca58380ca3d88e509ad32a21f70ecc41fab701f629e2de509162bf61` |
+| `es-mx-raul-apollo` | Container image with the `es-MX` locale and `es-MX-Raul-Apollo` voice. | `sha256:fd51cdcc46ac5c81949d7ff3ceeacf7144fb6e516089fff645b64b9159269488` |
+| `fi-fi-heidirus` | Container image with the `fi-FI` locale and `fi-FI-HeidiRUS` voice. | `sha256:0ba17a99d35d4963110316d6bb7742082d0362f23490790bb8a8142f459ed143` |
+| `fr-ca-caroline` | Container image with the `fr-CA` locale and `fr-CA-Caroline` voice. | `sha256:67304f764165b34c051104d8ef51202dcbaafcf3b88d5568ac41b54ecf820563` |
+| `fr-ca-harmonierus` | Container image with the `fr-CA` locale and `fr-CA-HarmonieRUS` voice. | `sha256:9b428ec672b60e8e6f9642cc5f23741e84df5e68477bb5fd4fdee4222e401d47` |
+| `fr-ch-guillaume` | Container image with the `fr-CH` locale and `fr-CH-Guillaume` voice. | `sha256:d3fedebf0321f9135335be369fec84be42a3653977f0834c6b5fda3fefeab81e` |
+| `fr-fr-hortenserus` | Container image with the `fr-FR` locale and `fr-FR-HortenseRUS` voice. | `sha256:2d33762773d299ffd37a3103b3c32ce8d1b7f3f107daf6514be4006cfbc8fd47` |
+| `fr-fr-julie-apollo` | Container image with the `fr-FR` locale and `fr-FR-Julie-Apollo` voice. | `sha256:54f762a2d68cc8a33049b18085fac44f5bad1750a1d85347d5174550fe2c2798` |
+| `fr-fr-paul-apollo` | Container image with the `fr-FR` locale and `fr-FR-Paul-Apollo` voice. | `sha256:7d3e4a75495be2c503f55596d39a5bdfe75538b453a5fb7edb7d17e0c036f3f0` |
+| `he-il-asaf` | Container image with the `he-IL` locale and `he-IL-Asaf` voice. | `sha256:729bd1c6128ee059e89d04e2e2fd5cd925e59550014b901bf5ac0b7cd44e9fa4` |
+| `hi-in-hemant` | Container image with the `hi-IN` locale and `hi-IN-Hemant` voice. | `sha256:9ed035183c7c2a0debe44dc6bae67d097334b0be8f5bec643b7e320c534b7cb2` |
+| `hi-in-kalpana-apollo` | Container image with the `hi-IN` locale and `hi-IN-Kalpana-Apollo` voice. | `sha256:f043d625788fd61bba7454a64502572a2e4fed310775c371c71db3c0fcf6aa01` |
+| `hi-in-kalpana` | Container image with the `hi-IN` locale and `hi-IN-Kalpana` voice. | `sha256:f043d625788fd61bba7454a64502572a2e4fed310775c371c71db3c0fcf6aa01` |
+| `hr-hr-matej` | Container image with the `hr-HR` locale and `hr-HR-Matej` voice. | `sha256:a320245b93af76b125386f4566383ec6e13a21c951a8468d1f0f87e800b79bb6` |
+| `hu-hu-szabolcs` | Container image with the `hu-HU` locale and `hu-HU-Szabolcs` voice. | `sha256:94d86ae188bb08df0192de4221404132d631cae6aa6d4fc4bfc0ffcce8f68d89` |
+| `id-id-andika` | Container image with the `id-ID` locale and `id-ID-Andika` voice. | `sha256:8fee6f6d8552fae0ce050765ea5c842497a699f5feb700f705c506dab3bac4a6` |
+| `it-it-cosimo-apollo` | Container image with the `it-IT` locale and `it-IT-Cosimo-Apollo` voice. | `sha256:1d99f0f538e0d61b527fbc77f9281e0f932bac7e6ba513b13ecfc734bd95f44d` |
+| `it-it-luciarus` | Container image with the `it-IT` locale and `it-IT-LuciaRUS` voice. | `sha256:99db33a668e298c58be1c50b9d4b84aeb0949f0334187b02167cfa3044997993` |
+| `ja-jp-ayumi-apollo` | Container image with the `ja-JP` locale and `ja-JP-Ayumi-Apollo` voice. | `sha256:50d1e986d318692917968654008466fc3cca4911c3bcd36af67f37e91de18fe2` |
+| `ja-jp-harukarus` | Container image with the `ja-JP` locale and `ja-JP-HarukaRUS` voice. | `sha256:7736a87dcf3595056bb558c6cb38094d1732bb164406a99d87c0ac09c8eee271` |
+| `ja-jp-ichiro-apollo` | Container image with the `ja-JP` locale and `ja-JP-Ichiro-Apollo` voice. | `sha256:6ce704a51150e0ee092f2197ba7cf4bcbf8473e5cd56a9a0839ad81d87b2dfe2` |
+| `ko-kr-heamirus` | Container image with the `ko-KR` locale and `ko-KR-HeamiRUS` voice. | `sha256:ec5d75470dbae50cb5bc2f93ed642e40446b099cb2302499b3a83b3a27358bd0` |
+| `ms-my-rizwan` | Container image with the `ms-MY` locale and `ms-MY-Rizwan` voice. | `sha256:e572b62f0b4153382318266dcd59d6e92daf8acc6f323e461d517d34f9be45dd` |
+| `nb-no-huldarus` | Container image with the `nb-NO` locale and `nb-NO-HuldaRUS` voice. | `sha256:691ef2ead95a0d4703cd6064bac9355e86a361fcffe5ad36a78e9f1e1c78739c` |
+| `nl-nl-hannarus` | Container image with the `nl-NL` locale and `nl-NL-HannaRUS` voice. | `sha256:f52a717d4d8b7db39b18c9a9e448e2e6d6e19600093518002a6fc03f0b2a57c9` |
+| `pl-pl-paulinarus` | Container image with the `pl-PL` locale and `pl-PL-PaulinaRUS` voice. | `sha256:1927ff28b40b7c37ee1b8d5f4efb2fd7d905affd35c27983940c7e5795763c70` |
+| `pt-br-daniel-apollo` | Container image with the `pt-BR` locale and `pt-BR-Daniel-Apollo` voice. | `sha256:ebce3b7b51fb28fce4c446fbbf3607f4307b1cec3f9fa7abdd046839a259e91d` |
+| `pt-br-heloisarus` | Container image with the `pt-BR` locale and `pt-BR-HeloisaRUS` voice. | `sha256:195e719735768fdf6ea2f1fc829a40cae5af4d35b62e52d1c798e680f915dd12` |
+| `pt-pt-heliarus` | Container image with the `pt-PT` locale and `pt-PT-HeliaRUS` voice. | `sha256:f0ea6ec57615a55b13f491e6f96b3cc0e29092f63a981fd29771bcfa2b26c0e1` |
+| `ro-ro-andrei` | Container image with the `ro-RO` locale and `ro-RO-Andrei` voice. | `sha256:deee319f2b6d8145f3ed567cfcdfa2ca718cd1b408f8d9fbf15f90d02d5b6b35` |
+| `ru-ru-ekaterinarus` | Container image with the `ru-RU` locale and `ru-RU-EkaterinaRUS` voice. | `sha256:d0005c1363e197c0f85180a07d650655b473117de12170a631f3049d99f86581` |
+| `ru-ru-irina-apollo` | Container image with the `ru-RU` locale and `ru-RU-Irina-Apollo` voice. | `sha256:53731218ed6e2bed2227c25a2a2e1d528a19dbc078e2af55aa959d191df50487` |
+| `ru-ru-pavel-apollo` | Container image with the `ru-RU` locale and `ru-RU-Pavel-Apollo` voice. | `sha256:81b2a56f72460a780466337136729b011ef1eac4689b1ec9edbbd980b53ba6c3` |
+| `sk-sk-filip` | Container image with the `sk-SK` locale and `sk-SK-Filip` voice. | `sha256:e3d44c7ac30b1b9b186eaf1761ccadd89b17fcb4d4f63e1dab246a80093967f3` |
+| `sl-si-lado` | Container image with the `sl-SI` locale and `sl-SI-Lado` voice. | `sha256:8ecb2b3d0c60f4c88522090d24e55d84a6132b751d71b41a3d1ebbae78fc3b2b` |
+| `sv-se-hedvigrus` | Container image with the `sv-SE` locale and `sv-SE-HedvigRUS` voice. | `sha256:5b61e4ebe696e7cee23403ec4aed299cbf4874c0eeb5a163a82ba0ba752b78a8` |
+| `ta-in-valluvar` | Container image with the `ta-IN` locale and `ta-IN-Valluvar` voice. | `sha256:adf3c421feb6385ba3acb241750d909a42f41d09b5ebbc66dbb50dac84ef5638` |
+| `te-in-chitra` | Container image with the `te-IN` locale and `te-IN-Chitra` voice. | `sha256:e9fc71faf37ca890a82e29bec29b6cfd94299e2d78aaed8c98bc09add2522e2d` |
+| `th-th-pattara` | Container image with the `th-TH` locale and `th-TH-Pattara` voice. | `sha256:b02cc2b23a7d1ec2f3f2d3917a51316fb009597d5d9606b5f129968c35c365f6` |
+| `tr-tr-sedarus` | Container image with the `tr-TR` locale and `tr-TR-SedaRUS` voice. | `sha256:961773f7f544cc0643590f4ed44d40f12e3fa23e44834afd199e261651b702ae` |
+| `vi-vn-an` | Container image with the `vi-VN` locale and `vi-VN-An` voice. | `sha256:f1fdda1c758a4361d2fb594f02d47be7cf88571e5a51fb845b1b00bf0b89d20e` |
+| `zh-cn-huihuirus` | Container image with the `zh-CN` locale and `zh-CN-HuihuiRUS` voice. | `sha256:183125591097ab157bf57088fae3a8ab0af4472cabd3d1c7bdaba51748e73342` |
+| `zh-cn-kangkang-apollo` | Container image with the `zh-CN` locale and `zh-CN-Kangkang-Apollo` voice. | `sha256:72a77502eb91ebf407bfbfb068b442e1c281da33814e042b026973af2d8d42e0` |
+| `zh-cn-yaoyao-apollo` | Container image with the `zh-CN` locale and `zh-CN-Yaoyao-Apollo` voice. | `sha256:9a202b3172def1a35553d7adf5298af71b44dde10ee261752b057b3dcc39ddea` |
+| `zh-hk-danny-apollo` | Container image with the `zh-HK` locale and `zh-HK-Danny-Apollo` voice. | `sha256:9bbba04f272231084b9c87d668e5a71ab7f61d464eeaab50d44a3f2121874524` |
+| `zh-hk-tracy-apollo` | Container image with the `zh-HK` locale and `zh-HK-Tracy-Apollo` voice. | `sha256:048d335ea90493fde6ccce8715925e472fddb405c3208bba5ac751bfdf85b254` |
+| `zh-hk-tracyrus` | Container image with the `zh-HK` locale and `zh-HK-TracyRUS` voice. | `sha256:048d335ea90493fde6ccce8715925e472fddb405c3208bba5ac751bfdf85b254` |
+| `zh-tw-hanhanrus` | Container image with the `zh-TW` locale and `zh-TW-HanHanRUS` voice. | `sha256:fe30bb665c416d0a6cc3547425e1736802d7527eebdd919ee4ed66989ebc368b` |
+| `zh-tw-yating-apollo` | Container image with the `zh-TW` locale and `zh-TW-Yating-Apollo` voice. | `sha256:6308d4e4302d02bbb4043ec6cceb6e574b7e156a5d774bef095be6c34af7194c` |
+| `zh-tw-zhiwei-apollo` | Container image with the `zh-TW` locale and `zh-TW-Zhiwei-Apollo` voice. | `sha256:e40dda8b5e9313a5962c260c1e9eb410b19e60fa74062ad0691455dc8442a4d9` |
# [Previous version](#tab/previous)
+Release note for `1.14.1-amd64-<locale-and-voice>`:
+
+**Feature**
+* Upgrade to latest models.
Release note for `1.13.0-amd64-<locale-and-voice>`:

**Feature**
This container image has the following tags available. You can also find a full list of tags on the Microsoft Container Registry (MCR).
# [Latest version](#tab/current)
-Release notes for `v1.8.0`:
-Regular monthly release
+Release notes for `v1.9.0`:
+* Added 1 new `en-GB` voice and 9 new `zh-CN` voices (4 in preview).
| Image Tags | Notes |
||:|
| `latest` | Container image with the `en-US` locale and `en-US-AriaNeural` voice. |
-| `1.8.0-amd64-<locale-and-voice>` | Replace `<locale>` with one of the available locales, listed below. For example `1.8.0-amd64-en-us-arianeural`. |
+| `1.9.0-amd64-<locale-and-voice>` | Replace `<locale>` with one of the available locales, listed below. For example `1.9.0-amd64-en-us-arianeural`. |
-| v1.8.0 Locales and voices | Notes |
+| v1.9.0 Locales and voices | Notes |
|-|:|
| `de-de-conradneural` | Container image with the `de-DE` locale and `de-DE-ConradNeural` voice. |
| `de-de-katjaneural` | Container image with the `de-DE` locale and `de-DE-KatjaNeural` voice. |
Regular monthly release
| `en-ca-claraneural` | Container image with the `en-CA` locale and `en-CA-ClaraNeural` voice. |
| `en-ca-liamneural` | Container image with the `en-CA` locale and `en-CA-LiamNeural` voice. |
| `en-gb-libbyneural` | Container image with the `en-GB` locale and `en-GB-LibbyNeural` voice. |
-| `en-gb-mianeural` | Container image with the `en-GB` locale and `en-GB-MiaNeural` voice. |
| `en-gb-ryanneural` | Container image with the `en-GB` locale and `en-GB-RyanNeural` voice. |
+| `en-gb-sonianeural` | Container image with the `en-GB` locale and `en-GB-SoniaNeural` voice. |
| `en-us-arianeural` | Container image with the `en-US` locale and `en-US-AriaNeural` voice. |
| `en-us-guyneural` | Container image with the `en-US` locale and `en-US-GuyNeural` voice. |
| `en-us-jennyneural` | Container image with the `en-US` locale and `en-US-JennyNeural` voice. |
Regular monthly release
| `zh-cn-xiaoyouneural` | Container image with the `zh-CN` locale and `zh-CN-XiaoYouNeural` voice. |
| `zh-cn-yunyangneural` | Container image with the `zh-CN` locale and `zh-CN-YunYangNeural` voice. |
| `zh-cn-yunyeneural` | Container image with the `zh-CN` locale and `zh-CN-YunYeNeural` voice. |
+| `zh-cn-xiaochenneural-preview` | Container image with the `zh-CN` locale and `zh-CN-XiaoChenNeural` voice. |
+| `zh-cn-xiaohanneural` | Container image with the `zh-CN` locale and `zh-CN-XiaoHanNeural` voice. |
+| `zh-cn-xiaomoneural` | Container image with the `zh-CN` locale and `zh-CN-XiaoMoNeural` voice. |
+| `zh-cn-xiaoqiuneural-preview` | Container image with the `zh-CN` locale and `zh-CN-XiaoQiuNeural` voice. |
+| `zh-cn-xiaoruineural` | Container image with the `zh-CN` locale and `zh-CN-XiaoRuiNeural` voice. |
+| `zh-cn-xiaoshuangneural-preview` | Container image with the `zh-CN` locale and `zh-CN-XiaoShuangNeural` voice. |
+| `zh-cn-xiaoxuanneural` | Container image with the `zh-CN` locale and `zh-CN-XiaoXuanNeural` voice. |
+| `zh-cn-xiaoyanneural-preview` | Container image with the `zh-CN` locale and `zh-CN-XiaoYanNeural` voice. |
+| `zh-cn-yunxineural` | Container image with the `zh-CN` locale and `zh-CN-YunXiNeural` voice. |
# [Previous version](#tab/previous)
+Release notes for `v1.8.0`:
+Regular monthly release
+
Release notes for `v1.7.0`:
* Upgrade to latest models with quality improvements and bug fixes
Release notes for `v1.3.0`:
| Image Tags | Notes |
||:|
+| `1.8.0-amd64-<locale-and-voice>` | Replace `<locale>` with one of the available locales, listed below. For example `1.8.0-amd64-en-us-arianeural`. |
+| `1.7.0-amd64-<locale-and-voice>` | Replace `<locale>` with one of the available locales, listed below. For example `1.7.0-amd64-en-us-arianeural`. |
+| `1.6.0-amd64-<locale-and-voice>` | Replace `<locale>` with one of the available locales, listed below. For example `1.6.0-amd64-en-us-arianeural`. |
| `1.5.0-amd64-<locale-and-voice>` | Replace `<locale>` with one of the available locales, listed below. For example `1.5.0-amd64-en-us-arianeural`. |
| `1.4.0-amd64-<locale-and-voice>` | Replace `<locale>` with one of the available locales, listed below. For example `1.4.0-amd64-en-us-arianeural`. |
| `1.3.0-amd64-<locale-and-voice>-preview` | Replace `<locale>` with one of the available locales, listed below. For example `1.3.0-amd64-en-us-arianeural-preview`. |
| `1.2.0-amd64-<locale-and-voice>-preview` | Replace `<locale>` with one of the available locales, listed below. For example `1.2.0-amd64-en-us-arianeural-preview`. |
+| v1.8.0 Locales and voices | Notes |
+|-|:|
+| `de-de-conradneural` | Container image with the `de-DE` locale and `de-DE-ConradNeural` voice. |
+| `de-de-katjaneural` | Container image with the `de-DE` locale and `de-DE-KatjaNeural` voice. |
+| `en-au-natashaneural` | Container image with the `en-AU` locale and `en-AU-NatashaNeural` voice. |
+| `en-au-williamneural` | Container image with the `en-AU` locale and `en-AU-WilliamNeural` voice. |
+| `en-ca-claraneural` | Container image with the `en-CA` locale and `en-CA-ClaraNeural` voice. |
+| `en-ca-liamneural` | Container image with the `en-CA` locale and `en-CA-LiamNeural` voice. |
+| `en-gb-libbyneural` | Container image with the `en-GB` locale and `en-GB-LibbyNeural` voice. |
+| `en-gb-ryanneural` | Container image with the `en-GB` locale and `en-GB-RyanNeural` voice. |
+| `en-us-arianeural` | Container image with the `en-US` locale and `en-US-AriaNeural` voice. |
+| `en-us-guyneural` | Container image with the `en-US` locale and `en-US-GuyNeural` voice. |
+| `en-us-jennyneural` | Container image with the `en-US` locale and `en-US-JennyNeural` voice. |
+| `es-es-alvaroneural` | Container image with the `es-ES` locale and `es-ES-AlvaroNeural` voice. |
+| `es-es-elviraneural` | Container image with the `es-ES` locale and `es-ES-ElviraNeural` voice. |
+| `es-mx-dalianeural` | Container image with the `es-MX` locale and `es-MX-DaliaNeural` voice. |
+| `es-mx-jorgeneural` | Container image with the `es-MX` locale and `es-MX-JorgeNeural` voice. |
+| `fr-ca-antoineneural` | Container image with the `fr-CA` locale and `fr-CA-AntoineNeural` voice. |
+| `fr-ca-jeanneural` | Container image with the `fr-CA` locale and `fr-CA-JeanNeural` voice. |
+| `fr-ca-sylvieneural` | Container image with the `fr-CA` locale and `fr-CA-SylvieNeural` voice. |
+| `fr-fr-deniseneural` | Container image with the `fr-FR` locale and `fr-FR-DeniseNeural` voice. |
+| `fr-fr-henrineural` | Container image with the `fr-FR` locale and `fr-FR-HenriNeural` voice. |
+| `hi-in-madhurneural` | Container image with the `hi-IN` locale and `hi-IN-MadhurNeural` voice. |
+| `hi-in-swaraneural` | Container image with the `hi-IN` locale and `hi-IN-SwaraNeural` voice. |
+| `it-it-diegoneural` | Container image with the `it-IT` locale and `it-IT-DiegoNeural` voice. |
+| `it-it-elsaneural` | Container image with the `it-IT` locale and `it-IT-ElsaNeural` voice. |
+| `it-it-isabellaneural` | Container image with the `it-IT` locale and `it-IT-IsabellaNeural` voice. |
+| `ja-jp-keitaneural` | Container image with the `ja-JP` locale and `ja-JP-KeitaNeural` voice. |
+| `ja-jp-nanamineural` | Container image with the `ja-JP` locale and `ja-JP-NanamiNeural` voice. |
+| `ko-kr-injoonneural` | Container image with the `ko-KR` locale and `ko-KR-InJoonNeural` voice. |
+| `ko-kr-sunhineural` | Container image with the `ko-KR` locale and `ko-KR-SunHiNeural` voice. |
+| `pt-br-antonioneural` | Container image with the `pt-BR` locale and `pt-BR-AntonioNeural` voice. |
+| `pt-br-franciscaneural` | Container image with the `pt-BR` locale and `pt-BR-FranciscaNeural` voice. |
+| `tr-tr-ahmetneural` | Container image with the `tr-TR` locale and `tr-TR-AhmetNeural` voice. |
+| `tr-tr-emelneural` | Container image with the `tr-TR` locale and `tr-TR-EmelNeural` voice. |
+| `zh-cn-xiaoxiaoneural` | Container image with the `zh-CN` locale and `zh-CN-XiaoxiaoNeural` voice. |
+| `zh-cn-xiaoyouneural` | Container image with the `zh-CN` locale and `zh-CN-XiaoYouNeural` voice. |
+| `zh-cn-yunyangneural` | Container image with the `zh-CN` locale and `zh-CN-YunYangNeural` voice. |
+| `zh-cn-yunyeneural` | Container image with the `zh-CN` locale and `zh-CN-YunYeNeural` voice. |
+
| v1.7.0 Locales and voices | Notes |
|-|:|
| `de-de-conradneural` | Container image with the `de-DE` locale and `de-DE-ConradNeural` voice. |
This container image has the following tags available. You can also find a full list of tags on the Microsoft Container Registry (MCR).
| `3.0-ja` | Sentiment Analysis v3 (Japanese) |
| `3.0-pt` | Sentiment Analysis v3 (Portuguese) |
| `3.0-nl` | Sentiment Analysis v3 (Dutch) |
-| `2.1` | Sentiment Analysis v2 |
+| `2.1` | Sentiment Analysis v2 |
## Text Analytics for health
Release notes for `3.0.015490002-onprem-amd64`:
| Image Tags | Notes |
||:-|
| `latest` | |
-| `3.0.015490002-onprem-amd64` | |
+| `3.0.015490002-onprem-amd64` | |
## Translator
connectors Connectors Native Http https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/connectors-native-http.md
HTTP requests have a [timeout limit](../logic-apps/logic-apps-limits-and-config.
<a name="disable-location-header-check"></a>
+### Set up interval between retry attempts with the Retry-After header
+
+To specify the number of seconds between retry attempts, you can add the `Retry-After` header to the HTTP action response. For example, if the target endpoint returns the `429 - Too many requests` status code, you can specify a longer interval between retries. The `Retry-After` header also works with the `202 - Accepted` status code.
+
+The following example shows an HTTP action response that contains the `Retry-After` header:
+
+```json
+{
+ "statusCode": 429,
+ "headers": {
+ "Retry-After": "300"
+ }
+}
+```
+
+
## Disable checking location headers

Some endpoints, services, systems, or APIs return a `202 ACCEPTED` response that doesn't have a `location` header. To avoid having an HTTP action continually check the request status when the `location` header doesn't exist, you have these options:
cosmos-db Local Emulator Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/local-emulator-release-notes.md
This article shows the Azure Cosmos DB Emulator release notes with a list of feature updates in each release.
## Release notes
+### 2.14.3 (8 September 2021)
+
+ - This release updates the Cosmos Emulator background services to match the latest online functionality of Azure Cosmos DB, fixes a couple of issues with the collected telemetry data, and resets the base image for the Linux Cosmos emulator Docker image.
+
### 2.14.2 (12 August 2021)

 - This release updates the local Data Explorer content to the latest Azure portal version and resets the base image for the Linux Cosmos emulator Docker image.
cosmos-db Migrate Dotnet V3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/migrate-dotnet-v3.md
Title: Migrate your application to use the Azure Cosmos DB .NET SDK 3.0 (com.azure.cosmos)
-description: Learn how to upgrade your existing .NET application from the v2 SDK to the newer .NET SDK v3 (com.azure.cosmos package) for Core (SQL) API.
+ Title: Migrate your application to use the Azure Cosmos DB .NET SDK 3.0 (Microsoft.Azure.Cosmos)
+description: Learn how to upgrade your existing .NET application from the v2 SDK to the newer .NET SDK v3 (Microsoft.Azure.Cosmos package) for Core (SQL) API.
cosmos-db Create Table Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/create-table-dotnet.md
Title: 'Quickstart: Table API with .NET - Azure Cosmos DB'
-description: This quickstart shows how to use the Azure Cosmos DB Table API to create an application with the Azure portal and .NET
-
+description: This quickstart shows how to access the Azure Cosmos DB Table API from a .NET application using the Azure.Data.Tables SDK
+
+ms.devlang: csharp
Previously updated: 05/28/2020
Last updated: 08/25/2021
+# Quickstart: Build a Table API app with .NET SDK and Azure Cosmos DB
-
-# Quickstart: Build a Table API app with .NET SDK and Azure Cosmos DB
[!INCLUDE[appliesto-table-api](../includes/appliesto-table-api.md)]
-> [!div class="op_single_selector"]
-> * [.NET](create-table-dotnet.md)
-> * [Java](create-table-java.md)
-> * [Node.js](create-table-nodejs.md)
-> * [Python](how-to-use-python.md)
->
+This quickstart shows how to access the Azure Cosmos DB [Table API](introduction.md) from a .NET application. The Cosmos DB Table API is a schemaless data store that lets applications store structured NoSQL data in the cloud. Because the data store is schemaless, new properties (columns) are automatically added to the table when an object with a new attribute is inserted.
-This quickstart shows how to use .NET and the Azure Cosmos DB [Table API](introduction.md) to build an app by cloning an example from GitHub. This quickstart also shows you how to create an Azure Cosmos DB account and how to use Data Explorer to create tables and entities in the web-based Azure portal.
+.NET applications can access the Cosmos DB Table API using the [Azure.Data.Tables](https://www.nuget.org/packages/Azure.Data.Tables/) NuGet package. The [Azure.Data.Tables](https://www.nuget.org/packages/Azure.Data.Tables/) package is a [.NET Standard 2.0](/dotnet/standard/net-standard) library that works with both .NET Framework (4.7.2 and later) and .NET Core (2.0 and later) applications.
## Prerequisites
-If you don't already have Visual Studio 2019 installed, you can download and use the **free** [Visual Studio 2019 Community Edition](https://www.visualstudio.com/downloads/). Make sure that you enable **Azure development** during the Visual Studio setup.
+The sample application is written in [.NET Core 3.1](https://dotnet.microsoft.com/download/dotnet/3.1), though the principles apply to both .NET Framework and .NET Core applications. You can use [Visual Studio](https://www.visualstudio.com/downloads/), [Visual Studio for Mac](https://visualstudio.microsoft.com/vs/mac/), or [Visual Studio Code](https://code.visualstudio.com/) as an IDE.
-## Create a database account
+## Sample application
+The sample application for this tutorial may be cloned or downloaded from the repository [https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-dotnet](https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-dotnet). Both a starter and completed app are included in the sample repository.
-## Add a table
+```bash
+git clone https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-dotnet
+```
+The sample application uses weather data as an example to demonstrate the capabilities of the Table API. Objects representing weather observations are stored and retrieved, including objects with additional properties, to demonstrate the schemaless capabilities of the Table API.
-## Add sample data
+## 1 - Create an Azure Cosmos DB account
-## Clone the sample application
+You first need to create a Cosmos DB Table API account that will contain the table(s) used in your application. You can create the account by using the Azure portal, Azure CLI, or Azure PowerShell.
-Now let's clone a Table app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
+### [Azure portal](#tab/azure-portal)
-1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
+Log in to the [Azure portal](https://portal.azure.com/) and follow these steps to create a Cosmos DB account.
- ```bash
- md "C:\git-samples"
- ```
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Create cosmos db account step 1](<./includes/create-table-dotnet/create-cosmos-db-acct-1.md>)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find Cosmos DB accounts in Azure." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-1.png"::: |
+| [!INCLUDE [Create cosmos db account step 2](<./includes/create-table-dotnet/create-cosmos-db-acct-2.md>)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-2-240px.png" alt-text="A screenshot showing the Create button location on the Cosmos DB accounts page in Azure." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-2.png"::: |
+| [!INCLUDE [Create cosmos db account step 3](<./includes/create-table-dotnet/create-cosmos-db-acct-3.md>)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-3-240px.png" alt-text="A screenshot showing the Azure Table option as the correct option to select." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-3.png"::: |
+| [!INCLUDE [Create cosmos db account step 4](<./includes/create-table-dotnet/create-cosmos-db-acct-4.md>)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-4-240px.png" alt-text="A screenshot showing how to fill out the fields on the Cosmos DB Account creation page." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-4.png"::: |
-2. Open a git terminal window, such as git bash, and use the `cd` command to change to the new folder to install the sample app.
+### [Azure CLI](#tab/azure-cli)
- ```bash
- cd "C:\git-samples"
- ```
+Cosmos DB accounts are created using the [az cosmosdb create](/cli/azure/cosmosdb#az_cosmosdb_create) command. You must include the `--capabilities EnableTable` option to enable table storage within your Cosmos DB account. Because all Azure resources must be contained in a resource group, the following code snippet also creates a resource group for the Cosmos DB account.
-3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+Cosmos DB account names must be between 3 and 44 characters in length and may contain only lowercase letters, numbers, and the hyphen (-) character. Cosmos DB account names must also be unique across Azure.
- ```bash
- git clone https://github.com/Azure-Samples/azure-cosmos-table-dotnet-core-getting-started.git
- ```
+Azure CLI commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with the [Azure CLI installed](/cli/azure/install-azure-cli).
-> [!TIP]
-> For a more detailed walkthrough of similar code, see the [Cosmos DB Table API sample](tutorial-develop-table-dotnet.md) article.
+It typically takes several minutes for the Cosmos DB account creation process to complete.
-## Open the sample application in Visual Studio
+```azurecli
+LOCATION='eastus'
+RESOURCE_GROUP_NAME='rg-msdocs-tables-sdk-demo'
+COSMOS_ACCOUNT_NAME='cosmos-msdocs-tables-sdk-demo-123' # change 123 to a unique set of characters for a unique name
+COSMOS_TABLE_NAME='WeatherData'
-1. In Visual Studio, from the **File** menu, choose **Open**, then choose **Project/Solution**.
+az group create \
+ --location $LOCATION \
+ --name $RESOURCE_GROUP_NAME
- :::image type="content" source="media/create-table-dotnet/azure-cosmosdb-open-solution.png" alt-text="Open the solution":::
+az cosmosdb create \
+ --name $COSMOS_ACCOUNT_NAME \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --capabilities EnableTable
+```
-2. Navigate to the folder where you cloned the sample application and open the TableStorage.sln file.
+### [Azure PowerShell](#tab/azure-powershell)
-## Review the code
+Azure Cosmos DB accounts are created using the [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) cmdlet. You must include the `-ApiKind "Table"` option to enable table storage within your Cosmos DB account. Because all Azure resources must be contained in a resource group, the following code snippet also creates a resource group for the Azure Cosmos DB account.
-This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [update the connection string](#update-your-connection-string) section of this doc.
+Azure Cosmos DB account names must be between 3 and 44 characters in length and may contain only lowercase letters, numbers, and the hyphen (-) character. Azure Cosmos DB account names must also be unique across Azure.
-* The following code shows how to create a table within the Azure Storage:
+Azure PowerShell commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with [Azure PowerShell installed](/powershell/azure/install-az-ps).
- :::code language="csharp" source="~/azure-cosmosdb-dotnet-table/CosmosTableSamples/Common.cs" id="CreateTable":::
+It typically takes several minutes for the Cosmos DB account creation process to complete.
-* The following code shows how to insert data into the table:
+```azurepowershell
+$location = 'eastus'
+$resourceGroupName = 'rg-msdocs-tables-sdk-demo'
+$cosmosAccountName = 'cosmos-msdocs-tables-sdk-demo-123' # change 123 to a unique set of characters for a unique name
- :::code language="csharp" source="~/azure-cosmosdb-dotnet-table/CosmosTableSamples/SamplesUtils.cs" id="InsertItem":::
+# Create a resource group
+New-AzResourceGroup `
+ -Location $location `
+ -Name $resourceGroupName
-* The following code shows how to query data from the table:
+# Create an Azure Cosmos DB
+New-AzCosmosDBAccount `
+ -Name $cosmosAccountName `
+ -ResourceGroupName $resourceGroupName `
+ -Location $location `
+ -ApiKind "Table"
+```
- :::code language="csharp" source="~/azure-cosmosdb-dotnet-table/CosmosTableSamples/SamplesUtils.cs" id="QueryData":::
+
-* The following code shows how to delete data from the table:
+## 2 - Create a table
- :::code language="csharp" source="~/azure-cosmosdb-dotnet-table/CosmosTableSamples/SamplesUtils.cs" id="DeleteItem":::
+Next, you need to create a table within your Cosmos DB account for your application to use. Unlike a traditional database, you only need to specify the name of the table, not the properties (columns) in the table. As data is loaded into your table, the properties (columns) will be automatically created as needed.
-## Update your connection string
+### [Azure portal](#tab/azure-portal)
-Now go back to the Azure portal to get your connection string information and copy it into the app. This enables your app to communicate with your hosted database.
+In the [Azure portal](https://portal.azure.com/), complete the following steps to create a table inside your Cosmos DB account.
-1. In the [Azure portal](https://portal.azure.com/), click **Connection String**. Use the copy button on the right side of the window to copy the **PRIMARY CONNECTION STRING**.
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Create cosmos db table step 1](<./includes/create-table-dotnet/create-cosmos-table-1.md>)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-table-api-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find your Cosmos DB account." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-table-api-1.png"::: |
+| [!INCLUDE [Create cosmos db table step 2](<./includes/create-table-dotnet/create-cosmos-table-2.md>)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-table-api-2-240px.png" alt-text="A screenshot showing the location of the Add Table button." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-table-api-2.png"::: |
+| [!INCLUDE [Create cosmos db table step 3](<./includes/create-table-dotnet/create-cosmos-table-3.md>)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-table-api-3-240px.png" alt-text="A screenshot showing the New Table dialog box for a Cosmos DB table." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-table-api-3.png"::: |
- :::image type="content" source="./media/create-table-dotnet/connection-string.png" alt-text="View and copy the PRIMARY CONNECTION STRING in the Connection String pane":::
+### [Azure CLI](#tab/azure-cli)
-2. In Visual Studio, open the **Settings.json** file.
+Tables in Cosmos DB are created using the [az cosmosdb table create](/cli/azure/cosmosdb/table#az_cosmosdb_table_create) command.
-3. Paste the **PRIMARY CONNECTION STRING** from the portal into the StorageConnectionString value. Paste the string inside the quotes.
+```azurecli
+COSMOS_TABLE_NAME='WeatherData'
- ```csharp
- {
- "StorageConnectionString": "<Primary connection string from Azure portal>"
- }
- ```
+az cosmosdb table create \
+ --account-name $COSMOS_ACCOUNT_NAME \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $COSMOS_TABLE_NAME \
+ --throughput 400
+```
-4. Press CTRL+S to save the **Settings.json** file.
+### [Azure PowerShell](#tab/azure-powershell)
-You've now updated your app with all the info it needs to communicate with Azure Cosmos DB.
+Tables in Cosmos DB are created using the [New-AzCosmosDBTable](/powershell/module/az.cosmosdb/new-azcosmosdbtable) cmdlet.
-## Build and deploy the app
+```azurepowershell
+$cosmosTableName = 'WeatherData'
-1. In Visual Studio, right-click on the **CosmosTableSamples** project in **Solution Explorer** and then click **Manage NuGet Packages**.
+# Create the table for the application to use
+New-AzCosmosDBTable `
+ -Name $cosmosTableName `
+ -AccountName $cosmosAccountName `
+ -ResourceGroupName $resourceGroupName
+```
- :::image type="content" source="media/create-table-dotnet/azure-cosmosdb-manage-nuget.png" alt-text="Manage NuGet Packages":::
+
-2. In the NuGet **Browse** box, type Microsoft.Azure.Cosmos.Table. This will find the Cosmos DB Table API client library. Note that this library is currently available for .NET Framework and .NET Standard.
-
- :::image type="content" source="media/create-table-dotnet/azure-cosmosdb-nuget-browse.png" alt-text="NuGet Browse tab":::
+## 3 - Get Cosmos DB connection string
-3. Click **Install** to install the **Microsoft.Azure.Cosmos.Table** library. This installs the Azure Cosmos DB Table API package and all dependencies.
+To access your table(s) in Cosmos DB, your app will need the table connection string for the Cosmos DB account. The connection string can be retrieved using the Azure portal, Azure CLI, or Azure PowerShell.
-4. When you run the entire app, sample data is inserted into the table entity and deleted at the end so you won't see any data inserted if you run the whole sample. However you can insert some breakpoints to view the data. Open BasicSamples.cs file and right-click on line 52, select **Breakpoint**, then select **Insert Breakpoint**. Insert another breakpoint on line 55.
+### [Azure portal](#tab/azure-portal)
- :::image type="content" source="media/create-table-dotnet/azure-cosmosdb-breakpoint.png" alt-text="Add a breakpoint":::
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Get cosmos db table connection string step 1](<./includes/create-table-dotnet/get-cosmos-connection-string-1.md>)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-cosmos-db-table-connection-string-1-240px.png" alt-text="A screenshot showing the location of the connection strings link on the Cosmos DB page." lightbox="./media/create-table-dotnet/azure-portal-cosmos-db-table-connection-string-1.png"::: |
+| [!INCLUDE [Get cosmos db table connection string step 2](<./includes/create-table-dotnet/get-cosmos-connection-string-2.md>)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-cosmos-db-table-connection-string-2-240px.png" alt-text="A screenshot showing which connection string to select and use in your application." lightbox="./media/create-table-dotnet/azure-portal-cosmos-db-table-connection-string-2.png"::: |
-5. Press F5 to run the application. The console window displays the name of the new table database (in this case, demoa13b1) in Azure Cosmos DB.
-
- :::image type="content" source="media/create-table-dotnet/azure-cosmosdb-console.png" alt-text="Console output":::
+### [Azure CLI](#tab/azure-cli)
- When you hit the first breakpoint, go back to Data Explorer in the Azure portal. Click the **Refresh** button, expand the demo* table, and click **Entities**. The **Entities** tab on the right shows the new entity that was added for Walter Harp. Note that the phone number for the new entity is 425-555-0101.
+To get the primary table storage connection string using Azure CLI, use the [az cosmosdb keys list](/cli/azure/cosmosdb/keys#az_cosmosdb_keys_list) command with the option `--type connection-strings`. This command uses a [JMESPath query](https://jmespath.org/) to display only the primary table connection string.
- :::image type="content" source="media/create-table-dotnet/azure-cosmosdb-entity.png" alt-text="New entity":::
-
- If you receive an error that says Settings.json file can't be found when running the project, you can resolve it by adding the following XML entry to the project settings. Right click on CosmosTableSamples, select Edit CosmosTableSamples.csproj and add the following itemGroup:
+```azurecli
+# This gets the primary Table connection string
+az cosmosdb keys list \
+ --type connection-strings \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $COSMOS_ACCOUNT_NAME \
+ --query "connectionStrings[?description=='Primary Table Connection String'].connectionString" \
+ --output tsv
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+To get the primary table storage connection string using Azure PowerShell, use the [Get-AzCosmosDBAccountKey](/powershell/module/az.cosmosdb/get-azcosmosdbaccountkey) cmdlet.
+
+```azurepowershell
+# This gets the primary Table connection string
+ $(Get-AzCosmosDBAccountKey `
+ -ResourceGroupName $resourceGroupName `
+ -Name $cosmosAccountName `
+ -Type "ConnectionStrings")."Primary Table Connection String"
+```
+++
+The connection string for your Cosmos DB account is considered an app secret and must be protected like any other app secret or password. This example uses the [Secret Manager tool](/aspnet/core/security/app-secrets#secret-manager) to store the connection string during development and make it available to the application. The Secret Manager tool can be accessed from either Visual Studio or the .NET CLI.
+
+### [Visual Studio](#tab/visual-studio)
+
+To open the Secret Manager tool from Visual Studio, right-click on the project and select **Manage User Secrets** from the context menu. This will open the *secrets.json* file for the project. Replace the contents of the file with the JSON below, substituting in your Cosmos DB table connection string.
+
+```json
+{
+ "ConnectionStrings": {
+ "CosmosTableApi": "<cosmos db table connection string>"
+ }
+}
+```
+
+### [.NET CLI](#tab/netcore-cli)
+
+To use the Secret Manager, you must first initialize it for your project using the `dotnet user-secrets init` command.
+
+```dotnetcli
+dotnet user-secrets init
+```
+
+Then, use the `dotnet user-secrets set` command to add the Cosmos DB table connection string as a secret.
+
+```dotnetcli
+dotnet user-secrets set "ConnectionStrings:CosmosTableApi" "<cosmos db table connection string>"
+```
+++
+## 4 - Install Azure.Data.Tables NuGet package
- ```csharp
- <ItemGroup>
- <None Update="Settings.json">
- <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
- </None>
- </ItemGroup>
- ```
+To access the Cosmos DB Table API from a .NET application, install the [Azure.Data.Tables](https://www.nuget.org/packages/Azure.Data.Tables) NuGet package.
-6. Close the **Entities** tab in Data Explorer.
+### [Visual Studio](#tab/visual-studio)
+
+```PowerShell
+Install-Package Azure.Data.Tables
+```
+
+### [.NET CLI](#tab/netcore-cli)
+
+```dotnetcli
+dotnet add package Azure.Data.Tables
+```
+++
+## 5 - Configure the Table client in Startup.cs
+
+The Azure SDK communicates with Azure through client objects that execute operations against the service. The [TableClient](/dotnet/api/azure.data.tables.tableclient) class is the object used to communicate with the Cosmos DB Table API.
+
+An application will typically create a single [TableClient](/dotnet/api/azure.data.tables.tableclient) object per table to be used throughout the application. It's recommended to use dependency injection (DI) and register the [TableClient](/dotnet/api/azure.data.tables.tableclient) object as a singleton to accomplish this. For more information about using DI with the Azure SDK, see [Dependency injection with the Azure SDK for .NET](/dotnet/azure/sdk/dependency-injection).
+
+In the `Startup.cs` file of the application, edit the `ConfigureServices()` method to match the following code snippet:
+
+```csharp
+public void ConfigureServices(IServiceCollection services)
+{
+ services.AddRazorPages()
+ .AddMvcOptions(options =>
+ {
+ options.Filters.Add(new ValidationFilter());
+ });
+
+ var connectionString = Configuration.GetConnectionString("CosmosTableApi");
+ services.AddSingleton<TableClient>(new TableClient(connectionString, "WeatherData"));
-7. Press F5 to run the app to the next breakpoint.
+    services.AddSingleton<TableService>();
+}
+```
+
+You will also need to add the following `using` statement at the top of the `Startup.cs` file.
+
+```csharp
+using Azure.Data.Tables;
+```
+
+## 6 - Implement Cosmos DB table operations
+
+All Cosmos DB table operations for the sample app are implemented in the `TableService` class located in the *Services* directory. You will need to import the `Azure` and `Azure.Data.Tables` namespaces at the top of this file to work with objects in the `Azure.Data.Tables` SDK package.
+
+```csharp
+using Azure;
+using Azure.Data.Tables;
+```
+
+At the start of the `TableService` class, add a member variable for the [TableClient](/dotnet/api/azure.data.tables.tableclient) object and a constructor to allow the [TableClient](/dotnet/api/azure.data.tables.tableclient) object to be injected into the class.
+
+```csharp
+private TableClient _tableClient;
+
+public TableService(TableClient tableClient)
+{
+ _tableClient = tableClient;
+}
+```
+
+### Get rows from a table
+
+The [TableClient](/dotnet/api/azure.data.tables.tableclient) class contains a method named [Query](/dotnet/api/azure.data.tables.tableclient.query) that allows you to select rows from the table. Because no parameters are passed to the method in this example, all rows are selected from the table.
+
+The method also takes a generic parameter of type [ITableEntity](/dotnet/api/azure.data.tables.itableentity) that specifies the model class the data will be returned as. In this case, the built-in class [TableEntity](/dotnet/api/azure.data.tables.tableentity) is used, meaning the `Query` method returns a `Pageable<TableEntity>` collection as its results.
- When you hit the breakpoint, switch back to the Azure portal, click **Entities** again to open the **Entities** tab, and note that the phone number has been updated to 425-555-0105.
+```csharp
+public IEnumerable<WeatherDataModel> GetAllRows()
+{
+ Pageable<TableEntity> entities = _tableClient.Query<TableEntity>();
-8. Press F5 to run the app.
-
- The app adds entities for use in an advanced sample app that the Table API currently does not support. The app then deletes the table created by the sample app.
+ return entities.Select(e => MapTableEntityToWeatherDataModel(e));
+}
+```
-9. In the console window, press Enter to end the execution of the app.
+The [TableEntity](/dotnet/api/azure.data.tables.tableentity) class defined in the `Azure.Data.Tables` package has properties for the partition key and row key values in the table. Together, these two values form a unique key for the row in the table. In this example application, the name of the weather station (city) is stored in the partition key, and the date/time of the observation is stored in the row key. All other properties (temperature, humidity, wind speed) are stored in a dictionary in the `TableEntity` object.
+
+It is common practice to map a [TableEntity](/dotnet/api/azure.data.tables.tableentity) object to an object of your own definition. The sample application defines a class `WeatherDataModel` in the *Models* directory for this purpose. This class has properties for the station name and observation date that the partition key and row key map to, providing more meaningful property names for these values. It then uses a dictionary to store all of the other properties on the object. This is a common pattern when working with Table storage, since a row can have any number of arbitrary properties and we want our model objects to capture all of them. This class also exposes members to list the properties it contains.
+
+```csharp
+public class WeatherDataModel
+{
+ // Captures all of the weather data properties -- temp, humidity, wind speed, etc
+ private Dictionary<string, object> _properties = new Dictionary<string, object>();
+
+ public string StationName { get; set; }
+
+ public string ObservationDate { get; set; }
+
+ public DateTimeOffset? Timestamp { get; set; }
+
+ public string Etag { get; set; }
+
+ public object this[string name]
+ {
+        get => (ContainsProperty(name)) ? _properties[name] : null;
+ set => _properties[name] = value;
+ }
+
+    public ICollection<string> PropertyNames => _properties.Keys;
+
+    public int PropertyCount => _properties.Count;
+
+ public bool ContainsProperty(string name) => _properties.ContainsKey(name);
+}
+```
+
+The `MapTableEntityToWeatherDataModel` method is used to map a [TableEntity](/dotnet/api/azure.data.tables.tableentity) object to a `WeatherDataModel` object. The [TableEntity](/dotnet/api/azure.data.tables.tableentity) object contains a [Keys](/dotnet/api/azure.data.tables.tableentity.keys) property that gets all of the property names contained in the table for the object (effectively the column names for this row in the table). The `MapTableEntityToWeatherDataModel` method directly maps the `PartitionKey`, `RowKey`, `Timestamp`, and `Etag` properties, and then uses the `Keys` property to iterate over the remaining properties in the `TableEntity` object and map them to the `WeatherDataModel` object, skipping the properties that have already been mapped directly.
+
+Edit the code in the `MapTableEntityToWeatherDataModel` method to match the following code block.
+
+```csharp
+public WeatherDataModel MapTableEntityToWeatherDataModel(TableEntity entity)
+{
+ WeatherDataModel observation = new WeatherDataModel();
+ observation.StationName = entity.PartitionKey;
+ observation.ObservationDate = entity.RowKey;
+ observation.Timestamp = entity.Timestamp;
+ observation.Etag = entity.ETag.ToString();
+
+ var measurements = entity.Keys.Where(key => !EXCLUDE_TABLE_ENTITY_KEYS.Contains(key));
+ foreach (var key in measurements)
+ {
+ observation[key] = entity[key];
+ }
+ return observation;
+}
+```
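+
+The `EXCLUDE_TABLE_ENTITY_KEYS` collection referenced in the snippet above isn't shown in this article. A minimal sketch of how it might be defined follows; the exact key names are assumptions based on the properties the method maps directly, not the sample's actual definition.
+
+```csharp
+// Hypothetical definition: the keys that MapTableEntityToWeatherDataModel maps
+// directly, and therefore excludes when copying the remaining measurement values.
+private static readonly string[] EXCLUDE_TABLE_ENTITY_KEYS =
+{
+    "PartitionKey",
+    "RowKey",
+    "Timestamp",
+    "odata.etag"
+};
+```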
+
+### Filter rows returned from a table
+
+To filter the rows returned from a table, you can pass an OData-style filter string to the [Query](/dotnet/api/azure.data.tables.tableclient.query) method. For example, if you wanted to get all of the weather readings for Chicago between midnight July 1, 2021 and midnight July 2, 2021 (inclusive), you would pass in the following filter string.
+
+```odata
+PartitionKey eq 'Chicago' and RowKey ge '2021-07-01 12:00 AM' and RowKey le '2021-07-02 12:00 AM'
+```
+
+You can view all OData filter operators on the OData website in the section [Filter System Query Option](https://www.odata.org/documentation/odata-version-2-0/uri-conventions/).
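+
+As a quick illustration, a filter string like the one above can be passed directly to the `Query` method. This is a minimal sketch using the `_tableClient` member shown earlier; the variable names are illustrative only.
+
+```csharp
+// Select only the Chicago readings for July 1, 2021 (inclusive)
+string filter = "PartitionKey eq 'Chicago' and RowKey ge '2021-07-01 12:00 AM' and RowKey le '2021-07-02 12:00 AM'";
+Pageable<TableEntity> chicagoEntities = _tableClient.Query<TableEntity>(filter);
+```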
+
+In the example application, the `FilterResultsInputModel` object is designed to capture any filter criteria provided by the user.
+
+```csharp
+public class FilterResultsInputModel : IValidatableObject
+{
+ public string PartitionKey { get; set; }
+ public string RowKeyDateStart { get; set; }
+ public string RowKeyTimeStart { get; set; }
+ public string RowKeyDateEnd { get; set; }
+ public string RowKeyTimeEnd { get; set; }
+    [Range(-100, 200)]
+    public double? MinTemperature { get; set; }
+    [Range(-100, 200)]
+    public double? MaxTemperature { get; set; }
+    [Range(0, 300)]
+    public double? MinPrecipitation { get; set; }
+    [Range(0, 300)]
+    public double? MaxPrecipitation { get; set; }
+}
+```
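+
+The `Validate` method required by the `IValidatableObject` interface is omitted from the snippet above. A minimal sketch of what it might look like follows; the specific cross-field checks are assumptions, not the sample's actual rules, and the method requires `using System.ComponentModel.DataAnnotations;`.
+
+```csharp
+public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
+{
+    // Hypothetical cross-field checks: minimums must not exceed maximums
+    if (MinTemperature.HasValue && MaxTemperature.HasValue && MinTemperature > MaxTemperature)
+        yield return new ValidationResult("MinTemperature cannot be greater than MaxTemperature.");
+    if (MinPrecipitation.HasValue && MaxPrecipitation.HasValue && MinPrecipitation > MaxPrecipitation)
+        yield return new ValidationResult("MinPrecipitation cannot be greater than MaxPrecipitation.");
+}
+```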
+
+When this object is passed to the `GetFilteredRows` method in the `TableService` class, it creates a filter string for each non-null property value. It then creates a combined filter string by joining all of the values together with an "and" clause. This combined filter string is passed to the [Query](/dotnet/api/azure.data.tables.tableclient.query) method on the [TableClient](/dotnet/api/azure.data.tables.tableclient) object and only rows matching the filter string will be returned. You can use a similar method in your code to construct suitable filter strings as required by your application.
+
+```csharp
+public IEnumerable<WeatherDataModel> GetFilteredRows(FilterResultsInputModel inputModel)
+{
+ List<string> filters = new List<string>();
+
+ if (!String.IsNullOrEmpty(inputModel.PartitionKey))
+ filters.Add($"PartitionKey eq '{inputModel.PartitionKey}'");
+ if (!String.IsNullOrEmpty(inputModel.RowKeyDateStart) && !String.IsNullOrEmpty(inputModel.RowKeyTimeStart))
+ filters.Add($"RowKey ge '{inputModel.RowKeyDateStart} {inputModel.RowKeyTimeStart}'");
+ if (!String.IsNullOrEmpty(inputModel.RowKeyDateEnd) && !String.IsNullOrEmpty(inputModel.RowKeyTimeEnd))
+ filters.Add($"RowKey le '{inputModel.RowKeyDateEnd} {inputModel.RowKeyTimeEnd}'");
+ if (inputModel.MinTemperature.HasValue)
+ filters.Add($"Temperature ge {inputModel.MinTemperature.Value}");
+ if (inputModel.MaxTemperature.HasValue)
+ filters.Add($"Temperature le {inputModel.MaxTemperature.Value}");
+    if (inputModel.MinPrecipitation.HasValue)
+        filters.Add($"Precipitation ge {inputModel.MinPrecipitation.Value}");
+    if (inputModel.MaxPrecipitation.HasValue)
+        filters.Add($"Precipitation le {inputModel.MaxPrecipitation.Value}");
+
+ string filter = String.Join(" and ", filters);
+ Pageable<TableEntity> entities = _tableClient.Query<TableEntity>(filter);
+
+ return entities.Select(e => MapTableEntityToWeatherDataModel(e));
+}
+```
+
+### Insert data using a TableEntity object
+
+The simplest way to add data to a table is by using a [TableEntity](/dotnet/api/azure.data.tables.tableentity) object. In this example, data is mapped from an input model object to a [TableEntity](/dotnet/api/azure.data.tables.tableentity) object. The properties on the input object representing the weather station name and observation date/time are mapped to the [PartitionKey](/dotnet/api/azure.data.tables.tableentity.partitionkey) and [RowKey](/dotnet/api/azure.data.tables.tableentity.rowkey) properties respectively, which together form a unique key for the row in the table. Then the additional properties on the input model object are mapped to dictionary properties on the `TableEntity` object. Finally, the [AddEntity](/dotnet/api/azure.data.tables.tableclient.addentity) method on the [TableClient](/dotnet/api/azure.data.tables.tableclient) object is used to insert data into the table.
+
+Modify the `InsertTableEntity` method in the example application to contain the following code.
+
+```csharp
+public void InsertTableEntity(WeatherInputModel model)
+{
+ TableEntity entity = new TableEntity();
+ entity.PartitionKey = model.StationName;
+ entity.RowKey = $"{model.ObservationDate} {model.ObservationTime}";
+
+    // The other values are added like items to a dictionary
+ entity["Temperature"] = model.Temperature;
+ entity["Humidity"] = model.Humidity;
+ entity["Barometer"] = model.Barometer;
+ entity["WindDirection"] = model.WindDirection;
+ entity["WindSpeed"] = model.WindSpeed;
+ entity["Precipitation"] = model.Precipitation;
+
+ _tableClient.AddEntity(entity);
+}
+```
+
+### Upsert data using a TableEntity object
+
+If you try to insert a row into a table with a partition key/row key combination that already exists in that table, you will receive an error. For this reason, it is often preferable to use the [UpsertEntity](/dotnet/api/azure.data.tables.tableclient.upsertentity) method instead of the `AddEntity` method when adding rows to a table. If the given partition key/row key combination already exists in the table, the [UpsertEntity](/dotnet/api/azure.data.tables.tableclient.upsertentity) method updates the existing row. Otherwise, the row is added to the table.
+
+```csharp
+public void UpsertTableEntity(WeatherInputModel model)
+{
+ TableEntity entity = new TableEntity();
+ entity.PartitionKey = model.StationName;
+ entity.RowKey = $"{model.ObservationDate} {model.ObservationTime}";
+
+    // The other values are added like items to a dictionary
+ entity["Temperature"] = model.Temperature;
+ entity["Humidity"] = model.Humidity;
+ entity["Barometer"] = model.Barometer;
+ entity["WindDirection"] = model.WindDirection;
+ entity["WindSpeed"] = model.WindSpeed;
+ entity["Precipitation"] = model.Precipitation;
+
+ _tableClient.UpsertEntity(entity);
+}
+```
+
+### Insert or upsert data with variable properties
+
+One of the advantages of using the Cosmos DB Table API is that if an object being loaded to a table contains any new properties, then those properties are automatically added to the table and the values are stored in Cosmos DB. There is no need to run DDL statements like ALTER TABLE to add columns, as in a traditional database.
+
+This model gives your application flexibility when dealing with data sources that may add or modify what data needs to be captured over time or when different inputs provide different data to your application. In the sample application, we can simulate a weather station that sends not just the base weather data but also some additional values. When an object with these new properties is stored in the table for the first time, the corresponding properties (columns) will be automatically added to the table.
+
+In the sample application, the `ExpandableWeatherObject` class is built around an internal dictionary to support any set of properties on the object. This class represents a typical pattern for when an object needs to contain an arbitrary set of properties.
+
+```csharp
+public class ExpandableWeatherObject
+{
+    private Dictionary<string, object> _properties = new Dictionary<string, object>();
+
+ public string StationName { get; set; }
+
+ public string ObservationDate { get; set; }
+
+ public object this[string name]
+ {
+ get => (ContainsProperty(name)) ? _properties[name] : null;
+ set => _properties[name] = value;
+ }
+
+ public ICollection<string> PropertyNames => _properties.Keys;
+
+ public int PropertyCount => _properties.Count;
+
+ public bool ContainsProperty(string name) => _properties.ContainsKey(name);
+}
+```
+
+To insert or upsert such an object using the Table API, map the properties of the expandable object into a [TableEntity](/dotnet/api/azure.data.tables.tableentity) object and use the [AddEntity](/dotnet/api/azure.data.tables.tableclient.addentity) or [UpsertEntity](/dotnet/api/azure.data.tables.tableclient.upsertentity) methods on the [TableClient](/dotnet/api/azure.data.tables.tableclient) object as appropriate.
+
+```csharp
+public void InsertExpandableData(ExpandableWeatherObject weatherObject)
+{
+ TableEntity entity = new TableEntity();
+ entity.PartitionKey = weatherObject.StationName;
+ entity.RowKey = weatherObject.ObservationDate;
+
+ foreach (string propertyName in weatherObject.PropertyNames)
+ {
+ var value = weatherObject[propertyName];
+ entity[propertyName] = value;
+ }
+ _tableClient.AddEntity(entity);
+}
+
+
+public void UpsertExpandableData(ExpandableWeatherObject weatherObject)
+{
+ TableEntity entity = new TableEntity();
+ entity.PartitionKey = weatherObject.StationName;
+ entity.RowKey = weatherObject.ObservationDate;
+
+ foreach (string propertyName in weatherObject.PropertyNames)
+ {
+ var value = weatherObject[propertyName];
+ entity[propertyName] = value;
+ }
+ _tableClient.UpsertEntity(entity);
+}
+
+```
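+As a usage sketch, an `ExpandableWeatherObject` carrying a brand-new property can be stored just like any other reading, and the Table API adds the new column automatically. The station name, date format, property names, and the `tableService` instance here are illustrative assumptions, not values from the sample.
+
+```csharp
+// Hypothetical reading that includes a property not yet present in the table
+var reading = new ExpandableWeatherObject
+{
+    StationName = "Chicago",
+    ObservationDate = "2021-07-01 12:00 AM"
+};
+reading["Temperature"] = 82;
+reading["SolarRadiation"] = 4.3;   // New property (column) is added on first write
+
+tableService.UpsertExpandableData(reading);
+```
+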
+### Update an entity
+
+Entities can be updated by calling the [UpdateEntity](/dotnet/api/azure.data.tables.tableclient.updateentity) method on the [TableClient](/dotnet/api/azure.data.tables.tableclient) object. Because an entity (row) stored using the Table API can contain any arbitrary set of properties, it is often useful to create an update object based around a Dictionary object, similar to the `ExpandableWeatherObject` discussed earlier. In this case, the only difference is the addition of an `Etag` property, which is used for concurrency control during updates.
+
+```csharp
+public class UpdateWeatherObject
+{
+    private Dictionary<string, object> _properties = new Dictionary<string, object>();
+
+ public string StationName { get; set; }
+ public string ObservationDate { get; set; }
+ public string Etag { get; set; }
+
+ public object this[string name]
+ {
+ get => (ContainsProperty(name)) ? _properties[name] : null;
+ set => _properties[name] = value;
+ }
+
+ public ICollection<string> PropertyNames => _properties.Keys;
+
+ public int PropertyCount => _properties.Count;
+
+ public bool ContainsProperty(string name) => _properties.ContainsKey(name);
+}
+```
+
+In the sample app, this object is passed to the `UpdateEntity` method in the `TableService` class. This method first loads the existing entity from the Table API using the [GetEntity](/dotnet/api/azure.data.tables.tableclient.getentity) method on the [TableClient](/dotnet/api/azure.data.tables.tableclient). It then updates that entity object and uses the `UpdateEntity` method to save the updates to the database. Note how the [UpdateEntity](/dotnet/api/azure.data.tables.tableclient.updateentity) method takes the current ETag of the object to ensure the object has not changed since it was initially loaded. If you want to update the entity regardless, you can pass a value of `ETag.All` to the `UpdateEntity` method.
+
+```csharp
+public void UpdateEntity(UpdateWeatherObject weatherObject)
+{
+ string partitionKey = weatherObject.StationName;
+ string rowKey = weatherObject.ObservationDate;
+
+ // Use the partition key and row key to get the entity
+ TableEntity entity = _tableClient.GetEntity<TableEntity>(partitionKey, rowKey).Value;
+
+ foreach (string propertyName in weatherObject.PropertyNames)
+ {
+ var value = weatherObject[propertyName];
+ entity[propertyName] = value;
+ }
+
+ _tableClient.UpdateEntity(entity, new ETag(weatherObject.Etag));
+}
+```
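+
+If you do want an unconditional update, a one-line sketch of the alternative mentioned above looks like the following, using the wildcard `ETag.All` value from the `Azure` namespace.
+
+```csharp
+// Overwrite the entity regardless of any changes made since it was read
+_tableClient.UpdateEntity(entity, ETag.All);
+```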
+
+### Remove an entity
-## Review SLAs in the Azure portal
+To remove an entity from a table, call the [DeleteEntity](/dotnet/api/azure.data.tables.tableclient.deleteentity) method on the [TableClient](/dotnet/api/azure.data.tables.tableclient) object with the partition key and row key of the object.
+
+```csharp
+public void RemoveEntity(string partitionKey, string rowKey)
+{
+ _tableClient.DeleteEntity(partitionKey, rowKey);
+}
+```
+
+## 7 - Run the code
+
+Run the sample application to interact with the Cosmos DB Table API. The first time you run the application, there will be no data because the table is empty. Use any of the buttons at the top of the application to add data to the table.
++
+Selecting the **Insert using Table Entity** button opens a dialog allowing you to insert or upsert a new row using a `TableEntity` object.
++
+Selecting the **Insert using Expandable Data** button brings up a dialog that enables you to insert an object with custom properties, demonstrating how the Cosmos DB Table API automatically adds properties (columns) to the table when needed. Use the *Add Custom Field* button to add one or more new properties and see this capability in action.
++
+Use the **Insert Sample Data** button to load some sample data into your Cosmos DB Table.
++
+Select the **Filter Results** item in the top menu to be taken to the Filter Results page. On this page, fill out the filter criteria to demonstrate how a filter clause can be built and passed to the Cosmos DB Table API.
+ ## Clean up resources
+When you are finished with the sample application, you should remove all Azure resources related to this article from your Azure account. You can do this by deleting the resource group.
+
+### [Azure portal](#tab/azure-portal)
+
+A resource group can be deleted using the [Azure portal](https://portal.azure.com/) by doing the following.
+
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Delete resource group step 1](<./includes/create-table-dotnet/remove-resource-group-1.md>)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-remove-resource-group-1-240px.png" alt-text="A screenshot showing how to search for a resource group." lightbox="./media/create-table-dotnet/azure-portal-remove-resource-group-1.png"::: |
+| [!INCLUDE [Delete resource group step 2](<./includes/create-table-dotnet/remove-resource-group-2.md>)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-remove-resource-group-2-240px.png" alt-text="A screenshot showing the location of the Delete resource group button." lightbox="./media/create-table-dotnet/azure-portal-remove-resource-group-2.png"::: |
+| [!INCLUDE [Delete resource group step 3](<./includes/create-table-dotnet/remove-resource-group-3.md>)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-remove-resource-group-3-240px.png" alt-text="A screenshot showing the confirmation dialog for deleting a resource group." lightbox="./media/create-table-dotnet/azure-portal-remove-resource-group-3.png"::: |
+
+### [Azure CLI](#tab/azure-cli)
+
+To delete a resource group using the Azure CLI, use the [az group delete](/cli/azure/group#az_group_delete) command with the name of the resource group to be deleted. Deleting a resource group will also remove all Azure resources contained in the resource group.
+
+```azurecli
+az group delete --name $RESOURCE_GROUP_NAME
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+To delete a resource group using Azure PowerShell, use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command with the name of the resource group to be deleted. Deleting a resource group will also remove all Azure resources contained in the resource group.
+
+```azurepowershell
+Remove-AzResourceGroup -Name $resourceGroupName
+```
++

## Next steps

In this quickstart, you've learned how to create an Azure Cosmos DB account, create a table using the Data Explorer, and run an app. Now you can query your data using the Table API.

> [!div class="nextstepaction"]
-> [Import table data to the Table API](table-import.md)
+> [Import table data to the Table API](table-import.md)
cosmos-db Dotnet Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/dotnet-sdk.md
- Title: Azure Cosmos DB Table API .NET SDK & Resources
-description: Learn all about the Azure Cosmos DB Table API for .NET including release dates, retirement dates, and changes made between each version.
----- Previously updated : 08/17/2018---
-# Azure Cosmos DB Table .NET API: Download and release notes
-
-> [!div class="op_single_selector"]
-> * [.NET](dotnet-sdk.md)
-> * [.NET Standard](dotnet-standard-sdk.md)
-> * [Java](java-sdk.md)
-> * [Node.js](nodejs-sdk.md)
-> * [Python](python-sdk.md)
-
-| | Links|
-|||
-|**SDK download**|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.CosmosDB.Table)|
-|**Quickstart**|[Azure Cosmos DB: Build an app with .NET and the Table API](create-table-dotnet.md)|
-|**Tutorial**|[Azure Cosmos DB: Develop with the Table API in .NET](tutorial-develop-table-dotnet.md)|
-|**Current supported framework**|[Microsoft .NET Framework 4.5.1](https://www.microsoft.com/en-us/download/details.aspx?id=40779)|
-
-> [!IMPORTANT]
-> The .NET Framework SDK [Microsoft.Azure.CosmosDB.Table](https://www.nuget.org/packages/Microsoft.Azure.CosmosDB.Table) is in maintenance mode and it will be deprecated soon. Please upgrade to the new .NET Standard library [Microsoft.Azure.Cosmos.Table](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table) to continue to get the latest features supported by the Table API.
-
-> If you created a Table API account during the preview, please create a [new Table API account](create-table-dotnet.md#create-a-database-account) to work with the generally available Table API SDKs.
->
-
-## Release notes
-
-### <a name="2.1.2"></a>2.1.2
-
-* Bug fixes
-
-### <a name="2.1.0"></a>2.1.0
-
-* Bug fixes
-
-### <a name="2.0.0"></a>2.0.0
-
-* Added Multi-region write support
-* Fixed NuGet package dependencies on Microsoft.Azure.DocumentDB, Microsoft.OData.Core, Microsoft.OData.Edm, Microsoft.Spatial
-
-### <a name="1.1.3"></a>1.1.3
-
-* Fixed NuGet package dependencies on Microsoft.Azure.Storage.Common and Microsoft.Azure.DocumentDB.
-* Bug fixes on table serialization when JsonConvert.DefaultSettings are configured.
-
-### <a name="1.1.1"></a>1.1.1
-
-* Added validation for malformed ETAGs in Direct Mode.
-* Fixed LINQ query bug in Gateway Mode.
-* Synchronous APIs now run on the thread pool with SynchronizationContext.
-
-### <a name="1.1.0"></a>1.1.0
-
-* Add TableQueryMaxItemCount, TableQueryEnableScan, TableQueryMaxDegreeOfParallelism, and TableQueryContinuationTokenLimitInKb to TableRequestOptions
-* Bug Fixes
-
-### <a name="1.0.0"></a>1.0.0
-
-* General availability release
-
-### <a name="0.1.0-preview"></a>0.9.0-preview
-
-* Initial preview release
-
-## Release and Retirement dates
-
-Microsoft provides notification at least **12 months** in advance of retiring an SDK in order to smooth the transition to a newer/supported version.
-
-The `Microsoft.Azure.CosmosDB.Table` library is currently available for .NET Framework only, and is in maintenance mode and will be deprecated soon. New features and functionalities and optimizations are only added to the .NET Standard library [Microsoft.Azure.Cosmos.Table](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table), as such it is recommended that you upgrade to [Microsoft.Azure.Cosmos.Table](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table).
-
-The [WindowsAzure.Storage-PremiumTable](https://www.nuget.org/packages/WindowsAzure.Storage-PremiumTable/0.1.0-preview) preview package has been deprecated. The WindowsAzure.Storage-PremiumTable SDK will be retired on November 15, 2018, at which time requests to the retired SDK will not be permitted.
-
-| Version | Release Date | Retirement Date |
-| | | |
-| [2.1.2](#2.1.2) |September 16, 2019| |
-| [2.1.0](#2.1.0) |January 22, 2019|April 01, 2020 |
-| [2.0.0](#2.0.0) |September 26, 2018|March 01, 2020 |
-| [1.1.3](#1.1.3) |July 17, 2018|December 01, 2019 |
-| [1.1.1](#1.1.1) |March 26, 2018|December 01, 2019 |
-| [1.1.0](#1.1.0) |February 21, 2018|December 01, 2019 |
-| [1.0.0](#1.0.0) |November 15, 2017|November 15, 2019 |
-| 0.9.0-preview |November 11, 2017 |November 11, 2019 |
-
-## Troubleshooting
-
-If you get the error
-
-```
-Unable to resolve dependency 'Microsoft.Azure.Storage.Common'. Source(s) used: 'nuget.org',
-'CliFallbackFolder', 'Microsoft Visual Studio Offline Packages', 'Microsoft Azure Service Fabric SDK'`
-```
-
-when attempting to use the Microsoft.Azure.CosmosDB.Table NuGet package, you have two options to fix the issue:
-
-* Use Package Manage Console to install the Microsoft.Azure.CosmosDB.Table package and its dependencies. To do this, type the following in the Package Manager Console for your solution.
-
- ```powershell
- Install-Package Microsoft.Azure.CosmosDB.Table -IncludePrerelease
- ```
-
-
-* Using your preferred NuGet package management tool, install the Microsoft.Azure.Storage.Common NuGet package before installing Microsoft.Azure.CosmosDB.Table.
-
-## FAQ
--
-## See also
-
-To learn more about the Azure Cosmos DB Table API, see [Introduction to Azure Cosmos DB Table API](introduction.md).
cosmos-db How To Create Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/how-to-create-container.md
This article explains the different ways to create a container in Azure Cosmos D
1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. [Create a new Azure Cosmos account](create-table-dotnet.md#create-a-database-account), or select an existing account.
+1. [Create a new Azure Cosmos account](create-table-dotnet.md#1create-an-azure-cosmos-db-account), or select an existing account.
1. Open the **Data Explorer** pane, and select **New Table**. Next, provide the following details:
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/introduction.md
Previously updated : 01/08/2021 Last updated : 08/25/2021
> The .NET Cosmos DB Table Library [Microsoft.Azure.Cosmos.Table](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table) is in maintenance mode and will be deprecated soon. Please upgrade to the new .NET Azure Data Tables Library [Azure.Data.Tables](https://www.nuget.org/packages/Azure.Data.Tables/) to continue to get the latest features supported by the Table API.

## Table offerings
+
If you currently use Azure Table Storage, you gain the following benefits by moving to the Azure Cosmos DB Table API:

| Feature | Azure Table storage | Azure Cosmos DB Table API |
If you currently use Azure Table Storage, you gain the following benefits by mov
## Get started
-Create an Azure Cosmos DB account in the [Azure portal](https://portal.azure.com). Then get started with our [Quick Start for Table API by using .NET](create-table-dotnet.md).
-
-> [!IMPORTANT]
-> If you created a Table API account during the preview, please create a [new Table API account](create-table-dotnet.md#create-a-database-account) to work with the generally available Table API SDKs.
->
+Create an Azure Cosmos DB account in the [Azure portal](https://portal.azure.com). Then get started with our [Quick Start for Table API by using .NET](create-table-dotnet.md).
## Next steps
cosmos-db Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/java-sdk.md
- Title: Azure Cosmos DB Table API for Java
-description: Learn all about the Azure Cosmos DB Table API for Java including release dates, retirement dates, and changes made between each version.
--- Previously updated : 11/20/2018-----
-# Azure Cosmos DB Table API for Java: Release notes and resources
-
-> [!div class="op_single_selector"]
-> * [.NET](dotnet-sdk.md)
-> * [.NET Standard](dotnet-standard-sdk.md)
-> * [Java](java-sdk.md)
-> * [Node.js](nodejs-sdk.md)
-> * [Python](python-sdk.md)
-
-
-| | Links |
-|||
-|**SDK download**|[Download Options](https://github.com/azure/azure-storage-java#download)|
-|**API documentation**|[Java API reference documentation](https://azure.github.io/azure-storage-java/)|
-|**Contribute to SDK**|[GitHub](https://github.com/Azure/azure-storage-java#contribute-code-or-provide-feedback)|
-
-> [!IMPORTANT]
-> If you created a Table API account during the preview, please create a [new Table API account](create-table-dotnet.md#create-a-database-account) to work with the generally available Table API SDKs.
->
-
-## Release notes
-
-### <a name="1.0.0"></a>1.0.0
-* General availability release
-
-## Release and retirement dates
-Microsoft will provide notification at least **12 months** in advance of retiring an SDK in order to smooth the transition to a newer/supported version.
-
-New features and functionality and optimizations are only added to the current SDK, as such it is recommended that you always upgrade to the latest SDK version as early as possible.
-
-| Version | Release Date | Retirement Date |
-| | | |
-| [1.0.0](#1.0.0) |November 15, 2017 | |
-
-## FAQ
-
-## See also
-To learn more about Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
-
cosmos-db Nodejs Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/nodejs-sdk.md
- Title: Azure Cosmos DB Table API for Node.js
-description: Learn all about the Azure Cosmos DB Table API for Node.js including release dates, retirement dates, and changes made between each version.
--- Previously updated : 11/20/2018----
-# Azure Cosmos DB Table API for Node.js: Release notes and resources
-
-> [!div class="op_single_selector"]
-> * [.NET](dotnet-sdk.md)
-> * [.NET Standard](dotnet-standard-sdk.md)
-> * [Java](java-sdk.md)
-> * [Node.js](nodejs-sdk.md)
-> * [Python](python-sdk.md)
-
-
-| | Links |
-|||
-|**SDK download**|[NPM](https://www.npmjs.com/package/azure-storage)|
-|**API documentation**|[Node.js API reference documentation](https://azure.github.io/azure-storage-node/)|
-|**Contribute to SDK**|[GitHub](https://github.com/Azure/azure-storage-node#contribute)|
-
-> [!IMPORTANT]
-> If you created a Table API account during the preview, please create a [new Table API account](create-table-dotnet.md#create-a-database-account) to work with the generally available Table API SDKs.
->
-
-## Release notes
-
-### <a name="1.0.0"></a>1.0.0
-* General availability release
-
-## Release and retirement dates
-Microsoft will provide notification at least **12 months** in advance of retiring an SDK in order to smooth the transition to a newer/supported version.
-
-New features and functionality and optimizations are only added to the current SDK, as such it is recommended that you always upgrade to the latest SDK version as early as possible.
-
-| Version | Release Date | Retirement Date |
-| | | |
-| [1.0.0](#1.0.0) |November 15, 2017 | |
-
-## FAQ
-
-## See also
-To learn more about Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
-
cosmos-db Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/python-sdk.md
- Title: Azure Cosmos DB Table API for Python
-description: Learn all about the Azure Cosmos DB Table API including release dates, retirement dates, and changes made between each version.
--- Previously updated : 11/20/2018----
-# Azure Cosmos DB Table API SDK for Python: Release notes and resources
-
-> [!div class="op_single_selector"]
-> * [.NET](dotnet-sdk.md)
-> * [.NET Standard](dotnet-standard-sdk.md)
-> * [Java](java-sdk.md)
-> * [Node.js](nodejs-sdk.md)
-> * [Python](python-sdk.md)
-
-
-| | Links |
-|||
-|**SDK download**|[PyPI](https://pypi.python.org/pypi/azure-cosmosdb-table/)|
-|**API documentation**|[Python API reference documentation](/python/api/overview/azure/cosmosdb)|
-|**SDK installation instructions**|[Python SDK installation instructions](https://github.com/Azure/azure-cosmosdb-python/tree/master/azure-cosmosdb-table)|
-|**Contribute to SDK**|[GitHub](https://github.com/Azure/azure-cosmosdb-python/tree/master/azure-cosmosdb-table)|
-|**Current supported platform**|[Python 2.7](https://www.python.org/downloads/) or [Python 3.6+](https://www.python.org/downloads/)|
-
-> [!IMPORTANT]
-> If you created a Table API account during the preview, please create a [new Table API account](create-table-dotnet.md#create-a-database-account) to work with the generally available Table API SDKs.
->
-
-## Release notes
-
-### <a name="1.0.0"></a>1.0.0
-* General availability release
-
-### <a name="0.37.1"></a>0.37.1
-* Pre-release SDK
-
-## Release and retirement dates
-Microsoft will provide notification at least **12 months** in advance of retiring an SDK in order to smooth the transition to a newer/supported version.
-
-New features and functionality and optimizations are only added to the current SDK, as such it is recommended that you always upgrade to the latest SDK version as early as possible.
-
-<br/>
-
-| Version | Release Date | Retirement Date |
-| | | |
-| [1.0.0](#1.0.0) |November 15, 2017 | |
-| [0.37.1](#0.37.1) |October 05, 2017 | |
--
-## FAQ
-
-## See also
-To learn more about Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Table Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/table-support.md
description: Learn how Azure Cosmos DB Table API and Azure Storage Tables work t
Previously updated : 01/08/2021 Last updated : 08/25/2021
Azure Cosmos DB Table API and Azure Table storage share the same table data mode
[!INCLUDE [storage-table-cosmos-comparison](../../../includes/storage-table-cosmos-comparison.md)]
-## Developing with the Azure Cosmos DB Table API
+## Azure SDKs
-At this time, the [Azure Cosmos DB Table API](introduction.md) has four SDKs available for development:
+### Current release
-* [Microsoft.Azure.Cosmos.Table](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table): .NET SDK. This library targets .NET Standard and has the same classes and method signatures as the public [Windows Azure Storage SDK](https://www.nuget.org/packages/WindowsAzure.Storage), but also has the ability to connect to Azure Cosmos DB accounts using the Table API. Users of .NET Framework library [Microsoft.Azure.CosmosDB.Table](https://www.nuget.org/packages/Microsoft.Azure.CosmosDB.Table/) are recommended to upgrade to [Microsoft.Azure.Cosmos.Table](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table) as it is in maintenance mode and will be deprecated soon.
+The following SDK packages work with both the Azure Cosmos DB Table API and Azure Table storage.
-* [Python SDK](python-sdk.md): The new Azure Cosmos DB Python SDK is the only SDK that supports Azure Table storage in Python. This SDK connects with both Azure Table storage and Azure Cosmos DB Table API.
+* **.NET** - Use the [Azure.Data.Tables](https://www.nuget.org/packages/Azure.Data.Tables/) available on NuGet.
-* [Java SDK](java-sdk.md): This Azure Storage SDK has the ability to connect to Azure Cosmos DB accounts using the Table API.
+* **Python** - Use the [azure-data-tables](https://pypi.org/project/azure-data-tables/) available from PyPi.
-* [Node.js SDK](nodejs-sdk.md): This Azure Storage SDK has the ability to connect to Azure Cosmos DB accounts using the Table API.
+* **JavaScript/TypeScript** - Use the [@azure/data-tables](https://www.npmjs.com/package/@azure/data-tables) package available on npm.js.
+* **Java** - Use the [azure-data-tables](https://mvnrepository.com/artifact/com.azure/azure-data-tables/12.0.0) package available on Maven.
-Additional information about working with the Table API is available in the [FAQ: Develop with the Table API](table-api-faq.yml) article.
+### Prior releases
-## Developing with Azure Table storage
+The following SDK packages work only with Azure Cosmos DB Table API.
-Azure Table storage has these SDKs available for development:
+* **.NET** - [Microsoft.Azure.Cosmos.Table](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table) available on NuGet. This library works only with the Cosmos DB Table API.
-- The [Microsoft.Azure.Storage.Blob](https://www.nuget.org/packages/Microsoft.Azure.Storage.Blob/), [Microsoft.Azure.Storage.File](https://www.nuget.org/packages/Microsoft.Azure.Storage.File/), [Microsoft.Azure.Storage.Queue](https://www.nuget.org/packages/Microsoft.Azure.Storage.Queue/), and [Microsoft.Azure.Storage.Common](https://www.nuget.org/packages/Microsoft.Azure.Storage.Common/) libraries allow you to work with the Azure Table storage service. If you are using the Table API in Azure Cosmos DB, you can instead use the [Microsoft.Azure.CosmosDB.Table](https://www.nuget.org/packages/Microsoft.Azure.CosmosDB.Table/) library.
-- [Python SDK](https://github.com/Azure/azure-cosmos-table-python). The Azure Cosmos DB Table SDK for Python supports the Table Storage service (because Azure Table Storage and Cosmos DB's Table API share the same features and functionalities, and in an effort to factorize our SDK development efforts, we recommend to use this SDK).
-- [Azure Storage SDK for Java](https://github.com/azure/azure-storage-java). This Azure Storage SDK provides a client library in Java to consume Azure Table storage.
-- [Node.js SDK](https://github.com/Azure/azure-storage-node). This SDK provides a Node.js package and a browser-compatible JavaScript client library to consume the storage Table service.
-- [AzureRmStorageTable PowerShell module](https://www.powershellgallery.com/packages/AzureRmStorageTable). This PowerShell module has cmdlets to work with storage Tables.
-- [Azure Storage Client Library for C++](https://github.com/Azure/azure-storage-cpp/). This library enables you to build applications against Azure Storage.
-- [Azure Storage Table Client Library for Ruby](https://github.com/azure/azure-storage-ruby/tree/master/table). This project provides a Ruby package that makes it easy to access Azure storage Table services.
-- [Azure Storage Table PHP Client Library](https://github.com/Azure/azure-storage-php/tree/master/azure-storage-table). This project provides a PHP client library that makes it easy to access Azure storage Table services.
+* **Python** - [azure-cosmosdb-table](https://pypi.org/project/azure-cosmosdb-table/) available from PyPi. This SDK connects with both Azure Table storage and Azure Cosmos DB Table API.
+* **JavaScript/TypeScript** - [azure-storage](https://www.npmjs.com/package/azure-storage) package available on npm.js. This Azure Storage SDK has the ability to connect to Azure Cosmos DB accounts using the Table API.
-
-
+* **Java** - [Microsoft Azure Storage Client SDK for Java](https://mvnrepository.com/artifact/com.microsoft.azure/azure-storage) on Maven. This Azure Storage SDK has the ability to connect to Azure Cosmos DB accounts using the Table API.
+* **C++** - [Azure Storage Client Library for C++](https://github.com/Azure/azure-storage-cpp/). This library enables you to build applications against Azure Storage.
+* **Ruby** - [Azure Storage Table Client Library for Ruby](https://github.com/azure/azure-storage-ruby/tree/master/table). This project provides a Ruby package that makes it easy to access Azure storage Table services.
+* **PHP** - [Azure Storage Table PHP Client Library](https://github.com/Azure/azure-storage-php/tree/master/azure-storage-table). This project provides a PHP client library that makes it easy to access Azure storage Table services.
+* **PowerShell** - [AzureRmStorageTable PowerShell module](https://www.powershellgallery.com/packages/AzureRmStorageTable). This PowerShell module has cmdlets to work with storage Tables.
cosmos-db Tutorial Develop Table Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/tutorial-develop-table-dotnet.md
- Title: Azure Cosmos DB Table API using .NET Standard SDK
-description: Learn how to store and query the structured data in Azure Cosmos DB Table API account
----- Previously updated : 12/03/2019--
-# Get started with Azure Cosmos DB Table API and Azure Table storage using the .NET SDK
----
-You can use the Azure Cosmos DB Table API or Azure Table storage to store structured NoSQL data in the cloud, providing a key/attribute store with a schema less design. Because Azure Cosmos DB Table API and Table storage are schema less, it's easy to adapt your data as the needs of your application evolve. You can use Azure Cosmos DB Table API or the Table storage to store flexible datasets such as user data for web applications, address books, device information, or other types of metadata your service requires.
-
-This tutorial describes a sample that shows you how to use the [Microsoft Azure Cosmos DB Table Library for .NET](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table) with Azure Cosmos DB Table API and Azure Table storage scenarios. These scenarios are explored using C# examples that illustrate how to create tables, insert/ update data, query data and delete the tables.
-
-While this walkthrough will discuss the specifics of the Cosmos DB implementation, you can create a Table Storage resource and use the same NuGet package and API to access the resource; only the resource creation is different. Regardless of which resource type you choose, you must use the connection string specific to the Azure service you have created.
-
-## Prerequisites
-
-You need the following to complete this sample successfully:
-
-* [Microsoft Visual Studio](https://www.visualstudio.com/downloads/)
-
-* [Microsoft Azure CosmosDB Table Library for .NET](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table) - This library is currently available for .NET Standard and .NET framework.
-
-* [Azure Cosmos DB Table API account](create-table-dotnet.md#create-a-database-account).
-
-## Create an Azure Cosmos DB Table API account
--
-## Create a .NET console project
-
-In Visual Studio, create a new .NET console application. The following steps show you how to create a console application in Visual Studio 2019. You can use the Azure Cosmos DB Table Library in any type of .NET application, including an Azure cloud service or web app, and desktop and mobile applications. In this guide, we use a console application for simplicity.
-
-1. Select **File** > **New** > **Project**.
-
-1. Choose **Console App (.NET Core)**, and then select **Next**.
-
-1. In the **Project name** field, enter a name for your application, such as **CosmosTableSamples**. (You can provide a different name as needed.)
-
-1. Select **Create**.
-
-All code examples in this sample can be added to the Main() method of your console application's **Program.cs** file.
-
-## Install the required NuGet package
-
-To obtain the NuGet package, follow these steps:
-
-1. Right-click your project in **Solution Explorer** and choose **Manage NuGet Packages**.
-
-1. Search online for [`Microsoft.Azure.Cosmos.Table`](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table), [`Microsoft.Extensions.Configuration`](https://www.nuget.org/packages/Microsoft.Extensions.Configuration), [`Microsoft.Extensions.Configuration.Json`](https://www.nuget.org/packages/Microsoft.Extensions.Configuration.Json), [`Microsoft.Extensions.Configuration.Binder`](https://www.nuget.org/packages/Microsoft.Extensions.Configuration.Binder) and select **Install** to install the Microsoft Azure Cosmos DB Table Library.
-
-## Configure your storage connection string
-
-1. From the [Azure portal](https://portal.azure.com/), navigate to your Azure Cosmos account or the Table Storage account.
-
-1. Open the **Connection String** or **Access keys** pane. Use the copy button on the right side of the window to copy the **PRIMARY CONNECTION STRING**.
-
- :::image type="content" source="./media/create-table-dotnet/connection-string.png" alt-text="View and copy the PRIMARY CONNECTION STRING in the Connection String pane":::
-
-1. To configure your connection string, from visual studio right click on your project **CosmosTableSamples**.
-
-1. Select **Add** and then **New Item**. Create a new file **Settings.json** with file type as **TypeScript JSON Configuration** File.
-
-1. Replace the code in Settings.json file with the following code and assign your primary connection string:
-
- ```csharp
- {
- "StorageConnectionString": <Primary connection string of your Azure Cosmos DB account>
- }
- ```
-
-1. Right click on your project **CosmosTableSamples**. Select **Add**, **New Item** and add a class named **AppSettings.cs**.
-
-1. Add the following code to the AppSettings.cs file. This file reads the connection string from Settings.json file and assigns it to the configuration parameter:
-
- :::code language="csharp" source="~/azure-cosmosdb-dotnet-table/CosmosTableSamples/AppSettings.cs":::
-
-## Parse and validate the connection details
-
-1. Right click on your project **CosmosTableSamples**. Select **Add**, **New Item** and add a class named **Common.cs**. You will write code to validate the connection details and create a table within this class.
-
-1. Define a method `CreateStorageAccountFromConnectionString` as shown below. This method will parse the connection string details and validate that the account name and account key details provided in the "Settings.json" file are valid.
-
- :::code language="csharp" source="~/azure-cosmosdb-dotnet-table/CosmosTableSamples/Common.cs" id="createStorageAccount":::
-
-## Create a Table
-
-The [CloudTableClient](/dotnet/api/microsoft.azure.cosmos.table.cloudtableclient) class enables you to retrieve tables and entities stored in Table storage. Because we don't have any tables in the Cosmos DB Table API account, let's add the `CreateTableAsync` method to the **Common.cs** class to create a table:
--
-If you get a "503 service unavailable exception" error, it's possible that the required ports for the connectivity mode are blocked by a firewall. To fix this issue, either open the required ports or use the gateway mode connectivity as shown in the following code:
-
-```csharp
-tableClient.TableClientConfiguration.UseRestExecutorForCosmosEndpoint = true;
-```
-
-## Define the entity
-
-Entities map to C# objects by using a custom class derived from [TableEntity](/dotnet/api/microsoft.azure.cosmos.table.tableentity). To add an entity to a table, create a class that defines the properties of your entity.
-
-Right click on your project **CosmosTableSamples**. Select **Add**, **New Folder** and name it as **Model**. Within the Model folder add a class named **CustomerEntity.cs** and add the following code to it.
--
-This code defines an entity class that uses the customer's first name as the row key and last name as the partition key. Together, an entity's partition and row key uniquely identify it in the table. Entities with the same partition key can be queried faster than entities with different partition keys but using diverse partition keys allows for greater scalability of parallel operations. Entities to be stored in tables must be of a supported type, for example derived from the [TableEntity](/dotnet/api/microsoft.azure.cosmos.table.tableentity) class. Entity properties you'd like to store in a table must be public properties of the type, and support both getting and setting of values. Also, your entity type must expose a parameter-less constructor.
-
-## Insert or merge an entity
-
-The following code example creates an entity object and adds it to the table. The InsertOrMerge method within the [TableOperation](/dotnet/api/microsoft.azure.cosmos.table.tableoperation) class is used to insert or merge an entity. The [CloudTable.ExecuteAsync](/dotnet/api/microsoft.azure.cosmos.table.cloudtable.executeasync) method is called to execute the operation.
-
-Right click on your project **CosmosTableSamples**. Select **Add**, **New Item** and add a class named **SamplesUtils.cs**. This class stores all the code required to perform CRUD operations on the entities.
--
-## Get an entity from a partition
-
-You can get entity from a partition by using the Retrieve method under the [TableOperation](/dotnet/api/microsoft.azure.cosmos.table.tableoperation) class. The following code example gets the partition key row key, email and phone number of a customer entity. This example also prints out the request units consumed to query for the entity. To query for an entity, append the following code to **SamplesUtils.cs** file:
--
-## Delete an entity
-
-You can easily delete an entity after you have retrieved it by using the same pattern shown for updating an entity. The following code retrieves and deletes a customer entity. To delete an entity, append the following code to **SamplesUtils.cs** file:
--
-## Execute the CRUD operations on sample data
-
-After you define the methods to create table, insert or merge entities, run these methods on the sample data. To do so, right click on your project **CosmosTableSamples**. Select **Add**, **New Item** and add a class named **BasicSamples.cs** and add the following code to it. This code creates a table, adds entities to it.
-
-If don't want to delete the entity and table at the end of the project, comment the `await table.DeleteIfExistsAsync()` and `SamplesUtils.DeleteEntityAsync(table, customer)` methods from the following code. It's best to comment out these methods and validate the data before you delete the table.
--
-The previous code creates a table that starts with "demo" and the generated GUID is appended to the table name. It then adds a customer entity with first and last name as "Harp Walter" and later updates the phone number of this user.
-
-In this tutorial, you built code to perform basic CRUD operations on the data stored in a Table API account. You can also perform advanced operations such as batch inserting data, querying all the data within a partition, querying a range of data within a partition, and listing tables in the account whose names begin with a specified prefix. You can download the complete sample from the [azure-cosmos-table-dotnet-core-getting-started](https://github.com/Azure-Samples/azure-cosmos-table-dotnet-core-getting-started) GitHub repository. The [AdvancedSamples.cs](https://github.com/Azure-Samples/azure-cosmos-table-dotnet-core-getting-started/blob/main/CosmosTableSamples/AdvancedSamples.cs) class has more operations that you can perform on the data.
-
-## Run the project
-
-From your project **CosmosTableSamples**. Open the class named **Program.cs** and add the following code to it for calling BasicSamples when the project runs.
--
-Now build the solution and press F5 to run the project. When the project is run, you will see the following output in the command prompt:
--
-If you receive an error that says Settings.json file can't be found when running the project, you can resolve it by adding the following XML entry to the project settings. Right click on CosmosTableSamples, select Edit CosmosTableSamples.csproj and add the following itemGroup:
-
-```csharp
- <ItemGroup>
- <None Update="Settings.json">
- <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
- </None>
- </ItemGroup>
-```
-Now you can sign into the Azure portal and verify that the data exists in the table.
--
-## Next steps
-
-You can now proceed to the next tutorial and learn how to migrate data to Azure Cosmos DB Table API account.
-
-> [!div class="nextstepaction"]
->[Migrate data to Azure Cosmos DB Table API](table-import.md)
cost-management-billing Link Partner Id https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/link-partner-id.md
description: Track engagements with Azure customers by linking a partner ID to t
Previously updated : 10/05/2020 Last updated : 09/08/2021
When you have access to the customer's resources, use the Azure portal, PowerShe
```azurepowershell-interactive
- C:\> new-AzManagementPartner -PartnerId 12345
+ C:\> New-AzManagementPartner -PartnerId 12345
``` #### Get the linked partner ID ```azurepowershell-interactive
-C:\> get-AzManagementPartner
+C:\> Get-AzManagementPartner
``` #### Update the linked partner ID
C:\> Update-AzManagementPartner -PartnerId 12345
``` #### Delete the linked partner ID ```azurepowershell-interactive
-C:\> remove-AzManagementPartner -PartnerId 12345
+C:\> Remove-AzManagementPartner -PartnerId 12345
``` ### Use the Azure CLI to link to a new partner ID
cost-management-billing Review Enterprise Agreement Bill https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/understand/review-enterprise-agreement-bill.md
tags: billing
Previously updated : 08/20/2020 Last updated : 09/08/2021
This section doesn't apply to Azure customers in Australia, Japan, or Singapore.
You receive an Azure invoice when any of the following events occur during your billing cycle: - **Service overage**: Your organization's usage charges exceed your credit balance.-- **Charges billed separately**: The services your organization used aren't covered by the credit. You're invoiced for the following services despite your credit balance:
+- **Charges billed separately**: The services your organization used aren't covered by the credit, so you're invoiced for them despite your credit balance. The services listed below are examples; to get a full list of the services billed separately, submit a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
- Canonical - Citrix XenApp Essentials - Citrix XenDesktop
This section only applies to Azure customers in Australia, Japan, or Singapore.
You receive one or more Azure invoices when any of the following events occur: - **Service overage**: Your organization's usage charges exceed your credit balance.-- **Charges billed separately**: The services your organization used aren't covered by the credit. You're invoiced for the following
+- **Charges billed separately**: The services your organization used aren't covered by the credit, so you're invoiced for them. The services listed below are examples; to get a full list of the services billed separately, submit a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview):
- Canonical - Citrix XenApp Essentials - Citrix XenDesktop
data-factory Ci Cd Github Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/ci-cd-github-troubleshoot-guide.md
Previously updated : 06/27/2021 Last updated : 09/07/2021 # Troubleshoot CI-CD, Azure DevOps, and GitHub issues in ADF [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-In this article, let us explore common troubleshooting methods for Continuous Integration-Continuous Deployment (CI-CD), Azure DevOps and GitHub issues in Azure Data Factory.
+In this article, let us explore common troubleshooting methods for Continuous Integration-Continuous Deployment (CI-CD), Azure DevOps, and GitHub issues in Azure Data Factory.
If you have questions or issues in using source control or DevOps techniques, here are a few articles you may find useful:
Until recently, the only way to publish an ADF pipeline for deployments was using ADF P
The CI/CD process has been enhanced. The **Automated** publish feature takes, validates, and exports all ARM template features from the ADF UX. It makes the logic consumable via a publicly available npm package [@microsoft/azure-data-factory-utilities](https://www.npmjs.com/package/@microsoft/azure-data-factory-utilities). This method allows you to programmatically trigger these actions instead of having to go to the ADF UI and select a button. This method gives your CI/CD pipelines a **true** continuous integration experience. Follow [ADF CI/CD Publishing Improvements](./continuous-integration-deployment-improvements.md) for details.
-### Cannot publish because of 4-MB ARM template limit
+### Cannot publish because of 4 MB ARM template limit
#### Issue
-You cannot deploy because you hit Azure Resource Manager limit of 4-MB total template size. You need a solution to deploy after crossing the limit.
+You cannot deploy because you hit the Azure Resource Manager limit of 4 MB total template size. You need a solution to deploy after crossing the limit.
#### Cause
-Azure Resource Manager restricts template size to be 4-MB. Limit the size of your template to 4-MB, and each parameter file to 64 KB. The 4-MB limit applies to the final state of the template after it has been expanded with iterative resource definitions, and values for variables and parameters. But, you have crossed the limit.
+Azure Resource Manager restricts template size to 4 MB. Limit the size of your template to 4 MB, and each parameter file to 64 KB. The 4 MB limit applies to the final state of the template after it has been expanded with iterative resource definitions, and values for variables and parameters. But you have crossed the limit.
#### Resolution
During development and deployment cycles, you may want to unit test your pipelin
Because customers may have different unit testing requirements with different skill sets, the usual practice is to follow these steps: 1. Set up an Azure DevOps CI/CD project, or develop a .NET/Python/REST SDK-driven test strategy.
-2. For CI/CD, create build artifact containing all scripts and deploy resources in release pipeline. For SDK driven approach, develop Test units using PyTest in Python, C# **Nunit** using .NET SDK and so on.
+2. For CI/CD, create a build artifact containing all scripts and deploy resources in the release pipeline. For the SDK-driven approach, develop test units using PyTest in Python, NUnit in C# using the .NET SDK, and so on.
3. Run unit tests as part of the release pipeline or independently with the ADF Python/PowerShell/.NET/REST SDK. For example, suppose you want to delete duplicates in a file and then store the curated file as a table in a database. To test the pipeline, you set up a CI/CD project using Azure DevOps. You set up a TEST pipeline stage where you deploy your developed pipeline. You configure the TEST stage to run Python tests to make sure the table data is what you expected. If you do not use CI/CD, you use **NUnit** to trigger deployed pipelines with the tests you want. Once you are satisfied with the results, you can finally publish the pipeline to a production data factory. A minimal sketch of this trigger-and-verify pattern follows.
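The following sketch uses the Az.DataFactory PowerShell cmdlets; the resource group, factory, and pipeline names are hypothetical:

```azurepowershell-interactive
# Trigger a deployed pipeline, wait for it to finish, and fail if it didn't succeed.
$runId = Invoke-AzDataFactoryV2Pipeline -ResourceGroupName "myRG" -DataFactoryName "myTestFactory" -PipelineName "DedupPipeline"
do {
    Start-Sleep -Seconds 30
    $run = Get-AzDataFactoryV2PipelineRun -ResourceGroupName "myRG" -DataFactoryName "myTestFactory" -PipelineRunId $runId
} while ($run.Status -in "Queued", "InProgress")
if ($run.Status -ne "Succeeded") { throw "Pipeline run ended with status $($run.Status)" }
```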
+### Pipeline runs temporarily fail after CI/CD deployment or authoring updates
+
+#### Issue
+Pipeline runs fail temporarily after a CI/CD deployment or an authoring update. After some amount of time, new pipeline runs begin to succeed without any user action.
+
+#### Cause
+
+There are several scenarios which can trigger this behavior, all of which involve a new version of a dependent resource being called by the old version of the parent resource. For example, suppose an existing child pipeline called by an **Execute Pipeline** activity is updated to have required parameters, and the existing parent pipeline is updated to pass these parameters. If the deployment occurs during a parent pipeline execution, but before the **Execute Pipeline** activity, the old version of the pipeline will call the new version of the child pipeline, and the expected parameters will not be passed. This causes the pipeline to fail with a *UserError*. This can also occur with other types of dependencies, such as when a breaking change is made to a linked service during a run of a pipeline that references it.
+
+#### Resolution
+
+New runs of the parent pipeline will automatically begin succeeding, so typically no action is needed. However, to prevent these errors, customers should consider dependencies while authoring and planning deployments to avoid breaking changes.
+
## Next steps

For more help with troubleshooting, try the following resources:
data-factory Self Hosted Integration Runtime Automation Scripts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/self-hosted-integration-runtime-automation-scripts.md
To automate installation of Self-hosted Integration Runtime on local machines (o
> The scripts need to be applied per node, so make sure to run it across all nodes in case of high availability setup (2 or more nodes). * For automating setup:
-Install and register a new self-hosted integration runtime node using **[InstallGatewayOnLocalMachine.ps1](https://github.com/nabhishek/SelfHosted-IntegrationRuntime_AutomationScripts/blob/master/InstallGatewayOnLocalMachine.ps1)** - The script can be used to install self-hosted integration runtime node and register it with an authentication key. The script accepts two arguments, **first** specifying the location of the [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717) on a local disk, **second** specifying the **authentication key** (for registering self-hosted IR node).
+Install and register a new self-hosted integration runtime node using **[InstallGatewayOnLocalMachine.ps1](https://github.com/Azure/Azure-DataFactory/blob/main/SamplesV2/SelfHostedIntegrationRuntime/AutomationScripts/InstallGatewayOnLocalMachine.ps1)** - The script can be used to install self-hosted integration runtime node and register it with an authentication key. The script accepts two arguments, **first** specifying the location of the [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717) on a local disk, **second** specifying the **authentication key** (for registering self-hosted IR node).
* For automating manual updates:
-Update the self-hosted IR node with a specific version or to the latest version **[script-update-gateway.ps1](https://github.com/nabhishek/SelfHosted-IntegrationRuntime_AutomationScripts/blob/master/script-update-gateway.ps1)** - This is also supported in case you have turned off the auto-update, or want to have more control over updates. The script can be used to update the self-hosted integration runtime node to the latest version or to a specified higher version (downgrade doesn't work). It accepts an argument for specifying version number (example: -version 3.13.6942.1). When no version is specified, it always updates the self-hosted IR to the latest version found in the [downloads](https://www.microsoft.com/download/details.aspx?id=39717).
+Update the self-hosted IR node with a specific version or to the latest version **[script-update-gateway.ps1](https://github.com/Azure/Azure-DataFactory/blob/main/SamplesV2/SelfHostedIntegrationRuntime/AutomationScripts/script-update-gateway.ps1)** - This is also supported in case you have turned off the auto-update, or want to have more control over updates. The script can be used to update the self-hosted integration runtime node to the latest version or to a specified higher version (downgrade doesn't work). It accepts an argument for specifying version number (example: -version 3.13.6942.1). When no version is specified, it always updates the self-hosted IR to the latest version found in the [downloads](https://www.microsoft.com/download/details.aspx?id=39717).
> [!NOTE] > Only the last 3 versions can be specified. Ideally this is used to update an existing node to the latest version. **IT ASSUMES THAT YOU HAVE A REGISTERED SELF-HOSTED IR**.
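For illustration, typical invocations of these scripts look like the following sketch; the installer path, authentication key, and version number are placeholders:

```azurepowershell-interactive
# Install and register a new node: the first argument is the installer location on local disk,
# the second is the authentication key for registering the self-hosted IR node.
.\InstallGatewayOnLocalMachine.ps1 "C:\Downloads\IntegrationRuntime.msi" "<your-authentication-key>"

# Update an existing registered node to a specific version (omit -version to get the latest).
.\script-update-gateway.ps1 -version 3.13.6942.1
```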
digital-twins Reference Query Reserved https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/reference-query-reserved.md
+
+ Title: Azure Digital Twins query language reference - Reserved keywords
+
+description: Reference documentation for the Azure Digital Twins query language reserved keywords
++ Last updated : 9/1/2021+++
++
+# Azure Digital Twins query language reference: Reserved keywords
+
+This document contains the list of **reserved keywords** in the [Azure Digital Twins query language](concepts-query-language.md). These words cannot be used as identifiers in queries unless they are [escaped in double square brackets](#escaping-reserved-keywords-in-queries).
+
+## List of reserved keywords
+
+Here are the reserved keywords in the Azure Digital Twins query language:
+
+* ALL
+* AND
+* AS
+* ASC
+* AVG
+* BY
+* COUNT
+* DESC
+* DEVICES_JOBS
+* DEVICES_MODULES
+* DEVICES
+* ENDS_WITH
+* FALSE
+* FROM
+* GROUP
+* IN
+* IS_BOOL
+* IS_DEFINED
+* IS_NULL
+* IS_NUMBER
+* IS_OBJECT
+* IS_OF_MODEL
+* IS_PRIMITIVE
+* IS_STRING
+* MAX
+* MIN
+* NOT
+* NOT_IN
+* NULL
+* OR
+* ORDER
+* SELECT
+* STARTS_WITH
+* SUM
+* TOP
+* TRUE
+* WHERE
+
+## Escaping reserved keywords in queries
+
+To use a reserved keyword as an identifier in a query, escape the keyword by enclosing it with double square brackets like this: `[[<keyword>]]`
+
+For example, consider a set of digital twins with a property called `GROUP`, which is a reserved keyword. To filter on that property value, the property name must be escaped where it is used in the query, as shown below:
+
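+The following query is a minimal sketch of that escaping (the value `group1` is a placeholder):
+
+```sql
+SELECT * FROM DIGITALTWINS WHERE [[GROUP]] = 'group1'
+```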
dms Migration Using Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/migration-using-azure-data-studio.md
The workflow of the migration process is illustrated below.
Azure Database Migration Service prerequisites that are common across all supported migration scenarios include the need to:
-* [Download and install Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio.md)
-* [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension.md) from the Azure Data Studio marketplace
+* [Download and install Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio)
+* [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace
* Have an Azure account that is assigned to one of the built-in roles listed below:
    - Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share).
    - Owner or Contributor role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
Azure Database Migration Service prerequisites that are common across all suppor
> If your database backup files are already provided in an Azure storage account, a self-hosted integration runtime is not required during the migration process.

* When using a self-hosted integration runtime, make sure that the machine where the runtime is installed can connect to the source SQL Server instance and the network file share where backup files are located. Outbound port 445 should be enabled to allow access to the network file share.
-* If you're using the Azure Database Migration Service for the first time, ensure that Microsoft.DataMigration resource provider is registered in your subscription. You can follow the steps to [register the resource provider](/quickstart-create-data-migration-service-portal.md#register-the-resource-provider)
+* If you're using the Azure Database Migration Service for the first time, ensure that the Microsoft.DataMigration resource provider is registered in your subscription. You can follow the steps to [register the resource provider](/azure/dms/quickstart-create-data-migration-service-portal#register-the-resource-provider), or use the PowerShell sketch below.
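A minimal PowerShell sketch of the registration (the linked article also shows the portal steps):

```azurepowershell-interactive
# Register the Microsoft.DataMigration resource provider in the current subscription.
Register-AzResourceProvider -ProviderNamespace Microsoft.DataMigration
```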
### Recommendations for using self-hosted integration runtime for database migrations

- Use a single self-hosted integration runtime for multiple source SQL Server databases.
frontdoor Front Door Caching https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-caching.md
These profiles support the following compression encodings:
If a request supports gzip and Brotli compression, Brotli compression takes precedence.<br/> When a request for an asset specifies compression and the request results in a cache miss, Front Door compresses the asset directly on the POP server. Afterward, the compressed file is served from the cache. The resulting item is returned with a transfer-encoding: chunked.
+> [!NOTE]
+> Range requests may be compressed into different sizes. Azure Front Door requires the content-length values to be the same for any GET HTTP request. If clients send byte range requests with the `accept-encoding` header that leads to the Origin responding with different content lengths, then Azure Front Door will return a 503 error. You can either disable compression on Origin/Azure Front Door or create a Rules Set rule to remove `accept-encoding` from the request for byte range requests.
+
## Query string behavior

With Front Door, you can control how files are cached for a web request that contains a query string. In a web request with a query string, the query string is that portion of the request that occurs after a question mark (?). A query string can contain one or more key-value pairs, in which the field name and its value are separated by an equals sign (=). Each key-value pair is separated by an ampersand (&). For example, `http://www.contoso.com/content.mov?field1=value1&field2=value2`. If there's more than one key-value pair in a query string of a request, then their order doesn't matter.

- **Ignore query strings**: In this mode, Front Door passes the query strings from the requestor to the backend on the first request and caches the asset. All ensuing requests for the asset that are served from the Front Door environment ignore the query strings until the cached asset expires.
frontdoor Front Door Troubleshoot Routing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-troubleshoot-routing.md
na ms.devlang: na Previously updated : 09/30/2020 Last updated : 09/08/2021
This article describes how to troubleshoot common routing problems that you migh
* Regular requests sent to your backend without going through Azure Front Door are succeeding. Going via Azure Front Door results in 503 error responses.
* The failure from Azure Front Door typically shows after about 30 seconds.
+* Intermittent 503 errors with log `ErrorInfo: OriginInvalidResponse`.
### Cause
-The cause of this problem can be one of two things:
+The cause of this problem can be one of three things:
* Your backend is taking longer than the timeout configured (default is 30 seconds) to receive the request from Azure Front Door.
-* The time it takes to send a response to the request from Azure Front Door is taking longer than the timeout value.
+* The time it takes to send a response to the request from Azure Front Door is taking longer than the timeout value.
+* The client sent a byte range request with an `Accept-Encoding` header (compression enabled).
### Troubleshooting steps

* Send the request to your backend directly (without going through Azure Front Door). See how long your backend usually takes to respond.
* Send the request via Azure Front Door and see if you're getting any 503 responses. If not, the problem might not be a timeout issue. Contact support.
-* If going through Azure Front Door results in a 503 error response code, configure the `sendReceiveTimeout` field for Azure Front Door. You can extend the default timeout up to 4 minutes (240 seconds). The setting is under `backendPoolSettings` and is called `sendRecvTimeoutSeconds`.
+* If going through Azure Front Door results in a 503 error response code, configure the `sendReceiveTimeout` field for Azure Front Door. You can extend the default timeout up to 4 minutes (240 seconds). The setting is under `backendPoolSettings` and is called `sendRecvTimeoutSeconds`.
+* If the timeout doesn't resolve the issue, use a tool like Fiddler or your browser's developer tool to check if the client is sending byte range requests with `Accept-Encoding` headers, leading to the origin responding with different content lengths. If yes, then you can either disable compression on the Origin/Azure Front Door or create a Rules Set rule to remove `accept-encoding` from the request for byte range requests.
+
+ :::image type="content" source=".\media\troubleshoot-route-issues\remove-encoding-rule.png" alt-text="Screenshot of accept-encoding rule in Rules Engine.":::
## Requests sent to the custom domain return a 400 status code
frontdoor Troubleshoot Route Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/troubleshoot-route-issues.md
Previously updated : 02/18/2021- Last updated : 09/08/2021+ # Troubleshooting common routing problems with Azure Front Door Standard/Premium
This article describes how to troubleshoot common routing problems that you migh
* Regular requests sent to your backend without going through Azure Front Door are succeeding. Going via Azure Front Door results in 503 error responses.
* The failure from Azure Front Door typically shows after about 30 seconds.
+* Intermittent 503 errors with log `ErrorInfo: OriginInvalidResponse`.
### Cause
-The cause of this problem can be one of two things:
+The cause of this problem can be one of three things:
* Your origin is taking longer than the timeout configured (default is 30 seconds) to receive the request from Azure Front Door.
-* The time it takes to send a response to the request from Azure Front Door is taking longer than the timeout value.
+* The time it takes to send a response to the request from Azure Front Door is taking longer than the timeout value.
+* The client sent a byte range request with an `Accept-Encoding` header (compression enabled).
### Troubleshooting steps

* Send the request to your backend directly (without going through Azure Front Door). See how long your backend usually takes to respond.
* Send the request via Azure Front Door and see if you're getting any 503 responses. If not, the problem might not be a timeout issue. Contact support.
* If going through Azure Front Door results in a 503 error response code, configure the `sendReceiveTimeout` field for Azure Front Door. You can extend the default timeout up to 4 minutes (240 seconds). The setting is under *Endpoint Setting* and is called **Origin response timeout**.
-
-### Symptom
-
-Intermittent 503 errors with log `ErrorInfo: OriginInvalidResponse`
-
-### Cause
-
-Client sent a byte range request with `Accept-Encoding header` (compression enabled).
-
-### Troubleshooting steps
-
-If going through Azure Front Door results in a 503 error response code, then:
-* Configure the `sendReceiveTimeout` field for Azure Front Door. You can extend the default timeout up to 4 minutes (240 seconds). The setting is under *Endpoint Setting* and is called **Origin response timeout**.
* If the timeout doesn't resolve the issue, use a tool like Fiddler or your browser's developer tool to check if the client is sending byte range requests with Accept-Encoding headers, leading to the origin responding with different content lengths. If yes, then you can either disable compression on the Origin/Azure Front Door or create a Rules Set rule to remove `accept-encoding` from the request for byte range requests. :::image type="content" source="..\media\troubleshoot-route-issues\remove-encoding-rule.png" alt-text="Screenshot of accept-encoding rule in a Rule Set.":::
hdinsight Hdinsight Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-component-versioning.md
Title: Apache Hadoop components and versions - Azure HDInsight
description: Learn about the Apache Hadoop components and versions in Azure HDInsight. Previously updated : 02/08/2021 Last updated : 08/26/2021 # Azure HDInsight versions
This table lists the versions of HDInsight that are available in the Azure porta
| HDInsight version | VM OS | Release date| Support type | Support expiration date | Retirement date | High availability | | | | | | | | | | [HDInsight 4.0](hdinsight-40-component-versioning.md) |Ubuntu 16.0.4 LTS |September 24, 2018 | [Standard](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | | |Yes |
-| [HDInsight 3.6](hdinsight-36-component-versioning.md) |Ubuntu 16.0.4 LTS |April 4, 2017 | [Basic](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | Standard support expiration - June 30, 2021 <br> Basic support expiration - April 3, 2022 |April 4, 2022 |Yes |
+| [HDInsight 3.6](hdinsight-36-component-versioning.md) |Ubuntu 16.0.4 LTS |April 4, 2017 | [Basic](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | Standard support expired on June 30, 2021 for all cluster types.<br> Basic support expires on April 3, 2022. See [HDInsight 3.6 component versions](hdinsight-36-component-versioning.md) for cluster type details. |April 4, 2022 |Yes |
-*Starting July 1st, 2021 Microsoft will offer Basic support for certain HDI 3.6 cluster types. See [HDInsight 3.6 component versions](hdinsight-36-component-versioning.md).
-
-## Release notes
+**Support expiration** means that Microsoft no longer provides support for the specific HDInsight version, and the version may no longer be available through the Azure portal for cluster creation.
-For additional release notes on the latest versions of HDInsight, see [HDInsight release notes](hdinsight-release-notes.md).
+**Retirement** means that existing clusters of an HDInsight version continue to run as is. New clusters of this version can't be created through any means, which includes the CLI and SDKs. Other control plane features, such as manual scaling and autoscaling, are not guaranteed to work after retirement date. Support isn't available for retired versions.
## Support options for HDInsight versions

Support is defined as a time period during which an HDInsight version is supported by Microsoft Customer Service and Support. HDInsight offers two types of support:

-- **Standard support** is a time period in which Microsoft provides updates and support on HDInsight clusters.
- We recommend building solutions using the most recent fully supported version.
-- **Basic support** is a time period in which Microsoft will provide limited servicing to HDInsight Resource provider. HDInsight images and open-source software (OSS) components will not be serviced. Only critical security fixes will be patched on HDInsight clusters.
- Microsoft does not encourage creating new clusters or building any fresh solutions when a version is in Basic support. We recommend migrating existing clusters to the most recent fully supported version.
+- **Standard support**
+- **Basic support**
-**Support expiration** means that Microsoft no longer provides support for the specific HDInsight version. And it may no longer available through the Azure portal for cluster creation.
+### Standard support
-**Retirement** means that existing clusters of an HDInsight version continue to run as is. New clusters of this version can't be created through any means, which includes the CLI and SDKs. Other control plane features, such as manual scaling and autoscaling, are not guaranteed to work after retirement date. Support isn't available for retired versions.
+Standard support provides updates and support on HDInsight clusters. Microsoft recommends building solutions using the most recent fully supported version.
+
+Standard support includes the following:
+- Ability to create support requests on HDInsight 4.0 clusters.
+- Support for troubleshooting solutions built on 4.0 clusters.
+- Requests to restart services or nodes.
+- Root cause analysis investigations on support requests.
+- Root cause analysis or fixes to improve job or query performance.
+- Root cause analysis or fixes to improve customer-initiated changes, e.g., changing service configurations or issues due to custom script actions.
+- Product updates for critical security fixes until version retirement.
+- Scoped product updates to the HDInsight Resource provider.
+- Selective fixes or changes to HDInsight 4.0 images or open-source software (OSS) component versions.
+
+### Basic support
+
+Basic support provides limited servicing to the HDInsight Resource provider. HDInsight images and open-source software (OSS) components will not be serviced. Only critical security fixes will be patched on HDInsight clusters.
+
+Basic support includes the following:
+- Continued use of existing HDInsight 3.6 clusters.
+- Ability for existing HDInsight 3.6 customers to create new 3.6 clusters.
+- Ability to scale HDInsight 3.6 clusters up and down via autoscale or manual scale.
+- Scoped product updates to the HDInsight Resource provider.
+- Product updates for critical security fixes until version retirement.
+- Ability to create support requests on HDInsight 3.6 clusters.
+- Requests to restart services or nodes.
+
+Basic support does not include the following:
+- Fixes or changes to HDInsight 3.6 images or open-source software (OSS) component versions.
+- Support for troubleshooting solutions built on 3.6 clusters.
+- Adding new features or functionality.
+- Support for advice or ad-hoc queries.
+- Root cause analysis investigations on support requests.
+- Root cause analysis or fixes to improve job or query performance.
+- Root cause analysis or fixes to improve customer-initiated changes, e.g., changing service configurations or issues due to custom script actions.
+
+Microsoft does not encourage creating analytics pipelines or solutions on clusters in basic support. We recommend migrating existing clusters to the most recent fully supported version.
+
+## Release notes
+
+For additional release notes on the latest versions of HDInsight, see [HDInsight release notes](hdinsight-release-notes.md).
## Versioning considerations

- Once a cluster is deployed with an image, that cluster is not automatically upgraded to a newer image version. When creating new clusters, the most recent image version will be deployed.
hdinsight Hdinsight Hadoop Use Data Lake Storage Gen2 Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2-portal.md
Previously updated : 09/17/2020 Last updated : 09/07/2021 # Create a cluster with Data Lake Storage Gen2 using the Azure portal
Assign the managed identity to the **Storage Blob Data Owner** role on the stora
> [!NOTE] > * To add a secondary storage account with Data Lake Storage Gen2, at the storage account level, simply assign the managed identity created earlier to the new Data Lake Storage Gen2 that you want to add. Please be advised that adding a secondary storage account with Data Lake Storage Gen2 via the "Additional storage accounts" blade on HDInsight isn't supported. > * You can enable RA-GRS or RA-ZRS on the Azure Blob storage account that HDInsight uses. However, creating a cluster against the RA-GRS or RA-ZRS secondary endpoint isn't supported.
+ > * HDInsight does not support setting Data Lake Storage Gen2 as read-access geo-zone-redundant storage (RA-GZRS) or geo-zone-redundant storage (GZRS).
## Delete the cluster
hdinsight Apache Kafka Ssl Encryption Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/kafka/apache-kafka-ssl-encryption-authentication.md
The details of each step are given below.
keytool -keystore kafka.client.keystore.jks -alias CARoot -import -file ca-cert -storepass "MyClientPassword123" -keypass "MyClientPassword123" -noprompt
- keytool -keystore kafka.client.keystore.jks -import -file client-cert-signed -storepass "MyClientPassword123" -keypass "MyClientPassword123" -noprompt
+ keytool -keystore kafka.client.keystore.jks -import -file client-signed-cert -storepass "MyClientPassword123" -keypass "MyClientPassword123" -noprompt
```

1. Create a file `client-ssl-auth.properties` on the client machine (hn1). It should have the following lines:
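A minimal sketch of that file follows; the keystore and truststore paths are assumptions about where you created them, and the passwords match the ones used in the keytool commands above:

```
security.protocol=SSL
ssl.truststore.location=/home/sshuser/ssl/kafka.client.truststore.jks
ssl.truststore.password=MyClientPassword123
ssl.keystore.location=/home/sshuser/ssl/kafka.client.keystore.jks
ssl.keystore.password=MyClientPassword123
ssl.key.password=MyClientPassword123
```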
hdinsight Share Hive Metastore With Synapse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/share-hive-metastore-with-synapse.md
Azure Synapse Analytics allows Apache Spark pools in the same workspace to share
The feature works with both Spark 2.4 and Spark 3.0. The following table shows the supported Hive metastore service (HMS) versions for each Spark version.
-|Spark Version|HMS 1.2.X|HMS 2.1.X|HMS 3.1.X|
-|--|--|--|--|
-|2.4|Yes|Yes|No|
-|3|Yes|Yes|Yes|
+|Spark Version|HMS 1.2.X|HMS 2.1.X|HMS 2.3.X|HMS 3.1.X|
+|--|--|--|--|--|
+|2.4|Yes|Yes|Yes|No|
+|3|Yes|Yes|Yes|Yes|
> [!NOTE] > You can use the existing external Hive metastore from HDInsight clusters, both 3.6 and 4.0 clusters. See [use external metadata stores in Azure HDInsight](./hdinsight-use-external-metadata-stores.md).
Here are the configurations and descriptions:
|Spark config|Description| |--|--|
-|`spark.sql.hive.metastore.version`|Supported versions: <ul><li>`1.2`</li><li>`2.1`</li><li>`3.1`</li></ul> Make sure you use the first 2 parts without the 3rd part|
-|`spark.sql.hive.metastore.jars`|<ul><li>Version 1.2: `/opt/hive-metastore/lib-1.2/*:/usr/hdp/current/hadoop-client/lib/*` </li><li>Version 2.1: `/opt/hive-metastore/lib-2.1/*:/usr/hdp/current/hadoop-client/lib/*` </li><li>Version 3.1: `/opt/hive-metastore/lib-3.1/*:/usr/hdp/current/hadoop-client/lib/*`</li></ul>|
+|`spark.sql.hive.metastore.version`|Supported versions: <ul><li>`1.2`</li><li>`2.1`</li><li>`2.3`</li><li>`3.1`</li></ul> Make sure you use the first 2 parts without the 3rd part|
+|`spark.sql.hive.metastore.jars`|<ul><li>Version 1.2: `/opt/hive-metastore/lib-1.2/*:/usr/hdp/current/hadoop-client/lib/*` </li><li>Version 2.1: `/opt/hive-metastore/lib-2.1/*:/usr/hdp/current/hadoop-client/lib/*` </li><li>Version 2.3: `/opt/hive-metastore/lib-2.3/*:/usr/hdp/current/hadoop-client/lib/*` </li><li>Version 3.1: `/opt/hive-metastore/lib-3.1/*:/usr/hdp/current/hadoop-client/lib/*`</li></ul>|
|`spark.hadoop.hive.synapse.externalmetastore.linkedservice.name`|Name of your linked service created to the Azure SQL Database.|

### Configure Spark pool
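For example, a Spark pool configuration sharing an HMS 2.1 metastore might contain lines like the following sketch, built from the table above; the linked service name `HiveCatalogLinkedService` is a placeholder:

```
spark.sql.hive.metastore.version 2.1
spark.sql.hive.metastore.jars /opt/hive-metastore/lib-2.1/*:/usr/hdp/current/hadoop-client/lib/*
spark.hadoop.hive.synapse.externalmetastore.linkedservice.name HiveCatalogLinkedService
```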
If you need to migrate your HMS version, we recommend using [hive schema tool](h
If you want to share the Hive catalog with a spark cluster in HDInsight 4.0, please ensure your property `spark.hadoop.metastore.catalog.default` in Synapse spark aligns with the value in HDInsight spark. The default value is `Spark`. ### When sharing the Hive metastore with HDInsight 4.0 Hive clusters, I can list the tables successfully, but only get empty result when I query the table
-As mentioned in the limitations, Synapse Spark pool only supports external hive tables and non-transitional/ACID managed tables, it doesnΓÇÖt support Hive ACID/transactional tables currently. By default in HDInsight 4.0 Hive clusters, all managed tables are created as ACID/transactional tables by default, thatΓÇÖs why you get empty results when querying those tables.
+As mentioned in the limitations, Synapse Spark pool only supports external Hive tables and non-transactional managed tables; it doesn't support Hive ACID/transactional tables currently. In HDInsight 4.0 Hive clusters, all managed tables are created as ACID/transactional tables by default; that's why you get empty results when querying those tables.
iot-hub-device-update Device Update Simulator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-simulator.md
Agent running. [main]
## Import update
-1. Download the [sample import manifest](https://github.com/Azure/iot-hub-device-update/releases/download/0.7.0-rc1/TutorialImportManifest.json) and [sample image update](https://github.com/Azure/iot-hub-device-update/releases/download/0.7.0-rc1/adu-update-image-raspberrypi3-0.6.5073.1.swu).
+1. Download the [sample import manifest](https://github.com/Azure/iot-hub-device-update/releases/download/0.7.0-rc1/TutorialImportManifest.json) and [sample image update](https://github.com/Azure/iot-hub-device-update/releases/download/0.7.0-rc1/adu-update-image-raspberrypi3-0.6.5073.1.swu). _Note_: These are reused update files from the Raspberry Pi tutorial, because the update in this tutorial is simulated and therefore the specific file content doesn't matter.
2. Log in to the [Azure portal](https://portal.azure.com/) and navigate to your IoT Hub with Device Update. Then, select the Device Updates option under Automatic Device Management from the left-hand navigation bar. 3. Select the Updates tab.
lighthouse Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/concepts/enterprise.md
Title: Azure Lighthouse in enterprise scenarios description: The capabilities of Azure Lighthouse can be used to simplify cross-tenant management within an enterprise which uses multiple Azure AD tenants. Previously updated : 05/11/2021 Last updated : 09/08/2021
For most organizations, management is easier with a single Azure AD tenant. Havi
Some organizations may need to use multiple Azure AD tenants. This might be a temporary situation, as when acquisitions have taken place and a long-term tenant consolidation strategy hasn't been defined yet. Other times, organizations may need to maintain multiple tenants on an ongoing basis due to wholly independent subsidiaries, geographical or legal requirements, or other considerations.
-In cases where a multi-tenant architecture is required, Azure Lighthouse can help centralize and streamline management operations. By using [Azure delegated resource management](architecture.md), users in one managing tenant can perform [cross-tenant management functions](cross-tenant-management-experience.md) in a centralized, scalable manner.
+In cases where a multi-tenant architecture is required, Azure Lighthouse can help centralize and streamline management operations. By using Azure Lighthouse, users in one managing tenant can perform [cross-tenant management functions](cross-tenant-management-experience.md) in a centralized, scalable manner.
## Tenant management architecture
lighthouse Isv Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/concepts/isv-scenarios.md
Title: Azure Lighthouse in ISV scenarios description: The capabilities of Azure Lighthouse can be used by ISVs for more flexibility with customer offerings. Previously updated : 05/11/2021 Last updated : 09/08/2021 # Azure Lighthouse in ISV scenarios
-A common scenario for [Azure Lighthouse](../overview.md) is a service provider managing resources in its customersΓÇÖ Azure Active Directory (Azure AD) tenants. The capabilities of Azure Lighthouse can also be used by Independent Software Vendors (ISVs) using SaaS-based offerings with their customers. Azure Lighthouse can be especially useful for ISVs who are offering managed services or support that require access to the subscription scope.
+A common scenario for [Azure Lighthouse](../overview.md) is when a service provider manages resources in its customers' Azure Active Directory (Azure AD) tenants. The capabilities of Azure Lighthouse can also be used by Independent Software Vendors (ISVs) using SaaS-based offerings with their customers. Azure Lighthouse can be especially useful for ISVs who are offering managed services or support that require access to the subscription scope.
## Managed Service offers in Azure Marketplace
For more information, see [Azure Lighthouse and Azure managed applications](mana
## SaaS-based multi-tenant offerings
-An additional scenario is where ISV hosts the resources in a subscription in their own tenant, then uses Azure Lighthouse to let customers access these resources. The customer can then log in to their own tenant and access these resources as needed. ISVs maintain their IP in their own tenant, and can use their own support plans to raise tickets related to the solution hosted in their tenant, rather than using the customer's plan. Since the resources are in the ISV's tenant, all actions can be performed directly by the ISV, such as logging into VMs, installing apps, and performing maintenance tasks.
+An additional scenario is where the ISV hosts the resources in a subscription in their own tenant, then uses Azure Lighthouse to let customers access these resources. The customer can then log in to their own tenant and access these resources as needed. The ISV maintains their IP in their own tenant, and can use their own support plan to raise tickets related to the solution hosted in their tenant, rather than the customer's plan. Since the resources are in the ISV's tenant, all actions can be performed directly by the ISV, such as logging into VMs, installing apps, and performing maintenance tasks.
In this scenario, users in the customer's tenant are essentially granted access as a "managing tenant", even though the customer is not managing the ISV's resources. Because they are accessing the ISV's tenant directly, it's important to grant only the minimum permissions necessary, so that customers cannot inadvertently make changes to the solution or other ISV resources.
lighthouse Managed Applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/concepts/managed-applications.md
Title: Azure Lighthouse and Azure managed applications description: Understand how Azure Lighthouse and Azure managed applications can be used together. Previously updated : 05/11/2021 Last updated : 09/08/2021
lighthouse Managed Services Offers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/concepts/managed-services-offers.md
Title: Managed Service offers in Azure Marketplace description: Offer your Azure Lighthouse management services to customers through Managed Services offers in Azure Marketplace. Previously updated : 05/11/2021 Last updated : 09/08/2021
This article describes the **Managed Service** offer type in [Azure Marketplace]
Managed Service offers streamline the process of onboarding customers to Azure Lighthouse. When a customer purchases an offer in Azure Marketplace, they'll be able to specify which subscriptions and/or resource groups should be onboarded.
-After that, users in your organization will be able to work on those resources from within your managing tenant through [Azure delegated resource management](architecture.md), according to the access you defined when creating the offer. This is done through a manifest that specifies the Azure Active Directory (Azure AD) users, groups, and service principals that will have access to customer resources, along with [roles](tenants-users-roles.md) that define their level of access.
+For each offer, you define the access that users in your organization will have to work on resources in the customer tenant. This is done through a manifest that specifies the Azure Active Directory (Azure AD) users, groups, and service principals that will have access to customer resources, along with [roles](tenants-users-roles.md) that define their level of access.
> [!NOTE] > Managed Service offers may not be available in Azure Government and other national clouds.
lighthouse Recommended Security Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/concepts/recommended-security-practices.md
Title: Recommended security practices description: When using Azure Lighthouse, it's important to consider security and access control. Previously updated : 03/12/2021 Last updated : 09/08/2021
Once you've created these groups, you can assign users as needed. Only add the u
Keep in mind that when you [onboard customers through a public managed service offer](../how-to/publish-managed-services-offers.md), any group (or user or service principal) that you include will have the same permissions for every customer who purchases the plan. To assign different groups to work with each customer, you'll need to publish a separate private plan that is exclusive to each customer, or onboard customers individually by using Azure Resource Manager templates. For example, you could publish a public plan that has very limited access, then work with the customer directly to onboard their resources for additional access using a customized Azure Resource Manager template granting additional access as needed.
+> [!TIP]
+> You can also create *eligible authorizations* that let users in your managing tenant temporarily elevate their role. By using eligible authorizations, you can minimize the number of permanent assignments of users to privileged roles, helping to reduce security risks related to privileged access by users in your tenant. This feature is currently in public preview and has specific licensing requirements. For more information, see [Create eligible authorizations](../how-to/create-eligible-authorizations.md).
+ ## Next steps - Review the [security baseline information](../security-baseline.md) to understand how guidance from the Azure Security Benchmark applies to Azure Lighthouse.
lighthouse Create Eligible Authorizations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/how-to/create-eligible-authorizations.md
Title: Create eligible authorizations description: When onboarding customers to Azure Lighthouse, you can let users in your managing tenant elevate their role on a just-in-time basis. Previously updated : 08/26/2021 Last updated : 09/08/2021
You can't use eligible authorizations with service principals, since there's cur
Each eligible authorization needs to include an [Azure built-in role](../../role-based-access-control/built-in-roles.md) that the user will be eligible to use on a just-in-time basis.
-The role can be any Azure built-in role that is supported for Azure delegated resource management except for User Access Administrator.
+The role can be any Azure built-in role that is [supported for Azure delegated resource management](../concepts/tenants-users-roles.md#role-support-for-azure-lighthouse), except for User Access Administrator.
> [!IMPORTANT] > If you include multiple eligible authorizations that use the same role, each of the eligible authorizations must have the same access policy settings.
lighthouse Manage Hybrid Infrastructure Arc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/how-to/manage-hybrid-infrastructure-arc.md
Title: Manage hybrid infrastructure at scale with Azure Arc description: Azure Lighthouse helps you effectively manage customers' machines and Kubernetes clusters outside of Azure. Previously updated : 03/12/2021 Last updated : 09/07/2021 # Manage hybrid infrastructure at scale with Azure Arc
-As a service provider, you may have onboarded multiple customer tenants to [Azure Lighthouse](../overview.md). Azure Lighthouse allows service providers to perform operations at scale across several Azure Active Directory (Azure AD) tenants at once, making management tasks more efficient.
+[Azure Lighthouse](../overview.md) can help service providers use Azure Arc to manage customers' hybrid environments, with visibility across all managed Azure Active Directory (Azure AD) tenants.
[Azure Arc](../../azure-arc/overview.md) helps simplify complex and distributed environments across on-premises, edge and multicloud, enabling deployment of Azure services anywhere and extending Azure management to any infrastructure.
-With [Azure Arc-enabled servers](../../azure-arc/servers/overview.md), customers can manage any Windows and Linux machines hosted outside of Azure on their corporate network, in the same way they manage native Azure virtual machines. By linking a hybrid machine to Azure, it becomes connected and is treated as a resource in Azure. Service providers can then manage these non-Azure machines along with their customers' Azure resources.
+With [Azure Arc–enabled servers](../../azure-arc/servers/overview.md), customers can manage any Windows and Linux machines hosted outside of Azure on their corporate network, in the same way they manage native Azure virtual machines. By linking a hybrid machine to Azure, it becomes connected and is treated as a resource in Azure. Service providers can then manage these non-Azure machines along with their customers' Azure resources.
-[Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md) lets customers attach and configure Kubernetes clusters inside or outside of Azure. When a Kubernetes cluster is attached to Azure Arc, it will appear in the Azure portal, with an Azure Resource Manager ID and a managed identity. Clusters are attached to standard Azure subscriptions, are located in a resource group, and can receive tags just like any other Azure resource.
+[Azure Arc–enabled Kubernetes](../../azure-arc/kubernetes/overview.md) lets customers attach and configure Kubernetes clusters inside or outside of Azure. When a Kubernetes cluster is attached to Azure Arc, it will appear in the Azure portal, with an Azure Resource Manager ID and a managed identity. Clusters are attached to standard Azure subscriptions, are located in a resource group, and can receive tags just like any other Azure resource.
-This topic provides an overview of how service providers can use Azure Arc enabled servers and Azure Arc enabled Kubernetes in a scalable way to manage their customers' hybrid environment, with visibility across all managed customer tenants.
+This topic provides an overview of how to use Azure Arc–enabled servers and Azure Arc–enabled Kubernetes in a scalable way across the customer tenants you manage.
> [!TIP] > Though we refer to service providers and customers in this topic, this guidance also applies to [enterprises using Azure Lighthouse to manage multiple tenants](../concepts/enterprise.md).
-## Manage hybrid servers at scale with Azure Arc-enabled servers
+## Manage hybrid servers at scale with Azure Arc–enabled servers
-As a service provider, you can manage on-premises Windows Server or Linux machines outside Azure that your customers have connected to their subscription using the [Azure Connected Machine agent](../../azure-arc/servers/agent-overview.md).
+As a service provider, you can manage on-premises Windows Server or Linux machines outside Azure that your customers have connected to their subscription using the [Azure Connected Machine agent](../../azure-arc/servers/agent-overview.md). When viewing resources for a delegated subscription in the Azure portal, you'll see these connected machines labeled with **Azure Arc**.
-When viewing resources for a delegated subscription in the Azure portal, you'll see these connected machines labeled with **Azure Arc**. You can manage these connected machines using Azure constructs, such as Azure Policy and tagging, the same way that youΓÇÖd manage the customer's Azure resources. You can also work across customer tenants to manage all connected hybrid machines together.
+You can manage these connected machines using Azure constructs, such as Azure Policy and tagging, the same way that you'd manage the customer's Azure resources. You can also work across customer tenants to manage all connected hybrid machines together.
-For example, you can [ensure the same set of policies are applied across customers' hybrid machines](../../azure-arc/servers/learn/tutorial-assign-policy-portal.md). You can also use Azure Security Center to monitor compliance across all of your customers' hybrid environments, or [use Azure Monitor to collect data directly from your hybrid machines](../../azure-arc/servers/learn/tutorial-enable-vm-insights.md) into a Log Analytics workspace. [Virtual machine extensions](../../azure-arc/servers/manage-vm-extensions.md) can be deployed to non-Azure Windows and Linux VMs, simplifying management of customer's hybrid machines.
+For example, you can [ensure the same set of policies are applied across customers' hybrid machines](../../azure-arc/servers/learn/tutorial-assign-policy-portal.md). You can also use Azure Security Center to monitor compliance across all of your customers' hybrid environments, or [use Azure Monitor to collect data directly from hybrid machines](../../azure-arc/servers/learn/tutorial-enable-vm-insights.md) into a Log Analytics workspace. [Virtual machine extensions](../../azure-arc/servers/manage-vm-extensions.md) can be deployed to non-Azure Windows and Linux VMs, simplifying management of customer's hybrid machines.
-## Manage hybrid Kubernetes clusters at scale with Azure Arc enabled Kubernetes
+## Manage hybrid Kubernetes clusters at scale with Azure Arc–enabled Kubernetes
You can manage Kubernetes clusters that have been [connected to a customer's subscription with Azure Arc](../../azure-arc/kubernetes/quickstart-connect-cluster.md), just as if they were running in Azure.
-If your customer has created a [service principal account to onboard Kubernetes clusters to Azure Arc](../../azure-arc/kubernetes/create-onboarding-service-principal.md), you can access this service principal account to onboard and manage clusters. This can be done by users in the managing tenant who were granted the "Kubernetes Cluster - Azure Arc Onboarding" Azure built-in role when the subscription containing the service principal account was [onboarded to Azure Lighthouse](onboard-customer.md).
+If your customer has created a [service principal account to onboard Kubernetes clusters to Azure Arc](../../azure-arc/kubernetes/create-onboarding-service-principal.md), you can access this account so that you can onboard and manage clusters. To do so, a user in the managing tenant must have been granted the "Kubernetes Cluster - Azure Arc Onboarding" Azure built-in role when the subscription containing the service principal account was [onboarded to Azure Lighthouse](onboard-customer.md).
You can deploy [configurations](../../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md) and [Helm charts](../../azure-arc/kubernetes/use-gitops-with-helm.md) using GitOps for connected clusters.
You can also monitor connected clusters with Azure Monitor, and [use Azure Polic
## Next steps - Explore the jumpstarts and samples in the [Azure Arc GitHub repository](https://github.com/microsoft/azure_arc).-- Learn about [supported scenarios for Azure Arc enabled servers](../../azure-arc/servers/overview.md#supported-cloud-operations).
+- Learn about [supported scenarios for Azure Arc–enabled servers](../../azure-arc/servers/overview.md#supported-cloud-operations).
- Learn about [Kubernetes distributions supported by Azure Arc](../../azure-arc/kubernetes/overview.md#supported-kubernetes-distributions).
-- Learn how to [deploy a policy at scale](policy-at-scale.md).
-- Learn how to [use Azure Monitor Logs at scale](monitor-at-scale.md).
lighthouse Monitor Delegation Changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/how-to/monitor-delegation-changes.md
Title: Monitor delegation changes in your managing tenant description: Learn how to monitor all Azure Lighthouse delegation activity to your managing tenant. Previously updated : 05/11/2021 Last updated : 09/08/2021
After you elevate your access, your account will have the User Access Administra
Once you have elevated your access, you can assign the appropriate permissions to an account so that it can query tenant-level activity log data. This account will need to have the [Monitoring Reader](../../role-based-access-control/built-in-roles.md#monitoring-reader) Azure built-in role assigned at the root scope of your managing tenant. > [!IMPORTANT]
-> Granting a role assignment at root scope means that the same permissions will apply to every resource in the tenant. Because this is a broad level of access, you may wish to [assign this role to a service principal account and using that account to query data](#use-a-service-principal-account-to-query-the-activity-log). You can also assign the Monitoring Reader role at root scope to individual users or to user groups so that they can [view delegation information directly in the Azure portal](#view-delegation-changes-in-the-azure-portal). If you do so, be aware that this is a broad level of access which should be limited to the fewest number of users possible.
+> Granting a role assignment at root scope means that the same permissions will apply to every resource in the tenant. Because this is a broad level of access, we recommend [assigning this role to a service principal account and using that account to query data](#use-a-service-principal-account-to-query-the-activity-log).
+>
+> You can also assign the Monitoring Reader role at root scope to individual users or to user groups so that they can [view delegation information directly in the Azure portal](#view-delegation-changes-in-the-azure-portal). If you do so, be aware that this is a broad level of access which should be limited to the fewest number of users possible.
Use one of the following methods to make the root scope assignment.
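For instance, with Azure PowerShell the assignment is a single command. This is a minimal sketch; the object ID is a placeholder for your service principal (or user):

```azurepowershell-interactive
# Assign the Monitoring Reader role at the root scope ("/") of the managing tenant.
New-AzRoleAssignment -ObjectId "<object-id>" -RoleDefinitionName "Monitoring Reader" -Scope "/"
```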
lighthouse Remove Delegation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/how-to/remove-delegation.md
Title: Remove access to a delegation description: Learn how to remove access to resources that had been delegated to a service provider for Azure Lighthouse. Previously updated : 05/11/2021 Last updated : 09/08/2021
lighthouse Update Delegation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/how-to/update-delegation.md
Title: Update a delegation description: Learn how to update a delegation for a customer previously onboarded to Azure Lighthouse. Previously updated : 02/16/2021 Last updated : 09/08/2021
After the deployment has been completed, [confirm that it was successful](onboar
## Updating Managed Service offers
-If you onboarded your customer through a Managed Service offer published to Azure Marketplace, and you want to update authorizations, you can update the delegation by [publishing a new version of your offer](../../marketplace/update-existing-offer.md) with the [authorizations](../../marketplace/plan-managed-service-offer.md) that you want to use updated in the plan for that customer. The customer will then be able to update to the newest version in the Azure portal.
+If you onboarded your customer through a Managed Service offer published to Azure Marketplace, and you want to update authorizations, you can do so by [publishing a new version of your offer](../../marketplace/update-existing-offer.md) with the [authorizations](../../marketplace/create-managed-service-offer-plans.md#authorizations) that you want to use updated in the plan for that customer. The customer will then be able to [review the changes in the Azure portal and accept the new version](view-manage-service-providers.md#update-service-provider-offers).
-If you want to change the managing tenant, you will need to [create and publish a new Managed Service offer](../../marketplace/plan-managed-service-offer.md) for the customer to accept.
+If you want to change the managing tenant, you will need to [create and publish a new Managed Service offer](publish-managed-services-offers.md) for the customer to accept.
-> [!TIP]
-> As mentioned earlier, we recommend that you donΓÇÖt use multiple different offers between the same customer and managing tenant. If you do publish a new offer for the same customer which uses the same managing tenant, be sure that the earlier offer is removed before the customer accepts the newer offer.
+> [!IMPORTANT]
+> As mentioned earlier, we recommend that you avoid using multiple offers for the same customer and managing tenant. If you do publish a new offer for the same customer which uses the same managing tenant, be sure that the earlier offer is removed before the customer accepts the newer offer.
## Next steps - [View and manage customers](view-manage-customers.md) by going to **My customers** in the Azure portal. - Learn how to [remove access to a delegation](remove-delegation.md) that was previously onboarded.-- Learn more about [Azure Lighthouse architecture](../concepts/architecture.md).
+- Learn more about [Azure Lighthouse architecture](../concepts/architecture.md).
load-balancer Load Balancer Standard Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-standard-availability-zones.md
Now that you understand the zone related properties for Standard Load Balancer,
- A **zone redundant** Load Balancer can serve a zonal resource in any zone with one IP address. The IP can survive one or more zone failures as long as at least one zone remains healthy within the region. - A **zonal** frontend is a reduction of the service to a single zone and shares fate with the respective zone. If the zone your deployment is in goes down, your deployment will not survive this failure.
-It is recommended you use zone redundant Load Balancer for your production workloads.
+It is recommended that you use a zone-redundant Load Balancer for your production workloads.
+
+### Multiple frontends
+
+Using multiple frontends allows you to load balance traffic on more than one port and/or IP address. When designing your architecture, it is important to account for the way zone redundancy and multiple frontends can interact. If the goal is for every frontend to be resilient to failure, then all IP addresses assigned as frontends must be zone-redundant. If a set of frontends is intended to be associated with a single zone, then every IP address for that set must be associated with that specific zone. It is not required to have a load balancer for each zone; rather, each zonal frontend (or set of zonal frontends) could be associated with virtual machines in the backend pool that are part of that specific availability zone.
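For illustration, a zone-redundant frontend is typically backed by a Standard-SKU public IP that spans all three zones. A minimal sketch with the Azure CLI, assuming placeholder resource names:

```bash
# Create a Standard-SKU public IP that spans zones 1, 2, and 3 (zone-redundant),
# suitable for use as a zone-redundant Load Balancer frontend.
# Resource group and IP names are placeholders.
az network public-ip create \
  --resource-group myResourceGroup \
  --name myZoneRedundantIP \
  --sku Standard \
  --zone 1 2 3
```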
+
+### Transition between regional and zonal models
+
+When a region is augmented to have [availability zones](https://docs.microsoft.com/azure/availability-zones/az-overview), any existing frontend IPs remain non-zonal. To ensure your architecture can take advantage of the new zones, it is recommended that you create new frontend IPs and replicate the appropriate rules and configurations to utilize these new public IPs.
### Control vs data plane implications
Review [Azure cloud design patterns](/azure/architecture/patterns/) to improve t
## Limitations * Zones can't be changed, updated, or created for the resource after creation.-
-* Resources can't be updated from zonal to zone redundant or vice versa after creation.
+* Resources can't be updated from zonal to zone-redundant or vice versa after creation.
## Next steps - Learn more about [Availability Zones](../availability-zones/az-overview.md)
logic-apps Edit App Settings Host Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/edit-app-settings-host-settings.md
Your logic app also has *host settings*, which specify the runtime configuration
## App settings, parameters, and deployment
-In *multi-tenant* Azure Logic Apps, deployment depends on Azure Resource Manager templates (ARM templates), which combine and handle resource provisioning for both logic apps and infrastructure. This design poses a challenge when you have to maintain environment variables for logic apps across across various dev, test, and production environments. Everything in an ARM template is defined at deployment. If you need to change just a single variable, you have to redeploy everything.
+In *multi-tenant* Azure Logic Apps, deployment depends on Azure Resource Manager templates (ARM templates), which combine and handle resource provisioning for both logic apps and infrastructure. This design poses a challenge when you have to maintain environment variables for logic apps across various dev, test, and production environments. Everything in an ARM template is defined at deployment. If you need to change just a single variable, you have to redeploy everything.
In *single-tenant* Azure Logic Apps, deployment becomes easier because you can separate resource provisioning between apps and infrastructure. You can use *parameters* to abstract values that might change between environments. By defining parameters to use in your workflows, you can first focus on designing your workflows, and then insert your environment-specific variables later. You can call and reference your environment variables at runtime by using app settings and parameters. That way, you don't have to redeploy as often.
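For example, a parameter in a single-tenant logic app's parameters.json file can reference an app setting at runtime. This is a sketch; the parameter name `apiUrl` and the app setting name `API_URL` are hypothetical:

```json
{
  "apiUrl": {
    "type": "String",
    "value": "@appsetting('API_URL')"
  }
}
```

At runtime, the `@appsetting()` expression resolves against the deployed app's settings, so the same workflow definition can move between environments unchanged.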
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-access-azureml-behind-firewall.md
Previously updated : 08/12/2021 Last updated : 09/08/2021
These rule collections are described in more detail in [What are some Azure Fire
| AzureFrontDoor.FrontEnd</br>* Not needed in Azure China. | TCP | 443 |
| ContainerRegistry.region | TCP | 443 |
| MicrosoftContainerRegistry.region | TCP | 443 |
+ | Keyvault.region | TCP | 443 |
> [!TIP]
> * ContainerRegistry.region is only needed for custom Docker images. This includes small modifications (such as additional packages) to base images provided by Microsoft.
> * MicrosoftContainerRegistry.region is only needed if you plan on using the _default Docker images provided by Microsoft_, and _enabling user-managed dependencies_.
+ > * Keyvault.region is only needed if your workspace was created with the [hbi_workspace](/python/api/azureml-core/azureml.core.workspace%28class%29#create-name--auth-none--subscription-id-none--resource-group-none--location-none--create-resource-group-true--sku--basicfriendly-name-none--storage-account-none--key-vault-none--app-insights-none--container-registry-none--cmk-keyvault-none--resource-cmk-uri-none--hbi-workspace-false--default-cpu-compute-target-none--default-gpu-compute-target-none--exist-ok-false--show-output-true-) flag enabled.
> * For entries that contain `region`, replace with the Azure region that you're using. For example, `ContainerRegistry.westus`.

1. Add __Application rules__ for the following hosts:
The hosts in the following tables are owned by Microsoft, and provide services r
> [!IMPORTANT] > Your firewall must allow communication with \*.instances.azureml.ms over __TCP__ ports __18881, 443, and 8787__.
+> [!TIP]
+> The FQDN for Azure Key Vault is only needed if your workspace was created with the [hbi_workspace](/python/api/azureml-core/azureml.core.workspace%28class%29#create-name--auth-none--subscription-id-none--resource-group-none--location-none--create-resource-group-true--sku--basicfriendly-name-none--storage-account-none--key-vault-none--app-insights-none--container-registry-none--cmk-keyvault-none--resource-cmk-uri-none--hbi-workspace-false--default-cpu-compute-target-none--default-gpu-compute-target-none--exist-ok-false--show-output-true-) flag enabled.
+ **Docker images maintained by Azure Machine Learning**

| **Required for** | **Azure public** | **Azure Government** | **Azure China 21Vianet** |
| -- | -- | -- | -- |
-| Azure Container Registry | azurecr.io | azurecr.us | azurecr.cn |
| Microsoft Container Registry | mcr.microsoft.com | mcr.microsoft.com | mcr.microsoft.com |
| Azure Machine Learning pre-built images | viennaglobal.azurecr.io | viennaglobal.azurecr.io | viennaglobal.azurecr.io |
machine-learning How To Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-assign-roles.md
In this article, you learn how to manage access (authorization) to an Azure Mach
## Default roles
-An Azure Machine Learning workspace is an Azure resource. Like other Azure resources, when a new Azure Machine Learning workspace is created, it comes with three default roles. You can add users to the workspace and assign them to one of these built-in roles.
+Azure Machine Learning workspaces have four built-in roles that are available by default. When you add users to a workspace, you can assign them one of the built-in roles described below.
| Role | Access level |
| --- | --- |
+| **AzureML Data Scientist** | Can perform all actions within an Azure Machine Learning workspace, except for creating or deleting compute resources and modifying the workspace itself. |
| **Reader** | Read-only actions in the workspace. Readers can list and view assets, including [datastore](how-to-access-data.md) credentials, in a workspace. Readers can't create or update these assets. |
| **Contributor** | View, create, edit, or delete (where applicable) assets in a workspace. For example, contributors can create an experiment, create or attach a compute cluster, submit a run, and deploy a web service. |
| **Owner** | Full access to the workspace, including the ability to view, create, edit, or delete (where applicable) assets in a workspace. Additionally, you can change role assignments. |
-| **Custom Role** | Allows you to customize access to specific control or data plane operations within a workspace. For example, submitting a run, creating a compute, deploying a model or registering a dataset. |
> [!IMPORTANT] > Role access can be scoped to multiple levels in Azure. For example, someone with owner access to a workspace may not have owner access to the resource group that contains the workspace. For more information, see [How Azure RBAC works](../role-based-access-control/overview.md#how-azure-rbac-works).
-Currently there are no additional built-in roles that are specific to Azure Machine Learning. For more information on built-in roles, see [Azure built-in roles](../role-based-access-control/built-in-roles.md).
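For illustration, a built-in role can be assigned at workspace scope with the Azure CLI. This is a sketch; the user, subscription, resource group, and workspace names below are placeholders:

```bash
# Assign the built-in AzureML Data Scientist role at workspace scope.
# All names and IDs below are placeholders.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "AzureML Data Scientist" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace-name>"
```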
## Manage workspace access
marketplace Plan Consulting Service Offer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/plan-consulting-service-offer.md
The commercial marketplace supports five types of consulting service:
* **Proof of concept**: a limited-scope implementation to determine whether a solution meets the customer's requirements.
* **Workshop**: an interactive engagement conducted on the customer's premises. It can involve training, briefings, assessments, or demos built on the customer's data or environment.
-Your service should have a fixed and predetermined duration of up to 10 weeks. The service duration must be made explicit in the offer listing.
+Your service should have a predetermined duration of up to 12 months. The service duration must be explicitly defined in the offer listing.
## Customer leads
You can use HTML tags to format your description. You can enter up to 2,000 char
**Search keywords** (optional): Provide up to three search keywords that customers can use to find your offer in the online stores. You don't need to include the offer **Name** and **Description**.
-**Duration**: your consulting service offer must have a predetermined duration of up to 10 weeks.
+**Duration**: your consulting service offer must have a predetermined duration of up to 12 months.
**Contact information**: in Partner Center, you'll be asked to provide name, email address, and phone number of two people in your company (you can be one of the two contacts). We'll use this information to communicate with you about your offer. This information isn't shown to customers but may be provided to Cloud Solution Provider (CSP) partners.
Your consulting service offer can be made available in one or more countries or
## Next steps * [Create a consulting service offer in the commercial marketplace](./create-consulting-service-offer.md)
-* [Offer listing best practices](./gtm-offer-listing-best-practices.md)
+* [Offer listing best practices](./gtm-offer-listing-best-practices.md)
media-services Integrate Azure Functions Dotnet How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/integrate-azure-functions-dotnet-how-to.md
Title: Develop Azure Functions with Media Services v3
-description: This topic shows how to start developing Azure Functions with Media Services v3 using the Azure portal.
+description: This article shows how to start developing Azure Functions with Media Services v3 using Visual Studio Code.
--+ ms.devlang: dotnet Previously updated : 03/22/2021- Last updated : 06/09/2021+
[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)]
-This article shows you how to get started with creating Azure Functions that use Media Services. The Azure Function defined in this article monitors a storage account container named **input** for new MP4 files. Once a file is dropped into the storage container, the blob trigger executes the function. To review Azure Functions, see [Overview](../../azure-functions/functions-overview.md) and other topics in the **Azure Functions** section.
+This article shows you how to get started with creating Azure Functions that use Media Services. The Azure Function defined in this article encodes a video file with Media Encoder Standard. As soon as the encoding job has been created, the function returns the job name and output asset name. To review Azure Functions, see [Overview](../../azure-functions/functions-overview.md) and other topics in the **Azure Functions** section.
-If you want to explore and deploy existing Azure Functions that use Azure Media Services, check out [Media Services Azure Functions](https://github.com/Azure-Samples/media-services-v3-dotnet-core-functions-integration). This repository contains examples that use Media Services to show workflows related to ingesting content directly from blob storage, encoding, and writing content back to blob storage.
+If you want to explore and deploy existing Azure Functions that use Azure Media Services, check out [Media Services Azure Functions](https://github.com/Azure-Samples/media-services-v3-dotnet-core-functions-integration). This repository contains examples that use Media Services to show workflows related to ingesting content directly from blob storage, encoding, and live streaming operations.
## Prerequisites - Before you can create your first function, you need to have an active Azure account. If you don't already have an Azure account, [free accounts are available](https://azure.microsoft.com/free/). - If you are going to create Azure Functions that perform actions on your Azure Media Services (AMS) account or listen to events sent by Media Services, you should create an AMS account, as described [here](account-create-how-to.md).
+- Install [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
-## Create a function app
+This article explains how to create a C# .NET 5 function that communicates with Azure Media Services. To create a function in another language, see this [article](../../azure-functions/functions-develop-vs-code.md).
-1. Go to the [Azure portal](https://portal.azure.com) and sign-in with your Azure account.
-2. Create a function app as described [here](../../azure-functions/functions-create-function-app-portal.md).
+### Run local requirements
->[!NOTE]
-> A storage account that you specify should be in the same region as your app.
+These prerequisites are only required to run and debug your functions locally. They aren't required to create or publish projects to Azure Functions.
-## Configure function app settings
+- [.NET Core 3.1 and .NET 5 SDKs](https://dotnet.microsoft.com/download/dotnet).
-When developing Media Services functions, it is handy to add environment variables that will be used throughout your functions. To configure app settings, click the Configure App Settings link. For more information, see [How to configure Azure Function app settings](../../azure-functions/functions-how-to-use-azure-function-app-settings.md).
+- The [Azure Functions Core Tools](../../azure-functions/functions-run-local.md#install-the-azure-functions-core-tools) version 3.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
-## Create a function
+- The [C# extension](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp) for Visual Studio Code.
-Once your function app is deployed, you can find it among **App Services** Azure Functions.
+## Install the Azure Functions extension
-1. Select your function app and click **New Function**.
-1. Choose the **C#** language and **Data Processing** scenario.
-1. Choose **BlobTrigger** template. This function is triggered whenever a blob is uploaded into the **input** container. The **input** name is specified in the **Path**, in the next step.
-1. Once you select **BlobTrigger**, some more controls appear on the page.
-1. Click **Create**.
+You can use the Azure Functions extension to create and test functions and deploy them to Azure.
-## Files
+1. In Visual Studio Code, open **Extensions** and search for **Azure functions**, or select this link in Visual Studio Code: [`vscode:extension/ms-azuretools.vscode-azurefunctions`](vscode:extension/ms-azuretools.vscode-azurefunctions).
-Your Azure Function is associated with code files and other files that are described in this section. When you use the Azure portal to create a function, **function.json** and **run.csx** are created for you. You need to add or upload a **project.json** file. The rest of this section gives a brief explanation of each file and shows their definitions.
+1. Select **Install** to install the extension for Visual Studio Code:
-### function.json
+ ![Install the extension for Azure Functions](./Media/integrate-azure-functions-dotnet-how-to/vscode-install-extension.png)
-The function.json file defines the function bindings and other configuration settings. The runtime uses this file to determine the events to monitor and how to pass data into and return data from function execution. For more information, see [Azure Functions HTTP and webhook bindings](../../azure-functions/functions-reference.md#function-code).
+1. After installation, select the Azure icon on the Activity bar. You should see an Azure Functions area in the Side Bar.
->[!NOTE]
->Set the **disabled** property to **true** to prevent the function from being executed.
+ ![Azure Functions area in the Side Bar](./Media/integrate-azure-functions-dotnet-how-to/azure-functions-window-vscode.png)
-Replace the contents of the existing function.json file with the following code:
+## Create an Azure Functions project
-```json
-{
- "bindings": [
- {
- "name": "myBlob",
- "type": "blobTrigger",
- "direction": "in",
- "path": "input/{filename}.mp4",
- "connection": "ConnectionString"
- }
- ],
- "disabled": false
-}
-```
+The Functions extension lets you create a function app project, along with your first function. The following steps show how to create an HTTP-triggered function in a new Functions project. HTTP trigger is the simplest function trigger template to demonstrate.
-### project.json
+1. From **Azure: Functions**, select the **Create Function** icon:
-The project.json file contains dependencies. Here is an example of **project.json** file that includes the required .NET Azure Media Services packages from NuGet. Note that the version numbers change with latest updates to the packages, so you should confirm the most recent versions.
+ ![Create a function](./Media/integrate-azure-functions-dotnet-how-to/create-function.png)
-Add the following definition to project.json.
+1. Select the folder for your function app project, and then **Select C# for your function project** and **.NET 5 Isolated** for the runtime.
-```json
-{
- "frameworks": {
- "net46":{
- "dependencies": {
- "windowsazure.mediaservices": "4.0.0.4",
- "windowsazure.mediaservices.extensions": "4.0.0.4",
- "Microsoft.IdentityModel.Clients.ActiveDirectory": "3.13.1",
- "Microsoft.IdentityModel.Protocol.Extensions": "1.0.2.206221351"
- }
- }
- }
-}
+1. Select the **HTTP trigger** function template.
+
+ ![Choose the HTTP trigger template](./Media/integrate-azure-functions-dotnet-how-to/create-function-choose-template.png)
+
+1. Type **HttpTriggerEncode** for the function name and select Enter. Accept **Company.Function** for the namespace, and then select **Function** for the access rights. This authorization level requires you to provide a [function key](../../azure-functions/functions-bindings-http-webhook-trigger.md#authorization-keys) when you call the function endpoint.
+
+ ![Select Function authorization](./Media/integrate-azure-functions-dotnet-how-to/create-function-auth.png)
+ A function is created in your chosen language and in the template for an HTTP-triggered function.
+
+ ![HTTP-triggered function template in Visual Studio Code](./Media/integrate-azure-functions-dotnet-how-to/new-function-full.png)
+
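If you prefer the command line to the VS Code extension, roughly the same project can be scaffolded with Azure Functions Core Tools. This is a sketch; the project folder name is a placeholder:

```bash
# Scaffold a .NET 5 isolated-process Functions project and add an HTTP-triggered function.
# The project folder name is a placeholder.
func init MediaFunctionsProject --worker-runtime dotnet-isolated
cd MediaFunctionsProject
func new --template "HTTP trigger" --name HttpTriggerEncode
```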
+## Install Media Services and other extensions
+
+Run the `dotnet add package` command in the Terminal window to install the extension packages that you need in your project. The following commands install the Media Services package and other extensions needed by the sample.
+
+```bash
+dotnet add package Azure.Storage.Blobs
+dotnet add package Microsoft.Azure.Management.Media
+dotnet add package Microsoft.Identity.Client
```
-### run.csx
-
-This is the C# code for your function. The function defined below monitors a storage account container named **input** (that is what was specified in the path) for new MP4 files. Once a file is dropped into the storage container, the blob trigger executes the function.
-
-The example defined in this section demonstrates:
-
-1. How to ingest an asset into a Media Services account (by coping a blob into an AMS asset)
-2. How to submit an encoding job that uses Media Encoder Standard's "Adaptive Streaming" preset
-
-Replace the contents of the existing run.csx file with the following code: Once you are done defining your function click **Save and Run**.
-
-```csharp
-#r "Microsoft.WindowsAzure.Storage"
-#r "Newtonsoft.Json"
-#r "System.Web"
-
-using System;
-using System.Net;
-using System.Net.Http;
-using Newtonsoft.Json;
-using Microsoft.WindowsAzure.MediaServices.Client;
-using System.Collections.Generic;
-using System.Linq;
-using System.Text;
-using System.Threading;
-using System.Threading.Tasks;
-using System.IO;
-using System.Web;
-using Microsoft.Azure;
-using Microsoft.WindowsAzure.Storage;
-using Microsoft.WindowsAzure.Storage.Blob;
-using Microsoft.WindowsAzure.Storage.Auth;
-using Microsoft.Azure.WebJobs;
-using Microsoft.IdentityModel.Clients.ActiveDirectory;
-
-// Read values from the App.config file.
-
-static readonly string _AADTenantDomain = Environment.GetEnvironmentVariable("AMSAADTenantDomain");
-static readonly string _RESTAPIEndpoint = Environment.GetEnvironmentVariable("AMSRESTAPIEndpoint");
-
-static readonly string _mediaservicesClientId = Environment.GetEnvironmentVariable("AMSClientId");
-static readonly string _mediaservicesClientSecret = Environment.GetEnvironmentVariable("AMSClientSecret");
-
-static readonly string _connectionString = Environment.GetEnvironmentVariable("ConnectionString");
-
-private static CloudMediaContext _context = null;
-private static CloudStorageAccount _destinationStorageAccount = null;
-
-public static void Run(CloudBlockBlob myBlob, string fileName, TraceWriter log)
-{
- // NOTE that the variables {fileName} here come from the path setting in function.json
- // and are passed into the Run method signature above. We can use this to make decisions on what type of file
- // was dropped into the input container for the function.
-
- // No need to do any Retry strategy in this function, By default, the SDK calls a function up to 5 times for a
- // given blob. If the fifth try fails, the SDK adds a message to a queue named webjobs-blobtrigger-poison.
-
- log.Info($"C# Blob trigger function processed: {fileName}.mp4");
- log.Info($"Media Services REST endpoint : {_RESTAPIEndpoint}");
-
- try
- {
- AzureAdTokenCredentials tokenCredentials = new AzureAdTokenCredentials(_AADTenantDomain,
- new AzureAdClientSymmetricKey(_mediaservicesClientId, _mediaservicesClientSecret),
- AzureEnvironments.AzureCloudEnvironment);
-
- AzureAdTokenProvider tokenProvider = new AzureAdTokenProvider(tokenCredentials);
-
- _context = new CloudMediaContext(new Uri(_RESTAPIEndpoint), tokenProvider);
-
- IAsset newAsset = CreateAssetFromBlob(myBlob, fileName, log).GetAwaiter().GetResult();
-
- // Step 2: Create an Encoding Job
-
- // Declare a new encoding job with the Standard encoder
- IJob job = _context.Jobs.Create("Azure Function - MES Job");
-
- // Get a media processor reference, and pass to it the name of the
- // processor to use for the specific task.
- IMediaProcessor processor = GetLatestMediaProcessorByName("Media Encoder Standard");
-
- // Create a task with the encoding details, using a custom preset
- ITask task = job.Tasks.AddNew("Encode with Adaptive Streaming",
- processor,
- "Adaptive Streaming",
- TaskOptions.None);
-
- // Specify the input asset to be encoded.
- task.InputAssets.Add(newAsset);
-
- // Add an output asset to contain the results of the job.
- // This output is specified as AssetCreationOptions.None, which
- // means the output asset is not encrypted.
- task.OutputAssets.AddNew(fileName, AssetCreationOptions.None);
-
- job.Submit();
- log.Info("Job Submitted");
-
- }
- catch (Exception ex)
- {
- log.Error("ERROR: failed.");
- log.Info($"StackTrace : {ex.StackTrace}");
- throw ex;
- }
-}
+## Generated project files
-private static IMediaProcessor GetLatestMediaProcessorByName(string mediaProcessorName)
-{
- var processor = _context.MediaProcessors.Where(p => p.Name == mediaProcessorName).
- ToList().OrderBy(p => new Version(p.Version)).LastOrDefault();
+The project template creates a project in your chosen language and installs required dependencies. The new project has these files:
- if (processor == null)
- throw new ArgumentException(string.Format("Unknown media processor", mediaProcessorName));
+* **host.json**: Lets you configure the Functions host. These settings apply when you're running functions locally and when you're running them in Azure. For more information, see [host.json reference](./../../azure-functions/functions-host-json.md).
- return processor;
-}
+* **local.settings.json**: Maintains settings used when you're running functions locally. These settings are used only when you're running functions locally.
-public static async Task<IAsset> CreateAssetFromBlob(CloudBlockBlob blob, string assetName, TraceWriter log){
- IAsset newAsset = null;
-
- try{
- Task<IAsset> copyAssetTask = CreateAssetFromBlobAsync(blob, assetName, log);
- newAsset = await copyAssetTask;
- log.Info($"Asset Copied : {newAsset.Id}");
- }
- catch(Exception ex){
- log.Info("Copy Failed");
- log.Info($"ERROR : {ex.Message}");
- throw ex;
- }
-
- return newAsset;
-}
+ >[!IMPORTANT]
+ >Because the local.settings.json file can contain secrets, you need to exclude it from your project source control.
-/// <summary>
-/// Creates a new asset and copies blobs from the specifed storage account.
-/// </summary>
-/// <param name="blob">The specified blob.</param>
-/// <returns>The new asset.</returns>
-public static async Task<IAsset> CreateAssetFromBlobAsync(CloudBlockBlob blob, string assetName, TraceWriter log)
+* **HttpTriggerEncode.cs**: the class file that implements the function.
+
+### HttpTriggerEncode.cs
+
+This is the C# code for your function. Its role is to take a Media Services asset or a source URL and launch an encoding job with Media Services. It uses a Transform that is created if it does not exist. When the Transform is created, it uses the preset provided in the input body.
+
+>[!IMPORTANT]
+>Replace the full content of HttpTriggerEncode.cs file with [`HttpTriggerEncode.cs` from this repository](https://github.com/Azure-Samples/media-services-v3-dotnet-core-functions-integration/blob/main/Tutorial/HttpTriggerEncode.cs).
+
+Once you are done defining your function, select **Save and Run**.
+
+The source code for the **Run** method of the function is:
+
+[!code-csharp[Main](../../../media-services-v3-dotnet-core-functions-integration/Tutorial/HttpTriggerEncode.cs#Run)]
+
+### local.settings.json
+
+Update the file with the following content (and replace the values).
+
+```json
{
- //Get a reference to the storage account that is associated with the Media Services account.
- _destinationStorageAccount = CloudStorageAccount.Parse(_connectionString);
-
- // Create a new asset.
- var asset = _context.Assets.Create(blob.Name, AssetCreationOptions.None);
- log.Info($"Created new asset {asset.Name}");
-
- IAccessPolicy writePolicy = _context.AccessPolicies.Create("writePolicy",
- TimeSpan.FromHours(4), AccessPermissions.Write);
- ILocator destinationLocator = _context.Locators.CreateLocator(LocatorType.Sas, asset, writePolicy);
- CloudBlobClient destBlobStorage = _destinationStorageAccount.CreateCloudBlobClient();
-
- // Get the destination asset container reference
- string destinationContainerName = (new Uri(destinationLocator.Path)).Segments[1];
- CloudBlobContainer assetContainer = destBlobStorage.GetContainerReference(destinationContainerName);
-
- try{
- assetContainer.CreateIfNotExists();
- }
- catch (Exception ex)
- {
- log.Error ("ERROR:" + ex.Message);
- }
-
- log.Info("Created asset.");
-
- // Get hold of the destination blob
- CloudBlockBlob destinationBlob = assetContainer.GetBlockBlobReference(blob.Name);
-
- // Copy Blob
- try
- {
- using (var stream = await blob.OpenReadAsync())
- {
- await destinationBlob.UploadFromStreamAsync(stream);
- }
-
- log.Info("Copy Complete.");
-
- var assetFile = asset.AssetFiles.Create(blob.Name);
- assetFile.ContentFileSize = blob.Properties.Length;
- assetFile.IsPrimary = true;
- assetFile.Update();
- asset.Update();
- }
- catch (Exception ex)
- {
- log.Error(ex.Message);
- log.Info (ex.StackTrace);
- log.Info ("Copy Failed.");
- throw;
- }
-
- destinationLocator.Delete();
- writePolicy.Delete();
-
- return asset;
+ "IsEncrypted": false,
+ "Values": {
+ "AzureWebJobsStorage": "",
+ "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
+ "AadClientId": "00000000-0000-0000-0000-000000000000",
+ "AadEndpoint": "https://login.microsoftonline.com",
+ "AadSecret": "00000000-0000-0000-0000-000000000000",
+ "AadTenantId": "00000000-0000-0000-0000-000000000000",
+ "AccountName": "amsaccount",
+ "ArmAadAudience": "https://management.core.windows.net/",
+ "ArmEndpoint": "https://management.azure.com/",
+ "ResourceGroup": "amsResourceGroup",
+ "SubscriptionId": "00000000-0000-0000-0000-000000000000"
+ }
}
```

## Test your function
-To test your function, you need to upload an MP4 file into the **input** container of the storage account that you specified in the connection string.
+When you run the function locally in VS Code, the function should be exposed as:
+
+```url
+http://localhost:7071/api/HttpTriggerEncode
+```
+
+To test it, you can use Postman to do a POST on this URL using a JSON input body.
+
+JSON input body example:
+
+```json
+{
+  "inputUrl": "https://nimbuscdn-nimbuspm.streaming.mediaservices.windows.net/2b533311-b215-4409-80af-529c3e853622/Ignite-short.mp4",
+  "transformName": "TransformAS",
+  "builtInPreset": "AdaptiveStreaming"
+}
+```
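If you prefer the command line to Postman, the same request can be sent with curl. This is a sketch; the URL assumes the default local Core Tools host and port:

```bash
# POST the sample input body to the locally running function.
curl -X POST "http://localhost:7071/api/HttpTriggerEncode" \
  -H "Content-Type: application/json" \
  -d '{
        "inputUrl": "https://nimbuscdn-nimbuspm.streaming.mediaservices.windows.net/2b533311-b215-4409-80af-529c3e853622/Ignite-short.mp4",
        "transformName": "TransformAS",
        "builtInPreset": "AdaptiveStreaming"
      }'
```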
-1. Select the storage account you specified.
-2. Click **Blobs**.
-3. Click **+ Container**. Name the container **input**.
-4. Press **Upload** and browse to a .mp4 file that you want to upload.
+The function should return 200 OK with an output body containing the job and output asset names.
->[!NOTE]
-> When you're using a blob trigger on a Consumption plan, there can be up to a 10-minute delay in processing new blobs after a function app has gone idle. After the function app is running, blobs are processed immediately. For more information, see [Blob storage triggers and bindings](../../azure-functions/functions-bindings-storage-blob.md).
+![Test the function with Postman](./Media/integrate-azure-functions-dotnet-how-to/postman.png)
## Next steps
-At this point, you are ready to start developing a Media Services application.
+At this point, you are ready to start developing functions that call Media Services API.
-For more details and complete samples/solutions of using Azure Functions and Logic Apps with Azure Media Services to create custom content creation workflows, see the [Media Services Azure Functions](https://github.com/Azure-Samples/media-services-v3-dotnet-core-functions-integration)
+For more information and a complete sample of using Azure Functions with Azure Media Services v3, see the [Media Services v3 Azure Functions sample](https://github.com/Azure-Samples/media-services-v3-dotnet-core-functions-integration/tree/main/Functions).
media-services Samples Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/samples-overview.md
You'll find description and links to the samples you may be looking for in each
| [VideoEncoding/Encoding_HEVC](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/VideoEncoding/Encoding_HEVC)|The sample shows how to submit a job using a custom HEVC encoding preset and an HTTP URL input, publish output asset for streaming, and download results for verification.| | [VideoEncoding/Encoding_StitchTwoAssets](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/VideoEncoding/Encoding_StitchTwoAssets)|The sample shows how to submit a job using the JobInputSequence to stitch together 2 or more assets that may be clipped by start or end time. The resulting encoded file is a single video with all assets stitched together. The sample will also publish output asset for streaming and download results for verification.| | [VideoEncoding/Encoding_SpriteThumbnail](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/VideoEncoding/Encoding_SpriteThumbnail)|The sample shows how to submit a job using a custom preset with a thumbnail sprite and an HTTP URL input, publish output asset for streaming, and download results for verification.|
-| [Live/LiveEventWithDVR](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/Live/LiveEventWithDVR)|This sample first shows how to create a LiveEvent with a full archive up to 25 hours and an filter on the asset with 5 minutes DVR window, then it shows how to use the filter to create a locator for streaming.|
+| [Live/LiveEventWithDVR](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/Live/LiveEventWithDVR)|This sample first shows how to create a LiveEvent with a full archive up to 25 hours and a filter on the asset with 5 minutes DVR window, then it shows how to use the filter to create a locator for streaming.|
| [VideoAnalytics/VideoAnalyzer](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/VideoAnalytics/VideoAnalyzer)|This sample illustrates how to create a video analyzer transform, upload a video file to an input asset, submit a job with the transform and download the results for verification.|
-| [AudioAnalytics/AudioAnalyzer](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/AudioAnalytics/AudioAnalyzer)|This sample illustrates how to create a audio analyzer transform, upload a media file to an input asset, submit a job with the transform and download the results for verification.|
+| [AudioAnalytics/AudioAnalyzer](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/AudioAnalytics/AudioAnalyzer)|This sample illustrates how to create an audio analyzer transform, upload a media file to an input asset, submit a job with the transform and download the results for verification.|
| [ContentProtection/BasicAESClearKey](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/ContentProtection/BasicAESClearKey)|This sample demonstrates how to create a transform with built-in AdaptiveStreaming preset, submit a job, create a ContentKeyPolicy using a secret key, associate the ContentKeyPolicy with StreamingLocator, get a token and print a url for playback in Azure Media Player. When a stream is requested by a player, Media Services uses the specified key to dynamically encrypt your content with AES-128 and Azure Media Player uses the token to decrypt.| | [ContentProtection/BasicWidevine](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/ContentProtection/BasicWidevine)|This sample demonstrates how to create a transform with built-in AdaptiveStreaming preset, submit a job, create a ContentKeyPolicy with Widevine configuration using a secret key, associate the ContentKeyPolicy with StreamingLocator, get a token and print a url for playback in a Widevine Player. When a user requests Widevine-protected content, the player application requests a license from the Media Services license service. If the player application is authorized, the Media Services license service issues a license to the player. A Widevine license contains the decryption key that can be used by the client player to decrypt and stream the content.|
-| [ContentProtection/BasicPlayReady](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/ContentProtection/BasicPlayReady)|This sample demonstrates how to create a transform with built-in AdaptiveStreaming preset, submit a job, create a ContentKeyPolicy with PlayReady configuration using a secret key, associate the ContentKeyPolicy with StreamingLocator, get a token and print a url for playback in a Azure Media Player. When a user requests PlayReady-protected content, the player application requests a license from the Media Services license service. If the player application is authorized, the Media Services license service issues a license to the player. A PlayReady license contains the decryption key that can be used by the client player to decrypt and stream the content.|
+| [ContentProtection/BasicPlayReady](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/ContentProtection/BasicPlayReady)|This sample demonstrates how to create a transform with built-in AdaptiveStreaming preset, submit a job, create a ContentKeyPolicy with PlayReady configuration using a secret key, associate the ContentKeyPolicy with StreamingLocator, get a token and print a url for playback in an Azure Media Player. When a user requests PlayReady-protected content, the player application requests a license from the Media Services license service. If the player application is authorized, the Media Services license service issues a license to the player. A PlayReady license contains the decryption key that can be used by the client player to decrypt and stream the content.|
| [ContentProtection/OfflinePlayReadyAndWidevine](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/ContentProtection/OfflinePlayReadyAndWidevine)|This sample demonstrates how to dynamically encrypt your content with PlayReady and Widevine DRM and play the content without requesting a license from license service. It shows how to create a transform with built-in AdaptiveStreaming preset, submit a job, create a ContentKeyPolicy with open restriction and PlayReady/Widevine persistent configuration, associate the ContentKeyPolicy with a StreamingLocator and print a url for playback.| | [Streaming/AssetFilters](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/Streaming/AssetFilters)|This sample demonstrates how to create a transform with built-in AdaptiveStreaming preset, submit a job, create an asset-filter and an account-filter, associate the filters to streaming locators and print urls for playback.| | [Streaming/StreamHLSAndDASH](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/Streaming/StreamHLSAndDASH)|This sample demonstrates how to create a transform with built-in AdaptiveStreaming preset, submit a job, publish output asset for HLS and DASH streaming.| | [HighAvailabilityEncodingStreaming](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/HighAvailabilityEncodingStreaming/) | This sample provides guidance and best practices for a production system using on-demand encoding or analytics. Readers should start with the companion article [High Availability with Media Services and VOD](architecture-high-availability-encoding-concept.md). There is a separate solution file provided for the [HighAvailabilityEncodingStreaming](https://github.com/Azure-Samples/media-services-v3-dotnet/blob/main/HighAvailabilityEncodingStreaming/README.md) sample. |
+| [Azure Functions for Media Services](https://github.com/xpouyat/media-services-v3-dotnet-core-functions-integration/tree/main/Functions)|This project contains examples of Azure Functions that connect to Azure Media Services v3 for video processing. You can use Visual Studio 2019 or Visual Studio Code to develop and run the functions. An Azure Resource Manager (ARM) template and a GitHub Actions workflow are provided for the deployment of the Function resources and to enable continuous deployment.|
## [Node.JS](#tab/node/)
You'll find description and links to the samples you may be looking for in each
## REST Postman collection
-The [REST Postman](https://github.com/Azure-Samples/media-services-v3-rest-postman) samples includes a Postman environment and collection for you to import into the Postman client. The Postman collection samples are recommended for getting familiar with the API structure and how it works with Azure Resource Management (ARM), as well as the structure of calls from the client SDKs.
+The [REST Postman](https://github.com/Azure-Samples/media-services-v3-rest-postman) samples include a Postman environment and collection for you to import into the Postman client. The Postman collection samples are recommended for getting familiar with the API structure and how it works with Azure Resource Management (ARM), as well as the structure of calls from the client SDKs.
[!INCLUDE [warning-rest-api-retry-policy.md](./includes/warning-rest-api-retry-policy.md)]
media-services Media Services Sspk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/previous/media-services-sspk.md
The SSPK Distribution portal is accessible to registered Interim licensees.
Interim and Final SSPK licensees can submit technical questions to [smoothpk@microsoft.com](mailto:smoothpk@microsoft.com). ## Microsoft Smooth Streaming Client Interim Product Agreement Licensees
+* Beijing ESWIN Computing Technology Co., Ltd.
* Enseo, Inc. * Fluendo S.A. * Guangzhou Dimai Digital Limited Co.
Interim and Final SSPK licensees can submit technical questions to [smoothpk@mic
* SKARDIN INDUSTRIAL CORP * Sky CP Ltd * SMARDTV GLOBAL SAS
-* Sony Corporation
* SoftAtHome
-* Technicolor Delivery Technologies, SAS
+* Sony Corporation
* Top Victory Investments, Ltd. * Vizio, Inc. * Walton Hi-Tech Industries Ltd.
migrate Concepts Dependency Visualization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/concepts-dependency-visualization.md
There are two options for deploying dependency analysis
**Option** | **Details** | **Public cloud** | **Azure Government**
--- | --- | --- | ---
-**Agentless** | Polls data from servers on VMware using vSphere APIs.<br/><br/> You don't need to install agents on servers.<br/><br/> This option is currently in preview, only for servers on VMware. | Supported. | Supported.
+**Agentless** | Polls data from servers on VMware using vSphere APIs.<br/><br/> You don't need to install agents on servers.<br/><br/> This option is currently only for servers on VMware. | Supported. | Supported.
**Agent-based analysis** | Uses the [Service Map solution](../azure-monitor/vm/service-map.md) in Azure Monitor, to enable dependency visualization and analysis.<br/><br/> You need to install agents on each on-premises server that you want to analyze. | Supported | Not supported.

## Agentless analysis
mysql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-read-replicas.md
However, there are limitations to consider:
## Create a replica

> [!IMPORTANT]
-> The read replica feature is only available for Azure Database for MySQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the source server is in one of these pricing tiers.
+> * The read replica feature is only available for Azure Database for MySQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the source server is in one of these pricing tiers.
+> * If your source server has no existing replica servers, the source server might need a restart to prepare itself for replication, depending on the storage used (v1/v2). Plan for this restart and perform the operation during off-peak hours. See [Source Server restart](./concepts-read-replicas.md#source-server-restart) for more details.
-If a source server has no existing replica servers, the source will first restart to prepare itself for replication.
When you start the create replica workflow, a blank Azure Database for MySQL server is created. The new server is filled with the data that was on the source server. The creation time depends on the amount of data on the source and the time since the last weekly full backup. The time can range from a few minutes to several hours. The replica server is always created in the same resource group and same subscription as the source server. If you want to create a replica server to a different resource group or different subscription, you can [move the replica server](../azure-resource-manager/management/move-resource-group-and-subscription.md) after creation.
Read replicas are currently only available in the General Purpose and Memory Opt
### Source server restart
-When you create a replica for a source that has no existing replicas, the source will first restart to prepare itself for replication. Take this into consideration and perform these operations during an off-peak period.
+On a server that uses General Purpose storage v1, the `log_bin` parameter is OFF by default. The value is turned ON when you create the first read replica. If a source server has no existing read replicas, the source server will first restart to prepare itself for replication. Plan for this restart and perform the operation during off-peak hours.
+
+On a source server that uses General Purpose storage v2, the `log_bin` parameter is ON by default, and a restart is not required when you add a read replica.
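To check which mode your source server is in before adding a replica, you can query the `log_bin` server variable with the mysql client. This is a sketch; the host and admin user names are placeholders:

```bash
# ON means binary logging is already enabled and adding a replica won't
# trigger a restart; OFF means the first replica will cause a restart.
# Host and user names are placeholders.
mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p \
  -e "SHOW VARIABLES LIKE 'log_bin';"
```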
### New replicas
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/whats-new.md
This release of Azure Database for MySQL - Flexible Server includes the followin
- Unable to create Same-Zone High availability server in the following regions: Central India, East Asia, Korea Central, South Africa North, Switzerland North.
- In a rare scenario and after HA failover, the primary server will be in read_only mode. Resolve the issue by updating the "read_only" value from the server parameters blade to OFF.
- After successfully scaling Compute in the Compute+Storage blade, IOPS is reset to the SKU default. Customers can work around the issue by re-scaling IOPS in the Compute+Storage blade to the desired value (previously set) after the compute deployment and consequent IOPS reset.
- - When you try to enable or deploy Same zone HA, the deployment fails in the following regions
- - Central India
- - East Asia
- - Korea Central
- - South Africa North
- - Switzerland North
- ## July 2021
mysql Howto Read Replicas Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/howto-read-replicas-cli.md
You can create and manage read replicas using the Azure CLI.
### Create a read replica > [!IMPORTANT]
-> When you create a replica for a source that has no existing replicas, the source will first restart to prepare itself for replication. Take this into consideration and perform these operations during an off-peak period.
+> If your source server has no existing replica servers, the source server might need a restart to prepare itself for replication, depending on the storage used (v1/v2). Plan for this restart and perform the operation during off-peak hours. See [Source Server restart](./concepts-read-replicas.md#source-server-restart) for more details.
> >If GTID is enabled on a primary server (`gtid_mode` = ON), newly created replicas will also have GTID enabled and use GTID based replication. To learn more refer to [Global transaction identifier (GTID)](concepts-read-replicas.md#global-transaction-identifier-gtid)
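For reference, a minimal sketch of the create call, with placeholder server and resource group names:

```bash
# Create a read replica of an existing Azure Database for MySQL server.
# Server and resource group names are placeholders.
az mysql server replica create \
  --name mydemoreplica \
  --source-server mydemoserver \
  --resource-group myresourcegroup
```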
mysql Howto Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/howto-read-replicas-portal.md
In this article, you will learn how to create and manage read replicas in the Az
## Create a read replica > [!IMPORTANT]
-> When you create a replica for a source that has no existing replicas, the source will first restart to prepare itself for replication. Take this into consideration and perform these operations during an off-peak period.
+> If your source server has no existing replica servers, the source server might need a restart to prepare itself for replication, depending on the storage used (v1/v2). Plan for this restart and perform the operation during off-peak hours. See [Source Server restart](./concepts-read-replicas.md#source-server-restart) for more details.
> >If GTID is enabled on a primary server (`gtid_mode` = ON), newly created replicas will also have GTID enabled and use GTID based replication. To learn more refer to [Global transaction identifier (GTID)](concepts-read-replicas.md#global-transaction-identifier-gtid)
mysql Howto Read Replicas Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/howto-read-replicas-powershell.md
If you choose to use PowerShell locally, connect to your Azure account using the
### Create a read replica > [!IMPORTANT]
-> When you create a replica for a source that has no existing replicas, the source will first restart to prepare itself for replication. Take this into consideration and perform these operations during an off-peak period.
+> If your source server has no existing replica servers, the source server might need a restart to prepare itself for replication, depending on the storage used (v1/v2). Plan for this restart and perform the operation during off-peak hours. See [Source Server restart](./concepts-read-replicas.md#source-server-restart) for more details.
A read replica server can be created using the following command:
notification-hubs Move Registrations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/notification-hubs/move-registrations.md
+
+ Title: Move Azure Notification Hubs resources from one region to another
+description: Learn how to move Azure Notification Hubs resources to a different Azure region.
++++ Last updated : 09/07/2021+++
+# Move resources between Azure regions
+
+This article describes how to move Azure Notification Hubs resources to a different Azure region. At a high level, the process is:
+
+1. Create a destination namespace with a different name.
+1. Export the registrations from the previous namespace.
+1. Import the registrations into the new namespace in the desired region.
+
+## Overview
+
+In some scenarios, you might need to move service resources between Azure regions for various business reasons: to move to a newly available region, to deploy features or services available only in a specific region, to meet internal policy or compliance requirements, or to solve capacity issues.
+
+Azure Notification Hubs namespace names are unique, and registrations are per hub, so to perform such a move, you must create a new hub in the desired region, then move the registrations along with all other relevant data to the newly created namespace.
+
+## Create a Notification Hubs namespace with a different name
+
+Follow these steps to create a new Notification Hubs namespace. Fill in all the required information in the **Basics** tab, including the desired destination region for the namespace.
++
+Once the new namespace has been created, ensure that you set the PNS credentials in the new namespace and create equivalent policies in the new namespace.
+
+## Export/import registrations
+
+Once the new namespace has been created in the region to which you want to move the resource, export all the registrations in bulk and import them into the new namespace. To do so, see [Export and import Azure Notification Hubs registrations in bulk](export-modify-registrations-bulk.md).
+
+## Delete the previous namespace (optional)
+
+After completing the registration export from your old namespace to the new namespace, if desired you can delete the old namespace.
+
+1. Go to the existing namespace in the previous region.
+
+2. Click **Delete**, and then re-enter the namespace name in the **Delete namespace** pane.
+
+3. Click **Delete** at the bottom of the **Delete namespace** pane.
+
+## Next steps
+
+The following articles are examples of other services that have a region-move article in place.
+
+- [Move NSGs to another region](/virtual-network/move-across-regions-nsg-portal)
+- [Move public IP addresses to another region](/virtual-network/move-across-regions-publicip-portal)
+- [Move a storage account to another region](/storage/common/storage-account-move?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&tabs=azure-portal)
+- [Move resources across regions (from resource group)](/resource-mover/move-region-within-resource-group)
purview Catalog Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-permissions.md
Previously updated : 08/18/2020 Last updated : 08/18/2021 # Access control in Azure Purview
purview Concept Account Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-account-upgrade.md
To edit role assignments in a collection, select the **Role assignments** tab to
For more information about collections in upgraded accounts, please read our [guide on creating and managing collections](how-to-create-and-manage-collections.md).
-### What happens to your collections during upgrade
+### What happens to your collections during upgrade?
1. A root collection is created. The root collection is the top collection in your collection list and will have the same name as your Purview resource. In our example below, it's called Contoso Purview.
For more information about collections in upgraded accounts, please read our [gu
1. Your previously existing collections will be connected to the root collection. You'll see them listed below the root collection, and can interact with them there.
-### What happens to your sources during upgrade
+### What happens to your sources during upgrade?
1. Any sources that weren't previously associated with a collection are automatically added to the root collection.
For one-time scans, you'll need to rerun these manually to populate the assets i
:::image type="content" source="./media/concept-account-upgrade/run-scan-now.png" alt-text="Screenshot of Purview studio window, opened to a scan, with Run scan now highlighted." border="true":::
+### What happens when your upgraded account doesn't have a collection admin?
+
+Your upgraded Purview account will have default collection admin(s) if the process can identify at least one user or group in the following order:
+
+1. Owner (explicitly assigned)
+
+1. User Access Administrator (explicitly assigned)
+
+1. Data Source Administrator and Data Curator
+
+If your account did not have any user or group matched with the above criteria, the upgraded Purview account will have no collection admin.
+
+You can still manually add a collection admin by using the management API. The user who calls this API must have Owner or User Access Administrator permission on the Purview account to execute a write action. You will need to know the `objectId` of the new collection admin to submit via the API.
+
+#### Request
+
+ ```
+ POST https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Purview/accounts/<accountName>/addRootCollectionAdmin?api-version=2021-07-01
+ ```
+
+#### Request body
+
+ ```json
+ {
+ "objectId": "<objectId>"
+ }
+ ```
+
+`objectId` is the objectId of the new collection admin to add.
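For example, the call can be made with the Azure CLI's generic `az rest` command. This is a sketch; replace the angle-bracket placeholders with your own values:

```bash
# Add a root collection admin to a Purview account.
# All angle-bracket values are placeholders.
az rest --method post \
  --uri "https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Purview/accounts/<accountName>/addRootCollectionAdmin?api-version=2021-07-01" \
  --body '{"objectId": "<objectId>"}'
```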
+
+#### Response body
+
+If successful, you will get an empty response body with a `200` status code.
+
+If the call fails, the format of the response body is as follows.
+
+ ```json
+ {
+ "error": {
+ "code": "19000",
+ "message": "The caller does not have Microsoft.Authorization/roleAssignments/write permission on resource: [/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>].",
+ "target": null,
+ "details": []
+ }
+ }
+ ```
+ ## Permissions In upgraded Purview accounts, permissions are managed through collections.
purview Concept Scans And Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-scans-and-ingestion.md
To keep deleted files out of your catalog, it's important to run regular scans.
When you enumerate large data stores like Data Lake Storage Gen2, there are multiple ways (including enumeration errors and dropped events) to miss information. A particular scan might miss that a file was created or deleted. So, unless the catalog is certain a file was deleted, it won't delete it from the catalog. This strategy means there can be errors when a file that doesn't exist in the scanned data store still exists in the catalog. In some cases, a data store might need to be scanned two or three times before it catches certain deleted assets.
+> [!NOTE]
+> Assets that are marked for deletion are deleted after a successful scan. Deleted assets might continue to be visible in your catalog for some time before they are processed and removed.
+ ## Ingestion The technical metadata or classifications identified by the scanning process are then sent to Ingestion. The ingestion process is responsible for populating the data map and is managed by Purview. Ingestion analyzes the input from the scan, [applies resource set patterns](concept-resource-sets.md#how-azure-purview-detects-resource-sets), populates available [lineage](concept-data-lineage.md) information, and then loads the data map automatically. Assets/schemas can be discovered or curated only after ingestion is complete. So, if your scan is completed but you haven't seen your assets in the data map or catalog, you'll need to wait for the ingestion process to finish.
purview Sources And Scans https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/sources-and-scans.md
The following file types are supported for scanning, for schema extraction and c
> Every Gzip file must be mapped to a single csv file within. Gzip files are subject to System and Custom Classification rules. We currently don't support scanning a gzip file mapped to multiple files within, or any file type other than csv. Also, Purview scanner supports scanning snappy compressed PARQUET types for schema extraction and classification. > [!Note]
-> Purview scanner does not support complex data types in AVRO, ORC and PARQUET file types for schema extraction.
+> Purview scanner does not support complex data types (for example, MAP, LIST, STRUCT) in AVRO, ORC, and PARQUET file types for schema extraction.
## Sampling within a file
search Index Add Custom Analyzers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/index-add-custom-analyzers.md
Previously updated : 03/17/2021 Last updated : 09/08/2021 # Add custom analyzers to string fields in an Azure Cognitive Search index
-A *custom analyzer* is a combination of tokenizer, one or more token filters, and one or more character filters that you define in the search index, and then reference on field definitions that require custom analysis. The tokenizer is responsible for breaking text into tokens, and the token filters for modifying tokens emitted by the tokenizer. Character filters prepare the input text before it is processed by the tokenizer.
+A *custom analyzer* is a user-defined combination of one tokenizer, one or more token filters, and one or more character filters, specified in the search index and then referenced on field definitions that require custom analysis. The tokenizer is responsible for breaking text into tokens, and the token filters for modifying tokens emitted by the tokenizer. Character filters prepare the input text before it is processed by the tokenizer. For concepts and examples, see [Analyzers in Azure Cognitive Search](search-analyzers.md).
-A custom analyzer gives you control over the process of converting text into indexable and searchable tokens by allowing you to choose which types of analysis or filtering to invoke, and the order in which they occur. If you want to use a built-in analyzer with custom options, such as changing the maxTokenLength on Standard, you would create a custom analyzer, with a user-defined name, to set those options.
+A custom analyzer gives you control over the process of converting text into indexable and searchable tokens by allowing you to choose which types of analysis or filtering to invoke, and the order in which they occur.
-Situations where custom analyzers can be helpful include:
+Create and assign a custom analyzer if none of the default (Standard Lucene), built-in, or language analyzers is sufficient for your needs. You might also create a custom analyzer if you want to use a built-in analyzer with custom options. For example, if you wanted to change the maxTokenLength on Standard, you would create a custom analyzer, with a user-defined name, to set that option.
+
+Scenarios where custom analyzers can be helpful include:
- Using character filters to remove HTML markup before text inputs are tokenized, or replace certain characters or symbols.
Situations where custom analyzers can be helpful include:
- ASCII folding. Add the Standard ASCII folding filter to normalize diacritics like ö or ê in search terms.
-To create a custom analyzer, specify it in the "analyzers" section of an index at design time, and then reference it on searchable, Edm.String fields using either the "analyzer" property, or the "indexAnalyzer" and "searchAnalyzer" pair.
- > [!NOTE]
-> Custom analyzers that you create are not exposed in the Azure portal. The only way to add a custom analyzer is through code that defines an index.
+> Custom analyzers are not exposed in the Azure portal. The only way to add a custom analyzer is through code that defines an index.
## Create a custom analyzer
+To create a custom analyzer, specify it in the "analyzers" section of an index at design time, and then reference it on searchable, Edm.String fields using either the "analyzer" property, or the "indexAnalyzer" and "searchAnalyzer" pair.
+ An analyzer definition includes a name, type, one or more character filters, a maximum of one tokenizer, and one or more token filters for post-tokenization processing. Character filters are applied before tokenization. Token filters and character filters are applied from left to right. - Names in a custom analyzer must be unique and cannot be the same as any of the built-in analyzers, tokenizers, token filters, or character filters. A name must contain only letters, digits, spaces, dashes, or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters. -- The type must be #Microsoft.Azure.Search.CustomAnalyzer.
+- Type must be #Microsoft.Azure.Search.CustomAnalyzer.
- "charFilters" can be one or more filters from [Character Filters](#CharFilter), processed before tokenization, in the order provided. Some character filters have options, which can be set in a "charFilter section. Character filters are optional.
The analyzer_type is only provided for analyzers that can be customized. If ther
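For illustration, here's a minimal sketch of an index fragment that defines a custom analyzer and assigns it to a field. The index and field names are hypothetical; the analyzer combines the built-in html_strip character filter, the standard_v2 tokenizer, and the lowercase and asciifolding token filters:

```json
{
  "name": "hotels",
  "fields": [
    { "name": "description", "type": "Edm.String", "searchable": true, "analyzer": "my_custom_analyzer" }
  ],
  "analyzers": [
    {
      "name": "my_custom_analyzer",
      "@odata.type": "#Microsoft.Azure.Search.CustomAnalyzer",
      "charFilters": [ "html_strip" ],
      "tokenizer": "standard_v2",
      "tokenFilters": [ "lowercase", "asciifolding" ]
    }
  ]
}
```

Because components are referenced by name, built-in and custom character filters, tokenizers, and token filters can be mixed freely in one definition.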
## Character filters
-In the table below, the character filters that are implemented using Apache Lucene are linked to the Lucene API documentation.
+Character filters add processing before a string reaches the tokenizer.
+
+Cognitive Search supports character filters in the following list. More information about each one can be found in the Lucene API reference.
|**char_filter_name**|**char_filter_type** <sup>1</sup>|**Description and Options**| |--|||
In the table below, the character filters that are implemented using Apache Luce
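As an example of setting options, the following sketch defines a custom mapping character filter (the name is hypothetical) that replaces hyphens with underscores before tokenization. It would go in the index's "charFilters" section and be referenced by name from a custom analyzer:

```json
{
  "name": "map_dash_to_underscore",
  "@odata.type": "#Microsoft.Azure.Search.MappingCharFilter",
  "mappings": [ "-=>_" ]
}
```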
## Tokenizers
-A tokenizer divides continuous text into a sequence of tokens, such as breaking a sentence into words. In the table below, the tokenizers that are implemented using Apache Lucene are linked to the Lucene API documentation.
+A tokenizer divides continuous text into a sequence of tokens, such as breaking a sentence into words, or a word into root forms.
+
+Cognitive Search supports tokenizers in the following list. More information about each one can be found in the Lucene API reference.
|**tokenizer_name**|**tokenizer_type** <sup>1</sup>|**Description and Options**| ||-||
A tokenizer divides continuous text into a sequence of tokens, such as breaking
## Token filters
-A token filter is used to filter out or modify the tokens generated by a tokenizer. For example, you can specify a lowercase filter that converts all characters to lowercase. You can have multiple token filters in a custom analyzer. Token filters run in the order in which they are listed.
+A token filter is used to filter out or modify the tokens generated by a tokenizer. For example, you can specify a lowercase filter that converts all characters to lowercase. You can have multiple token filters in a custom analyzer. Token filters run in the order in which they are listed.
In the table below, the token filters that are implemented using Apache Lucene are linked to the Lucene API documentation.
search Index Add Language Analyzers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/index-add-language-analyzers.md
Previously updated : 03/17/2021 Last updated : 09/08/2021 # Add language analyzers to string fields in an Azure Cognitive Search index
-A *language analyzer* is a specific type of [text analyzer](search-analyzers.md) that performs lexical analysis using the linguistic rules of the target language. Every searchable field has an **analyzer** property. If your content consists of translated strings, such as separate fields for English and Chinese text, you could specify language analyzers on each field to access the rich linguistic capabilities of those analyzers.
+A *language analyzer* is a specific type of [text analyzer](search-analyzers.md) that performs lexical analysis using the linguistic rules of the target language. Every searchable string field has an **analyzer** property. If your content consists of translated strings, such as separate fields for English and Chinese text, you could specify language analyzers on each field to access the rich linguistic capabilities of those analyzers.
## When to use a language analyzer You should consider a language analyzer when awareness of word or sentence structure adds value to text parsing. A common example is the association of irregular verb forms ("bring" and "brought") or plural nouns ("mice" and "mouse"). Without linguistic awareness, these strings are parsed on physical characteristics alone, which fails to catch the connection. Since large chunks of text are more likely to have this content, fields consisting of descriptions, reviews, or summaries are good candidates for a language analyzer.
-You should also consider language analyzers when content consists of non-Western language strings. While the [default analyzer](search-analyzers.md#default-analyzer) is language-agnostic, the concept of using spaces and special characters (hyphens and slashes) to separate strings tends is more applicable to Western languages than non-Western ones.
+You should also consider language analyzers when content consists of non-Western language strings. While the [default analyzer (Standard Lucene)](search-analyzers.md#default-analyzer) is language-agnostic, the concept of using spaces and special characters (hyphens and slashes) to separate strings is more applicable to Western languages than non-Western ones.
For example, in Chinese, Japanese, Korean (CJK), and other Asian languages, a space is not necessarily a word delimiter. Consider the following Japanese string. Because it has no spaces, a language-agnostic analyzer would likely analyze the entire string as one token, when in fact the string is actually a phrase.
The default analyzer is Standard Lucene, which works well for English, but perha
## How to specify a language analyzer
-Set a language analyzer on "searchable" fields of type Edm.String during field definition.
+Set the analyzer during index creation, before it's loaded with data.
-Although field definitions have several analyzer-related properties, only the "analyzer" property can be used for language analyzers. The value of "analyzer" must be one of the language analyzers from the support analyzers list.
+1. In the field definition, make sure the field is attributed as "searchable" and is of type Edm.String.
+
+1. Set the "analyzer" property to one of the language analyzers from the [supported analyzers list](#language-analyzer-list).
+
+ The "analyzer" property is the only property that will accept a language analyzer, and it's used for both indexing and queries. Other analyzer-related properties ("searchAnalyzer" and "indexAnalyzer") will not accept a language analyzer.
+
+Language analyzers cannot be customized. If an analyzer doesn't meet your requirements, you can try creating a [custom analyzer](index-add-custom-analyzers.md) with the microsoft_language_tokenizer or microsoft_language_stemming_tokenizer, and add filters for pre- and post-tokenization processing.
+
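For instance, a minimal sketch of such a custom analyzer for English content might look like the following (the analyzer and tokenizer names are hypothetical):

```json
{
  "analyzers": [
    {
      "name": "my_english_analyzer",
      "@odata.type": "#Microsoft.Azure.Search.CustomAnalyzer",
      "tokenizer": "my_english_tokenizer",
      "tokenFilters": [ "lowercase" ]
    }
  ],
  "tokenizers": [
    {
      "name": "my_english_tokenizer",
      "@odata.type": "#Microsoft.Azure.Search.MicrosoftLanguageStemmingTokenizer",
      "language": "english"
    }
  ]
}
```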
+The following example illustrates a language analyzer specification in an index:
```json {
search Knowledge Store Connect Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/knowledge-store-connect-power-bi.md
Previously updated : 08/10/2021 Last updated : 09/07/2021 # Connect a knowledge store with Power BI
-In this article, learn how to connect to and explore a knowledge store using Power Query in the Power BI Desktop app. You can get started faster with templates, or build a custom dashboard from scratch. This brief video below demonstrates how you can enrich your experience with your data by using Azure Cognitive Search in combination with Power BI.
+In this article, learn how to connect to and explore a knowledge store using Power Query in the Power BI Desktop app. You can get started faster with templates, or build a custom dashboard from scratch.
-+ Follow the steps in [Create a knowledge store in the Azure portal](knowledge-store-create-portal.md) or [Create an Azure Cognitive Search knowledge store by using REST](knowledge-store-create-rest.md) to create the sample knowledge store used in this walkthrough. You will also need the name of the Azure Storage account that you used to create the knowledge store, along with its access key from the Azure portal.
+A knowledge store that's composed of tables in Azure Storage works best in Power BI. If the tables contain projections from the same skillset and projection group, you can easily "join" them to build table visualizations that include fields from related tables.
-+ [Install Power BI Desktop](https://powerbi.microsoft.com/downloads/)
+Follow along with the steps in this article using the sample data and knowledge store [created in the Azure portal](knowledge-store-create-portal.md) or through [Postman and REST APIs](knowledge-store-create-rest.md).
-> [!VIDEO https://www.youtube.com/embed/XWzLBP8iWqg?version=3&start=593&end=663]
+## Connect to Azure Storage
-## Sample Power BI template - Azure portal only
-
-When creating a [knowledge store using the Azure portal](knowledge-store-create-portal.md), you have the option of downloading a [Power BI template](https://github.com/Azure-Samples/cognitive-search-templates) on the second page of the **Import data** wizard. This template gives you several visualizations, such as WordCloud and Network Navigator, for text-based content.
-
-Click **Get Power BI Template** on the **Add cognitive skills** page to retrieve and download the template from its public GitHub location. The wizard modifies the template to accommodate the shape of your data, as captured in the knowledge store projections specified in the wizard. For this reason, the template you download will vary each time you run the wizard, assuming different data inputs and skill selections.
-
-![Sample Azure Cognitive Search Power BI Template](media/knowledge-store-connect-power-bi/powerbi-sample-template-portal-only.png "Sample Power BI template")
-
-> [!NOTE]
-> Although the template is downloaded while the wizard is in mid-flight, you'll have to wait until the knowledge store is actually created in Azure Table Storage before you can use it.
-
-## Connect with Power BI
-
-1. Start Power BI Desktop and click **Get data**.
+1. Start [Power BI Desktop](https://powerbi.microsoft.com/downloads/) and select **Get data**.
1. In the **Get Data** window, select **Azure**, and then select **Azure Table Storage**.
-1. Click **Connect**.
+1. Select **Connect**.
1. For **Account Name or URL**, enter your Azure Storage account name (the full URL will be created for you). 1. If prompted, enter the storage account key.
-1. Select the tables containing the hotel reviews data created by the previous walkthroughs.
+## Set up tables
- + For the portal walkthrough, table names are *hotelReviewsSsDocument*, *hotelReviewsSsEntities*, *hotelReviewsSsKeyPhrases*, and *hotelReviewsSsPages*.
-
- + For the REST walkthrough, table names are *hotelReviewsDocument*, *hotelReviewsPages*, *hotelReviewsKeyPhrases*, and *hotelReviewsSentiment*.
+1. Select the checkbox next to all of the tables that were created from the same skillset, and then select **Load**.
-1. Click **Load**.
+ ![Load tables](media/knowledge-store-connect-power-bi/power-bi-load-tables.png "Load tables")
-1. On the top ribbon, click **Edit Queries** to open the **Power Query Editor**.
+1. On the top ribbon, select **Transform Data** to open the **Power Query Editor**.
![Open Power Query](media/knowledge-store-connect-power-bi/powerbi-edit-queries.png "Open Power Query") 1. Select *hotelReviewsSsDocument*, and then remove the *PartitionKey*, *RowKey*, and *Timestamp* columns. + ![Edit tables](media/knowledge-store-connect-power-bi/powerbi-edit-table.png "Edit tables") 1. Click the icon with opposing arrows at the upper right side of the table to expand the *Content*. When the list of columns appears, select all columns, and then deselect columns that start with 'metadata'. Click **OK** to show the selected columns.
Click **Get Power BI Template** on the **Add cognitive skills** page to retrieve
1. On the command bar, click **Close and Apply**.
+## Check table relationships
+ 1. Click on the Model tile on the left navigation pane and validate that Power BI shows relationships between all three tables. ![Validate relationships](media/knowledge-store-connect-power-bi/powerbi-relationships.png "Validate relationships") 1. Double-click each relationship and make sure that the **Cross-filter direction** is set to **Both**. This enables your visuals to refresh when a filter is applied.
-1. Click on the Report tile on the left navigation pane to explore data through visualizations. For text fields, tables and cards are useful visualizations. You can choose fields from each of the three tables to fill in the table or card.
+## Build a report
-<!-- ## Try with larger data sets
+1. Click on the Report tile on the left navigation pane to explore data through visualizations. For text fields, tables and cards are useful visualizations.
-We purposely kept the data set small to avoid charges for a demo walkthrough. For a more realistic experience, you can create and then attach a billable Cognitive Services resource to enable a larger number of transactions against the sentiment analyzer, keyphrase extraction, and language detector skills.
+1. Choose fields from each of the three tables to fill in the table or card.
-Create new containers in Azure Blob storage and upload each CSV file to its own container. Specify one of these containers in the data source creation step in Import data wizard.
+ ![Build a table report](media/knowledge-store-connect-power-bi/power-bi-table-report.png "Build a table report")
+
+## Sample Power BI template - Azure portal only
-| Description | Link |
-|-||
-| Free tier | [HotelReviews_Free.csv](https://knowledgestoredemo.blob.core.windows.net/hotel-reviews/HotelReviews_Free.csv?st=2019-07-29T17%3A51%3A30Z&se=2021-07-30T17%3A51%3A00Z&sp=rl&sv=2018-03-28&sr=c&sig=LnWLXqFkPNeuuMgnohiz3jfW4ijePeT5m2SiQDdwDaQ%3D) |
-| Small (500 Records) | [HotelReviews_Small.csv](https://knowledgestoredemo.blob.core.windows.net/hotel-reviews/HotelReviews_Small.csv?st=2019-07-29T17%3A51%3A30Z&se=2021-07-30T17%3A51%3A00Z&sp=rl&sv=2018-03-28&sr=c&sig=LnWLXqFkPNeuuMgnohiz3jfW4ijePeT5m2SiQDdwDaQ%3D) |
-| Medium (6000 Records)| [HotelReviews_Medium.csv](https://knowledgestoredemo.blob.core.windows.net/hotel-reviews/HotelReviews_Medium.csv?st=2019-07-29T17%3A51%3A30Z&se=2021-07-30T17%3A51%3A00Z&sp=rl&sv=2018-03-28&sr=c&sig=LnWLXqFkPNeuuMgnohiz3jfW4ijePeT5m2SiQDdwDaQ%3D)
-| Large (Full dataset 35000 Records) | [HotelReviews_Large.csv](https://knowledgestoredemo.blob.core.windows.net/hotel-reviews/HotelReviews_Large.csv?st=2019-07-29T17%3A51%3A30Z&se=2021-07-30T17%3A51%3A00Z&sp=rl&sv=2018-03-28&sr=c&sig=LnWLXqFkPNeuuMgnohiz3jfW4ijePeT5m2SiQDdwDaQ%3D). Be aware that very large data sets are expensive to process. This one costs roughly $1000 U.S dollars.|
+When creating a [knowledge store using the Azure portal](knowledge-store-create-portal.md), you have the option of downloading a [Power BI template](https://github.com/Azure-Samples/cognitive-search-templates) on the second page of the **Import data** wizard. This template gives you several visualizations, such as WordCloud and Network Navigator, for text-based content.
-In the enrichment step of the wizard, attach a billable [Cognitive Services](../cognitive-services/cognitive-services-apis-create-account.md) resource, created at the *S0* tier, in the same region as Azure Cognitive Search to use larger data sets.
+Click **Get Power BI Template** on the **Add cognitive skills** page to retrieve and download the template from its public GitHub location. The wizard modifies the template to accommodate the shape of your data, as captured in the knowledge store projections specified in the wizard. For this reason, the template you download will vary each time you run the wizard, assuming different data inputs and skill selections.
- ![Create a Cognitive Services resource](media/knowledge-store-connect-power-bi/create-cognitive-service.png "Create a Cognitive Services resource") -->
+![Sample Azure Cognitive Search Power BI Template](media/knowledge-store-connect-power-bi/powerbi-sample-template-portal-only.png "Sample Power BI template")
-## Clean up
+> [!NOTE]
+> The template is downloaded while the wizard is in mid-flight. You'll have to wait until the knowledge store is actually created in Azure Table Storage before you can use it.
-When you're working in your own subscription, it's a good idea at the end of a project to identify whether you still need the resources you created. Resources left running can cost you money. You can delete resources individually or delete the resource group to delete the entire set of resources.
+## Video introduction
-You can find and manage resources in the portal, using the **All resources** or **Resource groups** link in the left-navigation pane.
+For a demonstration of using Power BI with a knowledge store, watch the following video.
-If you are using a free service, remember that you are limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
+> [!VIDEO https://www.youtube.com/embed/XWzLBP8iWqg?version=3]
## Next steps
-To learn how to explore this knowledge store using Storage Explorer, see the following article.
- > [!div class="nextstepaction"]
-> [View with Storage Explorer](knowledge-store-view-storage-explorer.md)
+> [Tables in Power BI reports and dashboards](/power-bi/visuals/power-bi-visualization-tables)
search Knowledge Store View Storage Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/knowledge-store-view-storage-explorer.md
Previously updated : 08/10/2021 Last updated : 09/10/2021 # View a knowledge store with Storage Explorer
-A [knowledge store](knowledge-store-concept-intro.md) is created by a skillset and saved to Azure Storage. In this article, you'll learn how to view the content of a knowledge store using Storage Explorer in the Azure portal.
+A [knowledge store](knowledge-store-concept-intro.md) is content created by an Azure Cognitive Search skillset and saved to Azure Storage. In this article, you'll learn how to view the contents of a knowledge store using Storage Explorer in the Azure portal.
## Prerequisites
-+ Create a knowledge store in [Azure portal](knowledge-store-create-portal.md) or [Postman and the REST APIs](knowledge-store-create-rest.md).
-
-+ You will also need the name of the Azure Storage account that has the knowledge store, along with its access key from the Azure portal.
+Start with an existing knowledge store created in the [Azure portal](knowledge-store-create-portal.md) or using the [REST APIs](knowledge-store-create-rest.md). Both the portal and REST walkthroughs create a knowledge store in Azure Table Storage.
## Start Storage Explorer 1. In the Azure portal, [open the Storage account](https://ms.portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2FstorageAccounts/) that you used to create the knowledge store.
-1. In the storage account's left navigation pane, click **Storage Explorer**.
-
-## View, edit, and query tables
+1. In the storage account's left navigation pane, select **Storage Explorer**.
-Both the portal and REST walkthroughs create a knowledge store in Table Storage.
+## Edit and query tables
-1. Expand the **TABLES** list to show a list of Azure table projections that were created when you created the knowledge store. The tables should contain content related to hotel reviews.
+1. Expand the **TABLES** list to show a list of Azure table projections that were created when you created the knowledge store. If you used the quickstart or REST article to create the knowledge store, the tables will contain content related to customer reviews of a European hotel.
-1. Select any table to view the enriched data, including key phrases and sentiment scores.
+1. Select a table from the list.
![View tables in Storage Explorer](media/knowledge-store-view-storage-explorer/storage-explorer-tables.png "View tables in Storage Explorer")
-1. To change the data type for any table value or to change individual values in your table, click **Edit**. When you change the data type for any column in one table row, it will be applied to all rows.
+1. To change the data type, property name, or individual data values in your table, select **Edit**.
![Edit table in Storage Explorer](media/knowledge-store-view-storage-explorer/storage-explorer-edit-table.png "Edit table in Storage Explorer")
-1. To run queries, click **Query** on the command bar and enter your conditions.
+1. To run queries, select **Query** on the command bar and enter your conditions.
![Query table in Storage Explorer](media/knowledge-store-view-storage-explorer/storage-explorer-query-table.png "Query table in Storage Explorer")
+In Storage Explorer, you can only query one table at a time using the [supported query syntax](/rest/api/storageservices/Querying-Tables-and-Entities). To query across tables, consider using Power BI instead.
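For example, the following filter uses the `Timestamp` system property, which exists on every Azure table, to return entities written after a given date:

```
Timestamp gt datetime'2021-01-01T00:00:00Z'
```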
+ ## Next steps
-Connect this knowledge store to Power BI for deeper analysis, or move forward with code, using the REST API and Postman to create a different knowledge store.
+Connect this knowledge store to Power BI to build visualizations that include multiple tables.
> [!div class="nextstepaction"] > [Connect with Power BI](knowledge-store-connect-power-bi.md)
search Search Analyzers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-analyzers.md
Previously updated : 03/17/2021 Last updated : 09/08/2021 # Analyzers for text processing in Azure Cognitive Search
-An *analyzer* is a component of [full text search](search-lucene-query-architecture.md) responsible for processing text in query strings and indexed documents. Text processing (also known as lexical analysis) is transformative, modifying a query string through actions such as these:
+An *analyzer* is a component of the [full text search engine](search-lucene-query-architecture.md) that's responsible for processing strings during indexing and query execution. Text processing (also known as lexical analysis) is transformative, modifying a string through actions such as these:
+ Remove non-essential words (stopwords) and punctuation + Split up phrases and hyphenated words into component parts
An *analyzer* is a component of [full text search](search-lucene-query-architect
Analysis applies to `Edm.String` fields that are marked as "searchable", which indicates full text search.
-For fields with this configuration, analysis occurs during indexing when tokens are created, and then again during query execution when queries are parsed and the engine scans for matching tokens. A match is more likely to occur when the same analyzer is used for both indexing and queries, but you can set the analyzer for each workload independently, depending on your requirements.
+For fields of this configuration, analysis occurs during indexing when tokens are created, and then again during query execution when queries are parsed and the engine scans for matching tokens. A match is more likely to occur when the same analyzer is used for both indexing and queries, but you can set the analyzer for each workload independently, depending on your requirements.
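For example, the following field definition sketch (the field and custom analyzer names are hypothetical) uses one analyzer at indexing time and the built-in default at query time:

```json
{
  "name": "description",
  "type": "Edm.String",
  "searchable": true,
  "indexAnalyzer": "my_indexing_analyzer",
  "searchAnalyzer": "standard.lucene"
}
```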
Query types that are *not* full text search, such as filters or fuzzy search, do not go through the analysis phase on the query side. Instead, the parser sends those strings directly to the search engine, using the pattern that you provide as the basis for the match. Typically, these query forms require whole-string tokens to make pattern matching work. To ensure whole-term tokens during indexing, you might need [custom analyzers](index-add-custom-analyzers.md). For more information about when and why query terms are analyzed, see [Full text search in Azure Cognitive Search](search-lucene-query-architecture.md).
For more background on lexical analysis, listen to the following video clip for
## Default analyzer
-In Azure Cognitive Search queries, an analyzer is automatically invoked on all string fields marked as searchable.
+In Azure Cognitive Search, an analyzer is automatically invoked on all string fields marked as searchable.
By default, Azure Cognitive Search uses the [Apache Lucene Standard analyzer (standard lucene)](https://lucene.apache.org/core/6_6_1/core/org/apache/lucene/analysis/standard/StandardAnalyzer.html), which breaks text into elements following the ["Unicode Text Segmentation"](https://unicode.org/reports/tr29/) rules. Additionally, the standard analyzer converts all characters to their lower case form. Both indexed documents and search terms go through the analysis during indexing and query processing.
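To see this behavior for yourself, you can call the Analyze Text API against an existing index (the "hotels" index name below is hypothetical). Sending the following body to `POST /indexes/hotels/analyze?api-version=2020-06-30` returns the tokens "the", "quick", "brown", and "fox":

```json
{
  "text": "The Quick-Brown Fox.",
  "analyzer": "standard.lucene"
}
```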
The best time to add and assign analyzers is during active development, when dro
Because analyzers are used to tokenize terms, you should assign an analyzer when the field is created. In fact, assigning an analyzer or indexAnalyzer to a field that has already been physically created is not allowed (although you can change the searchAnalyzer property at any time with no impact to the index).
-To change the analyzer of an existing field, you'll have to [rebuild the index entirely](search-howto-reindex.md) (you cannot rebuild individual fields). For indexes in production, you can defer a rebuild by creating a new field with the new analyzer assignment, and start using it in place of the old one. Use [Update Index](/rest/api/searchservice/update-index) to incorporate the new field and [mergeOrUpload](/rest/api/searchservice/addupdate-or-delete-documents) to populate it. Later, as part of planned index servicing, you can clean up the index to remove obsolete fields.
+To change the analyzer of an existing field, you'll have to drop and recreate the entire index (you cannot rebuild individual fields). For indexes in production, you can defer a rebuild by creating a new field with the new analyzer assignment, and start using it in place of the old one. Use [Update Index](/rest/api/searchservice/update-index) to incorporate the new field and [mergeOrUpload](/rest/api/searchservice/addupdate-or-delete-documents) to populate it. Later, as part of planned index servicing, you can clean up the index to remove obsolete fields.
To add a new field to an existing index, call [Update Index](/rest/api/searchservice/update-index) to add the field, and [mergeOrUpload](/rest/api/searchservice/addupdate-or-delete-documents) to populate it.
A detailed description of query execution can be found in [Full text search in A
To learn more about analyzers, see the following articles:
-+ [Language analyzers](index-add-language-analyzers.md)
-+ [Custom analyzers](index-add-custom-analyzers.md)
++ [Add a language analyzer](index-add-language-analyzers.md)++ [Add a custom analyzer](index-add-custom-analyzers.md) + [Create a search index](search-what-is-an-index.md) + [Create a multi-language index](search-language-support.md)
search Search Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-language-support.md
Previously updated : 03/22/2021 Last updated : 09/08/2021
-# How to create an index for multiple languages in Azure Cognitive Search
+# Create an index for multiple languages in Azure Cognitive Search
A key requirement in a multilingual search application is the ability to search over and retrieve results in the user's own language. In Azure Cognitive Search, one way to meet the language requirements of a multilingual app is to create dedicated fields for storing strings in a specific language, and then constrain full text search to just those fields at query time.
-+ On field definitions, set a language analyzer that invokes the linguistic rules of the target language. To view the full list of supported analyzers, see [Add language analyzers](index-add-language-analyzers.md).
++ On field definitions, [specify a language analyzer](index-add-language-analyzers.md) that invokes the linguistic rules of the target language.
-+ On the query request, set parameters to scope full text search to specific fields, and then trim the results of any fields that don't provide content compatible with the search experience you want to deliver.
++ On the query request, set the `searchFields` parameter to scope full text search to specific fields, and then use `select` to return just those fields that have compatible content. The success of this technique hinges on the integrity of field contents. Azure Cognitive Search does not translate strings or perform language detection as part of query execution. It is up to you to make sure that fields contain the strings you expect.
POST /indexes/hotels/docs/search?api-version=2020-06-30
## Next steps
-+ [Language analyzers](index-add-language-analyzers.md)
++ [Add a language analyzer](index-add-language-analyzers.md) + [How full text search works in Azure Cognitive Search](search-lucene-query-architecture.md) + [Search Documents REST API](/rest/api/searchservice/search-documents) + [AI enrichment overview](cognitive-search-concept-intro.md)
service-bus-messaging Deprecate Service Bus Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/deprecate-service-bus-management.md
Service Bus/Event Hub/Relay<br/>```PUT https://management.core.windows.net/{subs
| [Get-AzureSBNamespace](/powershell/module/servicemanagement/azure.service/get-azuresbnamespace) | [Get-AzureRmServiceBusNamespace](/powershell/module/azurerm.servicebus/get-azurermservicebusnamespace) | [Get-AzServiceBusNamespace](/powershell/module/az.servicebus/get-azservicebusnamespace) | | [New-AzureSBAuthorizationRule](/powershell/module/servicemanagement/azure.service/new-azuresbauthorizationrule) | [New-AzureRmServiceBusAuthorizationRule](/powershell/module/azurerm.servicebus/new-azurermservicebusauthorizationrule) | [New-AzServiceBusAuthorizationRule](/powershell/module/az.servicebus/new-azservicebusauthorizationrule) | | [New-AzureSBNamespace](/powershell/module/servicemanagement/azure.service/new-azuresbnamespace) | [New-AzureRmServiceBusNamespace](/powershell/module/azurerm.servicebus/new-azurermservicebusnamespace) | [New-AzServiceBusNamespace](/powershell/module/az.servicebus/new-azservicebusnamespace) |
-| [Remove-AzureSBAuthorizationRule](/powershell/module/servicemanagement/azure.service/remove-azuresbauthorizationrule0) | [Remove-AzureRmServiceBusAuthorizationRule](/powershell/module/azurerm.servicebus/remove-azurermservicebusauthorizationrule) | [Remove-AzServiceBusAuthorizationRule](/powershell/module/az.servicebus/remove-azservicebusauthorizationrule) |
+| [Remove-AzureSBAuthorizationRule](/powershell/module/servicemanagement/azure.service/remove-azuresbauthorizationrule) | [Remove-AzureRmServiceBusAuthorizationRule](/powershell/module/azurerm.servicebus/remove-azurermservicebusauthorizationrule) | [Remove-AzServiceBusAuthorizationRule](/powershell/module/az.servicebus/remove-azservicebusauthorizationrule) |
| [Remove-AzureSBNamespace](/powershell/module/servicemanagement/azure.service/remove-azuresbnamespace) | [Remove-AzureRmServiceBusNamespace](/powershell/module/azurerm.servicebus/remove-azurermservicebusnamespace) | [Remove-AzServiceBusNamespace](/powershell/module/az.servicebus/remove-azservicebusnamespace) | | [Set-AzureSBAuthorizationRule](/powershell/module/servicemanagement/azure.service/set-azuresbauthorizationrule) | [Set-AzureRmServiceBusAuthorizationRule](/powershell/module/azurerm.servicebus/set-azurermservicebusauthorizationrule) | [Set-AzServiceBusAuthorizationRule](/powershell/module/az.servicebus/set-azservicebusauthorizationrule) |
service-bus-messaging Service Bus Filter Examples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-filter-examples.md
Title: Set subscriptions filters in Azure Service Bus | Microsoft Docs description: This article provides examples for defining filters and actions on Azure Service Bus topic subscriptions. Previously updated : 02/17/2021 Last updated : 09/07/2021 # Set subscription filters (Azure Service Bus)
sys.To NOT IN ('Store1','Store2','Store3','Store4','Store5','Store6','Store7','S
For a C# sample, see [Topic Filters sample on GitHub](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Azure.Messaging.ServiceBus/BasicSendReceiveTutorialwithFilters).
-## Correlation filter using CorrelationID
+## Correlation filters
+
+### Correlation filter using CorrelationID
```csharp new CorrelationRuleFilter("Contoso");
new CorrelationRuleFilter("Contoso");
It filters messages with `CorrelationID` set to `Contoso`.
-## Correlation filter using system and user properties
+> [!NOTE]
+> The [CorrelationRuleFilter](/dotnet/api/azure.messaging.servicebus.administration.correlationrulefilter) class in .NET is in the [Azure.Messaging.ServiceBus.Administration](/dotnet/api/azure.messaging.servicebus.administration) namespace. For sample code that shows how to create filters in general using .NET, see [this code on GitHub](https://github.com/Azure/azure-service-bus/blob/master/samples/DotNet/Azure.Messaging.ServiceBus/BasicSendReceiveTutorialwithFilters/BasicSendReceiveTutorialWithFilters/Program.cs#L179).
++
+### Correlation filter using system and user properties
```csharp
-var filter = new CorrelationFilter();
+var filter = new CorrelationRuleFilter();
filter.Subject = "Important"; filter.ReplyTo = "johndoe@contoso.com"; filter.ApplicationProperties["color"] = "Red";
filter.ApplicationProperties["color"] = "Red";
It's equivalent to: `sys.ReplyTo = 'johndoe@contoso.com' AND sys.Label = 'Important' AND color = 'Red'` ++++ ## Next steps See the following samples:
service-fabric How To Managed Cluster Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/how-to-managed-cluster-networking.md
Managed clusters do not enable IPv6 by default. This feature will enable full du
} ```
-2. Deploy your IPv6 enabled managed cluster. Customize the [sample template](https://raw.githubusercontent.com/Azure-Samples/service-fabric-cluster-templates/SF-Managed-Standard-SKU-2-NT-IPv6/AzureDeploy.json) as needed or build your own.
+2. Deploy your IPv6 enabled managed cluster. Customize the [sample template](https://raw.githubusercontent.com/Azure-Samples/service-fabric-cluster-templates/master/SF-Managed-Standard-SKU-2-NT-IPv6/azuredeploy.json) as needed or build your own.
In the following example, we'll create a resource group called `MyResourceGroup` in `westus` and deploy a cluster with this feature enabled. ```powershell New-AzResourceGroup -Name MyResourceGroup -Location westus
This feature allows customers to use an existing virtual network by specifying a
> [!NOTE] > VNetRoleAssignmentID has to be a [GUID](../azure-resource-manager/templates/template-functions-string.md#examples-16). If you deploy a template again including this role assignment, make sure the GUID is the same as the one originally used. We suggest you run this isolated or remove this resource from the cluster template post-deployment as it just needs to be created once.
- Here is a full sample [Azure Resource Manager (ARM) template that creates a VNet subnet and does role assignment](https://raw.githubusercontent.com/Azure-Samples/service-fabric-cluster-templates/SF-Managed-Standard-SKU-2-NT-BYOVNET/SFMC-VNet-RoleAssign.json) you can use for this step.
+ Here is a full sample [Azure Resource Manager (ARM) template that creates a VNet subnet and does role assignment](https://raw.githubusercontent.com/Azure-Samples/service-fabric-cluster-templates/master/SF-Managed-Standard-SKU-2-NT-BYOVNET/createVNet-assign-role.json) you can use for this step.
3. Configure the `subnetId` property for the cluster deployment after the role is set up as shown below:
This feature allows customers to use an existing virtual network by specifying a
... } ```
- See the [bring your own VNet cluster sample template](https://raw.githubusercontent.com/Azure-Samples/service-fabric-cluster-templates/SF-Managed-Standard-SKU-2-NT-BYOVNET/AzureDeploy.json) or customize your own.
+ See the [bring your own VNet cluster sample template](https://raw.githubusercontent.com/Azure-Samples/service-fabric-cluster-templates/master/SF-Managed-Standard-SKU-2-NT-BYOVNET/azuredeploy.json) or customize your own.
4. Deploy the configured managed cluster Azure Resource Manager (ARM) template.
Managed clusters create an Azure Load Balancer and fully qualified domain name w
**Feature Requirements** * Basic and Standard SKU Azure Load Balancer types are supported
- * You must have backend and NAT pools configured on the existing Azure Load Balancer. See full [create and assign role sample here](https://raw.githubusercontent.com/Azure-Samples/service-fabric-cluster-templates/SF-Managed-Standard-SKU-2-NT-BYOLB/createlb-and-assign-role) for an example.
+ * You must have backend and NAT pools configured on the existing Azure Load Balancer. See full [create and assign role sample here](https://raw.githubusercontent.com/Azure-Samples/service-fabric-cluster-templates/master/SF-Managed-Standard-SKU-2-NT-BYOLB/createlb-and-assign-role.json) for an example.
Here are a couple example scenarios customers may use this for:
To configure bring your own load balancer:
5. Optionally configure the managed cluster NSG rules applied to the node type to allow any required traffic that you've configured on the Azure Load Balancer or traffic will be blocked.
- See the [bring your own load balancer sample Azure Resource Manager (ARM) template](https://raw.githubusercontent.com/Azure-Samples/service-fabric-cluster-templates/SF-Managed-Standard-SKU-2-NT-BYOLB/AzureDeploy.json) for an example on how to open inbound rules.
+ See the [bring your own load balancer sample Azure Resource Manager (ARM) template](https://raw.githubusercontent.com/Azure-Samples/service-fabric-cluster-templates/master/SF-Managed-Standard-SKU-2-NT-BYOLB/azuredeploy.json) for an example on how to open inbound rules.
6. Deploy the configured managed cluster ARM Template
To configure bring your own load balancer:
[sfmc-rdp-connect]: ./media/how-to-managed-cluster-networking/sfmc-rdp-connect.png [sfmc-byolb-example-1]: ./media/how-to-managed-cluster-networking/sfmc-byolb-scenario-1.png [sfmc-byolb-example-2]: ./media/how-to-managed-cluster-networking/sfmc-byolb-scenario-2.png-
service-fabric How To Patch Cluster Nodes Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/how-to-patch-cluster-nodes-windows.md
Ensure that durability settings are not mismatched on the Service Fabric cluster
With Bronze durability, automatic OS image upgrade isn't available. While [Patch Orchestration Application](service-fabric-patch-orchestration-application.md) (intended only for non-Azure hosted clusters) is *not recommended* for Silver or greater durability levels, it is your only option to automate Windows updates with respect to Service Fabric upgrade domains.
+If you want to switch from Patch Orchestration Application to automatic OS image upgrade, you must first remove Patch Orchestration Application from the cluster.
+ ## Enable auto OS upgrades and disable Windows Update When enabling automatic OS updates, you'll also need to disable Windows Update in the deployment template. Once you deploy these changes, all machines in the scale set will be reimaged and the scale set will be enabled for automatic updates.
When enabling automatic OS updates, you'll also need to disable Windows Update i
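A minimal sketch of the relevant virtual machine scale set template properties is shown below. This is an abbreviated fragment (the surrounding scale set resource definition is omitted), not a complete template:

```json
"properties": {
  "upgradePolicy": {
    "mode": "Automatic",
    "automaticOSUpgradePolicy": {
      "enableAutomaticOSUpgrade": true
    }
  },
  "virtualMachineProfile": {
    "osProfile": {
      "windowsConfiguration": {
        "enableAutomaticUpdates": false
      }
    }
  }
}
```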
## Next steps
-Learn how to enable [automatic OS image upgrades on Virtual Machine Scale Sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md).
+Learn how to enable [automatic OS image upgrades on Virtual Machine Scale Sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md).
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/vmware-physical-azure-support-matrix.md
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
**Supported release** | **Mobility service version** | **Kernel version** | | | |
-14.04 LTS | [9.39](https://support.microsoft.com/help/4597409/), [9.40](https://support.microsoft.com/en-us/topic/update-rollup-53-for-azure-site-recovery-060268ef-5835-bb49-7cbc-e8c1e6c6e12a), [9.41](https://support.microsoft.com/en-us/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533), [9.42](https://support.microsoft.com/en-us/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [9.43](https://support.microsoft.com/en-us/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 3.13.0-24-generic to 3.13.0-170-generic,<br/>3.16.0-25-generic to 3.16.0-77-generic,<br/>3.19.0-18-generic to 3.19.0-80-generic,<br/>4.2.0-18-generic to 4.2.0-42-generic,<br/>4.4.0-21-generic to 4.4.0-148-generic,<br/>4.15.0-1023-azure to 4.15.0-1045-azure |
+14.04 LTS | [9.40](https://support.microsoft.com/en-us/topic/update-rollup-53-for-azure-site-recovery-060268ef-5835-bb49-7cbc-e8c1e6c6e12a), [9.41](https://support.microsoft.com/en-us/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533), [9.42](https://support.microsoft.com/en-us/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [9.43](https://support.microsoft.com/en-us/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6), [9.44](https://support.microsoft.com/en-us/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 3.13.0-24-generic to 3.13.0-170-generic,<br/>3.16.0-25-generic to 3.16.0-77-generic,<br/>3.19.0-18-generic to 3.19.0-80-generic,<br/>4.2.0-18-generic to 4.2.0-42-generic,<br/>4.4.0-21-generic to 4.4.0-148-generic,<br/>4.15.0-1023-azure to 4.15.0-1045-azure |
|||
+16.04 LTS | [9.44](https://support.microsoft.com/en-us/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 4.4.0-21-generic to 4.4.0-206-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-140-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1111-azure|
16.04 LTS | [9.43](https://support.microsoft.com/en-us/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 4.4.0-21-generic to 4.4.0-206-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-140-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1111-azure| 16.04 LTS | [9.42](https://support.microsoft.com/en-us/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8) | 4.4.0-21-generic to 4.4.0-206-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-140-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1111-azure| 16.04 LTS | [9.41](https://support.microsoft.com/en-us/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533) | 4.4.0-21-generic to 4.4.0-201-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-133-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1106-azure| 16.04 LTS | [9.40](https://support.microsoft.com/en-us/topic/update-rollup-53-for-azure-site-recovery-060268ef-5835-bb49-7cbc-e8c1e6c6e12a) | 4.4.0-21-generic to 4.4.0-197-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-128-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1102-azure |
-16.04 LTS | [9.39](https://support.microsoft.com/help/4597409/) | 4.4.0-21-generic to 4.4.0-194-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-123-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1098-azure|
|||
+18.04 LTS | [9.44](https://support.microsoft.com/en-us/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 4.15.0-20-generic to 4.15.0-140-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-65-generic </br> 5.3.0-19-generic to 5.3.0-72-generic </br> 5.4.0-37-generic to 5.4.0-70-generic </br> 4.15.0-1009-azure to 4.15.0-1111-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1043-azure </br> 4.15.0-1114-azure </br> 4.15.0-143-generic </br> 5.4.0-1047-azure </br> 5.4.0-73-generic </br> 4.15.0-1115-azure </br> 4.15.0-144-generic </br> 5.4.0-1048-azure </br> 5.4.0-74-generic </br> 4.15.0-1121-azure </br> 4.15.0-151-generic </br> 5.3.0-76-generic </br> 5.4.0-1055-azure </br> 5.4.0-80-generic </br> 4.15.0-147-generic </br> 4.15.0-153-generic </br> 5.4.0-1056-azure </br> 5.4.0-81-generic </br> 4.15.0-1122-azure </br> 4.15.0-154-generic |
18.04 LTS | [9.43](https://support.microsoft.com/en-us/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 4.15.0-20-generic to 4.15.0-140-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-65-generic </br> 5.3.0-19-generic to 5.3.0-72-generic </br> 5.4.0-37-generic to 5.4.0-70-generic </br> 4.15.0-1009-azure to 4.15.0-1111-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1043-azure </br> 4.15.0-1114-azure </br> 4.15.0-143-generic </br> 5.4.0-1047-azure </br> 5.4.0-73-generic </br> 4.15.0-1115-azure </br> 4.15.0-144-generic </br> 5.4.0-1048-azure </br> 5.4.0-74-generic </br> 4.15.0-1121-azure </br> 4.15.0-151-generic </br> 5.3.0-76-generic </br> 5.4.0-1055-azure </br> 5.4.0-80-generic </br> 4.15.0-147-generic | 18.04 LTS |[9.42](https://support.microsoft.com/en-us/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8) | 4.15.0-20-generic to 4.15.0-140-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-65-generic </br> 5.3.0-19-generic to 5.3.0-72-generic </br> 5.4.0-37-generic to 5.4.0-70-generic </br> 4.15.0-1009-azure to 4.15.0-1111-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1043-azure </br> 4.15.0-1114-azure </br> 4.15.0-143-generic </br> 5.4.0-1047-azure </br> 5.4.0-73-generic </br> 4.15.0-1115-azure </br> 4.15.0-144-generic </br> 5.4.0-1048-azure </br> 5.4.0-74-generic | 18.04 LTS | [9.41](https://support.microsoft.com/en-us/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533) | 4.15.0-20-generic to 4.15.0-135-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-65-generic </br> 5.3.0-19-generic to 5.3.0-70-generic </br> 5.4.0-37-generic to 5.4.0-59-generic</br> 5.4.0-60-generic to 5.4.0-65-generic </br> 4.15.0-1009-azure to 4.15.0-1106-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1039-azure| 18.04 LTS | [9.40](https://support.microsoft.com/en-us/topic/update-rollup-53-for-azure-site-recovery-060268ef-5835-bb49-7cbc-e8c1e6c6e12a) | 4.15.0-20-generic to 4.15.0-129-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-63-generic </br> 5.3.0-19-generic to 5.3.0-69-generic </br> 5.4.0-37-generic to 5.4.0-59-generic</br> 4.15.0-1009-azure to 4.15.0-1103-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1035-azure|
-18.04 LTS | [9.39](https://support.microsoft.com/help/4597409/) | 4.15.0-20-generic to 4.15.0-123-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-63-generic </br> 5.3.0-19-generic to 5.3.0-69-generic </br> 5.4.0-37-generic to 5.4.0-53-generic</br> 4.15.0-1009-azure to 4.15.0-1099-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1031-azure|
|||
+20.04 LTS |[9.44](https://support.microsoft.com/en-us/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 5.4.0-26-generic to 5.4.0-80 </br> 5.4.0-1010-azure to 5.4.0-1048-azure </br> 5.4.0-81-generic </br> 5.4.0-1056-azure |
20.04 LTS |[9.43](https://support.microsoft.com/en-us/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 5.4.0-26-generic to 5.4.0-80 </br> 5.4.0-1010-azure to 5.4.0-1048-azure | 20.04 LTS |[9.42](https://support.microsoft.com/en-us/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8)| 5.4.0-26-generic to 5.4.0-60-generic </br> 5.4.0-1010-azure to 5.4.0-1043-azure </br> 5.4.0-1047-azure </br> 5.4.0-73-generic </br> 5.4.0-1048-azure </br> 5.4.0-74-generic | 20.04 LTS |[9.41](https://support.microsoft.com/en-us/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533)| 5.4.0-26-generic to 5.4.0-65-generic </br> 5.4.0-1010-azure to 5.4.0-1039-azure | 20.04 LTS |[9.40](https://support.microsoft.com/en-us/topic/update-rollup-53-for-azure-site-recovery-060268ef-5835-bb49-7cbc-e8c1e6c6e12a)| 5.4.0-26-generic to 5.4.0-59-generic </br> 5.4.0-1010-azure to 5.4.0-1035-azure |
-20.04 LTS |[9.39](https://support.microsoft.com/help/4597409/) | 5.4.0-26-generic to 5.4.0-53 </br> -generic 5.4.0-1010-azure to 5.4.0-1031-azure
**Note**: For Ubuntu 20.04, we had initially rolled out support for kernels 5.8.* but we have since found issues with support for this kernel and hence have removed these kernels from our support statement for the time being.
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
**Supported release** | **Mobility service version** | **Kernel version** | | | |
-Debian 7 | [9.39](https://support.microsoft.com/help/4597409/), [9.40](https://support.microsoft.com/en-us/topic/update-rollup-53-for-azure-site-recovery-060268ef-5835-bb49-7cbc-e8c1e6c6e12a), [9.41](https://support.microsoft.com/en-us/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533), [9.42](https://support.microsoft.com/en-us/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8),[9.43](https://support.microsoft.com/en-us/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 3.2.0-4-amd64 to 3.2.0-6-amd64, 3.16.0-0.bpo.4-amd64 |
+Debian 7 | [9.40](https://support.microsoft.com/en-us/topic/update-rollup-53-for-azure-site-recovery-060268ef-5835-bb49-7cbc-e8c1e6c6e12a), [9.41](https://support.microsoft.com/en-us/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533), [9.42](https://support.microsoft.com/en-us/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8),[9.43](https://support.microsoft.com/en-us/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6), [9.44](https://support.microsoft.com/en-us/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 3.2.0-4-amd64 to 3.2.0-6-amd64, 3.16.0-0.bpo.4-amd64 |
|||
-Debian 8 | [9.39](https://support.microsoft.com/help/4597409/), [9.40](https://support.microsoft.com/en-us/topic/update-rollup-53-for-azure-site-recovery-060268ef-5835-bb49-7cbc-e8c1e6c6e12a), [9.41](https://support.microsoft.com/en-us/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533), [9.42](https://support.microsoft.com/en-us/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [9.43](https://support.microsoft.com/en-us/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 3.16.0-4-amd64 to 3.16.0-11-amd64, 4.9.0-0.bpo.4-amd64 to 4.9.0-0.bpo.11-amd64 |
+Debian 8 | [9.40](https://support.microsoft.com/en-us/topic/update-rollup-53-for-azure-site-recovery-060268ef-5835-bb49-7cbc-e8c1e6c6e12a), [9.41](https://support.microsoft.com/en-us/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533), [9.42](https://support.microsoft.com/en-us/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [9.43](https://support.microsoft.com/en-us/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6), [9.44](https://support.microsoft.com/en-us/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 3.16.0-4-amd64 to 3.16.0-11-amd64, 4.9.0-0.bpo.4-amd64 to 4.9.0-0.bpo.11-amd64 |
|||
+Debian 9.1 | [9.44](https://support.microsoft.com/en-us/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 4.9.0-1-amd64 to 4.9.0-15-amd64 </br> 4.19.0-0.bpo.1-amd64 to 4.19.0-0.bpo.16-amd64 </br> 4.19.0-0.bpo.1-cloud-amd64 to 4.19.0-0.bpo.16-cloud-amd64 </br>
Debian 9.1 | [9.43](https://support.microsoft.com/en-us/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 4.9.0-1-amd64 to 4.9.0-15-amd64 </br> 4.19.0-0.bpo.1-amd64 to 4.19.0-0.bpo.16-amd64 </br> 4.19.0-0.bpo.1-cloud-amd64 to 4.19.0-0.bpo.16-cloud-amd64 </br>
Debian 9.1 | [9.42](https://support.microsoft.com/en-us/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8) | 4.9.0-1-amd64 to 4.9.0-15-amd64 </br> 4.19.0-0.bpo.1-amd64 to 4.19.0-0.bpo.16-amd64 </br> 4.19.0-0.bpo.1-cloud-amd64 to 4.19.0-0.bpo.16-cloud-amd64 </br>
Debian 9.1 | [9.41](https://support.microsoft.com/en-us/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533) | 4.9.0-1-amd64 to 4.9.0-14-amd64 </br> 4.19.0-0.bpo.1-amd64 to 4.19.0-0.bpo.14-amd64 </br> 4.19.0-0.bpo.1-cloud-amd64 to 4.19.0-0.bpo.14-cloud-amd64 </br>
Debian 9.1 | [9.40](https://support.microsoft.com/en-us/topic/update-rollup-53-for-azure-site-recovery-060268ef-5835-bb49-7cbc-e8c1e6c6e12a) | 4.9.0-1-amd64 to 4.9.0-14-amd64 </br> 4.19.0-0.bpo.1-amd64 to 4.19.0-0.bpo.13-amd64 </br> 4.19.0-0.bpo.1-cloud-amd64 to 4.19.0-0.bpo.13-cloud-amd64 </br>
-Debian 9.1 | [9.39](https://support.microsoft.com/help/4597409/) | 4.9.0-1-amd64 to 4.9.0-14-amd64 </br> 4.19.0-0.bpo.1-amd64 to 4.19.0-0.bpo.12-amd64 </br> 4.19.0-0.bpo.1-cloud-amd64 to 4.19.0-0.bpo.12-cloud-amd64 </br>
|||
+Debian 10 | [9.44](https://support.microsoft.com/en-us/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 4.19.0-5-amd64 to 4.19.0-16-amd64 </br> 4.19.0-6-cloud-amd64 to 4.19.0-16-cloud-amd64 </br> 5.8.0-0.bpo.2-amd64 </br> 5.8.0-0.bpo.2-cloud-amd64
Debian 10 | [9.43](https://support.microsoft.com/en-us/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 4.19.0-5-amd64 to 4.19.0-16-amd64 </br> 4.19.0-6-cloud-amd64 to 4.19.0-16-cloud-amd64 </br> 5.8.0-0.bpo.2-amd64 </br> 5.8.0-0.bpo.2-cloud-amd64
Debian 10 | [9.42](https://support.microsoft.com/en-us/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8) | 4.19.0-5-amd64 to 4.19.0-16-amd64 </br> 4.19.0-6-cloud-amd64 to 4.19.0-16-cloud-amd64 </br> 5.8.0-0.bpo.2-amd64 </br> 5.8.0-0.bpo.2-cloud-amd64
Debian 10 | [9.41](https://support.microsoft.com/en-us/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533) | 4.19.0-5-amd64 to 4.19.0-14-amd64 </br> 4.19.0-6-cloud-amd64 to 4.19.0-14-cloud-amd64 </br> 5.8.0-0.bpo.2-amd64 </br> 5.8.0-0.bpo.2-cloud-amd64
Debian 10 | [9.40](https://support.microsoft.com/en-us/topic/update-rollup-53-fo
**Release** | **Mobility service version** | **Kernel version** | | | |
+SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.44](https://support.microsoft.com/en-us/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.65-azure </br> 4.12.14-16.68-azure |
SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.43](https://support.microsoft.com/en-us/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.65-azure |
SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.42](https://support.microsoft.com/en-us/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.56-azure |
SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.41](https://support.microsoft.com/en-us/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.44-azure |
SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.40](https://support.microsoft.com/en-us/topic/update-rollup-53-for-azure-site-recovery-060268ef-5835-bb49-7cbc-e8c1e6c6e12a) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.38-azure |
-SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.39](https://support.microsoft.com/help/4597409/) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.34-azure |
- ### SUSE Linux Enterprise Server 15 supported kernel versions **Release** | **Mobility service version** | **Kernel version** | | | |
+SUSE Linux Enterprise Server 15, SP1, SP2 | [9.44](https://support.microsoft.com/en-us/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure
SUSE Linux Enterprise Server 15, SP1, SP2 | [9.43](https://support.microsoft.com/en-us/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure
SUSE Linux Enterprise Server 15, SP1, SP2 | [9.42](https://support.microsoft.com/en-us/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.47-azure
SUSE Linux Enterprise Server 15, SP1, SP2 | [9.41](https://support.microsoft.com/en-us/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.35-azure
SUSE Linux Enterprise Server 15, SP1, SP2 | [9.40](https://support.microsoft.com/en-us/topic/update-rollup-53-for-azure-site-recovery-060268ef-5835-bb49-7cbc-e8c1e6c6e12a) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.29-azure
-SUSE Linux Enterprise Server 15, SP1, SP2 | [9.39](https://support.microsoft.com/help/4597409/) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.47-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.21-azure
## Linux file systems/guest storage
BTRFS | BTRFS is supported from [Update Rollup 34](https://support.microsoft.com
**Action** | **Details** |
-Resize disk on replicated VM (Not supported for Preview architecture)| Supported on the source VM before failover, directly in the VM properties. No need to disable/re-enable replication.<br/><br/> If you change the source VM after failover, the changes aren't captures.<br/><br/> If you change the disk size on the Azure VM after failover, when you fail back, Site Recovery creates a new VM with the updates.
+Resize disk on replicated VM (Not supported for Preview architecture)| Resizing up on the source VM is supported. Resizing down on the source VM is not supported. Resizing should be performed before failover, directly in the VM properties. No need to disable/re-enable replication.<br/><br/> If you change the source VM after failover, the changes aren't captured.<br/><br/> If you change the disk size on the Azure VM after failover, when you fail back, Site Recovery creates a new VM with the updates.
Add disk on replicated VM | Not supported.<br/> Disable replication for the VM, add the disk, and then re-enable replication. > [!NOTE]
static-web-apps User Information https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/user-information.md
public static class StaticWebAppsAuth
+When a user is logged in, the `x-ms-client-principal` header is added to the requests for user information via the Static Web Apps edge nodes.
+ <sup>1</sup> The [fetch](https://caniuse.com/#feat=fetch) API and [await](https://caniuse.com/#feat=mdn-javascript_operators_await) operator aren't supported in Internet Explorer. ## Next steps
storage Scalability Targets Standard Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/scalability-targets-standard-account.md
Previously updated : 07/22/2021 Last updated : 09/07/2021
storage Storage Account Keys Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-account-keys-manage.md
To rotate your storage account access keys with Azure CLI:
```azurecli-interactive az storage account keys renew \ --resource-group <resource-group> \
- --account-name <storage-account>
+ --account-name <storage-account> \
--key primary ``` 1. Update the connection strings in your code to reference the new primary access key.
-2. Regenerate the secondary access key in the same manner. To regenerate the secondary key, use `key2` as the key name instead of `key1`.
+2. Regenerate the secondary access key in the same manner. To regenerate the secondary key, use `secondary` as the key name instead of `primary`.
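As a quick recap of the key-rotation flow above, a minimal Azure CLI sketch; the resource group and account names are placeholders you substitute:

```azurecli-interactive
# Regenerate the secondary access key after the primary has been rotated.
az storage account keys renew \
    --resource-group <resource-group> \
    --account-name <storage-account> \
    --key secondary

# List both keys to confirm the new values before updating your apps.
az storage account keys list \
    --resource-group <resource-group> \
    --account-name <storage-account> \
    --output table
```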
synapse-analytics Concepts Data Factory Differences https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/data-integration/concepts-data-factory-differences.md
Previously updated : 08/25/2021 Last updated : 09/07/2021
Check the following table for feature availability:
| **Integration Runtime** | Using SSIS and SSIS Integration Runtime | ✓ | ✗ |
| | Support for Cross-region Integration Runtime (Data Flows) | ✓ | ✗ |
| | Integration Runtime Sharing | ✓<br><small>*Can be shared across different data factories* | ✗ |
-| | Time to Live | ✓ | ✗ |
| **Pipelines Activities** | SSIS Package Activity | ✓ | ✗ |
| | Support for Power Query Activity | ✓ | ✗ |
| **Template Gallery and Knowledge center** | Solution Templates | ✓<br><small>*Azure Data Factory Template Gallery* | ✓<br><small>*Synapse Workspace Knowledge center* |
Check the following table for feature availability:
| **Monitoring** | Monitoring of Spark Jobs for Data Flow | ✗ | ✓<br><small>*Leverage the Synapse Spark pools* |
| | Integration with Azure Monitor | ✓ | ✗ |
-> [!Note]
-> **Time to Live** is an Azure Integration Runtime setting that enables the Spark cluster to *stay warm* for a period of time after an execution of data flow.
->
-- ## Next steps Get started with data integration in your Synapse workspace by learning how to [ingest data into an Azure Data Lake Storage Gen2 account](data-integration-data-lake.md).
synapse-analytics Table https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/metadata/table.md
Spark tables provide different data types than the Synapse SQL engines. The foll
|||| | `LongType`, `long`, `bigint` | `bigint` | **Spark**: *LongType* represents 8-byte signed integer numbers. [Reference](/sql/t-sql/data-types/int-bigint-smallint-and-tinyint-transact-sql) | | `BooleanType`, `boolean` | `bit` (Parquet), `varchar(6)` (CSV) | |
-| `DecimalType`, `decimal`, `dec`, `numeric` | `decimal` | **Spark**: *DecimalType* represents arbitrary-precision signed decimal numbers. Backed internally by java.math.BigDecimal. A BigDecimal consists of an arbitrary precision integer unscaled value and a 32-bit integer scale. <br> **SQL**: Fixed precision and scale numbers. When maximum precision is used, valid values are from - 10^38 +1 through 10^38 - 1. The ISO synonyms for decimal are dec and dec(p, s). numeric is functionally identical to decimal. [Reference](/sql/t-sql/data-types/decimal-and-numeric-transact-sql]) |
+| `DecimalType`, `decimal`, `dec`, `numeric` | `decimal` | **Spark**: *DecimalType* represents arbitrary-precision signed decimal numbers. Backed internally by java.math.BigDecimal. A BigDecimal consists of an arbitrary precision integer unscaled value and a 32-bit integer scale. <br> **SQL**: Fixed precision and scale numbers. When maximum precision is used, valid values are from - 10^38 +1 through 10^38 - 1. The ISO synonyms for decimal are dec and dec(p, s). numeric is functionally identical to decimal. [Reference](/sql/t-sql/data-types/decimal-and-numeric-transact-sql) |
| `IntegerType`, `Integer`, `int` | `int` | **Spark** *IntegerType* represents 4-byte signed integer numbers. [Reference](/sql/t-sql/data-types/int-bigint-smallint-and-tinyint-transact-sql)| | `ByteType`, `Byte`, `tinyint` | `smallint` | **Spark**: *ByteType* represents 1-byte signed integer numbers [-128 to 127] and ShortType represents 2-byte signed integer numbers [-32768 to 32767]. <br> **SQL**: Tinyint represents 1-byte signed integer numbers [0, 255] and smallint represents 2-byte signed integer numbers [-32768, 32767]. [Reference](/sql/t-sql/data-types/int-bigint-smallint-and-tinyint-transact-sql)| | `ShortType`, `Short`, `smallint` | `smallint` | Same as above. |
id | name | birthdate
- [Learn more about Azure Synapse Analytics' shared metadata](overview.md) - [Learn more about Azure Synapse Analytics' shared metadata database](database.md)--
synapse-analytics Apache Spark Development Using Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-development-using-notebooks.md
A Synapse notebook is a web interface for you to create files that contain live code, visualizations, and narrative text. Notebooks are a good place to validate ideas and use quick experiments to get insights from your data. Notebooks are also widely used in data preparation, data visualization, machine learning, and other Big Data scenarios.
-With a Synapse notebook, you can:
+With a Synapse notebook, you can:
* Get started with zero setup effort. * Keep data secure with built-in enterprise security features.
Synapse team brought the new notebooks component into Synapse Studio to provide
|Drag and drop to move a cell| Not supported |&#9745;|
|Outline (Table of Content)| Not supported |&#9745;|
|Variable explorer| Not supported |&#9745;<br><small>*Variable explorer supports Python only*|
-|Format text cell with toolbar buttons|&#9745;| Not available |
+|Format text cell with toolbar buttons|&#9745;| Not supported |
|Code cell commenting| Not supported | &#9745;|
We provide rich operations to develop notebooks:
+ [IDE-style IntelliSense](#ide-style-intellisense) + [Code Snippets](#code-snippets) + [Format text cell with toolbar buttons](#format-text-cell-with-toolbar-buttons)
-+ [Undo cell operation](#undo-cell-operation)
++ [Undo/Redo cell operation](#undo-redo-cell-operation) + [Code cell commenting](#Code-cell-commenting) + [Move a cell](#move-a-cell) + [Delete a cell](#delete-a-cell)
There are multiple ways to add a new cell to your notebook.
# [Preview Notebook](#tab/preview)
-1. Expand the upper left **+ Cell** button, and select **code cell** or **Markdown cell**.
+1. Hover over the space between two cells and select **Code** or **Markdown**.
![Screenshot of add-azure-notebook-cell-with-cell-button](./media/apache-spark-development-using-notebooks/synapse-azure-notebook-add-cell-1.png)
-2. Select the plus sign at the beginning of a cell and select **Code cell** or **Markdown cell**.
- ![Screenshot of add-azure-notebook-cell-between-space](./media/apache-spark-development-using-notebooks/synapse-azure-notebook-add-cell-2.png)
-
-3. Use [aznb Shortcut keys under command mode](#shortcut-keys-under-command-mode). Press **A** to insert a cell above the current cell. Press **B** to insert a cell below the current cell.
+2. Use [aznb Shortcut keys under command mode](#shortcut-keys-under-command-mode). Press **A** to insert a cell above the current cell. Press **B** to insert a cell below the current cell.
There are multiple ways to add a new cell to your notebook.
Synapse notebooks support four Apache Spark languages:
-* pySpark (Python)
+* PySpark (Python)
* Spark (Scala)
-* SparkSQL
-* .NET for Apache Spark (C#)
+* Spark SQL
+* .NET Spark (C#)
You can set the primary language for new added cells from the dropdown list in the top command bar.
The IntelliSense features are at different levels of maturity for different lang
|Languages| Syntax Highlight | Syntax Error Marker | Syntax Code Completion | Variable Code Completion| System Function Code Completion| User Function Code Completion| Smart Indent | Code Folding| |--|--|--|--|--|--|--|--|--| |PySpark (Python)|Yes|Yes|Yes|Yes|Yes|Yes|Yes|Yes|
-|Spark (Scala)|Yes|Yes|Yes|Yes|-|-|-|Yes|
-|SparkSQL|Yes|Yes|-|-|-|-|-|-|
+|Spark (Scala)|Yes|Yes|Yes|Yes|Yes|Yes|-|Yes|
+|SparkSQL|Yes|Yes|Yes|Yes|Yes|-|-|-|
|.NET for Spark (C#)|Yes|Yes|Yes|Yes|Yes|Yes|Yes|Yes| >[!Note]
The Format button toolbar is not available for the preview notebook experience yet.
-<h3 id="undo-cell-operation">Undo cell operation</h3>
+<h3 id="undo-redo-cell-operation">Undo/Redo cell operation</h3>
# [Classical Notebook](#tab/classical)
Supported undo cell operations:
> [!NOTE] > In-cell text operations and code cell commenting operations are not undoable.
+> You can undo/redo up to the 10 most recent cell operations.
Select the arrow button at the bottom of the current cell to collapse it. To exp
# [Preview Notebook](#tab/preview)
-Select the **More commands** ellipses (...) on the cell toolbar and **input** to collapse current cell's input. To expand it, Select the **input hidden** while the cell is collapsed.
+Select the **More commands** ellipsis (...) on the cell toolbar and then **Hide input** to collapse the current cell's input. To expand it, select **Show input** while the cell is collapsed.
![Animated GIF of azure-notebook-collapse-cell-input](./media/apache-spark-development-using-notebooks/synapse-azure-notebook-collapse-cell-input.gif)
Select the **collapse output** button at the upper left of the current cell outp
# [Preview Notebook](#tab/preview)
-Select the **More commands** ellipses (...) on the cell toolbar and **output** to collapse current cell's output. To expand it, select the same button while the cell's output is hidden.
+Select the **More commands** ellipsis (...) on the cell toolbar and then **Hide output** to collapse the current cell's output. To expand it, select **Show output** while the cell's output is hidden.
![Animated GIF of azure-notebook-collapse-cell-output](./media/apache-spark-development-using-notebooks/synapse-azure-notebook-collapse-cell-output.gif)
synapse-analytics Microsoft Spark Utilities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/microsoft-spark-utilities.md
FS.Rm("file path", true) // Set the last parameter as True to remove all files a
::: zone-end + ## Notebook utilities +
+Not supported.
+++ You can use the MSSparkUtils Notebook Utilities to run a notebook or exit a notebook with a value. Run the following command to get an overview of the available methods:
run(path: String, timeoutSeconds: int, arguments: Map): String -> This method ru
```
-### Run a notebook
-Runs a notebook and returns its exit value. You can run nesting function calls in a notebook interactively or in a pipeline. The notebook being referenced will run on the Spark pool of which notebook calls this function.
+### Reference a notebook
+References a notebook and returns its exit value. You can run nested function calls in a notebook interactively or in a pipeline. The referenced notebook runs on the Spark pool of the notebook that calls this function.
```python
Sample1 run success with input is 20
``` ::: zone-end - :::zone pivot = "programming-language-scala"
-## Notebook utilities
- You can use the MSSparkUtils Notebook Utilities to run a notebook or exit a notebook with a value. Run the following command to get an overview of the available methods:
run(path: String, timeoutSeconds: int, arguments: Map): String -> This method ru
```
-### Run a notebook
-Runs a notebook and returns its exit value. You can run nesting function calls in a notebook interactively or in a pipeline. The notebook being referenced will run on the Spark pool of which notebook calls this function.
+### Reference a notebook
+References a notebook and returns its exit value. You can run nested function calls in a notebook interactively or in a pipeline. The referenced notebook runs on the Spark pool of the notebook that calls this function.
```scala
Returns Azure AD token for a given audience, name (optional). The table below li
|--|--| |Audience Resolve Type|'Audience'| |Storage Audience Resource|'Storage'|
-|Data Warehouse Audience Resource|'DW'|
+|Dedicated SQL pools (Data warehouse)|'DW'|
|Data Lake Audience Resource|'AzureManagement'| |Vault Audience Resource|'DataLakeStore'| |Azure OSSDB Audience Resource|'AzureOSSDB'|
Env.GetClusterId()
::: zone-end +
+## Runtime Context
+
+MSSparkUtils runtime utils expose three runtime properties. You can use the MSSparkUtils runtime context to get the properties listed below:
+- **Notebookname** - The name of the current notebook. Always returns a value, in both interactive mode and pipeline mode.
+- **Pipelinejobid** - The pipeline run ID. Returns a value in pipeline mode and an empty string in interactive mode.
+- **Activityrunid** - The notebook activity run ID. Returns a value in pipeline mode and an empty string in interactive mode.
+
+Currently, the runtime context supports both Python and Scala.
++
+```python
+mssparkutils.runtime.context
+```
++
+```scala
+%%spark
+mssparkutils.runtime.context
+```
+ ## Next steps - [Check out Synapse sample notebooks](https://github.com/Azure-Samples/Synapse/tree/master/Notebooks)
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/whats-new.md
Curious about the latest updates for FSLogix? Check out [What's new at FSLogix](
Here's what changed in August 2021:
-### Windows 11 (Preview) on AVD
+### Windows 11 (Preview) for Azure Virtual Desktop
Windows 11 (Preview) images are now available in the Azure Marketplace for customers to test and validate with Azure Virtual Desktop. For more information, see [our announcement](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/windows-11-preview-is-now-available-on-azure-virtual-desktop/ba-p/2666468).
-### Multimedia Redirection (MMR) is now in public preview
+### Multimedia redirection is now in public preview
-Multimedia redirection (MMR) gives you smooth video playback while watching videos in your Azure Virtual Desktop web browser and works with Microsoft Edge and Google Chrome. Learn more at [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/public-preview-announcing-public-preview-of-multimedia/m-p/2663244#M7692).
+Multimedia redirection gives you smooth video playback while watching videos in your Azure Virtual Desktop web browser and works with Microsoft Edge and Google Chrome. Learn more at [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/public-preview-announcing-public-preview-of-multimedia/m-p/2663244#M7692).
### IP virtualization support for Windows Server 2019 IP virtualization is supported on Windows Server 2008 R2 and up. Additional steps are needed to use IP virtualization for Windows Server 2019. For more information, see [our announcement](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/ip-virtualization-support-for-windows-server-2019/m-p/2658650).
-### Windows Defender Application Control and Azure Disk Encryption is now supported
+### Windows Defender Application Control and Azure Disk Encryption support
-Azure Virtual Desktop now supports Windows Defender Application Control to control which drivers and applications are allowed to run on the Windows VM, and Azure Disk Encryption which uses Windows BitLocker to provide volume encryption for the OS and data disks of your VMs. For more information, see [our announcement](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/support-for-windows-defender-application-control-and-azure-disk/m-p/2658633#M7685).
+Azure Virtual Desktop now supports Windows Defender Application Control to control which drivers and applications are allowed to run on Windows virtual machines (VMs), and Azure Disk Encryption, which uses Windows BitLocker to provide volume encryption for the OS and data disks of your VMs. For more information, see [our announcement](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/support-for-windows-defender-application-control-and-azure-disk/m-p/2658633#M7685).
### Signing in to Azure AD using smart cards is now supported in Azure Virtual Desktop
-While this isn't a new feature for Azure AD, configuring Active Directory Federation Services to sign in with smart cards is now supported in Azure Virtual Desktop. For more information, see [our announcement](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/signing-in-to-azure-ad-using-smart-cards-now-supported-in-azure/m-p/2654209#M7671).
+While this isn't a new feature for Azure AD, Azure Virtual Desktop now supports configuring Active Directory Federation Services to sign in with smart cards. For more information, see [our announcement](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/signing-in-to-azure-ad-using-smart-cards-now-supported-in-azure/m-p/2654209#M7671).
### Screen capture protection is now generally available
-Prevent sensitive information from being screen captured by software running on the client endpoints with screen capture protection in AVD. Learn more at our [blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/announcing-general-availability-of-screen-capture-protection-for/m-p/2699684).
+Prevent sensitive information from being screen captured by software running on the client endpoints with screen capture protection in Azure Virtual Desktop. Learn more at our [blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/announcing-general-availability-of-screen-capture-protection-for/m-p/2699684).
## July 2021
To learn more about new features, check out [this blog post](https://techcommuni
### Autoscaling tool update
-The latest version of the autoscaling tool that was in preview is now generally available. This tool uses an Azure automation account and the Azure Logic App to automatically shut down and restart session host virtual machines (VMs) within a host pool, reducing infrastructure costs. Learn more at [Scale session hosts using Azure Automation](set-up-scaling-script.md).
+The latest version of the autoscaling tool that was in preview is now generally available. This tool uses an Azure Automation account and Azure Logic Apps to automatically shut down and restart session host VMs within a host pool, reducing infrastructure costs. Learn more at [Scale session hosts using Azure Automation](set-up-scaling-script.md).
### Azure portal
Here's what this change does for you:
- In this update, you no longer need to run Azure Marketplace or the GitHub template repeatedly to expand a host pool. All you need to expand a host pool is to go to your host pool in the Azure portal and select **+ Add** to deploy additional session hosts. -- Host pool deployment is now fully integrated with the [Azure Shared Image Gallery](../virtual-machines/shared-image-galleries.md). Shared Image Gallery is a separate Azure service that stores virtual machine (VM) image definitions, including image versioning. You can also use global replication to copy and send your images to other Azure regions for local deployment.
+- Host pool deployment is now fully integrated with the [Azure Shared Image Gallery](../virtual-machines/shared-image-galleries.md). Shared Image Gallery is a separate Azure service that stores VM image definitions, including image versioning. You can also use global replication to copy and send your images to other Azure regions for local deployment.
- Monitoring functions that used to be done through PowerShell or the Diagnostics Service web app have now moved to Log Analytics in the Azure portal. You also now have two options to visualize your reports. You can run Kusto queries and use Workbooks to create visual reports.
virtual-machine-scale-sets Virtual Machine Scale Sets Orchestration Modes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md
Register and get started with [Flexible orchestration mode](..\virtual-machines\
|-|-|-|-| | Maximum Instance Count (with FD availability guarantee) | 1000 | 3000 | 200 | - ## Troubleshoot scale sets with Flexible orchestration Find the right solution to your troubleshooting scenario.
virtual-machines Disk Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disk-encryption.md
Title: Server-side encryption of Azure managed disks description: Azure Storage protects your data by encrypting it at rest before persisting it to Storage clusters. You can use customer-managed keys to manage encryption with your own keys, or you can rely on Microsoft-managed keys for the encryption of your managed disks. Previously updated : 06/29/2021 Last updated : 09/03/2021
To enable double encryption at rest for managed disks, see our articles covering
> [!IMPORTANT] > Customer-managed keys rely on managed identities for Azure resources, a feature of Azure Active Directory (Azure AD). When you configure customer-managed keys, a managed identity is automatically assigned to your resources under the covers. If you subsequently move the subscription, resource group, or managed disk from one Azure AD directory to another, the managed identity associated with managed disks is not transferred to the new tenant, so customer-managed keys may no longer work. For more information, see [Transferring a subscription between Azure AD directories](../active-directory/managed-identities-azure-resources/known-issues.md#transferring-a-subscription-between-azure-ad-directories).
+## Frequently asked questions
+
+**Q: Is Server-side Encryption enabled by default when I create a managed disk?**
+
+**A:** Yes. Managed disks are encrypted by using server-side encryption and platform-managed keys.
+
+**Q: Is the boot volume encrypted by default on a managed disk?**
+
+**A:** Yes. By default, all managed disks are encrypted, including the OS disk.
+
+**Q: Who manages the encryption keys?**
+
+**A:** Platform-managed keys are managed by Microsoft. You can also use and manage your own keys that are stored in Azure Key Vault.
+
+**Q: Can I disable Server-side Encryption for my managed disks?**
+
+**A:** No.
+
+**Q: Is Server-side Encryption available only in specific regions?**
+
+**A:** No. Server-side encryption with both platform-managed and customer-managed keys is available in all regions where Azure managed disks are available.
+
+**Q: Does Azure Site Recovery support Server-side Encryption that uses customer-managed key for on-premises-to-Azure and Azure-to-Azure disaster recovery scenarios?**
+
+**A:** Yes.
+
+**Q: Can I use the Azure Backup service to back up managed disks that are encrypted by server-side encryption that uses customer-managed keys?**
+
+**A:** Yes.
+
+**Q: Are managed snapshots and images encrypted?**
+
+**A:** Yes. All managed snapshots and images are automatically encrypted.
+
+**Q: Can I convert VM unmanaged disks to managed disks if those disks are located on storage accounts that are, or were previously, encrypted?**
+
+**A:** Yes.
+
+**Q: Will an exported VHD from a managed disk or a snapshot also be encrypted?**
+
+**A:** No. But if you export a VHD to an encrypted storage account from an encrypted managed disk or snapshot, then it's encrypted.
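To see which type of server-side encryption protects a specific disk, one option is to query the disk's encryption settings; a minimal Azure CLI sketch (the disk and resource group names are placeholders):

```azurecli-interactive
# Returns EncryptionAtRestWithPlatformKey for platform-managed keys, or
# EncryptionAtRestWithCustomerKey when a customer-managed key is configured.
az disk show \
    --resource-group <resource-group> \
    --name <disk-name> \
    --query "encryption.type" \
    --output tsv
```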
+ ## Next steps - Enable end-to-end encryption using encryption at host with either the [Azure PowerShell module](windows/disks-enable-host-based-encryption-powershell.md), the [Azure CLI](linux/disks-enable-host-based-encryption-cli.md), or the [Azure portal](disks-enable-host-based-encryption-portal.md).
virtual-machines Disks Enable Private Links For Import Export Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-enable-private-links-for-import-export-portal.md
description: Enable Private Link for your managed disks with Azure portal. This
Previously updated : 08/19/2021 Last updated : 09/03/2021
Next, you'll need to create a private endpoint and configure it for disk access.
You've now configured a private link that you can use to import and export your managed disk.
+## Frequently asked questions
+
+**Q: What is the benefit of using Private Links for exporting and importing Managed Disks?**
+
+**A:** You can use Private Links to restrict the export and import process to Managed Disks only from your Azure virtual network.
+
+**Q: How can I make sure that a disk can be exported or imported only through Private Links?**
+
+**A:** You must set the **DiskAccessId** property to an instance of a disk access object. Additionally, you can set the **NetworkAccessPolicy** property to **AllowPrivate**.
+
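If you script this instead of using the portal, a hedged Azure CLI sketch of the same configuration might look like the following; it assumes the disk access object already exists, and the names are placeholders:

```azurecli-interactive
# Bind the disk to a disk access object and restrict SAS-based export and
# import to the private endpoint associated with that object.
az disk update \
    --resource-group <resource-group> \
    --name <disk-name> \
    --network-access-policy AllowPrivate \
    --disk-access <disk-access-name>
```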
+**Q: Can I link multiple virtual networks to the same disk access object?**
+
+**A:** No. Currently, you can link a disk access object to only one virtual network.
+
+**Q: Can I link a virtual network to a disk access object in another subscription?**
+
+**A:** No. Currently, you can link a disk access object to only a virtual network in the same subscription.
+
+**Q: How many exports or imports that use the same disk access object can occur at the same time?**
+
+**A:** You can have five simultaneous exports or imports.
+
+**Q: Can I use an SAS URI of a disk or snapshot to download the underlying VHD of a VM that's not in the same subnet as the subnet of the private endpoint that's associated with the disk?**
+
+**A:** No. You can do this only for a VM that's in the same subnet as the subnet of the private endpoint that's associated with the disk.
+ ## Next steps - Upload a VHD to Azure or copy a managed disk to another region - [Azure CLI](linux/disks-upload-vhd-to-managed-disk-cli.md) or [Azure PowerShell module](windows/disks-upload-vhd-to-managed-disk-powershell.md)
virtual-machines Disks Shared https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-shared.md
description: Learn about sharing Azure managed disks across multiple Linux VMs.
Previously updated : 08/16/2021 Last updated : 09/03/2021
The following is an example of a 4-node Linux cluster with a single writer and t
Ultra shared disks are priced based on provisioned capacity, total provisioned IOPS (diskIOPSReadWrite + diskIOPSReadOnly) and total provisioned Throughput MBps (diskMBpsReadWrite + diskMBpsReadOnly). There is no extra charge for each additional VM mount. For example, an ultra shared disk with the following configuration (diskSizeGB: 1024, DiskIOPSReadWrite: 10000, DiskMBpsReadWrite: 600, DiskIOPSReadOnly: 100, DiskMBpsReadOnly: 1) is charged with 1024 GiB, 10100 IOPS, and 601 MBps regardless of whether it is mounted to two VMs or five VMs.
+## Frequently asked questions
+
+**Q: Is the shared disks feature supported for unmanaged disks or page blobs?**
+
+**A:** No. The feature is supported only for ultra disks and Premium SSD managed disks.
+
+**Q: Which regions support shared disks?**
+
+**A:** For regional information, see our [conceptual article](/azure/virtual-machines/disks-shared).
+
+**Q: Can shared disks be used as an OS disk?**
+
+**A:** No. Shared disks are only supported for data disks.
+
+**Q: Which disk sizes support shared disks?**
+
+**A:** For supported sizes, see our [conceptual article](/azure/virtual-machines/disks-shared).
+
+**Q: If I have an existing disk, can I enable shared disks on it?**
+
+**A:** All managed disks that are created by using API version 2019-07-01 or a later version can enable shared disks. To do this, you have to unmount the disk from all VMs that it is attached to. Next, edit the maxShares property on the disk.
+
+**Q: If I no longer want to use a disk in shared mode, how do I disable it?**
+
+**A:** Unmount the disk from all VMs that it is attached to. Then change the maxShares property on the disk to **1**.
+
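Both of the preceding answers come down to editing the maxShares value while the disk is detached; a minimal Azure CLI sketch (names are placeholders):

```azurecli-interactive
# Enable shared disks: allow up to two simultaneous attachments.
# The disk must be detached from all VMs before maxShares can change.
az disk update \
    --resource-group <resource-group> \
    --name <disk-name> \
    --max-shares 2

# Disable shared disks again by setting the value back to 1.
az disk update \
    --resource-group <resource-group> \
    --name <disk-name> \
    --max-shares 1
```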
+**Q: Can I resize a shared disk?**
+
+**A:** Yes.
+
+**Q: Can I enable write accelerator on a disk that also has shared disks enabled?**
+
+**A:** No. You can't enable write accelerator on a disk that also has shared disks enabled.
+
+**Q: Can I enable host caching for a disk that has shared disks enabled?**
+
+**A:** The only supported host caching option is **None**.
+ ## Next steps If you're interested in enabling and using shared disks for your managed disks, proceed to our article [Enable shared disk](disks-shared-enable.md)
virtual-machines Flexible Virtual Machine Scale Sets Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/flexible-virtual-machine-scale-sets-portal.md
This article steps through using Azure portal to create a virtual machine scale
Before you can deploy virtual machine scale sets in Flexible orchestration mode, you must first register your subscription for the preview feature. Feature registration can take up to 15 minutes.
-During the Flexible orchestration mode for scale sets preview, use the *preview* Azure portal linked in the steps below.
-
-1. Log into the Azure portal at https://preview.portal.azure.com.
+1. Log into the Azure portal at https://portal.azure.com.
1. Go to your **Subscriptions**. 1. Navigate to the details page for the subscription you would like to create a scale set in Flexible orchestration mode by selecting the name of the subscription. 1. In the menu under **Settings**, select **Preview features**.
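If you prefer to register from the command line, the generic preview-feature commands can be used as well. In this sketch, `<FeatureName>` is a placeholder for the exact preview feature name shown on the **Preview features** page, since this article doesn't spell it out:

```azurecli-interactive
# Register the preview feature on the current subscription.
az feature register --namespace Microsoft.Compute --name <FeatureName>

# Registration can take up to 15 minutes; poll until the state is Registered.
az feature show --namespace Microsoft.Compute --name <FeatureName> \
    --query properties.state
```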
Once the features have been registered for your subscription, complete the opt-i
### Create a virtual machine scale set in Flexible orchestration mode through the Azure portal.
-During the Flexible orchestration mode for scale sets preview, use the *preview* Azure portal linked in the steps below.
-
-1. Log into the Azure portal at https://preview.portal.azure.com.
+1. Log into the Azure portal at https://portal.azure.com.
1. In the search bar, search for and select **Virtual machine scale sets**. 1. Select **Create** on the **Virtual machine scale sets** page. 1. On the **Create a virtual machine scale set** page, view the **Orchestration** section.
virtual-machines Flexible Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/flexible-virtual-machine-scale-sets.md
Before you can deploy virtual machine scale sets in Flexible orchestration mode,
### Azure portal
-During the Flexible orchestration mode for scale sets preview, use the *preview* Azure portal linked in the steps below.
-
-1. Log into the Azure portal at https://preview.portal.azure.com.
+1. Log into the Azure portal at https://portal.azure.com.
1. Go to your **Subscriptions**. 1. Navigate to the details page for the subscription you would like to create a scale set in Flexible orchestration mode by selecting the name of the subscription. 1. In the menu under **Settings**, select **Preview features**.
Virtual machine scale sets with Flexible orchestration works as a thin orchestra
When you create a VM, you can optionally specify that it is added to a virtual machine scale set. A VM can only be added to a scale set at time of VM creation.
+Flexible orchestration mode can be used with VM SKUs that support [memory preserving updates or live migration](../virtual-machines/maintenance-and-updates.md#maintenance-that-doesnt-require-a-reboot), which includes 90% of all IaaS VMs that are deployed in Azure. Broadly this includes general purpose size families such as B-, D-, E- and F-series VMs. Currently, the Flexible mode cannot orchestrate over VM SKUs or families which do not support memory preserving updates, including G-, H-, L-, M-, N- series VMs. You can use the [Compute Resource SKUs API](/rest/api/compute/resource-skus/list) to determine whether a specific VM SKU is supported.
+
+```azurecli-interactive
+az vm list-skus -l eastus --size standard_d2s_v3 --query "[].capabilities[].[name, value]" -o table
+```
## Explicit Network Outbound Connectivity required
With single instance VMs and Virtual machine scale sets with Uniform orchestrati
Common scenarios that will require explicit outbound connectivity include: -- Windows VM activation will require that you have defined outbound connectivity from the VM instance to the Windows Activation Key Management Service (KMS). See [Troubleshoot Windows VM activation problems](https://docs.microsoft.com/troubleshoot/azure/virtual-machines/troubleshoot-activation-problems) for more information.
+- Windows VM activation will require that you have defined outbound connectivity from the VM instance to the Windows Activation Key Management Service (KMS). See [Troubleshoot Windows VM activation problems](/troubleshoot/azure/virtual-machines/troubleshoot-activation-problems) for more information.
- Access to storage accounts or Key Vault. Connectivity to Azure services can also be established via [Private Link](../private-link/private-link-overview.md). See [Default outbound access in Azure](https://aka.ms/defaultoutboundaccess) for more details on defining secure outbound connections.
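One common way to satisfy the explicit outbound requirement described in the scenarios above is to attach a NAT gateway to the subnet that hosts the scale set VMs; a hedged Azure CLI sketch follows (all resource names are illustrative):

```azurecli-interactive
# A NAT gateway requires a Standard SKU public IP.
az network public-ip create \
    --resource-group <resource-group> \
    --name myNatIp \
    --sku Standard

# Create the NAT gateway and bind it to the public IP.
az network nat gateway create \
    --resource-group <resource-group> \
    --name myNatGateway \
    --public-ip-addresses myNatIp

# Attach the NAT gateway to the scale set's subnet.
az network vnet subnet update \
    --resource-group <resource-group> \
    --vnet-name <vnet-name> \
    --name <subnet-name> \
    --nat-gateway myNatGateway
```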
virtual-machines Maintenance Notifications Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/maintenance-notifications-portal.md
Previously updated : 11/19/2019 Last updated : 09/08/2021 #pmcontact: shants
You can use the Azure portal and look for VMs scheduled for maintenance.
2. In the left navigation, click **Virtual Machines**.
-3. In the Virtual Machines pane, select **Edit columns** button to open the list of available columns.
-
-4. Select and add the following columns:
+3. In the Virtual Machines pane, select **Maintenance -> Virtual machine maintenance** to open the list with the maintenance columns.
**Maintenance status**: Shows the maintenance status for the VM. The following are the potential values:
virtual-machines Jboss Eap On Azure Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/redhat/jboss-eap-on-azure-migration.md
If your application uses any databases, you need to capture the following inform
* What is the connection pool configuration? * Where can I find the Java Database Connectivity (JDBC) driver JAR file?
-For more information, see [About JBoss EAP DataSources](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.3html/configuration_guide/datasource_management) in the JBoss EAP documentation.
+For more information, see [About JBoss EAP DataSources](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.4/html/configuration_guide/datasource_management) in the JBoss EAP documentation.
### Determine whether and how the file system is used
virtual-network Public Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/public-ip-addresses.md
Basic SKU addresses:
## IP address assignment
- Standard public IPv4, Basic public IPv4, and Standard public IPv6 addresses all support **static** assignment. The resource is assigned an IP address at the time it's created. The IP address is released when the resource is deleted.
+Standard public IPv4, Basic public IPv4, and Standard public IPv6 addresses all support **static** assignment. The resource is assigned an IP address at the time it's created. The IP address is released when the resource is deleted.
> [!NOTE] > Even when you set the allocation method to **static**, you cannot specify the actual IP address assigned to the public IP address resource. Azure assigns the IP address from a pool of available IP addresses in the Azure location the resource is created in.
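As a concrete illustration of static assignment, a minimal Azure CLI sketch that creates a Standard static IPv4 address (the resource names are illustrative):

```azurecli-interactive
# Standard SKU public IPs always use static allocation.
az network public-ip create \
    --resource-group <resource-group> \
    --name myStandardPublicIp \
    --sku Standard \
    --version IPv4 \
    --allocation-method Static
```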
virtual-wan Pricing Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-wan/pricing-concepts.md
Virtual WAN comes in two flavors:
* A **Basic virtual WAN**, where users can deploy multiple hubs and use VPN Site-to-site connectivity. A Basic virtual WAN does not have advanced capabilities such as fully meshed hubs, ExpressRoute connectivity, User VPN/Point-to-site VPN connectivity, VNet-to-VNet transitive connectivity, VPN and ExpressRoute transit connectivity, or Azure Firewall. There is no base fee or data processing fee for hubs in a Basic virtual WAN.
-* A **Standard virtual WAN** provides advanced capabilities, such as fully meshed hubs, ExpressRoute connectivity, User VPN/Point-to-site VPN connectivity, VNet-to-VNet transitive connectivity, VPN and ExpressRoute transit connectivity, and Azure Firewall, etc. All of the virtual hub routing is provided by a router that enables multiple services in a virtual hub. There is a base fee for the hub, which is priced at $0.25/hr. There is also a charge for data processing in the virtual hub router for VNet-to-VNet transit connectivity. The data processing charge in the virtual hub router is not applicable for branch-to-branch transfers (Scenario 2, 3), or VNet-to-branch transfers via the same vWAN hub (Scenario 1) in this article.
+* A **Standard virtual WAN** provides advanced capabilities, such as fully meshed hubs, ExpressRoute connectivity, User VPN/Point-to-site VPN connectivity, VNet-to-VNet transitive connectivity, VPN and ExpressRoute transit connectivity, and Azure Firewall, etc. All of the virtual hub routing is provided by a router that enables multiple services in a virtual hub. There is a base fee for the hub, which is priced at $0.25/hr. There is also a charge for data processing in the virtual hub router for VNet-to-VNet transit connectivity. The data processing charge in the virtual hub router is not applicable for branch-to-branch transfers (Scenario 2, 2', 3), or VNet-to-branch transfers via the same vWAN hub (Scenario 1, 1') as shown in the [Pricing Components](#pricing).
## Next steps
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-wan/virtual-wan-faq.md
Virtual WAN comes in two flavors: Basic and Standard. In Basic Virtual WAN, hubs
### How are Availability Zones and resiliency handled in Virtual WAN?
-Virtual WAN is a collection of hubs and services made available inside the hub. The user can have as many Virtual WAN per their need. In a Virtual WAN hub, there are multiple services like VPN, ExpressRoute etc. Each of these services (except the Azure Firewall) is deployed in an Availability Zones region, that is if the region supports Availability Zones. If a region becomes an Availability Zone after the initial deployment in the hub, the user can recreate the gateways, which will trigger an Availability Zone deployment. All gateways are provisioned in a hub as active-active, implying there is resiliency built in within a hub. Users can connect to multiple hubs if they want resiliency across regions.
+Virtual WAN is a collection of hubs and services made available inside the hub. Users can have as many virtual WANs as they need. In a Virtual WAN hub, there are multiple services, such as VPN and ExpressRoute. Each of these services (except Azure Firewall, which is not enabled by default) is deployed in an Availability Zones region, that is, if the region supports Availability Zones. If a region gains Availability Zones after the initial deployment in the hub, the user can recreate the gateways, which will trigger an Availability Zone deployment. All gateways are provisioned in a hub as active-active, which means there is resiliency built in within a hub. Users can connect to multiple hubs if they want resiliency across regions. Azure Firewall can be deployed to support Availability Zones by using [PowerShell](/powershell/module/az.network/new-azfirewall?view=azps-6.3.0#example-6--create-a-firewall-with-no-rules-and-with-availability-zones) or the CLI.
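For the Azure Firewall case called out above, a hedged CLI sketch of a zone-resilient deployment; this assumes the `azure-firewall` CLI extension, and the names are illustrative:

```azurecli-interactive
az extension add --name azure-firewall

# Deploy the firewall across all three Availability Zones in the region.
az network firewall create \
    --resource-group <resource-group> \
    --name myFirewall \
    --location <region> \
    --zones 1 2 3
```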
While the concept of Virtual WAN is global, the actual Virtual WAN resource is Resource Manager-based and deployed regionally. If the virtual WAN region itself were to have an issue, all hubs in that virtual WAN will continue to function as is, but the user will not be able to create new hubs until the virtual WAN region is available.
virtual-wan Virtual Wan Site To Site Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-wan/virtual-wan-site-to-site-portal.md
A hub is a virtual network that can contain gateways for site-to-site, ExpressRo
[!INCLUDE [Create a hub](../../includes/virtual-wan-tutorial-s2s-hub-include.md)]
-## <a name="gateway"></a>Create site-to-site VPN gateway
+## <a name="gateway"></a>Configure a site-to-site gateway
-In this section, you configure site-to-site connectivity settings, and then proceed to create the hub and S2S VPN gateway. A hub and gateway can take about 30 minutes to create.
+In this section, you configure site-to-site connectivity settings, and then proceed to create the hub and site-to-site VPN gateway. A hub and gateway can take about 30 minutes to create.
[!INCLUDE [Create a gateway](../../includes/virtual-wan-tutorial-s2s-gateway-include.md)]
-## <a name="site"></a>Create site
+## <a name="site"></a>Create a site
In this section, you create a site. Sites correspond to your physical locations. Create as many sites as you need. For example, if you have a branch office in NY, a branch office in London, and a branch office in LA, you'd create three separate sites. These sites contain your on-premises VPN device endpoints. You can create up to 1,000 sites per virtual hub in a virtual WAN. If you have multiple hubs, you can create 1,000 sites per hub. If you have a Virtual WAN partner CPE device, check with the partner to learn about their automation to Azure. Typically, automation implies a simple click experience to export large-scale branch information into Azure and to set up connectivity from the CPE to the Azure Virtual WAN VPN gateway. For more information, see [Automation guidance from Azure to CPE partners](virtual-wan-configure-automation-providers.md). [!INCLUDE [Create a site](../../includes/virtual-wan-tutorial-s2s-site-include.md)]
-## <a name="connectsites"></a>Connect VPN site to hub
+## <a name="connectsites"></a>Connect the VPN site to a hub
In this section, you connect your VPN site to the hub. [!INCLUDE [Connect VPN sites](../../includes/virtual-wan-tutorial-s2s-connect-vpn-site-include.md)]
-## <a name="vnet"></a>Connect VNet to hub
+## <a name="vnet"></a>Connect a VNet to the hub
In this section, you create a connection between the hub and your VNet.