Updates from: 02/03/2022 02:09:41
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Publisher Verification Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/publisher-verification-overview.md
Publisher verification helps admins and end users understand the authenticity of
When an application is marked as publisher verified, it means that the publisher has verified their identity using a [Microsoft Partner Network](https://partner.microsoft.com/membership) account that has completed the [verification](/partner-center/verification-responses) process and has associated this MPN account with their application registration. A blue "verified" badge appears on the Azure AD consent prompt and other screens:

+ ![Consent prompt](./media/publisher-verification-overview/consent-prompt.png)
+> [!NOTE]
+> We recently changed the color of the "verified" badge from blue to gray. We'll revert that change during the second half of February 2022, so the "verified" badge will be blue again.
+ This feature is primarily for developers building multi-tenant apps that use [OAuth 2.0 and OpenID Connect](active-directory-v2-protocols.md) with the [Microsoft identity platform](v2-overview.md). These apps can sign users in with OpenID Connect, or they may use OAuth 2.0 to request access to data through APIs like [Microsoft Graph](https://developer.microsoft.com/graph/).

## Benefits
active-directory Sample V2 Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/sample-v2-code.md
The following samples show public client desktop applications that access the Mi
> | Java | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-java-desktop/) | MSAL Java | Integrated Windows authentication |
> | Node.js | [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-desktop) | MSAL Node | Authorization code with PKCE |
> | PowerShell | [Call Microsoft Graph by signing in users using username/password](https://github.com/azure-samples/active-directory-dotnetcore-console-up-v2) | MSAL.NET | Resource owner password credentials |
-> | Python | [Sign in users](https://github.com/Azure-Samples/ms-identity-python-desktop) | MSAL Python | Authorization code with PKCE |
+> | Python | [Sign in users](https://github.com/Azure-Samples/ms-identity-python-desktop) | MSAL Python | Resource owner password credentials |
> | Universal Windows Platform (UWP) | [Call Microsoft Graph](https://github.com/Azure-Samples/active-directory-xamarin-native-v2/tree/main/2-With-broker) | MSAL.NET | Web account manager |
> | Windows Presentation Foundation (WPF) | [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/tree/master/2.%20Web%20API%20now%20calls%20Microsoft%20Graph) | MSAL.NET | Authorization code with PKCE |
> | XAML | &#8226; [Sign in users and call ASP.NET core web API](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/tree/master/1.%20Desktop%20app%20calls%20Web%20API) <br/> &#8226; [Sign in users and call Microsoft Graph](https://github.com/azure-samples/active-directory-dotnet-desktop-msgraph-v2) | MSAL.NET | Authorization code with PKCE |
active-directory Scenario Daemon App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-daemon-app-configuration.md
Configuration parameters for the [Node.js daemon sample](https://github.com/Azur
# Credentials
TENANT_ID=Enter_the_Tenant_Info_Here
CLIENT_ID=Enter_the_Application_Id_Here
+
+// You provide either a ClientSecret, a CertificateConfiguration, or a ClientAssertion. These settings are mutually exclusive.
CLIENT_SECRET=Enter_the_Client_Secret_Here
+CERTIFICATE_THUMBPRINT=Enter_the_certificate_thumbprint_Here
+CERTIFICATE_PRIVATE_KEY=Enter_the_certificate_private_key_Here
+CLIENT_ASSERTION=Enter_the_Assertion_String_Here
# Endpoints
// the Azure AD endpoint is the authority endpoint for token issuance
app = ConfidentialClientApplicationBuilder.Create(config.ClientId)
    .WithAuthority(new Uri(config.Authority))
    .Build();
```

# [Java](#tab/java)

In MSAL Java, there are two builders to instantiate the confidential client application with certificates:
ConfidentialClientApplication cca =
# [Node.js](#tab/nodejs)
-The sample application does not implement initialization with certificates at the moment.
+```JavaScript
+
+const config = {
+ auth: {
+ clientId: process.env.CLIENT_ID,
+ authority: process.env.AAD_ENDPOINT + process.env.TENANT_ID,
+ clientCertificate: {
+ thumbprint: process.env.CERTIFICATE_THUMBPRINT, // a 40-digit hexadecimal string
+ privateKey: process.env.CERTIFICATE_PRIVATE_KEY,
+ }
+ }
+};
+
+// Create an MSAL application object
+const cca = new msal.ConfidentialClientApplication(config);
+```
+
+For details, see [Use certificate credentials with MSAL Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-node/docs/certificate-credentials.md).
# [Python](#tab/python)
ConfidentialClientApplication cca =
# [Node.js](#tab/nodejs)
-The sample application does not implement initialization with assertions at the moment.
+```JavaScript
+const clientConfig = {
+ auth: {
+ clientId: process.env.CLIENT_ID,
+ authority: process.env.AAD_ENDPOINT + process.env.TENANT_ID,
+ clientAssertion: process.env.CLIENT_ASSERTION
+ }
+};
+const cca = new msal.ConfidentialClientApplication(clientConfig);
+```
+
+For details, see [Initialize the ConfidentialClientApplication object](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-node/docs/initialize-confidential-client-application.md).
# [Python](#tab/python)
active-directory V2 Oauth2 On Behalf Of Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth2-on-behalf-of-flow.md
To request an access token, make an HTTP POST to the tenant-specific Microsoft i
https://login.microsoftonline.com/<tenant>/oauth2/v2.0/token
```

+There are two cases, depending on whether the client application chooses to be secured by a shared secret or a certificate.

### First case: Access token request with a shared secret
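For illustration, a minimal PowerShell sketch of the shared-secret request; every value shown is a placeholder to replace with your own:

```powershell
# On-behalf-of token request, secured with a shared secret (a sketch; all values are placeholders).
$body = @{
    grant_type          = "urn:ietf:params:oauth:grant-type:jwt-bearer"
    client_id           = "<middle-tier-app-id>"
    client_secret       = "<client-secret>"
    assertion           = "<incoming-access-token>"   # the token sent by the calling client
    scope               = "https://graph.microsoft.com/user.read"
    requested_token_use = "on_behalf_of"
}

# Invoke-RestMethod form-encodes a hashtable body for POST requests.
$response = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/<tenant>/oauth2/v2.0/token" `
    -ContentType "application/x-www-form-urlencoded" `
    -Body $body

$response.access_token
```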
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new-archive.md
Previously updated : 1/20/2022 Last updated : 1/31/2022
The What's new in Azure Active Directory? release notes provide information abou
- Plans for changes
+
+## July 2021
+
+### New Google sign-in integration for Azure AD B2C and B2B self-service sign-up and invited external users will stop working starting July 12, 2021
+
+**Type:** Plan for change
+**Service category:** B2B
+**Product capability:** B2B/B2C
+
+Previously we announced that [the exception for Embedded WebViews for Gmail authentication will expire in the second half of 2021](https://www.yammer.com/cepartners/threads/1188371962232832).
+
+On July 7, 2021, we learned from Google that some of these restrictions will apply starting **July 12, 2021**. Azure AD B2B and B2C customers who set up a new Google ID sign-in in their custom or line of business applications to invite external users or enable self-service sign-up will have the restrictions applied immediately. As a result, end-users will be met with an error screen that blocks their Gmail sign-in if the authentication is not moved to a system webview. See the docs linked below for details.
+
+Most apps use the system web-view by default and won't be impacted by this change. This only applies to customers using embedded webviews (the non-default setting). We advise customers to move their application's authentication to system browsers instead, prior to creating any new Google integrations. To learn how to move to system browsers for Gmail authentications, read the Embedded vs System Web UI section in the [Using web browsers (MSAL.NET)](../develop/msal-net-web-browsers.md#embedded-vs-system-web-ui) documentation. All MSAL SDKs use the system web-view by default. [Learn more](../external-identities/google-federation.md#deprecation-of-web-view-sign-in-support).
+++
+### Google sign-in on embedded web-views expiring September 30, 2021
+
+**Type:** Plan for change
+**Service category:** B2B
+**Product capability:** B2B/B2C
+
+
+About two months ago we announced that the exception for Embedded WebViews for Gmail authentication will expire in the second half of 2021.
+
+Recently, Google has specified the date to be **September 30, 2021**.
+
+Rolling out globally beginning September 30, 2021, Azure AD B2B guests signing in with their Gmail accounts will now be prompted to enter a code in a separate browser window to finish signing in on Microsoft Teams mobile and desktop clients. This applies to invited guests and guests who signed up using Self-Service Sign-Up.
+
+Azure AD B2C customers who have set up embedded webview Gmail authentications in their custom/line of business apps, or who have existing Google integrations, will no longer be able to let their users sign in with Gmail accounts. To mitigate this, make sure to modify your apps to use the system browser for sign-in. For more information, read the Embedded vs System Web UI section in the [Using web browsers (MSAL.NET)](../develop/msal-net-web-browsers.md#embedded-vs-system-web-ui) documentation. All MSAL SDKs use the system web-view by default.
+
+As the device login flow starts rolling out on September 30, 2021, it may not have reached your region yet (in which case, your end users will see the error screen shown in the documentation until it's deployed to your region).
+
+For details on known impacted scenarios and what experience your users can expect, read [Add Google as an identity provider for B2B guest users](../external-identities/google-federation.md#deprecation-of-web-view-sign-in-support).
+++
+### Bug fixes in My Apps
+
+**Type:** Fixed
+**Service category:** My Apps
+**Product capability:** End User Experiences
+
+- Previously, the presence of the banner recommending the use of collections caused content to scroll behind the header. This issue has been resolved.
+- Previously, when adding apps to a collection, the order of apps in the All Apps collection would get randomly reordered. This issue has also been resolved.
+
+For more information on My Apps, read [Sign in and start apps from the My Apps portal](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+++
+### Public preview - Application authentication method policies
+
+**Type:** New feature
+**Service category:** MS Graph
+**Product capability:** Developer Experience
+
+Application authentication method policies in MS Graph allow IT admins to enforce a lifetime on application password secret credentials, or to block the use of secrets altogether. Policies can be enforced for an entire tenant as a default configuration, and they can be scoped to specific applications or service principals. [Learn more](/graph/api/resources/policy-overview).
+
++
+### Public preview - Authentication Methods registration campaign to download Microsoft Authenticator
+
+**Type:** New feature
+**Service category:** Microsoft Authenticator App
+**Product capability:** User Authentication
+
+The Authenticator registration campaign helps admins move their organizations to a more secure posture by prompting users to adopt the Microsoft Authenticator app. Prior to this feature, there was no way for an admin to push their users to set up the Authenticator app.
+
+The registration campaign comes with the ability for an admin to scope users and groups by including and excluding them from the registration campaign to ensure a smooth adoption across the organization. [Learn more](../authentication/how-to-mfa-registration-campaign.md).
+
++
+### Public preview - Separation of duties check
+
+**Type:** New feature
+**Service category:** User Access Management
+**Product capability:** Entitlement Management
+
+In Azure AD entitlement management, an administrator can define that an access package is incompatible with another access package or with a group. Users who have the incompatible memberships will then be unable to request more access. [Learn more](../governance/entitlement-management-access-package-request-policy.md#prevent-requests-from-users-with-incompatible-access-preview).
+
++
+### Public preview - Identity Protection logs in Log Analytics, Storage Accounts, and Event Hubs
+
+**Type:** New feature
+**Service category:** Identity Protection
+**Product capability:** Identity Security & Protection
+
+You can now send the risky users and risk detections logs to Azure Monitor, Storage Accounts, or Log Analytics using the Diagnostic Settings in the Azure AD blade. [Learn more](../identity-protection/howto-export-risk-data.md).
+
++
+### Public preview - Application Proxy API addition for backend SSL certificate validation
+
+**Type:** New feature
+**Service category:** App Proxy
+**Product capability:** Access Control
+
+The onPremisesPublishing resource type now includes the "isBackendCertificateValidationEnabled" property, which indicates whether backend SSL certificate validation is enabled for the application. For all new Application Proxy apps, the property will be set to true by default. For all existing apps, the property will be set to false. For more information, read the [onPremisesPublishing resource type](/graph/api/resources/onpremisespublishing?view=graph-rest-beta&preserve-view=true) API.
+
++
+### General availability - Improved Authenticator setup experience for adding an Azure AD account in the Microsoft Authenticator app by directly signing into the app
+
+**Type:** New feature
+**Service category:** Microsoft Authenticator App
+**Product capability:** User Authentication
+
+Users can now use their existing authentication methods to directly sign into the Microsoft Authenticator app to set up their credential. Users don't need to scan a QR code anymore, and can use a Temporary Access Pass (TAP) or password + SMS (or another authentication method) to configure their account in the Authenticator app.
+
+This improves the user credential provisioning process for the Microsoft Authenticator app and gives the end user a self-service method to provision the app. [Learn more](https://support.microsoft.com/account-billing/add-your-work-or-school-account-to-the-microsoft-authenticator-app-43a73ab5-b4e8-446d-9e54-2a4cb8e4e93c#sign-in-with-your-credentials).
+
++
+### General availability - Set manager as reviewer in Azure AD entitlement management access packages
+
+**Type:** New feature
+**Service category:** User Access Management
+**Product capability:** Entitlement Management
+
+Access packages in Azure AD entitlement management now support setting the user's manager as the reviewer for regularly occurring access reviews. [Learn more](../governance/entitlement-management-access-reviews-create.md).
+++
+### General availability - Enable external users to self-service sign-up in Azure AD using MSA accounts
+
+**Type:** New feature
+**Service category:** B2B
+**Product capability:** B2B/B2C
+
+Users can now enable self-service sign-up for external users in Azure Active Directory using Microsoft accounts. [Learn more](../external-identities/microsoft-account.md).
+
+
+
+### General availability - External Identities Self-Service Sign-Up with Email One-time Passcode
+
+**Type:** New feature
+**Service category:** B2B
+**Product capability:** B2B/B2C
+
+
+Users can now enable self-service sign-up for external users in Azure Active Directory using their email and a one-time passcode. [Learn more](../external-identities/one-time-passcode.md).
+
++
+### General availability - Anomalous token
+
+**Type:** New feature
+**Service category:** Identity Protection
+**Product capability:** Identity Security & Protection
+
+Anomalous token detection is now available in Identity Protection. This feature can detect abnormal characteristics in a token, such as the time it has been active and authentication from an unfamiliar IP address. [Learn more](../identity-protection/concept-identity-protection-risks.md#sign-in-risk).
+
++
+### General availability - Register or join devices in Conditional Access
+
+**Type:** New feature
+**Service category:** Conditional Access
+**Product capability:** Identity Security & Protection
+
+The Register or join devices user action in Conditional Access is now generally available. This user action allows you to control multifactor authentication (MFA) policies for Azure AD device registration.
+
+Currently, this user action only allows you to enable multifactor authentication as a control when users register or join devices to Azure AD. Other controls that are dependent on or not applicable to Azure AD device registration continue to be disabled with this user action. [Learn more](../conditional-access/concept-conditional-access-cloud-apps.md#user-actions).
+++
+### New provisioning connectors in the Azure AD Application Gallery - July 2021
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** 3rd Party Integration
+
+You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
+
+- [Clebex](../saas-apps/clebex-provisioning-tutorial.md)
+- [Exium](../saas-apps/exium-provisioning-tutorial.md)
+- [SoSafe](../saas-apps/sosafe-provisioning-tutorial.md)
+- [Talentech](../saas-apps/talentech-provisioning-tutorial.md)
+- [Thrive LXP](../saas-apps/thrive-lxp-provisioning-tutorial.md)
+- [Vonage](../saas-apps/vonage-provisioning-tutorial.md)
+- [Zip](../saas-apps/zip-provisioning-tutorial.md)
+- [TimeClock 365](../saas-apps/timeclock-365-provisioning-tutorial.md)
+
+For more information about how to better secure your organization by using automated user account provisioning, read [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
+++
+### Changes to security and Microsoft 365 group settings in Azure portal
+
+**Type:** Changed feature
+**Service category:** Group Management
+**Product capability:** Directory
+
+
+In the past, users could create security groups and Microsoft 365 groups in the Azure portal. Now users can create groups across the Azure portal, PowerShell, and the API. Customers are required to verify that the new settings have been configured for their organization, and to update them if needed. [Learn More](../enterprise-users/groups-self-service-management.md#group-settings).
+
++
+### "All Apps" collection has been renamed to "Apps"
+
+**Type:** Changed feature
+**Service category:** My Apps
+**Product capability:** End User Experiences
+
+In the My Apps portal, the collection that was called "All Apps" has been renamed to be called "Apps". As the product evolves, "Apps" is a more fitting name for this default collection. [Learn more](../manage-apps/my-apps-deployment-plan.md#plan-the-user-experience).
+
+
## June 2021

### Context panes to display risk details in Identity Protection Reports
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new.md
Previously updated : 1/20/2022 Last updated : 1/31/2022
This page is updated monthly, so revisit it regularly. If you're looking for ite
+## January 2022
+
+### Public preview - Custom security attributes
+
+**Type:** New feature
+**Service category:** Directory Management
+**Product capability:** Directory
+
+Custom security attributes enable you to define business-specific attributes that you can assign to Azure AD objects. These attributes can be used to store information, categorize objects, or enforce fine-grained access control. Custom security attributes can be used with Azure attribute-based access control. [Learn more](custom-security-attributes-overview.md).
+
++
+### Public preview - Filter groups in tokens using a substring match
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** SSO
+
+In the past, Azure AD only permitted groups to be filtered based on whether they were assigned to an application. Now, you can also use Azure AD to filter the groups included in the token, with a substring match on the display name or onPremisesSAMAccountName attributes of the group object. Only groups that the user is a member of will be included in the token. The filter applies whether the group is emitted by its ObjectID, its on-premises SAMAccountName, or its security identifier (SID). This feature can be used together with the setting to include only groups assigned to the application, if desired, to further filter the list. [Learn more](../hybrid/how-to-connect-fed-group-claims.md)
+++
+### General availability - Continuous Access Evaluation
+
+**Type:** New feature
+**Service category:** Other
+**Product capability:** Access Control
+
+With Continuous access evaluation (CAE), critical security events and policies are evaluated in real time. This includes events such as account disablement, password reset, and location change. [Learn more](../conditional-access/concept-continuous-access-evaluation.md).
+
++
+### General Availability - User management enhancements are now available
+
+**Type:** New feature
+**Service category:** User Management
+**Product capability:** User Management
+
+The Azure AD portal has been updated to make it easier to find users in the All users and Deleted users pages. Changes in the preview include:
+
+- More visible user properties including object ID, directory sync status, creation type, and identity issuer.
+- Search now allows substring search and combined search of names, emails, and object IDs.
+- Enhanced filtering by user type (member, guest, and none), directory sync status, creation type, company name, and domain name.
+- New sorting capabilities on properties like name, user principal name, creation time, and deletion date.
+- A new total users count that updates with any searches or filters.
+
+For more information, go to [User management enhancements (preview) in Azure Active Directory](../enterprise-users/users-search-enhanced.md).
+++
+### General Availability - My Apps customization of default Apps view
+
+**Type:** New feature
+**Service category:** My Apps
+**Product capability:** End User Experiences
+
+Customization of the default My Apps view is now generally available. For more information on My Apps, go to [Sign in and start apps from the My Apps portal](https://support.microsoft.com/en-us/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+
++
+### General Availability - Audited BitLocker Recovery
+
+**Type:** New feature
+**Service category:** Device Access Management
+**Product capability:** Device Lifecycle Management
+
+BitLocker keys are sensitive security items. Audited BitLocker recovery ensures that when BitLocker keys are read, an audit log is generated so that you can trace who accesses this information for given devices. [Learn more](../devices/device-management-azure-portal.md#view-or-copy-bitlocker-keys).
+++
+### General Availability - Download a list of devices
+
+**Type:** New feature
+**Service category:** Device Registration and Management
+**Product capability:** Device Lifecycle Management
+
+Download a list of your organization's devices to a .csv file for easier reporting and management. [Learn more](../devices/device-management-azure-portal.md#download-devices).
+
++
+### New provisioning connectors in the Azure AD Application Gallery - January 2022
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** 3rd Party Integration
+
+You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
+
+- [Autodesk SSO](../saas-apps/autodesk-sso-provisioning-tutorial.md)
+- [Evercate](../saas-apps/evercate-provisioning-tutorial.md)
+- [frankli.io](../saas-apps/frankli-io-provisioning-tutorial.md)
+- [Plandisc](../saas-apps/plandisc-provisioning-tutorial.md)
+- [Swit](../saas-apps/swit-provisioning-tutorial.md)
+- [TerraTrue](../saas-apps/terratrue-provisioning-tutorial.md)
+- [TimeClock 365 SAML](../saas-apps/timeclock-365-saml-provisioning-tutorial.md)
+
+For more information about how to better secure your organization by using automated user account provisioning, go to [Automate user provisioning to SaaS applications with Azure AD](../manage-apps/user-provisioning.md).
+++
+### New Federated Apps available in Azure AD Application gallery - January 2022
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** 3rd Party Integration
+
+In January 2022, we've added the following 47 new applications to our App gallery with Federation support:
+
+[Jooto](../saas-apps/jooto-tutorial.md), [Proprli](https://app.proprli.com/), [Pace Scheduler](https://www.pacescheduler.com/accounts/login/), [DRTrack](../saas-apps/drtrack-tutorial.md), [Dining Sidekick](../saas-apps/dining-sidekick-tutorial.md), [Cryotos](https://app.cryotos.com/oauth2/authorization/azure-client), [Emergency Management Systems](https://secure.emsystems.com.au/), [Manifestly Checklists](../saas-apps/manifestly-checklists-tutorial.md), [eLearnPOSH](../saas-apps/elearnposh-tutorial.md), [Scuba Analytics](../saas-apps/scuba-analytics-tutorial.md), [Athena Systems Login Platform](../saas-apps/athena-systems-login-platform-tutorial.md), [TimeTrack](../saas-apps/timetrack-tutorial.md), [MiHCM](../saas-apps/mihcm-tutorial.md), [Health Note](https://auth.healthnote.works/oauth), [Active Directory SSO for DoubleYou](../saas-apps/active-directory-sso-for-doubleyou-tutorial.md), [Emplifi platform](../saas-apps/emplifi-platform-tutorial.md), [Flexera One](../saas-apps/flexera-one-tutorial.md), [Hypothesis](https://web.hypothes.is/help/authorizing-hypothesis-from-the-azure-ad-app-gallery/), [Recurly](../saas-apps/recurly-tutorial.md), [XpressDox AU Cloud](https://au.xpressdox.com/Authentication/Login.aspx), [Active and Thriving - Perth Airport](../saas-apps/active-and-thriving-perth-airport-tutorial.md), [Zoom for Intune](https://zoom.us/), [UPWARD AGENT](https://app.upward.jp/login/), [Linux Foundation ID](https://openprofile.dev/), [Asset Planner](../saas-apps/asset-planner-tutorial.md), [Kiho](https://v3.kiho.fi/index/sso), [chezie](https://app.chezie.co/), [Excelity HCM](../saas-apps/excelity-hcm-tutorial.md), [yuccaHR](https://app.yuccahr.com/), [Blue Ocean Brain](../saas-apps/blue-ocean-brain-tutorial.md), [EchoSpan](../saas-apps/echospan-tutorial.md), [Archie](../saas-apps/archie-tutorial.md), [Equifax Workforce Solutions](../saas-apps/equifax-workforce-solutions-tutorial.md), [Palantir Foundry](../saas-apps/palantir-foundry-tutorial.md), [ATP SpotLight and ChronicX](../saas-apps/atp-spotlight-and-chronicx-tutorial.md), [DigiSign](https://app.digisign.org/selfcare/sso), [mConnect](https://mconnect.skooler.com/), [BrightHR](https://login.brighthr.com/), [Mural Identity](../saas-apps/mural-identity-tutorial.md), [NordPass SSO](https://app.nordpass.com/login%20use%20%22Log%20in%20to%20business%22%20option), [CloudClarity](https://portal.cloudclarity.app/dashboard), [Twic](../saas-apps/twic-tutorial.md), [Eduhouse Online](https://app.eduhouse.fi/palvelu/kirjaudu/microsoft), [Bealink](../saas-apps/bealink-tutorial.md), [Time Intelligence Bot](https://teams.microsoft.com/), [SentinelOne](https://sentinelone.com/)
+
+You can also find the documentation for all the applications at https://aka.ms/AppsTutorial.
+
+To list your application in the Azure AD app gallery, read the details at https://aka.ms/AzureADAppRequest.
+++
+### Azure AD access reviews reviewer recommendations now account for non-interactive sign-in information
+
+**Type:** Changed feature
+**Service category:** Access Reviews
+**Product capability:** Identity Governance
+
+Azure AD access reviews reviewer recommendations now account for non-interactive sign-in information, improving upon original recommendations based on interactive last sign-ins only. Reviewers can now make more accurate decisions based on the last sign-in activity of the users they're reviewing. To learn more about how to create access reviews, go to [Create an access review of groups and applications in Azure AD](../governance/create-access-review.md).
+
++
+### Risk reason for offline Azure AD Threat Intelligence risk detection
+
+**Type:** Changed feature
+**Service category:** Identity Protection
+**Product capability:** Identity Security & Protection
+
+The offline Azure AD Threat Intelligence risk detection can now have a risk reason that will help customers with the risk investigation. If a risk reason is available, it will show up as **Additional Info** in the risk details of that risk event. The information can be found in the Risk detections report. It will also be available through the additionalInfo property of the riskDetections API. [Learn more](../identity-protection/howto-identity-protection-investigate-risk.md).
+
+
+
## December 2021

### Tenant enablement of combined security information registration for Azure Active Directory
This page is updated monthly, so revisit it regularly. If you're looking for ite
**Service category:** MFA
**Product capability:** Identity Security & Protection
-We previously announced in April 2020, a new combined registration experience enabling users to register authentication methods for SSPR and multifactor authentication at the same time was generally available for existing customer to opt-in. Any Azure AD tenants created after August 2020 automatically have the default experience set to combined registration. Starting in 2022 Microsoft will be enabling the multifactor authentication and SSPR combined registration experience for existing customers. [Learn more](../authentication/concept-registration-mfa-sspr-combined.md).
+In April 2020, we announced that a new combined registration experience, enabling users to register authentication methods for SSPR and multi-factor authentication at the same time, was generally available for existing customers to opt in to. Any Azure AD tenants created after August 2020 automatically have the default experience set to combined registration. Starting in 2022, Microsoft will be enabling the multi-factor authentication and SSPR combined registration experience for existing customers. [Learn more](../authentication/concept-registration-mfa-sspr-combined.md).
We previously announced in April 2020, a new combined registration experience en
**Service category:** Microsoft Authenticator App
**Product capability:** User Authentication
-To prevent accidental notification approvals, admins can now require users to enter the number displayed on the sign-in screen when approving an multifactor authentication notification in the Authenticator app. This feature adds an additional security measure to the Microsoft Authenticator app. [Learn more](../authentication/how-to-mfa-number-match.md).
+To prevent accidental notification approvals, admins can now require users to enter the number displayed on the sign-in screen when approving a multi-factor authentication notification in the Authenticator app. This feature adds an extra security measure to the Microsoft Authenticator app. [Learn more](../authentication/how-to-mfa-number-match.md).
To prevent accidental notification approvals, admins can now require users to e
**Service category:** Reporting
**Product capability:** Monitoring & Reporting
-We are no longer publishing sign-in logs with the following error codes because these events are pre-authentication events that occur before our service has authenticated a user. Because these events happen before authentication, our service is not always able to correctly identify the user. If a user continues on to authenticate, the user sign-in will show up in your tenant Sign-in logs. These logs are no longer visible in the Azure portal UX, and querying these error codes in the Graph API will no longer return results.
+We're no longer publishing sign-in logs with the following error codes because these events are pre-authentication events that occur before our service has authenticated a user. Because these events happen before authentication, our service isn't always able to correctly identify the user. If a user continues on to authenticate, the user sign-in will show up in your tenant Sign-in logs. These logs are no longer visible in the Azure portal UX, and querying these error codes in the Graph API will no longer return results.
|Error code | Failure reason|
| --- | --- |
-|50058| Session information is not sufficient for single-sign-on.|
-|16000| Either multiple user identities are available for the current request or selected account is not supported for the scenario.|
+|50058| Session information isn't sufficient for single-sign-on.|
+|16000| Either multiple user identities are available for the current request or the selected account isn't supported for the scenario.|
|500581| Rendering JavaScript. Fetching sessions for single-sign-on on V2 with prompt=none requires JavaScript to verify if any MSA accounts are signed in.|
|81012| The user trying to sign in to Azure AD is different from the user signed into the device.|
We are no longer publishing sign-in logs with the following error codes because
**Service category:** MFA
**Product capability:** Identity Security & Protection
-We previously announced in April 2020, a new combined registration experience enabling users to register authentication methods for SSPR and multifactor authentication at the same time was generally available for existing customer to opt-in. Any Azure AD tenants created after August 2020 automatically have the default experience set to combined registration. Starting 2022, Microsoft will be enabling the MF).
+In April 2020, we announced that a new combined registration experience, enabling users to register authentication methods for SSPR and multi-factor authentication at the same time, was generally available for existing customers to opt in to. Any Azure AD tenants created after August 2020 automatically have the default experience set to combined registration. Starting 2022, Microsoft will be enabling the MF).
The Public Preview feature for Azure AD Connect Cloud Sync Password writeback pr
**Service category:** Conditional Access for workload identities
**Product capability:** Identity Security & Protection
-Previously, Conditional Access policies applied only to users when they access apps and services like SharePoint online or the Azure portal. This preview adds support for Conditional Access policies applied to service principals owned by the organization. You can block service principals from accessing resources from outside trusted named locations or Azure Virtual Networks. [Learn more](../conditional-access/workload-identity.md).
+Previously, Conditional Access policies applied only to users when they access apps and services like SharePoint Online or the Azure portal. This preview adds support for Conditional Access policies applied to service principals owned by the organization. You can block service principals from accessing resources from outside trusted named locations or Azure Virtual Networks. [Learn more](../conditional-access/workload-identity.md).
-### Public preview - Additional attributes available as claims
+### Public preview - Extra attributes available as claims
**Type:** Changed feature
**Service category:** Enterprise Apps
Several user attributes have been added to the list of attributes available to m
**Service category:** Authentications (Logins)
**Product capability:** Identity Security & Protection
-We have recently added other property to the sign-in logs called "Session Lifetime Policies Applied". This property will list all the session lifetime policies that applied to the sign-in for example, Sign-in frequency, Remember multifactor authentication and Configurable token lifetime. [Learn more](../reports-monitoring/concept-sign-ins.md#authentication-details).
+We recently added another property to the sign-in logs, called "Session Lifetime Policies Applied". This property lists all the session lifetime policies that applied to the sign-in, for example: sign-in frequency, remember multi-factor authentication, and configurable token lifetime. [Learn more](../reports-monitoring/concept-sign-ins.md#authentication-details).
Updated "switch organizations" user interface in My Account. This visually impro
Sometimes, application developers configure their apps to require more permissions than it's possible to grant. To prevent this from happening, a limit on the total number of required permissions that can be configured for an app registration will be enforced.
-The total number of required permissions for any single application registration mustn't exceed 400 permissions, across all APIs. The change to enforce this limit will begin rolling out mid-October 2021. Applications exceeding the limit can't increase the number of permissions they are configured for. The existing limit on the number of distinct APIs for which permissions are required remains unchanged and may not exceed 50 APIs.
+The total number of required permissions for any single application registration mustn't exceed 400 permissions, across all APIs. The change to enforce this limit will begin rolling out mid-October 2021. Applications exceeding the limit can't increase the number of permissions they're configured for. The existing limit on the number of distinct APIs for which permissions are required remains unchanged and may not exceed 50 APIs.
In the Azure portal, the required permissions are listed under API permissions for the application you wish to configure. Using Microsoft Graph or Microsoft Graph PowerShell, the required permissions are listed in the requiredResourceAccess property of an [application](/graph/api/resources/application) entity. [Learn more](../enterprise-users/directory-service-limits-restrictions.md).
Previously, we announced that starting October 31, 2021, Microsoft Azure Active
**Service category:** Conditional Access
**Product capability:** End User Experiences
-If there's no trust relation between a home and resource tenant, a guest user would have previously been asked to re-register their device, which would break the previous registration. However, the user would end up in a registration loop because only home tenant device registration is supported. In this specific scenario, instead of this loop, we have created a new conditional access blocking page. The page tells the end user that they can't get access to conditional access protected resources as a guest user. [Learn more](../external-identities/b2b-quickstart-add-guest-users-portal.md#prerequisites).
+If there's no trust relation between a home and resource tenant, a guest user would have previously been asked to re-register their device, which would break the previous registration. However, the user would end up in a registration loop because only home tenant device registration is supported. In this specific scenario, instead of this loop, we've created a new conditional access blocking page. The page tells the end user that they can't get access to conditional access protected resources as a guest user. [Learn more](../external-identities/b2b-quickstart-add-guest-users-portal.md#prerequisites).
We've released beta MS Graph API for Azure AD access reviews. The API has method
**Product capability:** Identity Security & Protection
-The "Register or join devices" user action is generally available in Conditional access. This user action allows you to control multifactor authentication policies for Azure Active Directory (AD) device registration. Currently, this user action only allows you to enable multifactor authentication as a control when users register or join devices to Azure AD. Other controls that are dependent on or not applicable to Azure AD device registration continue to be disabled with this user action. [Learn more](../conditional-access/concept-conditional-access-cloud-apps.md#user-actions).
+The "Register or join devices" user action is generally available in Conditional access. This user action allows you to control multi-factor authentication policies for Azure Active Directory (AD) device registration. Currently, this user action only allows you to enable multi-factor authentication as a control when users register or join devices to Azure AD. Other controls that are dependent on or not applicable to Azure AD device registration continue to be disabled with this user action. [Learn more](../conditional-access/concept-conditional-access-cloud-apps.md#user-actions).
For more information about how to better secure your organization by using autom
**Product capability:** Identity Security & Protection
-To help administrators understand that their users are blocked for multifactor authentication as a result of fraud report, we have added a new audit event. This audit event is tracked when the user reports fraud. The audit log is available in addition to the existing information in the sign-in logs about fraud report. To learn how to get the audit report, see [multifactor authentication Fraud alert](../authentication/howto-mfa-mfasettings.md#fraud-alert).
+To help administrators understand that their users are blocked for multi-factor authentication as a result of a fraud report, we've added a new audit event. This audit event is tracked when the user reports fraud. The audit log is available in addition to the existing information in the sign-in logs about the fraud report. To learn how to get the audit report, see [multi-factor authentication Fraud alert](../authentication/howto-mfa-mfasettings.md#fraud-alert).
Deploying MIM for Privileged Access Management with a Windows Server 2012 R2 dom
-## July 2021
-
-### New Google sign-in integration for Azure AD B2C and B2B self-service sign-up and invited external users will stop working starting July 12, 2021
-
-**Type:** Plan for change
-**Service category:** B2B
-**Product capability:** B2B/B2C
-
-
-Previously we announced that [the exception for Embedded WebViews for Gmail authentication will expire in the second half of 2021](https://www.yammer.com/cepartners/threads/1188371962232832).
-
-On July 7, 2021, we learned from Google that some of these restrictions will apply starting **July 12, 2021**. Azure AD B2B and B2C customers who set up a new Google ID sign-in in their custom or line of business applications to invite external users or enable self-service sign-up will have the restrictions applied immediately. As a result, end-users will be met with an error screen that blocks their Gmail sign-in if the authentication is not moved to a system webview. See the docs linked below for details.
-
-Most apps use system web-view by default, and will not be impacted by this change. This only applies to customers using embedded webviews (the non-default setting.) We advise customers to move their application's authentication to system browsers instead, prior to creating any new Google integrations. To learn how to move to system browsers for Gmail authentications, read the Embedded vs System Web UI section in the [Using web browsers (MSAL.NET)](../develop/msal-net-web-browsers.md#embedded-vs-system-web-ui) documentation. All MSAL SDKs use the system web-view by default. [Learn more](../external-identities/google-federation.md#deprecation-of-web-view-sign-in-support).
---
-### Google sign-in on embedded web-views expiring September 30, 2021
-
-**Type:** Plan for change
-**Service category:** B2B
-**Product capability:** B2B/B2C
-
-
-About two months ago we announced that the exception for Embedded WebViews for Gmail authentication will expire in the second half of 2021.
-
-Recently, Google has specified the date to be **September 30, 2021**.
-
-Rolling out globally beginning September 30, 2021, Azure AD B2B guests signing in with their Gmail accounts will now be prompted to enter a code in a separate browser window to finish signing in on Microsoft Teams mobile and desktop clients. This applies to invited guests and guests who signed up using Self-Service Sign-Up.
-
-Azure AD B2C customers who have set up embedded webview Gmail authentications in their custom/line of business apps or have existing Google integrations, will no longer can let their users sign in with Gmail accounts. To mitigate this, make sure to modify your apps to use the system browser for sign-in. For more information, read the Embedded vs System Web UI section in the [Using web browsers (MSAL.NET)](../develop/msal-net-web-browsers.md#embedded-vs-system-web-ui) documentation. All MSAL SDKs use the system web-view by default.
-
-As the device login flow will start rolling out on September 30, 2021, it is likely that it may not be rolled out to your region yet (in which case, your end-users will be met with the error screen shown in the documentation until it gets deployed to your region.)
-
-For details on known impacted scenarios and what experience your users can expect, read [Add Google as an identity provider for B2B guest users](../external-identities/google-federation.md#deprecation-of-web-view-sign-in-support).
---
-### Bug fixes in My Apps
-
-**Type:** Fixed
-**Service category:** My Apps
-**Product capability:** End User Experiences
-
-- Previously, the presence of the banner recommending the use of collections caused content to scroll behind the header. This issue has been resolved. -- Previously, there was another issue when adding apps to a collection, the order of apps in All Apps collection would get randomly reordered. This issue has also been resolved. -
-For more information on My Apps, read [Sign in and start apps from the My Apps portal](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
---
-### Public preview - Application authentication method policies
-
-**Type:** New feature
-**Service category:** MS Graph
-**Product capability:** Developer Experience
-
-Application authentication method policies in MS Graph which allow IT admins to enforce lifetime on application password secret credential or block the use of secrets altogether. Policies can be enforced for an entire tenant as a default configuration and it can be scoped to specific applications or service principals. [Learn more](/graph/api/resources/policy-overview).
-
--
-### Public preview - Authentication Methods registration campaign to download Microsoft Authenticator
-
-**Type:** New feature
-**Service category:** Microsoft Authenticator App
-**Product capability:** User Authentication
-
-The Authenticator registration campaign helps admins to move their organizations to a more secure posture by prompting users to adopt the Microsoft Authenticator app. Prior to this feature, there was no way for an admin to push their users to set up the Authenticator app.
-
-The registration campaign comes with the ability for an admin to scope users and groups by including and excluding them from the registration campaign to ensure a smooth adoption across the organization. [Learn more](../authentication/how-to-mfa-registration-campaign.md)
-
--
-### Public preview - Separation of duties check
-
-**Type:** New feature
-**Service category:** User Access Management
-**Product capability:** Entitlement Management
-
-In Azure AD entitlement management, an administrator can define that an access package is incompatible with another access package or with a group. Users who have the incompatible memberships will be then unable to request more access. [Learn more](../governance/entitlement-management-access-package-request-policy.md#prevent-requests-from-users-with-incompatible-access-preview).
-
--
-### Public preview - Identity Protection logs in Log Analytics, Storage Accounts, and Event Hubs
-
-**Type:** New feature
-**Service category:** Identity Protection
-**Product capability:** Identity Security & Protection
-
-You can now send the risky users and risk detections logs to Azure Monitor, Storage Accounts, or Log Analytics using the Diagnostic Settings in the Azure AD blade. [Learn more](../identity-protection/howto-export-risk-data.md).
-
--
-### Public preview - Application Proxy API addition for backend SSL certificate validation
-
-**Type:** New feature
-**Service category:** App Proxy
-**Product capability:** Access Control
-
-The onPremisesPublishing resource type now includes the property, "isBackendCertificateValidationEnabled" which indicates whether backend SSL certificate validation is enabled for the application. For all new Application Proxy apps, the property will be set to true by default. For all existing apps, the property will be set to false. For more information, read the [onPremisesPublishing resource type](/graph/api/resources/onpremisespublishing?view=graph-rest-beta&preserve-view=true) api.
-
--
-### General availability - Improved Authenticator setup experience for add Azure AD account in Microsoft Authenticator app by directly signing into the app.
-
-**Type:** New feature
-**Service category:** Microsoft Authenticator App
-**Product capability:** User Authentication
-
-Users can now use their existing authentication methods to directly sign into the Microsoft Authenticator app to set up their credential. Users don't need to scan a QR Code anymore and can use a Temporary Access Pass (TAP) or Password + SMS (or other authentication method) to configure their account in the Authenticator app.
-
-This improves the user credential provisioning process for the Microsoft Authenticator app and gives the end user a self-service method to provision the app. [Learn more](https://support.microsoft.com/account-billing/add-your-work-or-school-account-to-the-microsoft-authenticator-app-43a73ab5-b4e8-446d-9e54-2a4cb8e4e93c#sign-in-with-your-credentials).
-
--
-### General availability - Set manager as reviewer in Azure AD entitlement management access packages
-
-**Type:** New feature
-**Service category:** User Access Management
-**Product capability:** Entitlement Management
-
-Access packages in Azure AD entitlement management now support setting the user's manager as the reviewer for regularly occurring access reviews. [Learn more](../governance/entitlement-management-access-reviews-create.md).
---
-### General availability - Enable external users to self-service sign-up in Azure AD using MSA accounts
-
-**Type:** New feature
-**Service category:** B2B
-**Product capability:** B2B/B2C
-
-Users can now enable external users to self-service sign-up in Azure Active Directory using Microsoft accounts. [Learn more](../external-identities/microsoft-account.md).
-
-
-
-### General availability - External Identities Self-Service Sign-Up with Email One-time Passcode
-
-**Type:** New feature
-**Service category:** B2B
-**Product capability:** B2B/B2C
-
-
-Now users can enable external users to self-service sign-up in Azure Active Directory using their email and one-time passcode. [Learn more](../external-identities/one-time-passcode.md).
-
--
-### General availability - Anomalous token
-
-**Type:** New feature
-**Service category:** Identity Protection
-**Product capability:** Identity Security & Protection
-
-Anomalous token detection is now available in Identity Protection. This feature can detect that there are abnormal characteristics in the token such as time active and authentication from unfamiliar IP address. [Learn more](../identity-protection/concept-identity-protection-risks.md#sign-in-risk).
-
--
-### General availability - Register or join devices in Conditional Access
-
-**Type:** New feature
-**Service category:** Conditional Access
-**Product capability:** Identity Security & Protection
-
-The Register or join devices user action in Conditional access is now in general availability. This user action allows you to control multifactor authentication (MFA) policies for Azure AD device registration.
-
-Currently, this user action only allows you to enable multifactor authentication as a control when users register or join devices to Azure AD. Other controls that are dependent on or not applicable to Azure AD device registration continue to be disabled with this user action. [Learn more](../conditional-access/concept-conditional-access-cloud-apps.md#user-actions).
---
-### New provisioning connectors in the Azure AD Application Gallery - July 2021
-
-**Type:** New feature
-**Service category:** App Provisioning
-**Product capability:** 3rd Party Integration
-
-You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
--- [Clebex](../saas-apps/clebex-provisioning-tutorial.md)-- [Exium](../saas-apps/exium-provisioning-tutorial.md)-- [SoSafe](../saas-apps/sosafe-provisioning-tutorial.md)-- [Talentech](../saas-apps/talentech-provisioning-tutorial.md)-- [Thrive LXP](../saas-apps/thrive-lxp-provisioning-tutorial.md)-- [Vonage](../saas-apps/vonage-provisioning-tutorial.md)-- [Zip](../saas-apps/zip-provisioning-tutorial.md)-- [TimeClock 365](../saas-apps/timeclock-365-provisioning-tutorial.md)-
-For more information about how to better secure your organization by using automated user account provisioning, read [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
---
-### Changes to security and Microsoft 365 group settings in Azure portal
-
-**Type:** Changed feature
-**Service category:** Group Management
-**Product capability:** Directory
-
-
-In the past, users could create security groups and Microsoft 365 groups in the Azure portal. Now users will have the ability to create groups across Azure portals, PowerShell, and API. Customers are required to verify and update the new settings have been configured for their organization. [Learn More](../enterprise-users/groups-self-service-management.md#group-settings).
-
--
-### "All Apps" collection has been renamed to "Apps"
-
-**Type:** Changed feature
-**Service category:** My Apps
-**Product capability:** End User Experiences
-
-In the My Apps portal, the collection that was called "All Apps" has been renamed to be called "Apps". As the product evolves, "Apps" is a more fitting name for this default collection. [Learn more](../manage-apps/my-apps-deployment-plan.md#plan-the-user-experience).
-
-
active-directory Identity Governance Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/identity-governance-automation.md
+
+ Title: Automate Azure AD Identity Governance tasks with Azure Automation
+description: Learn how to write PowerShell scripts in Azure Automation to interact with Azure Active Directory entitlement management and other features.
+
+documentationCenter: ''
++
+editor:
++
+ na
+ms.devlang: na
++ Last updated : 1/20/2022+++++++
+# Automate Azure AD Identity Governance tasks via Azure Automation and Microsoft Graph
+
+[Azure Automation](/azure/automation/overview) is an Azure cloud service that allows you to automate common or repetitive systems management and processes. Microsoft Graph is Microsoft's unified API endpoint for Azure AD features that manage users, groups, access packages, access reviews, and other resources in the directory. You can manage Azure AD at scale from the PowerShell command line, using the [Microsoft Graph PowerShell SDK](/graph/powershell/get-started). You can also use the Microsoft Graph PowerShell cmdlets in a [PowerShell-based runbook in Azure Automation](/azure/automation/automation-intro), so that you can automate Azure AD tasks from a simple script.
+
+Azure Automation and the PowerShell Graph SDK support certificate-based authentication and application permissions, so you can have Azure Automation runbooks authenticate to Azure AD without needing a user context.
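+
+For example, a runbook can sign in to Microsoft Graph with app-only permissions using a certificate. A minimal sketch; the IDs and thumbprint are placeholders for values from your own app registration:
+
+```powershell
+# App-only sign-in to Microsoft Graph with a certificate (requires Microsoft.Graph.Authentication).
+Connect-MgGraph -ClientId "<application-client-id>" `
+                -TenantId "<tenant-id>" `
+                -CertificateThumbprint "<certificate-thumbprint>"
+```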
+
+This article will show you how to get started using Azure Automation for Azure AD Identity Governance, by creating a simple runbook that queries entitlement management via Microsoft Graph PowerShell.
+
+## Create an Azure Automation account
+
+Azure Automation provides a cloud-hosted environment for [runbook execution](/azure/automation/automation-runbook-execution). Those runbooks can start automatically based on a schedule, or be triggered by webhooks or by Logic Apps.
+
+Using Azure Automation requires you to have an Azure subscription.
+
+**Prerequisite role**: Azure subscription or resource group owner
+
+1. Sign in to the Azure portal. Make sure you have access to the subscription or resource group where the Azure Automation account will be located.
+
+1. Select the subscription or resource group, and select **Create**. Type **Automation**, select the **Automation** Azure service from Microsoft, then select **Create**.
+
+1. After the Azure Automation account has been created, select **Access control (IAM)**. Then select **View** in **View access to this resource**. The users and service principals listed there will subsequently be able to interact with Microsoft Graph through the scripts created in that Azure Automation account.
+1. Review the users and service principals who are listed there and ensure they are authorized. Remove any users who are unauthorized.
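+
+If you prefer to script this step, the following is a minimal sketch using the Az PowerShell module; the resource group name, account name, and region are placeholder assumptions rather than values from this article.
+
+```powershell
+# Requires the Az.Automation module and an authenticated session (Connect-AzAccount).
+# Placeholder names: substitute your own resource group, account name, and region.
+New-AzAutomationAccount -ResourceGroupName "rg-identity-governance" `
+    -Name "aa-governance-tasks" `
+    -Location "eastus"
+```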
+
+## Create a self-signed key pair and certificate on your computer
+
+So that it can operate without needing your personal credentials, the Azure Automation account you created will need to authenticate itself to Azure AD with a certificate.
+
+If you already have a key pair for authenticating your service to Azure AD, and a certificate that you received from a certificate authority, skip to the next section.
+
+To generate a self-signed certificate:
+
+1. Follow the instructions in [how to create a self-signed certificate](../develop/howto-create-self-signed-certificate.md), option 2, to create and export a certificate with its private key.
+
+1. Display the thumbprint of the certificate.
+
+    ```powershell
+    # $cert is the certificate object created in the previous step.
+    $cert | Format-Table Thumbprint
+    ```
+
+1. After you have exported the files, you can remove the certificate and key pair from your local user certificate store. In subsequent steps you will remove the `.pfx` and `.crt` files as well, once the certificate and private key have been uploaded to the Azure Automation and Azure AD services.
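+
+If you prefer to script the whole key pair lifecycle instead, the following is a minimal sketch using built-in PowerShell cmdlets; the subject name, file paths, validity period, and password are placeholder assumptions.
+
+```powershell
+# Create a self-signed certificate in the current user's store (placeholder subject).
+$cert = New-SelfSignedCertificate -Subject "CN=AzureAutomationGraph" `
+    -CertStoreLocation "Cert:\CurrentUser\My" -KeyExportPolicy Exportable `
+    -KeySpec Signature -KeyLength 2048 -NotAfter (Get-Date).AddMonths(12)
+
+# Export the private key (.pfx) for upload to Azure Automation.
+$pfxPassword = ConvertTo-SecureString -String "<pfx-password>" -Force -AsPlainText
+Export-PfxCertificate -Cert $cert -FilePath ".\automation-graph.pfx" -Password $pfxPassword
+
+# Export the public certificate (.crt) for the Azure AD app registration.
+Export-Certificate -Cert $cert -FilePath ".\automation-graph.crt"
+```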
+
+## Upload the key pair to Azure Automation
+
+Your runbook in Azure Automation will retrieve the private key from the `.pfx` file, and use it for authenticating to Microsoft Graph.
+
+1. In the Azure portal for the Azure Automation account, select **Certificates** and **Add a certificate**.
+
+1. Upload the `.pfx` file created earlier, and type the password you provided when you created the file.
+
+1. After the private key is uploaded, record the certificate expiration date.
+
+1. You can now delete the `.pfx` file from your local computer. However, do not delete the `.crt` file yet, as you will need this file in a subsequent step.
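+
+As an alternative to the portal upload, a sketch like the following pushes the key pair into the Automation account with the Az PowerShell module; the asset name, path, and password are placeholder assumptions.
+
+```powershell
+# Store the .pfx (private key) as a certificate asset in the Automation account.
+$pfxPassword = ConvertTo-SecureString -String "<pfx-password>" -Force -AsPlainText
+New-AzAutomationCertificate -ResourceGroupName "rg-identity-governance" `
+    -AutomationAccountName "aa-governance-tasks" -Name "GraphAuthCertificate" `
+    -Path ".\automation-graph.pfx" -Password $pfxPassword
+```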
+
+## Add modules for Microsoft Graph to your Azure Automation account
+
+By default, Azure Automation does not have any PowerShell modules preloaded for Microsoft Graph. You will need to add **Microsoft.Graph.Authentication**, and then additional modules, from the gallery to your Automation account. Note that you will need to choose whether to use the beta or v1.0 APIs through those modules, as you cannot mix both in a single runbook.
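+
+For example, a runbook that needs the beta APIs would switch profiles after importing the modules; a minimal one-line sketch:
+
+```powershell
+# Direct all subsequent Microsoft Graph cmdlets in this session to the beta endpoint.
+Select-MgProfile -Name "beta"
+```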
+
+1. In the Azure portal for the Azure Automation account, select **Modules** and then **Browse gallery**.
+
+1. In the Search bar, type **Microsoft.Graph.Authentication**. Select the module, select **Import**, and select **OK** to have Azure Automation begin importing the module. After you select OK, importing a module may take several minutes. Don't attempt to add more Microsoft Graph modules until the Microsoft.Graph.Authentication module import has completed, since those other modules have Microsoft.Graph.Authentication as a prerequisite.
+
+1. Return to the **Modules** list and select **Refresh**. Once the Status of the **Microsoft.Graph.Authentication** module has changed to **Available**, you can import the next module.
+
+1. If you are using the cmdlets for Azure AD identity governance features, such as entitlement management, then repeat the import process for the module **Microsoft.Graph.Identity.Governance**.
+
+1. Import other modules that your script may require. For example, if you are using Identity Protection, then you may wish to import the **Microsoft.Graph.Identity.SignIns** module.
+
+## Create an app registration and assign permissions
+
+Next, you will create an app registration in Azure AD, so that Azure AD will recognize your Azure Automation runbook's certificate for authentication.
+
+**Prerequisite role**: Global Administrator or another administrator who can consent to application permissions for applications
+
+1. In the Azure portal, browse to **Azure Active Directory** > **App registrations**.
+
+1. Select **New registration**.
+
+1. Type a name for the application and select **Register**.
+
+1. Once the application registration is created, take note of the **Application (client) ID** and **Directory (tenant) ID** as you will need these items later.
+
+1. Select **Certificates and Secrets** and **Upload certificate**.
+
+1. Upload the `.crt` file created earlier.
+
+1. Select **API permissions** and **Add a permission**.
+
+1. Select **Microsoft Graph** and **Application permissions**.
+
+1. Select each of the permissions that your Azure Automation account will require, then select **Add permissions**.
+
+ * If your runbook is only performing queries for entitlement management, then it can use the **EntitlementManagement.Read.All** permission.
+ * If your runbook is making changes to entitlement management, for example to create assignments, then use the **EntitlementManagement.ReadWrite.All** permission.
+ * For other APIs, ensure that the necessary permission is added. For example, for identity protection, the **IdentityRiskyUser.Read.All** permission should be added.
+
+10. Select **Grant admin consent** to give your app those permissions.
+
+## Create Azure Automation variables
+
+In this step, you will create three variables in the Azure Automation account that the runbook will use to determine how to authenticate to Azure AD.
+
+1. In the Azure portal, return to the Azure Automation account.
+
+1. Select **Variables**, and **Add variable**.
+
+1. Create a variable named **Thumbprint**. Type, as the value of the variable, the certificate thumbprint that was generated earlier.
+
+1. Create a variable named **ClientId**. Type, as the value of the variable, the client ID for the application registered in Azure AD.
+
+1. Create a variable named **TenantId**. Type, as the value of the variable, the tenant ID of the directory where the application was registered.
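+
+If you script your deployment, the same three variables can be created with the Az PowerShell module; a minimal sketch with placeholder names and values:
+
+```powershell
+# Placeholder resource names and values; substitute your own.
+$rg = "rg-identity-governance"
+$aa = "aa-governance-tasks"
+New-AzAutomationVariable -ResourceGroupName $rg -AutomationAccountName $aa `
+    -Name "Thumbprint" -Value "<certificate-thumbprint>" -Encrypted $false
+New-AzAutomationVariable -ResourceGroupName $rg -AutomationAccountName $aa `
+    -Name "ClientId" -Value "<application-client-id>" -Encrypted $false
+New-AzAutomationVariable -ResourceGroupName $rg -AutomationAccountName $aa `
+    -Name "TenantId" -Value "<directory-tenant-id>" -Encrypted $false
+```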
+
+## Create an Azure Automation PowerShell runbook that can use Graph
+
+In this step, you will create an initial runbook. You can run this runbook to verify that authentication with the certificate created earlier succeeds.
+
+1. Select **Runbooks** and **Create a runbook**.
+
+1. Type the name of the runbook, select **PowerShell** as the type of runbook to create, and select **Create**.
+
+1. Once the runbook is created, a text editing pane will appear for you to type in the PowerShell source code of the runbook.
+
+1. Type the following PowerShell into the text editor.
+
+```powershell
+# Load the Microsoft Graph authentication module.
+Import-Module Microsoft.Graph.Authentication
+# Read the authentication settings from the Automation variables created earlier.
+$ClientId = Get-AutomationVariable -Name 'ClientId'
+$TenantId = Get-AutomationVariable -Name 'TenantId'
+$Thumbprint = Get-AutomationVariable -Name 'Thumbprint'
+# Sign in to Microsoft Graph as the registered application, using the uploaded certificate.
+Connect-MgGraph -ClientId $ClientId -TenantId $TenantId -CertificateThumbprint $Thumbprint
+```
+
+5. Select **Test pane**, and select **Start**. Wait a few seconds for the Azure Automation processing of your runbook script to complete.
+
+1. If the run of your runbook is successful, then the message **Welcome to Microsoft Graph!** will appear.
+
+Now that you have verified that your runbook can authenticate to Microsoft Graph, extend your runbook by adding cmdlets for interacting with Azure AD features.
+
+## Extend the runbook to use Entitlement Management
+
+If the app registration for your runbook has the **EntitlementManagement.Read.All** or **EntitlementManagement.ReadWrite.All** permissions, then it can use the entitlement management APIs.
+
+1. For example, to get a list of Azure AD entitlement management access packages, you can update the runbook created earlier and replace its text with the following PowerShell.
+
+```powershell
+# Authenticate with the same Automation variables as before.
+Import-Module Microsoft.Graph.Authentication
+$ClientId = Get-AutomationVariable -Name 'ClientId'
+$TenantId = Get-AutomationVariable -Name 'TenantId'
+$Thumbprint = Get-AutomationVariable -Name 'Thumbprint'
+$auth = Connect-MgGraph -ClientId $ClientId -TenantId $TenantId -CertificateThumbprint $Thumbprint
+# The entitlement management cmdlets used below are in the beta API surface.
+Select-MgProfile -Name beta
+Import-Module Microsoft.Graph.Identity.Governance
+# Retrieve all access packages and emit their IDs and display names as JSON.
+$ap = Get-MgEntitlementManagementAccessPackage -All -ErrorAction Stop
+$ap | Select-Object -Property Id,DisplayName | ConvertTo-Json
+```
+
+2. Select **Test pane**, and select **Start**. Wait a few seconds for the Azure Automation processing of your runbook script to complete.
+
+3. If the run was successful, the output will be a JSON array instead of the welcome message. The JSON array includes the ID and display name of each access package returned from the query.
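+
+You can also narrow the query at the service side. This sketch assumes the Graph PowerShell SDK's standard `-Filter` parameter and an illustrative access package name:
+
+```powershell
+# Return only access packages whose display name matches the illustrative value.
+$ap = Get-MgEntitlementManagementAccessPackage -Filter "displayName eq 'Sales and Marketing'" -ErrorAction Stop
+$ap | Select-Object -Property Id,DisplayName | ConvertTo-Json
+```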
+
+## Parse the output of an Azure Automation account in Logic Apps (optional)
+
+Once your runbook is published, you can create a schedule in Azure Automation, and link your runbook to that schedule to run automatically. Scheduling runbooks from Azure Automation is suitable for runbooks that do not need to interact with other Azure or Office 365 services.
+
+If you wish to send the output of your runbook to another service, then you may wish to consider using [Azure Logic Apps](/azure/logic-apps/logic-apps-overview) to start your Azure Automation runbook, as Logic Apps can also parse the results.
+
+1. In Azure Logic Apps, create a Logic App in the Logic Apps Designer starting with **Recurrence**.
+
+1. Add the operation **Create job** from **Azure Automation**. Authenticate to Azure AD, and select the Subscription, Resource Group, and Automation Account created earlier. Select **Wait for Job**.
+
+1. Add the parameter **Runbook name** and type the name of the runbook to be started.
+
+1. Select **New step** and add the operation **Get job output**. Select the same Subscription, Resource Group, and Automation Account as the previous step, and select the Dynamic value of the **Job ID** from the previous step.
+
+1. You can then add more operations to the Logic App, such as the [**Parse JSON** action](/azure/logic-apps/logic-apps-perform-data-operations#parse-json-action), that use the **Content** returned when the runbook completes.
+
+Note that in Azure Automation, a PowerShell runbook can fail to complete if it tries to write a large amount of data to the output stream at once. You can typically work around this issue by having the runbook output just the information needed by the Logic App, such as by using the `Select-Object` cmdlet to exclude unneeded properties.
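+
+For example, a minimal sketch that keeps the payload handed to the Logic App small:
+
+```powershell
+# Emit only the two properties the Logic App consumes, as compact JSON.
+$ap | Select-Object -Property Id,DisplayName | ConvertTo-Json -Compress
+```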
+
+## Plan to keep the certificate up to date
+
+If you created a self-signed certificate following the steps above for authentication, keep in mind that the certificate has a limited lifetime. You will need to regenerate the certificate and upload the new certificate before its expiration date.
+
+There are two places where you can see the expiration date in the Azure portal.
+
+* In Azure Automation, the **Certificates** screen displays the expiration date of the certificate.
+* In Azure AD, on the app registration, the **Certificates & secrets** screen displays the expiration date of the certificate used for the Azure Automation account.
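+
+A runbook can also watch for upcoming expiration itself. This sketch assumes the certificate asset is named `GraphAuthCertificate`, an illustrative name rather than one from this article:
+
+```powershell
+# Get-AutomationCertificate (available inside runbooks) returns the stored X509Certificate2.
+$cert = Get-AutomationCertificate -Name 'GraphAuthCertificate'
+if ($cert.NotAfter -lt (Get-Date).AddDays(30)) {
+    Write-Output "Certificate expires $($cert.NotAfter). Generate and upload a replacement."
+}
+```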
+
+## Next steps
+
+- [Create an Automation account using the Azure portal](/azure/automation/quickstarts/create-account-portal)
+- [Manage access to resources in Active Directory entitlement management using Microsoft Graph PowerShell](/powershell/microsoftgraph/tutorial-entitlement-management?view=graph-powershell-beta)
active-directory How To Connect Health Adfs Risky Ip Workbook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-health-adfs-risky-ip-workbook.md
AD FS customers may expose password authentication endpoints to the internet to provide authentication services for end users to access SaaS applications such as Microsoft 365. In this case, it is possible for a bad actor to attempt logins against your AD FS system to guess an end user's password and get access to application resources. AD FS has provided extranet account lockout functionality to prevent these types of attacks since Windows Server 2012 R2. If you are on an earlier version, we strongly recommend that you upgrade your AD FS system to Windows Server 2016. <br />
-Additionally, it is possible for a single IP address to attempt multiple logins against multiple users. In these cases, the number of attempts per user may be under the threshold for account lockout protection in AD FS. Azure AD Connect Health now provides the ΓÇ£Risky IP reportΓÇ¥ that detects this condition and notifies administrators when this occurs. The following are the key benefits for this report:
+Additionally, it is possible for a single IP address to attempt multiple logins against multiple users. In these cases, the number of attempts per user may be under the threshold for account lockout protection in AD FS. Azure AD Connect Health now provides the "Risky IP report" that detects this condition and notifies administrators. The following are the key benefits for this report:
- Detection of IP addresses that exceed a threshold of failed password-based logins
- Supports failed logins due to bad password or due to extranet lockout state
- Supports enabling alerts through Azure Alerts
Additionally, it is possible for a single IP address to attempt multiple logins
## What is in the report?
-The Risky IP report workbook is powered from data in the ADFSSignInLogs stream and has pre-existing queries to be able to quickly visualize and analyze risky IPs. The parameters can be configured and customized for threshold counts. The workbook is also configurable based on queries, and each query can be updated and modified based on the organizationΓÇÖs needs.
+The Risky IP report workbook is powered from data in the ADFSSignInLogs stream and can quickly visualize and analyze risky IPs. The parameters can be configured and customized for threshold counts. The workbook is also configurable based on queries, and each query can be updated and modified based on the organization's needs.
The risky IP workbook analyzes data from ADFSSignInLogs to help you detect password spray or password brute force attacks. The workbook has two parts. The first part "Risky IP Analysis" identifies risky IP addresses based on designated error thresholds and detection window length. The second part provides the sign-in details and error counts for selected IPs.
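
To explore the same data outside the workbook, a rough sketch like the following queries the ADFSSignInLogs table from PowerShell; the workspace ID, time bucket, and failure threshold are illustrative assumptions:

```powershell
# Requires the Az.OperationalInsights module and an authenticated session.
$kql = @"
ADFSSignInLogs
| where ResultType != "0"
| summarize FailedAttempts = count() by IPAddress, bin(TimeGenerated, 1h)
| where FailedAttempts > 100
"@
(Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $kql).Results
```
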
Each item in the Risky IP report table shows aggregated information about failed
Filter the report by IP address or user name to see an expanded view of sign-in details for each risky IP event.
+## Accessing the workbook
+
+To access the workbook:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Navigate to **Azure Active Directory** > **Monitoring** > **Workbooks**.
+3. Select the Risky IP report workbook.
## Load balancer IP addresses in the list

Load balancers aggregate failed sign-in activities and can hit the alert threshold. If you are seeing load balancer IP addresses, it is highly likely that your external load balancer is not sending the client IP address when it passes the request to the Web Application Proxy server. Please configure your load balancer correctly to forward the client IP address.
active-directory F5 Big Ip Header Advanced https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-header-advanced.md
The secure hybrid access solution for this scenario is made up of:
- **BIG-IP**: Reverse proxy and SAML service provider (SP) to the application, delegating authentication to the SAML IdP, before performing header-based SSO to the backend application.
-![Screenshot shows the architecture flow diagram](./media/f5-big-ip-header-advanced/flow-diagram.png)
+![Screenshot shows the architecture flow diagram](./media/f5-big-ip-easy-button-header/sp-initiated-flow.png)
| Step | Description |
|:-|:--|
active-directory F5 Big Ip Kerberos Advanced https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-kerberos-advanced.md
The SHA solution for this scenario consists of the following elements:
The following image illustrates the SAML SP-initiated flow for this scenario, but IdP-initiated flow is also supported.
-![Diagram of the scenario architecture.](./media/f5-big-ip-kerberos-advanced/scenario-architecture.png)
+![Diagram of the scenario architecture.](./media/f5-big-ip-kerberos-easy-button/scenario-architecture.png)
| Step| Description |
| -- |-|
active-directory F5 Big Ip Oracle Enterprise Business Suite Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-oracle-enterprise-business-suite-easy-button.md
+
+ Title: Configure F5 BIG-IP Easy Button for SSO to Oracle EBS
+description: Learn to implement SHA with header-based SSO to Oracle EBS using F5ΓÇÖs BIG-IP Easy Button guided configuration
+ Last updated : 1/31/2022
+# Tutorial: Configure F5's BIG-IP Easy Button for SSO to Oracle EBS
+
+In this article, you'll learn to implement Secure Hybrid Access (SHA) with header-based single sign-on (SSO) to Oracle Enterprise Business Suite (EBS) using F5's BIG-IP Easy Button guided configuration.
+
+Enabling BIG-IP published services for Azure Active Directory (Azure AD) SSO provides many benefits, including:
+
+* [Improved Zero Trust governance](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) through Azure AD pre-authentication and [Conditional Access](/conditional-access/overview)
+
+* Full SSO between Azure AD and BIG-IP published services
+
+* Manage identities and access from a single control plane, [the Azure portal](https://portal.azure.com/)
+
+To learn about all the benefits, see the article on [F5 BIG-IP and Azure AD integration](./f5-aad-integration.md) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
+
+## Scenario description
+
+For this scenario, use an **Oracle EBS application using HTTP authorization headers** to manage access to protected content.
+
+Being legacy, the application lacks modern protocols to support a direct integration with Azure AD. The application can be modernized, but it is costly, requires careful planning, and introduces risk of potential downtime. Instead, an F5 BIG-IP Application Delivery Controller is used to bridge the gap between the legacy application and the modern ID control plane, through protocol transitioning.
+
+Having a BIG-IP in front of the app enables us to overlay the service with Azure AD pre-authentication and header-based SSO, significantly improving the overall security posture of the application.
+
+## Scenario architecture
+
+The secure hybrid access solution for this scenario is made up of several components including a multi-tiered Oracle architecture:
+
+**Oracle EBS Application:** BIG-IP published service to be protected by Azure AD SHA.
+
+**Azure AD:** Security Assertion Markup Language (SAML) Identity Provider (IdP) responsible for verification of user credentials, Conditional Access (CA), and SSO to the BIG-IP.
+
+**Oracle Internet Directory (OID):** Hosts the user database. BIG-IP checks via LDAP for authorization attributes.
+
+**Oracle AccessGate:** Validates authorization attributes through a back channel with the OID service, before issuing EBS access cookies.
+
+**BIG-IP:** Reverse proxy and SAML service provider (SP) to the application, delegating authentication to the SAML IdP before performing header-based SSO to the Oracle service.
+
+SHA for this scenario supports both SP and IdP initiated flows. The following image illustrates the SP initiated flow.
+
+![Secure hybrid access - SP initiated flow](./media/f5-big-ip-oracle/sp-initiated-flow.png)
+
+| Steps| Description |
+| -- |-|
+| 1| User connects to application endpoint (BIG-IP) |
+| 2| BIG-IP APM access policy redirects user to Azure AD (SAML IdP) |
+| 3| Azure AD pre-authenticates user and applies any enforced Conditional Access policies |
+| 4| User is redirected back to BIG-IP with issued token and claims |
+| 5| BIG-IP authenticates user and performs LDAP query for user Unique ID (UID) attribute |
+| 6| BIG-IP injects returned UID attribute as user_orclguid header in EBS session cookie request to Oracle AccessGate |
| 7| Oracle AccessGate validates UID against Oracle Internet Directory (OID) service and issues EBS access cookie |
+| 8| EBS user headers and cookie sent to application and returns the payload to the user |
+
+## Prerequisites
+
+Prior BIG-IP experience isn't necessary, but you need:
+
+* An Azure AD free subscription or above
+
+* An existing BIG-IP or [deploy a BIG-IP Virtual Edition (VE) in Azure](./f5-bigip-deployment-guide.md)
+
+* Any of the following F5 BIG-IP license SKUs
+
+ * F5 BIG-IP® Best bundle
+
+ * F5 BIG-IP Access Policy Manager™ (APM) standalone license
+
+ * F5 BIG-IP Access Policy Manager™ (APM) add-on license on an existing F5 BIG-IP® Local Traffic Manager™ (LTM)
+
+ * 90-day BIG-IP full feature [trial license](https://www.f5.com/trial/big-ip-trial.php).
+
+* User identities [synchronized](../hybrid/how-to-connect-sync-whatis.md) from an on-premises directory to Azure AD or created directly within Azure AD and flowed back to your on-premises directory
+
+* An account with Azure AD application admin [permissions](/azure/active-directory/users-groups-roles/directory-assign-admin-roles#application-administrator)
+
+* [SSL certificate](./f5-bigip-deployment-guide.md#ssl-profile) for publishing services over HTTPS
+
+* An existing Oracle EBS suite including Oracle AccessGate and an LDAP-enabled OID (Oracle Internet Directory)
+
+## BIG-IP configuration methods
+
+There are many methods to configure BIG-IP for this scenario, including two template-based options and an advanced configuration. This tutorial covers the latest Guided Configuration 16.1, which offers an Easy Button template. With the Easy Button, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for SHA. The deployment and policy management is handled directly between the APM's Guided Configuration wizard and Microsoft Graph. This rich integration between BIG-IP APM and Azure AD ensures that applications can quickly and easily support identity federation, SSO, and Azure AD Conditional Access, reducing administrative overhead.
+
+>[!NOTE]
+> All example strings or values referenced throughout this guide should be replaced with those for your actual environment.
+
+## Register Easy Button
+
+Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform](../develop/quickstart-register-app.md).
+
+A BIG-IP must also be registered as a client in Azure AD, before it is allowed to establish a trust between each SAML SP instance of a BIG-IP published application and Azure AD as the SAML IdP.
+
+1. Sign in to the [Azure AD portal](https://portal.azure.com/) with Application Administrator rights
+
+2. From the left navigation pane, select the **Azure Active Directory** service
+
+3. Under Manage, select **App registrations > New registration**
+
+4. Enter a display name for your application. For example, F5 BIG-IP Easy Button
+
+5. Specify who can use the application > **Accounts in this organizational directory only**
+
+6. Select **Register** to complete the initial app registration
+
+7. Navigate to **API permissions** and authorize the following Microsoft Graph permissions:
+
+ * Application.Read.All
+ * Application.ReadWrite.All
+ * Application.ReadWrite.OwnedBy
+ * Directory.Read.All
+ * Group.Read.All
+ * IdentityRiskyUser.Read.All
+ * Policy.Read.All
+ * Policy.ReadWrite.ApplicationConfiguration
+ * Policy.ReadWrite.ConditionalAccess
+ * User.Read.All
+
+8. Grant admin consent for your organization
+
+9. Go to **Certificates & Secrets**, generate a new **Client secret** and note it down
+
+10. Go to **Overview**, note the **Client ID** and **Tenant ID**
+
+## Configure Easy Button
+
+Initiate **Easy Button** configuration to set up a SAML Service Provider (SP) and Azure AD as an Identity Provider (IdP) for your application.
+
+1. Navigate to **Access > Guided Configuration > Microsoft Integration** and select **Azure AD Application**.
+
+ ![Screenshot for Configure Easy Button- Install the template](./media/f5-big-ip-easy-button-ldap/easy-button-template.png)
+
+2. Review the list of configuration steps and select **Next**
+
+ ![Screenshot for Configure Easy Button - List configuration steps](./media/f5-big-ip-easy-button-ldap/config-steps.png)
+
+3. Follow the sequence of steps required to publish your application.
+
+ ![Configuration steps flow](./media/f5-big-ip-easy-button-ldap/config-steps-flow.png#lightbox)
+
+### Configuration Properties
+
+The **Configuration Properties** tab creates a new application config and SSO object. Consider the **Azure Service Account Details** section to be the client application you registered in your Azure AD tenant earlier. These settings allow a BIG-IP to programmatically register a SAML application directly in your tenant, along with the properties you would normally configure manually. Easy Button does this for every BIG-IP APM service being enabled for SHA.
+
+Some of these are global settings, so they can be reused for publishing more applications, further reducing deployment time and effort.
+
+1. Provide a unique **Configuration Name** that enables an admin to easily distinguish between Easy Button configurations
+
+2. Enable **Single Sign-On (SSO) & HTTP Headers**
+
+3. Enter the **Tenant Id, Client ID**, and **Client Secret** you noted down from your registered application
+
+4. Before you select **Next**, confirm that BIG-IP can successfully connect to your tenant.
+
+ ![ Screenshot for Configuration General and Service Account properties](./media/f5-big-ip-oracle/configuration-general-and-service-account-properties.png)
+
+### Service Provider
+
+The **Service Provider** settings define the SAML SP properties for the APM instance representing the application protected through SHA.
+
+1. Enter **Host**. This is the public FQDN of the application being secured. You need a corresponding DNS record for clients to resolve this address, but using a localhost record is fine during testing
+
+2. Enter **Entity ID**. This is the identifier Azure AD will use to identify the SAML SP requesting a token
+
+ ![Screenshot for Service Provider settings](./media/f5-big-ip-oracle/service-provider-settings.png)
+
+ Next, under the optional **Security Settings**, specify whether Azure AD should encrypt issued SAML assertions. Encrypting assertions between Azure AD and the BIG-IP APM provides assurance that the content tokens can't be intercepted, and that personal or corporate data can't be compromised.
+
+3. From the **Assertion Decryption Private Key** list, select **Create New**
+
+ ![Screenshot for Configure Easy Button- Create New import](./media/f5-big-ip-oracle/configure-security-create-new.png)
+
+4. Select **OK**. This opens the **Import SSL Certificate and Keys** dialog in a new tab
+
+5. Select **PKCS 12 (IIS)** to import your certificate and private key. Once provisioned close the browser tab to return to the main tab.
+
+ ![Screenshot for Configure Easy Button- Import new cert](./media/f5-big-ip-oracle/import-ssl-certificates-and-keys.png)
+
+6. Check **Enable Encrypted Assertion**
+
+7. If you have enabled encryption, select your certificate from the **Assertion Decryption Private Key** list. This is the private key for the certificate that BIG-IP APM uses to decrypt Azure AD assertions
+
+8. If you have enabled encryption, select your certificate from the **Assertion Decryption Certificate** list. This is the certificate that BIG-IP uploads to Azure AD for encrypting the issued SAML assertions.
+
+ ![Screenshot for Service Provider security settings](./media/f5-big-ip-easy-button-ldap/service-provider-security-settings.png)
+
+### Azure Active Directory
+
+This section defines all properties that you would normally use to manually configure a new BIG-IP SAML application within your Azure AD tenant. The Easy Button wizard provides a set of pre-defined application templates for Oracle PeopleSoft, Oracle E-Business Suite, Oracle JD Edwards, and SAP ERP, as well as a generic SHA template for any other apps. In this example, select **Oracle E-Business Suite > Add**. This adds the template for Oracle E-Business Suite.
+
+![Screenshot for Azure configuration add BIG-IP application](./media/f5-big-ip-oracle/azure-configuration-add-big-ip-application.png)
+
+#### Azure Configuration
+
+1. Enter the **Display Name** of the app that the BIG-IP creates in your Azure AD tenant, and the icon that users see on the [MyApps portal](https://myapplications.microsoft.com/)
+
+2. In the **Sign On URL (optional)** enter the public FQDN of the EBS application being secured, along with the default path for the Oracle EBS homepage
+
+ ![Screenshot for Azure configuration add display info](./media/f5-big-ip-oracle/azure-configuration-add-display-info.png)
+
+3. Select the refresh icon next to the **Signing Key** and **Signing Certificate** to locate the certificate you imported earlier
+
+4. Enter the certificate's password in **Signing Key Passphrase**
+
+5. Enable **Signing Option** (optional). This ensures that BIG-IP only accepts tokens and claims that are signed by Azure AD
+
+ ![Screenshot for Azure configuration - Add signing certificates info](./media/f5-big-ip-easy-button-ldap/azure-configuration-sign-certificates.png)
+
+6. **Users and User Groups** are used to authorize access to the application. They are dynamically added from the tenant. **Add** a user or group that you can use later for testing, otherwise all access will be denied
+
+ ![Screenshot for Azure configuration - Add users and groups](./media/f5-big-ip-easy-button-ldap/azure-configuration-add-user-groups.png)
+
+#### User Attributes & Claims
+
+When a user successfully authenticates, Azure AD issues a SAML token with a default set of claims and attributes uniquely identifying the user. The **User Attributes & Claims** tab shows the default claims to issue for the new application. It also lets you configure more claims.
+
+![Screenshot for Azure configuration – User attributes & claims](./media/f5-big-ip-easy-button-ldap/user-attributes-claims.png)
+
+You can include additional Azure AD attributes if necessary, but this Oracle EBS scenario only requires the default attributes.
+
+#### Additional User Attributes
+
+The **Additional User Attributes** tab can support a variety of distributed systems requiring attributes stored in other directories for session augmentation. Attributes fetched from an LDAP source can then be injected as additional SSO headers to further control access based on roles, Partner IDs, etc.
+
+1. Enable the **Advanced Settings** option
+
+2. Check the **LDAP Attributes** check box
+
+3. Select **Create New** in **Choose Authentication Server**
+
+4. Select **Use pool** or **Direct** server connection mode depending on your setup, and provide the **Server Address** of the target LDAP service. If using a single LDAP server, select **Direct**.
+
+5. Enter **Service Port** as 3060 (Default), 3161 (Secure), or any other port your Oracle LDAP service operates on
+
+6. Enter the **Base Search DN** (distinguished name) from which to search. This search DN is used to search groups across a whole directory.
+
+7. Set the **Admin DN** to the exact distinguished name for the account the APM will use to authenticate for LDAP queries, along with its password
+
+ ![Screenshot for additional user attributes](./media/f5-big-ip-oracle/additional-user-attributes.png)
+
+8. Leave all default **LDAP Schema Attributes**
+
+ ![Screenshot for LDAP schema attributes](./media/f5-big-ip-oracle/ldap-schema-attributes.png)
+
+9. Under **LDAP Query Properties**, set the **Search Dn** to the base node of the LDAP server from which to search for user objects
+
+10. Add the name of the user object attribute that must be returned from the LDAP directory. For EBS, the default is **orclguid**.
+
+ ![Screenshot for LDAP query properties.png](./media/f5-big-ip-oracle/ldap-query-properties.png)
+
+#### Conditional Access Policy
+
+Conditional Access policies are enforced after Azure AD pre-authentication, to control access based on device, application, location, and risk signals.
+
+The **Available Policies** view, by default, will list all Conditional Access policies that do not include user-based actions.
+
+The **Selected Policies** view, by default, displays all policies targeting All cloud apps. These policies cannot be deselected or moved to the Available Policies list as they are enforced at a tenant level.
+
+To select a policy to be applied to the application being published:
+
+1. Select the desired policy in the **Available Policies** list
+
+2. Select the right arrow and move it to the **Selected Policies** list
+
+ The selected policies should either have an **Include** or **Exclude** option checked. If both options are checked, the policy is not enforced.
+
+ ![Screenshot for CA policies](./media/f5-big-ip-easy-button-ldap/conditional-access-policy.png)
+
+> [!NOTE]
+> The policy list is enumerated only once when first switching to this tab. A refresh button is available to manually force the wizard to query your tenant, but this button is displayed only when the application has been deployed.
+
+### Virtual Server Properties
+
+A virtual server is a BIG-IP data plane object represented by a virtual IP address listening for client requests to the application. Any received traffic is processed and evaluated against the APM profile associated with the virtual server, before being directed according to the policy results and settings.
+
+1. Enter **Destination Address**. This is any available IPv4/IPv6 address that the BIG-IP can use to receive client traffic. A corresponding record should also exist in DNS, enabling clients to resolve the external URL of your BIG-IP published application to this IP.
+
+2. Enter **Service Port** as *443* for HTTPS
+
+3. Check **Enable Redirect Port** and then enter **Redirect Port**. It redirects incoming HTTP client traffic to HTTPS
+
+4. Select **Client SSL Profile** to enable the virtual server for HTTPS so that client connections are encrypted over TLS. Select the client SSL profile you created as part of the prerequisites or leave the default if testing
+
+ ![Screenshot for Virtual server](./media/f5-big-ip-easy-button-ldap/virtual-server.png)
+
+### Pool Properties
+
+The **Application Pool** tab details the services behind a BIG-IP, represented as a pool containing one or more application servers.
+
+1. Choose from **Select a Pool**. Create a new pool or select an existing one
+
+2. Choose the **Load Balancing Method** as *Round Robin*
+
+3. Update the **Pool Servers**. Select an existing node or specify an IP and port for the servers hosting the Oracle EBS application.
+
+ ![Screenshot for Application pool](./media/f5-big-ip-oracle/application-pool.png)
+
+4. The **Access Gate Pool** specifies the servers Oracle EBS uses for mapping an SSO authenticated user to an Oracle E-Business Suite session. Update **Pool Servers** with the IP and port of the Oracle application servers hosting the application
+
+ ![Screenshot for AccessGate pool](./media/f5-big-ip-oracle/accessgate-pool.png)
+
+#### Single Sign-On & HTTP Headers
+
+The **Easy Button wizard** supports Kerberos, OAuth Bearer, and HTTP authorization headers for SSO to published applications. As the Oracle EBS application expects headers, enable **HTTP Headers** and enter the following properties.
+
+* **Header Operation:** replace
+* **Header Name:** USER_NAME
+* **Header Value:** %{session.sso.token.last.username}
+
+
+* **Header Operation:** replace
+* **Header Name:** USER_ORCLGUID
+* **Header Value:** %{session.ldap.last.attr.orclguid}
+
+ ![ Screenshot for SSO and HTTP headers](./media/f5-big-ip-oracle/sso-and-http-headers.png)
+
+>[!NOTE]
+>APM session variables defined within curly brackets are case-sensitive. For example, if you enter OrclGUID when the attribute name is defined as orclguid, it will cause an attribute mapping failure.
+
+### Session Management
+
+The BIG-IP's session management settings are used to define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and corresponding user info. Consult [F5 documentation](https://support.f5.com/csp/article/K18390492) for details on these settings.
+
+What isn't covered here is Single Log-Out (SLO) functionality, which ensures all sessions between the IdP, the BIG-IP, and the user agent are terminated as users sign off. When the Easy Button deploys a SAML application to your Azure AD tenant, it also populates the Logout Url with the APM's SLO endpoint. That way, IdP-initiated sign-outs from the Azure AD MyApps portal also terminate the session between the BIG-IP and a client.
+
+During deployment, the SAML federation metadata for the published application is imported from your tenant, providing the APM the SAML logout endpoint for Azure AD. This helps SP-initiated sign-outs terminate the session between a client and Azure AD.
+
+## Summary
+
+Select **Deploy** to commit all settings and verify that the application has appeared in your tenant. This last step provides a breakdown of all applied settings before they're committed. Your application should now be published and accessible via SHA, either directly via its URL or through Microsoft's application portals.
+
+## Next steps
+
+From a browser, connect to the **Oracle EBS application's external URL** or select the application's icon in the [Microsoft MyApps portal](https://myapps.microsoft.com/). After authenticating to Azure AD, you'll be redirected to the BIG-IP virtual server for the application and automatically signed in through SSO.
+
+For increased security, organizations using this pattern could also consider blocking all direct access to the application, thereby forcing a strict path through the BIG-IP.
+
+## Advanced deployment
+
+There may be cases where the Guided Configuration templates lack the flexibility to achieve more specific requirements. For those scenarios, see [Advanced Configuration for Kerberos-based SSO](./f5-big-ip-kerberos-advanced.md). Alternatively, the BIG-IP gives you the option to disable the **Guided Configuration's strict management mode**. This allows you to manually tweak your configurations, even though the bulk of your configuration is automated through the wizard-based templates.
+
+You can navigate to **Access > Guided Configuration** and select the **small padlock icon** on the far right of the row for your application's configs.
+
+![Screenshot for Configure Easy Button - Strict Management](./media/f5-big-ip-oracle/strict-mode-padlock.png)
+
+At that point, changes via the wizard UI are no longer possible, but all BIG-IP objects associated with the published instance of the application will be unlocked for direct management.
+
+> [!NOTE]
+> Re-enabling strict mode and deploying a configuration will overwrite any settings performed outside of the Guided Configuration UI, therefore we recommend the advanced configuration method for production services.
+
+## Troubleshooting
+
+There can be many factors leading to failure to access a published application. BIG-IP logging can help quickly isolate all sorts of issues with connectivity, policy violations, or misconfigured variable mappings.
+
+Start troubleshooting by increasing the log verbosity level.
+
+1. Navigate to **Access Policy > Overview > Event Logs > Settings**
+
+2. Select the row for your published application then **Edit > Access System Logs**
+
+3. Select **Debug** from the SSO list then **OK**
+
+Reproduce your issue, then inspect the logs, but remember to switch this back when finished, as verbose mode generates lots of data. If you see a BIG-IP branded error immediately after successful Azure AD pre-authentication, it's possible the issue relates to SSO from Azure AD to the BIG-IP.
+
+1. Navigate to **Access > Overview > Access reports**
+
+2. Run the report for the last hour to see whether the logs provide any clues. The **View session variables** link for your session will also help you understand whether the APM is receiving the expected claims from Azure AD
+
+If you don't see a BIG-IP error page, then the issue is probably more related to the backend request or SSO from the BIG-IP to the application.
+
+1. In that case, head to **Access Policy > Overview > Active Sessions** and select the link for your active session
+
+2. The **View Variables** link in this location may also help root cause SSO issues, particularly if the BIG-IP APM fails to obtain the right attributes
+
+See [BIG-IP APM variable assign examples](https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference](https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more info.
+
+The following command from a bash shell validates that the APM service account used for LDAP queries can successfully authenticate and query a user object:
+
+```bash
+ldapsearch -xLLL -H 'ldap://192.168.0.58' -b "CN=oraclef5,dc=contoso,dc=lds" -s sub -D "CN=f5-apm,CN=partners,DC=contoso,DC=lds" -w 'P@55w0rd!' "(cn=testuser)"
+```
+
+For more information, visit this F5 knowledge article: [Configuring LDAP remote authentication for Active Directory](https://support.f5.com/csp/article/K11072). There's also a great BIG-IP reference table to help diagnose LDAP-related issues in this [F5 knowledge article on LDAP Query](https://techdocs.f5.com/en-us/bigip-16-1-0/big-ip-access-policy-manager-authentication-methods/ldap-query.html).
active-directory Groups Activate Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/groups-activate-roles.md
na Previously updated : 10/07/2021 Last updated : 02/02/2022
If you do not require activation of a role that requires approval, you can cance
When you select **Cancel**, the request will be canceled. To activate the role again, you will have to submit a new request for activation.
+## Deactivate a role assignment
+
+When a role assignment is activated, you'll see a **Deactivate** option in the PIM portal for the role assignment. When you select **Deactivate**, there's a short time lag before the role is deactivated. Also, you can't deactivate a role assignment within five minutes after activation.
+ ## Troubleshoot ### Permissions are not granted after activating a role
active-directory Groups Assign Member Owner https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/groups-assign-member-owner.md
na Previously updated : 11/09/2021 Last updated : 02/02/2022
Azure Active Directory (Azure AD) Privileged Identity Management (PIM) can help you manage the eligibility and activation of assignments to privileged access groups in Azure AD. You can assign eligibility to members or owners of the group.
+When a role is assigned, the assignment:
+- Can't be assigned for a duration of less than five minutes
+- Can't be removed within five minutes of it being assigned
+ >[!NOTE] >Every user who is eligible for membership in or ownership of a privileged access group must have an Azure AD Premium P2 license. For more information, see [License requirements to use Privileged Identity Management](subscription-requirements.md).
active-directory Pim How To Activate Role https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-how-to-activate-role.md
Title: Activate my Azure AD roles in PIM - Azure Active Directory | Microsoft Docs
+ Title: Activate Azure AD roles in PIM - Azure Active Directory | Microsoft Docs
description: Learn how to activate Azure AD roles in Azure AD Privileged Identity Management (PIM). documentationcenter: ''
Previously updated : 10/07/2021 Last updated : 02/02/2022
-# Activate my Azure AD roles in PIM
+# Activate an Azure AD role in PIM
Azure Active Directory (Azure AD) Privileged Identity Management (PIM) simplifies how enterprises manage privileged access to resources in Azure AD and other Microsoft online services like Microsoft 365 or Microsoft Intune.
When you need to assume an Azure AD role, you can request activation by opening
![Screen to provide security verification such as a PIN code](./media/pim-resource-roles-activate-your-roles/resources-mfa-enter-code.png)
-1. After multi-factor authentication, select **Activate before proceeding**.
+1. After multifactor authentication, select **Activate before proceeding**.
![Verify my identity with MFA before role activates](./media/pim-how-to-activate-role/activate-role-mfa-banner.png)
GET https://graph.microsoft.com/beta/roleManagement/directory/roleEligibilitySch
#### HTTP response
-To save space we're showing only the response for one roles, but all eligible role assignments that you can activate will be listed.
+To save space we're showing only the response for one role, but all eligible role assignments that you can activate will be listed.
````HTTP {
You can view the status of your pending requests to activate.
## Cancel a pending request for new version
-If you do not require activation of a role that requires approval, you can cancel a pending request at any time.
+If you don't require activation of a role that requires approval, you can cancel a pending request at any time.
1. Open Azure AD Privileged Identity Management.
If you do not require activation of a role that requires approval, you can cance
1. For the role that you want to cancel, select the **Cancel** link.
- When you select Cancel, the request will be canceled. To activate the role again, you will have to submit a new request for activation.
+ When you select Cancel, the request will be canceled. To activate the role again, you'll have to submit a new request for activation.
![My request list with Cancel action highlighted](./media/pim-resource-roles-activate-your-roles/resources-my-requests-cancel.png)
+## Deactivate a role assignment
+
+When a role assignment is activated, you'll see a **Deactivate** option in the PIM portal for the role assignment. When you select **Deactivate**, there's a short time lag before the role is deactivated. Also, you can't deactivate a role assignment within five minutes after activation.
+ ## Troubleshoot portal delay ### Permissions aren't granted after activating a role
-When you activate a role in Privileged Identity Management, the activation may not instantly propagate to all portals that require the privileged role. Sometimes, even if the change is propagated, web caching in a portal may result in the change not taking effect immediately. If your activation is delayed, sign out of the portal you are trying to perform the action and then sign back in. In the Azure portal, PIM signs you out and back in automatically.
+When you activate a role in Privileged Identity Management, the activation might not instantly propagate to all portals that require the privileged role. Sometimes, even if the change is propagated, web caching in a portal may cause a delay before the change takes effect. If your activation is delayed, sign out of the portal you're trying to perform the action and then sign back in. In the Azure portal, PIM signs you out and back in automatically.
## Next steps
active-directory Pim How To Add Role To User https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-how-to-add-role-to-user.md
Previously updated : 10/07/2021 Last updated : 02/02/2022
The Azure AD Privileged Identity Management (PIM) service also allows Privileged
Privileged Identity Management supports both built-in and custom Azure AD roles. For more information on Azure AD custom roles, see [Role-based access control in Azure Active Directory](../roles/custom-overview.md).
+>[!Note]
+>When a role is assigned, the assignment:
>- Can't be assigned for a duration of less than five minutes
+>- Can't be removed within five minutes of it being assigned
+ ## Assign a role Follow these steps to make a user eligible for an Azure AD admin role.
active-directory Pim Resource Roles Activate Your Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-resource-roles-activate-your-roles.md
na Previously updated : 10/07/2021 Last updated : 02/02/2022
If you do not require activation of a role that requires approval, you can cance
![My request list with Cancel action highlighted](./media/pim-resource-roles-activate-your-roles/resources-my-requests-cancel.png)
+## Deactivate a role assignment
+
+When a role assignment is activated, you'll see a **Deactivate** option in the PIM portal for the role assignment. When you select **Deactivate**, there's a short time lag before the role is deactivated. Also, you can't deactivate a role assignment within five minutes after activation.
+ ## Troubleshoot ### Permissions are not granted after activating a role
active-directory Pim Resource Roles Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-resource-roles-assign-roles.md
na Previously updated : 09/28/2021 Last updated : 02/02/2022
Privileged Identity Management supports both built-in and custom Azure roles. For
You can use the Azure attribute-based access control (Azure ABAC) preview to place resource conditions on eligible role assignments using Privileged Identity Management (PIM). With PIM, your end users must activate an eligible role assignment to get permission to perform certain actions. Using Azure attribute-based access control conditions in PIM enables you not only to limit a user's role permissions to a resource using fine-grained conditions, but also to use PIM to secure the role assignment with a time-bound setting, approval workflow, audit trail, and so on. For more information, see [Azure attribute-based access control public preview](../../role-based-access-control/conditions-overview.md).
+>[!Note]
+>When a role is assigned, the assignment:
>- Can't be assigned for a duration of less than five minutes
+>- Can't be removed within five minutes of it being assigned
+ ## Assign a role Follow these steps to make a user eligible for an Azure resource role.
active-directory Concept Provisioning Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/concept-provisioning-logs.md
Use the following table to better understand how to resolve errors that you find
||| |Conflict, EntryConflict|Correct the conflicting attribute values in either Azure AD or the application. Or, review your matching attribute configuration if the conflicting user account was supposed to be matched and taken over. Review the [documentation](../app-provisioning/customize-application-attributes.md) for more information on configuring matching attributes.| |TooManyRequests|The target app rejected this attempt to update the user because it's overloaded and receiving too many requests. There's nothing to do. This attempt will automatically be retried. Microsoft has also been notified of this issue.|
-|InternalServerError |The target app returned an unexpected error. A service issue with the target application might be preventing this from working. This attempt will automatically be retired in 40 minutes.|
+|InternalServerError |The target app returned an unexpected error. A service issue with the target application might be preventing this from working. This attempt will automatically be retried in 40 minutes.|
|InsufficientRights, MethodNotAllowed, NotPermitted, Unauthorized| Azure AD authenticated with the target application but was not authorized to perform the update. Review any instructions that the target application has provided, along with the respective application [tutorial](../saas-apps/tutorial-list.md).| |UnprocessableEntity|The target application returned an unexpected response. The configuration of the target application might not be correct, or a service issue with the target application might be preventing this from working.|
-|WebExceptionProtocolError |An HTTP protocol error occurred in connecting to the target application. There is nothing to do. This attempt will automatically be retired in 40 minutes.|
+|WebExceptionProtocolError |An HTTP protocol error occurred in connecting to the target application. There is nothing to do. This attempt will automatically be retried in 40 minutes.|
|InvalidAnchor|A user that was previously created or matched by the provisioning service no longer exists. Ensure that the user exists. To force a new matching of all users, use the Microsoft Graph API to [restart the job](/graph/api/synchronization-synchronizationjob-restart?tabs=http&view=graph-rest-beta&preserve-view=true). <br><br>Restarting provisioning will trigger an initial cycle, which can take time to complete. Restarting provisioning also deletes the cache that the provisioning service uses to operate. That means all users and groups in the tenant will have to be evaluated again, and certain provisioning events might be dropped.| |NotImplemented | The target app returned an unexpected response. The configuration of the app might not be correct, or a service issue with the target app might be preventing this from working. Review any instructions that the target application has provided, along with the respective application [tutorial](../saas-apps/tutorial-list.md). | |MandatoryFieldsMissing, MissingValues |The user could not be created because required values are missing. Correct the missing attribute values in the source record, or review your matching attribute configuration to ensure that the required fields are not omitted. [Learn more](../app-provisioning/customize-application-attributes.md) about configuring matching attributes.|
Use the following table to better understand how to resolve errors that you find
* [Check the status of user provisioning](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) * [Problem configuring user provisioning to an Azure AD Gallery application](../app-provisioning/application-provisioning-config-problem.md)
-* [Graph API for provisioning logs](/graph/api/resources/provisioningobjectsummary)
+* [Graph API for provisioning logs](/graph/api/resources/provisioningobjectsummary)
active-directory Howto Manage Inactive User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/howto-manage-inactive-user-accounts.md
To generate a lastSignInDateTime timestamp, you need a successful sign-in. Becau
### For how long is the last sign-in retained?
-The last sign-in date is associated with the user object. The value is retained until the sign-in of the user.
+The last sign-in date is associated with the user object. The value is retained until the next sign-in of the user.
## Next steps
active-directory Cornerstone Ondemand Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/cornerstone-ondemand-provisioning-tutorial.md
This tutorial demonstrates the steps to perform in Cornerstone OnDemand and Azure Active Directory (Azure AD) to configure Azure AD to automatically provision and deprovision users or groups to Cornerstone OnDemand. > [!NOTE]
+> This Cornerstone OnDemand automatic provisioning service is deprecated and support will end soon.
> This tutorial describes a connector that's built on top of the Azure AD user provisioning service. For information on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to software-as-a-service (SaaS) applications with Azure Active Directory](../app-provisioning/user-provisioning.md). ## Prerequisites
active-directory Kronos Workforce Dimensions Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/kronos-workforce-dimensions-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Kronos Workforce Dimensions | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Kronos Workforce Dimensions'
description: Learn how to configure single sign-on between Azure Active Directory and Kronos Workforce Dimensions.
Previously updated : 07/19/2021 Last updated : 01/27/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Kronos Workforce Dimensions
+# Tutorial: Azure AD SSO integration with Kronos Workforce Dimensions
In this tutorial, you'll learn how to integrate Kronos Workforce Dimensions with Azure Active Directory (Azure AD). When you integrate Kronos Workforce Dimensions with Azure AD, you can:
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
* Kronos Workforce Dimensions single sign-on (SSO) enabled subscription.
+> [!NOTE]
+> This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
+
## Scenario description

In this tutorial, you configure and test Azure AD SSO in a test environment.
In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
-Once you configure Kronos Workforce Dimensions, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
+Once you configure Kronos Workforce Dimensions, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Lucid Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/lucid-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Lucid (All Products) | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Lucid (All Products)'
description: Learn how to configure single sign-on between Azure Active Directory and Lucid (All Products).
Previously updated : 11/04/2020 Last updated : 01/27/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Lucid (All Products)
+# Tutorial: Azure AD SSO integration with Lucid (All Products)
In this tutorial, you'll learn how to integrate Lucid (All Products) with Azure Active Directory (Azure AD). When you integrate Lucid (All Products) with Azure AD, you can:
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
* Lucid (All Products) single sign-on (SSO) enabled subscription.
+> [!NOTE]
+> This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
+
## Scenario description

In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Lucid (All Products) supports **SP and IDP** initiated SSO
-* Lucid (All Products) supports **Just In Time** user provisioning
+* Lucid (All Products) supports **SP and IDP** initiated SSO.
+* Lucid (All Products) supports **Just In Time** user provisioning.
+
> [!NOTE]
> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-## Adding Lucid (All Products) from the gallery
+## Add Lucid (All Products) from the gallery
To configure the integration of Lucid (All Products) into Azure AD, you need to add Lucid (All Products) from the gallery to your list of managed SaaS apps.
To configure the integration of Lucid (All Products) into Azure AD, you need to
1. In the **Add from the gallery** section, type **Lucid (All Products)** in the search box.
1. Select **Lucid (All Products)** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.

## Configure and test Azure AD SSO for Lucid (All Products)
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the **Lucid (All Products)** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following step:
In the **Reply URL** text box, type a URL using the following pattern: `https://lucid.app/saml/sso/<TENANT_NAME>?idpHash=<HASH_ID>`
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up Lucid (All Products)** section, copy the appropriate URL(s) based on your requirement.

   ![Copy configuration URLs](common/copy-configuration-urls.png)
+
### Create an Azure AD test user

In this section, you'll create a test user in the Azure portal called B.Simon.
In this section, you test your Azure AD single sign-on configuration with follow
#### SP initiated:
-1. Click on **Test this application** in Azure portal. This will redirect to Lucid (All Products) Sign on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to the Lucid (All Products) sign-on URL where you can initiate the login flow.
-1. Go to Lucid (All Products) Sign-on URL directly and initiate the login flow from there.
+* Go to Lucid (All Products) Sign-on URL directly and initiate the login flow from there.
#### IDP initiated:
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the Lucid (All Products) for which you set up the SSO
-
-You can also use Microsoft Access Panel to test the application in any mode. When you click the Lucid (All Products) tile in the Access Panel, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Lucid (All Products) for which you set up the SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Lucid (All Products) for which you set up the SSO.
+You can also use Microsoft My Apps to test the application in any mode. When you click the Lucid (All Products) tile in My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you're automatically signed in to the Lucid (All Products) instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next Steps
-Once you configure Lucid (All Products), you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure Lucid (All Products), you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Mondaycom Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/mondaycom-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with monday.com | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with monday.com'
description: Learn how to configure single sign-on between Azure Active Directory and monday.com.
Previously updated : 02/08/2021 Last updated : 01/28/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with monday.com
+# Tutorial: Azure AD SSO integration with monday.com
In this tutorial, you'll learn how to integrate monday.com with Azure Active Directory (Azure AD). When you integrate monday.com with Azure AD, you can:
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
* monday.com single sign-on (SSO) enabled subscription.
+> [!NOTE]
+> This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
+
## Scenario description

In this tutorial, you configure and test Azure AD SSO in a test environment.
-* monday.com supports **SP and IDP** initiated SSO
+* monday.com supports **SP and IDP** initiated SSO.
* monday.com supports [**automated** user provisioning and deprovisioning](mondaycom-provisioning-tutorial.md) (recommended).
-* monday.com supports **Just In Time** user provisioning
+* monday.com supports **Just In Time** user provisioning.
## Add monday.com from the gallery
Follow these steps to enable Azure AD SSO in the Azure portal.
c. After the metadata file is successfully uploaded, the **Identifier** and **Reply URL** values get auto populated in the Basic SAML Configuration section.

> [!Note]
- > If the **Identifier** and **Reply URL** values do not get populated automatically, then fill in the values manually. The **Identifier** and the **Reply URL** are the same and value is in the following pattern: `https://<your-domain>.monday.com/saml/saml_callback`
+ > If the **Identifier** and **Reply URL** values do not get populated automatically, then fill in the values manually. The **Identifier** and the **Reply URL** are the same and value is in the following pattern: `https://<YOUR_DOMAIN>.monday.com/saml/saml_callback`
1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
![Setup configuration](common/setup-sso.png)
-1. If you want to setup monday.com manually, open a new web browser window and sign in to monday.com as an administrator and perform the following steps:
+1. If you want to set up monday.com manually, open a new web browser window and sign in to monday.com as an administrator and perform the following steps:
-1. Go to the **Profile** on the top right corner of page and click on **Admin**.
+1. Go to the **Profile** on the top-right corner of the page and click on **Admin**.
- ![Screenshot shows the Admin profile selected.](./media/mondaycom-tutorial/configuration-1.png)
+ ![Screenshot shows the Admin profile selected.](./media/mondaycom-tutorial/admin.png)
1. Select **Security** and make sure to click on **Open** next to SAML.
- ![Screenshot shows the Security tab with the option to Open next to SAML.](./media/mondaycom-tutorial/configuration-2.png)
+ ![Screenshot shows the Security tab with the option to Open next to SAML.](./media/mondaycom-tutorial/security.png)
1. Fill in the details below from your IDP.
- ![Screenshot shows the SAML provider where you can enter information from your I D P.](./media/mondaycom-tutorial/configuration-3.png)
+ ![Screenshot shows the SAML provider where you can enter information from your I D P.](./media/mondaycom-tutorial/configuration.png)
> [!NOTE]
- > For more details refer [this](https://support.monday.com/hc/articles/360000460605-SAML-Single-Sign-on?abcb=34642) article
+ > For more details, refer to [this](https://support.monday.com/hc/articles/360000460605-SAML-Single-Sign-on?abcb=34642) article.
### Create monday.com test user
You can also use Microsoft My Apps to test the application in any mode. When you
## Next steps
-Once you configure monday.com, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure monday.com, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Oracle Cloud Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/oracle-cloud-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Oracle Cloud Infrastructure Console | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Oracle Cloud Infrastructure Console'
description: Learn how to configure single sign-on between Azure Active Directory and Oracle Cloud Infrastructure Console.
Previously updated : 10/04/2020 Last updated : 01/28/2022
-# Tutorial: Integrate Oracle Cloud Infrastructure Console with Azure Active Directory
+# Tutorial: Azure AD SSO integration with Oracle Cloud Infrastructure Console
In this tutorial, you'll learn how to integrate Oracle Cloud Infrastructure Console with Azure Active Directory (Azure AD). When you integrate Oracle Cloud Infrastructure Console with Azure AD, you can:
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
* Oracle Cloud Infrastructure Console single sign-on (SSO) enabled subscription.
+> [!NOTE]
+> This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
+
## Scenario description

In this tutorial, you configure and test Azure AD SSO in a test environment.
In this tutorial, you configure and test Azure AD SSO in a test environment.
* Oracle Cloud Infrastructure Console supports **SP** initiated SSO.
* Oracle Cloud Infrastructure Console supports [**Automated** user provisioning and deprovisioning](oracle-cloud-infrastructure-console-provisioning-tutorial.md) (recommended).
-## Adding Oracle Cloud Infrastructure Console from the gallery
+## Add Oracle Cloud Infrastructure Console from the gallery
To configure the integration of Oracle Cloud Infrastructure Console into Azure AD, you need to add Oracle Cloud Infrastructure Console from the gallery to your list of managed SaaS apps.
Configure and test Azure AD SSO with Oracle Cloud Infrastructure Console using a
To configure and test Azure AD SSO with Oracle Cloud Infrastructure Console, perform the following steps:

1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** to enable your users to use this feature.
- 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** to test Azure AD single sign-on with B. Simon.
- 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** to enable B. Simon to use Azure AD single sign-on.
-1. **[Configure Oracle Cloud Infrastructure Console](#configure-oracle-cloud-infrastructure-console)** to configure the SSO settings on application side.
- 1. **[Create Oracle Cloud Infrastructure Console test user](#create-oracle-cloud-infrastructure-console-test-user)** to have a counterpart of B. Simon in Oracle Cloud Infrastructure Console that is linked to the Azure AD representation of user.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** to test Azure AD single sign-on with B. Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** to enable B. Simon to use Azure AD single sign-on.
+1. **[Configure Oracle Cloud Infrastructure Console SSO](#configure-oracle-cloud-infrastructure-console-sso)** to configure the SSO settings on application side.
+ 1. **[Create Oracle Cloud Infrastructure Console test user](#create-oracle-cloud-infrastructure-console-test-user)** to have a counterpart of B. Simon in Oracle Cloud Infrastructure Console that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** to verify whether the configuration works.
-### Configure Azure AD SSO
+## Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal.

1. In the Azure portal, on the **Oracle Cloud Infrastructure Console** application integration page, find the **Manage** section and select **Single sign-on**.
1. On the **Select a Single sign-on method** page, select **SAML**.
-1. On the **Set up Single Sign-On with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** page, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following steps:
> [!NOTE]
> You will get the Service Provider metadata file from the **Configure Oracle Cloud Infrastructure Console Single Sign-On** section of the tutorial.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. Once the metadata file is successfully uploaded, the **Identifier** and **Reply URL** values get auto populated in the **Basic SAML Configuration** section textbox.

> [!NOTE]
- > If the **Identifier** and **Reply URL** values do not get auto polulated, then fill in the values manually according to your requirement.
+ > If the **Identifier** and **Reply URL** values do not get auto populated, then fill in the values manually according to your requirement.
In the **Sign-on URL** text box, type a URL using the following pattern: `https://console.<REGIONNAME>.oraclecloud.com/`
Follow these steps to enable Azure AD SSO in the Azure portal.
1. Click **Save**.
- ![image2](./media/oracle-cloud-tutorial/config07.png)
+ ![Screenshot showing image2](./media/oracle-cloud-tutorial/attributes.png)
- ![image3](./media/oracle-cloud-tutorial/config11.png)
+ ![Screenshot showing image3](./media/oracle-cloud-tutorial/claims.png)
1. Click the **pen** next to **Groups returned in claim**.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. Click **Save**.
- ![image4](./media/oracle-cloud-tutorial/config08.png)
+ ![Screenshot showing image4](./media/oracle-cloud-tutorial/groups.png)
1. On the **Set up Oracle Cloud Infrastructure Console** section, copy the appropriate URL(s) based on your requirement.
In this section, you'll enable B. Simon to use Azure single sign-on by granting
1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure Oracle Cloud Infrastructure Console
+## Configure Oracle Cloud Infrastructure Console SSO
1. In a different web browser window, sign in to Oracle Cloud Infrastructure Console as an Administrator.
1. Click on the left side of the menu and click on **Identity**, then navigate to **Federation**.
- ![Configuration1](./media/oracle-cloud-tutorial/config01.png)
+ ![Screenshot showing Configuration1](./media/oracle-cloud-tutorial/menu.png)
1. Save the **Service Provider metadata file** by clicking the **Download this document** link and upload it into the **Basic SAML Configuration** section of Azure portal and then click on **Add Identity Provider**.
- ![Configuration2](./media/oracle-cloud-tutorial/config02.png)
+ ![Screenshot showing Configuration2](./media/oracle-cloud-tutorial/metadata.png)
1. On the **Add Identity Provider** pop-up, perform the following steps:
- ![Configuration3](./media/oracle-cloud-tutorial/config03.png)
+ ![Screenshot showing Configuration3](./media/oracle-cloud-tutorial/file.png)
1. In the **NAME** text box, enter your name.
In this section, you'll enable B. Simon to use Azure single sign-on by granting
1. Click **Continue** and on the **Edit Identity Provider** section perform the following steps:
- ![Configuration4](./media/oracle-cloud-tutorial/configure-09.png)
+ ![Screenshot showing Configuration4](./media/oracle-cloud-tutorial/mapping.png)
1. The **IDENTITY PROVIDER GROUP** should be selected as Azure AD Group Object ID. The GROUP ID should be the GUID of the group from Azure Active Directory. The group needs to be mapped to the corresponding group in the **OCI GROUP** field.
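If you prefer to look up that GUID from the command line, here is a minimal sketch with the Azure CLI; the group name is a placeholder:

```azurecli-interactive
# Sketch: fetch the object ID (GUID) of an Azure AD group. "My OCI Admins" is a placeholder;
# on older Azure CLI versions the property may be named objectId instead of id.
az ad group show --group "My OCI Admins" --query id --output tsv
```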
In this section, you'll enable B. Simon to use Azure single sign-on by granting
Oracle Cloud Infrastructure Console supports just-in-time provisioning, which is enabled by default. There is no action item for you in this section. A new user does not get created during an attempt to access the application, and there is no need to create the user manually.
-### Test SSO
+## Test SSO
-When you select the Oracle Cloud Infrastructure Console tile in the Access Panel, you will be redirected to the Oracle Cloud Infrastructure Console sign in page. Select the **IDENTITY PROVIDER** from the drop-down menu and click **Continue** as shown below to sign in. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+When you select the Oracle Cloud Infrastructure Console tile in My Apps, you will be redirected to the Oracle Cloud Infrastructure Console sign-in page. Select the **IDENTITY PROVIDER** from the drop-down menu and click **Continue** as shown below to sign in. For more information about My Apps, see [Introduction to My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-![Configuration](./media/oracle-cloud-tutorial/config10.png)
+![Screenshot showing Configuration](./media/oracle-cloud-tutorial/tenant.png)
## Next steps
-Once you configure the Oracle Cloud Infrastructure Console, you can enforce session controls, which protect exfiltration and infiltration of your organization's sensitive data in real-time. Session controls extend from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
+Once you configure the Oracle Cloud Infrastructure Console, you can enforce session controls, which protect exfiltration and infiltration of your organization's sensitive data in real time. Session controls extend from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Verifiable Credentials Configure Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant.md
The following diagram illustrates the Azure AD Verifiable Credentials architectu
![Diagram that illustrates the Azure AD Verifiable Credentials architecture.](media/verifiable-credentials-configure-tenant/verifiable-credentials-architecture.png)
+See a [video walkthrough](https://www.youtube.com/watch?v=8jqjHjQo-3c) of setting up the Azure AD Verifiable Credential service, including all prerequisites, like Azure AD and an Azure subscription.
+
## Prerequisites

- If you don't have an Azure subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
aks Open Service Mesh About https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/open-service-mesh-about.md
OSM provides the following capabilities and features:
- Define and execute fine-grained access control policies for services.
- Monitor and debug services using observability and insights into application metrics.
- Integrate with external certificate management.
+- Integrate with existing ingress solutions such as the [Azure Gateway Ingress Controller][agic], [NGINX][nginx], and [Contour][contour]. For more details on how ingress works with OSM, see [Using Ingress to manage external access to services within the cluster][osm-ingress]. For an example on integrating OSM with Contour for ingress, see [Ingress with Contour][osm-contour]. For an example on integrating OSM with ingress controllers that use the `networking.k8s.io/v1` API, such as NGINX, see [Ingress with Kubernetes Nginx Ingress Controller][osm-nginx]. A minimal sketch follows this list.
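As a rough illustration of the pattern those guides cover, the following minimal sketch routes external traffic to a mesh-managed service through a standard `networking.k8s.io/v1` Ingress resource. All names, the namespace, and the port are placeholder assumptions, and the annotation assumes the AGIC controller:

```azurecli-interactive
# Sketch: a plain Kubernetes Ingress pointing at a mesh-managed service.
# my-app-ingress, my-app-namespace, my-app, and port 8080 are placeholders.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  namespace: my-app-namespace
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 8080
EOF
```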
## Example scenarios
The OSM AKS add-on has the following limitations:
* [Iptables redirection][ip-tables-redirection] for port IP address and port range exclusion must be enabled using `kubectl patch` after installation. For more details, see [iptables redirection][ip-tables-redirection].
* Pods that are onboarded to the mesh that need access to IMDS, Azure DNS, or the Kubernetes API server must have their IP addresses added to the global list of excluded outbound IP ranges using [Global outbound IP range exclusions][global-exclusion]; a minimal sketch of such a patch follows this list.
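A minimal sketch of that global exclusion patch follows; the CIDRs shown are the well-known IMDS and Azure DNS/wire server addresses, used here as examples rather than a prescribed list:

```azurecli-interactive
# Sketch: exclude IMDS (169.254.169.254) and the Azure wire server/DNS (168.63.129.16)
# from outbound traffic interception for pods in the mesh.
kubectl patch meshconfig osm-mesh-config -n kube-system \
  -p '{"spec":{"traffic":{"outboundIPRangeExclusionList":["169.254.169.254/32","168.63.129.16/32"]}}}' \
  --type=merge
```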
+## Next steps
+
+After enabling the OSM add-on using the [Azure CLI][osm-azure-cli] or a [Bicep template][osm-bicep], you can:
+* [Deploy a sample application][osm-deploy-sample-app]
+* [Onboard an existing application][osm-onboard-app]
+
+[ip-tables-redirection]: https://release-v1-0.docs.openservicemesh.io/docs/guides/traffic_management/iptables_redirection/
+[global-exclusion]: https://release-v1-0.docs.openservicemesh.io/docs/guides/traffic_management/iptables_redirection/#global-outbound-ip-range-exclusions
[osm-azure-cli]: open-service-mesh-deploy-addon-az-cli.md [osm-bicep]: open-service-mesh-deploy-addon-bicep.md
+[osm-deploy-sample-app]: https://release-v1-0.docs.openservicemesh.io/docs/getting_started/install_apps/
+[osm-onboard-app]: https://release-v1-0.docs.openservicemesh.io/docs/guides/app_onboarding/
[ip-tables-redirection]: https://docs.openservicemesh.io/docs/guides/traffic_management/iptables_redirection/
-[global-exclusion]: https://docs.openservicemesh.io/docs/guides/traffic_management/iptables_redirection/#global-outbound-ip-range-exclusions
+[global-exclusion]: https://docs.openservicemesh.io/docs/guides/traffic_management/iptables_redirection/#global-outbound-ip-range-exclusions
+[agic]: ../application-gateway/ingress-controller-overview.md
+[nginx]: https://github.com/kubernetes/ingress-nginx
+[contour]: https://projectcontour.io/
+[osm-ingress]: https://release-v1-0.docs.openservicemesh.io/docs/guides/traffic_management/ingress/
+[osm-contour]: https://release-v1-0.docs.openservicemesh.io/docs/demos/ingress_contour
+[osm-nginx]: https://release-v1-0.docs.openservicemesh.io/docs/demos/ingress_k8s_nginx
aks Open Service Mesh Azure Application Gateway Ingress https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/open-service-mesh-azure-application-gateway-ingress.md
- Title: Using Azure Application Gateway Ingress
-description: How to use Azure Application Gateway Ingress with Open Service Mesh
-- Previously updated : 8/26/2021---
-# Deploy an application managed by Open Service Mesh (OSM) using Azure Application Gateway ingress AKS add-on
-
-In this tutorial, you will:
-
-> [!div class="checklist"]
->
-> - View the current OSM cluster configuration
-> - Create the namespace(s) for OSM to manage deployed applications in the namespace(s)
-> - Onboard the namespaces to be managed by OSM
-> - Deploy the sample application
-> - Verify the application running inside the AKS cluster
-> - Create an Azure Application Gateway to be used as the ingress controller for the application
-> - Expose a service via the Azure Application Gateway ingress to the internet
-
-## Before you begin
-
-The steps detailed in this walkthrough assume that you have previously enabled the OSM AKS add-on for your AKS cluster. If not, review the article [Deploy the OSM AKS add-on](./open-service-mesh-deploy-addon-az-cli.md) before proceeding. Also, your AKS cluster needs to be version Kubernetes `1.19+` and above, have Kubernetes RBAC enabled, have established a `kubectl` connection with the cluster (if you need help with any of these items, see the [AKS quickstart](./kubernetes-walkthrough.md)), and have installed the AKS OSM add-on.
-
-You must have the following resources installed:
-- The Azure CLI, version 2.20.0 or later
-- OSM version v0.11.1 or later
-- JSON processor "jq" version 1.6+
-
-## View and verify the current OSM cluster configuration
-
-Once the OSM add-on for AKS has been enabled on the AKS cluster, you can view the current configuration parameters in the osm-mesh-config resource. Run the following command to view the properties:
-
-```azurecli-interactive
-kubectl get meshconfig osm-mesh-config -n kube-system -o yaml
-```
-
-Output shows the current OSM MeshConfig for the cluster.
-
-```
-apiVersion: config.openservicemesh.io/v1alpha1
-kind: MeshConfig
-metadata:
- creationTimestamp: "0000-00-00A00:00:00A"
- generation: 1
- name: osm-mesh-config
- namespace: kube-system
- resourceVersion: "2494"
- uid: 6c4d67f3-c241-4aeb-bf4f-b029b08faa31
-spec:
- certificate:
- serviceCertValidityDuration: 24h
- featureFlags:
- enableEgressPolicy: true
- enableMulticlusterMode: false
- enableWASMStats: true
- observability:
- enableDebugServer: true
- osmLogLevel: info
- tracing:
- address: jaeger.osm-system.svc.cluster.local
- enable: false
- endpoint: /api/v2/spans
- port: 9411
- sidecar:
- configResyncInterval: 0s
- enablePrivilegedInitContainer: false
- envoyImage: mcr.microsoft.com/oss/envoyproxy/envoy:v1.18.3
- initContainerImage: mcr.microsoft.com/oss/openservicemesh/init:v0.9.1
- logLevel: error
- maxDataPlaneConnections: 0
- resources: {}
- traffic:
- enableEgress: true
- enablePermissiveTrafficPolicyMode: true
- inboundExternalAuthorization:
- enable: false
- failureModeAllow: false
- statPrefix: inboundExtAuthz
- timeout: 1s
- useHTTPSIngress: false
-```
-
-Notice the **enablePermissiveTrafficPolicyMode** is configured to **true**. Permissive traffic policy mode in OSM is a mode where the [SMI](https://smi-spec.io/) traffic policy enforcement is bypassed. In this mode, OSM automatically discovers services that are a part of the service mesh and programs traffic policy rules on each Envoy proxy sidecar to be able to communicate with these services.
-
-## Create namespaces for the application
-
-In this tutorial we will be using the OSM bookstore application that has the following application components:
-- `bookbuyer`
-- `bookthief`
-- `bookstore`
-- `bookwarehouse`
-
-Create namespaces for each of these application components.
-
-```azurecli-interactive
-for i in bookstore bookbuyer bookthief bookwarehouse; do kubectl create ns $i; done
-```
-
-You should see the following output:
-
-```Output
-namespace/bookstore created
-namespace/bookbuyer created
-namespace/bookthief created
-namespace/bookwarehouse created
-```
-
-## Onboard the namespaces to be managed by OSM
-
-When you add the namespaces to the OSM mesh, this will allow the OSM controller to automatically inject the Envoy sidecar proxy containers with your application. Run the following command to onboard the OSM bookstore application namespaces.
-
-```azurecli-interactive
-osm namespace add bookstore bookbuyer bookthief bookwarehouse
-```
-
-You should see the following output:
-
-```Output
-Namespace [bookstore] successfully added to mesh [osm]
-Namespace [bookbuyer] successfully added to mesh [osm]
-Namespace [bookthief] successfully added to mesh [osm]
-Namespace [bookwarehouse] successfully added to mesh [osm]
-```
-
-## Deploy the Bookstore application
-
-```azurecli-interactive
-SAMPLE_VERSION=v0.11
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-$SAMPLE_VERSION/docs/example/manifests/apps/bookbuyer.yaml
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-$SAMPLE_VERSION/docs/example/manifests/apps/bookthief.yaml
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-$SAMPLE_VERSION/docs/example/manifests/apps/bookstore.yaml
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-$SAMPLE_VERSION/docs/example/manifests/apps/bookwarehouse.yaml
-```
-
-All of the deployment outputs are summarized below.
-
-```Output
-serviceaccount/bookbuyer created
-service/bookbuyer created
-deployment.apps/bookbuyer created
-
-serviceaccount/bookthief created
-service/bookthief created
-deployment.apps/bookthief created
-
-service/bookstore created
-serviceaccount/bookstore created
-deployment.apps/bookstore created
-
-serviceaccount/bookwarehouse created
-service/bookwarehouse created
-deployment.apps/bookwarehouse created
-```
-
-## Update the `Bookbuyer` Service
-
-Update the `bookbuyer` service to the correct inbound port configuration with the following service manifest.
-
-```azurecli-interactive
-kubectl apply -f - <<EOF
-apiVersion: v1
-kind: Service
-metadata:
- name: bookbuyer
- namespace: bookbuyer
- labels:
- app: bookbuyer
-spec:
- ports:
- - port: 14001
- name: inbound-port
- selector:
- app: bookbuyer
-EOF
-```
-
-## Verify the Bookstore application
-
-As of now we have deployed the bookstore multi-container application, but it is only accessible from within the AKS cluster. Later we will add the Azure Application Gateway ingress controller to expose the application outside the AKS cluster. To verify that the application is running inside the cluster, we will use a port forward to view the `bookbuyer` component UI.
-
-First let's get the `bookbuyer` pod's name
-
-```azurecli-interactive
-kubectl get pod -n bookbuyer
-```
-
-You should see output similar to the following. Your `bookbuyer` pod will have a unique name appended.
-
-```Output
-NAME READY STATUS RESTARTS AGE
-bookbuyer-7676c7fcfb-mtnrz 2/2 Running 0 7m8s
-```
-
-Once we have the pod's name, we can now use the port-forward command to set up a tunnel from our local system to the application inside the AKS cluster. Run the following command to set up the port forward for the local system port 8080. Again use your specific `bookbuyer` pod name.
-
-```azurecli-interactive
-kubectl port-forward bookbuyer-7676c7fcfb-mtnrz -n bookbuyer 8080:14001
-```
-
-You should see output similar to this.
-
-```Output
-Forwarding from 127.0.0.1:8080 -> 14001
-Forwarding from [::1]:8080 -> 14001
-```
-
-While the port forwarding session is in place, navigate to the following url from a browser `http://localhost:8080`. You should now be able to see the `bookbuyer` application UI in the browser similar to the image below.
-
-![OSM bookbuyer app for App Gateway UI image](./media/aks-osm-addon/osm-agic-bookbuyer-img.png)
-
-## Create an Azure Application Gateway to expose the `bookbuyer` application
-
-> [!NOTE]
-> The following directions will create a new instance of the Azure Application Gateway to be used for ingress. If you have an existing Azure Application Gateway you wish to use, skip to the section for enabling the Application Gateway Ingress Controller add-on.
-
-### Deploy a new Application Gateway
-
-> [!NOTE]
-> We are referencing existing documentation for enabling the Application Gateway Ingress Controller add-on for an existing AKS cluster. Some modifications have been made to suit the OSM materials. More detailed documentation on the subject can be found [here](../application-gateway/tutorial-ingress-controller-add-on-existing.md).
-
-You'll now deploy a new Application Gateway, to simulate having an existing Application Gateway that you want to use to load balance traffic to your AKS cluster, _myCluster_. The name of the Application Gateway will be _myApplicationGateway_, but you will need to first create a public IP resource, named _myPublicIp_, and a new virtual network called _myVnet_ with address space 11.0.0.0/8, and a subnet with address space 11.1.0.0/16 called _mySubnet_, and deploy your Application Gateway in _mySubnet_ using _myPublicIp_.
-
-When using an AKS cluster and Application Gateway in separate virtual networks, the address spaces of the two virtual networks must not overlap. The default address space that an AKS cluster deploys in is 10.0.0.0/8, so we set the Application Gateway virtual network address prefix to 11.0.0.0/8.
-
-```azurecli-interactive
-az group create --name myResourceGroup --location eastus2
-az network public-ip create -n myPublicIp -g MyResourceGroup --allocation-method Static --sku Standard
-az network vnet create -n myVnet -g myResourceGroup --address-prefix 11.0.0.0/8 --subnet-name mySubnet --subnet-prefix 11.1.0.0/16
-az network application-gateway create -n myApplicationGateway -l eastus2 -g myResourceGroup --sku Standard_v2 --public-ip-address myPublicIp --vnet-name myVnet --subnet mySubnet
-```
-
-> [!NOTE]
-> Application Gateway Ingress Controller (AGIC) add-on **only** supports Application Gateway v2 SKUs (Standard and WAF), and **not** the Application Gateway v1 SKUs.
-
-### Enable the AGIC add-on for an existing AKS cluster through Azure CLI
-
-If you'd like to continue using Azure CLI, you can continue to enable the AGIC add-on in the AKS cluster you created, _myCluster_, and specify the AGIC add-on to use the existing Application Gateway you created, _myApplicationGateway_.
-
-```azurecli-interactive
-appgwId=$(az network application-gateway show -n myApplicationGateway -g myResourceGroup -o tsv --query "id")
-az aks enable-addons -n myCluster -g myResourceGroup -a ingress-appgw --appgw-id $appgwId
-```
-
-You can verify the Azure Application Gateway AKS add-on has been enabled by the following command.
-
-```azurecli-interactive
-az aks list -g osm-aks-rg -o json | jq -r .[].addonProfiles.ingressApplicationGateway.enabled
-```
-
-This command should show the output as `true`.
-
-### Peer the two virtual networks together
-
-Since we deployed the AKS cluster in its own virtual network and the Application Gateway in another virtual network, you'll need to peer the two virtual networks together in order for traffic to flow from the Application Gateway to the pods in the cluster. Peering the two virtual networks requires running the Azure CLI command two separate times, to ensure that the connection is bi-directional. The first command will create a peering connection from the Application Gateway virtual network to the AKS virtual network; the second command will create a peering connection in the other direction.
-
-```azurecli-interactive
-nodeResourceGroup=$(az aks show -n myCluster -g myResourceGroup -o tsv --query "nodeResourceGroup")
-aksVnetName=$(az network vnet list -g $nodeResourceGroup -o tsv --query "[0].name")
-
-aksVnetId=$(az network vnet show -n $aksVnetName -g $nodeResourceGroup -o tsv --query "id")
-az network vnet peering create -n AppGWtoAKSVnetPeering -g myResourceGroup --vnet-name myVnet --remote-vnet $aksVnetId --allow-vnet-access
-
-appGWVnetId=$(az network vnet show -n myVnet -g myResourceGroup -o tsv --query "id")
-az network vnet peering create -n AKStoAppGWVnetPeering -g $nodeResourceGroup --vnet-name $aksVnetName --remote-vnet $appGWVnetId --allow-vnet-access
-```
-
-## Expose the `bookbuyer` service to the internet
-
-Apply the following ingress manifest to the AKS cluster to expose the `bookbuyer` service to the internet via the Azure Application Gateway.
-
-```azurecli-interactive
-kubectl apply -f - <<EOF
-
-apiVersion: networking.k8s.io/v1
-kind: Ingress
-metadata:
-  name: bookbuyer-ingress
-  namespace: bookbuyer
-  annotations:
-    kubernetes.io/ingress.class: azure/application-gateway
-
-spec:
-  rules:
-  - host: bookbuyer.contoso.com
-    http:
-      paths:
-      - path: /
-        pathType: Prefix
-        backend:
-          service:
-            name: bookbuyer
-            port:
-              number: 14001
-  defaultBackend:
-    service:
-      name: bookbuyer
-      port:
-        number: 14001
-EOF
-```
-
-You should see the following output
-
-```Output
-ingress.networking.k8s.io/bookbuyer-ingress created
-```
-
-Since the host name in the ingress manifest is a pseudo name used for testing, the DNS name will not be available on the internet. We can alternatively use the curl program and pass the hostname header to the Azure Application Gateway public IP address and receive a 200 code successfully connecting us to the `bookbuyer` service.
-
-```azurecli-interactive
-appGWPIP=$(az network public-ip show -g MyResourceGroup -n myPublicIp -o tsv --query "ipAddress")
-curl -H 'Host: bookbuyer.contoso.com' http://$appGWPIP/
-```
-
-You should see the following output
-
-```Output
-<!doctype html>
-<html itemscope="" itemtype="http://schema.org/WebPage" lang="en">
- <head>
- <meta content="Bookbuyer" name="description">
- <meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
- <title>Bookbuyer</title>
- <style>
- #navbar {
- width: 100%;
- height: 50px;
- display: table;
- border-spacing: 0;
- white-space: nowrap;
- line-height: normal;
- background-color: #0078D4;
- background-position: left top;
- background-repeat-x: repeat;
- background-image: none;
- color: white;
- font: 2.2em "Fira Sans", sans-serif;
- }
- #main {
- padding: 10pt 10pt 10pt 10pt;
- font: 1.8em "Fira Sans", sans-serif;
- }
- li {
- padding: 10pt 10pt 10pt 10pt;
- font: 1.2em "Consolas", sans-serif;
- }
- </style>
- <script>
- setTimeout(function(){window.location.reload(1);}, 1500);
- </script>
- </head>
- <body bgcolor="#fff">
- <div id="navbar">
- &#128214; Bookbuyer
- </div>
- <div id="main">
- <ul>
- <li>Total books bought: <strong>5969</strong>
- <ul>
- <li>from bookstore V1: <strong>277</strong>
- <li>from bookstore V2: <strong>5692</strong>
- </ul>
- </li>
- </ul>
- </div>
-
- <br/><br/><br/><br/>
- <br/><br/><br/><br/>
- <br/><br/><br/><br/>
-
- Current Time: <strong>Fri, 26 Mar 2021 16:34:30 UTC</strong>
- </body>
-</html>
-```
-
-## Troubleshooting
-- [AGIC Troubleshooting Documentation](../application-gateway/ingress-controller-troubleshoot.md)
-- [Additional troubleshooting tools are available on AGIC's GitHub repo](https://github.com/Azure/application-gateway-kubernetes-ingress/blob/master/docs/troubleshootings/troubleshooting-installing-a-simple-application.md)
aks Open Service Mesh Deploy Addon Az Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/open-service-mesh-deploy-addon-az-cli.md
Alternatively, you can uninstall the OSM add-on and the related resources from y
## Next steps
-This article showed you how to install the OSM add-on on an AKS cluster and verify it is installed an running. To deploy a sample application on your OSM mesh, see [Manage a new application with OSM on AKS][osm-sample]
+This article showed you how to install the OSM add-on on an AKS cluster and verify it is installed and running. With the OSM add-on on your cluster, you can [Deploy a sample application][osm-deploy-sample-app] or [Onboard an existing application][osm-onboard-app] to work with your OSM mesh.
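As a first taste of onboarding, here is a minimal sketch (namespace and deployment names are placeholders): add the namespace to the mesh, then restart existing deployments so the Envoy sidecar is injected when the pods are re-created.

```azurecli-interactive
# Sketch: onboard a namespace, then restart a deployment so OSM injects the sidecar.
# my-app-namespace and my-app are placeholders.
osm namespace add my-app-namespace
kubectl rollout restart deployment my-app -n my-app-namespace
```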
[aks-ephemeral]: cluster-configuration.md#ephemeral-os [osm-sample]: open-service-mesh-deploy-new-application.md [osm-uninstall]: open-service-mesh-uninstall-add-on.md
-[smi]: https://smi-spec.io/
+[smi]: https://smi-spec.io/
+[osm-deploy-sample-app]: https://release-v1-0.docs.openservicemesh.io/docs/getting_started/install_apps/
+[osm-onboard-app]: https://release-v1-0.docs.openservicemesh.io/docs/guides/app_onboarding/
aks Open Service Mesh Deploy Addon Bicep https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/open-service-mesh-deploy-addon-bicep.md
az group delete --name osm-bicep-test
Alternatively, you can uninstall the OSM add-on and the related resources from your cluster. For more information, see [Uninstall the Open Service Mesh (OSM) add-on from your AKS cluster][osm-uninstall].
+## Next steps
+
+This article showed you how to install the OSM add-on on an AKS cluster and verify it is installed and running. With the OSM add-on on your cluster, you can [Deploy a sample application][osm-deploy-sample-app] or [Onboard an existing application][osm-onboard-app] to work with your OSM mesh.
+ <!-- Links --> <!-- Internal -->
Alternatively, you can uninstall the OSM add-on and the related resources from y
[az-extension-add]: /cli/azure/extension#az_extension_add [az-extension-update]: /cli/azure/extension#az_extension_update [osm-uninstall]: open-service-mesh-uninstall-add-on.md
+[osm-deploy-sample-app]: https://release-v1-0.docs.openservicemesh.io/docs/getting_started/install_apps/
+[osm-onboard-app]: https://release-v1-0.docs.openservicemesh.io/docs/guides/app_onboarding/
aks Open Service Mesh Deploy Existing Application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/open-service-mesh-deploy-existing-application.md
- Title: Onboard applications to Open Service Mesh
-description: How to onboard an application to Open Service Mesh
-- Previously updated : 8/26/2021---
-# Onboarding applications to Open Service Mesh (OSM) Azure Kubernetes Service (AKS) add-on
-
-The following guide describes how to onboard a kubernetes microservice to OSM.
-
-## Before you begin
-
-The steps detailed in this walk-through assume that you've previously enabled the OSM AKS add-on for your AKS cluster. If not, review the article [Deploy the OSM AKS add-on](./open-service-mesh-deploy-addon-az-cli.md) before proceeding. Also, your AKS cluster needs to be version Kubernetes `1.19+` and above, have Kubernetes RBAC enabled, have established a `kubectl` connection with the cluster (if you need help with any of these items, see the [AKS quickstart](./kubernetes-walkthrough.md)), and have installed the AKS OSM add-on.
-
-You must have the following resources installed:
-- The Azure CLI, version 2.20.0 or later
-- OSM add-on version v0.11.1 or later
-- OSM CLI version v0.11.1 or later
-
-## Verify the Open Service Mesh (OSM) Permissive Traffic Mode Policy
-
-The OSM Permissive Traffic Policy mode is a mode where the [SMI](https://smi-spec.io/) traffic policy enforcement is bypassed. In this mode, OSM automatically discovers services that are a part of the service mesh and programs traffic policy rules on each Envoy proxy sidecar to be able to communicate with these services.
-
-To verify the current permissive traffic mode of OSM for your cluster, run the following command:
-
-```azurecli-interactive
-kubectl get meshconfig osm-mesh-config -n kube-system -o jsonpath='{.spec.traffic.enablePermissiveTrafficPolicyMode}{"\n"}'
-true
-```
-
-If the **enablePermissiveTrafficPolicyMode** is configured to **true**, you can safely onboard your namespaces without any disruption to your service-to-service communications. If the **enablePermissiveTrafficPolicyMode** is configured to **false**, you'll need to ensure you have the correct [SMI](https://smi-spec.io/) traffic access policy manifests deployed. You'll also need to ensure you have a service account representing each service deployed in the namespace. For more detailed information about permissive traffic mode, please visit and read the [Permissive Traffic Policy Mode](https://docs.openservicemesh.io/docs/guides/traffic_management/permissive_mode/) article.
-
-## Onboard applications with Open Service Mesh (OSM) Permissive Traffic Policy configured as True
-
-1. Refer to the [application requirements](https://docs.openservicemesh.io/docs/guides/app_onboarding/prereqs/) guide before onboarding applications.
-
-1. If an application in the mesh needs to communicate with the Kubernetes API server, the user needs to explicitly allow this either by using IP range exclusion or by creating an egress policy.
-
-1. Onboard Kubernetes Namespaces to OSM
-
- To onboard a namespace containing applications to be managed by OSM, run the `osm namespace add` command:
-
- ```console
- $ osm namespace add <namespace>
- ```
-
- By default, the `osm namespace add` command enables automatic sidecar injection for pods in the namespace.
-
- To disable automatic sidecar injection as a part of enrolling a namespace into the mesh, use `osm namespace add <namespace> --disable-sidecar-injection`.
- Once a namespace has been onboarded, pods can be enrolled in the mesh by configuring automatic sidecar injection. See the [Sidecar Injection](https://docs.openservicemesh.io/docs/guides/app_onboarding/sidecar_injection/) document for more details.
-
-1. Deploy new applications or redeploy existing applications
-
- By default, new deployments in onboarded namespaces are enabled for automatic sidecar injection. This means that when a new pod is created in a managed namespace, OSM will automatically inject the sidecar proxy to the Pod.
- Existing deployments need to be restarted so that OSM can automatically inject the sidecar proxy upon Pod re-creation. Pods managed by a deployment can be restarted using the `kubectl rollout restart deploy` command.
-
- In order to route protocol specific traffic correctly to service ports, configure the application protocol to use. Refer to the [application protocol selection guide](https://docs.openservicemesh.io/docs/guides/app_onboarding/app_protocol_selection/) to learn more.
--
-## Onboard existing deployed applications with Open Service Mesh (OSM) Permissive Traffic Policy configured as False
-
-When the OSM configuration for the permissive traffic policy is set to `false`, OSM will require explicit [SMI](https://smi-spec.io/) traffic access policies deployed for the service-to-service communication to happen within your cluster. Since OSM uses Kubernetes service accounts to implement access control policies between applications in the mesh, apply [SMI](https://smi-spec.io/) traffic access policies to authorize traffic flow between applications.
-
-For example SMI policies, please see the following examples:
- - [demo/deploy-traffic-specs.sh](https://github.com/openservicemesh/osm/blob/release-v0.11/demo/deploy-traffic-specs.sh)
- - [demo/deploy-traffic-split.sh](https://github.com/openservicemesh/osm/blob/release-v0.11/demo/deploy-traffic-split.sh)
- - [demo/deploy-traffic-target.sh](https://github.com/openservicemesh/osm/blob/release-v0.11/demo/deploy-traffic-target.sh)
--
-#### Removing Namespaces
-Namespaces can be removed from the OSM mesh with the `osm namespace remove` command:
-
-```console
-$ osm namespace remove <namespace>
-```
-
-> [!NOTE]
->
-> - The **`osm namespace remove`** command only tells OSM to stop applying updates to the sidecar proxy configurations in the namespace. It **does not** remove the proxy sidecars. This means the existing proxy configuration will continue to be used, but it will not be updated by the OSM control plane. If you wish to remove the proxies from all pods, remove the pods' namespaces from the mesh using the OSM CLI and redeploy the corresponding pods or deployments.
aks Open Service Mesh Deploy New Application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/open-service-mesh-deploy-new-application.md
- Title: Manage a new application with Open Service Mesh
-description: How to manage a new application with Open Service Mesh
-- Previously updated : 11/10/2021---
-# Manage a new application with Open Service Mesh (OSM) on Azure Kubernetes Service (AKS)
-
-This article shows you how to run a sample application on your OSM mesh running on AKS.
-
-## Prerequisites
-- An existing AKS cluster with the AKS OSM add-on installed. If you need to create a cluster or enable the AKS OSM add-on on an existing cluster, see [Install the Open Service Mesh (OSM) Azure Kubernetes Service (AKS) add-on using Azure CLI][osm-cli]
-- OSM mesh version v0.11.1 or later running on your cluster.
-- The Azure CLI, version 2.20.0 or later.
-- The latest version of the OSM CLI.
-
-## Verify your mesh has permissive mode enabled
-
-Use `kubectl get meshconfig osm-mesh-config` to verify *enablePermissiveTrafficPolicyMode* is *true*. For example:
-
-```azurecli-interactive
-kubectl get meshconfig osm-mesh-config -n kube-system -o=jsonpath='{$.spec.traffic.enablePermissiveTrafficPolicyMode}'
-```
-
-If permissive mode is not enabled, you can enable it using `kubectl patch meshconfig osm-mesh-config`. For example:
-
-```azurecli-interactive
-kubectl patch meshconfig osm-mesh-config -n kube-system -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":true}}}' --type=merge
-```
-
-## Create and onboard the namespaces to be managed by OSM
-
-When you add namespaces to the OSM mesh, the OSM controller automatically injects the Envoy sidecar proxy containers with applications deployed in those namespaces. Use `kubectl create ns` to create the *bookstore*, *bookbuyer*, *bookthief*, and *bookwarehouse* namespaces, then use `osm namespace add` to add those namespaces to your mesh.
-
-```azurecli-interactive
-kubectl create ns bookstore
-kubectl create ns bookbuyer
-kubectl create ns bookthief
-kubectl create ns bookwarehouse
-
-osm namespace add bookstore bookbuyer bookthief bookwarehouse
-```
-
-You should see the following output:
-
-```output
-namespace/bookstore created
-namespace/bookbuyer created
-namespace/bookthief created
-namespace/bookwarehouse created
-
-Namespace [bookstore] successfully added to mesh [osm]
-Namespace [bookbuyer] successfully added to mesh [osm]
-Namespace [bookthief] successfully added to mesh [osm]
-Namespace [bookwarehouse] successfully added to mesh [osm]
-```
-
-## Deploy the sample application to the AKS cluster
-
-Use `kubectl apply` to deploy the sample application to your cluster.
-
-```azurecli-interactive
-SAMPLE_VERSION=v0.11
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-$SAMPLE_VERSION/docs/example/manifests/apps/bookbuyer.yaml
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-$SAMPLE_VERSION/docs/example/manifests/apps/bookthief.yaml
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-$SAMPLE_VERSION/docs/example/manifests/apps/bookstore.yaml
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-$SAMPLE_VERSION/docs/example/manifests/apps/bookwarehouse.yaml
-```
-
-You should see the following output:
-
-```output
-serviceaccount/bookbuyer created
-deployment.apps/bookbuyer created
-serviceaccount/bookthief created
-deployment.apps/bookthief created
-service/bookstore created
-serviceaccount/bookstore created
-deployment.apps/bookstore created
-serviceaccount/bookwarehouse created
-service/bookwarehouse created
-deployment.apps/bookwarehouse created
-```
-
-The sample application is an example of a multi-tiered application that works well for testing service mesh functionality. The application consists of four services: *bookbuyer*, *bookthief*, *bookstore*, and *bookwarehouse*.
-
-![OSM sample application architecture](./media/aks-osm-addon/osm-bookstore-app-arch.png)
-
-Both the *bookbuyer* and *bookthief* service communicate to the *bookstore* service to retrieve books from the *bookstore* service. The *bookstore* service retrieves books from the *bookwarehouse* service. This application helps demonstrate how a service mesh can be used to protect and authorize communications between the services. For example, later sections show how to disable permissive traffic mode and use SMI policies to secure access to services.
-
-## Access the bookbuyer and bookthief services using port forwarding
-
-Use `kubectl get pod` to get the name of the *bookbuyer* pod in the *bookbuyer* namespace. For example:
-
-```output
-$ kubectl get pod -n bookbuyer
-
-NAME READY STATUS RESTARTS AGE
-bookbuyer-1245678901-abcde 2/2 Running 0 7m8s
-```
-
-Open a new terminal and use `kubectl port-forward` to begin forwarding traffic between your development computer and the *bookbuyer* pod. For example:
-
-```output
-$ kubectl port-forward bookbuyer-1245678901-abcde -n bookbuyer 8080:14001
-Forwarding from 127.0.0.1:8080 -> 14001
-Forwarding from [::1]:8080 -> 14001
-```
-
-The above example shows traffic is being forwarded between port 8080 on your development computer and port 14001 on the *bookbuyer-1245678901-abcde* pod.
-
-Go to `http://localhost:8080` in a web browser and confirm you see the *bookbuyer* application. For example:
-
-![OSM bookbuyer application](./media/aks-osm-addon/osm-bookbuyer-service-ui.png)
-
-Notice the number of bought books continues to increase. Stop the port forwarding command.
-
-Use `kubectl get pod` to get the name of the *bookthief* pod in the *bookthief* namespace. For example:
-
-```output
-$ kubectl get pod -n bookthief
-
-NAME READY STATUS RESTARTS AGE
-bookthief-1245678901-abcde 2/2 Running 0 7m8s
-```
-
-Open a new terminal and use `kubectl port-forward` to begin forwarding traffic between your development computer and the *bookthief* pod. For example:
-
-```output
-$ kubectl port-forward bookthief-1245678901-abcde -n bookthief 8080:14001
-Forwarding from 127.0.0.1:8080 -> 14001
-Forwarding from [::1]:8080 -> 14001
-```
-
-The above example shows traffic is being forwarded between port 8080 on your development computer and port 14001 on the *bookthief-1245678901-abcde* pod.
-
-Go to `http://localhost:8080` in a web browser and confirm you see the *bookthief* application. For example:
-
-![OSM bookthief application](./media/aks-osm-addon/osm-bookthief-service-ui.png)
-
-Notice the number of stolen books continues to increase. Stop the port forwarding command.
-
-## Disable permissive traffic mode on your mesh
-
-When permissive traffic mode is enabled, you do not need to define explicit [SMI][smi] policies for services to communicate with other services in onboarded namespaces. For more information on permissive traffic mode in OSM, see [Permissive Traffic Policy Mode][osm-permissive-traffic-mode].
-
-In the sample application with permissive mode enabled, both the *bookbuyer* and *bookthief* services can communicate with the *bookstore* service and obtain books.
-
-Use `kubectl patch meshconfig osm-mesh-config` to disable permissive traffic mode:
-
-```azurecli-interactive
-kubectl patch meshconfig osm-mesh-config -n kube-system -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":false}}}' --type=merge
-```
-
-The following example output shows the *osm-mesh-config* has been updated:
-
-```output
-$ kubectl patch meshconfig osm-mesh-config -n kube-system -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":false}}}' --type=merge
-
-meshconfig.config.openservicemesh.io/osm-mesh-config patched
-```
-
-Repeat the steps from the previous section to forward traffic between the *bookbuyer* service and your development computer. Confirm the counter is no longer incrementing, even if you refresh the page. Stop the port forwarding command and repeat the steps to forward traffic between the *bookthief* service and your development computer. Confirm the counter is no longer incrementing even if you refresh the page. Stop the port forwarding command.
-
-## Apply an SMI traffic access policy for buying books
-
-Create `allow-bookbuyer-smi.yaml` using the following YAML:
-
-```yaml
-apiVersion: access.smi-spec.io/v1alpha3
-kind: TrafficTarget
-metadata:
- name: bookbuyer-access-bookstore
- namespace: bookstore
-spec:
- destination:
- kind: ServiceAccount
- name: bookstore
- namespace: bookstore
- rules:
- - kind: HTTPRouteGroup
- name: bookstore-service-routes
- matches:
- - buy-a-book
- - books-bought
- sources:
- - kind: ServiceAccount
- name: bookbuyer
- namespace: bookbuyer
----
-apiVersion: specs.smi-spec.io/v1alpha4
-kind: HTTPRouteGroup
-metadata:
- name: bookstore-service-routes
- namespace: bookstore
-spec:
- matches:
- - name: books-bought
- pathRegex: /books-bought
- methods:
- - GET
- headers:
- - "user-agent": ".*-http-client/*.*"
- - "client-app": "bookbuyer"
- - name: buy-a-book
- pathRegex: ".*a-book.*new"
- methods:
- - GET
- - name: update-books-bought
- pathRegex: /update-books-bought
- methods:
- - POST
----
-kind: TrafficTarget
-apiVersion: access.smi-spec.io/v1alpha3
-metadata:
- name: bookstore-access-bookwarehouse
- namespace: bookwarehouse
-spec:
- destination:
- kind: ServiceAccount
- name: bookwarehouse
- namespace: bookwarehouse
- rules:
- - kind: HTTPRouteGroup
- name: bookwarehouse-service-routes
- matches:
- - restock-books
- sources:
- - kind: ServiceAccount
- name: bookstore
- namespace: bookstore
- - kind: ServiceAccount
- name: bookstore-v2
- namespace: bookstore
----
-apiVersion: specs.smi-spec.io/v1alpha4
-kind: HTTPRouteGroup
-metadata:
- name: bookwarehouse-service-routes
- namespace: bookwarehouse
-spec:
- matches:
- - name: restock-books
- methods:
- - POST
- headers:
- - host: bookwarehouse.bookwarehouse
-```
-
-The above example creates SMI access policies that allow the *bookbuyer* service to communicate with the *bookstore* service for buying books. It also allows the *bookstore* service to communicate with the *bookwarehouse* service for restocking books.
-
-Use `kubectl apply` to apply the SMI access policies.
-
-```azurecli-interactive
-kubectl apply -f allow-bookbuyer-smi.yaml
-```
-
-The following example output shows the SMI access policies successfully applied:
-
-```output
-$ kubectl apply -f allow-bookbuyer-smi.yaml
-
-traffictarget.access.smi-spec.io/bookbuyer-access-bookstore-v1 created
-httproutegroup.specs.smi-spec.io/bookstore-service-routes created
-traffictarget.access.smi-spec.io/bookstore-access-bookwarehouse created
-httproutegroup.specs.smi-spec.io/bookwarehouse-service-routes created
-```
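-
-Optionally, you can confirm the policies exist by querying the resources directly. This check assumes the SMI CRDs were installed on the cluster by the OSM add-on:
-
-```azurecli-interactive
-kubectl get traffictarget,httproutegroup -n bookstore
-kubectl get traffictarget,httproutegroup -n bookwarehouse
-```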
-
-Repeat the steps from the previous section to forward traffic between the *bookbuyer* service and your development computer. Confirm the counter is incrementing. Stop the port forwarding command and repeat the steps to forward traffic between the *bookthief* service and your development computer. Confirm the counter is not incrementing even if you refresh the page. Stop the port forwarding command.
-
-## Apply an SMI traffic split policy for buying books
-
-In addition to access policies, you can also use SMI to create traffic split policies. Traffic split policies allow you to configure the distribution of traffic from one service across multiple backend services. This capability can help you test a new version of a backend service by sending a small portion of traffic to it while sending the rest to the current version. It can also help progressively shift more traffic to the new version of a service and reduce traffic to the previous version over time.
-
-The following diagram shows an SMI Traffic Split policy that sends 25% of traffic to the *bookstore-v1* service and 75% of traffic to the *bookstore-v2* service.
-
-![OSM bookbuyer traffic split diagram](./media/aks-osm-addon/osm-bookbuyer-traffic-split-diagram.png)
-
-Create `bookbuyer-v2.yaml` using the following YAML:
-
-```yaml
-apiVersion: v1
-kind: Service
-metadata:
- name: bookstore-v2
- namespace: bookstore
- labels:
- app: bookstore-v2
-spec:
- ports:
- - port: 14001
- name: bookstore-port
- selector:
- app: bookstore-v2
----
-# Deploy bookstore-v2 Service Account
-apiVersion: v1
-kind: ServiceAccount
-metadata:
- name: bookstore-v2
- namespace: bookstore
----
-# Deploy bookstore-v2 Deployment
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: bookstore-v2
- namespace: bookstore
-spec:
- replicas: 1
- selector:
- matchLabels:
- app: bookstore-v2
- template:
- metadata:
- labels:
- app: bookstore-v2
- spec:
- serviceAccountName: bookstore-v2
- containers:
- - name: bookstore
- image: openservicemesh/bookstore:v0.8.0
- imagePullPolicy: Always
- ports:
- - containerPort: 14001
- name: web
- command: ["/bookstore"]
- args: ["--path", "./", "--port", "14001"]
- env:
- - name: BOOKWAREHOUSE_NAMESPACE
- value: bookwarehouse
- - name: IDENTITY
- value: bookstore-v2
----
-kind: TrafficTarget
-apiVersion: access.smi-spec.io/v1alpha3
-metadata:
- name: bookbuyer-access-bookstore-v2
- namespace: bookstore
-spec:
- destination:
- kind: ServiceAccount
- name: bookstore-v2
- namespace: bookstore
- rules:
- - kind: HTTPRouteGroup
- name: bookstore-service-routes
- matches:
- - buy-a-book
- - books-bought
- sources:
- - kind: ServiceAccount
- name: bookbuyer
- namespace: bookbuyer
-```
-
-The above example creates a *bookstore-v2* service and an SMI policy that allows the *bookbuyer* service to communicate with the *bookstore-v2* service for buying books. It also relies on the SMI policies created in the previous section to allow the *bookstore-v2* service to communicate with the *bookwarehouse* service for restocking books.
-
-Use `kubectl apply` to deploy *bookstore-v2* and apply the SMI access policies.
-
-```azurecli-interactive
-kubectl apply -f bookbuyer-v2.yaml
-```
-
-The following example output shows the *bookstore-v2* resources and SMI access policy successfully applied:
-
-```output
-$ kubectl apply -f bookbuyer-v2.yaml
-
-service/bookstore-v2 configured
-serviceaccount/bookstore-v2 created
-deployment.apps/bookstore-v2 created
-traffictarget.access.smi-spec.io/bookstore-v2 created
-```
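-
-Before creating the traffic split, you can optionally wait for the new deployment to become available:
-
-```azurecli-interactive
-kubectl rollout status deployment/bookstore-v2 -n bookstore
-```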
-
-Create `bookbuyer-split-smi.yaml` using the following YAML:
-
-```yaml
-apiVersion: split.smi-spec.io/v1alpha2
-kind: TrafficSplit
-metadata:
- name: bookstore-split
- namespace: bookstore
-spec:
- service: bookstore.bookstore
- backends:
- - service: bookstore
- weight: 25
- - service: bookstore-v2
- weight: 75
-```
-
-The above example creates an SMI policy that splits traffic destined for the *bookstore* service. The original, or v1, version of *bookstore* receives 25% of traffic and *bookstore-v2* receives 75% of traffic.
-
-Use `kubectl apply` to apply the SMI split policy.
-
-```azurecli-interactive
-kubectl apply -f bookbuyer-split-smi.yaml
-```
-
-The following example output shows the SMI traffic split policy successfully applied:
-
-```output
-$ kubectl apply -f bookbuyer-split-smi.yaml
-
-trafficsplit.split.smi-spec.io/bookstore-split created
-```
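-
-To inspect the split configuration, including the backend weights, you can optionally query the TrafficSplit resource:
-
-```azurecli-interactive
-kubectl get trafficsplit -n bookstore
-kubectl describe trafficsplit bookstore-split -n bookstore
-```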
-
-Repeat the steps from the previous section to forward traffic between the *bookbuyer* service and your development computer. Confirm the counter is incrementing for both *bookstore v1* and *bookstore v2*. Also confirm the number for *bookstore v2* is incrementing faster than for *bookstore v1*.
-
-![OSM bookbuyer books bought UI](./media/aks-osm-addon/osm-bookbuyer-traffic-split-ui.png)
-
-Stop the port forwarding command.
--
-[osm-cli]: open-service-mesh-deploy-addon-az-cli.md
-[osm-permissive-traffic-mode]: https://docs.openservicemesh.io/docs/guides/traffic_management/permissive_mode/
-[smi]: https://smi-spec.io/
aks Open Service Mesh Nginx Ingress https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/open-service-mesh-nginx-ingress.md
- Title: Using NGINX Ingress
-description: How to use NGINX Ingress with Open Service Mesh
-- Previously updated : 8/26/2021---
-# Deploy an application managed by Open Service Mesh (OSM) with NGINX ingress
-
-Open Service Mesh (OSM) is a lightweight, extensible, Cloud Native service mesh, allowing users to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments.
-
-In this tutorial, you will:
-
-> [!div class="checklist"]
->
-> - View the current OSM cluster configuration
-> - Create the namespaces for the application components
-> - Onboard the namespaces to be managed by OSM
-> - Deploy the sample application
-> - Verify the application running inside the AKS cluster
-> - Create an NGINX ingress controller for the application
-> - Expose a service to the internet via the NGINX ingress controller
-
-## Before you begin
-
-The steps detailed in this article assume that you've created an AKS cluster (Kubernetes version 1.19 or later, with Kubernetes RBAC enabled), have established a `kubectl` connection with the cluster (if you need help with any of these items, see the [AKS quickstart](./kubernetes-walkthrough.md)), and have installed the AKS OSM add-on.
-
-You must have the following resources installed:
-
-- The Azure CLI, version 2.20.0 or later
-- OSM version v0.11.1 or later
-- JSON processor "jq" version 1.6+
-
-### View and verify the current OSM cluster configuration
-
-Once the OSM add-on for AKS has been enabled on the AKS cluster, you can view the current configuration parameters in the osm-mesh-config resource. Run the following command to view the properties:
-
-```azurecli-interactive
-kubectl get meshconfig osm-mesh-config -n kube-system -o yaml
-```
-
-Output shows the current OSM configuration for the cluster.
-
-```Output
-apiVersion: config.openservicemesh.io/v1alpha1
-kind: MeshConfig
-metadata:
- creationTimestamp: "0000-00-00T00:00:00Z"
- generation: 1
- name: osm-mesh-config
- namespace: kube-system
- resourceVersion: "2494"
- uid: 6c4d67f3-c241-4aeb-bf4f-b029b08faa31
-spec:
- certificate:
- serviceCertValidityDuration: 24h
- featureFlags:
- enableEgressPolicy: true
- enableMulticlusterMode: false
- enableWASMStats: true
- observability:
- enableDebugServer: true
- osmLogLevel: info
- tracing:
- address: jaeger.osm-system.svc.cluster.local
- enable: false
- endpoint: /api/v2/spans
- port: 9411
- sidecar:
- configResyncInterval: 0s
- enablePrivilegedInitContainer: false
- envoyImage: mcr.microsoft.com/oss/envoyproxy/envoy:v1.18.3
- initContainerImage: mcr.microsoft.com/oss/openservicemesh/init:v0.9.1
- logLevel: error
- maxDataPlaneConnections: 0
- resources: {}
- traffic:
- enableEgress: true
- enablePermissiveTrafficPolicyMode: true
- inboundExternalAuthorization:
- enable: false
- failureModeAllow: false
- statPrefix: inboundExtAuthz
- timeout: 1s
- useHTTPSIngress: false
-```
-
-Notice the **enablePermissiveTrafficPolicyMode** is configured to **true**. Permissive traffic policy mode in OSM is a mode where [SMI](https://smi-spec.io/) traffic policy enforcement is bypassed. In this mode, OSM automatically discovers services that are a part of the service mesh and programs traffic policy rules on each Envoy proxy sidecar so those services can communicate with each other. For more detailed information about permissive traffic mode, see the [Permissive Traffic Policy Mode](https://docs.openservicemesh.io/docs/guides/traffic_management/permissive_mode/) article.
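-
-If you only want to inspect this one flag rather than the full configuration, a jsonpath query can be used. For example:
-
-```azurecli-interactive
-kubectl get meshconfig osm-mesh-config -n kube-system -o=jsonpath='{$.spec.traffic.enablePermissiveTrafficPolicyMode}'
-```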
-
-## Create namespaces for the application
-
-In this tutorial we will be using the OSM `bookstore` application that has the following application components:
-
-- `bookbuyer`
-- `bookthief`
-- `bookstore`
-- `bookwarehouse`
-
-Create namespaces for each of these application components.
-
-```azurecli-interactive
-for i in bookstore bookbuyer bookthief bookwarehouse; do kubectl create ns $i; done
-```
-
-You should see the following output:
-
-```Output
-namespace/bookstore created
-namespace/bookbuyer created
-namespace/bookthief created
-namespace/bookwarehouse created
-```
-
-## Onboard the namespaces to be managed by OSM
-
-Adding the namespaces to the OSM mesh will allow the OSM controller to automatically inject the Envoy sidecar proxy containers into your application pods. Run the following command to onboard the OSM `bookstore` application namespaces.
-
-```azurecli-interactive
-osm namespace add bookstore bookbuyer bookthief bookwarehouse
-```
-
-You should see the following output:
-
-```Output
-Namespace [bookstore] successfully added to mesh [osm]
-Namespace [bookbuyer] successfully added to mesh [osm]
-Namespace [bookthief] successfully added to mesh [osm]
-Namespace [bookwarehouse] successfully added to mesh [osm]
-```
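-
-You can also verify that a namespace is part of the mesh by viewing its labels. The check below assumes OSM marks onboarded namespaces with an `openservicemesh.io/monitored-by` label:
-
-```azurecli-interactive
-kubectl get namespace bookbuyer --show-labels
-```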
-
-## Deploy the Bookstore application to the AKS cluster
-
-```azurecli-interactive
-SAMPLE_VERSION=v0.11
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-$SAMPLE_VERSION/docs/example/manifests/apps/bookbuyer.yaml
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-$SAMPLE_VERSION/docs/example/manifests/apps/bookthief.yaml
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-$SAMPLE_VERSION/docs/example/manifests/apps/bookstore.yaml
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-$SAMPLE_VERSION/docs/example/manifests/apps/bookwarehouse.yaml
-```
-
-All of the deployment outputs are summarized below.
-
-```Output
-serviceaccount/bookbuyer created
-service/bookbuyer created
-deployment.apps/bookbuyer created
-
-serviceaccount/bookthief created
-service/bookthief created
-deployment.apps/bookthief created
-
-service/bookstore created
-serviceaccount/bookstore created
-deployment.apps/bookstore created
-
-serviceaccount/bookwarehouse created
-service/bookwarehouse created
-deployment.apps/bookwarehouse created
-```
-
-## Update the Bookbuyer service
-
-Update the `bookbuyer` service to the correct inbound port configuration with the following service manifest.
-
-```azurecli-interactive
-kubectl apply -f - <<EOF
-apiVersion: v1
-kind: Service
-metadata:
- name: bookbuyer
- namespace: bookbuyer
- labels:
- app: bookbuyer
-spec:
- ports:
- - port: 14001
- name: inbound-port
- selector:
- app: bookbuyer
-EOF
-```
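-
-To confirm the service now exposes the inbound port, you can optionally inspect it:
-
-```azurecli-interactive
-kubectl get service bookbuyer -n bookbuyer
-```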
-
-## Verify the Bookstore application running inside the AKS cluster
-
-As of now, we have deployed the `bookstore` multi-container application, but it is only accessible from within the AKS cluster. Later we will add the NGINX ingress controller to expose the application outside the AKS cluster. To verify that the application is running inside the cluster, we will use a port forward to view the `bookbuyer` component UI.
-
-First, let's get the `bookbuyer` pod's name:
-
-```azurecli-interactive
-kubectl get pod -n bookbuyer
-```
-
-You should see output similar to the following. Your `bookbuyer` pod will have a unique name appended.
-
-```Output
-NAME READY STATUS RESTARTS AGE
-bookbuyer-7676c7fcfb-mtnrz 2/2 Running 0 7m8s
-```
-
-Once we have the pod's name, we can use the port-forward command to set up a tunnel from our local system to the application inside the AKS cluster. Run the following command to set up the port forward for local system port 8080. Be sure to use your own bookbuyer pod name.
-
-```azurecli-interactive
-kubectl port-forward bookbuyer-7676c7fcfb-mtnrz -n bookbuyer 8080:14001
-```
-
-You should see output similar to the following:
-
-```Output
-Forwarding from 127.0.0.1:8080 -> 14001
-Forwarding from [::1]:8080 -> 14001
-```
-
-While the port forwarding session is in place, navigate to `http://localhost:8080` in a browser. You should now be able to see the `bookbuyer` application UI, similar to the image below.
-
-![OSM bookbuyer app for NGINX UI image](./media/aks-osm-addon/osm-agic-bookbuyer-img.png)
-
-## Create an NGINX ingress controller in Azure Kubernetes Service (AKS)
-
-An ingress controller is a piece of software that provides reverse proxy, configurable traffic routing, and TLS termination for Kubernetes services. Kubernetes ingress resources are used to configure the ingress rules and routes for individual Kubernetes services. Using an ingress controller and ingress rules, a single IP address can be used to route traffic to multiple services in a Kubernetes cluster.
-
-We will utilize the ingress controller to expose the application managed by OSM to the internet. To create the ingress controller, use Helm to install nginx-ingress. The number of replicas of the NGINX ingress controller is set with the `--set controller.replicaCount` parameter; the example below deploys a single replica. To fully benefit from running multiple replicas of the ingress controller, make sure there's more than one node in your AKS cluster.
-
-The ingress controller will be scheduled on a Linux node. Windows Server nodes shouldn't run the ingress controller. A node selector is specified using the `--set nodeSelector` parameter to tell the Kubernetes scheduler to run the NGINX ingress controller on a Linux-based node.
-
-> [!TIP]
-> The following example creates a Kubernetes namespace for the ingress resources named _ingress-basic_. Specify a namespace for your own environment as needed.
-
-```azurecli-interactive
-# Create a namespace for your ingress resources
-kubectl create namespace ingress-basic
-
-# Add the ingress-nginx repository
-helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
-
-# Update the helm repo(s)
-helm repo update
-
-# Use Helm to deploy an NGINX ingress controller in the ingress-basic namespace
-helm install nginx-ingress ingress-nginx/ingress-nginx \
- --namespace ingress-basic \
- --set controller.replicaCount=1 \
- --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
- --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
- --set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux
-```
-
-A Kubernetes load balancer service is created for the NGINX ingress controller. A dynamic public IP address is assigned, as shown in the following example output:
-
-```Output
-$ kubectl --namespace ingress-basic get services -o wide -w nginx-ingress-ingress-nginx-controller
-
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
-nginx-ingress-ingress-nginx-controller LoadBalancer 10.0.74.133 EXTERNAL_IP 80:32486/TCP,443:30953/TCP 44s app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx
-```
-
-No ingress rules have been created yet, so the NGINX ingress controller's default 404 page is displayed if you browse to the external IP address. Ingress rules are configured in the following steps.
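-
-The curl test later in this article needs the external IP address of the ingress controller. You can optionally capture it in a variable now; the following sketch assumes the service name produced by the Helm install above:
-
-```azurecli-interactive
-EXTERNAL_IP=$(kubectl get service nginx-ingress-ingress-nginx-controller -n ingress-basic -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
-echo $EXTERNAL_IP
-```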
-
-## Expose the bookbuyer service to the internet
-
-Apply the following ingress configuration to route requests for host `bookbuyer.contoso.com` to the `bookbuyer` service:
-
-```azurecli-interactive
-kubectl apply -f - <<EOF
-
-apiVersion: extensions/v1beta1
-kind: Ingress
-metadata:
- name: bookbuyer-ingress
- namespace: bookbuyer
- annotations:
- kubernetes.io/ingress.class: nginx
-
-spec:
-
- rules:
- - host: bookbuyer.contoso.com
- http:
- paths:
- - path: /
- backend:
- serviceName: bookbuyer
- servicePort: 14001
-
- backend:
- serviceName: bookbuyer
- servicePort: 14001
-EOF
-```
-
-You should see the following output:
-
-```Output
-Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
-ingress.extensions/bookbuyer-ingress created
-```
-
-## View the NGINX logs
-
-```azurecli-interactive
-POD=$(kubectl get pods -n ingress-basic | grep 'nginx-ingress' | awk '{print $1}')
-
-kubectl logs $POD -n ingress-basic -f
-```
-
-Output shows the NGINX ingress controller status when the ingress rule has been applied successfully:
-
-```Output
-I0321 <date> 6 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-basic", Name:"nginx-ingress-ingress-nginx-controller-54cf6c8bf4-jdvrw", UID:"3ebbe5e5-50ef-481d-954d-4b82a499ebe1", APIVersion:"v1", ResourceVersion:"3272", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
-I0321 <date> 6 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"bookbuyer", Name:"bookbuyer-ingress", UID:"e1018efc-8116-493c-9999-294b4566819e", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"5460", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
-I0321 <date> 6 controller.go:146] "Configuration changes detected, backend reload required"
-I0321 <date> 6 controller.go:163] "Backend successfully reloaded"
-I0321 <date> 6 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-basic", Name:"nginx-ingress-ingress-nginx-controller-54cf6c8bf4-jdvrw", UID:"3ebbe5e5-50ef-481d-954d-4b82a499ebe1", APIVersion:"v1", ResourceVersion:"3272", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
-```
-
-## View the NGINX services and bookbuyer service externally
-
-```azurecli-interactive
-kubectl get services -n ingress-basic
-```
-
-```Output
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-nginx-ingress-ingress-nginx-controller LoadBalancer 10.0.100.23 20.193.1.74 80:31742/TCP,443:32683/TCP 4m15s
-nginx-ingress-ingress-nginx-controller-admission ClusterIP 10.0.163.98 <none> 443/TCP 4m15s
-```
-
-Since the host name in the ingress manifest is a pseudo name used for testing, the DNS name will not be available on the internet. We can instead use the curl program and pass the hostname header with the NGINX public IP address to receive a 200 code, successfully connecting us to the bookbuyer service.
-
-```azurecli-interactive
-curl -H 'Host: bookbuyer.contoso.com' http://EXTERNAL-IP/
-```
-
-You should see the following output:
-
-```Output
-<!doctype html>
-<html itemscope="" itemtype="http://schema.org/WebPage" lang="en">
- <head>
- <meta content="Bookbuyer" name="description">
- <meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
- <title>Bookbuyer</title>
- <style>
- #navbar {
- width: 100%;
- height: 50px;
- display: table;
- border-spacing: 0;
- white-space: nowrap;
- line-height: normal;
- background-color: #0078D4;
- background-position: left top;
- background-repeat-x: repeat;
- background-image: none;
- color: white;
- font: 2.2em "Fira Sans", sans-serif;
- }
- #main {
- padding: 10pt 10pt 10pt 10pt;
- font: 1.8em "Fira Sans", sans-serif;
- }
- li {
- padding: 10pt 10pt 10pt 10pt;
- font: 1.2em "Consolas", sans-serif;
- }
- </style>
- <script>
- setTimeout(function(){window.location.reload(1);}, 1500);
- </script>
- </head>
- <body bgcolor="#fff">
- <div id="navbar">
- &#128214; Bookbuyer
- </div>
- <div id="main">
- <ul>
- <li>Total books bought: <strong>1833</strong>
- <ul>
- <li>from bookstore V1: <strong>277</strong>
- <li>from bookstore V2: <strong>1556</strong>
- </ul>
- </li>
- </ul>
- </div>
-
- <br/><br/><br/><br/>
- <br/><br/><br/><br/>
- <br/><br/><br/><br/>
-
- Current Time: <strong>Fri, 26 Mar 2021 15:02:53 UTC</strong>
- </body>
-</html>
-```
aks Use Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-azure-ad-pod-identity.md
Azure AD pod identity supports two modes of operation:
* **Standard Mode**: In this mode, the following two components are deployed to the AKS cluster:
    * [Managed Identity Controller (MIC)](https://azure.github.io/aad-pod-identity/docs/concepts/mic/): An MIC is a Kubernetes controller that watches for changes to pods, [AzureIdentity](https://azure.github.io/aad-pod-identity/docs/concepts/azureidentity/) and [AzureIdentityBinding](https://azure.github.io/aad-pod-identity/docs/concepts/azureidentitybinding/) through the Kubernetes API Server. When it detects a relevant change, the MIC adds or deletes [AzureAssignedIdentity](https://azure.github.io/aad-pod-identity/docs/concepts/azureassignedidentity/) as needed. Specifically, when a pod is scheduled, the MIC assigns the managed identity on Azure to the underlying virtual machine scale set used by the node pool during the creation phase. When all pods using the identity are deleted, it removes the identity from the virtual machine scale set of the node pool, unless the same managed identity is used by other pods. The MIC takes similar actions when AzureIdentity or AzureIdentityBinding are created or deleted.
    * [Node Managed Identity (NMI)](https://azure.github.io/aad-pod-identity/docs/concepts/nmi/): NMI is a pod that runs as a DaemonSet on each node in the AKS cluster. NMI intercepts security token requests to the [Azure Instance Metadata Service](../virtual-machines/linux/instance-metadata-service.md?tabs=linux) on each node, redirects them to itself, validates whether the pod has access to the identity it's requesting a token for, and fetches the token from the Azure AD tenant on behalf of the application.
-* **Managed Mode**: This mode offers only NMI. The identity needs to be manually assigned and managed by the user. For more information, see [Pod identity in managed mode](https://azure.github.io/aad-pod-identity/docs/configure/pod_identity_in_managed_mode/).
+* **Managed Mode**: This mode offers only NMI. When installed via the AKS cluster add-on, Azure manages creation of Kubernetes primitives (AzureIdentity and AzureIdentityBinding) and identity assignment in response to CLI commands by the user. Otherwise, if installed via Helm chart, the identity needs to be manually assigned and managed by the user. For more information, see [Pod identity in managed mode](https://azure.github.io/aad-pod-identity/docs/configure/pod_identity_in_managed_mode/).
When you install the Azure AD pod identity via Helm chart or YAML manifest as shown in the [Installation Guide](https://azure.github.io/aad-pod-identity/docs/getting-started/installation/), you can choose between the `standard` and `managed` mode. If you instead decide to install the Azure AD pod identity using the AKS cluster add-on as shown in this article, the setup will use the `managed` mode.
az aks update -g $MY_RESOURCE_GROUP -n $MY_CLUSTER --enable-pod-identity
## Using Kubenet network plugin with Azure Active Directory pod-managed identities > [!IMPORTANT]
-> Running aad-pod-identity in a cluster with Kubenet is not a recommended configuration because of the security implication. Please follow the mitigation steps and configure policies before enabling aad-pod-identity in a cluster with Kubenet.
+> Running aad-pod-identity in a cluster with Kubenet is not a recommended configuration due to security concerns. Default Kubenet configuration fails to prevent ARP spoofing, which could be utilized by a pod to act as another pod and gain access to an identity it's not intended to have. Please follow the mitigation steps and configure policies before enabling aad-pod-identity in a cluster with Kubenet.
### Mitigation
az aks update -g $MY_RESOURCE_GROUP -n $MY_CLUSTER --enable-pod-identity --enabl
> [!IMPORTANT] > You must have the relevant permissions (for example, Owner) on your subscription to create the identity.
-Create an identity using [az identity create][az-identity-create] and set the *IDENTITY_CLIENT_ID* and *IDENTITY_RESOURCE_ID* variables.
+Create an identity which will be used by the demo pod with [az identity create][az-identity-create] and set the *IDENTITY_CLIENT_ID* and *IDENTITY_RESOURCE_ID* variables.
```azurecli-interactive az group create --name myIdentityResourceGroup --location eastus
export IDENTITY_RESOURCE_ID="$(az identity show -g ${IDENTITY_RESOURCE_GROUP} -n
## Assign permissions for the managed identity
+The managed identity that will be assigned to the pod needs to be granted permissions that align with the actions it will be taking.
+ To run the demo, the *IDENTITY_CLIENT_ID* managed identity must have Virtual Machine Contributor permissions in the resource group that contains the virtual machine scale set of your AKS cluster. ```azurecli-interactive
az aks pod-identity add --resource-group myResourceGroup --cluster-name myAKSClu
> [!NOTE] > When you assign the pod identity by using `pod-identity add`, the Azure CLI attempts to grant the Managed Identity Operator role over the pod identity (*IDENTITY_RESOURCE_ID*) to the cluster identity.
+Azure will create an AzureIdentity resource in your cluster representing the identity in Azure, and an AzureIdentityBinding resource which connects the AzureIdentity to a selector. You can view these resources with:
+
+```azurecli-interactive
+kubectl get azureidentity -n $POD_IDENTITY_NAMESPACE
+kubectl get azureidentitybinding -n $POD_IDENTITY_NAMESPACE
+```
+ ## Run a sample application
-For a pod to use an Azure AD pod-managed identity, the pod needs an *aadpodidbinding* label with a value that matches a selector from a *AzureIdentityBinding*. To run a sample application using an Azure AD pod-managed identity, create a `demo.yaml` file with the following contents. Replace *POD_IDENTITY_NAME*, *IDENTITY_CLIENT_ID*, and *IDENTITY_RESOURCE_GROUP* with the values from the previous steps. Replace *SUBSCRIPTION_ID* with your subscription ID.
+For a pod to use AAD pod-managed identity, the pod needs an *aadpodidbinding* label with a value that matches a selector from an *AzureIdentityBinding*. By default, the selector will match the name of the pod identity, but it can also be set using the `--binding-selector` option when calling `az aks pod-identity add`.
+
+To run a sample application using AAD pod-managed identity, create a `demo.yaml` file with the following contents. Replace *POD_IDENTITY_NAME*, *IDENTITY_CLIENT_ID*, and *IDENTITY_RESOURCE_GROUP* with the values from the previous steps. Replace *SUBSCRIPTION_ID* with your subscription ID.
> [!NOTE] > In the previous steps, you created the *POD_IDENTITY_NAME*, *IDENTITY_CLIENT_ID*, and *IDENTITY_RESOURCE_GROUP* variables. You can use a command such as `echo` to display the value you set for variables, for example `echo $IDENTITY_NAME`.
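+
+As an illustration of this label binding, a minimal sketch of the pod metadata in `demo.yaml` is shown below. This is not the full demo manifest; the container image and remaining fields are placeholders that show only where the *aadpodidbinding* label goes:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: demo
+  labels:
+    aadpodidbinding: POD_IDENTITY_NAME # must match the AzureIdentityBinding selector
+spec:
+  containers:
+  - name: demo
+    image: DEMO_IMAGE # placeholder; use the image from the full demo.yaml
+```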
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-multiple-node-pools.md
az aks nodepool add \
--no-wait ```
-> [!NOTE]
-> A taint can only be set for node pools during node pool creation.
- The following example output from the [az aks nodepool list][az-aks-nodepool-list] command shows that *taintnp* is *Creating* nodes with the specified *nodeTaints*: ```console
az aks nodepool add \
--labels dept=IT costcenter=9999 \ --no-wait ```-
-> [!NOTE]
-> Label can only be set for node pools during node pool creation. Labels must also be a key/value pair and have a [valid syntax][kubernetes-label-syntax].
- The following example output from the [az aks nodepool list][az-aks-nodepool-list] command shows that *labelnp* is *Creating* nodes with the specified *nodeLabels*: ```console
analysis-services Analysis Services Async Refresh https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-async-refresh.md
description: Describes how to use the Azure Analysis Services REST API to code a
Previously updated : 04/15/2020 Last updated : 02/02/2022
analysis-services Analysis Services Bcdr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-bcdr.md
description: This article describes how Azure Analysis Services provides high av
Previously updated : 03/29/2021 Last updated : 02/02/2022
analysis-services Analysis Services Connect Excel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-connect-excel.md
description: Learn how to connect to an Azure Analysis Services server by using
Previously updated : 12/01/2020 Last updated : 02/02/2022
analysis-services Analysis Services Connect https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-connect.md
description: Learn how to connect to and get data from an Analysis Services serv
Previously updated : 12/01/2020 Last updated : 02/02/2022
analysis-services Analysis Services Database Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-database-users.md
description: Learn how to manage database roles and users on an Analysis Service
Previously updated : 04/27/2021 Last updated : 02/02/2022
analysis-services Analysis Services Datasource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-datasource.md
description: Describes data sources and connectors supported for tabular 1200 an
Previously updated : 03/29/2021 Last updated : 02/02/2022
analysis-services Analysis Services Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-gateway.md
description: An On-premises gateway is necessary if your Analysis Services serve
Previously updated : 11/09/2021 Last updated : 02/02/2022 # Connecting to on-premises data sources with On-premises data gateway
-The on-premises data gateway provides secure data transfer between on-premises data sources and your Azure Analysis Services servers in the cloud. In addition to working with multiple Azure Analysis Services servers in the same region, the latest version of the gateway also works with Azure Logic Apps, Power BI, Power Apps, and Power Automate. While the gateway you install is the same across all of these services, Azure Analysis Services and Logic Apps have some additional steps.
+The On-premises data gateway provides secure data transfer between on-premises data sources and your Azure Analysis Services servers in the cloud. In addition to working with multiple Azure Analysis Services servers in the same region, the gateway also works with Azure Logic Apps, Power BI, Power Apps, and Power Automate. While the gateway you install is the same across all of these services, Azure Analysis Services and Logic Apps have some additional steps required for successful installation.
-Information provided here is specific to how Azure Analysis Services works with the On-premises Data Gateway. To learn more about the gateway in general and how it works with other services, see [What is an on-premises data gateway?](/data-integration/gateway/service-gateway-onprem).
+Information provided here is specific to how Azure Analysis Services works with the On-premises data gateway. To learn more about the gateway in general and how it works with other services, see [What is an On-premises data gateway?](/data-integration/gateway/service-gateway-onprem).
For Azure Analysis Services, getting setup with the gateway the first time is a four-part process:
For Azure Analysis Services, getting setup with the gateway the first time is a
- **Create a gateway resource in Azure** - In this step, you create a gateway resource in Azure. -- **Connect the gateway resource to servers** - Once you have a gateway resource, you can begin connecting servers to it. You can connect multiple servers and other resources provided they are in the same region.
+- **Connect the gateway resource to servers** - Once you have a gateway resource, you can begin connecting your servers to it. You can connect multiple servers and other resources provided they are in the same region.
## Installing
-When installing for an Azure Analysis Services environment, it's important you follow the steps described in [Install and configure on-premises data gateway for Azure Analysis Services](analysis-services-gateway-install.md). This article is specific to Azure Analysis Services. It includes additional steps required to setup an On-premises data gateway resource in Azure, and connect your Azure Analysis Services server to the resource.
+When installing for an Azure Analysis Services environment, it's important you follow the steps described in [Install and configure on-premises data gateway for Azure Analysis Services](analysis-services-gateway-install.md). This article is specific to Azure Analysis Services. It includes additional steps required to setup an On-premises data gateway resource in Azure, and connect your Azure Analysis Services server to the gateway resource.
## Connecting to a gateway resource in a different subscription
analysis-services Analysis Services Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-manage-users.md
description: This article describes how Azure Analysis Services uses Azure Activ
Previously updated : 12/01/2020 Last updated : 02/02/2022
Azure Analysis Services supports [Azure AD B2B collaboration](../active-director
All client applications and tools use one or more of the Analysis Services [client libraries](/analysis-services/client-libraries?view=azure-analysis-services-current&preserve-view=true) (AMO, MSOLAP, ADOMD) to connect to a server.
-All three client libraries support both Azure AD interactive flow, and non-interactive authentication methods. The two non-interactive methods, Active Directory Password and Active Directory Integrated Authentication methods can be used in applications utilizing AMOMD and MSOLAP. These two methods never result in pop-up dialog boxes.
+All three client libraries support both Azure AD interactive flow, and non-interactive authentication methods. The two non-interactive methods, Active Directory Password and Active Directory Integrated Authentication methods can be used in applications utilizing AMOMD and MSOLAP. These two methods never result in pop-up dialog boxes for sign in.
-Client applications like Excel and Power BI Desktop, and tools like SSMS and Analysis Services projects extension for Visual Studio install the latest versions of the libraries when updated to the latest release. Power BI Desktop, SSMS, and Analysis Services projects extension are updated monthly. Excel is [updated with Microsoft 365](https://support.microsoft.com/office/when-do-i-get-the-newest-features-for-microsoft-365-da36192c-58b9-4bc9-8d51-bb6eed468516). Microsoft 365 updates are less frequent, and some organizations use the deferred channel, meaning updates are deferred up to three months.
+Client applications like Excel and Power BI Desktop, and tools like SSMS and Analysis Services projects extension for Visual Studio install the latest versions of the client libraries with regular updates. Power BI Desktop, SSMS, and Analysis Services projects extension are updated monthly. Excel is [updated with Microsoft 365](https://support.microsoft.com/office/when-do-i-get-the-newest-features-for-microsoft-365-da36192c-58b9-4bc9-8d51-bb6eed468516). Microsoft 365 updates are less frequent, and some organizations use the deferred channel, meaning updates are deferred up to three months.
-Depending on the client application or tool you use, the type of authentication and how you sign in may be different. Each application may support different features for connecting to cloud services like Azure Analysis Services.
+Depending on the client application or tools you use, the type of authentication and how you sign in may be different. Each application may support different features for connecting to cloud services like Azure Analysis Services.
-Power BI Desktop, Visual Studio, and SSMS support Active Directory Universal Authentication, an interactive method that also supports Azure AD Multi-Factor Authentication (MFA). Azure AD MFA helps safeguard access to data and applications while providing a simple sign-in process. It delivers strong authentication with several verification options (phone call, text message, smart cards with pin, or mobile app notification). Interactive MFA with Azure AD can result in a pop-up dialog box for validation. **Universal Authentication is recommended**.
+Power BI Desktop, Visual Studio, and SSMS support Active Directory Universal Authentication, an interactive method that also supports Azure AD Multi-Factor Authentication (MFA). Azure AD MFA helps safeguard access to data and applications while providing a simple sign in process. It delivers strong authentication with several verification options (phone call, text message, smart cards with pin, or mobile app notification). Interactive MFA with Azure AD can result in a pop-up dialog box for validation. **Universal Authentication is recommended**.
If signing in to Azure by using a Windows account, and Universal Authentication is not selected or available (Excel), [Active Directory Federation Services (AD FS)](/windows-server/identity/ad-fs/deployment/how-to-connect-fed-azure-adfs) is required. With Federation, Azure AD and Microsoft 365 users are authenticated using on-premises credentials and can access Azure resources.
Excel users can connect to a server by using a Windows account, an organization
## User permissions
-**Server administrators** are specific to an Azure Analysis Services server instance. They connect with tools like Azure portal, SSMS, and Visual Studio to perform tasks like adding databases and managing user roles. By default, the user that creates the server is automatically added as an Analysis Services server administrator. Other administrators can be added by using Azure portal or SSMS. Server administrators must have an account in the Azure AD tenant in the same subscription. To learn more, see [Manage server administrators](analysis-services-server-admins.md).
+**Server administrators** are specific to an Azure Analysis Services server instance. They connect with tools like Azure portal, SSMS, and Visual Studio to perform tasks like configuring settings and managing user roles. By default, the user that creates the server is automatically added as an Analysis Services server administrator. Other administrators can be added by using Azure portal or SSMS. Server administrators must have an account in the Azure AD tenant in the same subscription. To learn more, see [Manage server administrators](analysis-services-server-admins.md).
**Database users** connect to model databases by using client applications like Excel or Power BI. Users must be added to database roles. Database roles define administrator, process, or read permissions for a database. It's important to understand database users in a role with administrator permissions is different than server administrators. However, by default, server administrators are also database administrators. To learn more, see [Manage database roles and users](analysis-services-database-users.md).
analysis-services Analysis Services Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-manage.md
description: This article describes the tools used to manage administration and
Previously updated : 10/28/2019 Last updated : 02/02/2022 # Manage Analysis Services
-Once you've created an Analysis Services server in Azure, there may be some administration and management tasks you need to perform right away or sometime down the road. For example, run processing to the refresh data, control who can access the models on your server, or monitor your server's health. Some management tasks can only be performed in Azure portal, others in SQL Server Management Studio (SSMS), and some tasks can be done in either.
+Once you've created an Analysis Services server resource in Azure, there may be some administration and management tasks you need to perform right away or sometime down the road. For example, run processing to the refresh data, control who can access the models on your server, or monitor your server's health. Some management tasks can only be performed in Azure portal, others in SQL Server Management Studio (SSMS), and some tasks can be done in either.
## Azure portal [Azure portal](https://portal.azure.com/) is where you can create and delete servers, monitor server resources, change size, and manage who has access to your servers. If you're having some problems, you can also submit a support request.
To get all the latest features, and the smoothest experience when connecting to
![Connect in SSMS](./media/analysis-services-manage/aas-manage-connect-ssms.png) +
+## External open source tools
+
+**Tabular Editor** - An open-source tool for creating, maintaining, and managing tabular models using an intuitive, lightweight editor. A hierarchical view shows all objects in your tabular model. Objects are organized by display folders with support for multi-select property editing and DAX syntax highlighting. XMLA read-only is required for query operations. Read-write is required for metadata operations. To learn more, see [tabulareditor.github.io](https://tabulareditor.github.io/).
+
+**ALM Toolkit** - An open-source schema compare tool for Analysis Services tabular models and Power BI datasets, most often used for application lifecycle management (ALM) scenarios. Perform deployment across environments and retain incremental refresh historical data. Diff and merge metadata files, branches and repos. Reuse common definitions between datasets. Read-only is required for query operations. Read-write is required for metadata operations. To learn more, see [alm-toolkit.com](http://alm-toolkit.com/).
+
+**DAX Studio** - An open-source tool for DAX authoring, diagnosis, performance tuning, and analysis. Features include object browsing, integrated tracing, query execution breakdowns with detailed statistics, DAX syntax highlighting and formatting. XMLA read-only is required for query operations. To learn more, see [daxstudio.org](https://daxstudio.org/).
+ ## Server administrators and database users In Azure Analysis Services, there are two types of users, server administrators and database users. Both types of users must be in your Azure Active Directory and must be specified by organizational email address or UPN. To learn more, see [Authentication and user permissions](analysis-services-manage-users.md). - ## Troubleshooting connection problems When connecting using SSMS, if you run into problems, you may need to clear the login cache. Nothing is cached to disc. To clear the cache, close and restart the connect process. ## Next steps If you haven't already deployed a tabular model to your new server, now is a good time. To learn more, see [Deploy to Azure Analysis Services](analysis-services-deploy.md).
-If you've deployed a model to your server, you're ready to connect to it using a client or browser. To learn more, see [Get data from Azure Analysis Services server](analysis-services-connect.md).
+If you've deployed a model to your server, you're ready to connect to it using a client application or tool. To learn more, see [Get data from Azure Analysis Services server](analysis-services-connect.md).
analysis-services Analysis Services Server Admins https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-server-admins.md
description: This article describes how to manage server administrators for an A
Previously updated : 2/4/2021 Last updated : 02/02/2022
analysis-services Analysis Services Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-service-principal.md
description: Learn how to create a service principal for automating Azure Analys
Previously updated : 04/27/2021 Last updated : 02/02/2022
# Automation with service principals
-Service principals are an Azure Active Directory application resource you create within your tenant to perform unattended resource and service level operations. They're a unique type of *user identity* with an application ID and password or certificate. A service principal has only those permissions necessary to perform tasks defined by the roles and permissions for which it's assigned.
+Service principals are an Azure Active Directory application resource you create within your tenant to perform unattended resource and service level operations. They're a unique type of *user identity* with an application ID and password or certificate. A service principal has only those permissions necessary to perform tasks defined by the roles and permissions for which it is assigned.
In Analysis Services, service principals are used with Azure Automation, PowerShell unattended mode, custom client applications, and web apps to automate common tasks. For example, provisioning servers, deploying models, data refresh, scale up/down, and pause/resume can all be automated by using service principals. Permissions are assigned to service principals through role membership, much like regular Azure AD UPN accounts.
analysis-services Analysis Services Vnet Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-vnet-gateway.md
description: Learn how to configure an Azure Analysis Services server to use a g
Previously updated : 04/27/2021 Last updated : 02/02/2022
This article describes the Azure Analysis Services **AlwaysUseGateway** server p
## Server access to VNet data sources
-If your data sources are accessed through a VNet, your Azure Analysis Services server must connect to those data sources as if they are on-premises, in your own environment. You can configure the **AlwaysUseGateway** server property to specify the server to access all data sources through an [On-premises gateway](analysis-services-gateway.md).
+If your data sources are accessed through a VNet, your Azure Analysis Services server must connect to those data sources as if they are on-premises, in your own environment. You must configure the **AlwaysUseGateway** server property to specify the server resource to access all data sources through an [On-premises data gateway](analysis-services-gateway.md).
-Azure SQL Managed Instance data sources run within Azure VNet with a private IP address. If public endpoint is enabled on the instance, a gateway is not required. If public endpoint is not enabled, an On-premises Data Gateway is required and the AlwaysUseGateway property must be set to true.
+Azure SQL Managed Instance data sources run within Azure VNet with a private IP address. If public endpoint is enabled on the instance, a gateway is not required. If public endpoint is not enabled, an On-premises data gateway is required and the AlwaysUseGateway property must be set to true.
> [!NOTE]
-> This property is effective only when an [On-premises Data Gateway](analysis-services-gateway.md) is installed and configured. The gateway can be on the VNet.
+> This property is effective only when an [On-premises data gateway](analysis-services-gateway.md) is installed and configured. The gateway can be on the VNet.
## Configure AlwaysUseGateway property
analysis-services Analysis Services Tutorial Pbid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/tutorials/analysis-services-tutorial-pbid.md
description: In this tutorial, learn how to get an Analysis Services server name from the Azure portal and then connect to the server by using Power BI Desktop. Previously updated : 10/12/2021 Last updated : 02/02/2022 #Customer intent: As a BI developer, I want to connect to a sample tabular model on a server and create a basic report by using the Power BI Desktop client application.
In this tutorial, you use Power BI Desktop to connect to the adventureworks samp
- [Install the newest Power BI Desktop](https://powerbi.microsoft.com/desktop). ## Sign in to the Azure portal
-In this tutorial, you sing in to the portal to get the server name only. Typically, users would get the server name from the server administrator.
+In this tutorial, you sign in to the portal to get the server name only. Typically, users would get the server name from the server administrator.
Sign in to the [portal](https://portal.azure.com/). ## Get server name
-In order to connect to your server from Power BI Desktop, you first need the server name. You can get the server name from the portal.
+In order to connect to your server from Power BI Desktop, you first need the server name.
In **Azure portal** > server > **Overview** > **Server name**, copy the server name.
In **Azure portal** > server > **Overview** > **Server name**, copy the server n
If no longer needed, do not save your report or delete the file if you did save. ## Next steps
-In this tutorial, you learned how to use Power BI Desktop to connect to a data model on a server and create a basic report. If you're not familiar with how to create a data model, see the [Adventure Works Internet Sales tabular data modeling tutorial](/analysis-services/tutorial-tabular-1400/as-adventure-works-tutorial) in the SQL Server Analysis Services docs.
+In this tutorial, you learned how to use Power BI Desktop to connect to a data model on a server and create a basic report. If you're not familiar with how to create a data model, see the [Adventure Works Internet Sales tabular data modeling tutorial](/analysis-services/tutorial-tabular-1400/as-adventure-works-tutorial) in the SQL Server Analysis Services docs.
api-management Api Management Get Started Publish Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-get-started-publish-versions.md
You can interact directly with version sets by using the Azure CLI:
To see all your version sets, run the [az apim api versionset list](/cli/azure/apim/api/versionset#az_apim_api_versionset_list) command: ```azurecli
-az apim api versionset list --resource-group apim-hello-word-resource-group \
+az apim api versionset list --resource-group apim-hello-world-resource-group \
--service-name apim-hello-world --output table ```
When the Azure portal creates a version set for you, it assigns an alphanumeric
To see details about a version set, run the [az apim api versionset show](/cli/azure/apim/api/versionset#az_apim_api_versionset_show) command: ```azurecli
-az apim api versionset show --resource-group apim-hello-word-resource-group \
+az apim api versionset show --resource-group apim-hello-world-resource-group \
--service-name apim-hello-world --version-set-id 00000000000000000000000 ```
app-service Configure Authentication File Based https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-authentication-file-based.md
The following exhaustively lists the possible configuration options within the file:
"redirectToProvider": "<default provider alias>", "excludedPaths": [ "/path1",
- "/path2"
+ "/path2",
+ "/path3/subpath/*"
] }, "httpSettings": {
app-service How To Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/how-to-migrate.md
Title: How to migrate App Service Environment v2 to App Service Environment v3
-description: Learn how to migrate your App Service Environment v2 to App Service Environment v3
+ Title: Use the migration feature to migrate App Service Environment v2 to App Service Environment v3
+description: Learn how to migrate your App Service Environment v2 to App Service Environment v3 using the migration feature
Previously updated : 2/01/2022 Last updated : 2/2/2022 zone_pivot_groups: app-service-cli-portal
-# How to migrate App Service Environment v2 to App Service Environment v3
+# Use the migration feature to migrate App Service Environment v2 to App Service Environment v3
-An App Service Environment v2 can be migrated to an [App Service Environment v3](overview.md). To learn more about the migration process and to see if your App Service Environment supports migration at this time, see the [Migration to App Service Environment v3 Overview](migrate.md).
+An App Service Environment v2 can be automatically migrated to an [App Service Environment v3](overview.md) using the migration feature. To learn more about the migration process and to see if your App Service Environment supports migration at this time, see the [Migration to App Service Environment v3 Overview](migrate.md).
> [!IMPORTANT] > It is recommended to use this feature for dev environments first before migrating any production environments to ensure there are no unexpected issues. Please provide any feedback related to this article or the feature using the buttons at the bottom of the page.
An App Service Environment v2 can be migrated to an [App Service Environment v3]
## Prerequisites
-Ensure you understand how migrating to an App Service Environment v3 will affect your applications. Review the [migration process](migrate.md#overview-of-the-migration-process) to understand the process timeline and where and when you'll need to get involved. Also review the [FAQs](migrate.md#frequently-asked-questions), which may answer some questions you currently have.
+Ensure you understand how migrating to an App Service Environment v3 will affect your applications. Review the [migration process](migrate.md#overview-of-the-migration-process-using-the-migration-feature) to understand the process timeline and where and when you'll need to get involved. Also review the [FAQs](migrate.md#frequently-asked-questions), which may answer some questions you currently have.
::: zone pivot="experience-azcli"
-The recommended experience for migration is using the [Azure portal](how-to-migrate.md?pivots=experience-azp). If you decide to use the Azure CLI to carry out the migration, you should follow the below steps in order and as written since you'll be making Azure REST API calls. The recommended way for making these API calls is by using the [Azure CLI](/cli/azure/). For information about other methods, see [Getting Started with Azure REST](/rest/api/azure/).
+The recommended experience for the migration feature is using the [Azure portal](how-to-migrate.md?pivots=experience-azp). If you decide to use the Azure CLI to carry out the migration, you should follow the steps described here in order and as written since you'll be making Azure REST API calls. The recommended way for making these API calls is by using the [Azure CLI](/cli/azure/). For information about other methods, see [Getting Started with Azure REST](/rest/api/azure/).
For this guide, [install the Azure CLI](/cli/azure/install-azure-cli) or use the [Azure Cloud Shell](https://shell.azure.com/).
ASE_ID=$(az appservice ase show --name $ASE_NAME --resource-group $ASE_RG --quer
## 2. Validate migration is supported
-The following command will check whether your App Service Environment is supported for migration. If you receive an error or if your App Service Environment is in an unhealthy or suspended state, you can't migrate at this time. If your environment [won't be supported for migration](migrate.md#supported-scenarios) or you want to migrate to App Service Environment v3 without using the migration feature, see [migration alternatives](migration-alternatives.md).
+The following command will check whether your App Service Environment is supported for migration. If you receive an error or if your App Service Environment is in an unhealthy or suspended state, you can't migrate at this time. If your environment [won't be supported for migration](migrate.md#supported-scenarios) or you want to migrate to App Service Environment v3 without using the migration feature, see the [manual migration options](migration-alternatives.md).
```azurecli az rest --method post --uri "${ASE_ID}/migrate?api-version=2021-02-01&phase=validation"
az appservice ase show --name $ASE_NAME --resource-group $ASE_RG
From the [Azure portal](https://portal.azure.com), navigate to the **Overview** page for the App Service Environment you'll be migrating. The platform will validate if migration is supported for your App Service Environment. Wait a couple seconds after the page loads for this validation to take place.
-If migration is supported for your App Service Environment, there are three ways to access the migration feature. These methods include a banner at the top of the overview page, a new item in the left-hand side menu called **Migration (preview)**, and an info box on the **Configuration** page. Select any of these methods to move on to the next step in the migration process.
+If migration is supported for your App Service Environment, there are three ways to access the migration feature. These methods include a banner at the top of the Overview page, a new item in the left-hand side menu called **Migration (preview)**, and an info box on the **Configuration** page. Select any of these methods to move on to the next step in the migration process.
![migration access points](./media/migration/portal-overview.png) ![configuration page view](./media/migration/configuration-migration-support.png)
-If you don't see these elements, your App Service Environment isn't supported for migration at this time or your environment is in an unhealthy or suspended state (which blocks migration). If your environment [won't be supported for migration](migrate.md#supported-scenarios) or you want to migrate to App Service Environment v3 without using the migration feature, see [migration alternatives](migration-alternatives.md).
+If you don't see these elements, your App Service Environment isn't supported for migration at this time or your environment is in an unhealthy or suspended state (which blocks migration). If your environment [won't be supported for migration](migrate.md#supported-scenarios) or you want to migrate to App Service Environment v3 without using the migration feature, see the [manual migration options](migration-alternatives.md).
The migration page will guide you through the series of steps to complete the migration.
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/migrate.md
Title: Migration to App Service Environment v3
-description: Overview of the migration process to App Service Environment v3
+ Title: Migrate to App Service Environment v3 by using the migration feature
+description: Overview of the migration feature for migration to App Service Environment v3
Previously updated : 1/28/2022 Last updated : 2/2/2022
-# Migration to App Service Environment v3
+# Migration to App Service Environment v3 using the migration feature
-App Service can now migrate your App Service Environment v2 to an [App Service Environment v3](overview.md). If you want to migrate an App Service Environment v1 to an App Service Environment v3, see the [migration alternatives documentation](migration-alternatives.md). App Service Environment v3 provides [advantages and feature differences](overview.md#feature-differences) over earlier versions. Make sure to review the [supported features](overview.md#feature-differences) of App Service Environment v3 before migrating to reduce the risk of an unexpected application issue.
+App Service can now automate migration of your App Service Environment v2 to an [App Service Environment v3](overview.md). If you want to migrate an App Service Environment v1 to an App Service Environment v3, see the [manual migration options documentation](migration-alternatives.md). App Service Environment v3 provides [advantages and feature differences](overview.md#feature-differences) over earlier versions. Make sure to review the [supported features](overview.md#feature-differences) of App Service Environment v3 before migrating to reduce the risk of an unexpected application issue.
> [!IMPORTANT] > It is recommended to use this feature for dev environments first before migrating any production environments to ensure there are no unexpected issues. Please provide any feedback related to this article or the feature using the buttons at the bottom of the page.
App Service can now migrate your App Service Environment v2 to an [App Service E
## Supported scenarios
-At this time, App Service Environment migrations to v3 support both [Internal Load Balancer (ILB)](create-ilb-ase.md) and [external (internet facing with public IP)](create-external-ase.md) App Service Environment v2 in the following regions:
+At this time, App Service Environment migrations to v3 using the migration feature support both [Internal Load Balancer (ILB)](create-ilb-ase.md) and [external (internet facing with public IP)](create-external-ase.md) App Service Environment v2 in the following regions:
- West Central US - Canada Central
The following scenarios aren't supported in this version of the feature:
- [Zone pinned](zone-redundancy.md) App Service Environment v2 - App Service Environment in a region not listed in the supported regions
-The migration feature doesn't plan on supporting App Service Environment v1 within a classic VNet. See [migration alternatives](migration-alternatives.md) if your App Service Environment falls into this category.
+The migration feature doesn't plan on supporting App Service Environment v1 within a classic VNet. See the [manual migration options](migration-alternatives.md) if your App Service Environment falls into this category.
The App Service platform will review your App Service Environment to confirm migration support. If your scenario doesn't pass all validation checks, you won't be able to migrate at this time using the migration feature. If your environment is in an unhealthy or suspended state, you won't be able to migrate until you make the needed updates.
-## Overview of the migration process
+## Overview of the migration process using the migration feature
Migration consists of a series of steps that must be followed in order. Key points are given for a subset of the steps. It's important to understand what will happen during these steps and how your environment and apps will be impacted. After reviewing the following information and when you're ready to migrate, follow the [step-by-step guide](how-to-migrate.md).
Once the new IPs are created, you'll have the new default outbound to the intern
### Delegate your App Service Environment subnet
-App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. If the App Service Environment's subnet isn't delegated or it's delegated to a different resource, migration will fail.
+App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. Migration won't succeed if the App Service Environment's subnet isn't delegated, or if it's delegated to a different resource.
### Migrate to App Service Environment v3
There's no cost to migrate your App Service Environment. You'll stop being charg
## Frequently asked questions - **What if migrating my App Service Environment is not currently supported?**
- You won't be able migrate using the migration feature at this time. If you have an unsupported environment and want to migrate immediately, see [migration alternatives](migration-alternatives.md).
+ You won't be able to migrate using the migration feature at this time. If you have an unsupported environment and want to migrate immediately, see the [manual migration options](migration-alternatives.md). This article will be updated as additional regions and supported scenarios become available.
- **Will I experience downtime during the migration?**
- Yes, you should expect about one hour of downtime during the migration step so plan accordingly. If downtime isn't an option for you, see [migration alternatives](migration-alternatives.md).
+ Yes, you should expect about one hour of downtime during the migration step so plan accordingly. If downtime isn't an option for you, see the [manual migration options](migration-alternatives.md).
- **Will I need to do anything to my apps after the migration to get them running on the new App Service Environment?** No, all of your apps running on the old environment will be automatically migrated to the new environment and run like before. No user input is needed. - **What if my App Service Environment has a custom domain suffix?**
- You won't be able migrate using the migration feature at this time. If you have an unsupported environment and want to migrate immediately, see [migration alternatives](migration-alternatives.md).
+ You won't be able to migrate using the migration feature at this time. If you have an unsupported environment and want to migrate immediately, see the [manual migration options](migration-alternatives.md).
- **What if my App Service Environment is zone pinned?**
- Zone pinned App Service Environment is currently not a supported scenario for migration. When supported, zone pinned App Service Environments will be migrated to zone redundant App Service Environment v3.
+ Zone pinned App Service Environment is currently not a supported scenario for migration using the migration feature. When supported, zone pinned App Service Environments will be migrated to zone redundant App Service Environment v3.
- **What properties of my App Service Environment will change?** You'll now be on App Service Environment v3 so be sure to review the [features and feature differences](overview.md#feature-differences) compared to previous versions. For ILB App Service Environment, you'll keep the same ILB IP address. For internet facing App Service Environment, the public IP address and the outbound IP address will change. Note for internet facing App Service Environment, previously there was a single IP for both inbound and outbound. For App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses). - **What happens if migration fails or there is an unexpected issue during the migration?**
app-service Migration Alternatives https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/migration-alternatives.md
Title: Alternative methods for migrating to App Service Environment v3
-description: Migrate to App Service Environment v3 Without Using the Migration Feature
+ Title: Migrate to App Service Environment v3
+description: How to migrate your applications to App Service Environment v3
Previously updated : 1/28/2022 Last updated : 2/2/2022
-# Migrate to App Service Environment v3 without using the migration feature
+# Migrate to App Service Environment v3
> [!NOTE]
-> The App Service Environment v3 [migration feature](migrate.md) is now available for a set of supported environment configurations. Consider that feature which provides an automated migration path to [App Service Environment v3](overview.md).
+> The App Service Environment v3 [migration feature](migrate.md) is now available for a set of supported environment configurations in certain regions. Consider using that feature, which provides an automated migration path to [App Service Environment v3](overview.md).
>
-If you're currently using App Service Environment v1 or v2, you have the opportunity to migrate your workloads to [App Service Environment v3](overview.md). App Service Environment v3 has [advantages and feature differences](overview.md#feature-differences) that provide enhanced support for your workloads and can reduce overall costs. Consider using the [migration feature](migrate.md) if your environment falls into one of the [supported scenarios](migrate.md#supported-scenarios). If your environment isn't currently supported by the migration feature, you can wait for support if your scenario is listed in the [upcoming supported scenarios](migrate.md#migration-feature-limitations). Otherwise, you can choose to use one of the alternative migration options given in this article.
+If you're currently using App Service Environment v1 or v2, you have the opportunity to migrate your workloads to [App Service Environment v3](overview.md). App Service Environment v3 has [advantages and feature differences](overview.md#feature-differences) that provide enhanced support for your workloads and can reduce overall costs. Consider using the [migration feature](migrate.md) if your environment falls into one of the [supported scenarios](migrate.md#supported-scenarios). If your environment isn't currently supported by the migration feature, you can wait for support if your scenario is listed in the [upcoming supported scenarios](migrate.md#migration-feature-limitations). Otherwise, you can choose to use one of the manual migration options given in this article.
-If your App Service Environment [won't be supported for migration](migrate.md#migration-feature-limitations) with the migration feature, you must use one of the alternative methods to migrate to App Service Environment v3.
+If your App Service Environment [won't be supported for migration](migrate.md#migration-feature-limitations) with the migration feature, you must use one of the manual methods to migrate to App Service Environment v3.
## Prerequisites
Scenario: An existing app running on an App Service Environment v1 or App Servic
For any migration method that doesn't use the [migration feature](migrate.md), you'll need to [create the App Service Environment v3](creation.md) and a new subnet using the method of your choice. There are [feature differences](overview.md#feature-differences) between App Service Environment v1/v2 and App Service Environment v3 as well as [networking changes](networking.md) that will involve new (and for internet-facing environments, additional) IP addresses. You'll need to update any infrastructure that relies on these IPs.
-Note that multiple App Service Environments can't exist in a single subnet. If you need to use your existing subnet for your new App Service Environment v3, you'll need to delete the existing App Service Environment before you create a new one. For this scenario, the recommended migration method is to [back up your apps and then restore them](#back-up-and-restore) on the new environment after it gets created and configured. There will be application downtime during this process because of the time it takes to delete the old environment, create the new App Service Environment v3, configure any infrastructure and connected resources to work with the new environment, and deploy your apps onto the new environment.
+Note that multiple App Service Environments can't exist in a single subnet. If you need to use your existing subnet for your new App Service Environment v3, you'll need to delete the existing App Service Environment before you create a new one. For this scenario, the recommended migration method is to [back up your apps and then restore them](#back-up-and-restore) in the new environment after it gets created and configured. There will be application downtime during this process because of the time it takes to delete the old environment, create the new App Service Environment v3, configure any infrastructure and connected resources to work with the new environment, and deploy your apps onto the new environment.
### Checklist before migrating apps
To clone an app using the [Azure portal](https://www.portal.azure.com), navigate
1. Use your App Service Environment v3 name for **Region**. 1. Choose whether or not to clone your deployment source. 1. You can use an existing Windows **App Service plan** from your new environment if you created one already, or create a new one. The available Windows App Service plans in your new App Service Environment v3, if any, will be listed in the dropdown.
-1. Modify **SKU and size** as needed using one of the Isolated v2 options if creating a new App Service plan. Note App Service Environment v3 uses Isolated v2 plans, which have more memory and CPU per corresponding instance size compared to the Isolated plan. For more information, see [App Service Environment v3 pricing](overview.md#pricing).
+1. Modify **SKU and size** as needed using one of the Isolated v2 options if creating a new App Service plan. Note App Service Environment v3 uses Isolated v2 plans, which have more memory and CPU per corresponding instance size compared to the Isolated plan. For more information, see [App Service Environment v3 SKU details](overview.md#pricing).
![clone sample](./media/migration/portal-clone-sample.png)
To clone an app using the [Azure portal](https://www.portal.azure.com), navigate
If the above features don't support your apps or you're looking to take a more manual route, you have the option of deploying your apps following the same process you used for your existing App Service Environment. At this time, all deployment methods except FTP are supported on App Service Environment v3. You don't need to make updates when you deploy your apps to your new environment unless you want to make changes or take advantage of App Service Environment v3's dedicated features.
-You can export [Azure Resource Manager (ARM) templates](../../azure-resource-manager/templates/overview.md) of your existing apps, App Service plans, and any other supported resources and deploy them in your new environment. To export a template for just your app, head over to your App Service and go to **Export template** under **Automation**.
+You can export [Azure Resource Manager (ARM) templates](../../azure-resource-manager/templates/overview.md) of your existing apps, App Service plans, and any other supported resources and deploy them in or with your new environment. To export a template for just your app, head over to your App Service and go to **Export template** under **Automation**.
![export from toc](./media/migration/export-toc.png)
app-service Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/quickstart-python.md
To run the application locally:
:::image type="content" source="./media/quickstart-python/run-flask-app-localhost.png" alt-text="Screenshot of the Flask app running locally in a browser":::
-Having issues? [Let us know](https://aka.ms/FlaskCLIQuickstartHelp).
+Having issues? [Let us know](https://aka.ms/PythonAppServiceQuickstartFeedback).
### [Django](#tab/django)
Having issues? [Let us know](https://aka.ms/FlaskCLIQuickstartHelp).
:::image type="content" source="./media/quickstart-python/run-django-app-localhost.png" alt-text="Screenshot of the Django app running locally in a browser":::
-Having issues? [Let us know](https://aka.ms/FlaskCLIQuickstartHelp).
+Having issues? [Let us know](https://aka.ms/PythonAppServiceQuickstartFeedback).
To create Azure resources in VS Code, you must have the [Azure Tools extension p
-Having issues? [Let us know](https://aka.ms/FlaskCLIQuickstartHelp).
+Having issues? [Let us know](https://aka.ms/PythonAppServiceQuickstartFeedback).
## 3 - Deploy your application code to Azure
To deploy a web app from VS Code, you must have the [Azure Tools extension pack]
-Having issues? Refer first to the [Troubleshooting guide](/azure/app-service/configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/FlaskCLIQuickstartHelp).
+Having issues? Refer first to the [Troubleshooting guide](/azure/app-service/configure-language-python.md#troubleshooting); otherwise, [let us know](https://aka.ms/PythonAppServiceQuickstartFeedback).
## 4 - Browse to the app
The Python sample code is running a Linux container in App Service using a built
**Congratulations!** You have deployed your Python app to App Service.
-Having issues? Refer first to the [Troubleshooting guide](/azure/app-service/configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/FlaskCLIQuickstartHelp).
+Having issues? Refer first to the [Troubleshooting guide](/azure/app-service/configure-language-python.md#troubleshooting); otherwise, [let us know](https://aka.ms/PythonAppServiceQuickstartFeedback).
## 5 - Stream logs
Starting Live Log Stream
-
-Having issues? Refer first to the [Troubleshooting guide](/azure/app-service/configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/FlaskCLIQuickstartHelp).
+Having issues? Refer first to the [Troubleshooting guide](/azure/app-service/configure-language-python.md#troubleshooting); otherwise, [let us know](https://aka.ms/PythonAppServiceQuickstartFeedback).
## Clean up resources
The `--no-wait` argument allows the command to return before the operation is co
-Having issues? [Let us know](https://aka.ms/FlaskCLIQuickstartHelp).
+Having issues? [Let us know](https://aka.ms/PythonAppServiceQuickstartFeedback).
## Next steps
automation Automation Create Alert Triggered Runbook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-create-alert-triggered-runbook.md
Ensure your VM is running. Navigate to the runbook **Stop-AzureVmInResponsetoVMA
:::image type="content" source="./media/automation-create-alert-triggered-runbook/job-result-portal.png" alt-text="Showing output from job.":::
+## Common Azure VM management operations
+
+Azure Automation provides scripts for common Azure VM management operations, such as restarting, stopping, deleting, and scaling a VM up or down, in the Runbook gallery. The scripts are also available in the Azure Automation [GitHub repository](https://github.com/azureautomation). You can use these scripts as described in the preceding steps; a minimal runbook sketch follows the table.
+
+|**Azure VM management operations** | **Details**|
+|---|---|
+[Stop-Azure-VM-On-Alert](https://github.com/azureautomation/Stop-Azure-VM-On-Alert) | This runbook will stop an Azure Resource Manager VM in response to an Azure alert trigger. </br></br> Input is alert data with information needed to identify which VM to stop.</br></br> The runbook must be called from an Azure alert via a webhook. </br></br> Latest version of Az module should be added to the automation account. </br></br> Managed Identity should be enabled and contributor access to the automation account should be given.
+[Restart-Azure-VM-On-Alert](https://github.com/azureautomation/Restart-Azure-VM-On-Alert) | This runbook will restart an Azure Resource Manager VM in response to an Azure alert trigger. </br></br> Input is alert data with information needed to identify which VM to restart.</br></br> The runbook must be called from an Azure alert via a webhook. </br></br> Latest version of Az module should be added to the automation account. </br></br> Managed Identity should be enabled and contributor access to the automation account should be given.
+[Delete-Azure-VM-On-Alert](https://github.com/azureautomation/Delete-Azure-VM-On-Alert) | This runbook will delete an Azure Resource Manager VM in response to an Azure alert trigger. </br></br> Input is alert data with information needed to identify which VM to delete.</br></br> The runbook must be called from an Azure alert via a webhook. </br></br> Latest version of Az module should be added to the automation account. </br></br> Managed Identity should be enabled and contributor access to the automation account should be given.
+[ScaleDown-Azure-VM-On-Alert](https://github.com/azureautomation/ScaleDown-Azure-VM-On-Alert) | This runbook will scale down an Azure Resource Manager VM in response to an Azure alert trigger. </br></br> Input is alert data with information needed to identify which VM to scale down.</br></br> The runbook must be called from an Azure alert via a webhook. </br></br> Latest version of Az module should be added to the automation account. </br></br> Managed Identity should be enabled and contributor access to the automation account should be given.
+[ScaleUp-Azure-VM-On-Alert](https://github.com/azureautomation/ScaleUp-Azure-VM-On-Alert) | This runbook will scale up an Azure Resource Manager VM in response to an Azure alert trigger. </br></br> Input is alert data with information needed to identify which VM to scale up.</br></br> The runbook must be called from an Azure alert via a webhook. </br></br> Latest version of Az module should be added to the automation account. </br></br> Managed Identity should be enabled and contributor access to the automation account should be given.
+ ## Next steps
-* To discover different ways to start a runbook, see [Start a runbook](./start-runbooks.md).
-* To create an activity log alert, see [Create activity log alerts](../azure-monitor/alerts/activity-log-alerts.md).
-* To learn how to create a near real-time alert, see [Create an alert rule in the Azure portal](../azure-monitor/alerts/alerts-metric.md?toc=/azure/azure-monitor/toc.json).
+* Discover different ways to start a runbook in [Start a runbook](./start-runbooks.md).
+* Create an activity log alert by following [Create activity log alerts](../azure-monitor/alerts/activity-log-alerts.md).
+* Learn how to create a near real-time alert in [Create an alert rule in the Azure portal](../azure-monitor/alerts/alerts-metric.md?toc=/azure/azure-monitor/toc.json).
azure-arc Create Complete Managed Instance Directly Connected https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-complete-managed-instance-directly-connected.md
To quickly create a Kubernetes cluster, use Azure Kubernetes Services (AKS).
1. Create a resource group, or specify an existing resource group. 1. Specify a cluster name 1. Specify a region
- 1. Under **Availability zones**, select **None**.
+ 1. Under **Availability zones**, remove all selected zones. You should not specify any zones.
1. Verify the Kubernetes version. For minimum supported version, see [Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md). 1. Under **Node size**, select a node size for your cluster based on the [Sizing guidance](sizing-guidance.md). 1. For **Scale method**, select **Manual**.
azure-arc Deploy Active Directory Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/deploy-active-directory-sql-managed-instance.md
To support Active Directory authentication on SQL Managed Instance, new spec fie
Prepare the following yaml specification to deploy a SQL Managed Instance. The fields described above should be specified in the spec. ```yaml
+apiVersion: v1
+data:
+ password: <your base64 encoded password>
+ username: <your base64 encoded username>
+kind: Secret
+metadata:
+ name: my-login-secret
+type: Opaque
+---
apiVersion: sql.arcdata.microsoft.com/v2
kind: SqlManagedInstance
metadata:
spec:
keytabSecret: <Keytab secret name> primary:
- type: NodePort
+ type: LoadBalancer
dnsName: <Endpoint DNS name> port: <Endpoint port number> storage:
azure-arc Tutorial Akv Secrets Provider https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/tutorial-akv-secrets-provider.md
This output should not include AKV secrets provider. If you don't have any other
## Reconciliation and Troubleshooting Azure Key Vault secrets provider extension is self-healing. All extension components that are deployed on the cluster at the time of extension installation are reconciled to their original state in case somebody tries to intentionally or unintentionally change or delete them. The only exception is CRDs: if the CRDs are deleted, they are not reconciled. You can bring them back by using the `az k8s-extension create` command again and providing the existing extension instance name.
-Some common issues and troubleshooting steps for Azure Key Vault secrets provider are captured in the open source documentation [here](https://azure.github.io/secrets-store-csi-driver-provider-azure/troubleshooting/) for your reference.
+Some common issues and troubleshooting steps for Azure Key Vault secrets provider are captured in the open source documentation [here](https://azure.github.io/secrets-store-csi-driver-provider-azure/docs/troubleshooting/) for your reference.
Additional troubleshooting steps that are specific to the Secrets Store CSI Driver Interface can be referenced [here](https://secrets-store-csi-driver.sigs.k8s.io/troubleshooting.html).
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
For usage details, see the following documents:
* [Migrate to Flux v2 Helm from Flux v1 Helm](https://fluxcd.io/docs/migration/helm-operator-migration/) * [Flux Helm controller](https://fluxcd.io/docs/components/helm/)
+### Use the GitRepository source for Helm charts
+
+If your Helm charts are stored in the `GitRepository` source that you configure as part of the `fluxConfigurations` resource, you can add an annotation to your HelmRelease yaml to indicate that the configured source should be used as the source of the Helm charts. The annotation is `clusterconfig.azure.com/use-managed-source: "true"`, and here is a usage example:
+
+```yaml
+apiVersion: helm.toolkit.fluxcd.io/v2beta1
+kind: HelmRelease
+metadata:
+ name: somename
+ namespace: somenamespace
+ annotations:
+ clusterconfig.azure.com/use-managed-source: "true"
+spec:
+ ...
+```
+
+By using this annotation, the HelmRelease that is deployed will be patched with a reference to the configured source. Note that only the GitRepository source is currently supported for this.
+ ## Migrate from Flux v1 If you've been using Flux v1 in Azure Arc-enabled Kubernetes or AKS clusters and want to migrate to using Flux v2 in the same clusters, you first need to delete the Flux v1 `sourceControlConfigurations` from the clusters. The `microsoft.flux` cluster extension won't be installed if there are `sourceControlConfigurations` resources installed in the cluster.
azure-functions Functions Bindings Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-service-bus.md
When using service bus extension version 5.x and higher, the following global co
|||| |prefetchCount|0|Gets or sets the number of messages that the message receiver can simultaneously request.| |autoCompleteMessages|true|Determines whether or not to automatically complete messages after successful execution of the function and should be used in place of the `autoComplete` configuration setting.|
-|maxAutoLockRenewalDuration|00:05:00|The maximum duration within which the message lock will be renewed automatically. This only applies for functions that receive a batch of messages.|
-|maxConcurrentCalls|16|The maximum number of concurrent calls to the callback that the message pump should initiate per scaled instance. By default, the Functions runtime processes multiple messages concurrently.|
-|maxConcurrentSessions|8|The maximum number of sessions that can be handled concurrently per scaled instance.|
-|maxMessages|1000|The maximum number of messages that will be passed to each function call. This only applies for functions that receive a batch of messages.|
-|sessionIdleTimeout|n/a|The maximum amount of time to wait for a message to be received for the currently active session. After this time has elapsed, the processor will close the session and attempt to process another session.|
+|maxAutoLockRenewalDuration|00:05:00|The maximum duration within which the message lock will be renewed automatically. This setting only applies for functions that receive a single message at a time.|
+|maxConcurrentCalls|16|The maximum number of concurrent calls to the callback that the message pump should initiate per scaled instance. By default, the Functions runtime processes multiple messages concurrently. This setting only applies for functions that receive a single message at a time.|
+|maxConcurrentSessions|8|The maximum number of sessions that can be handled concurrently per scaled instance. This setting only applies for functions that receive a single message at a time.|
+|maxMessages|1000|The maximum number of messages that will be passed to each function call. This setting only applies for functions that receive a batch of messages.|
+|sessionIdleTimeout|n/a|The maximum amount of time to wait for a message to be received for the currently active session. After this time has elapsed, the processor will close the session and attempt to process another session. This setting only applies for functions that receive a single message at a time.|
|enableCrossEntityTransactions|false|Whether or not to enable transactions that span multiple entities on a Service Bus namespace.| ### Retry settings
azure-functions Functions Develop Vs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-develop-vs.md
Visual Studio doesn't automatically upload the settings in local.settings.json w
Your code can also read the function app settings values as environment variables. For more information, see [Environment variables](functions-dotnet-class-library.md#environment-variables).
-## Configure your build output settings
-
-When building an Azure Functions project, the build tools optimize the output so that only one copy of any assemblies that are shared with the functions runtime are preserved. The result is an optimized build that saves as much space as possible. However, when you move to a more recent version of any of your project assemblies, the build tools might not know that these assemblies must be preserved. To make sure that these assemblies are preserved during the optimization process, you can specify them using `FunctionsPreservedDependencies` elements in the project (.csproj) file:
-
-```xml
- <ItemGroup>
- <FunctionsPreservedDependencies Include="Microsoft.AspNetCore.Http.dll" />
- <FunctionsPreservedDependencies Include="Microsoft.AspNetCore.Http.Extensions.dll" />
- <FunctionsPreservedDependencies Include="Microsoft.AspNetCore.Http.Features.dll" />
- </ItemGroup>
-```
- ## Configure the project for local development The Functions runtime uses an Azure Storage account internally. For all trigger types other than HTTP and webhooks, set the `Values.AzureWebJobsStorage` key to a valid Azure Storage account connection string. Your function app can also use the [Azure Storage Emulator](../storage/common/storage-use-emulator.md) for the `AzureWebJobsStorage` connection setting that's required by the project. To use the emulator, set the value of `AzureWebJobsStorage` to `UseDevelopmentStorage=true`. Change this setting to an actual storage account connection string before deployment.
azure-functions Functions Identity Based Connections Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-identity-based-connections-tutorial.md
Last updated 10/20/2021
# Tutorial: Create a function app that connects to Azure services using identities instead of secrets
-This tutorial shows you how to configure a function app using Azure Active Directory identities instead of secrets or connection strings, where possible. Using identities helps you avoid accidentally leaking sensitive secrets and can provide better visibility into how data is accessed. To learn more about identity-based connections, see [configure an identity-based connection.](functions-reference.md#configure-an-identity-based-connection).
+This tutorial shows you how to configure a function app using Azure Active Directory identities instead of secrets or connection strings, where possible. Using identities helps you avoid accidentally leaking sensitive secrets and can provide better visibility into how data is accessed. To learn more about identity-based connections, see [configure an identity-based connection](functions-reference.md#configure-an-identity-based-connection).
While the procedures shown work generally for all languages, this tutorial currently supports C# class library functions on Windows specifically.
In order to use Azure Key Vault, your app will need to have an identity that can
1. Select **Save**. It might take a minute or two for the role to show up when you refresh the role assignments list for the identity.
-The identity will now be able to read secrets stored in the vault. Later in the tutorial, you will add additional role assignments for different purposes.
+The identity will now be able to read secrets stored in the key vault. Later in the tutorial, you will add additional role assignments for different purposes.
### Generate a template for creating a function app
Next you will update your function app to use its system-assigned identity when
| Option | Suggested value | Description | | | - | -- | | **Name** | AzureWebJobsStorage__accountName | Update the name from **AzureWebJobsStorage** to the exact name `AzureWebJobsStorage__accountName`. This setting tells the host to use the identity instead of looking for a stored secret. The new setting uses a double underscore (`__`), which is a special character in application settings. |
- | **Value** | Your account name | Update the name from the connection string to just your **AccountName**. |
+ | **Value** | Your account name | Update the value from the connection string to just your **StorageAccountName**. |
This configuration will let the system know that it should use an identity to connect to the resource.
azure-functions Functions Infrastructure As Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-infrastructure-as-code.md
On Linux, the function app must have its `kind` set to `functionapp,linux`, and
} ```
-The [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](functions-app-settings.md#website_contentazurefileconnectionstring) and [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare) settings aren't supported on Linux.
+The [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](functions-app-settings.md#website_contentazurefileconnectionstring) and [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare) settings aren't supported on a Linux Consumption plan.
<a name="premium"></a> ## Deploy on Premium plan
azure-functions Functions Openapi Definition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-openapi-definition.md
To create an API Management instance linked to your function app:
![Create new API Management service](media/functions-openapi-definitions/new-apim-service-openapi.png)
-1. Choose **Create** to create the API Management instance, which may take several minutes.
+1. Choose **Export** to create the API Management instance, which may take several minutes.
1. After Azure creates the instance, it enables the **Enable Application Insights** option on the page. Select it to send logs to the same place as the function application.
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-reference-python.md
To learn more about logging, see [Monitor Azure Functions](functions-monitoring.
By default, the Functions runtime collects logs and other telemetry data generated by your functions. This telemetry ends up as traces in Application Insights. Request and dependency telemetry for certain Azure services are also collected by default by [triggers and bindings](functions-triggers-bindings.md#supported-bindings). To collect custom request and custom dependency telemetry outside of bindings, you can use the [OpenCensus Python Extensions](https://github.com/census-ecosystem/opencensus-python-extensions-azure), which sends custom telemetry data to your Application Insights instance. You can find a list of supported extensions at the [OpenCensus repository](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib). >[!NOTE]
->To use the OpenCensus Python extensions, you need to enable [Python worker extensions](#python-worker-extensions) in your function app by setting `PYTHON_ENABLE_WORKER_EXTENSIONS` to `1` in your [application settings](functions-how-to-use-azure-function-app-settings.md#settings).
+>To use the OpenCensus Python extensions, you need to enable [Python worker extensions](#python-worker-extensions) in your function app by setting `PYTHON_ENABLE_WORKER_EXTENSIONS` to `1`. You also need to switch to using the Application Insights connection string by adding the [`APPLICATIONINSIGHTS_CONNECTION_STRING`](functions-app-settings.md#applicationinsights_connection_string) setting to your [application settings](functions-how-to-use-azure-function-app-settings.md#settings), if it's not already there.
```
azure-functions Functions Run Local https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-run-local.md
The following considerations apply to project initialization:
+ When you don't provide a project name, the current folder is initialized.
-+ If you plan to publish your project to a custom Linux container, use the `--dockerfile` option to make sure that a Dockerfile is generated for your project. To learn more, see [Create a function on Linux using a custom image](functions-create-function-linux-custom-image.md).
++ If you plan to publish your project to a custom Linux container, use the `--docker` option to make sure that a Dockerfile is generated for your project. To learn more, see [Create a function on Linux using a custom image](functions-create-function-linux-custom-image.md). Certain languages may have additional considerations:
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Active Directory B2C](https://azure.microsoft.com/services/active-directory-b2c/) | &#x2705; | &#x2705; | | [Azure Active Directory Domain Services](https://azure.microsoft.com/services/active-directory-ds/) | &#x2705; | &#x2705; | | [Azure Active Directory Provisioning Service](../../active-directory/app-provisioning/user-provisioning.md)| &#x2705; | &#x2705; |
+| [Azure AD Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; |
| [Azure Advisor](https://azure.microsoft.com/services/advisor/) | &#x2705; | &#x2705; | | [Azure Analysis Services](https://azure.microsoft.com/services/analysis-services/) | &#x2705; | &#x2705; | | [Azure Arc-enabled Servers](../../azure-arc/servers/overview.md) | &#x2705; | &#x2705; | | [Azure Archive Storage](https://azure.microsoft.com/services/storage/archive/) | &#x2705; | &#x2705; |
-| [Azure Backup](https://azure.microsoft.com/services/backup/) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** |
+| [Azure Backup](https://azure.microsoft.com/services/backup/) | &#x2705; | &#x2705; |
| [Azure Bastion](https://azure.microsoft.com/services/azure-bastion/) | &#x2705; | &#x2705; | | [Azure Blueprints](https://azure.microsoft.com/services/blueprints/) | &#x2705; | &#x2705; | | [Azure Bot Service](/azure/bot-service/) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Database for MariaDB](https://azure.microsoft.com/services/mariadb/) | &#x2705; | &#x2705; | | [Azure Database for MySQL](https://azure.microsoft.com/services/mysql/) | &#x2705; | &#x2705; | | [Azure Database for PostgreSQL](https://azure.microsoft.com/services/postgresql/) | &#x2705; | &#x2705; |
-| [Azure Database Migration Service](https://azure.microsoft.com/services/database-migration/) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** |
+| [Azure Database Migration Service](https://azure.microsoft.com/services/database-migration/) | &#x2705; | &#x2705; |
| [Azure Databricks](https://azure.microsoft.com/services/databricks/) **&ast;&ast;** | &#x2705; | &#x2705; | | [Azure DDoS Protection](https://azure.microsoft.com/services/ddos-protection/) | &#x2705; | &#x2705; | | [Azure Dedicated HSM](https://azure.microsoft.com/services/azure-dedicated-hsm/) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Functions](https://azure.microsoft.com/services/functions/) | &#x2705; | &#x2705; | | [Azure Health Bot](/healthbot/) | &#x2705; | &#x2705; | | [Azure HDInsight](https://azure.microsoft.com/services/hdinsight/) | &#x2705; | &#x2705; |
-| [Azure Healthcare APIs](https://azure.microsoft.com/services/healthcare-apis/) (formerly Azure API for FHIR) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** |
+| [Azure Healthcare APIs](https://azure.microsoft.com/services/healthcare-apis/) (formerly Azure API for FHIR) | &#x2705; | &#x2705; |
| [Azure HPC Cache](https://azure.microsoft.com/services/hpc-cache/) | &#x2705; | &#x2705; | | [Azure Immersive Reader](https://azure.microsoft.com/services/immersive-reader/) | &#x2705; | &#x2705; | | [Azure Information Protection](https://azure.microsoft.com/services/information-protection/) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Marketplace portal](https://azuremarketplace.microsoft.com/) | &#x2705; | &#x2705; | | [Azure Maps](https://azure.microsoft.com/services/azure-maps/) | &#x2705; | &#x2705; | | [Azure Media Services](https://azure.microsoft.com/services/media-services/) | &#x2705; | &#x2705; |
-| [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** |
+| [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | &#x2705; | &#x2705; |
| [Azure Monitor](https://azure.microsoft.com/services/monitor/) (incl. [Application Insights](../../azure-monitor/app/app-insights-overview.md), [Log Analytics](../../azure-monitor/logs/data-platform-logs.md), and [Application Change Analysis](../../azure-monitor/app/change-analysis.md)) | &#x2705; | &#x2705; | | [Azure NetApp Files](https://azure.microsoft.com/services/netapp/) | &#x2705; | &#x2705; | | [Azure Open Datasets](https://azure.microsoft.com/services/open-datasets/) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Service Fabric](https://azure.microsoft.com/services/service-fabric/) | &#x2705; | &#x2705; | | [Azure Service Health](https://azure.microsoft.com/features/service-health/) | &#x2705; | &#x2705; | | [Azure Service Manager (RDFE)](/previous-versions/azure/ee460799(v=azure.100)) | &#x2705; | &#x2705; |
-| [Azure SignalR Service](https://azure.microsoft.com/services/signalr-service/) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** |
+| [Azure SignalR Service](https://azure.microsoft.com/services/signalr-service/) | &#x2705; | &#x2705; |
| [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/) | &#x2705; | &#x2705; | | [Azure Sphere](https://azure.microsoft.com/services/azure-sphere/) | &#x2705; | &#x2705; | | [Azure SQL Database](https://azure.microsoft.com/services/sql-database/) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Cloud Shell](https://azure.microsoft.com/features/cloud-shell/) | &#x2705; | &#x2705; | | [Cognitive | [Cognitive
-| [Cognitive Services Containers](../../cognitive-services/cognitive-services-container-support.md) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** |
+| [Cognitive Services Containers](../../cognitive-services/cognitive-services-container-support.md) | &#x2705; | &#x2705; |
| [Cognitive | [Cognitive | [Cognitive
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Customer Lockbox](../../security/fundamentals/customer-lockbox-overview.md) | &#x2705; | &#x2705; | | [Data Factory](https://azure.microsoft.com/services/data-factory/) | &#x2705; | &#x2705; | | [Dataverse](/powerapps/maker/common-data-service/data-platform-intro) (incl. [Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/export-to-data-lake)) | &#x2705; | &#x2705; |
-| [Dynamics 365 Chat (Omnichannel Engagement Hub)](/dynamics365/omnichannel/introduction-omnichannel) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** |
+| [Dynamics 365 Chat (Omnichannel Engagement Hub)](/dynamics365/omnichannel/introduction-omnichannel) | &#x2705; | &#x2705; |
| [Dynamics 365 Commerce](https://dynamics.microsoft.com/commerce/overview/)| &#x2705; | &#x2705; | | [Dynamics 365 Customer Service](https://dynamics.microsoft.com/customer-service/overview/)| &#x2705; | &#x2705; | | [Dynamics 365 Field Service](https://dynamics.microsoft.com/field-service/overview/)| &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Import/Export](https://azure.microsoft.com/services/storage/import-export/) | &#x2705; | &#x2705; | | [Key Vault](https://azure.microsoft.com/services/key-vault/) | &#x2705; | &#x2705; | | [Load Balancer](https://azure.microsoft.com/services/load-balancer/) | &#x2705; | &#x2705; |
-| [Microsoft 365 Defender](/microsoft-365/security/defender/) (formerly Microsoft Threat Protection) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** |
+| [Microsoft 365 Defender](/microsoft-365/security/defender/) (formerly Microsoft Threat Protection) | &#x2705; | &#x2705; |
| [Microsoft Azure Attestation](https://azure.microsoft.com/services/azure-attestation/)| &#x2705; | &#x2705; | | [Microsoft Azure Marketplace portal](https://azuremarketplace.microsoft.com/marketplace/)| &#x2705; | &#x2705; | | [Microsoft Azure portal](https://azure.microsoft.com/features/azure-portal/)| &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Microsoft Sentinel](https://azure.microsoft.com/services/azure-sentinel/) (incl. [UEBA](../../sentinel/identify-threats-with-entity-behavior-analytics.md#what-is-user-and-entity-behavior-analytics-ueba)) | &#x2705; | &#x2705; | | [Microsoft Stream](/stream/overview) | &#x2705; | &#x2705; | | [Microsoft Threat Experts](/microsoft-365/security/defender-endpoint/microsoft-threat-experts) | &#x2705; | &#x2705; |
-| [Multi-factor authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; |
| [Network Watcher](https://azure.microsoft.com/services/network-watcher/) (incl. [Traffic Analytics](../../network-watcher/traffic-analytics.md)) | &#x2705; | &#x2705; | | **Service** | **FedRAMP High** | **DoD IL2** | | [Notification Hubs](https://azure.microsoft.com/services/notification-hubs/) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Active Directory (Free and Basic)](../../active-directory/fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) | ✓ | ✓ | ✓ | ✓ | ✓ |
| [Azure Active Directory (Premium P1 + P2)](../../active-directory/fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) | ✓ | ✓ | ✓ | ✓ | |
| [Azure Active Directory Domain Services](https://azure.microsoft.com/services/active-directory-ds/) | ✓ | ✓ | ✓ | ✓ | |
+| [Azure AD Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | ✓ | ✓ | ✓ | ✓ | ✓ |
| [Azure Advisor](https://azure.microsoft.com/services/advisor/) | ✓ | ✓ | ✓ | ✓ | ✓ |
| [Azure Analysis Services](https://azure.microsoft.com/services/analysis-services/) | ✓ | ✓ | ✓ | ✓ | |
| [Azure Archive Storage](https://azure.microsoft.com/services/storage/archive/) | ✓ | ✓ | ✓ | ✓ | |
| [Azure Backup](https://azure.microsoft.com/services/backup/) | ✓ | ✓ | ✓ | ✓ | ✓ |
| [Azure Bastion](https://azure.microsoft.com/services/azure-bastion/) | ✓ | ✓ | ✓ | ✓ | |
| [Azure Blueprints](https://azure.microsoft.com/services/blueprints/) | ✓ | ✓ | ✓ | ✓ | |
-| [Azure Bot Service](/azure/bot-service/) | ✓ | ✓ | ✓ | ✓ | |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Azure Bot Service](/azure/bot-service/) | ✓ | ✓ | ✓ | ✓ | |
| [Azure Cache for Redis](https://azure.microsoft.com/services/cache/) | ✓ | ✓ | ✓ | ✓ | ✓ |
| [Azure Cloud Services](https://azure.microsoft.com/services/cloud-services/) | ✓ | ✓ | ✓ | ✓ | ✓ |
| [Azure Cognitive Search](https://azure.microsoft.com/services/search/) (formerly Azure Search) | ✓ | ✓ | ✓ | ✓ | |
| [Azure Database for PostgreSQL](https://azure.microsoft.com/services/postgresql/) | ✓ | ✓ | ✓ | ✓ | |
| [Azure Database Migration Service](https://azure.microsoft.com/services/database-migration/) | ✓ | ✓ | ✓ | ✓ | |
| [Azure Databricks](https://azure.microsoft.com/services/databricks/) | ✓ | ✓ | ✓ | ✓ | |
-| [Azure DDoS Protection](https://azure.microsoft.com/services/ddos-protection/) | ✓ | ✓ | ✓ | ✓ | |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Azure DDoS Protection](https://azure.microsoft.com/services/ddos-protection/) | ✓ | ✓ | ✓ | ✓ | |
| [Azure Dedicated HSM](https://azure.microsoft.com/services/azure-dedicated-hsm/) | ✓ | ✓ | ✓ | ✓ | |
| [Azure DevTest Labs](https://azure.microsoft.com/services/devtest-lab/) | ✓ | ✓ | ✓ | ✓ | |
| [Azure DNS](https://azure.microsoft.com/services/dns/) | ✓ | ✓ | ✓ | ✓ | ✓ |
| [Azure HPC Cache](https://azure.microsoft.com/services/hpc-cache/) | ✓ | ✓ | ✓ | ✓ | |
| [Azure Information Protection](https://azure.microsoft.com/services/information-protection/) **&ast;&ast;** | ✓ | ✓ | ✓ | ✓ | ✓ |
| [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) | ✓ | ✓ | ✓ | ✓ | |
-| [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) | ✓ | ✓ | ✓ | ✓ | ✓ |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) | ✓ | ✓ | ✓ | ✓ | ✓ |
| [Azure Lab Services](https://azure.microsoft.com/services/lab-services/) | ✓ | ✓ | ✓ | ✓ | |
| [Azure Lighthouse](https://azure.microsoft.com/services/azure-lighthouse/) | ✓ | ✓ | ✓ | ✓ | |
| [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/) | ✓ | ✓ | ✓ | ✓ | ✓ |
| [Azure Policy](https://azure.microsoft.com/services/azure-policy/) | ✓ | ✓ | ✓ | ✓ | |
| [Azure Policy's guest configuration](../../governance/policy/concepts/guest-configuration.md) | ✓ | ✓ | ✓ | ✓ | |
| [Azure Public IP](../../virtual-network/ip-services/public-ip-addresses.md) | ✓ | ✓ | ✓ | ✓ | |
-| [Azure Resource Graph](../../governance/resource-graph/overview.md) | ✓ | ✓ | ✓ | ✓ | |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Azure Resource Graph](../../governance/resource-graph/overview.md) | ✓ | ✓ | ✓ | ✓ | |
| [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/) | ✓ | ✓ | ✓ | ✓ | ✓ |
| [Azure Scheduler](../../scheduler/scheduler-intro.md) (replaced by [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/)) | ✓ | ✓ | ✓ | ✓ | |
| [Azure Service Fabric](https://azure.microsoft.com/services/service-fabric/) | ✓ | ✓ | ✓ | ✓ | ✓ |
| [Azure Stream Analytics](https://azure.microsoft.com/services/stream-analytics/) | ✓ | ✓ | ✓ | ✓ | |
| [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/) | ✓ | ✓ | ✓ | ✓ | |
| [Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/export-to-data-lake) | ✓ | ✓ | ✓ | ✓ | |
-| [Azure Virtual Desktop](https://azure.microsoft.com/services/virtual-desktop/) (formerly Windows Virtual Desktop) | ✓ | ✓ | ✓ | ✓ | |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Azure Virtual Desktop](https://azure.microsoft.com/services/virtual-desktop/) (formerly Windows Virtual Desktop) | ✓ | ✓ | ✓ | ✓ | |
| [Azure Web Application Firewall](https://azure.microsoft.com/services/web-application-firewall/) | ✓ | ✓ | ✓ | ✓ | |
| [Batch](https://azure.microsoft.com/services/batch/) | ✓ | ✓ | ✓ | ✓ | ✓ |
| [Cloud Shell](https://azure.microsoft.com/features/cloud-shell/) | ✓ | ✓ | ✓ | ✓ | |
| [Cognitive | [Cognitive | [Cognitive
-| [Container Instances](https://azure.microsoft.com/services/container-instances/) | ✓ | ✓ | ✓ | ✓ | |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Container Instances](https://azure.microsoft.com/services/container-instances/) | ✓ | ✓ | ✓ | ✓ | |
| [Container Registry](https://azure.microsoft.com/services/container-registry/) | ✓ | ✓ | ✓ | ✓ | ✓ |
| [Content Delivery Network](https://azure.microsoft.com/services/cdn/) | ✓ | ✓ | ✓ | ✓ | |
| [Customer Lockbox](../../security/fundamentals/customer-lockbox-overview.md) | ✓ | ✓ | ✓ | ✓ | |
| [Dynamics 365 Project Service Automation](/dynamics365/project-operations/psa/overview) | ✓ | ✓ | ✓ | ✓ | |
| [Dynamics 365 Sales](https://dynamics.microsoft.com/sales/overview/) | ✓ | ✓ | ✓ | ✓ | |
| [Dynamics 365 Supply Chain Management](https://dynamics.microsoft.com/supply-chain-management/overview/) | ✓ | ✓ | | | |
-| [Event Grid](https://azure.microsoft.com/services/event-grid/) | ✓ | ✓ | ✓ | ✓ | |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Event Grid](https://azure.microsoft.com/services/event-grid/) | ✓ | ✓ | ✓ | ✓ | |
| [Event Hubs](https://azure.microsoft.com/services/event-hubs/) | ✓ | ✓ | ✓ | ✓ | ✓ |
| [GitHub AE](https://docs.github.com/en/github-ae@latest/admin/overview/about-github-ae) | ✓ | ✓ | ✓ | | |
| [Import/Export](https://azure.microsoft.com/services/storage/import-export/) | ✓ | ✓ | ✓ | ✓ | |
| [Microsoft Defender for Identity](/defender-for-identity/what-is) (formerly Azure Advanced Threat Protection) | ✓ | ✓ | ✓ | ✓ | |
| [Microsoft Defender for IoT](https://azure.microsoft.com/services/azure-defender-for-iot/) (formerly Azure Security for IoT) | ✓ | ✓ | ✓ | ✓ | |
| [Microsoft Graph](/graph/overview) | ✓ | ✓ | ✓ | ✓ | ✓ |
-| [Microsoft Intune](/mem/intune/fundamentals/) | ✓ | ✓ | ✓ | ✓ | |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Microsoft Intune](/mem/intune/fundamentals/) | ✓ | ✓ | ✓ | ✓ | |
| [Microsoft Sentinel](https://azure.microsoft.com/services/azure-sentinel/) (formerly Azure Sentinel) | ✓ | ✓ | ✓ | ✓ | |
| [Microsoft Stream](/stream/overview) | ✓ | ✓ | ✓ | ✓ | |
-| [Multi-factor authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | ✓ | ✓ | ✓ | ✓ | ✓ |
| [Network Watcher](https://azure.microsoft.com/services/network-watcher/) | ✓ | ✓ | ✓ | ✓ | ✓ |
| [Network Watcher Traffic Analytics](../../network-watcher/traffic-analytics.md) | ✓ | ✓ | ✓ | ✓ | |
| [Notification Hubs](https://azure.microsoft.com/services/notification-hubs/) | ✓ | ✓ | ✓ | ✓ | |
azure-government Documentation Government Plan Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-plan-security.md
The isolation of the Azure Government environment is achieved through a series o
- Physically isolated hardware
- Physical barriers to the hardware using biometric devices and cameras
- Conditional access (Azure RBAC, workflow)
-- Specific credentials and multifactor authentication for logical access
+- Specific credentials and multi-factor authentication for logical access
- Infrastructure for Azure Government is located within the United States

Within the Azure Government network, internal network system components are isolated from other system components through implementation of separate subnets and access control policies on management interfaces. Azure Government doesn't directly peer with the public internet or with the Microsoft corporate network. Azure Government directly peers to the commercial Microsoft Azure network, which has routing and transport capabilities to the Internet and the Microsoft Corporate network. Azure Government limits its exposed surface area by applying extra protections and communications capabilities of our commercial Azure network. In addition, Azure Government ExpressRoute (ER) uses peering with our customer's networks over non-Internet private circuits to route ER customer "DMZ" networks using specific Border Gateway Protocol (BGP)/AS peering as a trust boundary for application routing and associated policy enforcement.
Microsoft takes strong measures to protect your data from inappropriate access o
Microsoft engineers can be granted access to customer data using temporary credentials via **Just-in-Time (JIT)** access. There must be an incident logged in the Azure Incident Management system that describes the reason for access, approval record, what data was accessed, etc. This approach ensures that there's appropriate oversight for all access to customer data and that all JIT actions (consent and access) are logged for audit. Evidence that procedures have been established for granting temporary access for Azure personnel to customer data and applications upon appropriate approval for customer support or incident handling purposes is available from the Azure [SOC 2 Type 2 attestation report](/azure/compliance/offerings/offering-soc-2) produced by an independent third-party auditing firm.
-JIT access works with multifactor authentication that requires Microsoft engineers to use a smartcard to confirm their identity. All access to production systems is performed using Secure Admin Workstations (SAWs) that are consistent with published guidance on [securing privileged access](/security/compass/overview). Use of SAWs for access to production systems is required by Microsoft policy and compliance with this policy is closely monitored. These workstations use a fixed image with all software fully managed; only select activities are allowed, and users cannot accidentally circumvent the SAW design since they don't have admin privileges on these machines. Access is permitted only with a smartcard, and access to each SAW is limited to a specific set of users.
+JIT access works with multi-factor authentication that requires Microsoft engineers to use a smartcard to confirm their identity. All access to production systems is performed using Secure Admin Workstations (SAWs) that are consistent with published guidance on [securing privileged access](/security/compass/overview). Use of SAWs for access to production systems is required by Microsoft policy and compliance with this policy is closely monitored. These workstations use a fixed image with all software fully managed; only select activities are allowed, and users cannot accidentally circumvent the SAW design since they don't have admin privileges on these machines. Access is permitted only with a smartcard, and access to each SAW is limited to a specific set of users.
### Customer Lockbox
azure-maps Render Coverage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/render-coverage.md
Title: Render coverage | Microsoft Azure Maps
description: Learn whether Azure Maps renders various regions with detailed or simplified data. See the level it uses for raster-tile and vector-tile maps in those regions. Previously updated : 03/22/2019 Last updated : 01/14/2022
Azure Maps uses both raster tiles and vector tiles to create maps. At the lowest
However, Maps doesn't have the same level of information and accuracy for all regions. The following tables detail the level of information you can render for each region.
-## Legend
+### Legend
-| Symbol | Meaning |
-|--|--|
-| ✓ | Region is represented with detailed data. |
-| Ø | Region is represented with simplified data. |
--
-## Africa
--
-| Country/Region | Raster Tiles Unified | Vector Tiles Unified |
-| --- | :---: | :---: |
-| Algeria | ✓ | ✓ |
-| Angola | ✓ | ✓ |
-| Benin | ✓ | ✓ |
-| Botswana | ✓ | ✓ |
-| Burkina Faso | ✓ | ✓ |
-| Burundi | ✓ | ✓ |
-| Cabo Verde | ✓ | ✓ |
-| Cameroon | ✓ | ✓ |
-| Central African Republic | ✓ | Ø |
-| Chad | ✓ | Ø |
-| Comoros | ✓ | Ø |
-| Democratic Republic of the Congo | ✓ | ✓ |
-| Côte d'Ivoire | ✓ | Ø |
-| Djibouti | ✓ | Ø |
-| Egypt | ✓ | ✓ |
-| Equatorial Guinea | ✓ | Ø |
-| Eritrea | ✓ | Ø |
-| Ethiopia | ✓ | Ø |
-| Gabon | ✓ | ✓ |
-| Gambia | ✓ | Ø |
-| Ghana | ✓ | ✓ |
-| Guinea | ✓ | Ø |
-| Guinea-Bissau | ✓ | Ø |
-| Kenya | ✓ | ✓ |
-| Lesotho | ✓ | ✓ |
-| Liberia | ✓ | Ø |
-| Libya | ✓ | Ø |
-| Madagascar | ✓ | Ø |
-| Malawi | ✓ | ✓ |
-| Mali | ✓ | ✓ |
-| Mauritania | ✓ | ✓ |
-| Mauritius | ✓ | ✓ |
-| Mayotte | ✓ | ✓ |
-| Morocco | ✓ | ✓ |
-| Mozambique | ✓ | ✓ |
-| Namibia | ✓ | ✓ |
-| Niger | ✓ | ✓ |
-| Nigeria | ✓ | ✓ |
-| Réunion | ✓ | ✓ |
-| Rwanda | ✓ | ✓ |
-| Saint Helena, Ascension and Tristan da Cunha | ✓ | Ø |
-| São Tomé and Príncipe | ✓ | Ø |
-| Senegal | ✓ | ✓ |
-| Sierra Leone | ✓ | ✓ |
-| Somalia | ✓ | ✓ |
-| South Africa | ✓ | ✓ |
-| South Sudan | ✓ | ✓ |
-| Sudan | ✓ | ✓ |
-| Swaziland | ✓ | ✓ |
-| United Republic of Tanzania | ✓ | ✓ |
-| Togo | ✓ | ✓ |
-| Tunisia | ✓ | ✓ |
-| Uganda | ✓ | ✓ |
-| Zambia | ✓ | ✓ |
-| Zimbabwe | ✓ | ✓ |
+| Symbol | Meaning |
+|--|-|
+| ✓ | Country is provided with detailed data. |
+| ◑ | Country is provided with simplified data. |
+| Country is missing | Country data is not provided. |
## Americas
-| Country/Region | Raster Tiles Unified | Vector Tiles Unified |
-| --- | :---: | :---: |
-| Anguilla | ✓ | ✓ |
-| Antigua and Barbuda | ✓ | ✓ |
-| Argentina | ✓ | ✓ |
-| Aruba | ✓ | ✓ |
-| Bahamas | ✓ | ✓ |
-| Barbados | ✓ | ✓ |
-| Belize | ✓ | ✓ |
-| Bermuda | ✓ | ✓ |
-| Plurinational State of Bolivia | ✓ | ✓ |
-| Bonaire, Sint Eustatius, and Saba | ✓ | ✓ |
-| Brazil | ✓ | ✓ |
-| Canada | ✓ | ✓ |
-| Cayman Islands | ✓ | ✓ |
-| Chile | ✓ | ✓ |
-| Colombia | ✓ | ✓ |
-| Costa Rica | ✓ | ✓ |
-| Cuba | ✓ | ✓ |
-| Curaçao | ✓ | ✓ |
-| Dominica | ✓ | ✓ |
-| Dominican Republic | ✓ | ✓ |
-| Ecuador | ✓ | ✓ |
-| Falkland Islands (Malvinas) | ✓ | ✓ |
-| French Guiana | ✓ | ✓ |
-| Greenland | ✓ | Ø |
-| Grenada | ✓ | ✓ |
-| Guadeloupe | ✓ | ✓ |
-| Guatemala | ✓ | ✓ |
-| Guyana | ✓ | ✓ |
-| Haiti | ✓ | ✓ |
-| Honduras | ✓ | ✓ |
-| Jamaica | ✓ | ✓ |
-| Martinique | ✓ | ✓ |
-| Mexico | ✓ | ✓ |
-| Montserrat | ✓ | ✓ |
-| Nicaragua | ✓ | ✓ |
-| Northern Mariana Islands | ✓ | ✓ |
-| Panama | ✓ | ✓ |
-| Paraguay | ✓ | ✓ |
-| Peru | ✓ | ✓ |
-| Puerto Rico | ✓ | ✓ |
-| Quebec (Canada) | ✓ | ✓ |
-| Saint Barthélemy | ✓ | ✓ |
-| Saint Kitts and Nevis | ✓ | ✓ |
-| Saint Lucia | ✓ | ✓ |
-| Saint Martin (French) | ✓ | ✓ |
-| Saint Pierre and Miquelon | ✓ | ✓ |
-| Saint Vincent and the Grenadines | ✓ | ✓ |
-| Sint Maarten (Dutch) | ✓ | ✓ |
-| South Georgia and the South Sandwich Islands | ✓ | ✓ |
-| Suriname | ✓ | ✓ |
-| Trinidad and Tobago | ✓ | ✓ |
-| Turks and Caicos Islands | ✓ | ✓ |
-| United States | ✓ | ✓ |
-| Uruguay | ✓ | ✓ |
-| Venezuela | ✓ | ✓ |
-| Virgin Islands, British | ✓ | ✓ |
-| Virgin Islands, U.S. | ✓ | ✓ |
-
-## Asia
-
-| Country/Region | Raster Tiles Unified | Vector Tiles Unified |
-| --- | :---: | :---: |
-| Afghanistan | | Ø |
-| Bahrain | ✓ | ✓ |
-| Bangladesh | | Ø |
-| Bhutan | | Ø |
-| British Indian Ocean Territory | | Ø |
-| Brunei | ✓ | ✓ |
-| Cambodia | | Ø |
-| China | | Ø |
-| Cocos (Keeling) Islands | | Ø |
-| Democratic People's Republic of Korea | | Ø |
-| Hong Kong SAR | ✓ | ✓ |
-| India | Ø | ✓ |
-| Indonesia | ✓ | ✓ |
-| Iran | | Ø |
-| Iraq | ✓ | ✓ |
-| Israel | | ✓ |
-| Japan | | Ø |
-| Jordan | ✓ | ✓ |
-| Kazakhstan | | ✓ |
-| Kuwait | ✓ | ✓ |
-| Kyrgyzstan | | Ø |
-| Lao People's Democratic Republic | | Ø |
-| Lebanon | ✓ | ✓ |
-| Macao SAR | ✓ | ✓ |
-| Malaysia | ✓ | ✓ |
-| Maldives | | Ø |
-| Mongolia | | Ø |
-| Myanmar | | Ø |
-| Nepal | | Ø |
-| Oman | ✓ | ✓ |
-| Pakistan | | Ø |
-| Philippines | ✓ | ✓ |
-| Qatar | ✓ | ✓ |
-| Republic of Korea | ✓ | Ø |
-| Saudi Arabia | ✓ | ✓ |
-| Senkaku Islands | | ✓ |
-| Singapore | ✓ | ✓ |
-| Sri Lanka | | Ø |
-| Syrian Arab Republic | | Ø |
-| Taiwan | ✓ | ✓ |
-| Tajikistan | | Ø |
-| Thailand | ✓ | ✓ |
-| Timor-Leste | | Ø |
-| Turkmenistan | | Ø |
-| United Arab Emirates | ✓ | ✓ |
-| United States Minor Outlying Islands | | Ø |
-| Uzbekistan | | Ø |
-| Vietnam | ✓ | ✓ |
-| Yemen | ✓ | ✓ |
-
-## Oceania
-
-| Country/Region | Raster Tiles Unified | Vector Tiles Unified |
-| --- | :---: | :---: |
-| American Samoa | | ✓ |
-| Australia | ✓ | ✓ |
-| Cook Islands | | Ø |
-| Fiji | | Ø |
-| French Polynesia | | Ø |
-| Guam | ✓ | ✓ |
-| Kiribati | | Ø |
-| Marshall Islands | | Ø |
-| Micronesia | | Ø |
-| Nauru | | Ø |
-| New Caledonia | | Ø |
-| New Zealand | ✓ | ✓ |
-| Niue | | Ø |
-| Norfolk Island | | Ø |
-| Palau | | Ø |
-| Papua New Guinea | | Ø |
-| Pitcairn | | Ø |
-| Samoa | | Ø |
-| Solomon Islands | | Ø |
-| Tokelau | | Ø |
-| Tonga | | Ø |
-| Tuvalu | | Ø |
-| Vanuatu | | Ø |
-| Wallis and Futuna | | Ø |
-
+| Country/Region | Coverage |
+|-|:--:|
+| Anguilla | ✓ |
+| Antigua & Barbuda | ✓ |
+| Argentina | ✓ |
+| Aruba | ✓ |
+| Bahamas | ✓ |
+| Barbados | ✓ |
+| Bermuda | ✓ |
+| Bonaire, St Eustatius & Saba | ✓ |
+| Brazil | ✓ |
+| British Virgin Islands | ✓ |
+| Canada | ✓ |
+| Cayman Islands | ✓ |
+| Chile | ✓ |
+| Clipperton Island | ✓ |
+| Colombia | ✓ |
+| Curaçao | ✓ |
+| Dominica | ✓ |
+| Falkland Islands | ✓ |
+| Grenada | ✓ |
+| Guadeloupe | ✓ |
+| Haiti | ✓ |
+| Jamaica | ✓ |
+| Martinique | ✓ |
+| Mexico | ✓ |
+| Montserrat | ✓ |
+| Peru | ✓ |
+| Puerto Rico | ✓ |
+| Saint Barthélemy | ✓ |
+| Saint Kitts & Nevis | ✓ |
+| Saint Lucia | ✓ |
+| Saint Martin | ✓ |
+| Saint Pierre & Miquelon | ✓ |
+| Saint Vincent & Grenadines | ✓ |
+| Sint Maarten | ✓ |
+| South Georgia & Sandwich Islands | ✓ |
+| Trinidad & Tobago | ✓ |
+| Turks & Caicos Islands | ✓ |
+| U.S. Virgin Islands | ✓ |
+| United States | ✓ |
+| Uruguay | ✓ |
+| Venezuela | ✓ |
+
+## Asia Pacific
+
+| Country/Region | Coverage |
+|-|:--:|
+| Australia | ✓ |
+| Brunei | ✓ |
+| Cambodia | ✓ |
+| Guam | ✓ |
+| Hong Kong | ✓ |
+| India | ✓ |
+| Indonesia | ✓ |
+| Laos | ✓ |
+| Macao | ✓ |
+| Malaysia | ✓ |
+| Myanmar | ✓ |
+| New Zealand | ✓ |
+| Philippines | ✓ |
+| Singapore | ✓ |
+| South Korea | ◑ |
+| Taiwan | ✓ |
+| Thailand | ✓ |
+| Vietnam | ✓ |
## Europe
-| Country/Region | Raster Tiles Unified | Vector Tiles Unified |
-| --- | :---: | :---: |
-| Albania | ✓ | ✓ |
-| Andorra | ✓ | ✓ |
-| Armenia | ✓ | Ø |
-| Austria | ✓ | ✓ |
-| Azerbaijan | ✓ | Ø |
-| Belarus | Ø | ✓ |
-| Belgium | ✓ | ✓ |
-| Bosnia-Herzegovina | ✓ | ✓ |
-| Bulgaria | ✓ | ✓ |
-| Croatia | ✓ | ✓ |
-| Cyprus | ✓ | ✓ |
-| Czech Republic | ✓ | ✓ |
-| Denmark | ✓ | ✓ |
-| Estonia | ✓ | ✓ |
-| Faroe Islands | ✓ | Ø |
-| Finland | ✓ | ✓ |
-| France | ✓ | ✓ |
-| Georgia | ✓ | Ø |
-| Germany | ✓ | ✓ |
-| Gibraltar | ✓ | ✓ |
-| Greece | ✓ | ✓ |
-| Guernsey | ✓ | ✓ |
-| Hungary | ✓ | ✓ |
-| Iceland | ✓ | ✓ |
-| Ireland | ✓ | ✓ |
-| Isle of Man | ✓ | ✓ |
-| Italy | ✓ | ✓ |
-| Jan Mayen | ✓ | ✓ |
-| Jersey | ✓ | ✓ |
-| Latvia | ✓ | ✓ |
-| Liechtenstein | ✓ | ✓ |
-| Lithuania | ✓ | ✓ |
-| Luxembourg | ✓ | ✓ |
-| North Macedonia | ✓ | ✓ |
-| Malta | ✓ | ✓ |
-| Moldova | ✓ | ✓ |
-| Monaco | ✓ | ✓ |
-| Montenegro | ✓ | ✓ |
-| Netherlands | ✓ | ✓ |
-| Norway | ✓ | ✓ |
-| Poland | ✓ | ✓ |
-| Portugal | ✓ | ✓ |
-| Romania | ✓ | ✓ |
-| Russian Federation | ✓ | ✓ |
-| San Marino | ✓ | ✓ |
-| Serbia | ✓ | ✓ |
-| Slovakia | ✓ | ✓ |
-| Slovenia | ✓ | ✓ |
-| Southern Kurils | ✓ | ✓ |
-| Spain | ✓ | ✓ |
-| Svalbard | ✓ | ✓ |
-| Sweden | ✓ | ✓ |
-| Switzerland | ✓ | ✓ |
-| Turkey | ✓ | ✓ |
-| Ukraine | ✓ | ✓ |
-| United Kingdom | ✓ | ✓ |
-| Vatican City | ✓ | ✓ |
-
-## Next steps
-
-For more information about Azure Maps rendering, see [Zoom levels and tile grid](zoom-levels-and-tile-grid.md).
-
-Learn about the [coverage areas for the Maps routing service](routing-coverage.md).
+| Country/Region | Coverage |
+|--|:--:|
+| Albania | ✓ |
+| Andorra | ✓ |
+| Austria | ✓ |
+| Belarus | ✓ |
+| Belgium | ✓ |
+| Bosnia-Herzegovina | ✓ |
+| Bulgaria | ✓ |
+| Croatia | ✓ |
+| Cyprus | ✓ |
+| Czech Republic | ✓ |
+| Denmark | ✓ |
+| Estonia | ✓ |
+| Finland | ✓ |
+| France | ✓ |
+| Germany | ✓ |
+| Gibraltar | ✓ |
+| Greece | ✓ |
+| Hungary | ✓ |
+| Iceland | ✓ |
+| Ireland | ✓ |
+| Italy | ✓ |
+| Latvia | ✓ |
+| Liechtenstein | ✓ |
+| Lithuania | ✓ |
+| Luxembourg | ✓ |
+| Macedonia | ✓ |
+| Malta | ✓ |
+| Moldova | ✓ |
+| Monaco | ✓ |
+| Montenegro | ✓ |
+| Netherlands | ✓ |
+| Norway | ✓ |
+| Poland | ✓ |
+| Portugal | ✓ |
+| Romania | ✓ |
+| Russian Federation | ✓ |
+| San Marino | ✓ |
+| Serbia | ✓ |
+| Slovakia | ✓ |
+| Slovenia | ✓ |
+| Spain | ✓ |
+| Sweden | ✓ |
+| Switzerland | ✓ |
+| Turkey | ✓ |
+| Ukraine | ✓ |
+| United Kingdom | ✓ |
+| Vatican City | ✓ |
+
+## Middle East & Africa
+
+| Country/Region | Coverage |
+||:--:|
+| Algeria | ✓ |
+| Angola | ✓ |
+| Bahrain | ✓ |
+| Benin | ✓ |
+| Botswana | ✓ |
+| Burkina Faso | ✓ |
+| Burundi | ✓ |
+| Cameroon | ✓ |
+| Congo | ✓ |
+| Democratic Republic of Congo | ✓ |
+| Egypt | ✓ |
+| Gabon | ✓ |
+| Ghana | ✓ |
+| Iraq | ✓ |
+| Jordan | ✓ |
+| Kenya | ✓ |
+| Kuwait | ✓ |
+| Lebanon | ✓ |
+| Lesotho | ✓ |
+| Malawi | ✓ |
+| Mali | ✓ |
+| Mauritania | ✓ |
+| Mauritius | ✓ |
+| Mayotte | ✓ |
+| Morocco | ✓ |
+| Mozambique | ✓ |
+| Namibia | ✓ |
+| Niger | ✓ |
+| Nigeria | ✓ |
+| Oman | ✓ |
+| Qatar | ✓ |
+| Reunion | ✓ |
+| Rwanda | ✓ |
+| Saudi Arabia | ✓ |
+| Senegal | ✓ |
+| South Africa | ✓ |
+| Swaziland | ✓ |
+| Tanzania | ✓ |
+| Togo | ✓ |
+| Tunisia | ✓ |
+| Uganda | ✓ |
+| United Arab Emirates | ✓ |
+| Yemen | ✓ |
+| Zambia | ✓ |
+| Zimbabwe | ✓ |
+
+## Additional information
+
+- See [Zoom levels and tile grid](zoom-levels-and-tile-grid.md) for more information about Azure Maps rendering.
+
+- For routing coverage, see [Azure Maps routing service](routing-coverage.md).
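
To see how a region is rendered in practice, you can request a tile directly from the Render service and inspect the result. The following request is a hedged sketch rather than an excerpt from the article: it assumes the Render V2 `map/tile` endpoint, and the tileset ID, tile coordinates, and subscription key are placeholders.

```http
GET https://atlas.microsoft.com/map/tile?api-version=2.1&tilesetId=microsoft.base.road&zoom=6&x=10&y=22&subscription-key={Your-Azure-Maps-Subscription-key}
```

The `zoom`, `x`, and `y` parameters follow the tile grid described in the linked article on zoom levels.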
azure-maps Traffic Coverage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/traffic-coverage.md
Title: Traffic coverage | Microsoft Azure Maps
description: Learn about traffic coverage in Azure Maps. See whether information on traffic flow and incidents is available in various regions throughout the world. Previously updated : 09/22/2018 Last updated : 01/13/2022
Azure Maps provides rich traffic information in the form of traffic **flow** and **incidents**. This data can be visualized on maps or used to generate smarter routes that factor in real driving conditions.
-However, Maps doesn't have the same level of information and accuracy for all countries or regions. The following table provides information about what kind of traffic information you can request from each country or region:
+The following tables indicate the kind of traffic information you can request for each country or region. If a market is missing from the following tables, it is not currently supported.
## Americas
-|Country/Region |Incidents |Flow |
-|---|:---:|:---:|
-|Argentina |✓ |✓ |
-|Brazil |✓ |✓ |
-|Canada |✓ |✓ |
-|Chile |✓ |✓ |
-|Colombia |✓ |✓ |
-|Mexico |✓ |✓ |
-|Peru |✓ |✓ |
-|United States |✓ |✓ |
-|+Puerto Rico |✓ |✓ |
-|Uruguay |✓ |✓ |
-
+| Country/Region | Incidents | Flow |
+|-|::|:-:|
+| Argentina | ✓ | ✓ |
+| Brazil | ✓ | ✓ |
+| Canada | ✓ | ✓ |
+| Chile | ✓ | ✓ |
+| Colombia | ✓ | ✓ |
+| Guadeloupe | ✓ | ✓ |
+| Martinique | ✓ | ✓ |
+| Mexico | ✓ | ✓ |
+| Peru | ✓ | ✓ |
+| United States | ✓ | ✓ |
+| Uruguay | ✓ | ✓ |
## Asia Pacific
-|Country/Region |Incidents |Flow |
-|---|:---:|:---:|
-|Australia |✓ |✓ |
-|Brunei |✓ |✓ |
-|Hong Kong SAR |✓ |✓ |
-|India |✓ |✓ |
-|Indonesia |✓ |✓ |
-|Kazakhstan |✓ |✓ |
-|Macao SAR |✓ |✓ |
-|Malaysia |✓ |✓ |
-|New Zealand |✓ |✓ |
-|Philippines |✓ |✓ |
-|Singapore |✓ |✓ |
-|Taiwan |✓ |✓ |
-|Thailand |✓ |✓ |
-|Vietnam |✓ |✓ |
-
+| Country/Region | Incidents | Flow |
+|-|::|:-:|
+| Australia | ✓ | ✓ |
+| Brunei | ✓ | ✓ |
+| Hong Kong | ✓ | ✓ |
+| India | ✓ | ✓ |
+| Indonesia | ✓ | ✓ |
+| Kazakhstan | ✓ | ✓ |
+| Macao | ✓ | ✓ |
+| Malaysia | ✓ | ✓ |
+| New Zealand | ✓ | ✓ |
+| Philippines | ✓ | ✓ |
+| Singapore | ✓ | ✓ |
+| Taiwan | ✓ | ✓ |
+| Thailand | ✓ | ✓ |
+| Vietnam | ✓ | ✓ |
## Europe
-|Country/Region |Incidents |Flow |
-|---|:---:|:---:|
-|Andorra |✓ |✓ |
-|Austria |✓ |✓ |
-|Belarus |✓ |✓ |
-|Belgium |✓ |✓ |
-|Bosnia and Herzegovina |✓ |✓ |
-|Bulgaria |✓ |✓ |
-|Croatia |✓ |✓ |
-|Czech Republic |✓ |✓ |
-|Denmark |✓ |✓ |
-|Estonia | | ✓ |
-|Finland |✓ |✓ |
-|+Åland Islands |✓ |✓ |
-|France |✓ |✓ |
-|Monaco |✓ |✓ |
-|Germany |✓ |✓ |
-|Greece |✓ |✓ |
-|Hungary |✓ |✓ |
-|Iceland |✓ |✓ |
-|Ireland |✓ |✓ |
-|Italy |✓ |✓ |
-|Kazakhstan |✓ |✓ |
-|Latvia |✓ |✓ |
-|Lesotho |✓ |✓ |
-|Liechtenstein |✓ |✓ |
-|Lithuania |✓ |✓ |
-|Luxembourg |✓ |✓ |
-|Malta |✓ |✓ |
-|Monaco |✓ |✓ |
-|Netherlands |✓ |✓ |
-|Norway |✓ |✓ |
-|Poland |✓ |✓ |
-|Portugal |✓ |✓ |
-|+Azores and Madeira |✓ |✓ |
-|Romania |✓ |✓ |
-|Russian Federation |✓ |✓ |
-|San Marino |✓ |✓ |
-|Serbia |✓ |✓ |
-|Slovakia |✓ |✓ |
-|Slovenia |✓ |✓ |
-|Spain |✓ |✓ |
-|+Andorra |✓ |✓ |
-|+Balearic Islands |✓ |✓ |
-|+Canary Islands |✓ |✓ |
-|Sweden |✓ |✓ |
-|Switzerland |✓ |✓ |
-|Turkey |✓ |✓ |
-|Ukraine |✓ |✓ |
-|United Kingdom |✓ |✓ |
-|+Gibraltar |✓ |✓ |
-|+Guernsey & Jersey |✓ |✓ |
-|+Isle of Man |✓ |✓ |
-|Vatican City |✓ |✓ |
-
+| Country/Region | Incidents | Flow |
+||::|:-:|
+| Belarus | ✓ | ✓ |
+| Belgium | ✓ | ✓ |
+| Bosnia and Herzegovina | ✓ | ✓ |
+| Bulgaria | ✓ | ✓ |
+| Croatia | ✓ | ✓ |
+| Cyprus | ✓ | ✓ |
+| Czech Republic | ✓ | ✓ |
+| Denmark | ✓ | ✓ |
+| Estonia | ✓ | ✓ |
+| Finland | ✓ | ✓ |
+| France | ✓ | ✓ |
+| Germany | ✓ | ✓ |
+| Gibraltar | ✓ | ✓ |
+| Greece | ✓ | ✓ |
+| Hungary | ✓ | ✓ |
+| Iceland | ✓ | ✓ |
+| Ireland | ✓ | ✓ |
+| Italy | ✓ | ✓ |
+| Latvia | ✓ | ✓ |
+| Liechtenstein | ✓ | ✓ |
+| Lithuania | ✓ | ✓ |
+| Luxembourg | ✓ | ✓ |
+| Malta | ✓ | ✓ |
+| Monaco | ✓ | ✓ |
+| Netherlands | ✓ | ✓ |
+| Norway | ✓ | ✓ |
+| Poland | ✓ | ✓ |
+| Portugal | ✓ | ✓ |
+| Romania | ✓ | ✓ |
+| Russian Federation | ✓ | ✓ |
+| San Marino | ✓ | ✓ |
+| Serbia | ✓ | ✓ |
+| Slovakia | ✓ | ✓ |
+| Slovenia | ✓ | ✓ |
+| Spain | ✓ | ✓ |
+| Sweden | ✓ | ✓ |
+| Switzerland | ✓ | ✓ |
+| Turkey | ✓ | ✓ |
+| Ukraine | ✓ | ✓ |
+| United Kingdom | ✓ | ✓ |
## Middle East and Africa
-|Country/Region |Incidents |Flow |
-|---|:---:|:---:|
-|Bahrain |✓ |✓ |
-|Egypt |✓ |✓ |
-|Israel |✓ |✓ |
-|Kenya |✓ |✓ |
-|Kuwait |✓ |✓ |
-|Morocco |✓ |✓ |
-|Mozambique |✓ |✓ |
-|Nigeria |✓ |✓ |
-|Oman |✓ |✓ |
-|Qatar |✓ |✓ |
-|Saudi Arabia |✓ |✓ |
-|South Africa |✓ |✓ |
-|United Arab Emirates |✓ |✓ |
-
-## Next steps
-
-For more information about Azure Maps traffic data, see the [Traffic](/rest/api/maps/traffic) reference pages.
+| Country/Region | Incidents | Flow |
+|-|::|:-:|
+| Bahrain | ✓ | ✓ |
+| Egypt | ✓ | ✓ |
+| Israel | ✓ | ✓ |
+| Kenya | ✓ | ✓ |
+| Kuwait | ✓ | ✓ |
+| Lesotho | ✓ | ✓ |
+| Morocco | ✓ | ✓ |
+| Mozambique | ✓ | ✓ |
+| Nigeria | ✓ | ✓ |
+| Oman | ✓ | ✓ |
+| Qatar | ✓ | ✓ |
+| Reunion | ✓ | ✓ |
+| Saudi Arabia | ✓ | ✓ |
+| South Africa | ✓ | ✓ |
+| United Arab Emirates | ✓ | ✓ |
+
+## Additional information
+
+For more information about incorporating Azure Maps traffic data into your mapping applications, see the [Traffic](/rest/api/maps/traffic) REST API reference.
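
For instance, a Traffic Flow Segment request returns current speed and travel-time data for the road fragment closest to the given coordinates. The following is a hedged sketch (the coordinates and subscription key are placeholders):

```http
GET https://atlas.microsoft.com/traffic/flow/segment/json?api-version=1.0&style=absolute&zoom=10&query=52.41072,4.84239&subscription-key={Your-Azure-Maps-Subscription-key}
```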
azure-monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/agent-manage.md
The following steps demonstrate how to reconfigure the Linux agent if you decide
The agent service does not need to be restarted in order for the changes to take effect.

## Update proxy settings
-To configure the agent to communicate to the service through a proxy server or [Log Analytics gateway](./gateway.md) after deployment, use one of the following methods to complete this task.
+The Log Analytics agent (MMA) does not use the system proxy settings. The user therefore has to pass proxy settings while installing the MMA, and these settings are stored in the MMA configuration (registry) on the VM. To configure the agent to communicate to the service through a proxy server or [Log Analytics gateway](./gateway.md) after deployment, use one of the following methods to complete this task.
### Windows agent
To configure the agent to communicate to the service through a proxy server or [
4. Click **Use a proxy server** and provide the URL and port number of the proxy server or gateway. If your proxy server or Log Analytics gateway requires authentication, type the username and password to authenticate and then click **OK**.
+
#### Update settings using PowerShell

Copy the following sample PowerShell code, update it with information specific to your environment, and save it with a PS1 file name extension. Run the script on each computer that connects directly to the Log Analytics workspace in Azure Monitor.
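The sample script itself is not reproduced in this digest; the following is a sketch of such a script, assuming the agent exposes the `AgentConfigManager.MgmtSvcCfg` COM object and its `SetProxyInfo` method (the proxy URL is a placeholder):

```powershell
param($ProxyDomainName="https://proxy.contoso.com:30443", $cred=(Get-Credential))

# Get the Health Service configuration object and verify that the
# proxy API is available on this agent version before proceeding.
$healthServiceSettings = New-Object -ComObject 'AgentConfigManager.MgmtSvcCfg'

$proxyMethod = $healthServiceSettings | Get-Member -Name 'SetProxyInfo'
if (!$proxyMethod)
{
    Write-Output 'Health Service proxy API not present; will not update settings.'
    return
}

# Clear any existing proxy configuration, then apply the new one.
Write-Output 'Clearing proxy settings.'
$healthServiceSettings.SetProxyInfo('', '', '')

$proxyUserName = $cred.UserName
Write-Output "Setting proxy to $ProxyDomainName with user $proxyUserName."
$healthServiceSettings.SetProxyInfo($ProxyDomainName, $proxyUserName, $cred.GetNetworkCredential().Password)
```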
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/azure-monitor-agent-manage.md
We strongly recommend updating to the generally available versions listed as follows:
|:---|:---|:---|:---|:---|
| June 2021 | General availability announced. <ul><li>All features except metrics destination now generally available</li><li>Production quality, security and compliance</li><li>Availability in all public regions</li><li>Performance and scale improvements for higher EPS</li></ul> [Learn more](https://azure.microsoft.com/updates/azure-monitor-agent-and-data-collection-rules-now-generally-available/) | 1.0.12.0 | 1.9.1.0 |
| July 2021 | <ul><li>Support for direct proxies</li><li>Support for Log Analytics gateway</li></ul> [Learn more](https://azure.microsoft.com/updates/general-availability-azure-monitor-agent-and-data-collection-rules-now-support-direct-proxies-and-log-analytics-gateway/) | 1.1.1.0 | 1.10.5.0 |
-| August 2021 | Fixed issue allowing Azure Monitor Metrics as the only destination | 1.1.2.0 | 1.10.9.0<sup>1</sup> |
-| September 2021 | <ul><li>Fixed issue causing data loss on restarting the agent</li><li>Addressed regression introduced in 1.1.3.1<sup>2</sup> for Arc Windows servers</li></ul> | 1.1.3.2 | 1.12.2.0 <sup>2</sup> |
-| December 2021 | Fixed issues impacting Linux Arc-enabled servers | N/A | 1.14.7.0<sup>3</sup> |
+| August 2021 | Fixed issue allowing Azure Monitor Metrics as the only destination | 1.1.2.0 | 1.10.9.0<sup>Hotfix</sup> |
+| September 2021 | <ul><li>Fixed issue causing data loss on restarting the agent</li><li>Fixed issue for Arc Windows servers</li></ul> | 1.1.3.2<sup>Hotfix</sup> | 1.12.2.0 <sup>1</sup> |
+| December 2021 | <ul><li>Fixed issues impacting Linux Arc-enabled servers</li><li>'Heartbeat' table > 'Category' column reports "Azure Monitor Agent" in Log Analytics for Windows</li></ul> | 1.1.4.0 | 1.14.7.0<sup>2</sup> |
+| January 2022 | <ul><li>Syslog RFC compliance for Linux</li><li>Fixed issue for Linux perf counters not flowing on restart</li></ul> | Not available yet | 1.15.2.0<sup>Hotfix</sup> |
-<sup>1</sup> Do not use AMA Linux version 1.10.7.0
-<sup>2</sup> Known regression where it's not working on Arc-enabled servers
-<sup>3</sup> Bug identified wherein Linux performance counters data stops flowing on restarting/rebooting the machine(s). Fix underway and will be available in next monthly version update.
+<sup>Hotfix</sup> Do not use AMA Linux versions v1.10.7 and v1.15.1, or AMA Windows v1.1.3.1. Use the hotfixed versions listed above instead.
+<sup>1</sup> Known issue: No data collected from Linux Arc-enabled servers
+<sup>2</sup> Known issue: Linux performance counters data stops flowing on restarting/rebooting the machine(s)
## Prerequisites
To uninstall the Azure Monitor agent using the Azure portal, navigate to your vi
### Update

To perform a **one time update** of the agent, you must first uninstall the existing agent version and then install the new version as described above.
-
+We **recommend** enabling automatic update of the agent with the [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) feature. Navigate to your virtual machine or scale set, select the **Extensions** tab and click on **AzureMonitorWindowsAgent** or **AzureMonitorLinuxAgent**. In the dialog that pops up, click **Enable automatic upgrade**.
## Using Resource Manager template
Remove-AzVMExtension -Name AMALinux -ResourceGroupName <resource-group-name> -VM
### Update on Azure virtual machines
-To perform a **one time update** of the agent, you must first uninstall the existing agent version and then install the new version as described above.
+To perform a **one time update** of the agent, you must first uninstall the existing agent version and then install the new version as described above.
+We **recommend** enabling automatic update of the agent with the [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) feature, using the following PowerShell commands.
+# [Windows](#tab/PowerShellWindows)
+```powershell
+Set-AzVMExtension -ExtensionName AMAWindows -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Publisher Microsoft.Azure.Monitor -ExtensionType AzureMonitorWindowsAgent -TypeHandlerVersion <version-number> -Location <location> -EnableAutomaticUpgrade $true
+```
+# [Linux](#tab/PowerShellLinux)
+```powershell
+Set-AzVMExtension -ExtensionName AMALinux -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Publisher Microsoft.Azure.Monitor -ExtensionType AzureMonitorLinuxAgent -TypeHandlerVersion <version-number> -Location <location> -EnableAutomaticUpgrade $true
+```
+
+
### Install on Azure Arc-enabled servers

Use the following PowerShell commands to install the Azure Monitor agent on Azure Arc-enabled servers.
az vm extension delete --resource-group <resource-group-name> --vm-name <virtual
### Update on Azure virtual machines
-To perform a **one time update** of the agent, you must first uninstall the existing agent version and then install the new version as described above.
+To perform a **one time update** of the agent, you must first uninstall the existing agent version and then install the new version as described above.
+We **recommend** enabling automatic update of the agent with the [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) feature, using the following CLI commands.
+# [Windows](#tab/CLIWindows)
+```azurecli
+az vm extension set --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --vm-name <virtual-machine-name> --resource-group <resource-group-name> --enable-auto-upgrade true
+```
+# [Linux](#tab/CLILinux)
+```azurecli
+az vm extension set --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --vm-name <virtual-machine-name> --resource-group <resource-group-name> --enable-auto-upgrade true
+```
++
### Install on Azure Arc-enabled servers

Use the following CLI commands to install the Azure Monitor agent on Azure Arc-enabled servers.
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/azure-monitor-agent-overview.md
The Azure Monitor agent replaces the following legacy agents that are currently
In addition to consolidating this functionality into a single agent, the Azure Monitor agent provides the following benefits over the existing agents:

- **Scope of monitoring:** Centrally configure collection for different sets of data from different sets of VMs.
-- **Linux multi-homing:** Send data from Linux VMs to multiple workspaces.
+- **Multi-homing:** Send data from Windows and Linux VMs to multiple Log Analytics workspaces and other [supported destinations](#data-sources-and-destinations).
- **Windows event filtering:** Use XPATH queries to filter which Windows events are collected. - **Improved extension management:** The Azure Monitor agent uses a new method of handling extensibility that's more transparent and controllable than management packs and Linux plug-ins in the current Log Analytics agents.
Azure Monitor agent is available in all public regions that support Log Analytic
## Supported operating systems

For a list of the Windows and Linux operating system versions that are currently supported by the Azure Monitor agent, see [Supported operating systems](agents-overview.md#supported-operating-systems).
+## Data sources and destinations
+The following table lists the types of data you can currently collect with the Azure Monitor agent by using data collection rules and where you can send that data. For a list of insights, solutions, and other features that use the Azure Monitor agent to collect other kinds of data, see [What is monitored by Azure Monitor?](../monitor-reference.md).
+
+The Azure Monitor agent sends data to Azure Monitor Metrics (preview) or a Log Analytics workspace supporting Azure Monitor Logs.
+
+| Data source | Destinations | Description |
+|:|:|:|
+| Performance | Azure Monitor Metrics (preview)<sup>1</sup> - Insights.virtualmachine namespace<br>Log Analytics workspace - [Perf](/azure/azure-monitor/reference/tables/perf) table | Numerical values measuring performance of different aspects of operating system and workloads |
+| Windows event logs | Log Analytics workspace - [Event](/azure/azure-monitor/reference/tables/Event) table | Information sent to the Windows event logging system |
+| Syslog | Log Analytics workspace - [Syslog](/azure/azure-monitor/reference/tables/syslog)<sup>2</sup> table | Information sent to the Linux event logging system |
+
+<sup>1</sup> [Click here](../essentials/metrics-custom-overview.md#quotas-and-limits) to review other limitations of using Azure Monitor Metrics. On Linux, using Azure Monitor Metrics as the only destination is supported in v1.10.9.0 or higher.
+<sup>2</sup> Azure Monitor Linux Agent v1.15.2 or higher supports syslog RFC formats including Cisco Meraki, Cisco ASA, Cisco FTD, Sophos XG, Juniper Networks, Corelight Zeek, CipherTrust, NXLog, McAfee and CEF (Common Event Format).
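
Under the hood, a data collection rule (DCR) binds these sources to destinations through data flows. The following is a hedged, abbreviated sketch of a DCR that sends Syslog data to a Log Analytics workspace; the names and resource ID are placeholders, and the schema is trimmed for illustration:

```json
{
  "properties": {
    "dataSources": {
      "syslog": [
        {
          "name": "sampleSyslogSource",
          "streams": [ "Microsoft-Syslog" ],
          "facilityNames": [ "auth", "cron" ],
          "logLevels": [ "Error", "Critical" ]
        }
      ]
    },
    "destinations": {
      "logAnalytics": [
        {
          "name": "centralWorkspace",
          "workspaceResourceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
        }
      ]
    },
    "dataFlows": [
      {
        "streams": [ "Microsoft-Syslog" ],
        "destinations": [ "centralWorkspace" ]
      }
    ]
  }
}
```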
## Supported services and features

The following table shows the current support for the Azure Monitor agent with other Azure services.
As such, ensure you're not collecting the same data from both agents. If you are
## Costs

There's no cost for the Azure Monitor agent, but you might incur charges for the data ingested. For details on Log Analytics data collection and retention and for customer metrics, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
-## Data sources and destinations
-The following table lists the types of data you can currently collect with the Azure Monitor agent by using data collection rules and where you can send that data. For a list of insights, solutions, and other solutions that use the Azure Monitor agent to collect other kinds of data, see [What is monitored by Azure Monitor?](../monitor-reference.md).
-
-The Azure Monitor agent sends data to Azure Monitor Metrics (preview) or a Log Analytics workspace supporting Azure Monitor Logs.
-
-| Data source | Destinations | Description |
-|:|:|:|
-| Performance | Azure Monitor Metrics (preview)<sup>1</sup><br>Log Analytics workspace | Numerical values measuring performance of different aspects of operating system and workloads |
-| Windows event logs | Log Analytics workspace | Information sent to the Windows event logging system |
-| Syslog | Log Analytics workspace | Information sent to the Linux event logging system |
-
-<sup>1</sup> [Click here](../essentials/metrics-custom-overview.md#quotas-and-limits) to review other limitations of using Azure Monitor Metrics. On Linux, using Azure Monitor Metrics as the only destination is supported in v.1.10.9.0 or higher.
## Security

The Azure Monitor agent doesn't require any keys but instead requires a [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#system-assigned-managed-identity). You must have a system-assigned managed identity enabled on each virtual machine before you deploy the agent.
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
To specify other logs and performance counters from the [currently supported dat
[![Data source custom](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-custom-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-custom-updated.png#lightbox)
-On the **Destination** tab, add one or more destinations for the data source. Windows event and Syslog data sources can only send to Azure Monitor Logs. Performance counters can send to both Azure Monitor Metrics and Azure Monitor Logs.
+On the **Destination** tab, add one or more destinations for the data source. You can select multiple destinations of the same or different types, for instance multiple Log Analytics workspaces (i.e. "multi-homing"). Windows event and Syslog data sources can only send to Azure Monitor Logs. Performance counters can send to both Azure Monitor Metrics and Azure Monitor Logs.
[![Destination](media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png#lightbox)
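
In the underlying data collection rule, selecting multiple destinations simply produces more than one entry under `destinations` that the data flow references. A hedged fragment illustrating multi-homing to two workspaces (names and resource IDs are placeholders):

```json
{
  "destinations": {
    "logAnalytics": [
      {
        "name": "workspaceEast",
        "workspaceResourceId": "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.OperationalInsights/workspaces/<workspace-1>"
      },
      {
        "name": "workspaceWest",
        "workspaceResourceId": "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.OperationalInsights/workspaces/<workspace-2>"
      }
    ]
  },
  "dataFlows": [
    {
      "streams": [ "Microsoft-Syslog" ],
      "destinations": [ "workspaceEast", "workspaceWest" ]
    }
  ]
}
```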
azure-monitor Log Analytics Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/log-analytics-agent.md
If you plan to use the Azure Automation Hybrid Runbook Worker to connect to and
### Proxy configuration
-The Windows and Linux agent supports communicating either through a proxy server or Log Analytics gateway to Azure Monitor using the HTTPS protocol. Both anonymous and basic authentication (username/password) are supported. For the Windows agent connected directly to the service, the proxy configuration is specified during installation or [after deployment](../agents/agent-manage.md#update-proxy-settings) from Control Panel or with PowerShell.
+The Windows and Linux agent supports communicating either through a proxy server or Log Analytics gateway to Azure Monitor using the HTTPS protocol. Both anonymous and basic authentication (username/password) are supported. For the Windows agent connected directly to the service, the proxy configuration is specified during installation or [after deployment](../agents/agent-manage.md#update-proxy-settings) from Control Panel or with PowerShell. The Log Analytics agent (MMA) does not use the system proxy settings, so the user has to pass proxy settings while installing the MMA; these settings are stored in the MMA configuration (registry) on the VM.
For the Linux agent, the proxy server is specified during installation or [after installation](../agents/agent-manage.md#update-proxy-settings) by modifying the proxy.conf configuration file. The Linux agent proxy configuration value has the following syntax:
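A sketch of that value's general form, assuming the documented `proxy.conf` format (bracketed parts are optional, and the credentials and host below are placeholders):

```
[protocol://][user:password@]proxyhost[:port]
```

For example: `https://user01:password@proxy01.contoso.com:30443`.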
azure-monitor Alerts Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-activity-log.md
For example:
```
-For more information about the activity log fields, see [Azure activity log event schema](https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Fazure-monitor%2Fplatform%2Factivity-log-schema&data=02%7C01%7CNoga.Lavi%40microsoft.com%7C90b7c2308c0647c0347908d7c9a2918d%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637199572373563632&sdata=6QXLswwZgUHFXCuF%2FgOSowLzA8iOALVgvL3GMVhkYJY%3D&reserved=0).
+For more information about the activity log fields, see [Azure activity log event schema](../essentials/activity-log-schema.md).
> [!NOTE]
> It might take up to 5 minutes for the new activity log alert rule to become active.
azure-monitor Azure Web Apps Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/azure-web-apps-java.md
# Application Monitoring for Azure App Service and Java
-Monitoring of your Java-based web applications running on [Azure App Services](../../app-service/index.yml) does not require any modifications to the code. This article will walk you through enabling Azure Monitor application insights monitoring as well as provide preliminary guidance for automating the process for large-scale deployments.
+Monitoring of your Java web applications running on [Azure App Services](../../app-service/index.yml) does not require any modifications to the code. This article will walk you through enabling Azure Monitor Application Insights monitoring as well as provide preliminary guidance for automating the process for large-scale deployments.
## Enable Application Insights
-The recommended way to enable application monitoring for Java application running on Azure App Services is through Azure portal. Turning on application monitoring in Azure portal will automatically instrument your application with application insights.
+The recommended way to enable application monitoring for Java applications running on Azure App Services is through Azure portal.
+Turning on application monitoring in Azure portal will automatically instrument your application with Application Insights, and doesn't require any code changes.
+You can apply additional configurations, and then based on your specific scenario you can [add your own custom telemetry](./java-in-process-agent.md#modify-telemetry) if needed.
### Auto-instrumentation through Azure portal
-This method requires no code change or advanced configurations, making it the easiest way to get started with monitoring for Azure App Services. You can apply additional configurations, and then based on your specific scenario you can evaluate whether more advanced monitoring through [manual instrumentation](./java-2x-get-started.md?tabs=maven) is needed.
-
-### Enable backend monitoring
-
-You can turn on monitoring for your Java apps running in Azure App Service just with one click, no code change required. Application Insights for Java is integrated with App Service on Linux - both code-based and custom containers, and with App Service on Windows - code-based apps. It is important to know how your application will be monitored. The integration adds [Application Insights Java 3.x](./java-in-process-agent.md) and you will get the telemetry auto-collected.
+You can turn on monitoring for your Java apps running in Azure App Service just with one click, no code change required.
+Application Insights for Java is integrated with Azure App Service on Linux - both code-based and custom containers, and with App Service on Windows for code-based apps.
+The integration adds [Application Insights Java 3.x](./java-in-process-agent.md) and you will get the telemetry auto-collected.
1. **Select Application Insights** in the Azure control panel for your app service, then select **Enable**.
You can turn on monitoring for your Java apps running in Azure App Service just
> [!NOTE]
> When you select **OK** to create the new resource you will be prompted to **Apply monitoring settings**. Selecting **Continue** will link your new Application Insights resource to your app service; doing so will also **trigger a restart of your app service**.
- :::image type="content"source="./media/azure-web-apps/change-resource.png" alt-text="Screenshot of Change your resource dropdown.":::
+ :::image type="content"source="./media/azure-web-apps/change-resource.png" alt-text="Screenshot of Change your resource dropdown.":::
-3. This step is not required. After specifying which resource to use, you can configure the Java agent. If you do not configure the Java agent, default configurations will apply.
+3. This last step is optional. After specifying which resource to use, you can configure the Java agent. If you do not configure the Java agent, default configurations will apply.
The full [set of configurations](./java-standalone-config.md) is available, you just need to paste a valid [json file](./java-standalone-config.md#an-example). **Exclude the connection string and any configurations that are in preview** - you will be able to add the items that are currently in preview as they become generally available.
To enable client-side monitoring for your Java application, you need to [manuall
## Automate monitoring
-### Application settings
- In order to enable telemetry collection with Application Insights, only the following Application settings need to be set:
-|App setting name | Definition | Value |
-|--|:|-:|
-|ApplicationInsightsAgent_EXTENSION_VERSION | Controls runtime monitoring | `~2` for Windows or `~3` for Linux |
-|XDT_MicrosoftApplicationInsights_Java | Flag to control that Java agent is included | 0 or 1 only applicable in Windows
-|APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL | Only use it if you need to debug the integration of Application Insights with App Service | debug
+
+### Application settings definitions
+| App setting name | Definition | Value |
+|---|---|:---|
+| ApplicationInsightsAgent_EXTENSION_VERSION | Main extension, which controls runtime monitoring. | `~2` in Windows or `~3` in Linux. |
+| XDT_MicrosoftApplicationInsights_Java | Flag to control if Java agent is included. | 0 or 1 (only applicable in Windows). |
> [!NOTE]
> Profiler and snapshot debugger are not available for Java applications.
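
As one way to script this, the settings in the table can be applied with the Azure CLI. This is a hedged sketch for a Windows code-based app (resource names are placeholders; on Linux the extension version would be `~3` and the XDT flag doesn't apply):

```azurecli
az webapp config appsettings set --resource-group <resource-group-name> --name <app-name> --settings ApplicationInsightsAgent_EXTENSION_VERSION=~2 XDT_MicrosoftApplicationInsights_Java=1
```

In practice you also connect the app to your Application Insights resource, typically through the `APPLICATIONINSIGHTS_CONNECTION_STRING` app setting, which the ARM automation guidance covers.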
azure-monitor Azure Web Apps Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/azure-web-apps-nodejs.md
# Application Monitoring for Azure App Service and Node.js
-Enabling monitoring on your Node.js based web applications running on [Azure App Services](../../app-service/index.yml) is now easier than ever. Whereas previously you needed to manually instrument your app, the latest extension/agent is now built into the App Service image by default. This article will walk you through enabling Azure Monitor application Insights monitoring as well as provide preliminary guidance for automating the process for large-scale deployments.
+Monitoring of your Node.js web applications running on [Azure App Services](../../app-service/index.yml) does not require any modifications to the code. This article will walk you through enabling Azure Monitor Application Insights monitoring as well as provide preliminary guidance for automating the process for large-scale deployments.
+
+## Enable Application Insights
+
+The easiest way to enable application monitoring for Node.js applications running on Azure App Services is through Azure portal.
+Turning on application monitoring in Azure portal will automatically instrument your application with Application Insights, and doesn't require any code changes.
> [!NOTE]
> If both agent-based monitoring and manual SDK-based instrumentation are detected, only the manual instrumentation settings will be honored. This is to prevent duplicate data from being sent. To learn more about this, check out the [troubleshooting section](#troubleshooting) below.
-## Enable agent-based monitoring
+### Auto-instrumentation through Azure portal
-You can monitor your Node.js apps running in Azure App Service without any code change, just with a couple of simple steps. Application insights for Node.js applications is integrated with App Service on Linux - both code-based and custom containers, and with App Service on Windows for code-based apps. The integration is in public preview. The integration adds Node.js SDK, which is in GA.
+You can turn on monitoring for your Node.js apps running in Azure App Service just with one click, no code change required.
+Application Insights for Node.js is integrated with Azure App Service on Linux - both code-based and custom containers, and with App Service on Windows for code-based apps.
+The integration is in public preview. It adds the Node.js SDK, which is generally available.
1. **Select Application Insights** in the Azure control panel for your app service, then select **Enable**.
You can monitor your Node.js apps running in Azure App Service without any code
2. Choose to create a new resource, or select an existing Application Insights resource for this application.
- > [!NOTE]
- > When you click **OK** to create the new resource you will be prompted to **Apply monitoring settings**. Selecting **Continue** will link your new Application Insights resource to your app service, doing so will also **trigger a restart of your app service**.
-
+ > [!NOTE]
+ > When you select **OK** to create the new resource you will be prompted to **Apply monitoring settings**. Selecting **Continue** will link your new Application Insights resource to your app service, doing so will also **trigger a restart of your app service**.
+ :::image type="content" source="./media/azure-web-apps/change-resource.png" alt-text="Screenshot of Change your resource dropdown.":::
-
-3. Once you have specified which resource to use, you are all set to go.
+3. Once you have specified which resource to use, you are all set to go.
   :::image type="content" source="./media/azure-web-apps-nodejs/app-service-node.png" alt-text="Screenshot of instrument your application.":::
To enable client-side monitoring for your Node.js application, you need to [manu
## Automate monitoring
-In order to enable telemetry collection with Application Insights, only the Application settings need to be set:
-
+To enable telemetry collection with Application Insights, only the following Application settings need to be set:
### Application settings definitions
-|App setting name | Definition | Value |
-|--|:|-:|
-|ApplicationInsightsAgent_EXTENSION_VERSION | Main extension, which controls runtime monitoring. | `~2` in Windows or `~3` in Linux |
-|XDT_MicrosoftApplicationInsights_NodeJS | Flag to control if node.js Agent is included. | 0 or 1 only applicable in Windows. |
+| App setting name | Definition | Value |
+|||:|
+| ApplicationInsightsAgent_EXTENSION_VERSION | Main extension, which controls runtime monitoring. | `~2` in Windows or `~3` in Linux. |
+| XDT_MicrosoftApplicationInsights_NodeJS | Flag to control whether the Node.js agent is included. | 0 or 1 (only applicable in Windows). |
+> [!NOTE]
+> Profiler and snapshot debugger are not available for Node.js applications.
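To verify the values after automation, a quick Azure PowerShell check such as the following sketch can help (the resource group and app names are placeholders):

```azurepowershell-interactive
# List only the agent-related settings on the app (hypothetical names).
$app = Get-AzWebApp -ResourceGroupName "myResourceGroup" -Name "contoso-node-app"
$app.SiteConfig.AppSettings |
    Where-Object { $_.Name -in @("ApplicationInsightsAgent_EXTENSION_VERSION", "XDT_MicrosoftApplicationInsights_NodeJS") }
```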
[!INCLUDE [azure-web-apps-arm-automation](../../../includes/azure-monitor-app-insights-azure-web-apps-arm-automation.md)] - ## Troubleshooting Below is our step-by-step troubleshooting guide for extension/agent based monitoring for Node.js based applications running on Azure App Services.
Below is our step-by-step troubleshooting guide for extension/agent based monito
- Confirm that the `Application Insights Extension Status` is `Pre-Installed Site Extension, version 2.8.x.xxxx, is running.`
- If it is not running, follow the [enable Application Insights monitoring instructions](#enable-agent-based-monitoring).
+ If it is not running, follow the [enable Application Insights monitoring instructions](#enable-application-insights).
- Navigate to *D:\local\Temp\status.json* and open *status.json*.
Below is our step-by-step troubleshooting guide for extension/agent based monito
## Release notes
-For the latest updates and bug fixes [consult the release notes](web-app-extension-release-notes.md).
+For the latest updates and bug fixes, [consult the release notes](web-app-extension-release-notes.md).
## Next steps+ * [Monitor Azure Functions with Application Insights](monitor-functions.md). * [Enable Azure diagnostics](../agents/diagnostics-extension-to-application-insights.md) to be sent to Application Insights. * [Monitor service health metrics](../data-platform.md) to make sure your service is available and responsive.
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-in-process-agent.md
If you want to attach custom dimensions to your logs, use [Log4j 1.2 MDC](https:
For help with troubleshooting, see [Troubleshooting](java-standalone-troubleshoot.md).
+## Release notes
+
+See the [release notes](https://github.com/microsoft/ApplicationInsights-Java/releases) on GitHub.
+ ## Support To get support:
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/manage-cost-storage.md
To set the retention of a particular data type (in this example SecurityEvent) t
Valid values for `retentionInDays` are from 4 through 730.
-The `Usage` and `AzureActivity` data types can't be set with custom retention. They take on the maximum of the default workspace retention or 90 days.
- A great tool to connect directly to Azure Resource Manager to set retention by data type is the OSS tool [ARMclient](https://github.com/projectkudu/ARMClient). Learn more about ARMclient from articles by [David Ebbo](http://blog.davidebbo.com/2015/01/azure-resource-manager-client.html) and Daniel Bowbyes. Here's an example using ARMClient, setting SecurityEvent data to a 730-day retention: ```
azure-netapp-files Performance Linux Nfs Read Ahead https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/performance-linux-nfs-read-ahead.md
na Previously updated : 07/02/2021 Last updated : 02/02/2022 # Linux NFS read-ahead best practices for Azure NetApp Files
Read-ahead can be defined either dynamically per NFS mount using the following s
To show the current read-ahead value (the returned value is in KiB), run the following command:
-`$ ./readahead.sh show <mount-point>`
+`$ ./readahead.sh show <mount-point>`
To set a new value for read-ahead, run the following command:
-`$ ./readahead.sh show <mount-point> [read-ahead-kb]`
+`$ ./readahead.sh set <mount-point> [read-ahead-kb]`
### Example
azure-percept How To Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-deploy-model.md
Follow this guide to deploy a vision AI model to your Azure Percept DK from with
:::image type="content" source="./media/how-to-deploy-model/select-device.png" alt-text="Percept devices list.":::
-1. On the next page, click **Deploy a sample model** if you would like to deploy one of the pre-trained sample vision models. If you would like to deploy an existing [custom no-code vision solution](./tutorial-nocode-vision.md), click **Deploy a Custom Vision project**. If you do not see your Custom Vision projects, set project's domain to one of Compact domains on [Custom Vision portal](https://www.customvision.ai/) and train a model again. Only Compact domains support model export to edge devices.
+1. On the next page, click **Deploy a sample model** if you would like to deploy one of the pre-trained sample vision models. If you would like to deploy an existing [custom no-code vision solution](./tutorial-nocode-vision.md), click **Deploy a Custom Vision project**. If you do not see your Custom Vision projects, set the project's domain to "General (Compact)" on the [Custom Vision portal](https://www.customvision.ai/) and train the model again. Other domains are not currently supported.
:::image type="content" source="./media/how-to-deploy-model/deploy-model.png" alt-text="Model choices for deployment.":::
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/resource-name-rules.md
In the following tables, the term alphanumeric refers to:
> | namespaces | global | 6-50 | Alphanumerics and hyphens.<br><br>Start with letter. End with letter or number. | > | namespaces / AuthorizationRules | namespace | 1-50 | Alphanumerics, periods, hyphens and underscores.<br><br>Start and end with letter or number. | > | namespaces / disasterRecoveryConfigs | global | 6-50 | Alphanumerics and hyphens.<br><br>Start with letter. End with alphanumeric. |
-> | namespaces / eventhubs | namespace | 1-50 | Alphanumerics, periods, hyphens and underscores.<br><br>Start and end with letter or number. |
+> | namespaces / eventhubs | namespace | 1-256 | Alphanumerics, periods, hyphens and underscores.<br><br>Start and end with letter or number. |
> | namespaces / eventhubs / authorizationRules | event hub | 1-50 | Alphanumerics, periods, hyphens and underscores.<br><br>Start and end with letter or number. | > | namespaces / eventhubs / consumergroups | event hub | 1-50 | Alphanumerics, periods, hyphens and underscores.<br><br>Start and end with letter or number. |
azure-sql Authentication Azure Ad Only Authentication Create Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/authentication-azure-ad-only-authentication-create-server.md
The [SQL Security Manager](../../role-based-access-control/built-in-roles.md#sql
The following section provides you with examples and scripts on how to create a logical server or managed instance with an Azure AD admin set for the server or instance, and have Azure AD-only authentication enabled during server creation. For more information on the feature, see [Azure AD-only authentication](authentication-azure-ad-only-authentication.md).
-In our examples, we're enabling Azure AD-only authentication during server or managed instance creation, with a system assigned server admin and password. This will prevent server admin access when Azure AD-only authentication is enabled, and only allows the Azure AD admin to access the resource. It's optional to add parameters to the APIs to include your own server admin and password during server creation. However, the password cannot be reset until you disable Azure AD-only authentication.
+In our examples, we're enabling Azure AD-only authentication during server or managed instance creation, with a system assigned server admin and password. This will prevent server admin access when Azure AD-only authentication is enabled, and only allows the Azure AD admin to access the resource. It's optional to add parameters to the APIs to include your own server admin and password during server creation. However, the password cannot be reset until you disable Azure AD-only authentication. An example of how to use these optional parameters to specify the server admin login name is presented in the [PowerShell](?tabs=azure-powershell#azure-sql-database) tab on this page.
> [!NOTE] > To change the existing properties after server or managed instance creation, other existing APIs should be used. For more information, see [Managing Azure AD-only authentication using APIs](authentication-azure-ad-only-authentication.md#managing-azure-ad-only-authentication-using-apis) and [Configure and manage Azure AD authentication with Azure SQL](authentication-aad-configure.md).
Replace the following values in the example:
New-AzSqlServer -ResourceGroupName "<ResourceGroupName>" -Location "<Location>" -ServerName "<ServerName>" -ServerVersion "12.0" -ExternalAdminName "<AzureADAccount>" -EnableActiveDirectoryOnlyAuthentication ```
+Here is an example of specifying the server admin name (instead of letting it be created automatically) at the time of logical server creation. As mentioned earlier, this login is not usable when Azure AD-only authentication is enabled.
+
+```powershell
+$cred = Get-Credential
+New-AzSqlServer -ResourceGroupName "<ResourceGroupName>" -Location "<Location>" -ServerName "<ServerName>" -ServerVersion "12.0" -ExternalAdminName "<AzureADAccount>" -EnableActiveDirectoryOnlyAuthentication -SqlAdministratorCredentials $cred
+```
+ For more information, see [New-AzSqlServer](/powershell/module/az.sql/new-azsqlserver). # [Rest API](#tab/rest-api)
azure-sql Auto Failover Group Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/auto-failover-group-overview.md
Previously updated : 10/25/2021 Last updated : 2/2/2022 # Use auto-failover groups to enable transparent and coordinated geo-failover of multiple databases
The failover group will manage geo-failover of all databases on the primary mana
### <a name="using-read-write-listener-for-oltp-workload"></a> Use the read-write listener to connect to the primary managed instance
-For read-write workloads, use `<fog-name>.zone_id.database.windows.net` as the server name. Connections will be automatically directed to the primary. This name does not change after failover. The geo-failover involves updating the DNS record, so the client connections are redirected to the new primary only after the client DNS cache is refreshed. Because the secondary instance shares the DNS zone with the primary, the client application will be able to reconnect to it using the same server-side SAN certificate.
+For read-write workloads, use `<fog-name>.<zone_id>.database.windows.net` as the server name. Connections will be automatically directed to the primary. This name does not change after failover. The geo-failover involves updating the DNS record, so the client connections are redirected to the new primary only after the client DNS cache is refreshed. Because the secondary instance shares the DNS zone with the primary, the client application will be able to reconnect to it using the same server-side SAN certificate. The read-write listener and read-only listener cannot be reached via [public endpoint for managed instance](../managed-instance/public-endpoint-configure.md).
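For illustration, a client connection string using the read-write listener might look like the following sketch (the database name and credentials are placeholders):

```
Server=tcp:<fog-name>.<zone_id>.database.windows.net,1433;Database=<database>;User ID=<login>;Password=<password>;
```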
### <a name="using-read-only-listener-to-connect-to-the-secondary-instance"></a> Use the read-only listener to connect to the geo-secondary managed instance
-If you have logically isolated read-only workloads that are tolerant to data latency, you can run them on the geo-secondary. To connect directly to the geo-secondary, use `<fog-name>.secondary.<zone_id>.database.windows.net` as the server name.
+If you have logically isolated read-only workloads that are tolerant to data latency, you can run them on the geo-secondary. To connect directly to the geo-secondary, use `<fog-name>.secondary.<zone_id>.database.windows.net` as the server name. The read-write listener and read-only listener cannot be reached via [public endpoint for managed instance](../managed-instance/public-endpoint-configure.md).
> [!NOTE] > In the Business Critical tier, SQL Managed Instance supports the use of [read-only replicas](read-scale-out.md) to offload read-only query workloads, using the `ApplicationIntent=ReadOnly` parameter in the connection string. When you have configured a geo-replicated secondary, you can use this capability to connect to either a read-only replica in the primary location or in the geo-replicated location.
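As a sketch, directing a read-only workload at the geo-secondary combines the read-only listener with the `ApplicationIntent` parameter (database name and credentials are placeholders):

```
Server=tcp:<fog-name>.secondary.<zone_id>.database.windows.net,1433;Database=<database>;User ID=<login>;Password=<password>;ApplicationIntent=ReadOnly;
```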
When you set up a failover group between primary and secondary SQL Managed Insta
## <a name="upgrading-or-downgrading-primary-database"></a> Scale primary database
-You can scale up or scale down the primary database to a different compute size (within the same service tier) without disconnecting any geo-secondaries. WWhen scaling up, we recommend that you scale up the geo-secondary first, and then scale up the primary. When scaling down, reverse the order: scale down the primary first, and then scale down the secondary. When you scale a database to a different service tier, this recommendation is enforced.
+You can scale up or scale down the primary database to a different compute size (within the same service tier) without disconnecting any geo-secondaries. When scaling up, we recommend that you scale up the geo-secondary first, and then scale up the primary. When scaling down, reverse the order: scale down the primary first, and then scale down the secondary. When you scale a database to a different service tier, this recommendation is enforced.
This sequence is recommended specifically to avoid the problem where the geo-secondary at a lower SKU gets overloaded and must be re-seeded during an upgrade or downgrade process. You could also avoid the problem by making the primary read-only, at the expense of impacting all read-write workloads against the primary.
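A minimal Azure PowerShell sketch of the recommended scale-up order (the server, database, and service objective names are hypothetical):

```azurepowershell-interactive
# Scale up the geo-secondary first...
Set-AzSqlDatabase -ResourceGroupName "myResourceGroup" -ServerName "secondary-server" -DatabaseName "mydb" -RequestedServiceObjectiveName "P4"

# ...then scale up the primary. Reverse the order when scaling down.
Set-AzSqlDatabase -ResourceGroupName "myResourceGroup" -ServerName "primary-server" -DatabaseName "mydb" -RequestedServiceObjectiveName "P4"
```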
azure-sql Service Tier Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tier-hyperscale.md
Previously updated : 1/14/2022 Last updated : 2/2/2022 # Hyperscale service tier
These are the current limitations to the Hyperscale service tier as of GA. We'r
| Shrink Database | DBCC SHRINKDATABASE or DBCC SHRINKFILE isn't currently supported for Hyperscale databases. | | Database integrity check | DBCC CHECKDB isn't currently supported for Hyperscale databases. DBCC CHECKTABLE ('TableName') WITH TABLOCK and DBCC CHECKFILEGROUP WITH TABLOCK may be used as a workaround. See [Data Integrity in Azure SQL Database](https://azure.microsoft.com/blog/data-integrity-in-azure-sql-database/) for details on data integrity management in Azure SQL Database. | | Elastic Jobs | Using a Hyperscale database as the Job database is not supported. However, elastic jobs can target Hyperscale databases in the same way as any other Azure SQL database. |
+| Data Sync | Using a Hyperscale database as a Hub or Sync Metadata database is not supported. However, a Hyperscale database can be a member database in a Data Sync topology. |
## Next steps
azure-sql Sql Data Sync Data Sql Server Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/sql-data-sync-data-sql-server-sql-database.md
Previously updated : 09/09/2021 Last updated : 2/2/2022 # What is SQL Data Sync for Azure?
Provisioning and deprovisioning during sync group creation, update, and deletion
- Moving servers between different subscriptions isn't supported. - If two primary keys are only different in case (e.g. Foo and foo), Data Sync won't support this scenario. - Truncating tables is not an operation supported by Data Sync (changes won't be tracked).-- Hyperscale databases are not supported.
+- Using a Hyperscale database as a Hub or Sync Metadata database is not supported. However, a Hyperscale database can be a member database in a Data Sync topology.
- Memory-optimized tables are not supported. #### Unsupported data types
azure-sql User Initiated Failover https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/user-initiated-failover.md
Last updated 02/27/2021
# User-initiated manual failover on SQL Managed Instance [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
-This article explains how to manually failover a primary node on SQL Managed Instance General Purpose (GP) and Business Critical (BC) service tiers, and how to manually failover a secondary read-only replica node on the BC service tier only.
+This article explains how to manually fail over a primary node on SQL Managed Instance General Purpose (GP) and Business Critical (BC) service tiers, and how to manually fail over a secondary read-only replica node on the BC service tier only.
+
+> [!NOTE]
+> This article is not related to cross-region failovers on [auto-failover groups](../database/auto-failover-group-overview.md).
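As a hedged sketch, both failover types described here can be initiated with Azure PowerShell (the resource group and instance names are placeholders):

```azurepowershell-interactive
# Fail over the primary node of the managed instance.
Invoke-AzSqlInstanceFailover -ResourceGroupName "myResourceGroup" -Name "my-managed-instance"

# On the Business Critical tier, fail over the secondary read-only replica instead.
Invoke-AzSqlInstanceFailover -ResourceGroupName "myResourceGroup" -Name "my-managed-instance" -ReadableSecondary
```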
## When to use manual failover
backup Backup Azure Delete Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-delete-vault.md
Title: Delete a Microsoft Azure Recovery Services vault description: In this article, learn how to remove dependencies and then delete an Azure Backup Recovery Services vault. Previously updated : 12/20/2021 Last updated : 01/28/2022
Choose a client:
>The following operation is destructive and can't be undone. All backup data and backup items associated with the protected server will be permanently deleted. Proceed with caution. >[!Note]
->If you're sure that all backed-up items in the vault are no longer required and want to delete them at once without reviewing, [run this PowerShell script](?tabs=powershell#script-for-delete-vault). The script will delete all backup items recursively and eventually the entire vault.
+>If you're sure that all backed-up items in the vault are no longer required and want to delete them at once without reviewing, [run this PowerShell script](./scripts/delete-recovery-services-vault.md). The script will delete all backup items recursively and eventually the entire vault.
To delete a vault, follow these steps:
Follow these steps:
Install-Module -Name Az.RecoveryServices -Repository PSGallery -Force -AllowClobber ``` -- **Step 3**: Copy the following script, change the parameters (vault name, resource group name, subscription name, and subscription ID), and run it in your PowerShell environment.
-
- The file prompts the user for authentication. Provide the user details to start the vault deletion process.
-
- Alternately, you can use Cloud Shell in Azure portal for vaults with fewer backups.
+- **Step 3**: Save the PowerShell script in .ps1 format. Then, to run the script in your PowerShell console, type `./NameOfFile.ps1`. This recursively deletes all backup items and eventually the entire Recovery Services vault.
- :::image type="content" source="./media/backup-azure-delete-vault/delete-vault-using-cloud-shell-inline.png" alt-text="Screenshot showing to delete a vault using Cloud Shell." lightbox="./media/backup-azure-delete-vault/delete-vault-using-cloud-shell-expanded.png":::
+ >[!Note]
+ >To access the PowerShell script for vault deletion, see the [PowerShell script for vault deletion](./scripts/delete-recovery-services-vault.md) article.
**Run the script in the PowerShell console**
Follow these steps:
1. Delete Disaster Recovery items 1. Remove private endpoints
-###### Script for delete vault
-
-```azurepowershell-interactive
-Connect-AzAccount
-
-$VaultName = "Vault name" #enter vault name
-$Subscription = "Subscription name" #enter Subscription name
-$ResourceGroup = "Resource group name" #enter Resource group name
-$SubscriptionId = "Subscription ID" #enter Subscription ID
-
-Select-AzSubscription $Subscription
-$VaultToDelete = Get-AzRecoveryServicesVault -Name $VaultName -ResourceGroupName $ResourceGroup
-Set-AzRecoveryServicesAsrVaultContext -Vault $VaultToDelete
-
-Set-AzRecoveryServicesVaultProperty -Vault $VaultToDelete.ID -SoftDeleteFeatureState Disable #disable soft delete
-Write-Host "Soft delete disabled for the vault" $VaultName
-$containerSoftDelete = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureVM -WorkloadType AzureVM -VaultId $VaultToDelete.ID | Where-Object {$_.DeleteState -eq "ToBeDeleted"} #fetch backup items in soft delete state
-foreach ($softitem in $containerSoftDelete)
-{
- Undo-AzRecoveryServicesBackupItemDeletion -Item $softitem -VaultId $VaultToDelete.ID -Force #undelete items in soft delete state
-}
-#Invoking API to disable enhanced security
-$azProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
-$profileClient = New-Object -TypeName Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient -ArgumentList ($azProfile)
-$accesstoken = Get-AzAccessToken
-$token = $accesstoken.Token
-$authHeader = @{
- 'Content-Type'='application/json'
- 'Authorization'='Bearer ' + $token
-}
-$body = @{properties=@{enhancedSecurityState= "Disabled"}}
-$restUri = 'https://management.azure.com/subscriptions/'+$SubscriptionId+'/resourcegroups/'+$ResourceGroup+'/providers/Microsoft.RecoveryServices/vaults/'+$VaultName+'/backupconfig/vaultconfig?api-version=2019-05-13'
-$response = Invoke-RestMethod -Uri $restUri -Headers $authHeader -Body ($body | ConvertTo-JSON -Depth 9) -Method PATCH
-
-#Fetch all protected items and servers
-$backupItemsVM = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureVM -WorkloadType AzureVM -VaultId $VaultToDelete.ID
-$backupItemsSQL = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureWorkload -WorkloadType MSSQL -VaultId $VaultToDelete.ID
-$backupItemsAFS = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureStorage -WorkloadType AzureFiles -VaultId $VaultToDelete.ID
-$backupItemsSAP = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureWorkload -WorkloadType SAPHanaDatabase -VaultId $VaultToDelete.ID
-$backupContainersSQL = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -Status Registered -VaultId $VaultToDelete.ID | Where-Object {$_.ExtendedInfo.WorkloadType -eq "SQL"}
-$protectableItemsSQL = Get-AzRecoveryServicesBackupProtectableItem -WorkloadType MSSQL -VaultId $VaultToDelete.ID | Where-Object {$_.IsAutoProtected -eq $true}
-$backupContainersSAP = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -Status Registered -VaultId $VaultToDelete.ID | Where-Object {$_.ExtendedInfo.WorkloadType -eq "SAPHana"}
-$StorageAccounts = Get-AzRecoveryServicesBackupContainer -ContainerType AzureStorage -Status Registered -VaultId $VaultToDelete.ID
-$backupServersMARS = Get-AzRecoveryServicesBackupContainer -ContainerType "Windows" -BackupManagementType MAB -VaultId $VaultToDelete.ID
-$backupServersMABS = Get-AzRecoveryServicesBackupManagementServer -VaultId $VaultToDelete.ID| Where-Object { $_.BackupManagementType -eq "AzureBackupServer" }
-$backupServersDPM = Get-AzRecoveryServicesBackupManagementServer -VaultId $VaultToDelete.ID | Where-Object { $_.BackupManagementType-eq "SCDPM" }
-$pvtendpoints = Get-AzPrivateEndpointConnection -PrivateLinkResourceId $VaultToDelete.ID
-
-foreach($item in $backupItemsVM)
- {
- Disable-AzRecoveryServicesBackupProtection -Item $item -VaultId $VaultToDelete.ID -RemoveRecoveryPoints -Force #stop backup and delete Azure VM backup items
- }
-Write-Host "Disabled and deleted Azure VM backup items"
-
-foreach($item in $backupItemsSQL)
- {
- Disable-AzRecoveryServicesBackupProtection -Item $item -VaultId $VaultToDelete.ID -RemoveRecoveryPoints -Force #stop backup and delete SQL Server in Azure VM backup items
- }
-Write-Host "Disabled and deleted SQL Server backup items"
-
-foreach($item in $protectableItems)
- {
- Disable-AzRecoveryServicesBackupAutoProtection -BackupManagementType AzureWorkload -WorkloadType MSSQL -InputItem $item -VaultId $VaultToDelete.ID #disable auto-protection for SQL
- }
-Write-Host "Disabled auto-protection and deleted SQL protectable items"
-
-foreach($item in $backupContainersSQL)
- {
- Unregister-AzRecoveryServicesBackupContainer -Container $item -Force -VaultId $VaultToDelete.ID #unregister SQL Server in Azure VM protected server
- }
-Write-Host "Deleted SQL Servers in Azure VM containers"
-
-foreach($item in $backupItemsSAP)
- {
- Disable-AzRecoveryServicesBackupProtection -Item $item -VaultId $VaultToDelete.ID -RemoveRecoveryPoints -Force #stop backup and delete SAP HANA in Azure VM backup items
- }
-Write-Host "Disabled and deleted SAP HANA backup items"
-
-foreach($item in $backupContainersSAP)
- {
- Unregister-AzRecoveryServicesBackupContainer -Container $item -Force -VaultId $VaultToDelete.ID #unregister SAP HANA in Azure VM protected server
- }
-Write-Host "Deleted SAP HANA in Azure VM containers"
-
-foreach($item in $backupItemsAFS)
- {
- Disable-AzRecoveryServicesBackupProtection -Item $item -VaultId $VaultToDelete.ID -RemoveRecoveryPoints -Force #stop backup and delete Azure File Shares backup items
- }
-Write-Host "Disabled and deleted Azure File Share backups"
-
-foreach($item in $StorageAccounts)
- {
- Unregister-AzRecoveryServicesBackupContainer -container $item -Force -VaultId $VaultToDelete.ID #unregister storage accounts
- }
-Write-Host "Unregistered Storage Accounts"
-
-foreach($item in $backupServersMARS)
- {
- Unregister-AzRecoveryServicesBackupContainer -Container $item -Force -VaultId $VaultToDelete.ID #unregister MARS servers and delete corresponding backup items
- }
-Write-Host "Deleted MARS Servers"
-
-foreach($item in $backupServersMABS)
- {
- Unregister-AzRecoveryServicesBackupManagementServer -AzureRmBackupManagementServer $item -VaultId $VaultToDelete.ID #unregister MABS servers and delete corresponding backup items
- }
-Write-Host "Deleted MAB Servers"
-
-foreach($item in $backupServersDPM)
- {
- Unregister-AzRecoveryServicesBackupManagementServer -AzureRmBackupManagementServer $item -VaultId $VaultToDelete.ID #unregister DPM servers and delete corresponding backup items
- }
-Write-Host "Deleted DPM Servers"
-
-#Deletion of ASR Items
-
-$fabricObjects = Get-AzRecoveryServicesAsrFabric
-if ($null -ne $fabricObjects) {
- # First DisableDR all VMs.
- foreach ($fabricObject in $fabricObjects) {
- $containerObjects = Get-AzRecoveryServicesAsrProtectionContainer -Fabric $fabricObject
- foreach ($containerObject in $containerObjects) {
- $protectedItems = Get-AzRecoveryServicesAsrReplicationProtectedItem -ProtectionContainer $containerObject
- # DisableDR all protected items
- foreach ($protectedItem in $protectedItems) {
- Write-Host "Triggering DisableDR(Purge) for item:" $protectedItem.Name
- Remove-AzRecoveryServicesAsrReplicationProtectedItem -InputObject $protectedItem -Force
- Write-Host "DisableDR(Purge) completed"
- }
-
- $containerMappings = Get-AzRecoveryServicesAsrProtectionContainerMapping `
- -ProtectionContainer $containerObject
- # Remove all Container Mappings
- foreach ($containerMapping in $containerMappings) {
- Write-Host "Triggering Remove Container Mapping: " $containerMapping.Name
- Remove-AzRecoveryServicesAsrProtectionContainerMapping -ProtectionContainerMapping $containerMapping -Force
- Write-Host "Removed Container Mapping."
- }
- }
- $NetworkObjects = Get-AzRecoveryServicesAsrNetwork -Fabric $fabricObject
- foreach ($networkObject in $NetworkObjects)
- {
- #Get the PrimaryNetwork
- $PrimaryNetwork = Get-AzRecoveryServicesAsrNetwork -Fabric $fabricObject -FriendlyName $networkObject
- $NetworkMappings = Get-AzRecoveryServicesAsrNetworkMapping -Network $PrimaryNetwork
- foreach ($networkMappingObject in $NetworkMappings)
- {
- #Get the Neetwork Mappings
- $NetworkMapping = Get-AzRecoveryServicesAsrNetworkMapping -Name $networkMappingObject.Name -Network $PrimaryNetwork
- Remove-AzRecoveryServicesAsrNetworkMapping -InputObject $NetworkMapping
- }
- }
- # Remove Fabric
- Write-Host "Triggering Remove Fabric:" $fabricObject.FriendlyName
- Remove-AzRecoveryServicesAsrFabric -InputObject $fabricObject -Force
- Write-Host "Removed Fabric."
- }
-}
-
-foreach($item in $pvtendpoints)
- {
- $penamesplit = $item.Name.Split(".")
- $pename = $penamesplit[0]
- Remove-AzPrivateEndpointConnection -ResourceId $item.PrivateEndpoint.Id -Force #remove private endpoint connections
- Remove-AzPrivateEndpoint -Name $pename -ResourceGroupName $ResourceGroup -Force #remove private endpoints
- }
-Write-Host "Removed Private Endpoints"
-
-#Recheck ASR items in vault
-$fabricCount = 0
-$ASRProtectedItems = 0
-$ASRPolicyMappings = 0
-$fabricObjects = Get-AzRecoveryServicesAsrFabric
-if ($null -ne $fabricObjects) {
- foreach ($fabricObject in $fabricObjects) {
- $containerObjects = Get-AzRecoveryServicesAsrProtectionContainer -Fabric $fabricObject
- foreach ($containerObject in $containerObjects) {
- $protectedItems = Get-AzRecoveryServicesAsrReplicationProtectedItem -ProtectionContainer $containerObject
- foreach ($protectedItem in $protectedItems) {
- $ASRProtectedItems++
- }
- $containerMappings = Get-AzRecoveryServicesAsrProtectionContainerMapping `
- -ProtectionContainer $containerObject
- foreach ($containerMapping in $containerMappings) {
- $ASRPolicyMappings++
- }
- }
- $fabricCount++
- }
-}
-#Recheck presence of backup items in vault
-$backupItemsVMFin = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureVM -WorkloadType AzureVM -VaultId $VaultToDelete.ID
-$backupItemsSQLFin = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureWorkload -WorkloadType MSSQL -VaultId $VaultToDelete.ID
-$backupContainersSQLFin = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -Status Registered -VaultId $VaultToDelete.ID | Where-Object {$_.ExtendedInfo.WorkloadType -eq "SQL"}
-$protectableItemsSQLFin = Get-AzRecoveryServicesBackupProtectableItem -WorkloadType MSSQL -VaultId $VaultToDelete.ID | Where-Object {$_.IsAutoProtected -eq $true}
-$backupItemsSAPFin = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureWorkload -WorkloadType SAPHanaDatabase -VaultId $VaultToDelete.ID
-$backupContainersSAPFin = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -Status Registered -VaultId $VaultToDelete.ID | Where-Object {$_.ExtendedInfo.WorkloadType -eq "SAPHana"}
-$backupItemsAFSFin = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureStorage -WorkloadType AzureFiles -VaultId $VaultToDelete.ID
-$StorageAccountsFin = Get-AzRecoveryServicesBackupContainer -ContainerType AzureStorage -Status Registered -VaultId $VaultToDelete.ID
-$backupServersMARSFin = Get-AzRecoveryServicesBackupContainer -ContainerType "Windows" -BackupManagementType MAB -VaultId $VaultToDelete.ID
-$backupServersMABSFin = Get-AzRecoveryServicesBackupManagementServer -VaultId $VaultToDelete.ID| Where-Object { $_.BackupManagementType -eq "AzureBackupServer" }
-$backupServersDPMFin = Get-AzRecoveryServicesBackupManagementServer -VaultId $VaultToDelete.ID | Where-Object { $_.BackupManagementType-eq "SCDPM" }
-$pvtendpointsFin = Get-AzPrivateEndpointConnection -PrivateLinkResourceId $VaultToDelete.ID
-Write-Host "Number of backup items left in the vault and which need to be deleted:" $backupItemsVMFin.count "Azure VMs" $backupItemsSQLFin.count "SQL Server Backup Items" $backupContainersSQLFin.count "SQL Server Backup Containers" $protectableItemsSQLFin.count "SQL Server Instances" $backupItemsSAPFin.count "SAP HANA backup items" $backupContainersSAPFin.count "SAP HANA Backup Containers" $backupItemsAFSFin.count "Azure File Shares" $StorageAccountsFin.count "Storage Accounts" $backupServersMARSFin.count "MARS Servers" $backupServersMABSFin.count "MAB Servers" $backupServersDPMFin.count "DPM Servers" $pvtendpointsFin.count "Private endpoints"
-Write-Host "Number of ASR items left in the vault and which need to be deleted:" $ASRProtectedItems "ASR protected items" $ASRPolicyMappings "ASR policy mappings" $fabricCount "ASR Fabrics" $pvtendpointsFin.count "Private endpoints. Warning: This script will only remove the replication configuration from Azure Site Recovery and not from the source. Please cleanup the source manually. Visit https://go.microsoft.com/fwlink/?linkid=2182781 to learn more"
-Remove-AzRecoveryServicesVault -Vault $VaultToDelete
-#Finish
-
-```
--
-To delete individual backup items or to write your own script, use the following PowerShell commands:
-
-To stop protection and delete the backup data:
--- If you're using SQL in Azure VMs backup and enabled autoprotection for SQL instances, first disable the autoprotection.
+To delete individual backup items or to write your own script, use the following PowerShell commands:
+
+- Stop protection and delete the backup data:
+
+ If you're using SQL in Azure VMs backup and enabled autoprotection for SQL instances, first disable the autoprotection.
```PowerShell Disable-AzRecoveryServicesBackupAutoProtection
To stop protection and delete the backup data:
[Learn more](/powershell/module/az.recoveryservices/disable-azrecoveryservicesbackupautoprotection) on how to disable protection for an Azure Backup-protected item. -- Stop protection and delete data for all backup-protected items in cloud (for example: IaaS VM, Azure file share, and so on):
+- Stop protection and delete data for all backup-protected items in cloud (for example, IaaS VM, Azure file share, and so on):
```PowerShell Disable-AzRecoveryServicesBackupProtection
To stop protection and delete the backup data:
[<CommonParameters>] ```
- [Learn more](/powershell/module/az.recoveryservices/disable-azrecoveryservicesbackupprotection) about disables protection for a Backup-protected item.
+ [Learn more](/powershell/module/az.recoveryservices/disable-azrecoveryservicesbackupprotection) about disabling protection for a Backup-protected item.
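For example, a minimal sketch that stops protection and deletes the data for a single Azure VM backup item (the vault, resource group, and VM names are placeholders):

```azurepowershell-interactive
# Look up the vault and the backup item for a specific VM (hypothetical names).
$vault = Get-AzRecoveryServicesVault -ResourceGroupName "myResourceGroup" -Name "myVault"
$item = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureVM -WorkloadType AzureVM -VaultId $vault.ID |
    Where-Object { $_.Name -like "*myVM*" }

# Stop protection and remove the recovery points for that item.
Disable-AzRecoveryServicesBackupProtection -Item $item -VaultId $vault.ID -RemoveRecoveryPoints -Force
```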
After deleting the backed-up data, unregister any on-premises containers and management servers.
For more information on the ARMClient command, see [ARMClient README](https://gi
## Next steps -- [Learn about Recovery Services vaults](backup-azure-recovery-services-vault-overview.md)-- [Learn about monitoring and managing Recovery Services vaults](backup-azure-manage-windows-server.md)
+- [Learn about Recovery Services vaults](backup-azure-recovery-services-vault-overview.md).
+- [Learn about monitoring and managing Recovery Services vaults](backup-azure-manage-windows-server.md).
backup Delete Recovery Services Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/scripts/delete-recovery-services-vault.md
+
+ Title: Script Sample - Delete a Recovery Services vault
+description: Learn about how to use a PowerShell script to delete a Recovery Services vault.
+ Last updated : 01/30/2022+++++
+# PowerShell script to delete a Recovery Services vault
+
+This script helps you delete a Recovery Services vault.
+
+## How to execute the script
+
+1. Save the script shown in the [Script](#script) section below on your machine, with a name of your choice and the _.ps1_ extension.
+1. In the script, change the parameters (vault name, resource group name, subscription name, and subscription ID).
+1. To run it in your PowerShell environment, continue with the next steps.
+
+ Alternatively, you can use Cloud Shell in Azure portal for vaults with fewer backups.
+
+ :::image type="content" source="../media/backup-azure-delete-vault/delete-vault-using-cloud-shell-inline.png" alt-text="Screenshot showing to delete a vault using Cloud Shell." lightbox="../media/backup-azure-delete-vault/delete-vault-using-cloud-shell-expanded.png":::
+
+1. If you haven't already, upgrade to the latest version of PowerShell 7 by running the following command in the PowerShell window:
+
+ ```azurepowershell-interactive
+ iex "& { $(irm https://aka.ms/install-powershell.ps1) } -UseMSI"
+ ```
+
+1. Launch PowerShell 7 as Administrator.
+1. Before you run the script for vault deletion, run the following commands to upgrade the _Az.RecoveryServices_ module to the latest version:
+
+ ```azurepowershell-interactive
+ Uninstall-Module -Name Az.RecoveryServices
+ Set-ExecutionPolicy -ExecutionPolicy Unrestricted
+ Install-Module -Name Az.RecoveryServices -Repository PSGallery -Force -AllowClobber
+ ```
+
+1. In the PowerShell window, change the path to the location where the file is saved, and then run the file using **./NameOfFile.ps1**.
+1. Provide authentication via the browser by signing in to your Azure account.
+
+The script recursively deletes all the backup items and, eventually, the entire vault.
+
+## Script
+
+```azurepowershell-interactive
+Connect-AzAccount
+
+$VaultName = "Vault name" #enter vault name
+$Subscription = "Subscription name" #enter Subscription name
+$ResourceGroup = "Resource group name" #enter Resource group name
+$SubscriptionId = "Subscription ID" #enter Subscription ID
+
+Select-AzSubscription $Subscription
+$VaultToDelete = Get-AzRecoveryServicesVault -Name $VaultName -ResourceGroupName $ResourceGroup
+Set-AzRecoveryServicesAsrVaultContext -Vault $VaultToDelete
+
+Set-AzRecoveryServicesVaultProperty -Vault $VaultToDelete.ID -SoftDeleteFeatureState Disable #disable soft delete
+Write-Host "Soft delete disabled for the vault" $VaultName
+$containerSoftDelete = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureVM -WorkloadType AzureVM -VaultId $VaultToDelete.ID | Where-Object {$_.DeleteState -eq "ToBeDeleted"} #fetch backup items in soft delete state
+foreach ($softitem in $containerSoftDelete)
+{
+ Undo-AzRecoveryServicesBackupItemDeletion -Item $softitem -VaultId $VaultToDelete.ID -Force #undelete items in soft delete state
+}
+#Invoking API to disable enhanced security
+$azProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
+$profileClient = New-Object -TypeName Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient -ArgumentList ($azProfile)
+$accesstoken = Get-AzAccessToken
+$token = $accesstoken.Token
+$authHeader = @{
+ 'Content-Type'='application/json'
+ 'Authorization'='Bearer ' + $token
+}
+$body = @{properties=@{enhancedSecurityState= "Disabled"}}
+$restUri = 'https://management.azure.com/subscriptions/'+$SubscriptionId+'/resourcegroups/'+$ResourceGroup+'/providers/Microsoft.RecoveryServices/vaults/'+$VaultName+'/backupconfig/vaultconfig?api-version=2019-05-13' #Replace "management.azure.com" with "management.usgovcloudapi.net" if your subscription is in USGov.
+$response = Invoke-RestMethod -Uri $restUri -Headers $authHeader -Body ($body | ConvertTo-JSON -Depth 9) -Method PATCH
++
+#Fetch all protected items and servers
+$backupItemsVM = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureVM -WorkloadType AzureVM -VaultId $VaultToDelete.ID
+$backupItemsSQL = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureWorkload -WorkloadType MSSQL -VaultId $VaultToDelete.ID
+$backupItemsAFS = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureStorage -WorkloadType AzureFiles -VaultId $VaultToDelete.ID
+$backupItemsSAP = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureWorkload -WorkloadType SAPHanaDatabase -VaultId $VaultToDelete.ID
+$backupContainersSQL = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -Status Registered -VaultId $VaultToDelete.ID | Where-Object {$_.ExtendedInfo.WorkloadType -eq "SQL"}
+$protectableItemsSQL = Get-AzRecoveryServicesBackupProtectableItem -WorkloadType MSSQL -VaultId $VaultToDelete.ID | Where-Object {$_.IsAutoProtected -eq $true}
+$backupContainersSAP = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -Status Registered -VaultId $VaultToDelete.ID | Where-Object {$_.ExtendedInfo.WorkloadType -eq "SAPHana"}
+$StorageAccounts = Get-AzRecoveryServicesBackupContainer -ContainerType AzureStorage -Status Registered -VaultId $VaultToDelete.ID
+$backupServersMARS = Get-AzRecoveryServicesBackupContainer -ContainerType "Windows" -BackupManagementType MAB -VaultId $VaultToDelete.ID
+$backupServersMABS = Get-AzRecoveryServicesBackupManagementServer -VaultId $VaultToDelete.ID| Where-Object { $_.BackupManagementType -eq "AzureBackupServer" }
+$backupServersDPM = Get-AzRecoveryServicesBackupManagementServer -VaultId $VaultToDelete.ID | Where-Object { $_.BackupManagementType-eq "SCDPM" }
+$pvtendpoints = Get-AzPrivateEndpointConnection -PrivateLinkResourceId $VaultToDelete.ID
+
+foreach($item in $backupItemsVM)
+ {
+ Disable-AzRecoveryServicesBackupProtection -Item $item -VaultId $VaultToDelete.ID -RemoveRecoveryPoints -Force #stop backup and delete Azure VM backup items
+ }
+Write-Host "Disabled and deleted Azure VM backup items"
+
+foreach($item in $backupItemsSQL)
+ {
+ Disable-AzRecoveryServicesBackupProtection -Item $item -VaultId $VaultToDelete.ID -RemoveRecoveryPoints -Force #stop backup and delete SQL Server in Azure VM backup items
+ }
+Write-Host "Disabled and deleted SQL Server backup items"
+
+foreach($item in $protectableItemsSQL)
+ {
+ Disable-AzRecoveryServicesBackupAutoProtection -BackupManagementType AzureWorkload -WorkloadType MSSQL -InputItem $item -VaultId $VaultToDelete.ID #disable auto-protection for SQL
+ }
+Write-Host "Disabled auto-protection and deleted SQL protectable items"
+
+foreach($item in $backupContainersSQL)
+ {
+ Unregister-AzRecoveryServicesBackupContainer -Container $item -Force -VaultId $VaultToDelete.ID #unregister SQL Server in Azure VM protected server
+ }
+Write-Host "Deleted SQL Servers in Azure VM containers"
+
+foreach($item in $backupItemsSAP)
+ {
+ Disable-AzRecoveryServicesBackupProtection -Item $item -VaultId $VaultToDelete.ID -RemoveRecoveryPoints -Force #stop backup and delete SAP HANA in Azure VM backup items
+ }
+Write-Host "Disabled and deleted SAP HANA backup items"
+
+foreach($item in $backupContainersSAP)
+ {
+ Unregister-AzRecoveryServicesBackupContainer -Container $item -Force -VaultId $VaultToDelete.ID #unregister SAP HANA in Azure VM protected server
+ }
+Write-Host "Deleted SAP HANA in Azure VM containers"
+
+foreach($item in $backupItemsAFS)
+ {
+ Disable-AzRecoveryServicesBackupProtection -Item $item -VaultId $VaultToDelete.ID -RemoveRecoveryPoints -Force #stop backup and delete Azure File Shares backup items
+ }
+Write-Host "Disabled and deleted Azure File Share backups"
+
+foreach($item in $StorageAccounts)
+ {
+ Unregister-AzRecoveryServicesBackupContainer -container $item -Force -VaultId $VaultToDelete.ID #unregister storage accounts
+ }
+Write-Host "Unregistered Storage Accounts"
+
+foreach($item in $backupServersMARS)
+ {
+ Unregister-AzRecoveryServicesBackupContainer -Container $item -Force -VaultId $VaultToDelete.ID #unregister MARS servers and delete corresponding backup items
+ }
+Write-Host "Deleted MARS Servers"
+
+foreach($item in $backupServersMABS)
+ {
+ Unregister-AzRecoveryServicesBackupManagementServer -AzureRmBackupManagementServer $item -VaultId $VaultToDelete.ID #unregister MABS servers and delete corresponding backup items
+ }
+Write-Host "Deleted MAB Servers"
+
+foreach($item in $backupServersDPM)
+ {
+ Unregister-AzRecoveryServicesBackupManagementServer -AzureRmBackupManagementServer $item -VaultId $VaultToDelete.ID #unregister DPM servers and delete corresponding backup items
+ }
+Write-Host "Deleted DPM Servers"
+
+#Deletion of ASR Items
+
+$fabricObjects = Get-AzRecoveryServicesAsrFabric
+if ($null -ne $fabricObjects) {
+ # First DisableDR all VMs.
+ foreach ($fabricObject in $fabricObjects) {
+ $containerObjects = Get-AzRecoveryServicesAsrProtectionContainer -Fabric $fabricObject
+ foreach ($containerObject in $containerObjects) {
+ $protectedItems = Get-AzRecoveryServicesAsrReplicationProtectedItem -ProtectionContainer $containerObject
+ # DisableDR all protected items
+ foreach ($protectedItem in $protectedItems) {
+ Write-Host "Triggering DisableDR(Purge) for item:" $protectedItem.Name
+ Remove-AzRecoveryServicesAsrReplicationProtectedItem -InputObject $protectedItem -Force
+ Write-Host "DisableDR(Purge) completed"
+ }
+
+ $containerMappings = Get-AzRecoveryServicesAsrProtectionContainerMapping `
+ -ProtectionContainer $containerObject
+ # Remove all Container Mappings
+ foreach ($containerMapping in $containerMappings) {
+ Write-Host "Triggering Remove Container Mapping: " $containerMapping.Name
+ Remove-AzRecoveryServicesAsrProtectionContainerMapping -ProtectionContainerMapping $containerMapping -Force
+ Write-Host "Removed Container Mapping."
+ }
+ }
+ $NetworkObjects = Get-AzRecoveryServicesAsrNetwork -Fabric $fabricObject
+ foreach ($networkObject in $NetworkObjects)
+ {
+ #Get the PrimaryNetwork
+ $PrimaryNetwork = Get-AzRecoveryServicesAsrNetwork -Fabric $fabricObject -FriendlyName $networkObject.FriendlyName
+ $NetworkMappings = Get-AzRecoveryServicesAsrNetworkMapping -Network $PrimaryNetwork
+ foreach ($networkMappingObject in $NetworkMappings)
+ {
+ #Get the Network Mappings
+ $NetworkMapping = Get-AzRecoveryServicesAsrNetworkMapping -Name $networkMappingObject.Name -Network $PrimaryNetwork
+ Remove-AzRecoveryServicesAsrNetworkMapping -InputObject $NetworkMapping
+ }
+ }
+ # Remove Fabric
+ Write-Host "Triggering Remove Fabric:" $fabricObject.FriendlyName
+ Remove-AzRecoveryServicesAsrFabric -InputObject $fabricObject -Force
+ Write-Host "Removed Fabric."
+ }
+}
+
+foreach($item in $pvtendpoints)
+ {
+ $penamesplit = $item.Name.Split(".")
+ $pename = $penamesplit[0]
+ Remove-AzPrivateEndpointConnection -ResourceId $item.PrivateEndpoint.Id -Force #remove private endpoint connections
+ Remove-AzPrivateEndpoint -Name $pename -ResourceGroupName $ResourceGroup -Force #remove private endpoints
+ }
+Write-Host "Removed Private Endpoints"
+
+#Recheck ASR items in vault
+$fabricCount = 0
+$ASRProtectedItems = 0
+$ASRPolicyMappings = 0
+$fabricObjects = Get-AzRecoveryServicesAsrFabric
+if ($null -ne $fabricObjects) {
+ foreach ($fabricObject in $fabricObjects) {
+ $containerObjects = Get-AzRecoveryServicesAsrProtectionContainer -Fabric $fabricObject
+ foreach ($containerObject in $containerObjects) {
+ $protectedItems = Get-AzRecoveryServicesAsrReplicationProtectedItem -ProtectionContainer $containerObject
+ foreach ($protectedItem in $protectedItems) {
+ $ASRProtectedItems++
+ }
+ $containerMappings = Get-AzRecoveryServicesAsrProtectionContainerMapping `
+ -ProtectionContainer $containerObject
+ foreach ($containerMapping in $containerMappings) {
+ $ASRPolicyMappings++
+ }
+ }
+ $fabricCount++
+ }
+}
+#Recheck presence of backup items in vault
+$backupItemsVMFin = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureVM -WorkloadType AzureVM -VaultId $VaultToDelete.ID
+$backupItemsSQLFin = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureWorkload -WorkloadType MSSQL -VaultId $VaultToDelete.ID
+$backupContainersSQLFin = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -Status Registered -VaultId $VaultToDelete.ID | Where-Object {$_.ExtendedInfo.WorkloadType -eq "SQL"}
+$protectableItemsSQLFin = Get-AzRecoveryServicesBackupProtectableItem -WorkloadType MSSQL -VaultId $VaultToDelete.ID | Where-Object {$_.IsAutoProtected -eq $true}
+$backupItemsSAPFin = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureWorkload -WorkloadType SAPHanaDatabase -VaultId $VaultToDelete.ID
+$backupContainersSAPFin = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -Status Registered -VaultId $VaultToDelete.ID | Where-Object {$_.ExtendedInfo.WorkloadType -eq "SAPHana"}
+$backupItemsAFSFin = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureStorage -WorkloadType AzureFiles -VaultId $VaultToDelete.ID
+$StorageAccountsFin = Get-AzRecoveryServicesBackupContainer -ContainerType AzureStorage -Status Registered -VaultId $VaultToDelete.ID
+$backupServersMARSFin = Get-AzRecoveryServicesBackupContainer -ContainerType "Windows" -BackupManagementType MAB -VaultId $VaultToDelete.ID
+$backupServersMABSFin = Get-AzRecoveryServicesBackupManagementServer -VaultId $VaultToDelete.ID| Where-Object { $_.BackupManagementType -eq "AzureBackupServer" }
+$backupServersDPMFin = Get-AzRecoveryServicesBackupManagementServer -VaultId $VaultToDelete.ID | Where-Object { $_.BackupManagementType-eq "SCDPM" }
+$pvtendpointsFin = Get-AzPrivateEndpointConnection -PrivateLinkResourceId $VaultToDelete.ID
+Write-Host "Number of backup items left in the vault and which need to be deleted:" $backupItemsVMFin.count "Azure VMs" $backupItemsSQLFin.count "SQL Server Backup Items" $backupContainersSQLFin.count "SQL Server Backup Containers" $protectableItemsSQLFin.count "SQL Server Instances" $backupItemsSAPFin.count "SAP HANA backup items" $backupContainersSAPFin.count "SAP HANA Backup Containers" $backupItemsAFSFin.count "Azure File Shares" $StorageAccountsFin.count "Storage Accounts" $backupServersMARSFin.count "MARS Servers" $backupServersMABSFin.count "MAB Servers" $backupServersDPMFin.count "DPM Servers" $pvtendpointsFin.count "Private endpoints"
+Write-Host "Number of ASR items left in the vault and which need to be deleted:" $ASRProtectedItems "ASR protected items" $ASRPolicyMappings "ASR policy mappings" $fabricCount "ASR Fabrics" $pvtendpointsFin.count "Private endpoints. Warning: This script will only remove the replication configuration from Azure Site Recovery and not from the source. Please cleanup the source manually. Visit https://go.microsoft.com/fwlink/?linkid=2182781 to learn more"
+Remove-AzRecoveryServicesVault -Vault $VaultToDelete
+#Finish
+
+```
+
+## Next steps
+
+[Learn more](../backup-azure-delete-vault.md) about the vault deletion process.
cdn Cdn Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-features.md
The following table compares the features available with each product.
| Easy integration with Azure services, such as [Storage](cdn-create-a-storage-account-with-cdn.md), [Web Apps](cdn-add-to-web-app.md), and [Media Services](../media-services/previous/media-services-portal-manage-streaming-endpoints.md) | **&#x2713;** |**&#x2713;** |**&#x2713;** |**&#x2713;** | | Management via [REST API](/rest/api/cdn/), [.NET](cdn-app-dev-net.md), [Node.js](cdn-app-dev-node.md), or [PowerShell](cdn-manage-powershell.md) | **&#x2713;** |**&#x2713;** |**&#x2713;** |**&#x2713;** | | [Compression MIME types](./cdn-improve-performance.md) |Configurable |Configurable |Configurable |Configurable |
-| Compression encodings |gzip, brotli |gzip |gzip, deflate, bzip2, brotli |gzip, deflate, bzip2, brotli |
+| Compression encodings |gzip, brotli |gzip |gzip, deflate, bzip2 |gzip, deflate, bzip2 |
## Migration
cognitive-services Call Center Transcription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/call-center-transcription.md
Title: Call Center Transcription - Speech service
-description: A common scenario for speech-to-text is transcribing large volumes of telephony data that come from various systems, such as Interactive Voice Response (IVR). Using Speech service and the Unified speech model, a business can get high-quality transcriptions with audio capture systems.
+description: A common scenario for speech-to-text is transcribing large volumes of telephony data that come from various systems, such as interactive voice response (IVR). By using Speech service and the Unified speech model, a business can get high-quality transcriptions with audio capture systems.
# Speech service for telephony data
-Telephony data that is generated through landlines, mobile phones, and radios are typically low quality, and narrowband in the range of 8 KHz, which creates challenges when converting speech-to-text. The latest speech recognition models from the Speech service excel at transcribing this telephony data, even in cases when the data is difficult for a human to understand. These models are trained with large volumes of telephony data, and have best-in-market recognition accuracy, even in noisy environments.
+Telephony data that's generated through landlines, mobile phones, and radios is ordinarily of low quality. This data is also narrowband, in the range of 8&nbsp;kHz, which can create challenges when you're converting speech to text.
-A common scenario for speech-to-text is transcribing large volumes of telephony data that may come from various systems, such as Interactive Voice Response (IVR). The audio these systems provide can be stereo or mono, and raw with little-to-no post processing done on the signal. Using the Speech service and the Unified speech model, a business can get high-quality transcriptions, whatever systems are used to capture audio.
+The latest Speech service speech-recognition models excel at transcribing this telephony data, even when the data is difficult for a human to understand. These models are trained with large volumes of telephony data, and they have best-in-market recognition accuracy, even in noisy environments.
-Telephony data can be used to better understand your customers' needs, identify new marketing opportunities, or evaluate the performance of call center agents. After the data is transcribed, a business can use the output for purposes such as improved telemetry, identifying key phrases, or analyzing customer sentiment.
+A common scenario for speech-to-text is the transcription of large volumes of telephony data that comes from a variety of systems, such as interactive voice response (IVR). The audio that these systems provide can be stereo or mono, and raw, with little to no post-processing done on the signal. By using Speech service and the Unified speech model, your business can get high-quality transcriptions, whatever systems you use to capture audio.
-The technologies outlined in this page are by Microsoft internally for various support call processing services, both in real-time and batch mode.
+You can use telephony data to better understand your customers' needs, identify new marketing opportunities, or evaluate the performance of call center agents. After the data is transcribed, your business can use the output for improving telemetry, identifying key phrases, analyzing customer *sentiment*, and other purposes.
-Let's review some of the technology and related features the Speech service offers.
+The technologies outlined in this article are used by Microsoft internally for various support-call processing services, both in real-time and batch mode.
+
+This article discusses some of the technology and related features that Speech service offers.
> [!IMPORTANT]
-> The Speech service Unified model is trained with diverse data and offers a single-model solution to a number of scenario from Dictation to Telephony analytics.
+> The Speech service Unified model is trained with diverse data and offers a single-model solution to many scenarios, from dictation to telephony analytics.
+
+## Azure technology for call centers
-## Azure Technology for Call Centers
+Beyond the functional aspect of the Speech service features, their primary purpose, as applied to the call center, is to improve the customer experience in three separate domains:
-Beyond the functional aspect of the Speech service features, their primary purpose – when applied to the call center – is to improve the customer experience. Three clear domains exist in this regard:
+- Post-call analytics, which is essentially the batch processing of call recordings after the call.
+- Real-time analytics, which is the processing of an audio signal to extract various insights as the call is taking place (with sentiment as a prominent use case).
+- Voice assistants (bots), which either drive the dialogue between customers and the bot in an attempt to solve their issues, without agent participation, or apply AI protocols to assist the agent.
-- Post-call analytics, which is essentially batch processing of call recordings after the call.
-- Real-time analytics, which is processing of the audio signal to extract various insights as the call is taking place (with sentiment being a prominent use case).
-- Voice assistants (bots), either driving the dialogue between the customer and the bot in an attempt to solve the customer's issue with no agent participation, or being the application of artificial intelligence (AI) protocols to assist the agent.
+Here is an architecture diagram showing a typical implementation of a batch scenario:
+![Diagram of call center transcription architecture.](media/scenarios/call-center-transcription-architecture.png)
-A typical architecture diagram of the implementation of a batch scenario is depicted in the picture below
-![Call center transcription architecture](media/scenarios/call-center-transcription-architecture.png)
+## Components of speech analytics technology
-## Speech Analytics Technology Components
+Whether the domain is post-call or real-time, Azure offers a set of mature and emerging technologies to help improve the customer experience.
-Whether the domain is post-call or real-time, Azure offers a set of mature and emerging technologies to improve the customer experience.
+### Speech-to-text
-### Speech to text (STT)
+[Speech-to-text](speech-to-text.md) is the most sought-after feature in any call center solution. Because many of the downstream analytics processes rely on transcribed text, the word error rate (WER) metric is of utmost importance. (WER is ordinarily computed as the number of word substitutions, deletions, and insertions divided by the number of words in the reference transcript.) Key challenges in call center transcription include the noise that's prevalent in the call center (for example, other agents speaking in the background), the rich variety of language locales and dialects, and the low quality of the actual telephone signal.
-[Speech-to-text](speech-to-text.md) is the most sought-after feature in any call center solution. Because many of the downstream analytics processes rely on transcribed text, the word error rate (_WER_) is of utmost importance. One of the key challenges in call center transcription is the noise that's prevalent in the call center (for example other agents speaking in the background), the rich variety of language locales and dialects as well as the low quality of the actual telephone signal. WER is highly correlated with how well the acoustic and language models are trained for a given locale, thus the ability to customize the model to your locale is important. Our latest Unified version 4.x models are the solution to both transcription accuracy and latency. Trained with tens of thousands of hours of acoustic data and billions of lexical information, Unified models are the most accurate models in the market to transcribe call center data.
+WER is highly correlated with how well the acoustic and language models are trained for a specific locale. Therefore, it's important to be able to customize the model to your locale. Our latest Unified version 4.x models are the solution to both transcription accuracy and latency. Because they're trained with tens of thousands of hours of acoustic data and billions of bits of lexical information, Unified models are the most accurate in the market for transcribing call center data.
### Sentiment
-Gauging whether the customer had a good experience is one of the most important areas of Speech analytics when applied to the call center space. Our [Batch Transcription API](batch-transcription.md) offers sentiment analysis per utterance. You can aggregate the set of values obtained as part of a call transcript to determine the sentiment of the call for both your agents and the customer.
+In the call center space, the ability to gauge whether customers have had a good experience is one of the most important areas of Speech analytics. The Microsoft [Batch Transcription API](batch-transcription.md) offers sentiment analysis per utterance. You can aggregate the set of values that are obtained as part of a call transcript to determine the sentiment of the call for both your agents and the customer.
### Silence (non-talk)
-It is not uncommon for 35 percent of a support call to be what we call non-talk time. Some scenarios for which non-talk occurs are: agents looking up prior case history with a customer, agents using tools that allow them to access the customer's desktop and perform functions, customers sitting on hold waiting for a transfer, and so on. It is extremely important to gauge when silence is occurring in a call as there are number of important customer sensitivities that occur around these types of scenarios and where they occur in the call.
+It's not uncommon for as much as 35 percent of a support call to be what's called *non-talk time*. Some scenarios during which non-talk occurs might include:
+* Agents taking time to look up prior case history with a customer.
+* Agents using tools that allow them to access the customer's desktop and perform certain functions.
+* Customers waiting on hold for a call transfer.
+
+It's important to gauge when silence is occurring in a call, because critical customer sensitivities can result from these types of scenarios and where they occur in the call.
### Translation
-Some companies are experimenting with providing translated transcripts from foreign language support calls so that delivery managers can understand the world-wide experience of their customers. Our [translation](./speech-translation.md) capabilities are unsurpassed. We can translate audio-to-audio or audio-to-text for a large number of locales.
+Some companies are experimenting with providing translated transcripts from foreign-language support calls, so that delivery managers can understand the worldwide experience of their customers. Speech service's [translation](./speech-translation.md) capabilities are excellent, featuring audio-to-audio and audio-to-text translation for a large number of locales.
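As a minimal sketch of that capability, the following Speech SDK snippet translates English call audio to German text. The key, region, and file name are placeholders, and error handling is omitted:

```csharp
using System;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;
using Microsoft.CognitiveServices.Speech.Translation;

// A minimal sketch: translate English call audio to German text.
// The key, region, and file name are placeholders.
var config = SpeechTranslationConfig.FromSubscription("<your-speech-key>", "<your-region>");
config.SpeechRecognitionLanguage = "en-US";
config.AddTargetLanguage("de");

using var audioConfig = AudioConfig.FromWavFileInput("call-recording.wav");
using var recognizer = new TranslationRecognizer(config, audioConfig);

var result = await recognizer.RecognizeOnceAsync();
if (result.Reason == ResultReason.TranslatedSpeech)
{
    var german = result.Translations["de"];
    Console.WriteLine($"Recognized: {result.Text}");
    Console.WriteLine($"German: {german}");
}
```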
-### Text to Speech
+### Text-to-speech
-[Text-to-speech](text-to-speech.md) is another important area in implementing bots that interact with the customers. The typical pathway is that the customer speaks, their voice is transcribed to text, the text is analyzed for intents, a response is synthesized based on the recognized intent, and then an asset is either surfaced to the customer or a synthesized voice response is generated. Of course all of this has to occur quickly – thus low-latency is an important component in the success of these systems.
+[Text-to-speech](text-to-speech.md) is another important technology for bots that interact with customers. The typical pathway is that a customer speaks, the voice is transcribed to text, the text is analyzed for intents, a response is synthesized based on the recognized intent, and then an asset is either surfaced to the customer or a synthesized voice response is generated. Because this entire process must occur quickly, low latency is an important component in the success of these systems.
-Our end-to-end latency is considerably low for the various technologies involved such as [Speech-to-text](speech-to-text.md), [LUIS](https://azure.microsoft.com/services/cognitive-services/language-understanding-intelligent-service/), [Bot Framework](https://dev.botframework.com/), [Text-to-speech](text-to-speech.md).
+Speech service's end-to-end latency is quite low across the various technologies involved, such as [speech-to-text](speech-to-text.md), [Language Understanding (LUIS)](https://azure.microsoft.com/services/cognitive-services/language-understanding-intelligent-service/), [Bot Framework](https://dev.botframework.com/), and [text-to-speech](text-to-speech.md).
-Our new voices are also indistinguishable from human voices. You can use our voices to give your bot its unique personality.
+Our new synthesized voices are also nearly indistinguishable from human voices. You can use them to give your bot its unique personality.
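As a minimal sketch, synthesizing a bot's reply with the Speech SDK might look like the following. The key, region, and voice name are placeholders; by default, the synthesizer plays the audio on the default speaker:

```csharp
using System;
using Microsoft.CognitiveServices.Speech;

// A minimal sketch: synthesize a bot reply to the default speaker.
// The key, region, and voice name are placeholders.
var config = SpeechConfig.FromSubscription("<your-speech-key>", "<your-region>");
config.SpeechSynthesisVoiceName = "en-US-JennyNeural";

using var synthesizer = new SpeechSynthesizer(config);
var result = await synthesizer.SpeakTextAsync("Thanks for calling. How can I help you today?");

if (result.Reason == ResultReason.SynthesizingAudioCompleted)
{
    Console.WriteLine("Synthesis finished playing.");
}
```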
### Search
-Another staple of analytics is to identify interactions where a specific event or experience has occurred. This is typically done with one of two approaches; either an ad hoc search where the user simply types a phrase and the system responds, or a more structured query where an analyst can create a set of logical statements that identify a scenario in a call, and then each call can be indexed against that set of queries. A good search example is the ubiquitous compliance statement "this call shall be recorded for quality purposes... ". Many companies want to make sure that their agents are providing this disclaimer to customers before the call is actually recorded. Most analytics systems have the ability to trend the behaviors found by query/search algorithms, and this reporting of trends is ultimately one of the most important functions of an analytics system. Through [Cognitive services directory](https://azure.microsoft.com/services/cognitive-services/directory/search/) your end-to-end solution can be significantly enhanced with indexing and search capabilities.
+Another staple of analytics is to identify interactions where a specific event or experience has occurred. You would ordinarily do this with either of two approaches:
+* An ad hoc search, where users simply type a phrase and the system responds.
+* A more structured query where an analyst can create a set of logical statements that identify a scenario in a call, and then each call can be indexed against that set of queries.
-### Key Phrase Extraction
+A good search example is the ubiquitous compliance statement, "This call will be recorded for quality purposes." Many companies want to make sure that their agents are providing this disclaimer to customers before the call is actually recorded. Most analytics systems have the ability to trend the behaviors found by query or search algorithms, and this reporting of trends is ultimately one of the most important functions of an analytics system. Through the [Cognitive Services directory](https://azure.microsoft.com/services/cognitive-services/directory/search/), your end-to-end solution can be significantly enhanced with indexing and search capabilities.
-This area is one of the more challenging analytics applications and one that is benefiting from the application of AI and machine learning. The primary scenario in this case is to infer customer intent. Why is the customer calling? What is the customer problem? Why did the customer have a negative experience? Our [Language service](https://azure.microsoft.com/services/cognitive-services/text-analytics/) provides a set of analytics out of the box for quickly upgrading your end-to-end solution for extracting those important keywords or phrases.
+### Key phrase extraction
-Let's now have a look at the batch processing and the real-time pipelines for speech recognition in a bit more detail.
+This area is one of the more challenging analytics applications, and one that benefits from the application of AI and machine learning. The primary scenario in this case is to infer customer intent. Why is the customer calling? What is the customer's problem? Why did the customer have a negative experience? [Cognitive Service for Language](https://azure.microsoft.com/services/cognitive-services/text-analytics/) provides a set of analytics out of the box for quickly upgrading your end-to-end solution for extracting those important keywords or phrases.
+
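As a minimal sketch, extracting key phrases from one transcribed utterance with the Azure.AI.TextAnalytics client library might look like this. The endpoint, key, and sample utterance are placeholders:

```csharp
using System;
using Azure;
using Azure.AI.TextAnalytics;

// A minimal sketch: extract key phrases from a transcribed utterance.
// The endpoint, key, and sample text are placeholders.
var client = new TextAnalyticsClient(
    new Uri("https://<your-language-resource>.cognitiveservices.azure.com/"),
    new AzureKeyCredential("<your-language-key>"));

KeyPhraseCollection phrases = client.ExtractKeyPhrases(
    "My router keeps dropping the connection after the latest firmware update.").Value;

foreach (string phrase in phrases)
{
    Console.WriteLine(phrase);
}
```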
+The next sections cover batch processing and the real-time pipelines for speech recognition in a bit more detail.
## Batch transcription of call center data
-For transcribing bulk audio we developed the [Batch Transcription API](batch-transcription.md). The Batch Transcription API was developed to transcribe large amounts of audio data asynchronously. With regard to transcribing call center data, our solution is based on these pillars:
+To transcribe audio in bulk, Microsoft developed the [Batch Transcription API](batch-transcription.md), which transcribes large amounts of audio data asynchronously. For transcribing call center data specifically, this solution is based on three pillars:
+
+- **Accuracy**: By applying fourth-generation Unified models, we offer high-quality transcription.
+- **Latency**: Bulk transcriptions must be performed quickly. The transcription jobs that are initiated via the [Batch Transcription API](batch-transcription.md) are queued immediately, and when the job starts running, it's performed faster than real-time transcription.
+- **Security**: We understand that calls might contain sensitive data, so security is our highest priority. To this end, our service has obtained ISO, SOC, HIPAA, and PCI certifications.
-- **Accuracy** - With fourth-generation Unified models, we offer unsurpassed transcription quality.
-- **Latency** - We understand that when doing bulk transcriptions, the transcriptions are needed quickly. The transcription jobs initiated via the [Batch Transcription API](batch-transcription.md) will be queued immediately, and once the job starts running it's performed faster than real-time transcription.
-- **Security** - We understand that calls may contain sensitive data. Rest assured that security is one of our highest priorities. Our service has obtained ISO, SOC, HIPAA, PCI certifications.
+Call centers generate large volumes of audio data on a daily basis. If your business stores telephony data in a central location, such as an Azure storage account, you can use the [Batch Transcription API](batch-transcription.md) to asynchronously request and receive transcriptions.
-Call centers generate large volumes of audio data on a daily basis. If your business stores telephony data in a central location, such as Azure Storage, you can use the [Batch Transcription API](batch-transcription.md) to asynchronously request and receive transcriptions.
+A typical solution uses these products and services:
-A typical solution uses these
+- **Speech service**: For transcribing speech-to-text. A standard subscription for Speech service is required to use the Batch Transcription API. Free subscriptions will not work.
+- **[Azure storage account](https://azure.microsoft.com/services/storage/)**: For storing telephony data and the transcripts that are returned by the Batch Transcription API. This storage account should use notifications, specifically for when new files are added. These notifications are used to trigger the transcription process.
+- **[Azure Functions](../../azure-functions/index.yml)**: For creating the shared access signature (SAS) URI for each recording, and triggering the HTTP POST request to start a transcription (a minimal request sketch follows this list). Additionally, you use Azure Functions to create requests to retrieve and delete transcriptions by using the Batch Transcription API.
-- The Speech service is used to transcribe speech-to-text. A standard subscription (S0) for the Speech service is required to use the Batch Transcription API. Free subscriptions (F0) will not work.
-- [Azure Storage](https://azure.microsoft.com/services/storage/) is used to store telephony data, and the transcripts returned by the Batch Transcription API. This storage account should use notifications, specifically for when new files are added. These notifications are used to trigger the transcription process.
-- [Azure Functions](../../azure-functions/index.yml) is used to create the shared access signatures (SAS) URI for each recording, and trigger the HTTP POST request to start a transcription. Additionally, Azure Functions is used to create requests to retrieve and delete transcriptions using the Batch Transcription API.
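As a rough sketch of the POST request mentioned above, the following snippet starts a transcription through the v3.0 Batch Transcription REST endpoint. The key, region, and SAS URI are placeholders, and the request body shows only a minimal set of properties:

```csharp
using System;
using System.Net.Http;
using System.Text;

// A minimal sketch: start a batch transcription for one recording.
// The key, region, and SAS URI are placeholders.
var key = "<your-speech-key>";
var region = "<your-region>";
var recordingSasUri = "https://<account>.blob.core.windows.net/calls/call1.wav?<sas-token>";

using var client = new HttpClient();
client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);

var body = $@"{{
  ""contentUrls"": [""{recordingSasUri}""],
  ""locale"": ""en-US"",
  ""displayName"": ""Call center batch transcription""
}}";

var response = await client.PostAsync(
    $"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions",
    new StringContent(body, Encoding.UTF8, "application/json"));

// On success, the service returns 201 Created with a Location header
// that points at the new transcription, which you can poll for results.
Console.WriteLine($"{(int)response.StatusCode}: {response.Headers.Location}");
```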
+Internally, Microsoft uses these technologies to support Microsoft customer calls in batch mode, as shown in the following diagram:
-Internally we are using the above technologies to support Microsoft customer calls in Batch mode.
## Real-time transcription for call center data
-Some businesses are required to transcribe conversations in real-time. Real-time transcription can be used to identify key-words and trigger searches for content and resources relevant to the conversation, for monitoring sentiment, to improve accessibility, or to provide translations for customers and agents who aren't native speakers.
+Some businesses are required to transcribe conversations in real time. You can use real-time transcription to identify keywords and trigger searches for content and resources that are relevant to the conversation, to monitor sentiment, to improve accessibility, or to provide translations for customers and agents who aren't native speakers.
For scenarios that require real-time transcription, we recommend using the [Speech SDK](speech-sdk.md). Currently, speech-to-text is available in [more than 20 languages](language-support.md), and the SDK is available in C++, C#, Java, Python, JavaScript, Objective-C, and Go. Samples are available in each language on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk). For the latest news and updates, see [Release notes](releasenotes.md).
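As a minimal sketch, continuous real-time recognition with the C# Speech SDK might look like the following. The key and region are placeholders, and the default microphone is used as the audio source:

```csharp
using System;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

// A minimal sketch: continuously transcribe audio from the default microphone.
// The key and region are placeholders.
var speechConfig = SpeechConfig.FromSubscription("<your-speech-key>", "<your-region>");
using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
using var recognizer = new SpeechRecognizer(speechConfig, audioConfig);

recognizer.Recognized += (s, e) =>
{
    if (e.Result.Reason == ResultReason.RecognizedSpeech)
    {
        Console.WriteLine($"Transcribed: {e.Result.Text}");
    }
};

await recognizer.StartContinuousRecognitionAsync();
Console.WriteLine("Listening. Press Enter to stop.");
Console.ReadLine();
await recognizer.StopContinuousRecognitionAsync();
```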
-Internally we are using the above technologies to analyze in real-time Microsoft customer calls as they happen, as illustrated in the following diagram.
+Internally, Microsoft uses the previously mentioned technologies to analyze Microsoft customer calls in real time, as shown in the following diagram:
-![Batch Architecture](media/scenarios/call-center-reatime-pipeline.png)
+![Diagram showing the technologies that are used to support Microsoft customer calls in real time.](media/scenarios/call-center-reatime-pipeline.png)
-## A word on IVRs
+## About interactive voice responses
-The Speech service can be easily integrated in any solution by using either the [Speech SDK](speech-sdk.md) or the [REST API](./overview.md#reference-docs). However, call center transcription may require additional technologies. Typically, a connection between an IVR system and Azure is required. Although we do not offer such components, here is a description what a connection to an IVR entails.
+You can easily integrate Speech service into any solution by using either the [Speech SDK](speech-sdk.md) or the [REST API](./overview.md#reference-docs). However, call center transcription might require additional technologies. Ordinarily, a connection between an IVR system and Azure is required. Although we don't offer such components, the next paragraph describes what a connection to an IVR entails.
-Several IVR or telephony service products (such as Genesys or AudioCodes) offer integration capabilities that can be leveraged to enable inbound and outbound audio pass-through to an Azure service. Basically, a custom Azure service might provide a specific interface to define phone call sessions (such as Call Start or Call End) and expose a WebSocket API to receive inbound stream audio that is used with the Speech service. Outbound responses, such as conversation transcription or connections with the Bot Framework, can be synthesized with Microsoft's text-to-speech service and returned to the IVR for playback.
+Several IVR or telephony service products (such as Genesys or AudioCodes) offer integration capabilities that can be applied to enable an inbound and outbound audio pass-through to an Azure service. Basically, a custom Azure service might provide a specific interface to define phone call sessions (such as Call Start or Call End) and expose a WebSocket API to receive inbound stream audio that's used with Speech service. Outbound responses, such as a conversation transcription or connections with the Bot Framework, can be synthesized with the Microsoft text-to-speech service and returned to the IVR for playback.
-Another scenario is direct integration with Session Initiation Protocol (SIP). An Azure service connects to a SIP Server, thus getting an inbound stream and an outbound stream, which is used for the speech-to-text and text-to-speech phases. To connect to a SIP Server there are commercial software offerings, such as Ozeki SDK, or the [Microsoft Graph communications API](/graph/api/resources/communications-api-overview), that are designed to support this type of scenario for audio calls.
+Another scenario is direct integration with the Session Initiation Protocol (SIP). An Azure service connects to a SIP server to get an inbound and outbound stream, which is used for the speech-to-text and text-to-speech phases. To connect to a SIP server there are commercial software offerings, such as Ozeki SDK, or the [Microsoft Graph Communications API](/graph/api/resources/communications-api-overview), that are designed to support this type of scenario for audio calls.
## Customize existing experiences
- The Speech service works well with built-in models. However, you may want to further customize and tune the experience for your product or environment. Customization options range from acoustic model tuning to unique voice fonts for your brand. After you've built a custom model, you can use it with any of the Speech service features in real-time or batch mode.
+The Speech service works well with built-in models. However, you might want to further customize and tune the experience for your product or environment. Customization options range from acoustic model tuning to unique voice fonts for your brand. After you've built a custom model, you can use it with any of the Speech service features in real-time or batch mode.
| Speech service | Model | Description |
| -- | -- | -- |
-| Speech-to-text | [Acoustic model](./how-to-custom-speech-train-model.md) | Create a custom acoustic model for applications, tools, or devices that are used in particular environments like in a car or on a factory floor, each with specific recording conditions. Examples include accented speech, specific background noises, or using a specific microphone for recording. |
-| | [Language model](./how-to-custom-speech-train-model.md) | Create a custom language model to improve transcription of industry-specific vocabulary and grammar, such as medical terminology, or IT jargon. |
-| | [Pronunciation model](./how-to-custom-speech-train-model.md) | With a custom pronunciation model, you can define the phonetic form and display for a word or term. It's useful for handling customized terms, such as product names or acronyms. All you need to get started is a pronunciation file, which is a simple `.txt` file. |
-| Text-to-speech | [Voice font](./how-to-custom-voice-create-voice.md) | Custom voice fonts allow you to create a recognizable, one-of-a-kind voice for your brand. It only takes a small amount of data to get started. The more data that you provide, the more natural and human-like your voice font will sound. |
+| Speech-to-text | [Acoustic model](./how-to-custom-speech-train-model.md) | Create a custom acoustic model for applications, tools, or devices that are used in particular environments, such as in a car or on a factory floor, each with its own recording conditions. Examples include accented speech, background noises, or using a specific microphone for recording. |
+| | [Language model](./how-to-custom-speech-train-model.md) | Create a custom language model to improve transcription of industry-specific vocabulary and grammar, such as medical terminology or IT jargon. |
+| | [Pronunciation model](./how-to-custom-speech-train-model.md) | With a custom pronunciation model, you can define the phonetic form and display for a word or term. It's useful for handling customized terms, such as product names or acronyms. All you need to get started is a pronunciation file, which is a simple .txt file (an example follows this table). |
+| Text-to-speech | [Voice font](./how-to-custom-voice-create-voice.md) | With custom voice fonts, you can create a recognizable, one-of-a-kind voice for your brand. It takes only a small amount of data to get started. The more data you provide, the more natural and human-like your voice font will sound. |
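As an illustration of the pronunciation file mentioned in the preceding table, each line pairs a display form with its spoken form, separated by a tab. The following entries are hypothetical examples:

```
3CPO	three c p o
CNTK	c n t k
IEEE	i triple e
```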
## Sample code
-Sample code is available on GitHub for each of the Speech service features. These samples cover common scenarios like reading audio from a file or stream, continuous and at-start recognition, and working with custom models. Use these links to view SDK and REST samples:
+Sample code is available on GitHub for each of the Speech service features. These samples cover common scenarios, such as reading audio from a file or stream, continuous and at-start recognition, and working with custom models. To view SDK and REST samples, see:
- [Speech-to-text and speech translation samples (SDK)](https://github.com/Azure-Samples/cognitive-services-speech-sdk)
- [Batch transcription samples (REST)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch)
cognitive-services Custom Commands References https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/custom-commands-references.md
Custom Commands supports the following parameter types:
* Temperature
* Url
-Every locale supports the "String" parameter type, but availability of all other types differs by locale. Custom Commands uses LUIS's prebuilt entity resolution, so the availability of a parameter type in a locale depends on LUIS's prebuilt entity support in that locale. You can find [more details on LUIS's prebuilt entity support per locale](../luis/luis-reference-prebuilt-entities.md).
+Every locale supports the "String" parameter type, but availability of all other types differs by locale. Custom Commands uses LUIS's prebuilt entity resolution, so the availability of a parameter type in a locale depends on LUIS's prebuilt entity support in that locale. You can find [more details on LUIS's prebuilt entity support per locale](../luis/luis-reference-prebuilt-entities.md). Custom LUIS entities (such as machine learned entities) are currently not supported.
Some parameter types, such as Number, String, and DateTime, support default values, which you can configure from the portal.
cognitive-services How To Select Audio Input Devices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-select-audio-input-devices.md
- Title: How to select an audio input device with the Speech SDK
+ Title: Select an audio input device with the Speech SDK
-description: 'Learn about selecting audio input devices in the Speech SDK (C++, C#, Python, Objective-C, Java, JavaScript) by obtaining the IDs of the audio devices connected to a system.'
+description: 'Learn about selecting audio input devices in the Speech SDK (C++, C#, Python, Objective-C, Java, and JavaScript) by obtaining the IDs of the audio devices connected to a system.'
ms.devlang: cpp, csharp, java, javascript, objective-c, python
-# How to: Select an audio input device with the Speech SDK
+# Select an audio input device with the Speech SDK
-Version 1.3.0 of the Speech SDK introduces an API to select the audio input. This article describes how to obtain the IDs of the audio devices connected to a system. These can then be used in the Speech SDK by configuring the audio device through the `AudioConfig` object:
+Version 1.3.0 of the Speech SDK introduces an API to select the audio input. This article describes how to obtain the IDs of the audio devices connected to a system. These IDs can then be used in the Speech SDK. You configure the audio device through the `AudioConfig` object:
```C++
audioConfig = AudioConfig.FromMicrophoneInput("<device id>");
audioConfig = AudioConfiguration.fromMicrophoneInput("<device id>");
```
> [!Note]
-> Microphone usage is not available for JavaScript running in Node.js
+> Microphone use isn't available for JavaScript running in Node.js.
-## Audio device IDs on Windows for Desktop applications
+## Audio device IDs on Windows for desktop applications
-Audio device [endpoint ID strings](/windows/desktop/CoreAudio/endpoint-id-strings) can be retrieved from the [`IMMDevice`](/windows/desktop/api/mmdeviceapi/nn-mmdeviceapi-immdevice) object in Windows for Desktop applications.
+Audio device [endpoint ID strings](/windows/desktop/CoreAudio/endpoint-id-strings) can be retrieved from the [`IMMDevice`](/windows/desktop/api/mmdeviceapi/nn-mmdeviceapi-immdevice) object in Windows for desktop applications.
The following code sample illustrates how to use it to enumerate audio devices in C++:
void ListEndpoints()
PROPVARIANT varName; for (ULONG i = 0; i < count; i++) {
- // Get pointer to endpoint number i.
+ // Get the pointer to endpoint number i.
hr = pCollection->Item(i, &pEndpoint); EXIT_ON_ERROR(hr);
void ListEndpoints()
STGM_READ, &pProps); EXIT_ON_ERROR(hr);
- // Initialize container for property value.
+ // Initialize the container for property value.
PropVariantInit(&varName); // Get the endpoint's friendly-name property. hr = pProps->GetValue(PKEY_Device_FriendlyName, &varName); EXIT_ON_ERROR(hr);
- // Print endpoint friendly name and endpoint ID.
+ // Print the endpoint friendly name and endpoint ID.
printf("Endpoint %d: \"%S\" (%S)\n", i, varName.pwszVal, pwszID); CoTaskMemFree(pwszID);
Exit:
} ```
-In C#, the [NAudio](https://github.com/naudio/NAudio) library can be used to access the CoreAudio API and enumerate devices as follows:
+In C#, you can use the [NAudio](https://github.com/naudio/NAudio) library to access the CoreAudio API and enumerate devices as follows:
```cs
using System;
A sample device ID is `{0.0.1.00000000}.{5f23ab69-6181-4f4a-81a4-45414013aac8}`.
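A fuller sketch of that NAudio-based enumeration, assuming the NAudio package is referenced, might look like this:

```csharp
using System;
using NAudio.CoreAudioApi;

// A minimal sketch: list active capture (microphone) endpoints with NAudio.
var enumerator = new MMDeviceEnumerator();
foreach (var device in enumerator.EnumerateAudioEndPoints(DataFlow.Capture, DeviceState.Active))
{
    // device.ID is the endpoint ID string to pass to AudioConfig.FromMicrophoneInput.
    Console.WriteLine($"{device.FriendlyName}: {device.ID}");
}
```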
## Audio device IDs on UWP
-On the Universal Windows Platform (UWP), audio input devices can be obtained using the `Id()` property of the corresponding [`DeviceInformation`](/uwp/api/windows.devices.enumeration.deviceinformation) object.
+On the Universal Windows Platform (UWP), you can obtain audio input devices by using the `Id()` property of the corresponding [`DeviceInformation`](/uwp/api/windows.devices.enumeration.deviceinformation) object.
-The following code samples show how to do this in C++ and C#:
+The following code samples show how to do this step in C++ and C#:
```cpp
#include <winrt/Windows.Foundation.h>
A sample device ID is `\\\\?\\SWD#MMDEVAPI#{0.0.1.00000000}.{5f23ab69-6181-4f4a-81a4-45414013aac8}`.
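For the C# side, a minimal sketch of this enumeration in a UWP app might look like the following. Call it from an async context, such as a page event handler:

```csharp
using System.Diagnostics;
using System.Threading.Tasks;
using Windows.Devices.Enumeration;

// A minimal sketch: enumerate audio capture devices in a UWP app.
async Task ListMicrophonesAsync()
{
    var devices = await DeviceInformation.FindAllAsync(DeviceClass.AudioCapture);
    foreach (var device in devices)
    {
        // device.Id is the value to pass to AudioConfig.FromMicrophoneInput.
        Debug.WriteLine($"{device.Name}: {device.Id}");
    }
}
```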
## Audio device IDs on Linux
-The device IDs are selected using standard ALSA device IDs.
+The device IDs are selected by using standard ALSA device IDs.
The IDs of the inputs attached to the system are contained in the output of the command `arecord -L`.
-Alternatively, they can be obtained using the [ALSA C library](https://www.alsa-project.org/alsa-doc/alsa-lib/).
+Alternatively, they can be obtained by using the [ALSA C library](https://www.alsa-project.org/alsa-doc/alsa-lib/).
Sample IDs are `hw:1,0` and `hw:CARD=CC,DEV=0`.
For example, the UID for the built-in microphone is `BuiltInMicrophoneDevice`.
## Audio device IDs on iOS
-Audio device selection with the Speech SDK is not supported on iOS. However, apps using the SDK can influence audio routing through the [`AVAudioSession`](https://developer.apple.com/documentation/avfoundation/avaudiosession?language=objc) Framework.
+Audio device selection with the Speech SDK isn't supported on iOS. Apps that use the SDK can influence audio routing through the [`AVAudioSession`](https://developer.apple.com/documentation/avfoundation/avaudiosession?language=objc) Framework.
For example, the instruction
For example, the instruction
withOptions:AVAudioSessionCategoryOptionAllowBluetooth error:NULL]; ```
-enables the use of a Bluetooth headset for a speech-enabled app.
+enables the use of a Bluetooth headset for a speech-enabled app.
## Audio device IDs in JavaScript
-In JavaScript the [MediaDevices.enumerateDevices()](https://developer.mozilla.org/docs/Web/API/MediaDevices/enumerateDevices) method can be used to enumerate the media devices and find a device ID to pass to `fromMicrophone(...)`.
+In JavaScript, the [MediaDevices.enumerateDevices()](https://developer.mozilla.org/docs/Web/API/MediaDevices/enumerateDevices) method can be used to enumerate the media devices and find a device ID to pass to `fromMicrophone(...)`.
## Next steps > [!div class="nextstepaction"]
-> [Explore our samples on GitHub](https://aka.ms/csspeech/samples)
+> [Explore samples on GitHub](https://aka.ms/csspeech/samples)
## See also
cognitive-services How To Specify Source Language https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-specify-source-language.md
- Title: How to specify source language for speech to text
+ Title: Specify source language for speech to text
-description: The Speech SDK allows you to specify the source language when converting speech to text. This article describes how to use the FromConfig and SourceLanguageConfig methods to let the Speech service know the source language and provide a custom model target.
+description: The Speech SDK allows you to specify the source language when you convert speech to text. This article describes how to use the FromConfig and SourceLanguageConfig methods to let the Speech service know the source language and provide a custom model target.
ms.devlang: cpp, csharp, java, javascript, objective-c, python
-# Specify source language for speech to text
+# Specify source language for speech-to-text
-In this article, you'll learn how to specify the source language for an audio input passed to the Speech SDK for speech recognition. Additionally, example code is provided to specify a custom speech model for improved recognition.
+In this article, you'll learn how to specify the source language for an audio input passed to the Speech SDK for speech recognition. The example code that's provided specifies a custom speech model for improved recognition.
::: zone pivot="programming-language-csharp"
-## How to specify source language in C#
+## Specify source language in C#
-In the following example, the source language is provided explicitly as a parameter using `SpeechRecognizer` construct.
+In the following example, the source language is provided explicitly as a parameter by using the `SpeechRecognizer` construct:
```csharp
var recognizer = new SpeechRecognizer(speechConfig, "de-DE", audioConfig);
```
-In the following example, the source language is provided using `SourceLanguageConfig`. Then, the `sourceLanguageConfig` is passed as a parameter to `SpeechRecognizer` construct.
+In the following example, the source language is provided by using `SourceLanguageConfig`. Then, `sourceLanguageConfig` is passed as a parameter to the `SpeechRecognizer` construct.
```csharp
var sourceLanguageConfig = SourceLanguageConfig.FromLanguage("de-DE");
var recognizer = new SpeechRecognizer(speechConfig, sourceLanguageConfig, audioConfig);
```
-In the following example, the source language and custom endpoint are provided using `SourceLanguageConfig`. Then, the `sourceLanguageConfig` is passed as a parameter to `SpeechRecognizer` construct.
+In the following example, the source language and custom endpoint are provided by using `SourceLanguageConfig`. Then, `sourceLanguageConfig` is passed as a parameter to the `SpeechRecognizer` construct.
```csharp
var sourceLanguageConfig = SourceLanguageConfig.FromLanguage("de-DE", "The Endpoint ID for your custom model.");
var recognizer = new SpeechRecognizer(speechConfig, sourceLanguageConfig, audioConfig);
```
>[!Note]
-> `SpeechRecognitionLanguage` and `EndpointId` set methods are deprecated from the `SpeechConfig` class in C#. The use of these methods are discouraged, and shouldn't be used when constructing a `SpeechRecognizer`.
+> The `SpeechRecognitionLanguage` and `EndpointId` set methods are deprecated from the `SpeechConfig` class in C#. The use of these methods is discouraged. Don't use them when you create a `SpeechRecognizer` construct.
::: zone-end ::: zone pivot="programming-language-cpp"
+## Specify source language in C++
-## How to specify source language in C++
-
-In the following example, the source language is provided explicitly as a parameter using the `FromConfig` method.
+In the following example, the source language is provided explicitly as a parameter by using the `FromConfig` method.
```C++
auto recognizer = SpeechRecognizer::FromConfig(speechConfig, "de-DE", audioConfig);
```
-In the following example, the source language is provided using `SourceLanguageConfig`. Then, the `sourceLanguageConfig` is passed as a parameter to `FromConfig` when creating the `recognizer`.
+In the following example, the source language is provided by using `SourceLanguageConfig`. Then, `sourceLanguageConfig` is passed as a parameter to `FromConfig` when you create the `recognizer` construct.
```C++
auto sourceLanguageConfig = SourceLanguageConfig::FromLanguage("de-DE");
auto recognizer = SpeechRecognizer::FromConfig(speechConfig, sourceLanguageConfig, audioConfig);
```
-In the following example, the source language and custom endpoint are provided using `SourceLanguageConfig`. The `sourceLanguageConfig` is passed as a parameter to `FromConfig` when creating the `recognizer`.
+In the following example, the source language and custom endpoint are provided by using `SourceLanguageConfig`. Then, `sourceLanguageConfig` is passed as a parameter to `FromConfig` when you create the `recognizer` construct.
```C++
auto sourceLanguageConfig = SourceLanguageConfig::FromLanguage("de-DE", "The Endpoint ID for your custom model.");
auto recognizer = SpeechRecognizer::FromConfig(speechConfig, sourceLanguageConfig, audioConfig);
```
>[!Note]
-> `SetSpeechRecognitionLanguage` and `SetEndpointId` are deprecated methods from the `SpeechConfig` class in C++ and Java. The use of these methods are discouraged, and shouldn't be used when constructing a `SpeechRecognizer`.
+> `SetSpeechRecognitionLanguage` and `SetEndpointId` are deprecated methods from the `SpeechConfig` class in C++ and Java. The use of these methods is discouraged. Don't use them when you create a `SpeechRecognizer` construct.
::: zone-end ::: zone pivot="programming-language-java"
-## How to specify source language in Java
+## Specify source language in Java
-In the following example, the source language is provided explicitly when creating a new `SpeechRecognizer`.
+In the following example, the source language is provided explicitly when you create a new `SpeechRecognizer` construct.
```Java
SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, "de-DE", audioConfig);
```
-In the following example, the source language is provided using `SourceLanguageConfig`. Then, the `sourceLanguageConfig` is passed as a parameter when creating a new `SpeechRecognizer`.
+In the following example, the source language is provided by using `SourceLanguageConfig`. Then, `sourceLanguageConfig` is passed as a parameter when you create a new `SpeechRecognizer` construct.
```Java
SourceLanguageConfig sourceLanguageConfig = SourceLanguageConfig.fromLanguage("de-DE");
SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, sourceLanguageConfig, audioConfig);
```
-In the following example, the source language and custom endpoint are provided using `SourceLanguageConfig`. Then, the `sourceLanguageConfig` is passed as a parameter when creating a new `SpeechRecognizer`.
+In the following example, the source language and custom endpoint are provided by using `SourceLanguageConfig`. Then, `sourceLanguageConfig` is passed as a parameter when you create a new `SpeechRecognizer` construct.
```Java
SourceLanguageConfig sourceLanguageConfig = SourceLanguageConfig.fromLanguage("de-DE", "The Endpoint ID for your custom model.");
SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, sourceLanguageConfig, audioConfig);
```
>[!Note]
-> `setSpeechRecognitionLanguage` and `setEndpointId` are deprecated methods from the `SpeechConfig` class in C++ and Java. The use of these methods are discouraged, and shouldn't be used when constructing a `SpeechRecognizer`.
+> `setSpeechRecognitionLanguage` and `setEndpointId` are deprecated methods from the `SpeechConfig` class in C++ and Java. The use of these methods is discouraged. Don't use them when you create a `SpeechRecognizer` construct.
::: zone-end ::: zone pivot="programming-language-python"
-## How to specify source language in Python
+## Specify source language in Python
-In the following example, the source language is provided explicitly as a parameter using `SpeechRecognizer` construct.
+In the following example, the source language is provided explicitly as a parameter by using the `SpeechRecognizer` construct.
```Python
speech_recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config, language="de-DE", audio_config=audio_config)
```
-In the following example, the source language is provided using `SourceLanguageConfig`. Then, the `SourceLanguageConfig` is passed as a parameter to `SpeechRecognizer` construct.
+In the following example, the source language is provided by using `SourceLanguageConfig`. Then, `SourceLanguageConfig` is passed as a parameter to the `SpeechRecognizer` construct.
```Python
source_language_config = speechsdk.languageconfig.SourceLanguageConfig("de-DE")
speech_recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config, source_language_config=source_language_config, audio_config=audio_config)
```
-In the following example, the source language and custom endpoint are provided using `SourceLanguageConfig`. Then, the `SourceLanguageConfig` is passed as a parameter to `SpeechRecognizer` construct.
+In the following example, the source language and custom endpoint are provided by using `SourceLanguageConfig`. Then, `SourceLanguageConfig` is passed as a parameter to the `SpeechRecognizer` construct.
```Python
source_language_config = speechsdk.languageconfig.SourceLanguageConfig("de-DE", "The Endpoint ID for your custom model.")
speech_recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config, source_language_config=source_language_config, audio_config=audio_config)
```
>[!Note]
-> `speech_recognition_language` and `endpoint_id` properties are deprecated from the `SpeechConfig` class in Python. The use of these properties is discouraged, and they shouldn't be used when constructing a `SpeechRecognizer`.
+> The `speech_recognition_language` and `endpoint_id` properties are deprecated from the `SpeechConfig` class in Python. The use of these properties is discouraged. Don't use them when you create a `SpeechRecognizer` construct.
::: zone-end ::: zone pivot="programming-language-more"
-## How to specify source language in Javascript
+## Specify source language in JavaScript
-The first step is to create a `SpeechConfig`:
+The first step is to create a `SpeechConfig` construct:
```Javascript
var speechConfig = sdk.SpeechConfig.fromSubscription("YourSubscriptionkey", "YourRegion");
```
If you're using a custom model for recognition, you can specify the endpoint with the `endpointId` property:
```Javascript
speechConfig.endpointId = "The Endpoint ID for your custom model.";
```
-## How to specify source language in Objective-C
+## Specify source language in Objective-C
-In the following example, the source language is provided explicitly as a parameter using `SPXSpeechRecognizer` construct.
+In the following example, the source language is provided explicitly as a parameter by using the `SPXSpeechRecognizer` construct.
```Objective-C
SPXSpeechRecognizer* speechRecognizer = \
  [[SPXSpeechRecognizer alloc] initWithSpeechConfiguration:speechConfig language:@"de-DE" audioConfiguration:audioConfig];
```
-In the following example, the source language is provided using `SPXSourceLanguageConfiguration`. Then, the `SPXSourceLanguageConfiguration` is passed as a parameter to `SPXSpeechRecognizer` construct.
+In the following example, the source language is provided by using `SPXSourceLanguageConfiguration`. Then, `SPXSourceLanguageConfiguration` is passed as a parameter to the `SPXSpeechRecognizer` construct.
```Objective-C SPXSourceLanguageConfiguration* sourceLanguageConfig = [[SPXSourceLanguageConfiguration alloc]init:@"de-DE"];
SPXSpeechRecognizer* speechRecognizer = [[SPXSpeechRecognizer alloc] initWithSpe
audioConfiguration:audioConfig]; ```
-In the following example, the source language and custom endpoint are provided using `SPXSourceLanguageConfiguration`. Then, the `SPXSourceLanguageConfiguration` is passed as a parameter to `SPXSpeechRecognizer` construct.
+In the following example, the source language and custom endpoint are provided by using `SPXSourceLanguageConfiguration`. Then, `SPXSourceLanguageConfiguration` is passed as a parameter to the `SPXSpeechRecognizer` construct.
```Objective-C SPXSourceLanguageConfiguration* sourceLanguageConfig = \
SPXSpeechRecognizer* speechRecognizer = [[SPXSpeechRecognizer alloc] initWithSpe
``` >[!Note]
-> `speechRecognitionLanguage` and `endpointId` properties are deprecated from the `SPXSpeechConfiguration` class in Objective-C. The use of these properties is discouraged, and they shouldn't be used when constructing a `SPXSpeechRecognizer`.
+> The `speechRecognitionLanguage` and `endpointId` properties are deprecated from the `SPXSpeechConfiguration` class in Objective-C. The use of these properties is discouraged. Don't use them when you create a `SPXSpeechRecognizer` construct.
::: zone-end ## See also
-* For a list of supported languages and locales for speech to text, see [Language support](language-support.md).
+For a list of supported languages and locales for speech-to-text, see [Language support](language-support.md).
## Next steps
-* [Speech SDK reference documentation](speech-sdk.md)
+See the [Speech SDK reference documentation](speech-sdk.md).
cognitive-services How To Use Audio Input Streams https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-use-audio-input-streams.md
Title: Speech SDK audio input stream concepts
-description: An overview of the capabilities of the Speech SDK's audio input stream API.
+description: An overview of the capabilities of the Speech SDK audio input stream API.
# About the Speech SDK audio input stream API
-The Speech SDK's **Audio Input Stream** API provides a way to stream audio into the recognizers instead of using either the microphone or the input file APIs.
+The Speech SDK audio input stream API provides a way to stream audio into the recognizers instead of using either the microphone or the input file APIs.
-The following steps are required when using audio input streams:
+The following steps are required when you use audio input streams:
-- Identify the format of the audio stream. The format must be supported by the Speech SDK and the Speech service. Currently, only the following configuration is supported:
+- Identify the format of the audio stream. The format must be supported by the Speech SDK and the Azure Cognitive Services Speech service. Currently, only the following configuration is supported:
- Audio samples are in PCM format, one channel, 16 bits per sample, 8000 or 16000 samples per second (16000 or 32000 bytes per second), two block align (16 bit including padding for a sample).
+ Audio samples are:
- The corresponding code in the SDK to create the audio format looks like this:
+ - PCM format
+ - One channel
+ - 16 bits per sample, 8,000 or 16,000 samples per second (16,000 bytes or 32,000 bytes per second)
+ - Two-block aligned (16 bit including padding for a sample)
+
+ The corresponding code in the SDK to create the audio format looks like this example:
```csharp
byte channels = 1;
The following steps are required when using audio input streams:
var audioFormat = AudioStreamFormat.GetWaveFormatPCM(samplesPerSecond, bitsPerSample, channels);
```
-- Make sure your code provides the RAW audio data according to these specifications. Also assure 16-bit samples arrive in little-endian format. Signed samples are also supported. If your audio source data doesn't match the supported formats, the audio must be transcoded into the required format.
+- Make sure that your code provides the RAW audio data according to these specifications. Also, make sure that 16-bit samples arrive in little-endian format. Signed samples are also supported. If your audio source data doesn't match the supported formats, the audio must be transcoded into the required format.
-- Create your own audio input stream class derived from `PullAudioInputStreamCallback`. Implement the `Read()` and `Close()` members. The exact function signature is language-dependent, but the code will look similar to this code sample:
+- Create your own audio input stream class derived from `PullAudioInputStreamCallback`. Implement the `Read()` and `Close()` members. The exact function signature is language-dependent, but the code looks similar to this code sample:
```csharp
public class ContosoAudioStream : PullAudioInputStreamCallback {
The following steps are required when using audio input streams:
} public int Read(byte[] buffer, uint size) {
- // returns audio data to the caller.
- // e.g. return read(config.YYY, buffer, size);
+ // Returns audio data to the caller.
+ // E.g., return read(config.YYY, buffer, size);
} public void Close() {
- // close and cleanup resources.
+ // Close and clean up resources.
} }; ```
The following steps are required when using audio input streams:
var speechConfig = SpeechConfig.FromSubscription(...); var recognizer = new SpeechRecognizer(speechConfig, audioConfig);
- // run stream through recognizer
+ // Run stream through recognizer.
var result = await recognizer.RecognizeOnceAsync();
var text = result.Text;
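Putting the pieces together, a minimal sketch that wires the callback class into an `AudioConfig` might look like the following. `ContosoAudioStream` is the callback class from the earlier sample, and the key and region are placeholders:

```csharp
using System;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

// A minimal sketch: connect a PullAudioInputStreamCallback to a recognizer.
byte channels = 1;
byte bitsPerSample = 16;
uint samplesPerSecond = 16000;
var audioFormat = AudioStreamFormat.GetWaveFormatPCM(samplesPerSecond, bitsPerSample, channels);

// ContosoAudioStream is the callback class shown earlier.
using var audioConfig = AudioConfig.FromStreamInput(new ContosoAudioStream(), audioFormat);

var speechConfig = SpeechConfig.FromSubscription("<your-speech-key>", "<your-region>");
using var recognizer = new SpeechRecognizer(speechConfig, audioConfig);

var result = await recognizer.RecognizeOnceAsync();
Console.WriteLine(result.Text);
```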
cognitive-services Keyword Recognition Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/keyword-recognition-overview.md
Title: Keyword recognition - Speech service
-description: An overview of the features, capabilities, and restrictions for keyword recognition using the Speech Software Development Kit (SDK).
+description: An overview of the features, capabilities, and restrictions for keyword recognition by using the Speech Software Development Kit (SDK).
# Keyword recognition
-Keyword recognition detects a word or short phrase within a stream of audio. It's also referred to as keyword spotting.
+Keyword recognition detects a word or short phrase within a stream of audio. It's also referred to as keyword spotting.
The most common use case of keyword recognition is voice activation of virtual assistants. For example, "Hey Cortana" is the keyword for the Cortana assistant. Upon recognition of the keyword, a scenario-specific action is carried out. For virtual assistant scenarios, a common resulting action is speech recognition of audio that follows the keyword. Generally, virtual assistants are always listening. Keyword recognition acts as a privacy boundary for the user. A keyword requirement acts as a gate that prevents unrelated user audio from crossing the local device to the cloud.
-To balance accuracy, latency, and computational complexity, keyword recognition is implemented as a multi-stage system. For all stages beyond the first, audio is only processed if the stage prior to it believed to have recognized the keyword of interest.
+To balance accuracy, latency, and computational complexity, keyword recognition is implemented as a multistage system. For all stages beyond the first, audio is processed only if the preceding stage is believed to have recognized the keyword of interest.
-The current system is designed with multiple stages spanning across the edge and cloud:
+The current system is designed with multiple stages that span the edge and cloud:
-![Multiple stages of keyword recognition across edge and cloud.](media/custom-keyword/kw-recognition-multi-stage.png)
+![Diagram that shows multiple stages of keyword recognition across the edge and cloud.](media/custom-keyword/kw-recognition-multi-stage.png)
Accuracy of keyword recognition is measured via the following metrics:
-* **Correct accept rate (CA)** – Measures the system's ability to recognize the keyword when it is spoken by an end-user. This is also known as the true positive rate.
-* **False accept rate (FA)** – Measures the system's ability to filter out audio that is not the keyword spoken by an end-user. This is also known as the false positive rate.
-The goal is to maximize the correct accept rate while minimizing the false accept rate. The current system is designed to detect a keyword or phrase preceded by a short amount of silence. Detecting a keyword in the middle of a sentence or utterance is not supported.
+* **Correct accept rate**: Measures the system's ability to recognize the keyword when it's spoken by a user. The correct accept rate is also known as the true positive rate.
+* **False accept rate**: Measures the system's ability to filter out audio that isn't the keyword spoken by a user. The false accept rate is also known as the false positive rate.
-## Custom Keyword for on-device models
+The goal is to maximize the correct accept rate while minimizing the false accept rate. The current system is designed to detect a keyword or phrase preceded by a short amount of silence. Detecting a keyword in the middle of a sentence or utterance isn't supported.
-The [Custom Keyword portal on Speech Studio](https://speech.microsoft.com/customkeyword) allows you to generate keyword recognition models that execute at the edge by specifying any word or short phrase. You can further personalize your keyword model by choosing the right pronunciations.
+## Custom keyword for on-device models
+
+With the [Custom Keyword portal on Speech Studio](https://speech.microsoft.com/customkeyword), you can generate keyword recognition models that execute at the edge by specifying any word or short phrase. You can further personalize your keyword model by choosing the right pronunciations.
### Pricing
-There's no cost to using Custom Keyword for generating models, including both Basic and Advanced models. There is also no cost for running models on-device with the Speech SDK.
+There's no cost to use custom keyword to generate models, including both Basic and Advanced models. There's also no cost to run models on-device with the Speech SDK.
### Types of models
-Custom Keyword allows you to generate two types of on-device models for any keyword.
+You can use custom keyword to generate two types of on-device models for any keyword.
| Model type | Description | | - | -- |
-| Basic | Best suited for demo or rapid prototyping purposes. Models are generated with a common base model and can take up to 15 minutes to be ready. Models may not have optimal accuracy characteristics. |
-| Advanced | Best suited for product integration purposes. Models are generated with adaptation of a common base model using simulated training data to improve accuracy characteristics. It can take up to 48 hours for models to be ready. |
+| Basic | Best suited for demo or rapid prototyping purposes. Models are generated with a common base model and can take up to 15 minutes to be ready. Models might not have optimal accuracy characteristics. |
+| Advanced | Best suited for product integration purposes. Models are generated with adaptation of a common base model by using simulated training data to improve accuracy characteristics. It can take up to 48 hours for models to be ready. |
> [!NOTE]
-> You can view a list of regions that support the **Advanced** model type in the [Keyword recognition region support](regions.md#keyword-recognition) documentation.
+> You can view a list of regions that support the **Advanced** model type in the [keyword recognition region support](regions.md#keyword-recognition) documentation.
-Neither model type requires you to upload training data. Custom Keyword fully handles data generation and model training.
+Neither model type requires you to upload training data. Custom keyword fully handles data generation and model training.
### Pronunciations
-When creating a new model, Custom Keyword automatically generates possible pronunciations of the provided keyword. You can listen to each pronunciation and choose all that closely represent the way you expect end-users to say the keyword. All other pronunciations should not be selected.
+When you create a new model, custom keyword automatically generates possible pronunciations of the provided keyword. You can listen to each pronunciation and choose all variations that closely represent the way you expect users to say the keyword. All other pronunciations shouldn't be selected.
-It is important to be deliberate about the pronunciations you select to ensure the best accuracy characteristics. For example, choosing more pronunciations than needed can lead to higher false accept rates. Choosing too few pronunciations, where not all expected variations are covered, can lead to lower correct accept rates.
+It's important to be deliberate about the pronunciations you select to ensure the best accuracy characteristics. For example, if you choose more pronunciations than you need, you might get higher false accept rates. If you choose too few pronunciations, where not all expected variations are covered, you might get lower correct accept rates.
-### Testing models
+### Test models
-Once on-device models are generated by Custom Keyword, they can be tested directly on the portal. The portal allows you to speak directly into your browser and get keyword recognition results.
+After custom keyword generates on-device models, you can test them directly in the portal. You can use the portal to speak directly into your browser and get keyword recognition results.
-## Keyword Verification
+## Keyword verification
-Keyword Verification is a cloud service that reduces the impact of false accepts from on-device models with robust models running on Azure. There is no tuning or training required for Keyword Verification to work with your keyword. Incremental model updates are continually deployed to the service to improve accuracy and latency, completely transparent to client applications.
+Keyword verification is a cloud service that reduces the impact of false accepts from on-device models with robust models running on Azure. Tuning or training isn't required for keyword verification to work with your keyword. Incremental model updates are continually deployed to the service to improve accuracy and latency and are transparent to client applications.
### Pricing
-Keyword Verification is always used in combination with Speech-to-text, and there is no cost to using Keyword Verification beyond the cost of Speech-to-text.
+Keyword verification is always used in combination with speech-to-text. There's no cost to use keyword verification beyond the cost of speech-to-text.
+
+### Keyword verification and speech-to-text
-### Keyword Verification and Speech-to-text
+When keyword verification is used, it's always in combination with speech-to-text. Both services run in parallel, which means audio is sent to both services for simultaneous processing.
-When Keyword Verification is used, it is always in combination with Speech-to-text. Both services run in parallel. This means that audio is sent to both services for simultaneous processing.
+![Diagram that shows parallel processing of keyword verification and speech-to-text.](media/custom-keyword/kw-verification-parallel-processing.png)
-![Parallel processing of Keyword Verification and Speech-to-text.](media/custom-keyword/kw-verification-parallel-processing.png)
+Running keyword verification and speech-to-text in parallel yields the following benefits:
-Running Keyword Verification and Speech-to-text in parallel yields the following benefits:
-* **No additional latency on Speech-to-text results** – Parallel execution means Keyword Verification adds no latency, and the client receives Speech-to-text results just as quickly. If Keyword Verification determines the keyword was not present in the audio, Speech-to-text processing is terminated, which protects against unnecessary Speech-to-text processing. However, network and cloud model processing increases the user-perceived latency of voice activation. For details, see [Recommendations and guidelines](keyword-recognition-guidelines.md).
-* **Forced keyword prefix in Speech-to-text results** – Speech-to-text processing will ensure that the results sent to the client are prefixed with the keyword. This allows for increased accuracy in the Speech-to-text results for speech that follows the keyword.
-* **Increased Speech-to-text timeout** – Due to the expected presence of the keyword at the beginning of audio, Speech-to-text will allow for a longer pause of up to 5 seconds after the keyword, before determining end of speech and terminating Speech-to-text processing. This ensures the end-user experience is correctly handled for both staged commands (*\<keyword> \<pause> \<command>*) and chained commands (*\<keyword> \<command>*).
+* **No additional latency on speech-to-text results**: Parallel execution means that keyword verification adds no latency; the client receives speech-to-text results just as quickly. If keyword verification determines that the keyword wasn't present in the audio, speech-to-text processing is terminated, which protects against unnecessary speech-to-text processing. Network and cloud model processing increases the user-perceived latency of voice activation. For more information, see [Recommendations and guidelines](keyword-recognition-guidelines.md).
+* **Forced keyword prefix in speech-to-text results**: Speech-to-text processing ensures that the results sent to the client are prefixed with the keyword. This behavior allows for increased accuracy in the speech-to-text results for speech that follows the keyword.
+* **Increased speech-to-text timeout**: Because of the expected presence of the keyword at the beginning of audio, speech-to-text allows for a longer pause of up to five seconds after the keyword before it determines the end of speech and terminates speech-to-text processing. This behavior ensures that the user experience is correctly handled for staged commands (*\<keyword> \<pause> \<command>*) and chained commands (*\<keyword> \<command>*).
-### Keyword Verification responses and latency considerations
+### Keyword verification responses and latency considerations
-For each request to the service, Keyword Verification will return one of two responses: Accepted or Rejected. The processing latency varies depending on the length of the keyword and the length of the audio segment expected to contain the keyword. Processing latency does not include network cost between the client and Azure Speech services.
+For each request to the service, keyword verification returns one of two responses: accepted or rejected. The processing latency varies depending on the length of the keyword and the length of the audio segment expected to contain the keyword. Processing latency doesn't include network cost between the client and Azure Speech services.
-| Keyword Verification response | Description |
+| Keyword verification response | Description |
| -- | -- |
-| Accepted | Indicates the service believed the keyword was present in the audio stream provided as part of the request. |
-| Rejected | Indicates the service believed the keyword was not present in the audio stream provided as part of the request. |
+| Accepted | Indicates the service believed that the keyword was present in the audio stream provided as part of the request. |
+| Rejected | Indicates the service believed that the keyword wasn't present in the audio stream provided as part of the request. |
+
+Rejected cases often yield higher latencies than accepted cases because the service processes more audio. By default, keyword verification processes a maximum of two seconds of audio to search for the keyword. If the keyword is determined not to be present within two seconds, the service times out and signals a rejected response to the client.
+
+### Use keyword verification with on-device models from custom keyword
-Rejected cases often yield higher latencies as the service processes more audio than accepted cases. By default, Keyword Verification will process a maximum of two seconds of audio to search for the keyword. If the keyword is determined not to be present in the two seconds, the service will time out and signal a rejected response to the client.
+The Speech SDK enables seamless use of on-device models generated by using custom keyword with keyword verification and speech-to-text. It transparently handles:
-### Using Keyword Verification with on-device models from Custom Keyword
+* Audio gating to keyword verification and speech recognition based on the outcome of an on-device model.
+* Communicating the keyword to keyword verification.
+* Communicating any additional metadata to the cloud for orchestrating the end-to-end scenario.
-The Speech SDK facilitates seamless use of on-device models generated using Custom Keyword with Keyword Verification and Speech-to-text. It transparently handles:
-* Audio gating to Keyword Verification & Speech recognition based on the outcome of on-device model.
-* Communicating the keyword to the Keyword Verification service.
-* Communicating any additional metadata to the cloud for orchestrating the end-to-end scenario.
+You don't need to explicitly specify any configuration parameters. All necessary information is automatically extracted from the on-device model generated by custom keyword.
-You do not need to explicitly specify any configuration parameters. All necessary information will automatically be extracted from the on-device model generated by Custom Keyword.
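To make that flow concrete, here's a minimal Python sketch of the end-to-end scenario. It assumes you've installed the `azure-cognitiveservices-speech` package and downloaded a model file from the Custom Keyword portal; the subscription key, region, and `keyword_model.table` file name are placeholders, not values from this article.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholders: use your own Speech resource key and region.
speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourServiceRegion")
speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

# Placeholder file name: the model file you download from the Custom Keyword portal.
model = speechsdk.KeywordRecognitionModel("keyword_model.table")

def on_recognized(evt):
    # RecognizedKeyword fires after cloud keyword verification accepts the keyword.
    if evt.result.reason == speechsdk.ResultReason.RecognizedKeyword:
        print("Keyword verified:", evt.result.text)
    # RecognizedSpeech carries the speech-to-text result, prefixed with the keyword.
    elif evt.result.reason == speechsdk.ResultReason.RecognizedSpeech:
        print("Recognized:", evt.result.text)

speech_recognizer.recognized.connect(on_recognized)

# The SDK gates audio on-device and streams to the cloud only after the keyword fires.
speech_recognizer.start_keyword_recognition(model)
input("Listening for the keyword. Press Enter to stop.\n")
speech_recognizer.stop_keyword_recognition()
```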
+The sample and tutorials linked here show how to use the Speech SDK:
-The sample and tutorials linked below show how to use the Speech SDK:
* [Voice assistant samples on GitHub](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant)
* [Tutorial: Voice enable your assistant built using Azure Bot Service with the C# Speech SDK](./tutorial-voice-enable-your-bot-speech-sdk.md)
- * [Tutorial: Create a Custom Commands application with simple voice commands](./how-to-develop-custom-commands-application.md)
+ * [Tutorial: Create a custom commands application with simple voice commands](./how-to-develop-custom-commands-application.md)
## Speech SDK integration and scenarios
-The Speech SDK facilitates easy use of personalized on-device keyword recognition models generated with Custom Keyword and the Keyword Verification service. To ensure your product needs can be met, the SDK supports two scenarios:
+The Speech SDK enables easy use of personalized on-device keyword recognition models generated with custom keyword and keyword verification. To ensure that your product needs can be met, the SDK supports the following two scenarios:
| Scenario | Description | Samples |
| -- | -- | - |
-| End-to-end keyword recognition with Speech-to-text | Best suited for products that will use a customized on-device keyword model from Custom Keyword with Azure Speech's Keyword Verification and Speech-to-text services. This is the most common scenario. | <ul><li>[Voice assistant sample code.](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant)</li><li>[Tutorial: Voice enable your assistant built using Azure Bot Service with the C# Speech SDK.](./tutorial-voice-enable-your-bot-speech-sdk.md)</li><li>[Tutorial: Create a Custom Commands application with simple voice commands.](./how-to-develop-custom-commands-application.md)</li></ul> |
-| Offline keyword recognition | Best suited for products without network connectivity that will use a customized on-device keyword model from Custom Keyword. | <ul><li>[C# on Windows UWP sample.](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/csharp/uwp/keyword-recognizer)</li><li>[Java on Android sample.](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/java/android/keyword-recognizer)</li></ul>
+| End-to-end keyword recognition with speech-to-text | Best suited for products that will use a customized on-device keyword model from custom keyword with Azure Speech keyword verification and speech-to-text. This scenario is the most common. | <ul><li>[Voice assistant sample code](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant)</li><li>[Tutorial: Voice enable your assistant built using Azure Bot Service with the C# Speech SDK](./tutorial-voice-enable-your-bot-speech-sdk.md)</li><li>[Tutorial: Create a custom commands application with simple voice commands](./how-to-develop-custom-commands-application.md)</li></ul> |
+| Offline keyword recognition | Best suited for products without network connectivity that will use a customized on-device keyword model from custom keyword. | <ul><li>[C# on Windows UWP sample](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/csharp/uwp/keyword-recognizer)</li><li>[Java on Android sample](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/java/android/keyword-recognizer)</li></ul>
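For the offline scenario in the preceding table, a minimal sketch might look like the following. It assumes the same `azure-cognitiveservices-speech` package; `keyword_model.table` is a placeholder for your downloaded model file, and no key, region, or network connection is needed because recognition runs entirely on-device.

```python
import azure.cognitiveservices.speech as speechsdk

# On-device only: no subscription key, region, or network connection is required.
model = speechsdk.KeywordRecognitionModel("keyword_model.table")  # placeholder file name
keyword_recognizer = speechsdk.KeywordRecognizer()  # uses the default microphone

# Blocks until the keyword is spoken, then returns a result.
result = keyword_recognizer.recognize_once_async(model).get()
if result.reason == speechsdk.ResultReason.RecognizedKeyword:
    print("Keyword recognized:", result.text)
```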
## Next steps
-* [Read the quickstart to generate on-device keyword recognition models using Custom Keyword.](custom-keyword-basics.md)
-* [Learn more about Voice Assistants.](voice-assistants.md)
+* [Read the quickstart to generate on-device keyword recognition models using custom keyword](custom-keyword-basics.md)
+* [Learn more about voice assistants](voice-assistants.md)
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/releasenotes.md
See below for information about changes to Speech services and resources.
* Speech SDK 1.20.0 released January 2022. Updates include extended programming language support for DialogServiceConnector, Unity on Linux, enhancements to IntentRecognizer, added support for Python 3.10, and a fix to remove a 10-second delay while stopping a speech recognizer (when using a PushAudioInputStream, and no new audio is pushed in after StopContinuousRecognition is called).
* Speech CLI 1.20.0 released January 2022. Updates include microphone input for Speaker recognition and expanded support for Intent recognition.
-* Speaker Recognition service is generally available (GA). With [Speaker Recognition](./speaker-recognition-overview.md) you can accurately verify and identify speakers by their unique voice characteristics.
-* Custom Neural Voice extended to support [49 locales](./language-support.md#custom-neural-voice).
-* Prebuilt Neural Voice added new [languages and variants](./language-support.md#prebuilt-neural-voices).
-* Commitment Tiers added to [pricing options](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+* TTS service updated in January 2022: added 10 new languages and variants for neural text-to-speech, and new voices in preview for en-GB, fr-FR, and de-DE.
+* Containers v3.0.0 released January 2022, with support for using containers in disconnected environments.
## Release notes
cognitive-services Speech Ssml Phonetic Sets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-ssml-phonetic-sets.md
Title: Speech phonetic alphabets - Speech service
-description: Speech service phonetic alphabet and International Phonetic Alphabet (IPA) examples.
+description: This article presents Speech service phonetic alphabet and International Phonetic Alphabet (IPA) examples.
# SSML phonetic alphabets
-Phonetic alphabets are used with the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) to improve pronunciation of Text-to-speech voices. See [Use phonemes to improve pronunciation](speech-synthesis-markup.md#use-phonemes-to-improve-pronunciation) to learn when and how to use each alphabet.
+Phonetic alphabets are used with the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) to improve the pronunciation of text-to-speech voices. To learn when and how to use each alphabet, see [Use phonemes to improve pronunciation](speech-synthesis-markup.md#use-phonemes-to-improve-pronunciation).
## Speech service phonetic alphabet
-For some locales, the Speech service defines its own phonetic alphabets that typically map to the [International Phonetic Alphabet (IPA)](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet). The 7 locales that support `sapi` are: `en-US`, `fr-FR`, `de-DE`, `es-ES`, `ja-JP`, `zh-CN`, and `zh-TW`.
+For some locales, Speech service defines its own phonetic alphabets, which ordinarily map to the [International Phonetic Alphabet (IPA)](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet). The seven locales that support the Microsoft Speech API (SAPI, or `sapi`) are en-US, fr-FR, de-DE, es-ES, ja-JP, zh-CN, and zh-TW.
You set `sapi` or `ipa` as the `alphabet` in [SSML](speech-synthesis-markup.md#use-phonemes-to-improve-pronunciation).
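As a quick illustration, the following Python sketch synthesizes a word with `sapi` set as the alphabet. It assumes the `azure-cognitiveservices-speech` package; the key and region are placeholders, `en-US-JennyNeural` is one of the prebuilt neural voices, and the phone string is an illustrative guess based on the en-US tables that follow, not an authoritative transcription.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholders: use your own Speech resource key and region.
speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourServiceRegion")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

# The ph value is an illustrative guess at "tomato" in the en-US sapi set;
# check it against the tables in this article before relying on it.
ssml = """<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>
  <voice name='en-US-JennyNeural'>
    <phoneme alphabet='sapi' ph='t ax - m ey 1 - t ow'>tomato</phoneme>
  </voice>
</speak>"""

result = synthesizer.speak_ssml_async(ssml).get()
if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Synthesis finished.")
```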
You set `sapi` or `ipa` as the `alphabet` in [SSML](speech-synthesis-markup.md#u
#### English suprasegmentals
-|Example 1 (Onset for consonant, word initial for vowel)|Example 2 (Intervocalic for consonant, word medial nucleus for vowel)|Example 3 (Coda for consonant, word final for vowel)|Comments|
+|Example&nbsp;1 (onset for consonant, word-initial for vowel)|Example&nbsp;2 (intervocalic for consonant, word-medial nucleus for vowel)|Example&nbsp;3 (coda for consonant, word-final for vowel)|Comments|
|--|--|--|--|
-| burger /b er **1** r - g ax r/ | falafel /f ax - l aa **1** - f ax l/ | guitar /g ih - t aa **1** r/ | Speech service phone set put stress after the vowel of the stressed syllable |
-| inopportune /ih **2** - n aa - p ax r - t uw 1 n/ | dissimilarity /d ih - s ih **2**- m ax - l eh 1 - r ax - t iy/ | workforce /w er 1 r k - f ao **2** r s/ | Speech service phone set put stress after the vowel of the sub-stressed syllable |
+| burger /b er **1** r - g ax r/ | falafel /f ax - l aa **1** - f ax l/ | guitar /g ih - t aa **1** r/ | The Speech service phone set puts stress after the vowel of the stressed syllable. |
+| inopportune /ih **2** - n aa - p ax r - t uw 1 n/ | dissimilarity /d ih - s ih **2**- m ax - l eh 1 - r ax - t iy/ | workforce /w er 1 r k - f ao **2** r s/ | The Speech service phone set puts stress after the vowel of the sub-stressed syllable. |
#### English vowels
-| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
+| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|--|--|--|--|
| iy | `i` | **ea**t | f**ee**l | vall**ey** |
| ih | `ɪ` | **i**f | f**i**ll | |
| ey | `eɪ` | **a**te | g**a**te | d**ay** |
-| eh | `ɛ` | **e**very | p**e**t | m**eh** (rare word finally) |
-| ae | `æ` | **a**ctive | c**a**t | n**ah** (rare word finally) |
-| aa | `ɑ` | **o**bstinate | p**o**ppy | r**ah** (rare word finally) |
+| eh | `ɛ` | **e**very | p**e**t | m**eh** (rare word-final) |
+| ae | `æ` | **a**ctive | c**a**t | n**ah** (rare word-final) |
+| aa | `ɑ` | **o**bstinate | p**o**ppy | r**ah** (rare word-final) |
| ao | `ɔ` | **o**range | c**au**se | Ut**ah** |
| uh | `ʊ` | b**oo**k | | |
| ow | `oʊ` | **o**ld | cl**o**ne | g**o** |
You set `sapi` or `ipa` as the `alphabet` in [SSML](speech-synthesis-markup.md#u
#### English R-colored vowels
-| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
+| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|--|--|--|--|
| ih r | `ɪɹ` | **ear**s | t**ir**amisu | n**ear** |
| eh r | `ɛɹ` | **air**plane | app**ar**ently | sc**ar**e |
You set `sapi` or `ipa` as the `alphabet` in [SSML](speech-synthesis-markup.md#u
| er r | `ɝ` | **ear**th | b**ir**d | f**ur** |
| ax r | `ɚ` | | all**er**gy | supp**er** |
-#### English Semivowels
+#### English semivowels
-| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
+| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|--|--|--|--|
| w | `w` | **w**ith, s**ue**de | al**w**ays | |
| y | `j` | **y**ard, f**e**w | on**i**on | |

#### English aspirated oral stops
-| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
+| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|--|--|--|--|
| p | `p` | **p**ut | ha**pp**en | fla**p** |
| b | `b` | **b**ig | num**b**er | cra**b** |
You set `sapi` or `ipa` as the `alphabet` in [SSML](speech-synthesis-markup.md#u
| k | `k` | **c**ut | sla**ck**er | Ira**q** |
| g | `g` | **g**o | a**g**o | dra**g** |
-#### English Nasal stops
+#### English nasal stops
-| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
+| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|--|--|--|--|
| m | `m` | **m**at, smash | ca**m**era | roo**m** |
| n | `n` | **n**o, s**n**ow | te**n**t | chicke**n** |
You set `sapi` or `ipa` as the `alphabet` in [SSML](speech-synthesis-markup.md#u
#### English fricatives
-| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
+| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|--|--|--|--|
| f | `f` | **f**ork | le**f**t | hal**f** |
| v | `v` | **v**alue | e**v**ent | lo**v**e |
You set `sapi` or `ipa` as the `alphabet` in [SSML](speech-synthesis-markup.md#u
#### English affricates
-| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
+| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|--|--|--|--|
| ch | `tʃ` | **ch**in | fu**t**ure | atta**ch** |
| jh | `dʒ` | **j**oy | ori**g**inal | oran**g**e |

#### English approximants
-| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
+| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|--|--|--|--|
| l | `l` | **l**id, g**l**ad | pa**l**ace | chi**ll** |
| r | `ɹ` | **r**ed, b**r**ing | bo**rr**ow | ta**r** |
You set `sapi` or `ipa` as the `alphabet` in [SSML](speech-synthesis-markup.md#u
#### French suprasegmentals
-The Speech service phone set puts stress after the vowel of the stressed syllable, however; the `fr-FR` Speech service phone set doesn't support the IPA substress 'ˌ'. If the IPA substress is needed, you should use the IPA directly.
+The Speech service phone set puts stress after the vowel of the stressed syllable. However, the `fr-FR` Speech service phone set doesn't support the IPA substress 'ˌ'. If the IPA substress is needed, you should use the IPA directly.
#### French vowels
-| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
+| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|--|--|--|--|
| a | `a` | **a**rbre | p**a**tte | ir**a** |
| aa | `ɑ` | | p**â**te | p**a**s |
| aa ~ | `ɑ̃` | **en**fant | enf**an**t | t**em**ps |
| ax | `ə` | | p**e**tite | l**e** |
| eh | `ɛ` | **e**lle | p**e**rdu | ét**ai**t |
-| eu | `ø` | **œu**fs | cr**eu**ser | qu**eu** |
+| eu | `ø` | **œu**fs | cr**eu**ser | qu**eu**e |
| ey | `e` | **é**mu | cr**é**tin | ôt**é** |
| eh ~ | `ɛ̃` | **im**portant | p**ein**ture | mat**in** |
| iy | `i` | **i**dée | pet**i**te | am**i** |
The Speech service phone set puts stress after the vowel of the stressed syllabl
#### French consonants
-| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
+| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|--|--|--|--|
| b | `b` | **b**ête | ha**b**ille | ro**b**e |
| d | `d` | **d**ire | ron**d**eur | chau**d**e |
| f | `f` | **f**emme | su**ff**ixe | bo**f** |
| g | `g` | **g**auche | é**g**ale | ba**gu**e |
-| ng | `ŋ` | | | [<sup>1</sup>](#fr-1)park**ing** |
+| ng | `ŋ` | | | park**ing**[<sup>1</sup>](#fr-1) |
| hy | `ɥ` | h**u**ile | n**u**ire | |
| k | `k` | **c**arte | é**c**aille | be**c** |
| l | `l` | **l**ong | é**l**ire | ba**l** |
The Speech service phone set puts stress after the vowel of the stressed syllabl
| | `z‿` | | | di**x** |

<a id="fr-1"></a>
-**1** *Only for some foreign words.*
+**1** *Only for some foreign words*.
> [!TIP]
> The `fr-FR` Speech service phone set doesn't support the following French liaisons: `n‿`, `t‿`, and `z‿`. If they're needed, you should consider using the IPA directly.
The Speech service phone set puts stress after the vowel of the stressed syllabl
#### German suprasegmentals
-| Example 1 (Onset for consonant, word initial for vowel) | Example 2 (Intervocalic for consonant, word medial nucleus for vowel) | Example 3 (Coda for consonant, word final for vowel) | Comments |
+| Example&nbsp;1 (onset for consonant, word-initial for vowel) | Example&nbsp;2 (intervocalic for consonant, word-medial nucleus for vowel) | Example&nbsp;3 (coda for consonant, word-final for vowel) | Comments |
|--|--|--|--|
| anders /a **1** n - d ax r s/ | Multiplikationszeichen /m uh l - t iy - p l iy - k a - ts y ow **1** n s - ts ay - c n/ | Biologie /b iy - ow - l ow - g iy **1**/ | The Speech service phone set puts stress after the vowel of the stressed syllable |
-| Allgemeinwissen /a **2** l - g ax - m ay 1 n - v ih - s n/ | Abfallentsorgungsfirma /a 1 p - f a l - ^ eh n t - z oh **2** ax r - g uh ng s - f ih ax r - m a/ | Computertomographie /k oh m - p y uw 1 - t ax r - t ow - m ow - g r a - f iy **2**/ | Speech service phone set put stress after the vowel of the sub-stressed syllable |
+| Allgemeinwissen /a **2** l - g ax - m ay 1 n - v ih - s n/ | Abfallentsorgungsfirma /a 1 p - f a l - ^ eh n t - z oh **2** ax r - g uh ng s - f ih ax r - m a/ | Computertomographie /k oh m - p y uw 1 - t ax r - t ow - m ow - g r a - f iy **2**/ | The Speech service phone set puts stress after the vowel of the sub-stressed syllable |
#### German vowels
-| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
+| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|--|--|--|--|
| a: | `aː` | **A**ber | Maßst**a**b | Schem**a** |
| a | `a` | **A**bfall | B**a**ch | Agath**a** |
| oh | `ɔ` | **O**sten | Pf**o**sten | |
-| eh: | `ɛː` | **Ä**hnlichkeit | B**ä**r | [<sup>1</sup>](#de-v-1)Fasci**ae** |
+| eh: | `ɛː` | **Ä**hnlichkeit | B**ä**r | Fasci**ae**[<sup>1</sup>](#de-v-1) |
| eh | `ɛ` | **ä**ndern | Proz**e**nt | Amygdal**ae** |
-| ax | `ə` | [<sup>2</sup>](#de-v-2)'v**e**rstauen | Aach**e**n | Frag**e** |
+| ax | `ə` | 'v**e**rstauen[<sup>2</sup>](#de-v-2) | Aach**e**n | Frag**e** |
| iy | `iː` | **I**ran | abb**ie**gt | Relativitätstheor**ie** |
| ih | `ɪ` | **I**nnung | s**i**ngen | Wood**y** |
| eu | `øː` | **Ö**sen | abl**ö**sten | Malm**ö** |
The Speech service phone set puts stress after the vowel of the stressed syllabl
| uy | `ʏ` | **ü**ppig | S**y**stem | |

<a id="de-v-1"></a>
-**1** *Only in words of foreign origin, such as: Fasci**ae**.*<br>
+**1** *Only in words of foreign origin, such as Fasci**ae***.<br>
<a id="de-v-2"></a>
-**2** *Word-intially only in words of foreign origin such as **A**ppointment. Syllable-initially in: 'v**e**rstauen.*
+**2** *Word-initial only in words of foreign origin, such as **A**ppointment. Syllable-initial in 'v**e**rstauen*.
#### German diphthongs
-| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
+| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|--|--|--|--|
| ay | `ai` | **ei**nsam | Unabhängigk**ei**t | Abt**ei** |
| aw | `au` | **au**ßen | abb**au**st | St**au** |
The Speech service phone set puts stress after the vowel of the stressed syllabl
#### German semivowels
-| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
+| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|--|--|--|--|
| ax r | `ɐ` | | abänd**er**n | lock**er** |

#### German consonants
-| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
+| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|--|--|--|--|
-| b | `b` | **B**ank | | [<sup>1</sup>](#de-c-1)Pu**b** |
-| c | `ç` | **Ch**emie | mögli**ch**st | [<sup>2</sup>](#de-c-2)i**ch** |
-| d | `d` | **d**anken | [<sup>3</sup>](#de-c-3)Len**d**l | [<sup>4</sup>](#de-c-4)Clau**d**e |
-| jh | `ʤ` | **J**eff | gemana**g**t | [<sup>5</sup>](#de-c-5)Chan**g**e |
+| b | `b` | **B**ank | | Pu**b**[<sup>1</sup>](#de-c-1) |
+| c | `ç` | **Ch**emie | mögli**ch**st | i**ch**[<sup>2</sup>](#de-c-2) |
+| d | `d` | **d**anken | Len**d**l[<sup>3</sup>](#de-c-3) | Clau**d**e[<sup>4</sup>](#de-c-4) |
+| jh | `ʤ` | **J**eff | gemana**g**t | Chan**g**e[<sup>5</sup>](#de-c-5) |
| f | `f` | **F**ahrtdauer | angri**ff**slustig | abbruchrei**f** |
-| g | `g` | **g**ut | [<sup>6</sup>](#de-c-6)Gre**g** | |
+| g | `g` | **g**ut | Gre**g**[<sup>6</sup>](#de-c-6) | |
| h | `h` | **H**ausanbau | | |
| y | `j` | **J**od | Reakt**i**on | hu**i** |
| k | `k` | **K**oma | Aspe**k**t | Flec**k** |
| l | `l` | **l**au | ähne**l**n | zuvie**l** |
| m | `m` | **M**ut | A**m**t | Leh**m** |
| n | `n` | **n**un | u**n**d | Huh**n** |
-| ng | `ŋ` | [<sup>7</sup>](#de-c-7)**Ng**uyen | Schwa**nk** | R**ing** |
+| ng | `ŋ` | **Ng**uyen[<sup>7</sup>](#de-c-7) | Schwa**nk** | R**ing** |
| p | `p` | **P**artner | abru**p**t | Ti**p** |
| pf | `pf` | **Pf**erd | dam**pf**t | To**pf** |
| r | `ʀ`, `r`, `ʁ` | **R**eise | knu**rr**t | Haa**r** |
-| s | `s` | [<sup>8</sup>](#de-c-8)**S**taccato | bi**s**t | mie**s** |
+| s | `s` | **S**taccato[<sup>8</sup>](#de-c-8) | bi**s**t | mie**s** |
| sh | `ʃ` | **Sch**ule | mi**sch**t | lappi**sch** |
| t | `t` | **T**raum | S**t**raße | Mu**t** |
| ts | `ts` | **Z**ug | Ar**z**t | Wit**z** |
| ch | `tʃ` | **Tsch**echien | aufgepu**tsch**t | bundesdeu**tsch** |
-| v | `v` | **w**inken | Q**u**alle | [<sup>9</sup>](#de-c-9)Gr**oo**ve |
-| x | [<sup>10</sup>](#de-c-10)`x`,[<sup>11</sup>](#de-c-11)`ç` | [<sup>12</sup>](#de-c-12)Ba**ch**erach | Ma**ch**t mögli**ch**st | Schma**ch** 'i**ch** |
+| v | `v` | **w**inken | Q**u**alle | Gr**oo**ve[<sup>9</sup>](#de-c-9) |
+| x | `x`[<sup>10</sup>](#de-c-10), `ç`[<sup>11</sup>](#de-c-11) | Ba**ch**erach[<sup>12</sup>](#de-c-12) | Ma**ch**t mögli**ch**st | Schma**ch** 'i**ch** |
| z | `z` | **s**uper | | |
| zh | `ʒ` | **G**enre | B**rz**ezinski | Edvi**g**e |

<a id="de-c-1"></a>
-**1** *Only in words of foreign origin, such as: Pu**b**.*<br>
+**1** *Only in words of foreign origin, such as Pu**b***.<br>
<a id="de-c-2"></a>
-**2** *Soft "ch" after "e" and "i"*<br>
+**2** *Soft "ch" after "e" and "i"*.<br>
<a id="de-c-3"></a>
-**3** *Only in words of foreign origin, such as: Len**d**l.*<br>
+**3** *Only in words of foreign origin, such as Len**d**l*.<br>
<a id="de-c-4"></a>
-**4** *Only in words of foreign origin such as: Clau**d**e.*<br>
+**4** *Only in words of foreign origin, such as Clau**d**e*.<br>
<a id="de-c-5"></a>
-**5** *Only in words of foreign origin such as: Chan**g**e.*<br>
+**5** *Only in words of foreign origin, such as Chan**g**e*.<br>
<a id="de-c-6"></a>
-**6** *Word-terminally only in words of foreign origin such as Gre**g**.*<br>
+**6** *Word-terminally only in words of foreign origin, such as Gre**g***.<br>
<a id="de-c-7"></a>
-**7** *Only in words of foreign origin such as: **Ng**uyen.*<br>
+**7** *Only in words of foreign origin, such as **Ng**uyen*.<br>
<a id="de-c-8"></a>
-**8** *Only in words of foreign origin such as: **S**taccato.*<br>
+**8** *Only in words of foreign origin, such as **S**taccato*.<br>
<a id="de-c-9"></a>
-**9** *Only in words of foreign origin, such as: Gr**oo**ve.*<br>
+**9** *Only in words of foreign origin, such as Gr**oo**ve*.<br>
<a id="de-c-10"></a>
-**10** *The IPA `x` is a hard "ch" after all non-front vowels (a, aa, oh, ow, uh, uw and the diphthong aw).*<br>
+**10** *The IPA `x` is a hard "ch" after all non-front vowels (a, aa, oh, ow, uh, uw, and the diphthong aw)*.<br>
<a id="de-c-11"></a>
-**11** *The IPA `ç` is a soft 'ch' after front vowels (ih, iy, eh, ae, uy, ue, oe, eu also in diphthongs ay, oy) and consonants*<br>
+**11** *The IPA `ç` is a soft "ch" after front vowels (ih, iy, eh, ae, uy, ue, oe, eu, and diphthongs ay, oy) and consonants*.<br>
<a id="de-c-12"></a>
-**12** *Word-initially only in words of foreign origin, such as: **J**uan. Syllable-initially also in words like: Ba**ch**erach.*<br>
+**12** *Word-initial only in words of foreign origin, such as **J**uan. Syllable-initial also in words such as Ba**ch**erach*.<br>
#### German oral consonants
-| `sapi` | `ipa` | Example 1 |
+| `sapi` | `ipa` | Example |
|--|--|--|
| ^ | `ʔ` | beachtlich /b ax - ^ a 1 x t - l ih c/ |

> [!NOTE]
-> We need to add a [gs\] phone between two distinct vowels, except the two vowels are a genuine diphthong. This oral consonant is a glottal stop, for more information, see [glottal stop](http://en.wikipedia.org/wiki/Glottal_stop).
+> We need to add a [gs\] phone between two distinct vowels, except when the two vowels are a genuine diphthong. This oral consonant is a glottal stop. For more information, see [glottal stop](http://en.wikipedia.org/wiki/Glottal_stop).
### [es-ES](#tab/es-ES)

#### Spanish vowels
-| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
+| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|--|--|--|--|
| a | `a` | **a**lto | c**a**ntar | cas**a** |
| i | `i` | **i**bérica | av**i**spa | tax**i** |
The Speech service phone set puts stress after the vowel of the stressed syllabl
#### Spanish consonants
-| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
+| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|--|--|--|--|
| b | `b` | **b**aobab | | am**b** |
| | `β` | | bao**b**ab | baoba**b** |
The Speech service phone set puts stress after the vowel of the stressed syllabl
| x | `x` | **j**ota | a**j**o | relo**j** |

> [!TIP]
-> The `es-ES` Speech service phone set doesn't support the following Spanish IPA, `β`, `ð`, and `ɣ`. If they are needed, you should consider using the IPA directly.
+> The `es-ES` Speech service phone set doesn't support the following Spanish IPA: `β`, `ð`, and `ɣ`. If they're needed, consider using the IPA directly.
### [zh-CN](#tab/zh-CN)
The Speech service phone set for `zh-TW` is based on the native phone [Bopomofo]
#### Tone
-| Speech service tone | Bopomofo tone | Example (word) | Speech service phones | Bopomofo | Pinyin (拼音) |
+| Speech service tone | Bopomofo tone | Example (word) | Speech service phones | Bopomofo | Pinyin (拼音) |
|--|--|--|--|--|--|
| ˉ | empty | 偵 | ㄓㄣˉ | ㄓㄣ | zhēn |
| ˊ | ˊ | 察 | ㄔㄚˊ | ㄔㄚˊ | chá |
The Speech service phone set for `zh-TW` is based on the native phone [Bopomofo]
| ˋ | ˋ | 望 | ㄨㄤˋ | ㄨㄤˋ | wàng |
| ˙ | ˙ | 影子 | 一ㄥˇ ㄗ˙ | 一ㄥˇ ㄗ˙ | yǐng zi |
-#### Example
+#### Examples
| Character | `sapi` |
|--|--|
The Speech service phone set for `ja-JP` is based on the native phone [Kana](htt
| `ˈ` | `ˈ` mainstress |
| `+` | `ˌ` substress |
-#### Example
+#### Examples
| Character | `sapi` | `ipa` |
|--|--|--|
The Speech service phone set for `ja-JP` is based on the native phone [Kana](htt
## International Phonetic Alphabet
-For the locales below, the Speech service uses the [International Phonetic Alphabet (IPA)](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet).
+For the following locales, Speech service uses the [International Phonetic Alphabet (IPA)](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet).
You set `ipa` as the `alphabet` in [SSML](speech-synthesis-markup.md#use-phonemes-to-improve-pronunciation).
-These locales all use the same IPA stress and syllables described here.
+These locales all use the IPA stress and syllable symbols that are listed here:
|`ipa` | Symbol |
|-|-|
These locales all use the same IPA stress and syllables described here.
| `.` | Syllable boundary |
-Select a tab for the IPA phonemes specific to each locale.
+Select a tab to view the IPA phonemes that are specific to each locale.
### [ca-ES](#tab/ca-ES)
-| `ipa` | Example 1 | Example 2 | Example 3 |
+| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|--|--|--|
| `a` | **a**men | am**a**ro | est**à** |
| `ɔ` | **o**dre | ofert**o**ri | microt**ò** |
Select a tab for the IPA phonemes specific to each locale.
#### Vowels
-| `ipa` | Example 1 | Example 2 | Example 3 |
+| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|--|--|--|
| `ɑː` | | f**a**st | br**a** |
| `æ` | | f**a**t | |
Select a tab for the IPA phonemes specific to each locale.
#### Consonants
-| `ipa` | Example 1 | Example 2 | Example 3 |
+| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|--|--|--|
| `b ` | **b**ike | ri**bb**on | ri**b** |
| `tʃ ` | **ch**allenge | na**t**ure | ri**ch** |
Select a tab for the IPA phonemes specific to each locale.
#### Vowels
-| `ipa` | Example 1 | Example 2 | Example 3|
+| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3|
|--|--|--|--|
| `ɑ` | **a**zúcar | tom**a**te | rop**a** |
| `e` | **e**so | rem**e**ro | am**é** |
Select a tab for the IPA phonemes specific to each locale.
#### Consonants
-| `ipa` | Example 1 | Example 2 | Example 3|
+| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3|
|--|--|--|--|
| `b` | **b**ote | | |
| `β` | ór**b**ita | envol**v**ente | |
Select a tab for the IPA phonemes specific to each locale.
#### Vowels
-| `ipa` | Example 1 | Example 2 | Example 3 |
+| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|--|--|--|
| `a` | **a**mo | s**a**no | scort**a** |
| `ai` | **ai**cs | abb**ai**no | m**ai** |
Select a tab for the IPA phonemes specific to each locale.
#### Consonants
-| `ipa` | Example 1 | Example 2 | Example 3 |
+| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|--|--|--|
| `b` | **b**ene | e**b**anista | Euroclu**b** |
| `bː` | | go**bb**a | |
Select a tab for the IPA phonemes specific to each locale.
### [pt-BR](#tab/pt-BR)
-#### VOWELS
+#### Vowels
-| `ipa` | Example 1 | Example 2 | Example 3 |
+| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|--|--|--|
| `i` | **i**lha | f**i**car | com**i** |
| `ĩ` | **in**tacto | p**in**tar | aberd**een** |
Select a tab for the IPA phonemes specific to each locale.
#### Consonants
-| `ipa` | Example 1 | Example 2 | Example 3 |
+| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|--|--|--|
| `w̃` | | | atualizaçã**o** |
| `w` | **w**ashington | ág**u**a | uso**u** |
Select a tab for the IPA phonemes specific to each locale.
### [pt-PT](#tab/pt-PT)
-| `ipa` | Example 1 | Example 2 | Example 3 |
+| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|--|--|--|
| `a` | **á**bdito | consul**a**r | medir**á** |
| `ɐ` | **a**bacaxi | dom**a**ção | long**a** |
Select a tab for the IPA phonemes specific to each locale.
### [ru-RU](#tab/ru-RU)
-#### VOWELS
+#### Vowels
-| `ipa` | Example 1 | Example 2 | Example 3 |
+| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|--|--|--|
| `a` | **а**дрес | р**а**дость | бед**а** |
| `ʌ` | **о**блаков | з**а**стенчивость | внучк**а** |
Select a tab for the IPA phonemes specific to each locale.
| `ɔ` | **о**крик | м**о**т | весл**о** |
| `u` | **у**жин | к**у**ст | пойд**у** |
-#### CONSONANT
+#### Consonants
-| `ipa` | Example 1 | Example 2 | Example 3 |
+| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|--|--|--|
| `p` | **п**рофессор | по**п**лавок | укро**п** |
| `pʲ` | **П**етербург | осле**п**ительно | сте**пь** |
cognitive-services Speech Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-studio-overview.md
# What is Speech Studio?
-[Speech Studio](https://speech.microsoft.com) is a set of UI-based tools for building and integrating features from Azure Cognitive Services Speech in your applications. You create projects in Speech Studio using a no-code approach, and then reference those assets in your applications using the [Speech SDK](speech-sdk.md), [Speech CLI](spx-overview.md), or REST APIs.
+[Speech Studio](https://speech.microsoft.com) is a set of UI-based tools for building and integrating features from Azure Cognitive Services Speech service in your applications. You create projects in Speech Studio by using a no-code approach, and then reference those assets in your applications by using the [Speech SDK](speech-sdk.md), the [Speech CLI](spx-overview.md), or the REST APIs.
-## Set up your Azure account
+## Prerequisites
-You need to have an Azure account and add a Speech resource before you can use [Speech Studio](https://speech.microsoft.com). If you don't have an account and resource, [try the Speech service for free](overview.md#try-the-speech-service-for-free).
+Before you can begin using [Speech Studio](https://speech.microsoft.com), you need to have an Azure account and a Speech resource. If you don't already have an account and a resource, [try Speech service for free](overview.md#try-the-speech-service-for-free).
-After you create an Azure account and a Speech service resource:
+After you've created an Azure account and a Speech service resource, do the following:
-1. Sign in to the [Speech Studio](https://speech.microsoft.com) with your Azure account.
-1. Select a Speech resource in your subscription. You can change the resources anytime in "Settings" in the top menu.
+1. Sign in to [Speech Studio](https://speech.microsoft.com) with your Azure account.
+1. In Speech Studio, select a Speech resource in your subscription. You can change the resource at any time by selecting **Settings** at the top of the pane.
## Speech Studio features
-The following Speech service features are available as project types in Speech Studio.
+In Speech Studio, the following Speech service features are available as project types:
+
+* **Real-time speech-to-text**: Quickly test speech-to-text by dragging audio files into the tool, without having to use any code. This is a demo tool for seeing how speech-to-text works on your audio samples. To explore the full functionality, see [What is speech-to-text?](speech-to-text.md).
+
+* **Custom Speech**: Create speech recognition models that are tailored to specific vocabulary sets and styles of speaking. In contrast to the base speech recognition model, Custom Speech models become part of your unique competitive advantage because they're not publicly accessible. To get started with uploading sample audio to create a Custom Speech model, see [Prepare data for Custom Speech](how-to-custom-speech-test-and-train.md).
+
+* **Pronunciation assessment**: Evaluate speech pronunciation and give speakers feedback on the accuracy and fluency of spoken audio. Speech Studio provides a sandbox for testing this feature quickly, without code. To use the feature with the Speech SDK in your applications, see the [Pronunciation assessment](how-to-pronunciation-assessment.md) article.
-* **Real-time speech-to-text**: Quickly test speech-to-text by dragging and dropping audio files without using any code. This is a demo tool for seeing how speech-to-text works on your audio samples, but see the [overview](speech-to-text.md) for speech-to-text to explore the full functionality that's available.
-* **Custom Speech**: Custom Speech allows you to create speech recognition models that are tailored to specific vocabulary sets and styles of speaking. In contrast to using a base speech recognition model, Custom Speech models become part of your unique competitive advantage because they are not publicly accessible. See the [quickstart](how-to-custom-speech-test-and-train.md) to get started with uploading sample audio to create a Custom Speech model.
-* **Pronunciation Assessment**: Pronunciation assessment evaluates speech pronunciation and gives speakers feedback on the accuracy and fluency of spoken audio. Speech Studio provides a sandbox for testing this feature quickly with no code, but see the [how-to](how-to-pronunciation-assessment.md) article for using the feature with the Speech SDK in your applications.
* **Voice Gallery**: Build apps and services that speak naturally. Choose from more than 170 voices in over 70 languages and variants. Bring your scenarios to life with highly expressive and human-like neural voices.
-* **Custom Voice**: Custom Voice allows you to create custom, one-of-a-kind voices for text-to-speech. You supply audio files and create matching transcriptions in Speech Studio, and then use the custom voices in your applications. See the [how-to](how-to-custom-voice-create-voice.md) article on creating and using custom voices via endpoints.
-* **Audio Content Creation**: [Audio Content Creation](how-to-audio-content-creation.md) is an easy-to-use tool that lets you build highly natural audio content for a variety of scenarios, like audiobooks, news broadcasts, video narrations, and chat bots. Speech Studio allows you to export your created audio files to use in your applications.
-* **Custom Keyword**: A Custom Keyword is a word or short phrase that allows your product to be voice-activated. You create a Custom Keyword in Speech Studio, and then generate a binary file to [use with the Speech SDK](custom-keyword-basics.md) in your applications.
-* **Custom Commands**: Custom Commands makes it easy to build rich voice commanding apps optimized for voice-first interaction experiences. It provides a code-free authoring experience in Speech Studio, an automatic hosting model, and relatively lower complexity, helping you focus on building the best solution for your voice commanding scenarios. See the [how-to](how-to-develop-custom-commands-application.md) guide for building Custom Commands applications, and also see the guide for [integrating your Custom Commands application with the Speech SDK](how-to-custom-commands-setup-speech-sdk.md).
+
+* **Custom Voice**: Create custom, one-of-a-kind voices for text-to-speech. You supply audio files and create matching transcriptions in Speech Studio, and then use the custom voices in your applications. To create and use custom voices via endpoints, see [Create and use your voice model](how-to-custom-voice-create-voice.md).
+
+* **Audio Content Creation**: Build highly natural audio content for a variety of scenarios, such as audiobooks, news broadcasts, video narrations, and chat bots, with the easy-to-use [Audio Content Creation](how-to-audio-content-creation.md) tool. With Speech Studio, you can export these audio files to use in your applications.
+
+* **Custom Keyword**: A custom keyword is a word or short phrase that you can use to voice-activate a product. You create a custom keyword in Speech Studio, and then generate a binary file to [use with the Speech SDK](custom-keyword-basics.md) in your applications.
+
+* **Custom Commands**: Easily build rich, voice-command apps that are optimized for voice-first interaction experiences. Custom Commands provides a code-free authoring experience in Speech Studio, an automatic hosting model, and relatively low complexity. The feature helps you focus on building the best solution for your voice-command scenarios. For more information, see the [Develop Custom Commands applications](how-to-develop-custom-commands-application.md) guide. Also see [Integrate with a client application by using the Speech SDK](how-to-custom-commands-setup-speech-sdk.md).
## Next steps
cognitive-services Speech Translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-translation.md
Title: Speech translation overview - Speech service
-description: Speech translation allows you to add end-to-end, real-time, multi-language translation of speech to your applications, tools, and devices. The same API can be used for both speech-to-speech and speech-to-text translation. This article is an overview of the benefits and capabilities of the speech translation service.
+description: With speech translation, you can add end-to-end, real-time, multi-language translation of speech to your applications, tools, and devices.
keywords: speech translation
# What is speech translation?
-In this overview, you learn about the benefits and capabilities of the speech translation service, which enables real-time, [multi-language speech-to-speech](language-support.md#speech-translation) and speech-to-text translation of audio streams. With the Speech SDK, your applications, tools, and devices have access to source transcriptions and translation outputs for provided audio. Interim transcription and translation results are returned as speech is detected, and final results can be converted into synthesized speech.
+In this article, you learn about the benefits and capabilities of the speech translation service, which enables real-time, multi-language speech-to-speech and speech-to-text translation of audio streams. By using the Speech SDK, you can give your applications, tools, and devices access to source transcriptions and translation outputs for the provided audio. Interim transcription and translation results are returned as speech is detected, and the final results can be converted into synthesized speech.
+
+For a list of languages that the Speech Translation API supports, see the "Speech translation" section of [Language and voice support for the Speech service](language-support.md#speech-translation).
## Core features
In this overview, you learn about the benefits and capabilities of the speech tr
* Support for translation to multiple target languages.
* Interim recognition and translation results.
-## Get started
+## Before you begin
-See the [quickstart](get-started-speech-translation.md) to get started with speech translation. The speech translation service is available via the [Speech SDK](speech-sdk.md) and the [Speech CLI](spx-overview.md).
+As your first step, see [Get started with speech translation](get-started-speech-translation.md). The speech translation service is available via the [Speech SDK](speech-sdk.md) and the [Speech CLI](spx-overview.md).
## Sample code
-Sample code for the Speech SDK is available on GitHub. These samples cover common scenarios like reading audio from a file or stream, continuous and at-start recognition/translation, and working with custom models.
-
-* [Speech-to-text and translation samples (SDK)](https://github.com/Azure-Samples/cognitive-services-speech-sdk)
+You'll find [Speech SDK speech-to-text and translation samples](https://github.com/Azure-Samples/cognitive-services-speech-sdk) on GitHub. These samples cover common scenarios, such as reading audio from a file or stream, continuous and at-start recognition and translation, and working with custom models.
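If you want a feel for the API shape before opening the samples, here's a minimal Python sketch of one-shot translation from the default microphone. It assumes the `azure-cognitiveservices-speech` package; the subscription key and region are placeholders.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholders: use your own Speech resource key and region.
translation_config = speechsdk.translation.SpeechTranslationConfig(
    subscription="YourSubscriptionKey", region="YourServiceRegion")
translation_config.speech_recognition_language = "en-US"
translation_config.add_target_language("de")
translation_config.add_target_language("fr")

# Uses the default microphone; pass audio_config to read from a file instead.
recognizer = speechsdk.translation.TranslationRecognizer(translation_config=translation_config)
result = recognizer.recognize_once()

if result.reason == speechsdk.ResultReason.TranslatedSpeech:
    print("Recognized:", result.text)
    for language, text in result.translations.items():
        print(f"{language}: {text}")
```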
## Migration guides
-If your applications, tools, or products are using the [Translator Speech API](./how-to-migrate-from-translator-speech-api.md), we've created guides to help you migrate to the Speech service.
-
-* [Migrate from the Translator Speech API to the Speech service](how-to-migrate-from-translator-speech-api.md)
+If your applications, tools, or products are using the [Translator Speech API](./how-to-migrate-from-translator-speech-api.md), see [Migrate from the Translator Speech API to Speech service](how-to-migrate-from-translator-speech-api.md).
## Reference docs
If your applications, tools, or products are using the [Translator Speech API](.
## Next steps
-* Complete the speech translation [quickstart](get-started-speech-translation.md)
-* [Get a Speech service subscription key for free](overview.md#try-the-speech-service-for-free)
-* [Get the Speech SDK](speech-sdk.md)
+* Read the [Get started with speech translation](get-started-speech-translation.md) quickstart article.
+* Get a [Speech service subscription key for free](overview.md#try-the-speech-service-for-free).
+* Get the [Speech SDK](speech-sdk.md).
cognitive-services Spx Basics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/spx-basics.md
Title: "Speech CLI quickstart - Speech service"
+ Title: "Quickstart: The Speech CLI - Speech service"
-description: Get started with the Azure Speech CLI. You can interact with Speech services like speech to text, text to speech, and speech translation without writing code.
+description: By using the Azure Speech CLI, you can interact with speech-to-text, text-to-speech, and speech translation without having to write code.
# Get started with the Azure Speech CLI
-In this article, you'll learn how to use the Azure Speech CLI (command-line interface) to access Speech services like speech to text, text to speech, and speech translation without writing code. The Speech CLI is production ready and can be used to automate simple workflows in the Speech service, using `.bat` or shell scripts.
+In this article, you'll learn how to use the Azure Speech CLI (also called SPX) to access Speech services such as speech-to-text, text-to-speech, and speech translation, without having to write any code. The Speech CLI is production ready, and you can use it to automate simple workflows in the Speech service by using `.bat` or shell scripts.
-This article assumes that you have working knowledge of the command prompt, terminal, or PowerShell.
+This article assumes that you have working knowledge of the Command Prompt window, terminal, or PowerShell.
> [!NOTE]
> In PowerShell, the [stop-parsing token](/powershell/module/microsoft.powershell.core/about/about_special_characters#stop-parsing-token) (`--%`) should follow `spx`. For example, run `spx --% config @region` to view the current region config value.
This article assumes that you have working knowledge of the command prompt, term
[!INCLUDE [](includes/spx-setup.md)]
-## Create subscription config
+## Create a subscription configuration
# [Terminal](#tab/terminal)
-You need an Azure subscription key and region identifier (ex. `eastus`, `westus`) to get started. See the [Speech service overview](overview.md#find-keys-and-locationregion) documentation for steps to get these credentials.
+To get started, you need an Azure subscription key and region identifier (for example, `eastus`, `westus`). To learn how to get these credentials, see the [Speech service overview](overview.md#find-keys-and-locationregion) documentation.
-You run the following commands in a terminal to configure your subscription key and region identifier.
+To configure your subscription key and region identifier, run the following commands:
```console
spx config @key --set SUBSCRIPTION-KEY
spx config @region --set REGION
```
-The key and region are stored for future Speech CLI commands. Run the following commands to view the current configuration.
+The key and region are stored for future Speech CLI commands. To view the current configuration, run the following commands:
```console
spx config @key
spx config @region
```
-As needed, include the `clear` option to remove either stored value.
+As needed, include the `clear` option to remove either stored value:
```console
spx config @key --clear
spx config @region --clear
```
# [PowerShell](#tab/powershell)
-You need an Azure subscription key and region identifier (ex. `eastus`, `westus`) to get started. See the [Speech service overview](overview.md#find-keys-and-locationregion) documentation for steps to get these credentials.
+To get started, you need an Azure subscription key and region identifier (for example, `eastus`, `westus`). To learn how to get these credentials, see the [Speech service overview](overview.md#find-keys-and-locationregion) documentation.
-You run the following commands in PowerShell to configure your subscription key and region identifier.
+To configure your subscription key and region identifier, run the following commands in PowerShell:
```powershell
spx --% config @key --set SUBSCRIPTION-KEY
spx --% config @region --set REGION
```
-The key and region are stored for future Speech CLI commands. Run the following commands to view the current configuration.
+The key and region are stored for future SPX commands. To view the current configuration, run the following commands:
```powershell
spx --% config @key
spx --% config @region
```
-As needed, include the `clear` option to remove either stored value.
+As needed, include the `clear` option to remove either stored value:
```powershell
spx --% config @key --clear
spx --% config @region --clear
```
## Basic usage
-This section shows a few basic SPX commands that are often useful for first-time testing and experimentation. Start by viewing the help built in to the tool by running the following command.
+This section shows a few basic SPX commands that are often useful for first-time testing and experimentation. Start by viewing the help that's built into the tool by running the following command:
```console
spx
```
-You can search help topics by keyword. For example, run the following command to see a list of Speech CLI usage examples:
+You can search help topics by keyword. For example, to see a list of Speech CLI usage examples, run the following command:
```console
spx help find --topics "examples"
```
-Run the following command to see options for the recognize command:
+To see options for the recognize command, run the following command:
```console
spx help recognize
```
Additional help commands are listed in the console output. You can enter these commands to get detailed help about subcommands.
-## Speech to text (speech recognition)
+## Speech-to-text (speech recognition)
-You run this command to convert speech to text (speech recognition) using your system's default microphone.
+To convert speech to text (speech recognition) by using your system's default microphone, run the following command:
```console
spx recognize --microphone
```
-After entering the command, SPX will begin listening for audio on the current active input device, and stop when you press **ENTER**. The spoken audio is then recognized and converted to text in the console output.
+After you run the command, SPX begins listening for audio on the current active input device. It stops listening when you select **Enter**. The spoken audio is then recognized and converted to text in the console output.
-With the Speech CLI, you can also recognize speech from an audio file.
+With the Speech CLI, you can also recognize speech from an audio file. Run the following command:
```console
spx recognize --file /path/to/file.wav
```

> [!NOTE]
-> If you are using a Docker container, `--microphone` will not work.
+> If you're using a Docker container, `--microphone` will not work.
>
-> If you're recognizing speech from an audio file in a Docker container, make sure that the audio file is located in the directory that you mounted in the previous step.
+> If you're recognizing speech from an audio file in a Docker container, make sure that the audio file is located in the directory that you mounted previously.
> [!TIP]
-> If you get stuck or want to learn more about the Speech CLI's recognition options, you can run ```spx help recognize```.
+> If you get stuck or want to learn more about the Speech CLI recognition options, you can run ```spx help recognize```.
-## Text to speech (speech synthesis)
+## Text-to-speech (speech synthesis)
-Running the following command will take text as input, and output the synthesized speech to the current active output device (for example, your computer speakers).
+The following command takes text as input and then outputs the synthesized speech to the current active output device (for example, your computer speakers).
```console
spx synthesize --text "Testing synthesis using the Speech CLI" --speakers
```
-You can also save the synthesized output to file. In this example, we'll create a file named `my-sample.wav` in the directory that the command is run.
+You can also save the synthesized output to a file. In this example, let's create a file named *my-sample.wav* in the directory where you're running the command.
```console
spx synthesize --text "Enjoy using the Speech CLI." --audio output my-sample.wav
```
-These examples presume that you're testing in English. However, we support speech synthesis in many languages. You can pull down a full list of voices with this command, or by visiting the [language support page](./language-support.md).
+These examples presume that you're testing in English. However, Speech service supports speech synthesis in many languages. You can pull down a full list of voices either by running the following command or by visiting the [language support page](./language-support.md).
```console
spx synthesize --voices
```
-Here's how you use one of the voices you've discovered.
+Here's a command for using one of the voices you've discovered.
```console
spx synthesize --text "Bienvenue chez moi." --voice fr-CA-Caroline --speakers
```

> [!TIP]
-> If you get stuck or want to learn more about the Speech CLI's recognition options, you can run ```spx help synthesize```.
+> If you get stuck or want to learn more about the Speech CLI recognition options, you can run ```spx help synthesize```.
-## Speech to text translation
+## Speech-to-text translation
-With the Speech CLI, you can also do speech to text translation. Run this command to capture audio from your default microphone, and output the translation as text. Keep in mind that you need to supply the `source` and `target` language with the `translate` command.
+With the Speech CLI, you can also do speech-to-text translation. Run the following command to capture audio from your default microphone and output the translation as text. Keep in mind that you need to supply the `source` and `target` language with the `translate` command.
```console
spx translate --microphone --source en-US --target ru-RU
```
-When translating into multiple languages, separate language codes with `;`.
+When you're translating into multiple languages, separate the language codes with a semicolon (`;`).
```console
spx translate --microphone --source en-US --target ru-RU;fr-FR;es-ES
spx translate --file /some/file/path/input.wav --source en-US --target ru-RU --o
```

> [!NOTE]
-> See the [language and locale article](language-support.md) for a list of all supported languages with their corresponding locale codes.
+> For a list of all supported languages and their corresponding locale codes, see [Language and voice support for the Speech service](language-support.md).
> [!TIP]
-> If you get stuck or want to learn more about the Speech CLI's recognition options, you can run ```spx help translate```.
+> If you get stuck or want to learn more about the Speech CLI recognition options, you can run ```spx help translate```.
## Next steps
-* [Install GStreamer to use Speech CLI with MP3 and other formats](./how-to-use-codec-compressed-audio-input-streams.md)
-* [Speech CLI configuration options](./spx-data-store-configuration.md)
+* [Install GStreamer to use the Speech CLI with MP3 and other formats](./how-to-use-codec-compressed-audio-input-streams.md)
+* [Configuration options for the Speech CLI](./spx-data-store-configuration.md)
* [Batch operations with the Speech CLI](./spx-batch-operations.md)
cognitive-services Spx Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/spx-overview.md
Title: The Azure Speech CLI
-description: The Speech CLI is a command-line tool for using the Speech service without writing any code. The Speech CLI requires minimal setup, and it's easy to immediately start experimenting with key features of the Speech service to see if your use-cases can be met.
+description: In this article, you learn about the Speech CLI, a command-line tool for using Speech service without having to write any code.
# What is the Speech CLI?
-The Speech CLI is a command-line tool for using the Speech service without writing any code. The Speech CLI requires minimal setup, and it's easy to immediately start experimenting with key features of the Speech service to see if your use-cases can be met. Within minutes, you can run simple test workflows like batch speech-recognition from a directory of files, or text-to-speech on a collection of strings from a file. Beyond simple workflows, the Speech CLI is production-ready and can be scaled up to run larger processes using automated `.bat` or shell scripts.
+The Speech CLI is a command-line tool for using Speech service without having to write any code. The Speech CLI requires minimal setup. You can easily use it to experiment with key features of Speech service and see how it works with your use cases. Within minutes, you can run simple test workflows, such as batch speech-recognition from a directory of files or text-to-speech on a collection of strings from a file. Beyond simple workflows, the Speech CLI is production-ready, and you can scale it up to run larger processes by using automated `.bat` or shell scripts.
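For instance, a first batch run over a directory of audio files can be as short as the following sketch. The `--files` wildcard option used here is an assumption to verify against `spx help recognize` and the batch operations article.

```console
spx recognize --files *.wav
```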
-Most features in the Speech SDK are available in the Speech CLI, and some advanced features and customizations are simplified in the Speech CLI. Consider the following guidance to decide when to use the Speech CLI or the Speech SDK.
+Most features in the Speech SDK are available in the Speech CLI, and some advanced features and customizations are simplified in the Speech CLI. As you're deciding when to use the Speech CLI or the Speech SDK, consider the following guidance.
Use the Speech CLI when:
-* You want to experiment with Speech service features with minimal setup and no code
-* You have relatively simple requirements for a production application using the Speech service
+* You want to experiment with Speech service features with minimal setup and without having to write code.
+* You have relatively simple requirements for a production application that uses Speech service.
Use the Speech SDK when:
-* You want to integrate Speech service functionality within a specific language or platform (for example, C#, Python, C++)
-* You have complex requirements that may require advanced service requests, or developing custom behavior including response streaming
+* You want to integrate Speech service functionality within a specific language or platform (for example, C#, Python, or C++).
+* You have complex requirements that might require advanced service requests.
+* You're developing custom behavior, including response streaming.
## Core features
-* Speech recognition - Convert speech-to-text either from audio files or directly from a microphone, or transcribe a recorded conversation.
+* **Speech recognition**: Convert speech to text either from audio files or directly from a microphone, or transcribe a recorded conversation.
-* Speech synthesis - Convert text-to-speech using either input from text files, or input directly from the command line. Customize speech output characteristics using [SSML configurations](speech-synthesis-markup.md), and [neural voices](speech-synthesis-markup.md#prebuilt-neural-voices-and-custom-neural-voices).
+* **Speech synthesis**: Convert text to speech either by using input from text files or by inputting directly from the command line. Customize speech output characteristics by using [Speech Synthesis Markup Language (SSML) configurations](speech-synthesis-markup.md), and [neural voices](speech-synthesis-markup.md#prebuilt-neural-voices-and-custom-neural-voices).
-* Speech translation - Translate audio in a source language to text or audio in a target language.
+* **Speech translation**: Translate audio in a source language to text or audio in a target language.
-* Run on Azure compute resources - Send Speech CLI commands to run on an Azure remote compute resource using `spx webjob`.
+* **Run on Azure compute resources**: Send Speech CLI commands to run on an Azure remote compute resource by using `spx webjob`.
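To make the first three features concrete, the following sketch pairs each feature with the kind of command you'd run. These commands come from the quickstart; the options for `spx webjob` are listed in the tool's built-in help.

```console
# Speech recognition: convert microphone audio to text
spx recognize --microphone

# Speech synthesis: speak the given text through the default speakers
spx synthesize --text "Hello from the Speech CLI." --speakers

# Speech translation: translate English microphone audio into French text
spx translate --microphone --source en-US --target fr-FR
```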
## Get started
-To get started with the Speech CLI, see the [quickstart](spx-basics.md). This article shows you how to run some basic commands, and also shows slightly more advanced commands for running batch operations for speech-to-text and text-to-speech. After reading the basics article, you should have enough of an understanding of the syntax to start writing some custom commands, or automating simple Speech service operations.
+To get started with the Speech CLI, see the [quickstart](spx-basics.md). This article shows you how to run some basic commands. It also gives you slightly more advanced commands for running batch operations for speech-to-text and text-to-speech. After you've read the basics article, you should understand the syntax well enough to start writing some custom commands or automate simple Speech service operations.
## Next steps

-- Get started with the [Speech CLI quickstart](spx-basics.md)
-- [Configure your data store](./spx-data-store-configuration.md)
-- Learn how to [run batch operations with the Speech CLI](./spx-batch-operations.md)
+- [Get started with the Azure Speech CLI](spx-basics.md)
+- [Speech CLI configuration options](./spx-data-store-configuration.md)
+- [Speech CLI batch operations](./spx-batch-operations.md)
cognitive-services Voice Assistants https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/voice-assistants.md
Title: Voice assistants - Speech service
-description: An overview of the features, capabilities, and restrictions for voice assistants using the Speech Software Development Kit (SDK).
+description: An overview of the features, capabilities, and restrictions for voice assistants with the Speech SDK.
# What is a voice assistant?
-Voice assistants using the Speech service empowers developers to create natural, human-like conversational interfaces for their applications and experiences.
+By using voice assistants with the Speech service, developers can create natural, human-like, conversational interfaces for their applications and experiences.
-The voice assistant service provides fast, reliable interaction between a device and an assistant implementation that uses either (1) [Direct Line Speech](direct-line-speech.md) (via Azure Bot Service) for adding voice capabilities to your bots, or, (2) Custom Commands for voice commanding scenarios.
+The voice assistant service provides fast, reliable interaction between a device and an assistant implementation that uses either [Direct Line Speech](direct-line-speech.md) (via Azure Bot Service) for adding voice capabilities to your bots or Custom Commands for voice-command scenarios.
-## Choosing an assistant solution
+## Choose an assistant solution
-The first step to creating a voice assistant is to decide what it should do. The Speech service provides multiple, complementary solutions for crafting your assistant interactions. You can add voice in and voice out capabilities to your flexible and versatile bot built using Azure Bot Service with the [Direct Line Speech](direct-line-speech.md) channel, or leverage the simplicity of authoring a [Custom Commands](custom-commands.md) app for straightforward voice commanding scenarios.
+The first step in creating a voice assistant is to decide what you want it to do. Speech service provides multiple, complementary solutions for crafting assistant interactions. For flexibility and versatility, you can add voice in and voice out capabilities to a bot by using Azure Bot Service with the [Direct Line Speech](direct-line-speech.md) channel, or you can simply author a [Custom Commands](custom-commands.md) app for more straightforward voice-command scenarios.
-| If you want... | Then consider... | For example... |
+| If you want... | Consider using... | Examples |
|-||-|
|Open-ended conversation with robust skills integration and full deployment control | Azure Bot Service bot with [Direct Line Speech](direct-line-speech.md) channel | <ul><li>"I need to go to Seattle"</li><li>"What kind of pizza can I order?"</li></ul>
-|Voice commanding or simple task-oriented conversations with simplified authoring and hosting | [Custom Commands](custom-commands.md) | <ul><li>"Turn on the overhead light"</li><li>"Make it 5 degrees warmer"</li><li>Other samples [available here](https://speech.microsoft.com/customcommands)</li></ul>
+|Voice-command or simple task-oriented conversations with simplified authoring and hosting | [Custom Commands](custom-commands.md) | <ul><li>"Turn on the overhead light"</li><li>"Make it 5 degrees warmer"</li><li>More examples at [Speech Studio](https://speech.microsoft.com/customcommands)</li></ul>
-We recommend [Direct Line Speech](direct-line-speech.md) as the best default choice if you aren't yet sure what you'd like your assistant to handle. It offers integration with a rich set of tools and authoring aids such as the [Virtual Assistant Solution and Enterprise Template](/azure/bot-service/bot-builder-enterprise-template-overview) and the [QnA Maker service](../qnamaker/overview/overview.md) to build on common patterns and use your existing knowledge sources.
+If you aren't yet sure what you want your assistant to do, we recommend [Direct Line Speech](direct-line-speech.md) as the best option. It offers integration with a rich set of tools and authoring aids, such as the [Virtual Assistant solution and enterprise template](/azure/bot-service/bot-builder-enterprise-template-overview) and the [QnA Maker service](../qnamaker/overview/overview.md), to build on common patterns and use your existing knowledge sources.
-[Custom Commands](custom-commands.md) makes it easy to build rich voice commanding apps optimized for voice-first interaction experiences. It provides a unified authoring experience, an automatic hosting model, and relatively lower complexity, helping you focus on building the best solution for your voice commanding scenarios.
+If you want to keep it simpler for now, [Custom Commands](custom-commands.md) makes it easy to build rich, voice-command apps that are optimized for voice-first interaction. Custom Commands provides a unified authoring experience, an automatic hosting model, and relatively lower complexity, all of which can help you focus on building the best solution for your voice-command scenario.
- ![Comparison of assistant solutions](media/voice-assistants/assistant-solution-comparison.png "Comparison of assistant solutions")
+ ![Screenshot of a graph comparing the relative complexity and flexibility of the two voice assistant solutions.](media/voice-assistants/assistant-solution-comparison.png)
+## Reference architecture for building a voice assistant by using the Speech SDK
-## Reference Architecture for building a voice assistant using the Speech SDK
-
- ![Conceptual diagram of the voice assistant orchestration service flow](media/voice-assistants/overview.png "The voice assistant flow")
+ ![Conceptual diagram of the voice assistant orchestration service flow.](media/voice-assistants/overview.png)
## Core features
Whether you choose [Direct Line Speech](direct-line-speech.md) or [Custom Comman
| Category | Features |
|-|-|
-|[Custom keyword](./custom-keyword-basics.md) | Users can start conversations with assistants with a custom keyword like "Hey Contoso." An app does this with a custom keyword engine in the Speech SDK, which can be configured with a custom keyword [that you can generate here](./custom-keyword-basics.md). Voice assistants can use service-side keyword verification to improve the accuracy of the keyword activation (versus the device alone).
-|[Speech to text](speech-to-text.md) | Voice assistants convert real-time audio into recognized text using [Speech-to-text](speech-to-text.md) from the Speech service. This text is available, as it's transcribed, to both your assistant implementation and your client application.
-|[Text to speech](text-to-speech.md) | Textual responses from your assistant are synthesized using [Text-to-speech](text-to-speech.md) from the Speech service. This synthesis is then made available to your client application as an audio stream. Microsoft offers the ability to build your own custom, high-quality Neural TTS voice that gives a voice to your brand. To learn more, [contact us](mailto:mstts@microsoft.com).
+|[Custom keyword](./custom-keyword-basics.md) | Users can start conversations with assistants by using a custom keyword such as "Hey Contoso." An app does this with a custom keyword engine in the Speech SDK, which you can configure by going to [Get started with custom keywords](./custom-keyword-basics.md). Voice assistants can use service-side keyword verification to improve the accuracy of the keyword activation (versus using the device alone).
+|[Speech-to-text](speech-to-text.md) | Voice assistants convert real-time audio into recognized text by using [speech-to-text](speech-to-text.md) from the Speech service. This text is available, as it's transcribed, to both your assistant implementation and your client application.
+|[Text-to-speech](text-to-speech.md) | Textual responses from your assistant are synthesized through [text-to-speech](text-to-speech.md) from the Speech service. This synthesis is then made available to your client application as an audio stream. Microsoft offers the ability to build your own custom, high-quality Neural Text to Speech (Neural TTS) voice that gives a voice to your brand. To learn more, [contact us](mailto:mstts@microsoft.com).
-## Getting started with voice assistants
+## Get started with voice assistants
-We offer quickstarts designed to have you running code in less than 10 minutes. This table includes a list of voice assistant quickstarts, organized by language.
+We offer the following quickstart articles, organized by programming language, that are designed to have you running code in less than 10 minutes:
-* [Quickstart: Create a custom voice assistant using Direct Line Speech](quickstarts/voice-assistants.md)
-* [Quickstart: Build a voice commanding app using Custom Commands](quickstart-custom-commands-application.md)
+* [Quickstart: Create a custom voice assistant by using Direct Line Speech](quickstarts/voice-assistants.md)
+* [Quickstart: Build a voice-command app by using Custom Commands](quickstart-custom-commands-application.md)
-## Sample code and Tutorials
+## Sample code and tutorials
-Sample code for creating a voice assistant is available on GitHub. These samples cover the client application for connecting to your assistant in several popular programming languages.
+Sample code for creating a voice assistant is available on GitHub. The samples cover the client application for connecting to your assistant in several popular programming languages.
* [Voice assistant samples on GitHub](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant)
-* [Tutorial: Voice enable your assistant built using Azure Bot Service with the C# Speech SDK](tutorial-voice-enable-your-bot-speech-sdk.md)
+* [Tutorial: Voice-enable an assistant that's built by using Azure Bot Service with the C# Speech SDK](tutorial-voice-enable-your-bot-speech-sdk.md)
* [Tutorial: Create a Custom Commands application with simple voice commands](./how-to-develop-custom-commands-application.md)

## Customization
-Voice assistants built using Azure Speech services can use the full range of customization options.
+Voice assistants that you build by using Speech service can use a full range of customization options.
* [Custom Speech](./custom-speech-overview.md)
* [Custom Voice](how-to-custom-voice.md)
* [Custom Keyword](keyword-recognition-overview.md)

> [!NOTE]
-> Customization options vary by language/locale (see [Supported languages](language-support.md)).
+> Customization options vary by language and locale. To learn more, see [Supported languages](language-support.md).
## Next steps
cognitive-services Get Started With Document Translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/get-started-with-document-translation.md
Previously updated : 09/16/2021 Last updated : 02/02/2022 recommendations: false ms.devlang: csharp, golang, java, javascript, python
Operation-Location | https://<<span>NAME-OF-YOUR-RESOURCE>.cognitiveservices.a
} }
-}
```

### [Node.js](#tab/javascript)
cognitive-services Start Translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/start-translation.md
Previously updated : 06/22/2021 Last updated : 02/01/2022 # Start translation
Source of the input documents.
| | | | |
|filter|DocumentFilter[]|False|DocumentFilter[] listed below.|
|filter.prefix|string|False|A case-sensitive prefix string to filter documents in the source path for translation. For example, when using an Azure storage blob URI, use the prefix to restrict subfolders for translation.|
-|filter.suffix|string|False|A case-sensitive suffix string to filter documents in the source path for translation. This is most often use for file extensions.|
-|language|string|False|Language code If none is specified, we will perform auto detect on the document.|
+|filter.suffix|string|False|A case-sensitive suffix string to filter documents in the source path for translation. It's most often used for file extensions.|
+|language|string|False|Language code. If none is specified, we'll auto-detect the language of the document.|
|sourceUrl|string|True|Location of the folder / container or single file with your documents.|
|storageSource|StorageSource|False|StorageSource listed below.|
|storageSource.AzureBlob|string|False||
Destination for the finished translated documents.
|category|string|False|Category / custom system for translation request.|
|glossaries|Glossary[]|False|Glossary listed below. List of Glossary.|
|glossaries.format|string|False|Format.|
-|glossaries.glossaryUrl|string|True (if using glossaries)|Location of the glossary. We will use the file extension to extract the formatting if the format parameter isn't supplied. If the translation language pair isn't present in the glossary, it won't be applied.|
+|glossaries.glossaryUrl|string|True (if using glossaries)|Location of the glossary. We'll use the file extension to extract the formatting if the format parameter isn't supplied. If the translation language pair isn't present in the glossary, it won't be applied.|
|glossaries.storageSource|StorageSource|False|StorageSource listed above.|
|glossaries.version|string|False|Optional Version. If not specified, default is used.|
|targetUrl|string|True|Location of the folder / container with your documents.|
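Pieced together from the tables above, a single entry in `targets` might look like the following sketch. The URLs, SAS tokens, category, and glossary format are placeholders, not values from a real request.

```json
{
  "targetUrl": "https://my-storage.blob.core.windows.net/target-fr?<sas-token>",
  "storageSource": "AzureBlob",
  "category": "general",
  "language": "fr",
  "glossaries": [
    {
      "glossaryUrl": "https://my-storage.blob.core.windows.net/glossaries/my-glossary.tsv?<sas-token>",
      "format": "TSV"
    }
  ]
}
```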
Destination for the finished translated documents.
The following are examples of batch requests.
+> [!NOTE]
+> In the following examples, limited access has been granted to the contents of an Azure Storage container by using a [shared access signature (SAS)](/azure/storage/common/storage-sas-overview) token.
+ **Translating all documents in a container** ```json
The following are examples of batch requests.
**Translating all documents in a container applying glossaries**
-Ensure you have created glossary URL & SAS token for the specific blob/document (not for the container)
- ```json { "inputs": [
Ensure you have created glossary URL & SAS token for the specific blob/document
**Translating specific folder in a container**
-Ensure you have specified the folder name (case sensitive) as prefix in filter ΓÇô though the SAS token is still for the container.
+Make sure you've specified the folder name (case-sensitive) as the prefix in the filter.
```json {
Ensure you have specified the folder name (case sensitive) as prefix in filter
**Translating specific document in a container**
-* Ensure you have specified "storageType": "File"
-* Ensure you have created source URL & SAS token for the specific blob/document (not for the container)
-* Ensure you have specified the target filename as part of the target URL ΓÇô though the SAS token is still for the container.
-* Sample request below shows a single document getting translated into two target languages
+* Specify "storageType": "File"
+* Create source URL & SAS token for the specific blob/document.
+* Specify the target filename as part of the target URL, though the SAS token is still for the container.
+
+The following sample request shows a single document translated into two target languages.
```json {
The following are the possible HTTP status codes that a request returns.
| | |
|202|Accepted. The request was successful, and the batch request was created by the service. The Operation-Location header indicates a status URL with the operation ID. Headers: Operation-Location: string|
|400|Bad Request. Invalid request. Check input parameters.|
-|401|Unauthorized. Please check your credentials.|
+|401|Unauthorized. Check your credentials.|
|429|Request rate is too high.|
|500|Internal Server Error.|
-|503|Service is currently unavailable. Please try again later.|
+|503|Service is currently unavailable. Try again later.|
|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|

## Error response
The following are the possible HTTP status codes that a request returns.
| | | |
|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
|message|string|Gets high-level error message.|
-|innerError|InnerTranslationError|New Inner Error format which conforms to Cognitive Services API Guidelines. This contains required properties ErrorCode, message and optional properties target, details(key value pair), inner error(this can be nested).|
+|innerError|InnerTranslationError|New inner error format that conforms to Cognitive Services API Guidelines. It contains the required properties ErrorCode and message, and the optional properties target, details (key-value pair), and innerError (which can be nested).|
|inner.Errorcode|string|Gets code error string.|
|innerError.message|string|Gets high-level error message.|
-|innerError.target|string|Gets the source of the error. For example it would be "documents" or "document id" in case of invalid document.|
+|innerError.target|string|Gets the source of the error. For example, it would be "documents" or "document ID" if the document is invalid.|
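Based on the fields above, an error response might look like the following sketch. The specific codes, messages, and target are illustrative only.

```json
{
  "error": {
    "code": "InvalidRequest",
    "message": "Some argument is incorrect.",
    "innerError": {
      "code": "InvalidDocument",
      "message": "The document is invalid.",
      "target": "documents"
    }
  }
}
```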
## Examples
Follow our quickstart to learn more about using Document Translation and the cli
> [!div class="nextstepaction"] > [Get started with Document Translation](../get-started-with-document-translation.md)+
cognitive-services Rest Api Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/reference/rest-api-guide.md
Text Translation is a cloud-based feature of the Azure Translator service and is
|[**detect**](v3-0-detect.md) | **POST** | Identify the source language. |
|[**breakSentence**](v3-0-break-sentence.md) | **POST** | Returns an array of integers representing the length of sentences in a source text. |
| [**dictionary/lookup**](v3-0-dictionary-lookup.md) | **POST** | Returns alternatives for single word translations. |
-| [**dictionary/examples**](v3-0-dictionary-lookup.md) | **POST** | Returns how a term is used in context. |
+| [**dictionary/examples**](v3-0-dictionary-examples.md) | **POST** | Returns how a term is used in context. |
> [!div class="nextstepaction"]
> [Create a Translator resource in the Azure portal.](../translator-how-to-signup.md)
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/named-entity-recognition/quickstart.md
Previously updated : 11/02/2021 Last updated : 02/02/2022 ms.devlang: csharp, java, javascript, python
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/personally-identifiable-information/quickstart.md
Previously updated : 11/19/2021 Last updated : 02/02/2022 ms.devlang: csharp, java, javascript, python
communication-services Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/known-issues.md
A number of specific Android devices fail to join calls and meetings. The device
### iOS 15.1 users joining group calls or Microsoft Teams meetings.
-* Low volume. Known regression introduced by Apple with the release of iOS 15.1. Related webkit bug [here](https://bugs.webkit.org/show_bug.cgi?id=230902).
* Sometimes when incoming PSTN is received, the tab with the call or meeting will hang. Related webkit bugs [here](https://bugs.webkit.org/show_bug.cgi?id=233707) and [here](https://bugs.webkit.org/show_bug.cgi?id=233708#c0).

### Device mutes and incoming video stops rendering when certain interruptions occur on iOS Safari.
To recover from all these cases, the user must go back to the application to unm
Occasionally, microphone or camera devices won't be released on time, and that can cause issues with the original call. For example, if the user tries to unmute while watching a YouTube video, or if a PSTN call is active simultaneously.
+If the user is on iOS 15.2+ and is using SDK version 1.4.1-beta.1+, incoming video streams won't stop rendering. However, the unmute/start video steps are still required to restart outgoing audio and video.
+
### iOS with Safari crashes and refreshes the page if a user tries to switch from front camera to back camera.

ACS Calling SDK version 1.2.3-beta.1 introduced a bug that affects all of the calls made from iOS Safari. The problem occurs when a user tries to switch the camera video stream from front to back. Switching the camera causes the Safari browser to crash and reload the page.
connectors Connectors Sftp Ssh https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/connectors-sftp-ssh.md
Title: Connect to SFTP server with SSH
-description: Automate tasks that monitor, create, manage, send, and receive files for an SFTP server by using SSH and Azure Logic Apps
+description: Automate tasks that monitor, create, manage, send, and receive files for an SFTP server by using SSH and Azure Logic Apps.
ms.suite: integration -- Previously updated : 01/12/2022++ Last updated : 02/02/2022 tags: connectors
For differences between the SFTP-SSH connector and the SFTP connector, review th
  * OpenText GXS
  * Globalscape
  * SFTP for Azure Blob Storage
+ * FileMage Gateway
-* SFTP-SSH actions that support [chunking](../logic-apps/logic-apps-handle-large-messages.md) can handle files up to 1 GB, while SFTP-SSH actions that don't support chunking can handle files up to 50 MB. The default chunk size is 15 MB. However, this size can dynamically change, starting from 5 MB and gradually increasing to the 50-MB maximum. Dynamic sizing is based on factors such as network latency, server response time, and so on.
-
- > [!NOTE]
- > For logic apps in an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md),
- > this connector's ISE-labeled version requires chunking to use the [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) instead.
-
- You can override this adaptive behavior when you [specify a constant chunk size](#change-chunk-size) to use instead. This size can range from 5 MB to 50 MB. For example, suppose you have a 45-MB file and a network that can that support that file size without latency. Adaptive chunking results in several calls, rather that one call. To reduce the number of calls, you can try setting a 50-MB chunk size. In different scenario, if your logic app is timing out, for example, when using 15-MB chunks, you can try reducing the size to 5 MB.
-
- Chunk size is associated with a connection. This attribute means you can use the same connection for both actions that support chunking and actions that don't support chunking. In this case, the chunk size for actions that don't support chunking ranges from 5 MB to 50 MB. This table shows which SFTP-SSH actions support chunking:
+* The following SFTP-SSH actions support [chunking](../logic-apps/logic-apps-handle-large-messages.md):
| Action | Chunking support | Override chunk size support |
|--||--|
For differences between the SFTP-SSH connector and the SFTP connector, review th
| **Update file** | No | Not applicable |
||||
-* SFTP-SSH triggers don't support message chunking. When requesting file content, triggers select only files that are 15 MB or smaller. To get files larger than 15 MB, follow this pattern instead:
+ SFTP-SSH actions that support chunking can handle files up to 1 GB, while SFTP-SSH actions that don't support chunking can handle files up to 50 MB. The default chunk size is 15 MB. However, this size can dynamically change, starting from 5 MB and gradually increasing to the 50-MB maximum. Dynamic sizing is based on factors such as network latency, server response time, and so on.
+
+ > [!NOTE]
+ > For logic apps in an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md),
+ > this connector's ISE-labeled version requires chunking to use the [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) instead.
+
+ You can override this adaptive behavior when you [specify a constant chunk size](#change-chunk-size) to use instead. This size can range from 5 MB to 50 MB. For example, suppose you have a 45-MB file and a network that can support that file size without latency. Adaptive chunking results in several calls, rather than one call. To reduce the number of calls, you can try setting a 50-MB chunk size. In a different scenario, if your logic app is timing out, for example, when using 15-MB chunks, you can try reducing the size to 5 MB.
+
+ Chunk size is associated with a connection. This attribute means you can use the same connection for both actions that support chunking and actions that don't support chunking. In this case, the chunk size for actions that don't support chunking ranges from 5 MB to 50 MB.
+
+* SFTP-SSH triggers don't support message chunking. When triggers request file content, they select only files that are 15 MB or smaller. To get files larger than 15 MB, follow this pattern instead:
1. Use an SFTP-SSH trigger that returns only file properties. These triggers have names that include the description, **(properties only)**.
The following list describes key SFTP-SSH capabilities that differ from the SFTP
## Prerequisites
-* An Azure subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* Your SFTP server address and account credentials, so your workflow can access your SFTP account. You also need access to an SSH private key and the SSH private key password. To upload large files using chunking, you need both read and write access for the root folder on your SFTP server. Otherwise, you get a "401 Unauthorized" error. The SFTP-SSH connector supports both private key authentication and password authentication. However, the SFTP-SSH connector supports *only* these private key formats, encryption algorithms, fingerprints, and key exchange algorithms:

  * **Private key formats**: RSA (Rivest Shamir Adleman) and DSA (Digital Signature Algorithm) keys in both OpenSSH and ssh.com formats. If your private key is in PuTTY (.ppk) file format, first [convert the key to the OpenSSH (.pem) file format](#convert-to-openssh).
- * **Encryption algorithms**: DES-EDE3-CBC, DES-EDE3-CFB, DES-CBC, AES-128-CBC, AES-192-CBC, and AES-256-CBC
+ * **Encryption algorithms**: Review [Encryption Method - SSH.NET](https://github.com/sshnet/SSH.NET#encryption-method).
* **Fingerprint**: MD5
- * **Key exchange algorithms**: curve25519-sha256, curve25519-sha256@libssh.org, ecdh-sha2-nistp256, ecdh-sha2-nistp384, ecdh-sha2-nistp521, diffie-hellman-group-exchange-sha256, diffie-hellman-group-exchange-sha1, diffie-hellman-group16-sha512, diffie-hellman-group14-sha256, diffie-hellman-group14-sha1, and diffie-hellman-group1-sha1
+ * **Key exchange algorithms**: Review [Key Exchange Method - SSH.NET](https://github.com/sshnet/SSH.NET#key-exchange-method).
After you add an SFTP-SSH trigger or action to your workflow, you have to provide connection information for your SFTP server. When you provide your SSH private key for this connection, ***don't manually enter or edit the key***, which might cause the connection to fail. Instead, make sure that you ***copy the key*** from your SSH private key file, and ***paste*** that key into the connection details. For more information, see the [Connect to SFTP with SSH](#connect) section later this article.
When a trigger finds a new file, the trigger checks that the new file is complet
### Trigger recurrence shift and drift
-Connection-based triggers where you need to create a connection first, such as the SFTP-SSH trigger, differ from built-in triggers that run natively in Azure Logic Apps, such as the [Recurrence trigger](../connectors/connectors-native-recurrence.md). In recurring connection-based triggers, the recurrence schedule isn't the only driver that controls execution, and the time zone only determines the initial start time. Subsequent runs depend on the recurrence schedule, the last trigger execution, *and* other factors that might cause run times to drift or produce unexpected behavior. For example, unexpected behavior can include failure to maintain the specified schedule when daylight saving time (DST) starts and ends. To make sure that the recurrence time doesn't shift when DST takes effect, manually adjust the recurrence. That way, your workflow continues to run at the expected time. Otherwise, the start time shifts one hour forward when DST starts and one hour backward when DST ends. For more information, see [Recurrence for connection-based triggers](../connectors/apis-list.md#recurrence-for-connection-based-triggers).
+Connection-based triggers where you need to create a connection first, such as the SFTP-SSH trigger, differ from built-in triggers that run natively in Azure Logic Apps, such as the [Recurrence trigger](../connectors/connectors-native-recurrence.md). In connection-based recurrence triggers, the schedule isn't the only driver that controls execution, and the time zone only determines the initial start time. Subsequent runs depend on the recurrence schedule, the last trigger execution, *and* other factors that might cause run times to drift or produce unexpected behavior. For example, unexpected behavior can include failure to maintain the specified schedule when daylight saving time (DST) starts and ends. To make sure that the recurrence time doesn't shift when DST takes effect, manually adjust the recurrence. That way, your workflow continues to run at the expected time. Otherwise, the start time shifts one hour forward when DST starts and one hour backward when DST ends. For more information, see [Recurrence for connection-based triggers](../connectors/apis-list.md#recurrence-for-connection-based-triggers).
<a name="convert-to-openssh"></a>
If this trigger problem happens, remove the files from the folder that the trigg
To create a file on your SFTP server, you can use the SFTP-SSH **Create file** action. When this action creates the file, the Logic Apps service also automatically calls your SFTP server to get the file's metadata. However, if you move the newly created file before the Logic Apps service can make the call to get the metadata, you get a `404` error message, `'A reference was made to a file or folder which does not exist'`. To skip reading the file's metadata after file creation, follow the steps to [add and set the **Get all file metadata** property to **No**](#file-does-not-exist).
+> [!IMPORTANT]
+> If you use chunking with SFTP-SSH operations that create files on your SFTP server,
+> these operations create temporary `.partial` and `.lock` files. These files help
+> the operations use chunking. Don't remove or change these files. Otherwise,
+> the file operations fail. When the operations finish, they delete the temporary files.
<a name="connect"></a>

## Connect to SFTP with SSH
container-apps Microservices Dapr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-apps/microservices-dapr.md
Get the storage account key with the following command:
STORAGE_ACCOUNT_KEY=`az storage account keys list --resource-group $RESOURCE_GROUP --account-name $STORAGE_ACCOUNT --query '[0].value' --out tsv`
```
-```bash
-echo $STORAGE_ACCOUNT_KEY
-```
# [PowerShell](#tab/powershell)

```powershell
$STORAGE_ACCOUNT_KEY=(Get-AzStorageAccountKey -ResourceGroupName $RESOURCE_GROUP -AccountName $STORAGE_ACCOUNT)| Where-Object -Property KeyName -Contains 'key1' | Select-Object -ExpandProperty Value
```
-```powershell
-echo $STORAGE_ACCOUNT_KEY
-```
Create a config file named *components.yaml* with the properties that you source
# should be securely stored. For more information, see
# https://docs.dapr.io/operations/components/component-secrets
- name: accountName
- value: <YOUR_STORAGE_ACCOUNT_NAME>
+ secretRef: storage-account-name
- name: accountKey
- value: <YOUR_STORAGE_ACCOUNT_KEY>
+ secretRef: storage-account-key
- name: containerName
- value: <YOUR_STORAGE_CONTAINER_NAME>
+ value: mycontainer
```
-To use this file, make sure to replace the placeholder values between the `<>` brackets with your own values.
+To use this file, make sure to replace the value of `containerName` with your own value if you've changed the `STORAGE_ACCOUNT_CONTAINER` variable from its original value, `mycontainer`.
> [!NOTE]
> Container Apps does not currently support the native [Dapr components schema](https://docs.dapr.io/operations/components/component-schema/). The above example uses the supported schema.
->
-> In a production-grade application, follow [secret management](https://docs.dapr.io/operations/components/component-secrets) instructions to securely manage your secrets.
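Putting the changes above together, the metadata section of *components.yaml* ends up looking like the following sketch. Only the metadata entries are shown, and the secret names must match the names passed to `--secrets` in the deploy commands that follow.

```yaml
metadata:
# Secret references resolve against the secrets passed to the
# container app through the --secrets flag in the next step.
- name: accountName
  secretRef: storage-account-name
- name: accountKey
  secretRef: storage-account-key
- name: containerName
  value: mycontainer
```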
## Deploy the service application (HTTP web server)
az containerapp create \
  --enable-dapr \
  --dapr-app-port 3000 \
  --dapr-app-id nodeapp \
+ --secrets "storage-account-name=${STORAGE_ACCOUNT},storage-account-key=${STORAGE_ACCOUNT_KEY}" \
  --dapr-components ./components.yaml
```
az containerapp create `
  --enable-dapr `
  --dapr-app-port 3000 `
  --dapr-app-id nodeapp `
+ --secrets "storage-account-name=${STORAGE_ACCOUNT},storage-account-key=${STORAGE_ACCOUNT_KEY}" `
  --dapr-components ./components.yaml
```
Remove-AzResourceGroup -Name $RESOURCE_GROUP -Force
This command deletes the resource group that includes all of the resources created in this tutorial.
- [!NOTE]
+
+> [!NOTE]
> Since `pythonapp` continuously makes calls to `nodeapp` with messages that get persisted into your configured state store, it is important to complete these cleanup steps to avoid ongoing billable operations. > [!TIP]
container-apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-apps/overview.md
Applications built on Azure Container Apps can dynamically scale based on the fo
Azure Container Apps enables executing application code packaged in any container and is unopinionated about runtime or programming model. With Container Apps, you enjoy the benefits of running containers while leaving behind the concerns of managing cloud infrastructure and complex container orchestrators.
+## Features
+
With Azure Container Apps, you can:

- [**Run multiple container revisions**](application-lifecycle-management.md) and manage the container app's application lifecycle.
With Azure Container Apps, you can:
<sup>1</sup> Applications that [scale on CPU or memory load](scale-app.md) can't scale to zero.
+## Introductory video
+
+> [!VIDEO https://www.youtube.com/embed/b3dopSTnSRg]
+ ### Next steps > [!div class="nextstepaction"]
container-registry Buffer Gate Public Content https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/buffer-gate-public-content.md
Title: Manage public content in private container registry
description: Practices and workflows in Azure Container Registry to manage dependencies on public images from Docker Hub and other public content -+ Last updated 02/01/2022
container-registry Container Registry Troubleshoot Login https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-troubleshoot-login.md
May include one or more of the following:
* Unable to login to registry and you receive error `unauthorized: authentication required` or `unauthorized: Application not registered with AAD`
* Unable to login to registry and you receive Azure CLI error `Could not connect to the registry login server`
* Unable to push or pull images and you receive Docker error `unauthorized: authentication required`
+* Unable to access a registry using `az acr login` and you receive error `CONNECTIVITY_REFRESH_TOKEN_ERROR. Access to registry was denied. Response code: 403.Unable to get admin user credentials with message: Admin user is disabled.Unable to authenticate using AAD or admin login credentials.`
* Unable to access registry from Azure Kubernetes Service, Azure DevOps, or another Azure service
* Unable to access registry and you receive error `Error response from daemon: login attempt failed with status: 403 Forbidden` - See [Troubleshoot network issues with registry](container-registry-troubleshoot-access.md)
* Unable to access or view registry settings in Azure portal or manage registry using the Azure CLI
May include one or more of the following:
* Docker isn't configured properly in your environment - [solution](#check-docker-configuration)
* The registry doesn't exist or the name is incorrect - [solution](#specify-correct-registry-name)
* The registry credentials aren't valid - [solution](#confirm-credentials-to-access-registry)
+* Public network access to the registry is disabled, or network access rules on the registry prevent access - [solution](container-registry-troubleshoot-access.md#configure-public-access-to-registry)
* The credentials aren't authorized for push, pull, or Azure Resource Manager operations - [solution](#confirm-credentials-are-authorized-to-access-registry)
* The credentials are expired - [solution](#check-that-credentials-arent-expired)
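When you check credentials, a short sequence like the following sketch can help isolate the failing step. The registry name is a placeholder.

```console
# Confirm the registry exists and resolve its login server
az acr show --name myregistry --query loginServer

# Sign in through the Azure CLI, which obtains a token for the Docker CLI
az acr login --name myregistry
```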
cosmos-db Account Databases Containers Items https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/account-databases-containers-items.md
An Azure Cosmos container has a set of system-defined properties. Depending on w
|TimeToLive | User-configurable | Provides the ability to delete items automatically from a container after a set time period. For details, see [Time to Live](time-to-live.md). | Yes | No | No | No | Yes |
|changeFeedPolicy | User-configurable | Used to read changes made to items in a container. For details, see [Change feed](change-feed.md). | Yes | No | No | No | Yes |
|uniqueKeyPolicy | User-configurable | Used to ensure the uniqueness of one or more values in a logical partition. For more information, see [Unique key constraints](unique-keys.md). | Yes | No | No | No | Yes |
+|AnalyticalTimeToLive | User-configurable | Provides the ability to delete items automatically from a container's analytical store after a set time period. For details, see [Analytical store](analytical-store-introduction.md). | Yes | No | Yes | No | No |
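As a sketch, you could set this property with the Azure CLI as follows. The resource names are placeholders, and `--analytical-storage-ttl -1` keeps items in the analytical store indefinitely.

```console
az cosmosdb sql container update \
    --account-name myaccount \
    --resource-group mygroup \
    --database-name mydb \
    --name mycontainer \
    --analytical-storage-ttl -1
```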
### Operations on an Azure Cosmos container
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/analytical-store-introduction.md
Analytical store partitioning is completely independent of partitioning in
## Security
-* **Authentication with the analytical store** is the same as the transactional store for a given database. You can use primary, secondary, or read-only keys for authentication. You can leverage linked service in Synapse Studio to prevent pasting the Azure Cosmos DB keys in the Spark notebooks. For Azure Synapse SQL serverless, you can use SQL credentials to also prevent pasting the Azure Cosmos DB keys in the SQL notebooks. The Access to these Linked Services or to these SQL credentials are available to anyone who has access to the workspace.
+* **Authentication with the analytical store** is the same as the transactional store for a given database. You can use primary, secondary, or read-only keys for authentication. You can leverage a linked service in Synapse Studio to prevent pasting the Azure Cosmos DB keys in the Spark notebooks. For Azure Synapse SQL serverless, you can use SQL credentials to also prevent pasting the Azure Cosmos DB keys in the SQL notebooks. Access to these linked services or SQL credentials is available to anyone who has access to the workspace. Note that the Azure Cosmos DB read-only key can also be used.
* **Network isolation using private endpoints** - You can control network access to the data in the transactional and analytical stores independently. Network isolation is done using separate managed private endpoints for each store, within managed virtual networks in Azure Synapse workspaces. To learn more, see how to [Configure private endpoints for analytical store](analytical-store-private-endpoints.md) article.
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/introduction.md
Build fast with open source APIs, multiple SDKs, schemaless data and no-ETL anal
- Deeply integrated with key Azure services used in modern (cloud-native) app development including Azure Functions, IoT Hub, AKS (Azure Kubernetes Service), App Service, and more.
- Choose from multiple database APIs including the native Core (SQL) API, API for MongoDB, Cassandra API, Gremlin API, and Table API.
- Build apps on Core (SQL) API using the languages of your choice with SDKs for .NET, Java, Node.js and Python. Or your choice of drivers for any of the other database APIs.
-- Run no-ETL analytics over the near-real time operational data stored in Azure Cosmos DB with Azure Synapse Analytics.
- Change feed makes it easy to track and manage changes to database containers and create triggered events with Azure Functions.
- Azure Cosmos DB's schema-less service automatically indexes all your data, regardless of the data model, to deliver blazing fast queries.
End-to-end database management, with serverless and automatic scaling matching y
- The serverless model offers spiky workloads an automatic and responsive service to manage traffic bursts on demand.
- Autoscale provisioned throughput automatically and instantly scales capacity for unpredictable workloads, while maintaining [SLAs](https://azure.microsoft.com/support/legal/sla/cosmos-db).
+### Azure Synapse Link for Azure Cosmos DB
+
+[Azure Synapse Link for Azure Cosmos DB](synapse-link.md) is a cloud-native hybrid transactional and analytical processing (HTAP) capability that enables near-real-time analytics over operational data in Azure Cosmos DB. Azure Synapse Link creates a tight, seamless integration between Azure Cosmos DB and Azure Synapse Analytics.
+
+- Reduced analytics complexity with no ETL jobs to manage.
+- Near real-time insights into your operational data.
+- No impact on operational workloads.
+- Optimized for large-scale analytics workloads.
+- Cost effective.
+- Analytics for locally available, globally distributed, multi-region writes.
+- Native integration with Azure Synapse Analytics.
++
## Solutions that benefit from Azure Cosmos DB

Any [web, mobile, gaming, and IoT application](use-cases.md) that needs to handle massive amounts of data, reads, and writes at a [global scale](distribute-data-globally.md) with near-real-time response for a variety of data will benefit from Cosmos DB's [guaranteed high availability](https://azure.microsoft.com/support/legal/sl#web-and-mobile-applications).
cosmos-db Mongodb Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb/mongodb-introduction.md
The Azure Cosmos DB API for MongoDB makes it easy to use Cosmos DB as if it were
The API for MongoDB has numerous added benefits of being built on [Azure Cosmos DB](../introduction.md) when compared to service offerings such as MongoDB Atlas:

* **Instantaneous scalability**: By enabling the [Autoscale](../provision-throughput-autoscale.md) feature, your database can scale up/down with zero warmup period.
-* **Automatic and transparent sharding**: The API for MongoDB manages all of the infrastructure for you. This includes sharding and the number of shards, unlike other MongoDB offerings such as MongoDB Atlas, which require your to specify and manage sharding to horizontally scale. This gives you more time to focus on developing applications for your users.
+* **Automatic and transparent sharding**: The API for MongoDB manages all of the infrastructure for you. This includes sharding and the number of shards, unlike other MongoDB offerings such as MongoDB Atlas, which require you to specify and manage sharding to horizontally scale. This gives you more time to focus on developing applications for your users.
* **Five 9's of availability**: [99.999% availability](../high-availability.md) is easily configurable to ensure your data is always there for you.
-* **Cost efficient, granular, unlimited scalability**: Sharded collections can scale to any size, unlike other MongoDB service offerings. API for MongoDB users are running databases with over 600TB of storage today. Scaling is done in a cost-efficient manner, since unlike other MongoDB service offering, the Cosmos DB platform can scale in increments as small as 1/100th of a VM due to economies of scale and resource governance.
+* **Cost efficient, granular, unlimited scalability**: Sharded collections can scale to any size, unlike other MongoDB service offerings. APIs for MongoDB users are running databases with over 600TB of storage today. Scaling is done in a cost-efficient manner, since unlike other MongoDB service offering, the Cosmos DB platform can scale in increments as small as 1/100th of a VM due to economies of scale and resource governance.
* **Serverless deployments**: Unlike MongoDB Atlas, the API for MongoDB is a cloud native database that offers a [serverless capacity mode](../serverless.md). With [Serverless](../serverless.md), you are only charged per operation, and don't pay for the database when you don't use it.
* **Free Tier**: With Azure Cosmos DB free tier, you'll get the first 1000 RU/s and 25 GB of storage in your account for free forever, applied at the account level.
* **Upgrades take seconds**: All API versions are contained within one codebase, making version changes as simple as [flipping a switch](upgrade-mongodb-version.md), with zero downtime.
Azure Cosmos DB API for MongoDB is compatible with the following MongoDB server
- [Version 3.6](feature-support-36.md)
- [Version 3.2](feature-support-32.md)
-All the API for MongoDB versions run on the same codebase, making upgrades a simple task that can be completed in seconds with zero downtime. Azure Cosmos DB simply flips a few feature flags to go from one version to another. The feature flags also enable continued support for older API versions such as 3.2 and 3.6. You can choose the server version that works best for you.
+All the APIs for MongoDB versions run on the same codebase, making upgrades a simple task that can be completed in seconds with zero downtime. Azure Cosmos DB simply flips a few feature flags to go from one version to another. The feature flags also enable continued support for older API versions such as 3.2 and 3.6. You can choose the server version that works best for you.
:::image type="content" source="./media/mongodb-introduction/cosmosdb-mongodb.png" alt-text="Azure Cosmos DB's API for MongoDB" border="false":::
Trying to do capacity planning for a migration to Azure Cosmos DB? You can use i
* Connect to a Cosmos account using [Robo 3T](connect-using-robomongo.md).
* Learn how to [Configure read preferences for globally distributed apps](tutorial-global-distribution-mongodb.md).
* Find the solutions to commonly found errors in our [Troubleshooting guide](error-codes-solutions.md)
+* Configure near real-time analytics with [Azure Synapse Link for Azure Cosmos DB](../configure-synapse-link.md)
<sup>Note: This article describes a feature of Azure Cosmos DB that provides wire protocol compatibility with MongoDB databases. Microsoft does not run MongoDB databases to provide this service. Azure Cosmos DB is not affiliated with MongoDB, Inc.</sup>
cosmos-db Performance Tips Java Sdk V4 Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/performance-tips-java-sdk-v4-sql.md
For a variety of reasons, you may want or need to add logging in a thread which
* ***Configure an async logger***
-The latency of a synchronous logger necessarily factors into the overall latency calculation of your request-generating thread. An async logger such as [log4j2](https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Flogging.apache.org%2Flog4j%2Flog4j-2.3%2Fmanual%2Fasync.html&data=02%7C01%7CCosmosDBPerformanceInternal%40service.microsoft.com%7C36fd15dea8384bfe9b6b08d7c0cf2113%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637189868158267433&sdata=%2B9xfJ%2BWE%2F0CyKRPu9AmXkUrT3d3uNA9GdmwvalV3EOg%3D&reserved=0) is recommended to decouple logging overhead from your high-performance application threads.
+The latency of a synchronous logger necessarily factors into the overall latency calculation of your request-generating thread. An async logger such as [log4j2](https://logging.apache.org/log4j/log4j-2.3/manual/async.html) is recommended to decouple logging overhead from your high-performance application threads.
* ***Disable netty's logging***
data-lake-store Data Lake Store Get Started Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-store/data-lake-store-get-started-python.md
To work with Data Lake Storage Gen1 using Python, you need to install three modu
Use the following commands to install the modules.

```console
+pip install azure-identity
pip install azure-mgmt-resource
pip install azure-mgmt-datalake-store
pip install azure-datalake-store
```
2. Add the following snippet to import the required modules.

    ```python
- ## Use this only for Azure AD service-to-service authentication
- from azure.common.credentials import ServicePrincipalCredentials
-
- ## Use this only for Azure AD end-user authentication
- from azure.common.credentials import UserPassCredentials
-
- ## Use this only for Azure AD multi-factor authentication
- from msrestazure.azure_active_directory import AADTokenCredentials
+ # Acquire a credential object for the app identity. When running in the cloud,
+ # DefaultAzureCredential uses the app's managed identity (MSI) or user-assigned service principal.
+ # When run locally, DefaultAzureCredential relies on environment variables named
+ # AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, and AZURE_TENANT_ID.
+ from azure.identity import DefaultAzureCredential
    ## Required for Data Lake Storage Gen1 account management
    from azure.mgmt.datalake.store import DataLakeStoreAccountManagementClient
- from azure.mgmt.datalake.store.models import DataLakeStoreAccount
+ from azure.mgmt.datalake.store.models import CreateDataLakeStoreAccountParameters
    ## Required for Data Lake Storage Gen1 filesystem management
    from azure.datalake.store import core, lib, multithread

    # Common Azure imports
+ import adal
    from azure.mgmt.resource.resources import ResourceManagementClient
    from azure.mgmt.resource.resources.models import ResourceGroup
- ## Use these as needed for your application
+ # Use these as needed for your application
    import logging, getpass, pprint, uuid, time
    ```
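For local development, here is a minimal sketch of supplying those environment variables before constructing the credential (the service principal values below are placeholders, not real identifiers):

```python
import os
from azure.identity import DefaultAzureCredential

# Local runs only: DefaultAzureCredential falls back to these environment
# variables when no managed identity is available. All three values are
# placeholders for your own service principal.
os.environ.setdefault("AZURE_CLIENT_ID", "<service-principal-app-id>")
os.environ.setdefault("AZURE_CLIENT_SECRET", "<service-principal-secret>")
os.environ.setdefault("AZURE_TENANT_ID", "<tenant-id>")

credential = DefaultAzureCredential()
```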
subscriptionId = 'FILL-IN-HERE'
adlsAccountName = 'FILL-IN-HERE'
resourceGroup = 'FILL-IN-HERE'
location = 'eastus2'
+credential = DefaultAzureCredential()
## Create Data Lake Storage Gen1 account management client object
-adlsAcctClient = DataLakeStoreAccountManagementClient(armCreds, subscriptionId)
+adlsAcctClient = DataLakeStoreAccountManagementClient(credential, subscription_id=subscriptionId)
## Create a Data Lake Storage Gen1 account
-adlsAcctResult = adlsAcctClient.account.create(
+adlsAcctResult = adlsAcctClient.accounts.begin_create(
resourceGroup, adlsAccountName,
- DataLakeStoreAccount(
+ CreateDataLakeStoreAccountParameters(
        location=location
    )
-).wait()
+)
```
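Note that `begin_create` starts a long-running operation and returns a poller rather than the finished account. A minimal sketch, assuming the standard `azure-mgmt` `LROPoller` interface, for blocking until provisioning completes:

```python
# begin_create returns an LROPoller; result() blocks until the long-running
# create operation finishes and returns the provisioned account object.
adlsAccount = adlsAcctResult.result()
print(adlsAccount.name)
```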
```python
## List the existing Data Lake Storage Gen1 accounts
-result_list_response = adlsAcctClient.account.list()
+result_list_response = adlsAcctClient.accounts.list()
result_list = list(result_list_response)
for items in result_list:
    print(items)
```
```python
## Delete an existing Data Lake Storage Gen1 account
-adlsAcctClient.account.delete(adlsAccountName)
+adlsAcctClient.accounts.begin_delete(resourceGroup, adlsAccountName)
```
## See also

* [azure-datalake-store Python (Filesystem) reference](/python/api/azure-datalake-store/azure.datalake.store.core)
-* [Open Source Big Data applications compatible with Azure Data Lake Storage Gen1](data-lake-store-compatible-oss-other-applications.md)
+* [Open Source Big Data applications compatible with Azure Data Lake Storage Gen1](data-lake-store-compatible-oss-other-applications.md)
data-share How To Share From Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/how-to-share-from-sql.md
Previously updated : 12/17/2021 Last updated : 02/02/2022 # Share and receive data from Azure SQL Database and Azure Synapse Analytics [!INCLUDE [appliesto-sql](includes/appliesto-sql.md)]
-Azure Data Share allows you to securely share data snapshots from your Azure SQL Database and Azure Synapse Analytics resources, to other Azure subscriptions. Including Azure subscriptions outside your tenant. This article will guide you through what kinds of data can be shared, how to prepare you environment, how to create a share, and how to receive shared data.
+[Azure Data Share](overview.md) allows you to securely share data snapshots from your Azure SQL Database and Azure Synapse Analytics resources to other Azure subscriptions, including Azure subscriptions outside your tenant.
+
+This article describes sharing data from **Azure SQL Database** and **Azure Synapse Analytics**, but Azure Data Share also allows sharing from these other kinds of resources:
+
+- [Azure Storage](how-to-share-from-storage.md)
+- [Azure Data Explorer](/data-explorer/data-share.md)
+
+This article will guide you through:
+
+- [What kinds of data can be shared](#whats-supported)
+- [How to prepare your environment](#prerequisites-to-share-data)
+- [How to create a share](#create-a-share)
+- [How to receive shared data](#receive-shared-data)
You can use the table of contents to jump to the section you need, or continue with this article to follow the process from start to finish.
Azure Data Share supports sharing full data snapshots from several SQL resources
> [!NOTE] > Currently, Azure Data Share does not support sharing from these resources:
-> * Azure Synapse Analytics (workspace) serverless SQL pool
-> * Azure SQL databases with Always Encrypted configured
+>
+> - Azure Synapse Analytics (workspace) serverless SQL pool
+> - Azure SQL databases with Always Encrypted configured
-### Receive shared data
+### Receive data
Data consumers can choose to accept shared data into several Azure resources:
-* Azure Data Lake Storage Gen2
-* Azure Blob Storage
-* Azure SQL Database
-* Azure Synapse Analytics
+- Azure Data Lake Storage Gen2
+- Azure Blob Storage
+- Azure SQL Database
+- Azure Synapse Analytics
Shared data in **Azure Data Lake Storage Gen2** or **Azure Blob Storage** can be stored as a CSV or Parquet file. Full data snapshots overwrite the contents of the target file if it already exists.
-Shared data in **Azure SQL Database** and **Azure Synapse Analytics** is stored in tables. If the target table doesn't already exist, Azure Data Share creates the SQL table with the source schema. If a target table with the same name already exists, it will be dropped and overwritten with the latest full snapshot.
+Shared data in **Azure SQL Database** and **Azure Synapse Analytics** is stored in tables. If the target table doesn't already exist, Azure Data Share creates the SQL table with the source schema. If a target table with the same name already exists, it will be dropped and overwritten with the latest full snapshot.
->[!NOTE]
+>[!NOTE]
> For source SQL tables with dynamic data masking, data will appear masked on the recipient side.

### Supported data types
-When you share data from a SQL source, the following mappings are used from SQL Server data types to Azure Data Share interim data types during the snapshot process.
+
+When you share data from a SQL source, the following mappings are used from SQL Server data types to Azure Data Share interim data types during the snapshot process.
>[!NOTE]
-> 1. For data types that map to the Decimal interim type, currently snapshot supports precision up to 28. If you have data that requires precision larger than 28, consider converting to a string.
-> 1. If you are sharing data from Azure SQL database to Azure Synapse Analytics, not all data types are supported. Refer to [Table data types in dedicated SQL pool](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-data-types.md) for details.
+>
+> 1. For data types that map to the Decimal interim type, currently snapshot supports precision up to 28. If you have data that requires precision larger than 28, consider converting to a string.
+> 1. If you are sharing data from Azure SQL database to Azure Synapse Analytics, not all data types are supported. Refer to [Table data types in dedicated SQL pool](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-data-types.md) for details.
| SQL Server data type | Azure Data Share interim data type |
|: |: |
When you share data from a SQL source, the following mappings are used from SQL
| varchar |String, Char[] |
| xml |String |
-## Prerequisites to share data
+## Prerequisites to share data
To share data snapshots from your Azure SQL resources, you first need to prepare your environment. You'll need:
-* An Azure subscription: If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
-* An [Azure SQL Database](../azure-sql/database/single-database-create-quickstart.md) or [Azure Synapse Analytics (formerly Azure SQL DW)](../synapse-analytics/get-started-create-workspace.md) with tables and views that you want to share.
-* [An Azure Data Share account](share-your-data-portal.md#create-a-data-share-account).
-* Your data recipient's Azure sign-in e-mail address (using their e-mail alias won't work).
-* If your Azure SQL resource is in a different Azure subscription than your Azure Data Share account, register the [Microsoft.DataShare resource provider](concepts-roles-permissions.md#resource-provider-registration) in the subscription where your source Azure SQL resource is located.
+- An Azure subscription: If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- An [Azure SQL Database](../azure-sql/database/single-database-create-quickstart.md) or [Azure Synapse Analytics (formerly Azure SQL DW)](../synapse-analytics/get-started-create-workspace.md) with tables and views that you want to share.
+- [An Azure Data Share account](share-your-data-portal.md#create-a-data-share-account).
+- Your data recipient's Azure sign-in e-mail address (using their e-mail alias won't work).
+- If your Azure SQL resource is in a different Azure subscription than your Azure Data Share account, register the [Microsoft.DataShare resource provider](concepts-roles-permissions.md#resource-provider-registration) in the subscription where your source Azure SQL resource is located.
-There are also source-specific prerequisites for sharing. Select your data share source and follow the steps:
+### Source-specific prerequisites
-* [Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW)](#prerequisitesforsharingazuresqlorsynapse)
-* [Azure Synapse Analytics (workspace) SQL pool](#prerequisitesforsharingazuresynapseworkspace)
+There are also prerequisites for sharing that depend on where your data is coming from. Select your data share source and follow the steps:
+
+- [Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW)](#prerequisitesforsharingazuresqlorsynapse)
+- [Azure Synapse Analytics (workspace) SQL pool](#prerequisitesforsharingazuresynapseworkspace)
<a id="prerequisitesforsharingazuresqlorsynapse"></a> ### Prerequisites for sharing from Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW) You can use one of these methods to authenticate with Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW):
-* [Azure Active Directory authentication](#azure-active-directory-authentication)
-* [SQL authentication](#sql-authentication)
+
+- [Azure Active Directory authentication](#azure-active-directory-authentication)
+- [SQL authentication](#sql-authentication)
#### Azure Active Directory authentication

These prerequisites cover the authentication you'll need so Azure Data Share can connect with your Azure SQL Database:
-* You'll need permission to write to the databases on SQL server: *Microsoft.Sql/servers/databases/write*. This permission exists in the Azure RBAC **Contributor** role.
-* SQL Server **Azure Active Directory Admin** permissions.
-* SQL Server Firewall access:
+- You'll need permission to write to the databases on SQL server: *Microsoft.Sql/servers/databases/write*. This permission exists in the Azure RBAC **Contributor** role.
+- SQL Server **Azure Active Directory Admin** permissions.
+- SQL Server Firewall access:
    1. In the [Azure portal](https://portal.azure.com/), navigate to your SQL server. Select *Firewalls and virtual networks* from left navigation.
    1. Select **Yes** for *Allow Azure services and resources to access this server*.
    1. Select **+Add client IP**. Client IP address can change, so you may need to add your client IP again next time you share data from the portal.
These prerequisites cover the authentication you'll need so Azure Data Share can
You can follow the [step by step demo video](https://youtu.be/hIE-TjJD8Dc) to configure authentication, or complete each of these prerequisites:
-* Permission to write to the databases on SQL server: *Microsoft.Sql/servers/databases/write*. This permission exists in the **Contributor** role.
-* Permission for the Azure Data Share resource's managed identity to access the database:
+- Permission to write to the databases on SQL server: *Microsoft.Sql/servers/databases/write*. This permission exists in the **Contributor** role.
+- Permission for the Azure Data Share resource's managed identity to access the database:
1. In the [Azure portal](https://portal.azure.com/), navigate to the SQL server and set yourself as the **Azure Active Directory Admin**.
- 1. Connect to the Azure SQL Database/Data Warehouse using the [Query Editor](../azure-sql/database/connect-query-portal.md#connect-using-azure-active-directory) or SQL Server Management Studio with Azure Active Directory authentication.
- 1. Execute the following script to add the Data Share resource-Managed Identity as a db_datareader. Connect using Active Directory and not SQL Server authentication.
-
+ 1. Connect to the Azure SQL Database/Data Warehouse using the [Query Editor](../azure-sql/database/connect-query-portal.md#connect-using-azure-active-directory) or SQL Server Management Studio with Azure Active Directory authentication.
+ 1. Execute the following script to add the Data Share resource-Managed Identity as a db_datareader. Connect using Active Directory and not SQL Server authentication.
    ```sql
    create user "<share_acct_name>" from external provider;
    exec sp_addrolemember db_datareader, "<share_acct_name>";
- ```
+ ```
+ > [!Note]
- > The *<share_acc_name>* is the name of your Data Share resource.
+ > The *<share_acct_name>* is the name of your Data Share resource.
-* An Azure SQL Database User with **'db_datareader'** access to navigate and select the tables or views you wish to share.
+- An Azure SQL Database User with **'db_datareader'** access to navigate and select the tables or views you wish to share.
-* SQL Server Firewall access:
+- SQL Server Firewall access:
    1. In the [Azure portal](https://portal.azure.com/), navigate to SQL server. Select *Firewalls and virtual networks* from left navigation.
    1. Select **Yes** for *Allow Azure services and resources to access this server*.
    1. Select **+Add client IP**. Client IP address can change, so you may need to add your client IP again next time you share data from the portal.
- 1. Select **Save**.
+ 1. Select **Save**.
<a id="prerequisitesforsharingazuresynapseworkspace"></a> ### Prerequisites for sharing from Azure Synapse Analytics (workspace) SQL pool
-* Permission to write to the SQL pool in Synapse workspace: *Microsoft.Synapse/workspaces/sqlPools/write*. This permission exists in the **Contributor** role.
-* Permission for the Data Share resource's managed identity to access Synapse workspace SQL pool:
+- Permission to write to the SQL pool in Synapse workspace: *Microsoft.Synapse/workspaces/sqlPools/write*. This permission exists in the **Contributor** role.
+- Permission for the Data Share resource's managed identity to access Synapse workspace SQL pool:
    1. In the [Azure portal](https://portal.azure.com/), navigate to your Synapse workspace. Select **SQL Active Directory admin** from left navigation and set yourself as the **Azure Active Directory admin**.
    1. Open the Synapse Studio, select **Manage** from the left navigation. Select **Access control** under Security. Assign yourself the **SQL admin** or **Workspace admin** role.
- 1. Select **Develop** from the left navigation in the Synapse Studio. Execute the following script in SQL pool to add the Data Share resource-Managed Identity as a db_datareader.
-
+ 1. Select **Develop** from the left navigation in the Synapse Studio. Execute the following script in SQL pool to add the Data Share resource-Managed Identity as a db_datareader.
    ```sql
    create user "<share_acct_name>" from external provider;
    exec sp_addrolemember db_datareader, "<share_acct_name>";
- ```
+ ```
+ > [!Note]
+ > The *<share_acct_name>* is the name of your Data Share resource.
-* Synapse workspace Firewall access:
+- Synapse workspace Firewall access:
    1. In the [Azure portal](https://portal.azure.com/), navigate to Synapse workspace. Select **Firewalls** from left navigation.
    1. Select **ON** for **Allow Azure services and resources to access this workspace**.
    1. Select **+Add client IP**. Client IP address can change, so you may need to add your client IP again next time you share data from the portal.
- 1. Select **Save**.
+ 1. Select **Save**.
## Create a share

1. Navigate to your Data Share Overview page.
- ![Share your data](./media/share-receive-data.png "Share your data")
+ ![Share your data](./media/share-receive-data.png "Share your data")
1. Select **Start sharing your data**.
-1. Select **Create**.
+1. Select **Create**.
-1. Fill out the details for your share. Specify a name, share type, description of share contents, and terms of use (optional).
+1. Fill out the details for your share. Specify a name, share type, description of share contents, and terms of use (optional).
- ![EnterShareDetails](./media/enter-share-details.png "Enter Share details")
+ ![EnterShareDetails](./media/enter-share-details.png "Enter Share details")
1. Select **Continue**.
-1. To add Datasets to your share, select **Add Datasets**.
+1. To add Datasets to your share, select **Add Datasets**.
![Add Datasets to your share](./media/datasets.png "Datasets")
-1. Select the dataset type that you would like to add. There will be a different list of dataset types depending on the share type (snapshot or in-place) you selected in the previous step.
+1. Select the dataset type that you would like to add. There will be a different list of dataset types depending on the share type (snapshot or in-place) you selected in the previous step.
- ![AddDatasets](./media/add-datasets.png "Add Datasets")
+ ![AddDatasets](./media/add-datasets.png "Add Datasets")
-1. Select your SQL server or Synapse workspace. If you're using Azure Active Directory authentication and the checkbox **Allow Data Share to run the above 'create user' SQL script on my behalf** appears, check the checkbox. If you're using SQL authentication, provide credentials, and be sure you have followed the prerequisites so that you have permissions.
+1. Select your SQL server or Synapse workspace. If you're using Azure Active Directory authentication and the checkbox **Allow Data Share to run the above 'create user' SQL script on my behalf** appears, check the checkbox. If you're using SQL authentication, provide credentials, and be sure you've followed the prerequisites so that you have permissions.
- Select **Next** to navigate to the object you would like to share and select 'Add Datasets'. You can select tables and views from Azure SQL Database and Azure Synapse Analytics (formerly Azure SQL DW), or tables from Azure Synapse Analytics (workspace) dedicated SQL pool.
+ Select **Next** to navigate to the object you would like to share and select 'Add Datasets'. You can select tables and views from Azure SQL Database and Azure Synapse Analytics (formerly Azure SQL DW), or tables from Azure Synapse Analytics (workspace) dedicated SQL pool.
- ![SelectDatasets](./media/select-datasets-sql.png "Select Datasets")
+ ![SelectDatasets](./media/select-datasets-sql.png "Select Datasets")
-1. In the Recipients tab, enter in the email addresses of your Data Consumer by selecting '+ Add Recipient'. The email address needs to be recipient's Azure sign-in email.
+1. In the Recipients tab, enter the email addresses of your Data Consumer by selecting '+ Add Recipient'. The email address needs to be the recipient's Azure sign-in email.
- ![AddRecipients](./media/add-recipient.png "Add recipients")
+ ![AddRecipients](./media/add-recipient.png "Add recipients")
1. Select **Continue**.
-1. If you have selected snapshot share type, you can configure snapshot schedule to provide updates of your data to your data consumer.
+1. If you selected the snapshot share type, you can configure a snapshot schedule to provide updates of your data to your data consumer.
- ![EnableSnapshots](./media/enable-snapshots.png "Enable snapshots")
+ ![EnableSnapshots](./media/enable-snapshots.png "Enable snapshots")
-1. Select a start time and recurrence interval.
+1. Select a start time and recurrence interval.
1. Select **Continue**.
1. In the Review + Create tab, review your Package Contents, Settings, Recipients, and Synchronization Settings. Select **Create**.
-Your Azure Data Share has now been created and the recipient of your Data Share can now accept your invitation.
+Your Azure Data Share has now been created and the recipient of your Data Share can now accept your invitation.
## Prerequisites to receive data

Before you can accept a data share invitation, you need to prepare your environment. Confirm that all prerequisites are complete before accepting a data share invitation:
-* Azure Subscription: If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
-* A Data Share invitation: An invitation from Microsoft Azure with a subject titled "Azure Data Share invitation from **<yourdataprovider@domain.com>**".
-* Register the [Microsoft.DataShare resource provider](concepts-roles-permissions.md#resource-provider-registration) in the Azure subscription where you will create a Data Share resource and the Azure subscription where your target Azure data stores are located.
-* You'll need a resource in Azure to store the shared data. You can use these kinds of resources:
- - [Azure Storage](../storage/common/storage-account-create.md)
- - [Azure SQL Database](../azure-sql/database/single-database-create-quickstart.md)
- - [Azure Synapse Analytics (formerly Azure SQL DW)](../synapse-analytics/get-started-create-workspace.md)
- - [Azure Synapse Analytics (workspace) dedicated SQL pool](../synapse-analytics/get-started-analyze-sql-pool.md)
+- Azure Subscription: If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- A Data Share invitation: An invitation from Microsoft Azure with a subject titled "Azure Data Share invitation from **<yourdataprovider@domain.com>**".
+- Register the [Microsoft.DataShare resource provider](concepts-roles-permissions.md#resource-provider-registration) in the Azure subscription where you'll create a Data Share resource and the Azure subscription where your target Azure data stores are located.
+- You'll need a resource in Azure to store the shared data. You can use these kinds of resources:
+ - [Azure Storage](../storage/common/storage-account-create.md)
+ - [Azure SQL Database](../azure-sql/database/single-database-create-quickstart.md)
+ - [Azure Synapse Analytics (formerly Azure SQL DW)](../synapse-analytics/get-started-create-workspace.md)
+ - [Azure Synapse Analytics (workspace) dedicated SQL pool](../synapse-analytics/get-started-analyze-sql-pool.md)
-There are also prerequisites for the resource where the received data will be stored.
+There are also prerequisites for the resource where the received data will be stored.
Select your resource type and follow the steps:
-* [Azure Storage prerequisites](#prerequisites-for-target-storage-account)
-* [Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW) prerequisites](#prerequisitesforreceivingtoazuresqlorsynapse)
-* [Azure Synapse Analytics (workspace) SQL pool prerequisites](#prerequisitesforreceivingtoazuresynapseworkspacepool)
+- [Azure Storage prerequisites](#prerequisites-for-target-storage-account)
+- [Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW) prerequisites](#prerequisitesforreceivingtoazuresqlorsynapse)
+- [Azure Synapse Analytics (workspace) SQL pool prerequisites](#prerequisitesforreceivingtoazuresynapseworkspacepool)
### Prerequisites for target storage account

If you choose to receive data into Azure Storage, complete these prerequisites before accepting a data share:
-* An [Azure Storage account](../storage/common/storage-account-create.md).
-* Permission to write to the storage account: *Microsoft.Storage/storageAccounts/write*. This permission exists in the Azure RBAC **Contributor** role.
-* Permission to add role assignment of the Data Share resource's managed identity to the storage account: which is present in *Microsoft.Authorization/role assignments/write*. This permission exists in the Azure RBAC **Owner** role.
+- An [Azure Storage account](../storage/common/storage-account-create.md).
+- Permission to write to the storage account: *Microsoft.Storage/storageAccounts/write*. This permission exists in the Azure RBAC **Contributor** role.
+- Permission to add the role assignment of the Data Share resource's managed identity to the storage account: *Microsoft.Authorization/role assignments/write*. This permission exists in the Azure RBAC **Owner** role.
<a id="prerequisitesforreceivingtoazuresqlorsynapse"></a>
-### Prerequisites for receiving data into Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW)
+### Prerequisites for receiving data into Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW)
For a SQL server where you're the **Azure Active Directory admin** of the SQL server, complete these prerequisites before accepting a data share:
-* An [Azure SQL Database](../azure-sql/database/single-database-create-quickstart.md) or [Azure Synapse Analytics (formerly Azure SQL DW)](../synapse-analytics/get-started-create-workspace.md).
-* Permission to write to the databases on SQL server: *Microsoft.Sql/servers/databases/write*. This permission exists in the Azure RBAC **Contributor** role.
-* SQL Server Firewall access:
+- An [Azure SQL Database](../azure-sql/database/single-database-create-quickstart.md) or [Azure Synapse Analytics (formerly Azure SQL DW)](../synapse-analytics/get-started-create-workspace.md).
+- Permission to write to the databases on SQL server: *Microsoft.Sql/servers/databases/write*. This permission exists in the Azure RBAC **Contributor** role.
+- SQL Server Firewall access:
    1. In the [Azure portal](https://portal.azure.com/), navigate to your SQL server. Select **Firewalls and virtual networks** from left navigation.
    1. Select **Yes** for *Allow Azure services and resources to access this server*.
    1. Select **+Add client IP**. Client IP address can change, so you may need to add your client IP again next time you share data from the portal.
- 1. Select **Save**.
-
-For a SQL server where you're **not** the **Azure Active Directory admin**, complete these prerequisites before accepting a data share:
+ 1. Select **Save**.
+
+For a SQL server where you're **not** the **Azure Active Directory admin**, complete these prerequisites before accepting a data share:
You can follow the [step by step demo video](https://youtu.be/aeGISgK1xro), or the steps below to configure prerequisites.
-* An [Azure SQL Database](../azure-sql/database/single-database-create-quickstart.md) or [Azure Synapse Analytics (formerly Azure SQL DW)](../synapse-analytics/get-started-create-workspace.md).
-* Permission to write to databases on the SQL server: *Microsoft.Sql/servers/databases/write*. This permission exists in the Azure RBAC **Contributor** role.
-* Permission for the Data Share resource's managed identity to access the Azure SQL Database or Azure Synapse Analytics:
+- An [Azure SQL Database](../azure-sql/database/single-database-create-quickstart.md) or [Azure Synapse Analytics (formerly Azure SQL DW)](../synapse-analytics/get-started-create-workspace.md).
+- Permission to write to databases on the SQL server: *Microsoft.Sql/servers/databases/write*. This permission exists in the Azure RBAC **Contributor** role.
+- Permission for the Data Share resource's managed identity to access the Azure SQL Database or Azure Synapse Analytics:
1. In the [Azure portal](https://portal.azure.com/), navigate to the SQL server and set yourself as the **Azure Active Directory Admin**.
- 1. Connect to the Azure SQL Database/Data Warehouse using the [Query Editor](../azure-sql/database/connect-query-portal.md#connect-using-azure-active-directory) or SQL Server Management Studio with Azure Active Directory authentication.
+ 1. Connect to the Azure SQL Database/Data Warehouse using the [Query Editor](../azure-sql/database/connect-query-portal.md#connect-using-azure-active-directory) or SQL Server Management Studio with Azure Active Directory authentication.
    1. Execute the following script to add the Data Share Managed Identity as a 'db_datareader, db_datawriter, db_ddladmin'.

    ```sql
    exec sp_addrolemember db_datareader, "<share_acc_name>";
    exec sp_addrolemember db_datawriter, "<share_acc_name>";
    exec sp_addrolemember db_ddladmin, "<share_acc_name>";
- ```
+ ```
+ > [!Note]
- > The *<share_acc_name>* is the name of your Data Share resource.
+ > The *<share_acc_name>* is the name of your Data Share resource.
-* SQL Server Firewall access:
+- SQL Server Firewall access:
    1. In the [Azure portal](https://portal.azure.com/), navigate to the SQL server and select **Firewalls and virtual networks**.
    1. Select **Yes** for **Allow Azure services and resources to access this server**.
    1. Select **+Add client IP**. Client IP address can change, so you may need to add your client IP again next time you share data from the portal.
- 1. Select **Save**.
-
+ 1. Select **Save**.
+ <a id="prerequisitesforreceivingtoazuresynapseworkspacepool"></a> ### Prerequisites for receiving data into Azure Synapse Analytics (workspace) SQL pool
-* An Azure Synapse Analytics (workspace) dedicated SQL pool. Receiving data into serverless SQL pool is not currently supported.
-* Permission to write to the SQL pool in Synapse workspace: *Microsoft.Synapse/workspaces/sqlPools/write*. This permission exists in the Azure RBAC **Contributor** role.
-* Permission for the Data Share resource's managed identity to access the Synapse workspace SQL pool:
- 1. In the [Azure portal](https://portal.azure.com/), navigate to Synapse workspace.
+- An Azure Synapse Analytics (workspace) dedicated SQL pool. Receiving data into serverless SQL pool isn't currently supported.
+- Permission to write to the SQL pool in Synapse workspace: *Microsoft.Synapse/workspaces/sqlPools/write*. This permission exists in the Azure RBAC **Contributor** role.
+- Permission for the Data Share resource's managed identity to access the Synapse workspace SQL pool:
+ 1. In the [Azure portal](https://portal.azure.com/), navigate to Synapse workspace.
    1. Select SQL Active Directory admin from left navigation and set yourself as the **Azure Active Directory admin**.
    1. Open Synapse Studio, select **Manage** from the left navigation. Select **Access control** under Security. Assign yourself the **SQL admin** or **Workspace admin** role.
- 1. In Synapse Studio, select **Develop** from the left navigation. Execute the following script in SQL pool to add the Data Share resource-Managed Identity as a 'db_datareader, db_datawriter, db_ddladmin'.
-
+ 1. In Synapse Studio, select **Develop** from the left navigation. Execute the following script in SQL pool to add the Data Share resource-Managed Identity as a 'db_datareader, db_datawriter, db_ddladmin'.
    ```sql
    create user "<share_acc_name>" from external provider;
    exec sp_addrolemember db_datareader, "<share_acc_name>";
    exec sp_addrolemember db_datawriter, "<share_acc_name>";
    exec sp_addrolemember db_ddladmin, "<share_acc_name>";
- ```
+ ```
+ > [!Note]
+ > The *<share_acc_name>* is the name of your Data Share resource.
-* Synapse workspace Firewall access:
+- Synapse workspace Firewall access:
    1. In the [Azure portal](https://portal.azure.com/), navigate to Synapse workspace. Select *Firewalls* from left navigation.
    1. Select **ON** for **Allow Azure services and resources to access this workspace**.
- 1. Select **+Add client IP**. Client IP address is subject to change. This process might need to be repeated the next time you are sharing SQL data from Azure portal.
- 1. Select **Save**.
+ 1. Select **+Add client IP**. Client IP address is subject to change. This process might need to be repeated the next time you're sharing SQL data from Azure portal.
+ 1. Select **Save**.
## Receive shared data

### Open invitation
-You can open invitation from email or directly from the [Azure portal](https://portal.azure.com/).
+You can open invitation from email or directly from the [Azure portal](https://portal.azure.com/).
-To open an invitation from email, check your inbox for an invitation from your data provider. The invitation is from Microsoft Azure, titled **Azure Data Share invitation from <yourdataprovider@domain.com>**. Select **View invitation** to see your invitation in Azure.
+To open an invitation from email, check your inbox for an invitation from your data provider. The invitation is from Microsoft Azure, titled **Azure Data Share invitation from <yourdataprovider@domain.com>**. Select **View invitation** to see your invitation in Azure.
To open an invitation from Azure portal directly, search for **Data Share Invitations** in the Azure portal, which takes you to the list of Data Share invitations. If you're a guest user on a tenant, you'll need to verify your email address for the tenant before viewing a Data Share invitation for the first time. Once verified, your email is valid for 12 months.
-![List of Invitations](./media/invitations.png "List of invitations")
+![List of Invitations](./media/invitations.png "List of invitations")
-Then, select the share you would like to view.
+Then, select the share you would like to view.
### Accept invitation
-1. Make sure all fields are reviewed, including the **Terms of Use**. If you agree to the terms of use, you'll be required to check the box to indicate you agree.
- ![Terms of use](./media/terms-of-use.png "Terms of use")
+1. Make sure all fields are reviewed, including the **Terms of Use**. If you agree to the terms of use, you'll be required to check the box to indicate you agree.
+
+ ![Terms of use](./media/terms-of-use.png "Terms of use")
-1. Under *Target Data Share Account*, select the Subscription and Resource Group that you'll be deploying your Data Share into.
+1. Under *Target Data Share Account*, select the Subscription and Resource Group that you'll be deploying your Data Share into.
-1. For the **Data Share Account** field, select **Create new** if you don't have an existing Data Share account. Otherwise, select an existing Data Share account that you'd like to accept your data share into.
+1. For the **Data Share Account** field, select **Create new** if you don't have an existing Data Share account. Otherwise, select an existing Data Share account that you'd like to accept your data share into.
-1. For the **Received Share Name** field, you may leave the default specified by the data provide, or specify a new name for the received share.
+1. For the **Received Share Name** field, you may leave the default specified by the data provider, or specify a new name for the received share.
-1. Once you've agreed to the terms of use and specified a Data Share account to manage your received share, Select **Accept and configure**. A share subscription will be created.
+1. Once you've agreed to the terms of use and specified a Data Share account to manage your received share, select **Accept and configure**. A share subscription will be created.
- ![Accept options](./media/accept-options.png "Accept options")
+ ![Accept options](./media/accept-options.png "Accept options")
-If you don't want to accept the invitation, Select *Reject*.
+If you don't want to accept the invitation, select *Reject*.
### Configure received share

Follow the steps below to configure where you want to receive data.
-1. Select **Datasets** tab. Check the box next to the dataset you'd like to assign a destination to. Select **+ Map to target** to choose a target data store.
+1. Select the **Datasets** tab. Check the box next to the dataset you'd like to assign a destination to. Select **+ Map to target** to choose a target data store.
- ![Map to target](./media/dataset-map-target.png "Map to target")
+ ![Map to target](./media/dataset-map-target.png "Map to target")
1. Select the target resource to store the shared data. Any data files or tables in the target data store with the same path and name will be overwritten. If you're receiving data into a SQL store and the **Allow Data Share to run the above 'create user' SQL script on my behalf** checkbox appears, check the checkbox. Otherwise, follow the instructions in the prerequisites to run the script that appears on the screen. This will give the Data Share resource write permission to your target SQL database.
- ![Target storage account](./media/dataset-map-target-sql.png "Target Data Store")
+ ![Target storage account](./media/dataset-map-target-sql.png "Target Data Store")
-1. For snapshot-based sharing, if the data provider has created a snapshot schedule to provide regular updates to the data, you can also enable snapshot schedule by selecting the **Snapshot Schedule** tab. Check the box next to the snapshot schedule and select **+ Enable**.
+1. For snapshot-based sharing, if the data provider has created a snapshot schedule to provide regular updates to the data, you can also enable the snapshot schedule by selecting the **Snapshot Schedule** tab. Check the box next to the snapshot schedule and select **+ Enable**.
> [!NOTE]
> The first scheduled snapshot will start within one minute of the schedule time and the next snapshots will start within seconds of the scheduled time.
![Enable snapshot schedule](./media/enable-snapshot-schedule.png "Enable snapshot schedule")

### Trigger a snapshot

These steps only apply to snapshot-based sharing.
-1. You can trigger a snapshot by selecting **Details** tab followed by **Trigger snapshot**. Here, you can trigger a full snapshot of your data. If it's your first time receiving data from your data provider, select full copy. When a snapshot is executing, the next snapshots will not start until the previous one is complete.
+1. You can trigger a snapshot by selecting the **Details** tab, followed by **Trigger snapshot**. Here, you can trigger a full snapshot of your data. If it's your first time receiving data from your data provider, select full copy. When a snapshot is executing, the next snapshots won't start until the previous one is complete.
- ![Trigger snapshot](./media/trigger-snapshot.png "Trigger snapshot")
+ ![Trigger snapshot](./media/trigger-snapshot.png "Trigger snapshot")
-1. When the last run status is *successful*, go to target data store to view the received data. Select **Datasets**, and select the link in the Target Path.
+1. When the last run status is *successful*, go to the target data store to view the received data. Select **Datasets**, and select the link in the Target Path.
- ![Consumer datasets](./media/consumer-datasets.png "Consumer dataset mapping")
+ ![Consumer datasets](./media/consumer-datasets.png "Consumer dataset mapping")
### View history
-This step only applies to snapshot-based sharing. To view history of your snapshots, select **History** tab. Here you'll find history of all snapshots that were generated for the past 30 days.
+
+This step only applies to snapshot-based sharing. To view the history of your snapshots, select the **History** tab. Here you'll find the history of all snapshots generated over the past 30 days.
## Snapshot performance
-SQL snapshot performance is impacted by many factors. It is always recommended to conduct your own performance testing. Below are some example factors impacting performance.
-* Source or destination data store input/output operations per second (IOPS) and bandwidth.
-* Hardware configuration (For example: vCores, memory, DWU) of the source and target SQL data store.
-* Concurrent access to the source and target data stores. If you are sharing multiple tables and views from the same SQL data store, or receive multiple tables and views into the same SQL data store, performance will be impacted.
-* Network bandwidth between the source and destination data stores, and location of source and target data stores.
-* Size of the tables and views being shared. SQL snapshot sharing does a full copy of the entire table. If the size of the table grows over time, snapshot will take longer.
+SQL snapshot performance is impacted by many factors. It's always recommended to conduct your own performance testing. Below are some example factors impacting performance.
+
+- Source or destination data store input/output operations per second (IOPS) and bandwidth.
+- Hardware configuration (For example: vCores, memory, DWU) of the source and target SQL data store.
+- Concurrent access to the source and target data stores. If you're sharing multiple tables and views from the same SQL data store, or receive multiple tables and views into the same SQL data store, performance will be impacted.
+- Network bandwidth between the source and destination data stores, and location of source and target data stores.
+- Size of the tables and views being shared. SQL snapshot sharing does a full copy of the entire table. If the size of the table grows over time, snapshot will take longer.
For large tables where incremental updates are desired, you can export updates to a storage account and use the storage account's incremental sharing capability for faster performance.

## Troubleshoot snapshot failure
-The most common cause of snapshot failure is that Data Share does not have permission to the source or target data store. In order to grant Data Share permission to the source or target Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW), you must run the provided SQL script when connecting to the SQL database using Azure Active Directory authentication. To troubleshoot other SQL snapshot failures, refer to [Troubleshoot snapshot failure](data-share-troubleshoot.md#snapshots).
+
+The most common cause of snapshot failure is that Data Share doesn't have permission to the source or target data store. In order to grant Data Share permission to the source or target Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW), you must run the provided SQL script when connecting to the SQL database using Azure Active Directory authentication. To troubleshoot other SQL snapshot failures, refer to [Troubleshoot snapshot failure](data-share-troubleshoot.md#snapshots).
## Next steps
-You have learned how to share and receive data from SQL sources using Azure Data Share service. To learn more about sharing from other data sources, continue to [supported data stores](supported-data-stores.md).
+
+You've learned how to share and receive data from SQL sources using the Azure Data Share service. To learn more about sharing from other data sources, continue to [supported data stores](supported-data-stores.md).
data-share How To Share From Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/how-to-share-from-storage.md
Previously updated : 09/10/2021 Last updated : 02/02/2022 + # Share and receive data from Azure Blob Storage and Azure Data Lake Storage [!INCLUDE[appliesto-storage](includes/appliesto-storage.md)]
-Azure Data Share supports snapshot-based sharing from a storage account. This article explains how to share and receive data from Azure Blob Storage, Azure Data Lake Storage Gen1, and Azure Data Lake Storage Gen2.
+[Azure Data Share](overview.md) allows you to securely share data snapshots from your Azure storage resources to other Azure subscriptions, including Azure subscriptions outside your tenant.
-Azure Data Share supports the sharing of files, folders, and file systems from Azure Data Lake Gen1 and Azure Data Lake Gen2. It also supports the sharing of blobs, folders, and containers from Azure Blob Storage. You can share block, append, or page blobs, and they are received as block blobs. Data shared from these sources can be received by Azure Data Lake Gen2 or Azure Blob Storage.
+This article describes sharing data from **Azure Blob Storage**, **Azure Data Lake Storage Gen1**, and **Azure Data Lake Storage Gen2**.
+However, Azure Data Share also allows sharing from these other kinds of resources:
-When file systems, containers, or folders are shared in snapshot-based sharing, data consumers can choose to make a full copy of the share data. Or they can use the incremental snapshot capability to copy only new or updated files. The incremental snapshot capability is based on the last modified time of the files.
+- [Azure SQL Database and Azure Synapse Analytics](how-to-share-from-sql.md)
+- [Azure Data Explorer](/data-explorer/data-share.md)
-Existing files that have the same name are overwritten during a snapshot. A file that is deleted from the source isn't deleted on the target. Empty subfolders at the source aren't copied over to the target.
+This article will guide you through:
-## Share data
+- [What kinds of data can be shared](#whats-supported)
+- [How to prepare your environment](#prerequisites-to-share-data)
+- [How to create a share](#create-a-share)
+- [How to receive shared data](#receive-shared-data)
-Use the information in the following sections to share data by using Azure Data Share.
-### Prerequisites to share data
+You can use the table of contents to jump to the section you need, or continue with this article to follow the process from start to finish.
-* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
-* Find your recipient's Azure sign-in email address. The recipient's email alias won't work for your purposes.
-* If the source Azure data store is in a different Azure subscription than the one where you'll create the Data Share resource, register the [Microsoft.DataShare resource provider](concepts-roles-permissions.md#resource-provider-registration) in the subscription where the Azure data store is located.
+## What's supported
-### Prerequisites for the source storage account
+Azure Data Share supports sharing data from Azure Data Lake Gen1, Azure Data Lake Gen2, and Azure Storage.
-* An Azure Storage account. If you don't already have an account, [create one](../storage/common/storage-account-create.md).
-* Permission to write to the storage account. Write permission is in *Microsoft.Storage/storageAccounts/write*. It's part of the Contributor role.
-* Permission to add role assignment to the storage account. This permission is in *Microsoft.Authorization/role assignments/write*. It's part of the Owner role.
+|Resource type | Sharable resource |
+|-|--|
+|Azure Data Lake Gen1 and Gen2 |Files |
+||Folders|
+||File systems|
+|Azure Storage |*Blobs |
+||Folders|
+||Containers|
-### Sign in to the Azure portal
+>[!NOTE]
+> *Block, append, and page blobs are all supported. However, when they are shared, they will be received as **block blobs**.
-Sign in to the [Azure portal](https://portal.azure.com/).
+Data shared from these sources can be received by Azure Data Lake Gen2 or Azure Blob Storage.
-### Create a Data Share account
+### Share behavior
-Create an Azure Data Share resource in an Azure resource group.
+For file systems, containers, or folders, you can choose to make full or incremental snapshots of your data.
-1. In the upper-left corner of the portal, open the menu and then select **Create a resource** (+).
+A **full snapshot** copies all specified files and folders at every snapshot.
-1. Search for *Data Share*.
+An **incremental snapshot** copies only new or updated files, based on the last modified time of the files.
-1. Select **Data Share** and **Create**.
+Existing files that have the same name are overwritten during a snapshot. A file that is deleted from the source isn't deleted on the target. Empty subfolders at the source aren't copied over to the target.
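As a toy model of that selection rule (this is illustrative Python, not Azure Data Share's actual implementation), an incremental snapshot copies only the files whose last-modified time is newer than the previous snapshot:

```python
from datetime import datetime, timezone

# Hypothetical file listing: paths mapped to last-modified times.
last_snapshot = datetime(2022, 2, 1, tzinfo=timezone.utc)
source_files = {
    "sales/january.parquet": datetime(2022, 1, 15, tzinfo=timezone.utc),
    "sales/february.parquet": datetime(2022, 2, 2, tzinfo=timezone.utc),  # modified after last snapshot
}

# Only files changed since the last successful snapshot are copied.
to_copy = [path for path, modified in source_files.items() if modified > last_snapshot]
print(to_copy)  # ['sales/february.parquet']
```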
-1. Provide the basic details of your Azure Data Share resource:
+## Prerequisites to share data
- **Setting** | **Suggested value** | **Field description**
- ||||
- | Subscription | Your subscription | Select an Azure subscription for your data share account.|
- | Resource group | *test-resource-group* | Use an existing resource group or create a resource group. |
- | Location | *East US 2* | Select a region for your data share account.
- | Name | *datashareaccount* | Name your data share account. |
- | | |
+- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- [An Azure Data Share account](share-your-data-portal.md#create-a-data-share-account).
+- Your data recipient's Azure sign-in e-mail address (using their e-mail alias won't work).
+- If your source Azure data store is in a different Azure subscription than your Azure Data Share account, register the [Microsoft.DataShare resource provider](concepts-roles-permissions.md#resource-provider-registration) in the subscription where your source Azure data store is located.
-1. Select **Review + create** > **Create** to provision your data share account. Provisioning a new data share account typically takes about 2 minutes.
+### Prerequisites for the source storage account
-1. When the deployment finishes, select **Go to resource**.
+- An Azure Storage account. If you don't already have an account, [create one](../storage/common/storage-account-create.md).
+- Permission to write to the storage account. Write permission is in *Microsoft.Storage/storageAccounts/write*. It's part of the Contributor role.
+- Permission to add role assignment to the storage account. This permission is in *Microsoft.Authorization/role assignments/write*. It's part of the Owner role.
### Create a share
+1. Sign in to the [Azure portal](https://portal.azure.com/).
1. Go to your data share **Overview** page.

    :::image type="content" source="./media/share-receive-data.png" alt-text="Screenshot showing the data share overview.":::

1. Select **Start sharing your data**.
-1. Select **Create**.
+1. Select **Create**.
-1. Provide the details for your share. Specify a name, share type, description of share contents, and terms of use (optional).
+1. Provide the details for your share. Specify a name, share type, description of share contents, and terms of use (optional).
- ![Screenshot showing data share details.](./media/enter-share-details.png "Enter the data share details.")
+ ![Screenshot showing data share details.](./media/enter-share-details.png "Enter the data share details.")
1. Select **Continue**.
-1. To add datasets to your share, select **Add Datasets**.
+1. To add datasets to your share, select **Add Datasets**.
![Screenshot showing how to add datasets to your share.](./media/datasets.png "Datasets.")
-1. Select a dataset type to add. The list of dataset types depends on whether you selected snapshot-based sharing or in-place sharing in the previous step.
+1. Select a dataset type to add. The list of dataset types depends on whether you selected snapshot-based sharing or in-place sharing in the previous step.
- ![Screenshot showing where to select a dataset type.](./media/add-datasets.png "Add datasets.")
+ ![Screenshot showing where to select a dataset type.](./media/add-datasets.png "Add datasets.")
-1. Go to the object you want to share. Then select **Add Datasets**.
+1. Go to the object you want to share. Then select **Add Datasets**.
- ![Screenshot showing how to select an object to share.](./media/select-datasets.png "Select datasets.")
+ ![Screenshot showing how to select an object to share.](./media/select-datasets.png "Select datasets.")
-1. On the **Recipients** tab, add the email address of your data consumer by selecting **Add Recipient**.
+1. On the **Recipients** tab, add the email address of your data consumer by selecting **Add Recipient**.
- ![Screenshot showing how to add recipient email addresses.](./media/add-recipient.png "Add recipients.")
+ ![Screenshot showing how to add recipient email addresses.](./media/add-recipient.png "Add recipients.")
1. Select **Continue**.
-1. If you selected a snapshot share type, you can set up the snapshot schedule to update your data for the data consumer.
+1. If you selected a snapshot share type, you can set up the snapshot schedule to update your data for the data consumer.
- ![Screenshot showing the snapshot schedule settings.](./media/enable-snapshots.png "Enable snapshots.")
+ ![Screenshot showing the snapshot schedule settings.](./media/enable-snapshots.png "Enable snapshots.")
-1. Select a start time and recurrence interval.
+1. Select a start time and recurrence interval.
1. Select **Continue**. 1. On the **Review + Create** tab, review your package contents, settings, recipients, and synchronization settings. Then select **Create**.
-You've now created your Azure data share. The recipient of your data share can accept your invitation.
+You've now created your Azure data share. The recipient of your data share can accept your invitation.
-## Receive data
+## Prerequisites to receive data
-The following sections describe how to receive shared data.
-### Prerequisites to receive data
-Before you accept a data share invitation, make sure you have the following prerequisites:
+Before you accept a data share invitation, make sure you have the following prerequisites:
-* An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/).
-* An invitation from Azure. The email subject should be "Azure Data Share invitation from *\<yourdataprovider\@domain.com>*".
-* A registered [Microsoft.DataShare resource provider](concepts-roles-permissions.md#resource-provider-registration) in:
- * The Azure subscription where you'll create a Data Share resource.
- * The Azure subscription where your target Azure data stores are located.
+- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/).
+- An invitation from Azure. The email subject should be "Azure Data Share invitation from *\<yourdataprovider\@domain.com>*".
+- A registered [Microsoft.DataShare resource provider](concepts-roles-permissions.md#resource-provider-registration) in:
+ - The Azure subscription where you'll create a Data Share resource.
+ - The Azure subscription where your target Azure data stores are located.
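As a sketch of satisfying the resource provider prerequisite above with the Azure CLI (run it against each subscription listed; the query flag is optional):

```azurecli
az provider register --namespace Microsoft.DataShare

# Registration can take a few minutes; check its state with:
az provider show --namespace Microsoft.DataShare --query registrationState
```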
### Prerequisites for a target storage account
-* An Azure Storage account. If you don't already have one, [create an account](../storage/common/storage-account-create.md).
-* Permission to write to the storage account. This permission is in *Microsoft.Storage/storageAccounts/write*. It's part of the Contributor role.
-* Permission to add role assignment to the storage account. This assignment is in *Microsoft.Authorization/role assignments/write*. It's part of the Owner role.
-
-### Sign in to the Azure portal
+- An Azure Storage account. If you don't already have one, [create an account](../storage/common/storage-account-create.md).
+- Permission to write to the storage account. This permission is in *Microsoft.Storage/storageAccounts/write*. It's part of the Contributor role.
+- Permission to add role assignment to the storage account. This assignment is in *Microsoft.Authorization/role assignments/write*. It's part of the Owner role.
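As a sketch of granting the Contributor permission with the Azure CLI (the assignee and scope below are hypothetical placeholders; your account needs rights to create role assignments):

```azurecli
# Replace the assignee and the scope segments with your own values.
az role assignment create \
    --assignee "user@contoso.com" \
    --role "Contributor" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```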
-Sign in to the [Azure portal](https://portal.azure.com/).
+## Receive shared data
### Open an invitation
-You can open an invitation from email or directly from the Azure portal.
+You can open an invitation from email or directly from the [Azure portal](https://portal.azure.com/).
-1. To open an invitation from email, check your inbox for an invitation from your data provider. The invitation from Microsoft Azure is titled "Azure Data Share invitation from *\<yourdataprovider\@domain.com>*". Select **View invitation** to see your invitation in Azure.
+1. To open an invitation from email, check your inbox for an invitation from your data provider. The invitation from Microsoft Azure is titled "Azure Data Share invitation from *\<yourdataprovider\@domain.com>*". Select **View invitation** to see your invitation in Azure.
To open an invitation from the Azure portal, search for *Data Share invitations*. You see a list of Data Share invitations.
- If you are a guest user of a tenant, you will be asked to verify your email address for the tenant prior to viewing Data Share invitation for the first time. Once verified, it is valid for 12 months.
+ If you're a guest user of a tenant, you'll be asked to verify your email address for the tenant before viewing a Data Share invitation for the first time. Once verified, the verification is valid for 12 months.
- ![Screenshot showing the list of invitations in the Azure portal.](./media/invitations.png "List of invitations.")
+ ![Screenshot showing the list of invitations in the Azure portal.](./media/invitations.png "List of invitations.")
-1. Select the share you want to view.
+1. Select the share you want to view.
### Accept an invitation
-1. Review all of the fields, including the **Terms of use**. If you agree to the terms, select the check box.
- ![Screenshot showing the Terms of use area.](./media/terms-of-use.png "Terms of use.")
+1. Review all of the fields, including the **Terms of use**. If you agree to the terms, select the check box.
+
+ ![Screenshot showing the Terms of use area.](./media/terms-of-use.png "Terms of use.")
1. Under **Target Data Share account**, select the subscription and resource group where you'll deploy your Data Share. Then fill in the following fields:
- * In the **Data share account** field, select **Create new** if you don't have a Data Share account. Otherwise, select an existing Data Share account that will accept your data share.
+ - In the **Data share account** field, select **Create new** if you don't have a Data Share account. Otherwise, select an existing Data Share account that will accept your data share.
- * In the **Received share name** field, either leave the default that the data provider specified or specify a new name for the received share.
+ - In the **Received share name** field, either leave the default that the data provider specified or specify a new name for the received share.
-1. Select **Accept and configure**. A share subscription is created.
+1. Select **Accept and configure**. A share subscription is created.
- ![Screenshot showing where to accept the configuration options.](./media/accept-options.png "Accept options")
+ ![Screenshot showing where to accept the configuration options.](./media/accept-options.png "Accept options")
- The received share appears in your Data Share account.
+ The received share appears in your Data Share account.
- If you don't want to accept the invitation, select **Reject**.
+ If you don't want to accept the invitation, select **Reject**.
### Configure a received share
-Follow the steps in this section to configure a location to receive data.
-1. On the **Datasets** tab, select the check box next to the dataset where you want to assign a destination. Select **Map to target** to choose a target data store.
+1. On the **Datasets** tab, select the check box next to the dataset where you want to assign a destination. Select **Map to target** to choose a target data store.
- ![Screenshot showing how to map to a target.](./media/dataset-map-target.png "Map to target.")
+ ![Screenshot showing how to map to a target.](./media/dataset-map-target.png "Map to target.")
-1. Select a target data store for the data. Files in the target data store that have the same path and name as files in the received data will be overwritten.
+1. Select a target data store for the data. Files in the target data store that have the same path and name as files in the received data will be overwritten.
- ![Screenshot showing where to select a target storage account.](./media/map-target.png "Target storage.")
+ ![Screenshot showing where to select a target storage account.](./media/map-target.png "Target storage.")
-1. For snapshot-based sharing, if the data provider uses a snapshot schedule to regularly update the data, you can enable the schedule from the **Snapshot Schedule** tab. Select the box next to the snapshot schedule. Then select **Enable**. Note that the first scheduled snapshot will start within one minute of the schedule time and subsequent snapshots will start within seconds of the scheduled time.
+1. For snapshot-based sharing, if the data provider uses a snapshot schedule to regularly update the data, you can enable the schedule from the **Snapshot Schedule** tab. Select the box next to the snapshot schedule. Then select **Enable**. The first scheduled snapshot will start within one minute of the schedule time and subsequent snapshots will start within seconds of the scheduled time.
   ![Screenshot showing how to enable a snapshot schedule.](./media/enable-snapshot-schedule.png "Enable snapshot schedule.")

### Trigger a snapshot

The steps in this section apply only to snapshot-based sharing.
-1. You can trigger a snapshot from the **Details** tab. On the tab, select **Trigger snapshot**. You can choose to trigger a full snapshot or incremental snapshot of your data. If you're receiving data from your data provider for the first time, select **Full copy**. When a snapshot is executing, subsequent snapshots will not start until the previous one complete.
+1. You can trigger a snapshot from the **Details** tab. On the tab, select **Trigger snapshot**. You can choose to trigger a full snapshot or incremental snapshot of your data. If you're receiving data from your data provider for the first time, select **Full copy**. When a snapshot is executing, subsequent snapshots won't start until the previous one completes.
- ![Screenshot showing the Trigger snapshot selection.](./media/trigger-snapshot.png "Trigger snapshot.")
+ ![Screenshot showing the Trigger snapshot selection.](./media/trigger-snapshot.png "Trigger snapshot.")
-1. When the last run status is *successful*, go to the target data store to view the received data. Select **Datasets**, and then select the target path link.
+1. When the last run status is *successful*, go to the target data store to view the received data. Select **Datasets**, and then select the target path link.
- ![Screenshot showing a consumer dataset mapping.](./media/consumer-datasets.png "Consumer dataset mapping.")
+ ![Screenshot showing a consumer dataset mapping.](./media/consumer-datasets.png "Consumer dataset mapping.")
### View history
-You can view the history of your snapshots only in snapshot-based sharing. To view the history, open the **History** tab. Here you see the history of all of the snapshots that were generated in the past 30 days.
+
+You can view the history of your snapshots only in snapshot-based sharing. To view the history, open the **History** tab. Here you see the history of all of the snapshots that were generated in the past 30 days.
## Storage snapshot performance
-Storage snapshot performance is impacted by a number of factors in addition to number of files and size of the shared data. It is always recommended to conduct your own performance testing. Below are some example factors impacting performance.
-* Concurrent access to the source and target data stores.
-* Location of source and target data stores.
-* For incremental snapshot, the number of files in the shared dataset can impact the time it takes to find the list of files with last modified time after the last successful snapshot.
+Storage snapshot performance is impacted by many factors in addition to the number of files and the size of the shared data. We recommend that you conduct your own performance testing. The following are some example factors that affect performance.
+- Concurrent access to the source and target data stores.
+- Location of source and target data stores.
+- For incremental snapshot, the number of files in the shared dataset can impact the time it takes to find the list of files with last modified time after the last successful snapshot.
## Next steps
-You've learned how to share and receive data from a storage account by using the Azure Data Share service. To learn about sharing from other data sources, see [Supported data stores](supported-data-stores.md).
+
+You've learned how to share and receive data from a storage account by using the Azure Data Share service. To learn about sharing from other data sources, see the [supported data stores](supported-data-stores.md).
dms Migration Using Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/migration-using-azure-data-studio.md
The workflow of the migration process is illustrated below.
:::image type="content" source="media/migration-using-azure-data-studio/architecture-ads-sql-migration.png" alt-text="Diagram of architecture for database migration using Azure Data Studio with DMS":::
-1. **Source SQL Server**: SQL Server instance on-premises, private cloud, or any public cloud virtual machine. All editions of SQL Server 2008 and above are supported.
+1. **Source SQL Server**: SQL Server instance on-premises, private cloud, or any public cloud virtual machine. All editions of SQL Server 2016 and above are supported.
1. **Target Azure SQL**: Supported Azure SQL targets are Azure SQL Managed Instance or SQL Server on Azure Virtual Machines (registered with SQL IaaS Agent extension in [Full management mode](../azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management.md#management-modes)).
1. **Network File Share**: Server Message Block (SMB) network file share where backup files are stored for the database(s) to be migrated. Azure Storage blob containers and Azure Storage file share are also supported.
1. **Azure Data Studio**: Download and install the [Azure SQL Migration extension in Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension).
Azure Database Migration Service prerequisites that are common across all suppor
- Server roles
- Server audit
- Automating migrations with Azure Data Studio using PowerShell / CLI isn't supported.
+- SQL Server 2014 and below are not supported.
- Migrating to Azure SQL Database isn't supported.
- Azure storage accounts secured by specific firewall rules or configured with a private endpoint are not supported for migrations.
- You can't use an existing self-hosted integration runtime created from Azure Data Factory for database migrations with DMS. Initially, the self-hosted integration runtime should be created using the Azure SQL Migration extension in Azure Data Studio and can be reused for further database migrations.
dms Tutorial Sql Server Managed Instance Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-managed-instance-offline-ads.md
To complete this tutorial, you need to:
* Have an Azure account that is assigned to one of the built-in roles listed below:
  - Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share).
  - Owner or Contributor role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
- - Owner or Contributor role for the Azure subscription.
+ - Owner or Contributor role for the Azure subscription (required if creating a new DMS service).
* Create a SQL Managed Instance by following the details in the article [Create a SQL Managed Instance in the Azure portal](../azure-sql/managed-instance/instance-create-quickstart.md).
* Ensure that the logins used to connect the source SQL Server are members of the *sysadmin* server role or have `CONTROL SERVER` permission.
* Provide an SMB network share, Azure storage account file share, or Azure storage account blob container that contains your full database backup files and subsequent transaction log backup files, which Azure Database Migration Service can use for database migration.
dms Tutorial Sql Server Managed Instance Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-managed-instance-online-ads.md
To complete this tutorial, you need to:
* Have an Azure account that is assigned to one of the built-in roles listed below:
  - Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share).
  - Owner or Contributor role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
- - Owner or Contributor role for the Azure subscription.
+ - Owner or Contributor role for the Azure subscription (required if creating a new DMS service).
* Create a target [Azure SQL Managed Instance](../azure-sql/managed-instance/instance-create-quickstart.md).
* Ensure that the logins used to connect the source SQL Server are members of the *sysadmin* server role or have `CONTROL SERVER` permission.
* Use one of the following storage options for the full database and transaction log backup files:
dms Tutorial Sql Server To Virtual Machine Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-to-virtual-machine-offline-ads.md
Last updated 10/05/2021
# Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machine offline using Azure Data Studio with DMS (Preview)
-Use the Azure SQL Migration extension in Azure Data Studio to migrate the databases from a SQL Server instance to a [SQL Server on Azure Virtual Machine](../azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md) with minimal downtime. For methods that may require some manual effort, see the article [SQL Server instance migration to SQL Server on Azure Virtual Machine](../azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview.md).
+Use the Azure SQL Migration extension in Azure Data Studio to migrate the databases from a SQL Server instance (SQL Server 2016 and above) to a [SQL Server on Azure Virtual Machine](../azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md) with minimal downtime. For methods that may require some manual effort, see the article [SQL Server instance migration to SQL Server on Azure Virtual Machine](../azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview.md).
In this tutorial, you migrate the **Adventureworks** database from an on-premises instance of SQL Server to a SQL Server on Azure Virtual Machine with the offline migration method by using Azure Data Studio with Azure Database Migration Service.
dms Tutorial Sql Server To Virtual Machine Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-to-virtual-machine-online-ads.md
Last updated 10/05/2021
# Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machine online using Azure Data Studio with DMS (Preview)
-Use the Azure SQL Migration extension in Azure Data Studio to migrate the databases from a SQL Server instance to a [SQL Server on Azure Virtual Machine](../azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md) with minimal downtime. For methods that may require some manual effort, see the article [SQL Server instance migration to SQL Server on Azure Virtual Machine](../azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview.md).
+Use the Azure SQL Migration extension in Azure Data Studio to migrate the databases from a SQL Server instance (SQL Server 2016 and above) to a [SQL Server on Azure Virtual Machine](../azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md) with minimal downtime. For methods that may require some manual effort, see the article [SQL Server instance migration to SQL Server on Azure Virtual Machine](../azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview.md).
In this tutorial, you migrate the **Adventureworks** database from an on-premises instance of SQL Server to a SQL Server on Azure Virtual Machine with minimal downtime by using Azure Data Studio with Azure Database Migration Service.
dns Private Dns Autoregistration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dns/private-dns-autoregistration.md
# What is the auto registration feature in Azure DNS private zones?
-The Azure DNS private zones auto registration feature manages DNS records for virtual machines deployed in a virtual network. When you [link a virtual network](./private-dns-virtual-network-links.md) with a private DNS zone with this setting enabled. A DNS record gets created for each virtual machine deployed in the virtual network.
+The Azure DNS private zones auto registration feature manages DNS records for virtual machines deployed in a virtual network. When you [link a virtual network](./private-dns-virtual-network-links.md) with a private DNS zone with this setting enabled, a DNS record gets created for each virtual machine deployed in the virtual network.
For each virtual machine, an A record and a PTR record are created. DNS records for newly deployed virtual machines are also automatically created in the linked private DNS zone. When a virtual machine gets deleted, any associated DNS records also get deleted from the private DNS zone.
To enable auto registration, select the checkbox for "Enable auto registration"
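Besides the portal checkbox, here's a minimal Azure CLI sketch (all resource names below are hypothetical placeholders) that links a virtual network to a private zone with auto registration enabled:

```azurecli
# --registration-enabled true turns on auto registration for VMs in the network.
az network private-dns link vnet create \
    --resource-group myResourceGroup \
    --zone-name private.contoso.com \
    --name myVNetLink \
    --virtual-network myVNet \
    --registration-enabled true
```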
* Read about some common [private zone scenarios](./private-dns-scenarios.md) that can be realized with private zones in Azure DNS.
-* For common questions and answers about private zones in Azure DNS, including specific behavior you can expect for certain kinds of operations, see [Private DNS FAQ](./dns-faq-private.yml).
+* For common questions and answers about private zones in Azure DNS, including specific behavior you can expect for certain kinds of operations, see [Private DNS FAQ](./dns-faq-private.yml).
event-hubs Event Hubs Capture Enable Through Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-capture-enable-through-portal.md
You can configure Capture at the event hub creation time using the [Azure portal
For more information, see the [Event Hubs Capture overview][capture-overview]. > [!IMPORTANT]
-> The destination storage (Azure Storage or Azure Data Lake Storage) account must be in the same subscription as the event hub.
+> - The destination storage (Azure Storage or Azure Data Lake Storage) account must be in the same subscription as the event hub.
+> - Event Hubs doesn't support capturing events in a **premium** storage account.
+ ## Capture data to Azure Storage
When you create an event hub, you can enable Capture by clicking the **On** butt
The default time window is 5 minutes. The minimum value is 1 minute and the maximum is 15 minutes. The **Size** window has a range of 10-500 MB.
+You can enable or disable emitting empty files when no events occur during the Capture window.
+ ![Time window for capture][1]
-> [!NOTE]
-> You can enable or disable emitting empty files when no events occur during the Capture window.
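The portal isn't the only way to set these options. As a sketch (hypothetical names; the interval is in seconds and the size limit in bytes), you can also enable Capture on an existing event hub with the Azure CLI:

```azurecli
az eventhubs eventhub update \
    --resource-group myResourceGroup \
    --namespace-name mynamespace \
    --name myeventhub \
    --enable-capture true \
    --capture-interval 300 \
    --capture-size-limit 314572800 \
    --destination-name EventHubArchive.AzureBlockBlob \
    --storage-account mystorageaccount \
    --blob-container mycontainer \
    --skip-empty-archives true   # suppress empty files when no events occur
```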
## Capture data to Azure Data Lake Storage Gen 2
event-hubs Event Hubs Capture Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-capture-overview.md
Azure Event Hubs enables you to automatically capture the streaming data in Even
Event Hubs Capture enables you to process real-time and batch-based pipelines on the same stream. This means you can build solutions that grow with your needs over time. Whether you're building batch-based systems today with an eye towards future real-time processing, or you want to add an efficient cold path to an existing real-time solution, Event Hubs Capture makes working with streaming data easier. > [!IMPORTANT]
-> The destination storage (Azure Storage or Azure Data Lake Storage) account must be in the same subscription as the event hub.
+> - The destination storage (Azure Storage or Azure Data Lake Storage) account must be in the same subscription as the event hub.
+> - Event Hubs doesn't support capturing events in a **premium** storage account.
## How Event Hubs Capture works
event-hubs Event Hubs Premium Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-premium-overview.md
Title: Overview of Event Hubs Premium description: This article provides an overview of Azure Event Hubs Premium, which offers multi-tenant deployments of Event Hubs for high-end streaming needs. Previously updated : 10/20/2021 Last updated : 02/02/2022 # Overview of Event Hubs Premium
+Event Hubs Premium (the premium tier) is designed for high-end streaming scenarios that require elastic, superior performance with predictable latency. The performance is achieved by providing reserved compute, memory, and storage resources, which minimize cross-tenant interference in a managed multi-tenant PaaS environment.
-The Event Hubs Premium tier is designed for high-end streaming scenarios that require elastic, superior performance with predictable latency. The performance is achieved by providing reserved compute, memory, and storage resources, which minimize cross-tenant interference in a managed multi-tenant PaaS environment.
+It replicates events to three replicas, distributed across Azure availability zones where available. All replicas are synchronously flushed to the underlying fast storage before the send operation is reported as completed. Events that aren't read immediately or that need to be re-read later can be retained up to 90 days, transparently held in an availability-zone redundant storage tier.
-Event Hubs Premium introduces a new, two-tier, native-code log engine that provides far more predictable and much lower send and passthrough latencies than the prior generation, without any durability compromises. Event Hubs Premium replicates every event to three replicas, distributed across Azure availability zones where available, and all replicas are synchronously flushed to the underlying fast storage before the send operation is reported as completed. Events that are not read immediately or that need to be re-read later can be retained up to 90 days, transparently held in an availability-zone redundant storage tier. Events in both the fast storage and retention storage tiers are encrypted; in Event Hubs Premium, the encryption keys can be supplied by you.
+In addition to these storage-related features and all capabilities and protocol support of the standard tier, the isolation model of the premium tier enables features like [dynamic partition scale-up](dynamically-add-partitions.md). You also get far more generous [quota allocations](event-hubs-quotas.md). Event Hubs Capture is included at no extra cost.
-In addition to these storage-related features and all capabilities and protocol support of the Event Hubs Standard offering, the isolation model of Event Hubs Premium enables new features like dynamic partition scale-up and yet-to-be-added future capabilities. You also get far more generous quota allocations. Event Hubs Capture is included at no extra cost.
-
-The Premium offering is billed by [Processing Units (PUs)](event-hubs-scalability.md#processing-units) which correspond to a share of isolated resources (CPU, Memory, and Storage) in the underlying infrastructure.
-
-In comparison to Dedicated offering, since Event Hubs Premium provides isolation inside a very large multi-tenant environment that can shift resources quickly, it can scale far more elastically and quicker and PUs can be dynamically adjusted. Therefore, Event Hubs Premium will often be a more cost effective option for mid-range (<120MB/sec) throughput requirements, especially with changing loads throughout the day or week, when compared to Event Hubs Dedicated.
> [!NOTE]
-> Please note that Event Hubs Premium will only support TLS 1.2 or greater .
+> Event Hubs Premium supports TLS 1.2 or greater.
-For the extra robustness gained by availability-zone support, the minimal deployment scale for Event Hubs Dedicated is 8 Capacity Units (CU), but you will have availability zone support in Event Hubs Premium from the first PU in all AZ regions.
+## Why premium?
+The premium tier offers three compelling benefits for customers who require better isolation in a multitenant environment with low latency and high throughput data ingestion needs.
-You can purchase 1, 2, 4, 8 and 16 Processing Units for each namespace. Since Event Hubs Premium is a capacity-based offering, the achievable throughput is not set by a throttle as it is in Event Hubs Standard, but depends on the work you ask Event Hubs to do, similar to Event Hubs Dedicated. The effective ingest and stream throughput per PU will depend on various factors, including:
+### Superior performance with the new two-tier storage engine
+The premium tier uses a new two-tier log storage engine that drastically improves the data ingress performance with substantially reduced overall latency without compromising the durability guarantees.
-* Number of producers and consumers
-* Payload size
-* Partition count
-* Egress request rate
-* Usage of Event Hubs Capture, Schema Registry, and other advanced features
+### Better isolation and predictability
+The premium tier offers an isolated compute and memory capacity to achieve more predictable latency and far reduced *noisy neighbor* impact risk in a multi-tenant deployment.
-Refer the [comparison between Event Hubs SKUs](event-hubs-quotas.md) for more details.
+It implements a *cluster in cluster* model in its multitenant clusters to provide predictability and performance while retaining all the benefits of a managed multitenant PaaS environment.
+### Cost savings and scalability
+As the premium tier is a multitenant offering, it can dynamically scale more flexibly and very quickly. Capacity is allocated in processing units (PUs) that allocate isolated pods of CPU/memory inside the cluster. The number of those pods can be scaled up/down per namespace. Therefore, the premium tier is a low-cost option for messaging scenarios with the overall throughput range that is less than 120 MB/s but higher than what you can achieve with the standard SKU.
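As a sketch of that elasticity (hypothetical names; assuming your Azure CLI version supports the Premium SKU), you can create a premium namespace with a given PU count and adjust `--capacity` later with `az eventhubs namespace update`:

```azurecli
# --capacity is the number of processing units (1, 2, 4, 8, or 16).
az eventhubs namespace create \
    --resource-group myResourceGroup \
    --name mypremiumnamespace \
    --location eastus \
    --sku Premium \
    --capacity 2
```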
-> [!NOTE]
-> All Event Hubs namespaces are enabled for the Apache Kafka RPC protocol by default can be used by your existing Kafka based applications. Having Kafka enabled on your cluster does not affect your non-Kafka use cases; there is no option or need to disable Kafka on a cluster.
+## Premium vs. dedicated tiers
+In comparison to the dedicated offering, the premium tier provides the following benefits:
-## Why Premium?
+- Isolation inside a very large multi-tenant environment that can shift resources quickly
+- The ability to scale far more elastically and quickly
+- PUs can be dynamically adjusted
-Premium Event Hubs offers three compelling benefits for customers who require better isolation in a multitenant environment with low latency and high throughput data ingestion needs.
+Therefore, the premium tier is often a more cost effective option for mid-range (<120MB/sec) throughput requirements, especially with changing loads throughout the day or week, when compared to the dedicated tier.
-#### Superior performance with the new two-tier storage engine
+For the extra robustness gained by availability-zone support, the minimal deployment scale for the dedicated tier is 8 capacity units (CU), but you'll have availability zone support in the premium tier from the first PU in all availability zone regions.
-Event Hubs premium uses a new two-tier log storage engine that drastically improves the data ingress performance with substantially reduced overall latency and latency jitter without compromising the durability guarantees.
+You can purchase 1, 2, 4, 8, and 16 processing units for each namespace. As the premium tier is a capacity-based offering, the achievable throughput isn't set by a throttle as it is in the standard tier, but depends on the work you ask Event Hubs to do, similar to the dedicated tier. The effective ingest and stream throughput per PU will depend on various factors, including:
-#### Better isolation and predictability
+* Number of producers and consumers
+* Payload size
+* Partition count
+* Egress request rate
+* Usage of Event Hubs Capture, Schema Registry, and other advanced features
-Event Hubs premium offers an isolated compute and memory capacity to achieve more predictable latency and far reduced *noisy neighbor* impact risk in a multi-tenant deployment.
+For more information, see [comparison between Event Hubs SKUs](event-hubs-quotas.md).
-Event Hubs premium implements a *Cluster in Cluster* model in its multitenant clusters to provide predictability and performance while retaining all the benefits of a managed multitenant PaaS environment.
+## Encryption of events
+Azure Event Hubs provides encryption of data at rest with Azure Storage Service Encryption (Azure SSE). The Event Hubs service uses Azure Storage to store the data. All the data that's stored with Azure Storage is encrypted using Microsoft-managed keys. If you use your own key (also referred to as Bring Your Own Key (BYOK) or customer-managed key), the data is still encrypted using the Microsoft-managed key, but in addition the Microsoft-managed key will be encrypted using the customer-managed key. This feature enables you to create, rotate, disable, and revoke access to customer-managed keys that are used for encrypting Microsoft-managed keys. Enabling the BYOK feature is a one time setup process on your namespace. For more information, see [Configure customer-managed keys for encrypting Azure Event Hubs data at rest](configure-customer-managed-key.md).
+> [!NOTE]
+> All Event Hubs namespaces are enabled for the Apache Kafka RPC protocol by default and can be used by your existing Kafka-based applications. Having Kafka enabled on your cluster does not affect your non-Kafka use cases; there is no option or need to disable Kafka on a cluster.
-#### Cost savings and scalability
-As Event Hubs Premium is a multitenant offering, it can dynamically scale more flexibly and very quickly. Capacity is allocated in Processing Units that allocate isolated pods of CPU/Memory inside the cluster. The number of those pods can be scaled up/down per namespace. Therefore, Event Hubs Premium is a low-cost option for messaging scenarios with the overall throughput range that is less than 120 MB/s but higher than what you can achieve with the standard SKU.
## Quotas and limits

The premium tier offers all the features of the standard plan, but with better performance, isolation, and more generous quotas. For more quotas and limits, see [Event Hubs quotas and limits](event-hubs-quotas.md)
+## Pricing
+
+The Premium offering is billed by [Processing Units (PUs)](event-hubs-scalability.md#processing-units) which correspond to a share of isolated resources (CPU, Memory, and Storage) in the underlying infrastructure.
## FAQs
For more quotas and limits, see [Event Hubs quotas and limits](event-hubs-quotas
## Next steps
-You can start using Event Hubs Premium via [Azure portal](https://portal.azure.com/#create/Microsoft.EventHub). Refer [Event Hubs Premium pricing](https://azure.microsoft.com/pricing/details/event-hubs/) for more details on pricing and [Event Hubs FAQ](event-hubs-faq.yml) to find answers to some frequently asked questions about Event Hubs.
+See the following articles:
+
+- [Create an event hub](event-hubs-create.md). Select **Premium** for **Pricing tier**.
+- [Event Hubs Premium pricing](https://azure.microsoft.com/pricing/details/event-hubs/) for more details on pricing
+- [Event Hubs FAQ](event-hubs-faq.yml) to find answers to some frequently asked questions about Event Hubs.
expressroute Designing For Disaster Recovery With Expressroute Privatepeering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/designing-for-disaster-recovery-with-expressroute-privatepeering.md
The Scenario 2 is illustrated in the following diagram. In the diagram, green li
[![9]][9]
-The solution is illustrated in the following diagram. As illustrated, you can architect the scenario either using more specific route (Option 1) or AS-path prepend (Option 2) to influence VNet path selection. To influence on-premises network route selection for Azure bound traffic, you need configure the interconnection between the on-premises location as less preferable. Howe you configure the interconnection link as preferable depends on the routing protocol used within the on-premises network. You can use local preference with iBGP or metric with IGP (OSPF or IS-IS).
+The solution is illustrated in the following diagram. As illustrated, you can architect the scenario either using more specific routes (Option 1) or AS-path prepend (Option 2) to influence VNet path selection. To influence on-premises network route selection for Azure bound traffic, you need to configure the interconnection link between the on-premises locations as less preferable. How you configure the interconnection link as less preferable depends on the routing protocol used within the on-premises network. You can use local preference with iBGP or metric with IGP (OSPF or IS-IS).
[![10]][10]
frontdoor Create Front Door Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/create-front-door-cli.md
az group delete \
az group delete \
  --name myRGFDEast
```
+
+## Next steps
+
+Advance to the next article to learn how to add a custom domain to your Front Door.
+> [!div class="nextstepaction"]
+> [Add a custom domain](how-to-add-custom-domain.md)
hdinsight Hdinsight Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-component-versioning.md
Basic support does not include the following:
Microsoft does not encourage creating analytics pipelines or solutions on clusters in basic support. We recommend migrating existing clusters to the most recent fully supported version.
+## HDInsight 3.6 to 4.0 Migration Guides
+- [Migrate Apache Spark 2.1 and 2.2 workloads to 2.3 and 2.4](spark/migrate-versions.md).
+- [Migrate Azure HDInsight 3.6 Hive workloads to HDInsight 4.0](interactive-query/apache-hive-migrate-workloads.md).
+- [Migrate Apache Kafka workloads to Azure HDInsight 4.0](kafk).
+- [Migrate an Apache HBase cluster to a new version](hbase/apache-hbase-migrate-new-version.md).
+ ## Release notes For additional release notes on the latest versions of HDInsight, see [HDInsight release notes](hdinsight-release-notes.md).
hdinsight Hdinsight Ubuntu 1804 Qa https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-ubuntu-1804-qa.md
This article provides more details for HDInsight Ubuntu 18.04 OS update and pote
HDInsight has started rolling out the new HDInsight 4.0 cluster image running on Ubuntu 18.04 in May 2021. Newly created HDInsight 4.0 clusters will run on Ubuntu 18.04 by default once available. Existing clusters on Ubuntu 16.04 will run as is with full support.
-HDInsight 3.6 will continue to run on Ubuntu 16.04. It will reach the end of standard support by 30 June 2021, and will change to Basic support starting on 1 July 2021. For more information about dates and support options, see [Azure HDInsight versions](./hdinsight-component-versioning.md). Ubuntu 18.04 won't be supported for HDInsight 3.6. If you'd like to use Ubuntu 18.04, you'll need to migrate your clusters to HDInsight 4.0.
+HDInsight 3.6 will continue to run on Ubuntu 16.04. It will reach the end of standard support by 30 June 2021, and will change to Basic support starting on 1 July 2021. For more information about dates and support options, see [Azure HDInsight versions](./hdinsight-component-versioning.md). Ubuntu 18.04 won't be supported for HDInsight 3.6. If you'd like to use Ubuntu 18.04, you'll need to migrate your clusters to HDInsight 4.0. Spark 3.0 with HDInsight 4.0 is available only on Ubuntu 16.04. Spark 3.1 with HDInsight 4.0 will be shipping soon and will be available on Ubuntu 18.04.
Drop and recreate your clusters if you'd like to move existing clusters to Ubuntu 18.04. Plan to create or recreate your cluster.
healthcare-apis Fhir Features Supported https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/fhir-features-supported.md
Below is a summary of the supported RESTful capabilities. For more information o
| update with optimistic locking | Yes | Yes | |
| update (conditional) | Yes | Yes | |
| patch | Yes | Yes | Support for [JSON Patch](https://www.hl7.org/fhir/http.html#patch) only. We have included a workaround to use JSON Patch in a bundle in [this PR](https://github.com/microsoft/fhir-server/pull/2143).|
-| patch (conditional) | Yes | Yes |
+| patch (conditional) | Yes | Yes | Support for [JSON Patch](https://www.hl7.org/fhir/http.html#patch) only. We have included a workaround to use JSON Patch in a bundle in [this PR](https://github.com/microsoft/fhir-server/pull/2143).|
| history | Yes | Yes | |
| create | Yes | Yes | Support both POST/PUT |
| create (conditional) | Yes | Yes | Issue [#1382](https://github.com/microsoft/fhir-server/issues/1382) |
All of the supported operations that extend the REST API are listed below.
| Search parameter type | Azure API for FHIR | FHIR service in Healthcare APIs | Comment |
|--|--|--|--|
-| [$export](../../healthcare-apis/data-transformation/export-data.md) (whole system) | Yes | Yes | Supports system, group, and patient. |
+| [$export](../../healthcare-apis/data-transformation/export-data.md) | Yes | Yes | Supports system, group, and patient. |
| [$convert-data](convert-data.md) | Yes | Yes | |
| [$validate](validation-against-profiles.md) | Yes | Yes | |
| [$member-match](tutorial-member-match.md) | Yes | Yes | |
Currently, the allowed actions for a given role are applied *globally* on the AP
## Service limits
-* [**Request Units (RUs)**](../../cosmos-db/concepts-limits.md) - You can configure up to 10,000 RUs in the portal for Azure API for FHIR. You will need a minimum of 400 RUs or 40 RUs/GB, whichever is larger. If you need more than 10,000 RUs, you can put in a support ticket to have the RUs increased. The maximum available is 1,000,000.
+* [**Request Units (RUs)**](../../cosmos-db/concepts-limits.md) - You can configure up to 10,000 RUs in the portal for Azure API for FHIR. You will need a minimum of 400 RUs or 40 RUs/GB, whichever is larger. If you need more than 10,000 RUs, you can put in a support ticket to have the RUs increased. The maximum available is 1,000,000. In addition, we support [autoscaling of RUs](autoscale-azure-api-fhir.md).
* **Bundle size** - Each bundle is limited to 500 items.
* **Data size** - Data/Documents must each be slightly less than 2 MB.
-* **Subscription Limit** - By default, each subscription is limited to a maximum of 10 FHIR Server Instances. If you need more instances per subscription, open a support ticket and provide details about your needs.
-
-* **Concurrent connections and Instances** - By default, you have 15 concurrent connections on two instances in the cluster (for a total of 30 concurrent requests). If you need more concurrent requests, open a support ticket and provide details about your needs.
+* **Subscription Limit** - By default, each subscription is limited to a maximum of 10 FHIR server instances. If you need more instances per subscription, open a support ticket and provide details about your needs.
## Next steps
healthcare-apis How To Do Custom Search https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/how-to-do-custom-search.md
To update a search parameter, use `PUT` to create a new version of the search pa
> If you don't know the ID for your search parameter, you can search for it. Using `GET {{FHIR_URL}}/SearchParameter` will return all custom search parameters, and you can scroll through the search parameter to find the search parameter you need. You could also limit the search by name. With the example below, you could search for name using `USCoreRace: GET {{FHIR_URL}}/SearchParameter?name=USCoreRace`.

```rest
-PUT {{FHIR_ULR}}/SearchParameter/{SearchParameter ID}
+PUT {{FHIR_URL}}/SearchParameter/{SearchParameter ID}
{ "resourceType" : "SearchParameter",
healthcare-apis Fhir Service Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/fhir-service-autoscale.md
Previously updated : 11/16/2021 Last updated : 2/2/2022
The autoscale feature for the FHIR service is designed to provide optimized serv
## How does FHIR service autoscale work?
-When transaction workloads are high, the autoscale feature increases computing resources automatically. When transaction workloads are low, it decreases computing resources accordingly.
+The autoscale feature adjusts computing resources automatically to optimize the overall service scalability. It requires no action from customers.
-The autoscale feature adjusts computing resources automatically to optimize the overall service scalability. Whether you are performing read requests that include simple queries like getting patient information using a patient ID, and advanced queries like getting all `DiagnosticReport` resources for patients whose name is "Sarah", or you're creating or updating FHIR resources, the autoscale feature manages the dynamics and complexity of resource allocation to ensure high scalability.
-
-The autoscale feature is part of the managed service and requires no action from customers. However, customers are encouraged to share their feedback to help improve the feature. Customers can also raise a support ticket to address any scalability issue they may have experienced.
+When transaction workloads are high, the autoscale feature increases computing resources automatically. When transaction workloads are low, it decreases computing resources accordingly. Whether you are performing read requests that include simple queries like getting patient information using a patient ID, and advanced queries like getting all DiagnosticReport resources for patients whose name is "Sarah", or you're creating or updating FHIR resources, the autoscale feature manages the dynamics and complexity of resource allocation to ensure high scalability.
### What is the cost of the FHIR service autoscale?
healthcare-apis How To Do Custom Search https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/how-to-do-custom-search.md
To update a search parameter, use `PUT` to create a new version of the search pa
> If you don't know the ID for your search parameter, you can search for it. Using `GET {{FHIR_URL}}/SearchParameter` will return all custom search parameters, and you can scroll through the search parameter to find the search parameter you need. You could also limit the search by name. With the example below, you could search for name using `USCoreRace: GET {{FHIR_URL}}/SearchParameter?name=USCoreRace`.

```rest
-PUT {{FHIR_ULR}}/SearchParameter/{SearchParameter ID}
+PUT {{FHIR_URL}}/SearchParameter/{SearchParameter ID}
{ "resourceType" : "SearchParameter",
healthcare-apis Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/workspace-overview.md
Previously updated : 07/12/2021 Last updated : 2/2/2022
One or more workspaces can be created in a resource group from the Azure portal,
A workspace can't be deleted unless all child service instances within the workspace have been deleted. This feature helps prevent any accidental deletion of service instances. However, when a workspace resource group is deleted, all the workspaces and child service instances within the workspace resource group get deleted.
+Workspace names can be re-used in the same Azure subscription, but not in a different Azure subscription, after deletion. However, when the move operation is supported and enabled, a workspace and its child resources can be moved from one subscription to another subscription if certain requirements are met. One requirement is that the two subscriptions must be part of the same Azure Active Directory (Azure AD) tenant. Another requirement is that the Private Link configuration is not enabled. Names for FHIR services, DICOM services, and IoT connectors can be re-used in the same or a different subscription after deletion if there is no collision with the URLs of any existing services.
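As a sketch of such a move with the Azure CLI (the IDs below are hypothetical placeholders; it assumes the move operation is supported and enabled for your configuration):

```azurecli
# Move a workspace resource to another resource group/subscription in the same tenant.
az resource move \
    --destination-subscription-id <target-subscription-id> \
    --destination-group <target-resource-group> \
    --ids "/subscriptions/<source-subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.HealthcareApis/workspaces/<workspace-name>"
```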
+
## Workspace and Azure region selection

When you create a workspace, it must be configured for an Azure region, which can be the same as or different from the resource group. The region cannot be changed after the workspace is created. Within each workspace, all Healthcare APIs services (FHIR service, DICOM service, and IoT Connector service) must be created in the region of the workspace and cannot be moved to a different workspace.
to. For more information, see [Azure RBAC](../role-based-access-control/index.ym
To start working with the Azure Healthcare APIs, follow the 5-minute quick start to deploying a workspace. >[!div class="nextstepaction"]
->[Deploy workspace in the Azure portal](healthcare-apis-quickstart.md)
+>[Deploy workspace in the Azure portal](healthcare-apis-quickstart.md)
iot-central Concepts Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-quotas-limits.md
There are various quotas and limits that apply to IoT Central applications. IoT Central applications internally use multiple Azure services such as IoT Hub and the Device Provisioning Service (DPS), and these services also have quotas and limits. Where relevant, quotas and limits in the underlying services are called out in this article.

> [!NOTE]
-> The quotas and limits described in this article apply to the new multiple IoT hub architecture. Currently, there are a few legacy IoT Central applications that were created before April 2021 that haven't yet been migrated to the multiple IoT hub architecture. Use the `az iot central device manual-failover` command to check if your application still uses a single IoT hub.
+> The quotas and limits described in this article apply to the new multiple IoT hub architecture. Currently, there are a few legacy IoT Central applications that were created before April 2021 that haven't yet been migrated to the multiple IoT hub architecture. Use the `az iot central device manual-failover` command in the [Azure CLI](/cli/azure/?view=azure-cli-latest&preserve-view=true) to check if your application still uses a single IoT hub. This triggers an IoT hub failover if your application uses the multiple IoT hub architecture. It returns an error if your application uses the older architecture.
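For example, you could run the check like this (hypothetical IDs; the command requires the Azure IoT CLI extension):

```azurecli
az iot central device manual-failover \
    --app-id <iot-central-app-id> \
    --device-id <device-id>
```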
## Devices
There are various quotas and limits that apply to IoT Central applications. IoT
| Item | Quota or limit | Notes |
| - | -- | -- |
| Number of device templates in an application | 1,000 | For performance reasons, you shouldn't exceed this limit. |
-| Number of telemetry capabilities in a device template | 300 | For performance reasons, you shouldn't exceed this limit. |
+| Number of capabilities in a device template | 300 | For performance reasons, you shouldn't exceed this limit. |
## Device groups
iot-central Howto Create Organizations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-create-organizations.md
After you've created your organization hierarchy you can use organizations in ar
## Default organization
-You can set an organization as the default organization to use in your application. The default organization becomes the default option whenever you choose an organization, such as when you add a new user to your IoT Central application.
+> [!TIP]
+> This is a personal preference that only applies to you.
+
+You can set an organization as the default organization to use in your application as a personal preference. The default organization becomes the default option whenever you choose an organization, such as when you add a new user or add a device to your IoT Central application.
To set the default organization, select **Settings** on the top menu bar: :::image type="content" source="media/howto-create-organization/set-default-organization.png" alt-text="Screenshot that shows how to set your default organization.":::
-> [!TIP]
-> This is a personal preference that only applies to you.
## Add organizations to an existing application
When you start adding organizations, all existing devices, users, and experience
## Limits
-To following limits apply to organizations:
+The following limits apply to organizations:
- The hierarchy can be no more than five levels deep. - The total number of organization cannot be more than 200. Each node in the hierarchy counts as an organization. + ## Next steps Now that you've learned how to manage Azure IoT Central organizations, the suggested next step is learn how to [Export IoT data to cloud destinations using data export](howto-export-data.md).
iot-central Howto Transform Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-transform-data.md
In this scenario, an IoT Edge module transforms the data from downstream devices
1. **Verify**: Send data from a downstream device to the gateway and verify the transformed device data reaches your IoT Central application.
-In the example described in the following sections, the downstream device sends CSV data in the following format to the IoT Edge gateway device:
+In the example described in the following sections, the downstream device sends JSON data in the following format to the IoT Edge gateway device:
-```csv
-"<temperature >, <pressure>, <humidity>"
+```json
+{
+ "device": {
+ "deviceId": "<downstream-deviceid>"
+ },
+ "measurements": {
+ "temp": <temperature>,
+ "pressure": <pressure>,
+ "humidity": <humidity>,
+ "scale": "celsius",
+ }
+}
```
-You want to use an IoT Edge module to transform the data to the following JSON format before it's sent to IoT Central:
+You want to use an IoT Edge module to transform the data and convert the temperature value from `Celsius` to `Fahrenheit` before sending it to IoT Central:
```json
{
    "device": {
        "deviceId": "<downstream-deviceid>"
    },
    "measurements": {
        "temp": <temperature>,
        "pressure": <pressure>,
        "humidity": <humidity>,
+       "scale": "fahrenheit"
    }
}
```
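The actual conversion is implemented inside the module you build later; purely as an illustration of the transformation (not the module's code), this `jq` one-liner applies the same Celsius-to-Fahrenheit conversion to a sample message:

```bash
# Converts temp 21.5 (celsius) to 70.7 (fahrenheit) and relabels the scale.
echo '{"device":{"deviceId":"downstream-01"},"measurements":{"temp":21.5,"pressure":101.2,"humidity":45.0,"scale":"celsius"}}' |
  jq '.measurements.temp = (.measurements.temp * 9 / 5 + 32) | .measurements.scale = "fahrenheit"'
```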
To create a container registry:
1. Open the [Azure Cloud Shell](https://shell.azure.com/) and sign in to your Azure subscription.
+1. Select the **Bash** shell.
+ 1. Run the following commands to create an Azure container registry: ```azurecli
To create a container registry:
az acr credential show -n $REGISTRY_NAME ```
- Make a note of the `username` and `password` values, you use them later.
+ Make a note of the `username` and `password` values, you use them later. You only need one of the passwords shown in the command output.
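Optionally, a small sketch for capturing those values into shell variables for later use (the variable names are hypothetical):

```azurecli
# Query the registry username and the first password from the credential output.
ACR_USER=$(az acr credential show -n $REGISTRY_NAME --query username -o tsv)
ACR_PASS=$(az acr credential show -n $REGISTRY_NAME --query "passwords[0].value" -o tsv)
```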
To build the custom module in the [Azure Cloud Shell](https://shell.azure.com/):
-1. In the [Azure Cloud Shell](https://shell.azure.com/), navigate to a suitable folder.
+1. In the [Azure Cloud Shell](https://shell.azure.com/), create a new folder and navigate to it by running the following commands:
+
+ ```azurecli
+ mkdir yournewfolder
+ cd yournewfolder
+ ```
+ 1. To clone the GitHub repository that contains the module source code, run the following command: ```azurecli
To register a gateway device in IoT Central:
1. In your IoT Central application, navigate to the **Devices** page.
-1. Select **IoT Edge gateway device** and select **Create a device**. Enter *IoT Edge gateway device* as the device name, enter *gateway-01* as the device ID, make sure **IoT Edge gateway device** is selected as the device template. Select **Create**.
+1. Select **IoT Edge gateway device** and select **Create a device**. Enter *IoT Edge gateway device* as the device name, enter *gateway-01* as the device ID, make sure **IoT Edge gateway device** is selected as the device template and **No** is selected as **Simulate this device?**. Select **Create**.
1. In the list of devices, click on the **IoT Edge gateway device**, and then select **Connect**.
To register a downstream device in IoT Central:
1. In your IoT Central application, navigate to the **Devices** page.
-1. Don't select a device template. Select **+ New**. Enter *Downstream 01* as the device name, enter *downstream-01* as the device ID, make sure that the device template is **Unassigned**. Select **Create**.
+1. Don't select a device template. Select **+ New**. Enter *Downstream 01* as the device name, enter *downstream-01* as the device ID, make sure that the device template is **Unassigned** and **No** is selected as **Simulate this device?**. Select **Create**.
1. In the list of devices, click on the **Downstream 01**, and then select **Connect**.
For convenience, this article uses Azure virtual machines to run the gateway and
Select **Review + Create**, and then **Create**. It takes a couple of minutes to create the virtual machines in the **ingress-scenario** resource group.
-To check that the IoT Edge device is running correctly:
+To check that the IoT Edge gateway device is running correctly:
1. Open your IoT Central application. Then navigate to the **IoT Edge Gateway device** on the list of devices on the **Devices** page.
To generate the demo certificates and install them on your gateway device:
The example shown above assumes you're signed in as **AzureUser** and created a device CA certificate called "mycacert".
-1. Save the changes and restart the IoT Edge runtime:
+1. Save the changes and run the following command to verify that the *config.yaml* file is correct:
+
+ ```bash
+ sudo iotedge check
+ ```
+
+1. Restart the IoT Edge runtime:
```bash sudo systemctl restart iotedge
To connect a downstream device to the IoT Edge gateway device:
npm run-script start ```
+ During the `sudo apt install nodejs npm node-typescript` command, you might be asked to approve the installation: press `Y` if prompted.
+ 1. Enter the device ID, scope ID, and SAS key for the downstream device you created previously. For the hostname, enter `edgegateway`. The output from the command looks like: ```output
To verify the scenario is running, navigate to your **IoT Edge gateway device**
{"temperature":85.21208,"pressure":59.97321,"humidity":77.718124,"scale":"farenheit"} ```
-Because the IoT Edge device is transforming the data from the downstream device, the telemetry is associated with the gateway device in IoT Central. To visualize the transformed telemetry, create a view in the **IoT Edge gateway device** template and republish it.
+The temperature is sent in Fahrenheit. Because the IoT Edge device is transforming the data from the downstream device, the telemetry is associated with the gateway device in IoT Central. To visualize the transformed telemetry, create a view in the **IoT Edge gateway device** template and republish it.
## Data transformation at egress
iot-central Overview Iot Central Operator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-operator.md
To manage individual devices, use device views to set device and cloud propertie
To manage devices in bulk, create and schedule jobs. Jobs can update properties and run commands on multiple devices. To learn more, see [Create and run a job in your Azure IoT Central application](howto-manage-devices-in-bulk.md).
+To manage IoT Edge devices, [create and edit deployment manifests](concepts-iot-edge.md#iot-edge-deployment-manifests-and-iot-central-device-templates) and deploy them onto the device directly from IoT Central. You can also run commands on modules from within IoT Central.
+ If your IoT Central application uses *organizations*, an administrator controls which devices you have access to.

## Troubleshoot and remediate issues
iot-central Overview Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central.md
Title: What is Azure IoT Central | Microsoft Docs
-description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions and helps to reduce the burden and cost of IoT management operations, and development. This article provides an overview of the features of Azure IoT Central.
+description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. It helps to reduce the burden and cost of IoT management operations, and development. This article provides an overview of the features of Azure IoT Central.
Last updated 12/22/2021
# What is Azure IoT Central?
-IoT Central is an IoT application platform that reduces the burden and cost of developing, managing, and maintaining enterprise-grade IoT solutions. Choosing to build with IoT Central gives you the opportunity to focus time, money, and energy on transforming your business with IoT data, rather than just maintaining and updating a complex and continually evolving IoT infrastructure.
+IoT Central is an IoT application platform that reduces the burden and cost of developing, managing, and maintaining enterprise-grade IoT solutions. If you choose to build with IoT Central, you'll have the opportunity to focus time, money, and energy on transforming your business with IoT data, rather than just maintaining and updating a complex and continually evolving IoT infrastructure.
The web UI lets you quickly connect devices, monitor device conditions, create rules, and manage millions of devices and their data throughout their life cycle. Furthermore, it enables you to act on device insights by extending IoT intelligence into line-of-business applications.
-This article outlines, for IoT Central:
+This article provides an overview of IoT Central and describes its core functionality.
-- How to create your application.
-- How to connect your devices to your application.
-- How to integrate your application with other services.
-- How to administer your application.
-- The typical user roles associated with a project.
-- Pricing options.
-
-## Create your IoT Central application
+## Create an IoT Central application
[Quickly deploy a new IoT Central application](quick-deploy-iot-central.md) and then customize it to your specific requirements. Application templates in Azure IoT Central are a tool to help you kickstart your IoT solution development. You can use app templates for everything from getting a feel for what is possible, to fully customizing your application to resell to your customers.
Start with a generic _application template_ or with one of the industry-focused
- [Retail](../retail/tutorial-in-store-analytics-create-app.md)
- [Energy](../energy/tutorial-smart-meter-app.md)
- [Government](../government/tutorial-connected-waste-management.md)
-- [Healthcare](../healthcare/tutorial-continuous-patient-monitoring.md).
+- [Healthcare](../healthcare/tutorial-continuous-patient-monitoring.md)
See the [Create a new application](quick-deploy-iot-central.md) quickstart for a walk-through of how to create your first application.

## Connect devices
-After creating your application, the first step is to create and connect devices. Every device connected to IoT Central uses a _device template_. A device template is the blueprint that defines the characteristics and behavior of a type of device such as the:
+After you create your application, the next step is to create and connect devices. Every device connected to IoT Central uses a _device template_. A device template is the blueprint that defines the characteristics and behavior of a type of device such as the:
- Telemetry it sends. Examples include temperature and humidity. Telemetry is streaming data.
- Business properties that an operator can modify. Examples include a customer address and a last serviced date.
- Device properties that are set by a device and are read-only in the application. For example, the state of a valve as either open or shut.
-- Properties, that an operator sets, that determine the behavior of the device. For example, a target temperature for the device.
-- Commands, that an operator can call, that run on a device. For example, a command to remotely reboot a device.
+- Properties that an operator sets and that determine the behavior of the device. For example, a target temperature for the device.
+- Commands that an operator can call and that run on a device. For example, a command to remotely reboot a device. (See the declaration sketch below.)
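For context, device templates in IoT Central are backed by DTDL models, and the capabilities in the preceding list are declared there. A minimal, illustrative sketch of such declarations; the names and schemas are assumptions, not taken from a specific template:

```json
[
  { "@type": "Telemetry", "name": "temperature", "schema": "double" },
  { "@type": "Property", "name": "targetTemperature", "schema": "double", "writable": true },
  { "@type": "Command", "name": "reboot" }
]
```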
Every [device template](howto-set-up-template.md) includes:
iot-central Quick Configure Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/quick-configure-rules.md
In this quickstart, you create an IoT Central rule that sends an email when some
## Prerequisites
-Before you begin, you should complete the previous quickstart [Create and use an Azure IoT Central application](./quick-deploy-iot-central.md) to connect the **IoT Plug and Play** smartphone app to your IoT Central application.
+Before you begin, you should complete the previous quickstart [Connect your first device](./quick-deploy-iot-central.md). It shows you how to create an Azure IoT Central application and connect the **IoT Plug and Play** smartphone app to it.
## Create a telemetry-based rule
When the phone is lying on its back, the **z** value is greater than `9`, when t
1. In the **Target devices** section, select **IoT Plug and Play mobile** as the **Device template**. This option filters the devices the rule applies to by device template type. You can add more filter criteria by selecting **+ Filter**.
-1. In the **Conditions** section, you define what triggers your rule. Use the following information to define a single condition based on accelerometer z-axis telemetry. This rule uses aggregation so you receive a maximum of one email for each device every five minutes:
+1. In the **Conditions** section, you define what triggers your rule. Use the following information to define a single condition based on accelerometer z-axis telemetry. This rule uses aggregation, so you receive a maximum of one email for each device every five minutes:
| Field | Value |
| --- | --- |
iot-central Tutorial Water Quality Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/government/tutorial-water-quality-monitoring.md
The water quality monitoring application you created from the application templa
:::image type="content" source="media/tutorial-waterqualitymonitoring/water-quality-monitor-device1.png" alt-text="Select device 1":::
-1. On the **Cloud Properties** tab, change the **Acidity (pH) threshold** value from **8** to **9** and select **Save**.
+1. On the **Cloud Properties** tab, change the **Acidity (pH) threshold** value to **9** and select **Save**.
1. Explore the **Device Properties** tab and the **Device Dashboard** tab.

> [!NOTE]
iot-central Tutorial In Store Analytics Create App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-in-store-analytics-create-app.md
Last updated 12/20/2021
For many retailers, environmental conditions within their stores are a key differentiator from their competitors. Retailers want to maintain pleasant conditions within their stores for the benefit of their customers.
-You can use the IoT Central in-store analytics condition monitoring application template to build an end-to-end solution. The application template lets you digitally connect to and monitor a retail store environment using of different kinds of sensor devices. These sensor devices generate telemetry that you can convert into business insights to help the retailer reduce operating costs and create a great experience for their customers.
+You can use the IoT Central in-store analytics condition monitoring application template to build an end-to-end solution. The application template lets you digitally connect to and monitor a retail store environment using different kinds of sensor devices. These sensor devices generate telemetry that you can convert into business insights to help the retailer reduce operating costs and create a great experience for their customers.
Use the application template to:
Use the IoT Central *in-store analytics* application template and the guidance i
:::image type="content" source="media/tutorial-in-store-analytics-create-app/store-analytics-architecture-frame.png" alt-text="Azure IoT Central Store Analytics.":::

-- Set of IoT sensors sending telemetry data to a gateway device.
-- Gateway devices sending telemetry and aggregated insights to IoT Central.
-- Continuous data export to the desired Azure service for manipulation.
-- Data can be structured in the desired format and sent to a storage service.
-- Business applications can query data and generate insights that power retail operations.
+1. Set of IoT sensors sending telemetry data to a gateway device.
+1. Gateway devices sending telemetry and aggregated insights to IoT Central.
+1. Continuous data export to the desired Azure service for manipulation.
+1. Data can be structured in the desired format and sent to a storage service.
+1. Business applications can query data and generate insights that power retail operations.
## Condition monitoring sensors
To create a custom theme:
To update the application image:
-1. Select **Administration > Application settings**.
+1. Select **Administration > Your Application**.
1. Use the **Select image** button to choose an image to upload as the application image. This image appears on the application tile in the **My Apps** page of the IoT Central application manager.
To update the application image:
### Create device templates
-You can create device templates that enable you and the application operators to configure and manage devices. You create a template by building a custom one, by importing an existing template file, or by importing a template from the Azure IoT device catalog. After you create and customize a device template, use it to connect real devices to your application. Optionally, use a device template to generate simulated devices for testing.
+You can create device templates that enable you and the application operators to configure and manage devices. You can create a template by building a custom one, by importing an existing template file, or by importing a template from the Azure IoT device catalog. After you create and customize a device template, use it to connect real devices to your application. Optionally, use a device template to generate simulated devices for testing.
The **In-store analytics - checkout** application template has device templates for several devices. There are device templates for two of the three devices you use in the application. The RuuviTag device template isn't included in the **In-store analytics - checkout** application template. In this section, you add a device template for RuuviTag sensors to your application.
To add a RuuviTag device template to your application:
1. Find and select the **RuuviTag Multisensor** device template in the Azure IoT device catalog.
-1. Select **Next: Customize**.
+1. Select **Next: Review**.
:::image type="content" source="media/tutorial-in-store-analytics-create-app/ruuvitag-device-template.png" alt-text="Screenshot that highlights the Next: Customize button.":::
To customize the built-in interfaces of the RuuviTag device template:
1. Select **Customize** in the RuuviTag device template menu.
-1. Scroll in the list of capabilities and find the `humidity` telemetry type. It's the row item with the editable **Display name** value of *humidity*.
+1. Scroll in the list of capabilities and find the `RelativeHumidity` telemetry type. It's the row item with the editable **Display name** value of *RelativeHumidity*.
-In the following steps, you customize the `humidity` telemetry type for the RuuviTag sensors. Optionally, customize some of the other telemetry types.
+In the following steps, you customize the `RelativeHumidity` telemetry type for the RuuviTag sensors. Optionally, customize some of the other telemetry types.
-For the `humidity` telemetry type, make the following changes:
+For the `RelativeHumidity` telemetry type, make the following changes:
1. Select the **Expand** control to expand the schema details for the row.
-1. Update the **Display Name** value from *humidity* to a custom value such as *Relative humidity*.
+1. Update the **Display Name** value from *RelativeHumidity* to a custom value such as *Humidity*.
-1. Change the **Semantic Type** option from *None* to *Humidity*. Optionally, set schema values for the humidity telemetry type in the expanded schema view. Schema settings allow you to create detailed validation requirements for the data that your sensors track. For example, you could set minimum and maximum operating range values for a given interface.
+1. Change the **Semantic Type** option from *Relative humidity* to *Humidity*. Optionally, set schema values for the humidity telemetry type in the expanded schema view. Schema settings allow you to create detailed validation requirements for the data that your sensors track. For example, you could set minimum and maximum operating range values for a given interface.
1. Select **Save** to save your changes.
To create a rule:
1. Enter *Humidity level* as the name of the rule.
-1. Choose the RuuviTag device template in **Scopes**. The rule you define will apply to all sensors based on that template. Optionally, you could create a filter that would apply the rule only to a defined subset of the sensors.
+1. Choose the RuuviTag device template in **Target devices**. The rule you define will apply to all sensors based on that template. Optionally, you could create a filter that would apply the rule only to a defined subset of the sensors.
-1. Choose `Relative humidity` as the **Telemetry**. It's the device capability that you customized in a previous step.
+1. Choose `Humidity` as the **Telemetry**. It's the device capability that you customized in a previous step.
1. Choose `Is greater than` as the **Operator**.
iot-develop Quickstart Devkit Microchip Atsame54 Xpro https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-devkit-microchip-atsame54-xpro.md
Keep Termite open to monitor device output in the following steps.
* IAR Embedded Workbench for ARM (EW for ARM). You can download and install a [14-day free trial of IAR EW for ARM](https://www.iar.com/products/architectures/arm/iar-embedded-workbench-for-arm/).
-* Download the [Azure_RTOS_6.1_ATSAME54-XPRO_IAR_Samples_2020_10_10.zip](https://github.com/azure-rtos/samples/releases/download/rel_6.1_pnp_beta/Azure_RTOS_6.1_PnP_ATSAME54-XPRO_IAR_Sample_2021_03_18.zip) file and extract it to a working directory. Choose a directory with a short path to avoid compiler errors when you build.
+* Download the [Azure_RTOS_6.1_ATSAME54-XPRO_IAR_Samples_2021_11_03.zip](https://github.com/azure-rtos/samples/releases/download/v6.1_rel/Azure_RTOS_6.1_ATSAME54-XPRO_IAR_Samples_2021_11_03.zip) file and extract it to a working directory. Choose a directory with a short path to avoid compiler errors when you build.
[!INCLUDE [iot-develop-embedded-create-central-app-with-device](../../includes/iot-develop-embedded-create-central-app-with-device.md)]
Keep Termite open to monitor device output in the following steps.
* [MPLAB XC32/32++ Compiler 2.4.0 or later](https://www.microchip.com/mplab/compilers).
-* Download the [Azure_RTOS_6.1_ATSAME54-XPRO_MPLab_Samples_2020_10_10.zip](https://github.com/azure-rtos/samples/releases/download/rel_6.1_pnp_beta/Azure_RTOS_6.1_PnP_ATSAME54-XPRO_MPLab_Sample_2021_03_18.zip) file and unzip it to a working directory. Choose a directory with a short path to avoid compiler errors when you build.
+* Download the [Azure_RTOS_6.1_ATSAME54-XPRO_MPLab_Samples_2021_11_03.zip](https://github.com/azure-rtos/samples/releases/download/v6.1_rel/Azure_RTOS_6.1_ATSAME54-XPRO_MPLab_Samples_2021_11_03.zip) file and unzip it to a working directory. Choose a directory with a short path to avoid compiler errors when you build.
[!INCLUDE [iot-develop-embedded-create-central-app-with-device](../../includes/iot-develop-embedded-create-central-app-with-device.md)]
iot-edge How To Manage Device Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-manage-device-certificates.md
For more information about the function of the different certificates on an IoT
For these two automatically generated certificates, you have the option of setting a flag in the config file to configure the number of days for the lifetime of the certificates.

>[!NOTE]
->There is a third auto-generated certificate that the IoT Edge security manager creates, the **IoT Edge hub server certificate**. This certificate always has a 90 day lifetime, but is automatically renewed before expiring. The auto-generated CA lifetime value set in the config file doesn't affect this certificate.
+>There is a third auto-generated certificate that the IoT Edge security manager creates, the **IoT Edge hub server certificate**. This certificate always has a 30 day lifetime, but is automatically renewed before expiring. The auto-generated CA lifetime value set in the config file doesn't affect this certificate.
Upon expiry after the specified number of days, IoT Edge has to be restarted to regenerate the device CA certificate. The device CA certificate won't be renewed automatically.
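For reference, on IoT Edge 1.1 the lifetime flag mentioned above sits in the `certificates` section of *config.yaml*; a sketch, where the 90-day value is only an example:

```yaml
certificates:
  # Lifetime, in days, for the auto-generated device CA and workload CA certificates
  auto_generated_ca_lifetime_days: 90
```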
iot-hub Iot Hub Device Streams Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-device-streams-overview.md
Two sides of each stream (on the device and service side) use the IoT Hub SDK to
Use the links below to learn more about device streams.

> [!div class="nextstepaction"]
-> [Device streams on IoT show (Channel 9)](https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fchannel9.msdn.com%2FShows%2FInternet-of-Things-Show%2FAzure-IoT-Hub-Device-Streams&data=02%7C01%7Crezas%40microsoft.com%7Cc3486254a89a43edea7c08d67a88bcea%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636831125031268909&sdata=S6u9qiehBN4tmgII637uJeVubUll0IZ4p2ddtG5pDBc%3D&reserved=0)
+> [Azure IoT Hub Device Streams Video](/shows/Internet-of-Things-Show/Azure-IoT-Hub-Device-Streams)
lighthouse Managed Services Offers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/concepts/managed-services-offers.md
Title: Managed Service offers in Azure Marketplace
description: Offer your Azure Lighthouse management services to customers through Managed Services offers in Azure Marketplace.
Previously updated : 09/08/2021
Last updated : 02/02/2022
This article describes the **Managed Service** offer type in [Azure Marketplace]
Managed Service offers streamline the process of onboarding customers to Azure Lighthouse. When a customer purchases an offer in Azure Marketplace, they'll be able to specify which subscriptions and/or resource groups should be onboarded.
-For each offer, you define the access that users in your organization will have to work on resources in the customer tenant. This is done through a manifest that specifies the Azure Active Directory (Azure AD) users, groups, and service principals that will have access to customer resources, along with [roles](tenants-users-roles.md) that define their level of access.
+For each offer, you define the access that users in your organization will have to work on resources in the customer tenant. This is done through a manifest that specifies the Azure Active Directory (Azure AD) users, groups, and service principals that will have access to customer resources, along with [roles](tenants-users-roles.md#role-support-for-azure-lighthouse) that define their level of access.
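For illustration, each authorization in that manifest pairs an Azure AD principal with a built-in role. A sketch of one entry, with placeholder IDs; the role definition ID shown is the built-in Contributor role:

```json
{
  "principalId": "<object ID of the Azure AD user, group, or service principal>",
  "principalIdDisplayName": "Managed services operators",
  "roleDefinitionId": "b24988ac-6180-42a0-ab88-20f7382dd24c"
}
```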
> [!NOTE]
> Managed Service offers may not be available in Azure Government and other national clouds.
-## Public and private offers
+## Public and private plans
Each Managed Service offer includes one or more plans. Plans can be either private or public.
-If you want to limit your offer to specific customers, you can publish a private plan. When you do so, the plan can only be purchased for the specific subscription IDs that you provide. For more info, see [Private offers](../../marketplace/private-offers.md).
+If you want to limit your offer to specific customers, you can publish a private plan. When you do so, the plan can only be purchased for the specific subscription IDs that you provide. For more info, see [Private plans](../../marketplace/private-plans.md).
> [!NOTE]
-> Private offers are not supported with subscriptions established through a reseller of the Cloud Solution Provider (CSP) program.
+> Private plans are not supported with subscriptions established through a reseller of the Cloud Solution Provider (CSP) program.
Public plans let you promote your services to new customers. These are usually more appropriate when you only require limited access to the customer's tenant. Once you've established a relationship with a customer, if they decide to grant your organization additional access, you can do so either by publishing a new private plan for that customer only, or by [onboarding them for further access using Azure Resource Manager templates](../how-to/onboard-customer.md).
lighthouse Publish Managed Services Offers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/how-to/publish-managed-services-offers.md
Title: Publish a Managed Service offer to Azure Marketplace
description: Learn how to publish a Managed Service offer that onboards customers to Azure Lighthouse.
Previously updated : 08/10/2021
Last updated : 02/02/2022

# Publish a Managed Service offer to Azure Marketplace
-In this article, you'll learn how to publish a public or private Managed Service offer to [Azure Marketplace](https://azuremarketplace.microsoft.com) using the [Commercial Marketplace](../../marketplace/overview.md) program in Partner Center. Customers who purchase the offer will then delegate subscriptions or resource groups, allowing you to manage them through [Azure Lighthouse](../overview.md).
+In this article, you'll learn how to publish a public or private Managed Service offer to [Azure Marketplace](https://azuremarketplace.microsoft.com) using the [commercial marketplace](../../marketplace/overview.md) program in Partner Center. Customers who purchase the offer will then delegate subscriptions or resource groups, allowing you to manage them through [Azure Lighthouse](../overview.md).
## Publishing requirements
-You need to have a valid [account in Partner Center](../../marketplace/create-account.md) to create and publish offers. If you don't have an account already, the [sign-up process](https://aka.ms/joinmarketplace) will lead you through the steps of creating an account in Partner Center and enrolling in the Commercial Marketplace program.
+You need to have a valid [account in Partner Center](../../marketplace/create-account.md) to create and publish offers. If you don't have an account already, the [sign-up process](https://aka.ms/joinmarketplace) will lead you through the steps of creating an account in Partner Center and enrolling in the commercial marketplace program.
Per the [Managed Service offer certification requirements](/legal/marketplace/certification-policies#700-managed-services), you must have a [Silver or Gold Cloud Platform competency level](/partner-center/learn-about-competencies) or be an [Azure Expert MSP](https://partner.microsoft.com/membership/azure-expert-msp) in order to publish a Managed Service offer. You must also [enter a lead destination that will create a record in your CRM system](../../marketplace/plan-managed-service-offer.md#customer-leads) each time a customer deploys your offer.
The following table can help determine whether to onboard customers by publishin
|Requires [Partner Center account](../../marketplace/create-account.md) |Yes |No |
|Requires [Silver or Gold Cloud Platform competency level](/partner-center/learn-about-competencies) or [Azure Expert MSP](https://partner.microsoft.com/membership/azure-expert-msp) |Yes |No |
|Available to new customers through Azure Marketplace |Yes |No |
-|Can limit offer to specific customers |Yes (only with private offers, which can't be used with subscriptions established through a reseller of the Cloud Solution Provider (CSP) program) |Yes |
+|Can limit offer to specific customers |Yes (only with private plans, which can't be used with subscriptions established through a reseller of the Cloud Solution Provider (CSP) program) |Yes |
|Requires customer acceptance in Azure portal |Yes |No |
|Can use automation to onboard multiple subscriptions, resource groups, or customers |No |Yes |
|Immediate access to new built-in roles and Azure Lighthouse features |Not always (generally available after some delay) |Yes |
The following table can help determine whether to onboard customers by publishin
For detailed instructions about how to create your offer, including all of the information and assets you'll need to provide, see [Create a Managed Service offer](../../marketplace/create-managed-service-offer.md).
-To learn about the general publishing process, review the [Commercial Marketplace documentation](../../marketplace/overview.md). You should also review the [commercial marketplace certification policies](/legal/marketplace/certification-policies), particularly the [Managed Services](/legal/marketplace/certification-policies#700-managed-services) section.
+To learn about the general publishing process, review the [commercial marketplace documentation](../../marketplace/overview.md). You should also review the [commercial marketplace certification policies](/legal/marketplace/certification-policies), particularly the [Managed Services](/legal/marketplace/certification-policies#700-managed-services) section.
Once a customer adds your offer, they will be able to delegate one or more subscriptions or resource groups, which will then be [onboarded to Azure Lighthouse](#the-customer-onboarding-process).
Once a customer adds your offer, they will be able to delegate one or more subsc
## Publish your offer
-Once you've completed all of the sections, your next step is to publish the offer to Azure Marketplace. Select the **Publish** button to initiate the process of making your offer live. More info about this process can be found [here](../../marketplace/review-publish-offer.md).
+Once you've completed all of the sections, your next step is to publish the offer. After you initiate the publishing process, your offer will go through several validation and publishing steps. For more information, see [Review and publish an offer to the commercial marketplace](../../marketplace/review-publish-offer.md).
-You can [publish an updated version of your offer](../../marketplace/update-existing-offer.md) at any time. For example, you may want to add a new role definition to a previously-published offer. When you do so, customers who have already added the offer will see an icon in the [**Service providers**](view-manage-service-providers.md) page in the Azure portal that lets them know an update is available. Each customer will be able to [review the changes](view-manage-service-providers.md#update-service-provider-offers) and decide whether they want to update to the new version.
+You can [publish an updated version of your offer](../../marketplace/update-existing-offer.md) at any time. For example, you may want to add a new role definition to a previously-published offer. When you do so, customers who have already added the offer will see an icon in the [**Service providers**](view-manage-service-providers.md) page in the Azure portal that lets them know an update is available. Each customer will be able to [review the changes and update to the new version](view-manage-service-providers.md#update-service-provider-offers).
## The customer onboarding process
-After a customer adds your offer, they'll be able to [delegate one or more specific subscriptions or resource groups](view-manage-service-providers.md#delegate-resources), which will then be onboarded to Azure Lighthouse. If a customer has accepted an offer but has not yet delegated any resources, they'll see a note at the top of the **Provider offers** section of the [**Service providers**](view-manage-service-providers.md) page in the Azure portal.
+After a customer adds your offer, they can [delegate one or more specific subscriptions or resource groups](view-manage-service-providers.md#delegate-resources), which will be onboarded to Azure Lighthouse. If a customer has accepted an offer but has not yet delegated any resources, they'll see a note at the top of the **Provider offers** section of the [**Service providers**](view-manage-service-providers.md) page in the Azure portal.
> [!IMPORTANT]
> Delegation must be done by a non-guest account in the customer's tenant who has a role with the `Microsoft.Authorization/roleAssignments/write` permission, such as [Owner](../../role-based-access-control/built-in-roles.md#owner), for the subscription being onboarded (or which contains the resource groups that are being onboarded). To find users who can delegate the subscription, a user in the customer's tenant can select the subscription in the Azure portal, open **Access control (IAM)**, and [view all users with the Owner role](../../role-based-access-control/role-assignments-list-portal.md#list-owners-of-a-subscription).
-Once the customer delegates a subscription (or one or more resource groups within a subscription), the **Microsoft.ManagedServices** resource provider will be registered for that subscription, and users in your tenant will be able to access the delegated resources according to the authorizations in your offer.
+Once the customer delegates a subscription (or one or more resource groups within a subscription), the **Microsoft.ManagedServices** resource provider will be registered for that subscription, and users in your tenant will be able to access the delegated resources according to the authorizations that you defined in your offer.
> [!NOTE]
> To delegate additional subscriptions or resource groups to the same offer at a later time, the customer will need to [manually register the **Microsoft.ManagedServices** resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider) on each subscription before delegating.
If you publish an updated version of your offer, the customer can [review the ch
## Next steps -- Learn about the [Commercial Marketplace](../../marketplace/overview.md).
+- Learn about the [commercial marketplace](../../marketplace/overview.md).
- [Link your partner ID](partner-earned-credit.md) to track your impact across customer engagements.
- Learn about [cross-tenant management experiences](../concepts/cross-tenant-management-experience.md).
- [View and manage customers](view-manage-customers.md) by going to **My customers** in the Azure portal.
load-balancer Load Balancer Custom Probe Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-custom-probe-overview.md
Azure Monitor logs are not available for both public and internal Basic Load Bal
- HTTPS probes do not support mutual authentication with a client certificate.
- You should assume health probes will fail when TCP timestamps are enabled.
- A basic SKU load balancer health probe isn't supported with a virtual machine scale set.
+- HTTP probes do not support probing on the following ports due to security concerns: 19, 21, 25, 70, 110, 119, 143, 220, 993. Choose a permitted port instead, as sketched below.
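For example, a probe configured on a permitted port such as 80 behaves normally; a sketch with the Azure CLI, using placeholder resource names:

```azurecli
# Create an HTTP health probe on port 80, which is not on the blocked list
az network lb probe create \
  --resource-group <resource-group> \
  --lb-name <load-balancer-name> \
  --name myHealthProbe \
  --protocol http \
  --port 80 \
  --path /
```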
## Next steps
load-balancer Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/whats-new.md
You can also find the latest Azure Load Balancer updates and subscribe to the RS
| Feature | Support for moves across resource groups | Standard Load Balancer and Standard Public IP support for [resource group moves](https://azure.microsoft.com/updates/standard-resource-group-move/). | October 2020 |
| Feature | [Cross-region load balancing with Global tier on Standard LB](https://azure.microsoft.com/updates/preview-azure-load-balancer-now-supports-crossregion-load-balancing/) | Azure Load Balancer supports Cross Region Load Balancing. Previously, Standard Load Balancer had a regional scope. With this release, you can load balance across multiple Azure regions via a single, static, global anycast Public IP address. | September 2020 |
| Feature| Azure Load Balancer Insights using Azure Monitor | Built as part of Azure Monitor for Networks, customers now have topological maps for all their Load Balancer configurations and health dashboards for their Standard Load Balancers preconfigured with metrics in the Azure portal. [Get started and learn more](https://azure.microsoft.com/blog/introducing-azure-load-balancer-insights-using-azure-monitor-for-networks/) | June 2020 |
-| Validation | Addition of validation for HA ports | A validation was added to ensure that HA port rules and non HA port rules are only configurable when Floating IP is enabled. Previously, the this configuration would go through, but not work as intended. No change to functionality was made. You can learn more [here](load-balancer-ha-ports-overview.md#limitations)| June 2020 |
+| Validation | Addition of validation for HA ports | A validation was added to ensure that HA port rules and non HA port rules are only configurable when Floating IP is enabled. Previously, this configuration would go through, but not work as intended. No change to functionality was made. You can learn more [here](load-balancer-ha-ports-overview.md#limitations)| June 2020 |
| Feature| IPv6 support for Azure Load Balancer (generally available) | You can have IPv6 addresses as your frontend for your Azure Load Balancers. Learn how to [create a dual stack application here](./virtual-network-ipv4-ipv6-dual-stack-standard-load-balancer-powershell.md) |April 2020|
| Feature| TCP Resets on Idle Timeout (generally available)| Use TCP resets to create a more predictable application behavior. [Learn more](load-balancer-tcp-reset.md)| February 2020 |
The product group is actively working on resolutions for the following known iss
|Issue |Description |Mitigation |
| - | - | - |
| IP based LB outbound IP | IP based LB leverages Azure's Default Outbound Access IP for outbound when no outbound rules are configured | In order to prevent outbound access from this IP, please leverage Outbound rules or a NAT Gateway for a predictable IP address and to prevent SNAT port exhaustion |
+| numberOfProbes, "Unhealthy threshold" | Health probe configuration property numberOfProbes, otherwise known as "Unhealthy threshold" in Portal, is not respected. Load Balancer health probes will probe up/down immediately after 1 probe regardless of the property's configured value | To reflect the current behavior, please set the value of numberOfProbes ("Unhealthy threshold" in Portal) as 1 (see the CLI sketch below) |
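A sketch of that mitigation with the Azure CLI, assuming your CLI version exposes the unhealthy threshold through the `--threshold` parameter; resource names are placeholders:

```azurecli
# Set numberOfProbes ("Unhealthy threshold") to 1 to match actual probe behavior
az network lb probe update \
  --resource-group <resource-group> \
  --lb-name <load-balancer-name> \
  --name <probe-name> \
  --threshold 1
```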
logic-apps Manage Logic Apps With Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/manage-logic-apps-with-azure-portal.md
To stop the trigger from firing the next time when the trigger condition is met,
1. Save your changes. This step resets your trigger's current state.
1. [Reactivate your logic app](#disable-enable-single-logic-app).
+* When a workflow is disabled, you can still resubmit runs.
+ <a name="disable-enable-single-logic-app"></a> ### Disable or enable a single logic app
To stop the trigger from firing the next time when the trigger condition is met,
1. To confirm whether your operation succeeded or failed, on the main Azure toolbar, open the **Notifications** list (bell icon).
-> [!NOTE]
-> When a logic app workflow is disabled, you can still resubmit runs.
- <a name="disable-or-enable-multiple-logic-apps"></a> ### Disable or enable multiple logic apps
logic-apps Manage Logic Apps With Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/manage-logic-apps-with-visual-studio.md
Title: Edit and manage logic apps by using Visual Studio with Cloud Explorer
description: Edit, update, manage, add to source control, and deploy logic apps by using Visual Studio with Cloud Explorer
ms.suite: integration
Last updated 01/28/2022
To stop the trigger from firing the next time when the trigger condition is met,
1. Save your changes. This step resets your trigger's current state.
1. [Reactivate your logic app](#enable-logic-apps).
+* When a workflow is disabled, you can still resubmit runs.
+ <a name="disable-logic-apps"></a> ### Disable logic apps
In Cloud Explorer, open your logic app's shortcut menu, and select **Disable**.
![Disable your logic app in Cloud Explorer](./media/manage-logic-apps-with-visual-studio/disable-logic-app-cloud-explorer.png)
-> [!NOTE]
-> When a logic app workflow is disabled, you can still resubmit runs.
- <a name="enable-logic-apps"></a> ### Enable logic apps
logic-apps Quickstart Create Logic Apps Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/quickstart-create-logic-apps-visual-studio-code.md
ms.suite: integration
Previously updated : 05/25/2021
Last updated : 02/02/2022
#Customer intent: As a developer, I want to create my first automated workflow by using Azure Logic Apps while working in Visual Studio Code
Before you start, make sure that you have these items:
* Basic knowledge about [logic app workflow definitions](../logic-apps/logic-apps-workflow-definition-language.md) and their structure as described with JSON
- If you're new to Logic Apps, try this [quickstart](../logic-apps/quickstart-create-first-logic-app-workflow.md), which creates your first logic apps in the Azure portal and focuses more on the basic concepts.
+ If you're new to Azure Logic Apps, try this [quickstart](../logic-apps/quickstart-create-first-logic-app-workflow.md), which creates your first logic apps in the Azure portal and focuses more on the basic concepts.
* Access to the web for signing in to Azure and your Azure subscription
Before you start, make sure that you have these items:
For more information, see [Extension Marketplace](https://code.visualstudio.com/docs/editor/extension-gallery). To contribute to this extension's open-source version, visit the [Azure Logic Apps extension for Visual Studio Code on GitHub](https://github.com/Microsoft/vscode-azurelogicapps).
-* If your logic app needs to communicate through a firewall that limits traffic to specific IP addresses, that firewall needs to allow access for *both* the [inbound](logic-apps-limits-and-config.md#inbound) and [outbound](logic-apps-limits-and-config.md#outbound) IP addresses used by the Logic Apps service or runtime in the Azure region where your logic app exists. If your logic app also uses [managed connectors](../connectors/managed.md), such as the Office 365 Outlook connector or SQL connector, or uses [custom connectors](/connectors/custom-connectors/), the firewall also needs to allow access for *all* the [managed connector outbound IP addresses](logic-apps-limits-and-config.md#outbound) in your logic app's Azure region.
+* If your logic app needs to communicate through a firewall that limits traffic to specific IP addresses, that firewall needs to allow access for *both* the [inbound](logic-apps-limits-and-config.md#inbound) and [outbound](logic-apps-limits-and-config.md#outbound) IP addresses used by Azure Logic Apps or runtime in the Azure region where your logic app exists. If your logic app also uses [managed connectors](../connectors/managed.md), such as the Office 365 Outlook connector or SQL connector, or uses [custom connectors](/connectors/custom-connectors/), the firewall also needs to allow access for *all* the [managed connector outbound IP addresses](logic-apps-limits-and-config.md#outbound) in your logic app's Azure region.
<a name="access-azure"></a>
In Visual Studio Code, you can open and review the earlier versions for your log
In Visual Studio Code, if you edit a published logic app and save your changes, you *overwrite* your already deployed app. To avoid breaking your logic app in production and minimize disruption, disable your logic app first. You can then reactivate your logic app after you've confirmed that your logic app still works.
-> [!NOTE]
-> Disabling a logic app affects workflow instances in the following ways:
->
-> * The Logic Apps service continues all in-progress and pending runs until they finish. Based on the volume or backlog, this process might take time to complete.
->
-> * The Logic Apps service doesn't create or run new workflow instances.
->
-> * The trigger won't fire the next time that its conditions are met. However, the trigger state remembers the point at which the logic app was stopped. So, if you reactivate the logic app, the trigger fires for all the unprocessed items since the last run.
->
-> To stop the trigger from firing on unprocessed items since the last run, clear the trigger's state before you reactivate the logic app:
->
-> 1. In the logic app, edit any part of the workflow's trigger.
-> 1. Save your changes. This step resets your trigger's current state.
-> 1. Reactivate your logic app.
+* Azure Logic Apps continues all in-progress and pending runs until they finish. Based on the volume or backlog, this process might take time to complete.
+
+* Azure Logic Apps doesn't create or run new workflow instances.
+
+* The trigger won't fire the next time that its conditions are met.
+
+* The trigger state remembers the point at which the logic app was stopped. So, if you reactivate the logic app, the trigger fires for all the unprocessed items since the last run.
+
+ To stop the trigger from firing on unprocessed items since the last run, clear the trigger's state before you reactivate the logic app:
+
+ 1. In the logic app, edit any part of the workflow's trigger.
+ 1. Save your changes. This step resets your trigger's current state.
+ 1. Reactivate your logic app.
+
+* When a workflow is disabled, you can still resubmit runs.
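If you need to disable or reactivate a workflow outside Visual Studio Code, the Azure Logic Apps management REST API exposes `disable` and `enable` actions that you can call with `az rest`; a sketch, with placeholder subscription, resource group, and workflow names:

```azurecli
# Disable the workflow (swap "disable" for "enable" to reactivate it)
az rest --method post \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Logic/workflows/<workflow-name>/disable?api-version=2016-06-01"
```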
1. If you haven't signed in to your Azure account and subscription yet from inside Visual Studio Code, follow the [previous steps to sign in now](#access-azure).
In Visual Studio Code, if you edit a published logic app and save your changes,
Deleting a logic app affects workflow instances in the following ways:
-* The Logic Apps service makes a best effort to cancel any in-progress and pending runs.
+* Azure Logic Apps makes a best effort to cancel any in-progress and pending runs.
Even with a large volume or backlog, most runs are canceled before they finish or start. However, the cancellation process might take time to complete. Meanwhile, some runs might get picked up for execution while the service works through the cancellation process.
-* The Logic Apps service doesn't create or run new workflow instances.
+* Azure Logic Apps doesn't create or run new workflow instances.
* If you delete a workflow and then recreate the same workflow, the recreated workflow won't have the same metadata as the deleted workflow. You have to resave any workflow that called the deleted workflow. That way, the caller gets the correct information for the recreated workflow. Otherwise, calls to the recreated workflow fail with an `Unauthorized` error. This behavior also applies to workflows that use artifacts in integration accounts and workflows that call Azure functions.
machine-learning Concept Differential Privacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-differential-privacy.md
As the amount of data that an organization