Updates from: 02/03/2022 02:09:41
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Publisher Verification Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/publisher-verification-overview.md
Publisher verification helps admins and end users understand the authenticity of
When an application is marked as publisher verified, it means that the publisher has verified their identity using a [Microsoft Partner Network](https://partner.microsoft.com/membership) account that has completed the [verification](/partner-center/verification-responses) process and has associated this MPN account with their application registration. A blue "verified" badge appears on the Azure AD consent prompt and other screens:
+![Consent prompt](./media/publisher-verification-overview/consent-prompt.png)
+> [!NOTE]
+> We recently changed the color of the "verified" badge from blue to gray. We will revert that change in the second half of February 2022, so the "verified" badge will be blue again.
+ This feature is primarily for developers building multi-tenant apps that leverage [OAuth 2.0 and OpenID Connect](active-directory-v2-protocols.md) with the [Microsoft identity platform](v2-overview.md). These apps can sign users in using OpenID Connect, or they may use OAuth 2.0 to request access to data using APIs like [Microsoft Graph](https://developer.microsoft.com/graph/).

## Benefits
active-directory Sample V2 Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/sample-v2-code.md
The following samples show public client desktop applications that access the Mi
> | Java | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-java-desktop/) | MSAL Java | Integrated Windows authentication |
> | Node.js | [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-desktop) | MSAL Node | Authorization code with PKCE |
> | PowerShell | [Call Microsoft Graph by signing in users using username/password](https://github.com/azure-samples/active-directory-dotnetcore-console-up-v2) | MSAL.NET | Resource owner password credentials |
-> | Python | [Sign in users](https://github.com/Azure-Samples/ms-identity-python-desktop) | MSAL Python | Authorization code with PKCE |
+> | Python | [Sign in users](https://github.com/Azure-Samples/ms-identity-python-desktop) | MSAL Python | Resource owner password credentials |
> | Universal Windows Platform (UWP) | [Call Microsoft Graph](https://github.com/Azure-Samples/active-directory-xamarin-native-v2/tree/main/2-With-broker) | MSAL.NET | Web account manager |
> | Windows Presentation Foundation (WPF) | [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/tree/master/2.%20Web%20API%20now%20calls%20Microsoft%20Graph) | MSAL.NET | Authorization code with PKCE |
> | XAML | &#8226; [Sign in users and call ASP.NET core web API](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/tree/master/1.%20Desktop%20app%20calls%20Web%20API) <br/> &#8226; [Sign in users and call Microsoft Graph](https://github.com/azure-samples/active-directory-dotnet-desktop-msgraph-v2) | MSAL.NET | Authorization code with PKCE |
active-directory Scenario Daemon App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-daemon-app-configuration.md
Configuration parameters for the [Node.js daemon sample](https://github.com/Azur
# Credentials
TENANT_ID=Enter_the_Tenant_Info_Here
CLIENT_ID=Enter_the_Application_Id_Here
+// You provide either a ClientSecret, a CertificateConfiguration, or a ClientAssertion. These settings are mutually exclusive.
CLIENT_SECRET=Enter_the_Client_Secret_Here
+CERTIFICATE_THUMBPRINT=Enter_the_certificate_thumbprint_Here
+CERTIFICATE_PRIVATE_KEY=Enter_the_certificate_private_key_Here
+CLIENT_ASSERTION=Enter_the_Assertion_String_Here
# Endpoints
// the Azure AD endpoint is the authority endpoint for token issuance
app = ConfidentialClientApplicationBuilder.Create(config.ClientId)
    .WithAuthority(new Uri(config.Authority))
    .Build();
```

# [Java](#tab/java)

In MSAL Java, there are two builders to instantiate the confidential client application with certificates:
ConfidentialClientApplication cca =
# [Node.js](#tab/nodejs)
-The sample application does not implement initialization with certificates at the moment.
+```JavaScript
+
+const config = {
+ auth: {
+ clientId: process.env.CLIENT_ID,
+ authority: process.env.AAD_ENDPOINT + process.env.TENANT_ID,
+ clientCertificate: {
+ thumbprint: process.env.CERTIFICATE_THUMBPRINT, // a 40-digit hexadecimal string
+ privateKey: process.env.CERTIFICATE_PRIVATE_KEY,
+ }
+ }
+};
+
+// Create an MSAL application object
+const cca = new msal.ConfidentialClientApplication(config);
+```
+
+For details, see [Use certificate credentials with MSAL Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-node/docs/certificate-credentials.md).
# [Python](#tab/python)
ConfidentialClientApplication cca =
# [Node.js](#tab/nodejs)
-The sample application does not implement initialization with assertions at the moment.
+```JavaScript
+const clientConfig = {
+ auth: {
+ clientId: process.env.CLIENT_ID,
+ authority: process.env.AAD_ENDPOINT + process.env.TENANT_ID,
+ clientAssertion: process.env.CLIENT_ASSERTION
+ }
+};
+const cca = new msal.ConfidentialClientApplication(clientConfig);
+```
+
+For details, see [Initialize the ConfidentialClientApplication object](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-node/docs/initialize-confidential-client-application.md).
# [Python](#tab/python)
active-directory V2 Oauth2 On Behalf Of Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth2-on-behalf-of-flow.md
To request an access token, make an HTTP POST to the tenant-specific Microsoft i
https://login.microsoftonline.com/<tenant>/oauth2/v2.0/token
```

There are two cases, depending on whether the client application chooses to be secured by a shared secret or a certificate.

### First case: Access token request with a shared secret
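As a rough sketch (not the article's own sample; the tenant, client ID, secret, and incoming token below are placeholders), the shared-secret on-behalf-of exchange posts a form-encoded body like this:

```powershell
# Hypothetical values; replace with your tenant, app registration, and the token your API received.
$tenant = "contoso.onmicrosoft.com"
$body = @{
    grant_type          = "urn:ietf:params:oauth:grant-type:jwt-bearer"
    client_id           = "00001111-aaaa-2222-bbbb-3333cccc4444"
    client_secret       = "<client-secret>"
    assertion           = "<access-token-sent-to-your-api>"
    scope               = "https://graph.microsoft.com/User.Read"
    requested_token_use = "on_behalf_of"
}

# POST the form-encoded request to the tenant-specific token endpoint
$response = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenant/oauth2/v2.0/token" `
    -Body $body -ContentType "application/x-www-form-urlencoded"

$response.access_token
```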
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new-archive.md
Previously updated : 1/20/2022 Last updated : 1/31/2022
The What's new in Azure Active Directory? release notes provide information abou
- Plans for changes +
+## July 2021
+
+### New Google sign-in integration for Azure AD B2C and B2B self-service sign-up and invited external users will stop working starting July 12, 2021
+
+**Type:** Plan for change
+**Service category:** B2B
+**Product capability:** B2B/B2C
+
+Previously we announced that [the exception for Embedded WebViews for Gmail authentication will expire in the second half of 2021](https://www.yammer.com/cepartners/threads/1188371962232832).
+
+On July 7, 2021, we learned from Google that some of these restrictions will apply starting **July 12, 2021**. Azure AD B2B and B2C customers who set up a new Google ID sign-in in their custom or line of business applications to invite external users or enable self-service sign-up will have the restrictions applied immediately. As a result, end-users will be met with an error screen that blocks their Gmail sign-in if the authentication is not moved to a system webview. See the docs linked below for details.
+
+Most apps use the system web-view by default, and will not be impacted by this change. This only applies to customers using embedded webviews (the non-default setting). We advise customers to move their application's authentication to system browsers instead, prior to creating any new Google integrations. To learn how to move to system browsers for Gmail authentications, read the Embedded vs System Web UI section in the [Using web browsers (MSAL.NET)](../develop/msal-net-web-browsers.md#embedded-vs-system-web-ui) documentation. All MSAL SDKs use the system web-view by default. [Learn more](../external-identities/google-federation.md#deprecation-of-web-view-sign-in-support).
+++
+### Google sign-in on embedded web-views expiring September 30, 2021
+
+**Type:** Plan for change
+**Service category:** B2B
+**Product capability:** B2B/B2C
+
+
+About two months ago we announced that the exception for Embedded WebViews for Gmail authentication will expire in the second half of 2021.
+
+Recently, Google has specified the date to be **September 30, 2021**.
+
+Rolling out globally beginning September 30, 2021, Azure AD B2B guests signing in with their Gmail accounts will now be prompted to enter a code in a separate browser window to finish signing in on Microsoft Teams mobile and desktop clients. This applies to invited guests and guests who signed up using Self-Service Sign-Up.
+
+Azure AD B2C customers who have set up embedded webview Gmail authentications in their custom/line of business apps or have existing Google integrations will no longer be able to let their users sign in with Gmail accounts. To mitigate this, make sure to modify your apps to use the system browser for sign-in. For more information, read the Embedded vs System Web UI section in the [Using web browsers (MSAL.NET)](../develop/msal-net-web-browsers.md#embedded-vs-system-web-ui) documentation. All MSAL SDKs use the system web-view by default.
+
+As the device login flow starts rolling out on September 30, 2021, it may not be rolled out to your region yet (in which case, your end users will be met with the error screen shown in the documentation until it gets deployed to your region).
+
+For details on known impacted scenarios and what experience your users can expect, read [Add Google as an identity provider for B2B guest users](../external-identities/google-federation.md#deprecation-of-web-view-sign-in-support).
+++
+### Bug fixes in My Apps
+
+**Type:** Fixed
+**Service category:** My Apps
+**Product capability:** End User Experiences
+
+- Previously, the presence of the banner recommending the use of collections caused content to scroll behind the header. This issue has been resolved.
+- Previously, when adding apps to a collection, the order of apps in the All Apps collection would get randomly reordered. This issue has also been resolved.
+
+For more information on My Apps, read [Sign in and start apps from the My Apps portal](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+++
+### Public preview - Application authentication method policies
+
+**Type:** New feature
+**Service category:** MS Graph
+**Product capability:** Developer Experience
+
+Application authentication method policies in Microsoft Graph allow IT admins to enforce a lifetime on application password secret credentials or block the use of secrets altogether. Policies can be enforced for an entire tenant as a default configuration, and they can be scoped to specific applications or service principals. [Learn more](/graph/api/resources/policy-overview).
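As a quick, hedged illustration (the beta `defaultAppManagementPolicy` path below is inferred from the linked policy overview, not stated in this release note), the tenant default policy can be read with the Microsoft Graph PowerShell SDK:

```powershell
# Sketch only: inspect the tenant-wide default app management policy (beta endpoint assumed).
Connect-MgGraph -Scopes "Policy.Read.All"

Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/policies/defaultAppManagementPolicy"
```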
+
++
+### Public preview - Authentication Methods registration campaign to download Microsoft Authenticator
+
+**Type:** New feature
+**Service category:** Microsoft Authenticator App
+**Product capability:** User Authentication
+
+The Authenticator registration campaign helps admins to move their organizations to a more secure posture by prompting users to adopt the Microsoft Authenticator app. Prior to this feature, there was no way for an admin to push their users to set up the Authenticator app.
+
+The registration campaign comes with the ability for an admin to scope users and groups by including and excluding them from the registration campaign to ensure a smooth adoption across the organization. [Learn more](../authentication/how-to-mfa-registration-campaign.md)
+
++
+### Public preview - Separation of duties check
+
+**Type:** New feature
+**Service category:** User Access Management
+**Product capability:** Entitlement Management
+
+In Azure AD entitlement management, an administrator can define that an access package is incompatible with another access package or with a group. Users who have incompatible memberships will then be unable to request more access. [Learn more](../governance/entitlement-management-access-package-request-policy.md#prevent-requests-from-users-with-incompatible-access-preview).
+
++
+### Public preview - Identity Protection logs in Log Analytics, Storage Accounts, and Event Hubs
+
+**Type:** New feature
+**Service category:** Identity Protection
+**Product capability:** Identity Security & Protection
+
+You can now send the risky users and risk detections logs to Azure Monitor, Storage Accounts, or Log Analytics using the Diagnostic Settings in the Azure AD blade. [Learn more](../identity-protection/howto-export-risk-data.md).
+
++
+### Public preview - Application Proxy API addition for backend SSL certificate validation
+
+**Type:** New feature
+**Service category:** App Proxy
+**Product capability:** Access Control
+
+The onPremisesPublishing resource type now includes the isBackendCertificateValidationEnabled property, which indicates whether backend SSL certificate validation is enabled for the application. For all new Application Proxy apps, the property will be set to true by default. For all existing apps, the property will be set to false. For more information, read the [onPremisesPublishing resource type](/graph/api/resources/onpremisespublishing?view=graph-rest-beta&preserve-view=true) API.
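For illustration, a hedged sketch of setting the property with Microsoft Graph PowerShell follows; the `applications/{id}/onPremisesPublishing` path and the permission scope are assumptions, and the object ID is a placeholder:

```powershell
# Sketch only: enable backend SSL certificate validation for an App Proxy app (beta API assumed).
Connect-MgGraph -Scopes "Application.ReadWrite.All"

$body = @{ isBackendCertificateValidationEnabled = $true } | ConvertTo-Json

Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/beta/applications/<application-object-id>/onPremisesPublishing" `
    -Body $body
```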
+
++
+### General availability - Improved Authenticator setup experience for adding an Azure AD account in the Microsoft Authenticator app by directly signing into the app
+
+**Type:** New feature
+**Service category:** Microsoft Authenticator App
+**Product capability:** User Authentication
+
+Users can now use their existing authentication methods to directly sign into the Microsoft Authenticator app to set up their credential. Users don't need to scan a QR Code anymore and can use a Temporary Access Pass (TAP) or Password + SMS (or other authentication method) to configure their account in the Authenticator app.
+
+This improves the user credential provisioning process for the Microsoft Authenticator app and gives the end user a self-service method to provision the app. [Learn more](https://support.microsoft.com/account-billing/add-your-work-or-school-account-to-the-microsoft-authenticator-app-43a73ab5-b4e8-446d-9e54-2a4cb8e4e93c#sign-in-with-your-credentials).
+
++
+### General availability - Set manager as reviewer in Azure AD entitlement management access packages
+
+**Type:** New feature
+**Service category:** User Access Management
+**Product capability:** Entitlement Management
+
+Access packages in Azure AD entitlement management now support setting the user's manager as the reviewer for regularly occurring access reviews. [Learn more](../governance/entitlement-management-access-reviews-create.md).
+++
+### General availability - Enable external users to self-service sign-up in Azure AD using MSA accounts
+
+**Type:** New feature
+**Service category:** B2B
+**Product capability:** B2B/B2C
+
+You can now enable external users to sign up through self-service in Azure Active Directory using Microsoft accounts. [Learn more](../external-identities/microsoft-account.md).
+
+
+
+### General availability - External Identities Self-Service Sign-Up with Email One-time Passcode
+
+**Type:** New feature
+**Service category:** B2B
+**Product capability:** B2B/B2C
+
+
+You can now enable external users to sign up through self-service in Azure Active Directory using their email and a one-time passcode. [Learn more](../external-identities/one-time-passcode.md).
+
++
+### General availability - Anomalous token
+
+**Type:** New feature
+**Service category:** Identity Protection
+**Product capability:** Identity Security & Protection
+
+Anomalous token detection is now available in Identity Protection. This feature can detect abnormal characteristics in the token, such as its active time and authentication from an unfamiliar IP address. [Learn more](../identity-protection/concept-identity-protection-risks.md#sign-in-risk).
+
++
+### General availability - Register or join devices in Conditional Access
+
+**Type:** New feature
+**Service category:** Conditional Access
+**Product capability:** Identity Security & Protection
+
+The Register or join devices user action in Conditional access is now in general availability. This user action allows you to control multifactor authentication (MFA) policies for Azure AD device registration.
+
+Currently, this user action only allows you to enable multifactor authentication as a control when users register or join devices to Azure AD. Other controls that are dependent on or not applicable to Azure AD device registration continue to be disabled with this user action. [Learn more](../conditional-access/concept-conditional-access-cloud-apps.md#user-actions).
+++
+### New provisioning connectors in the Azure AD Application Gallery - July 2021
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** 3rd Party Integration
+
+You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
+
+- [Clebex](../saas-apps/clebex-provisioning-tutorial.md)
+- [Exium](../saas-apps/exium-provisioning-tutorial.md)
+- [SoSafe](../saas-apps/sosafe-provisioning-tutorial.md)
+- [Talentech](../saas-apps/talentech-provisioning-tutorial.md)
+- [Thrive LXP](../saas-apps/thrive-lxp-provisioning-tutorial.md)
+- [Vonage](../saas-apps/vonage-provisioning-tutorial.md)
+- [Zip](../saas-apps/zip-provisioning-tutorial.md)
+- [TimeClock 365](../saas-apps/timeclock-365-provisioning-tutorial.md)
+
+For more information about how to better secure your organization by using automated user account provisioning, read [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
+++
+### Changes to security and Microsoft 365 group settings in Azure portal
+
+**Type:** Changed feature
+**Service category:** Group Management
+**Product capability:** Directory
+
+
+In the past, users could create security groups and Microsoft 365 groups in the Azure portal. Now users will have the ability to create groups across the Azure portal, PowerShell, and the API. Customers are required to verify and update that the new settings have been configured for their organization. [Learn More](../enterprise-users/groups-self-service-management.md#group-settings).
+
++
+### "All Apps" collection has been renamed to "Apps"
+
+**Type:** Changed feature
+**Service category:** My Apps
+**Product capability:** End User Experiences
+
+In the My Apps portal, the collection that was called "All Apps" has been renamed to be called "Apps". As the product evolves, "Apps" is a more fitting name for this default collection. [Learn more](../manage-apps/my-apps-deployment-plan.md#plan-the-user-experience).
+
+
+
## June 2021

### Context panes to display risk details in Identity Protection Reports
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new.md
Previously updated : 1/20/2022 Last updated : 1/31/2022
This page is updated monthly, so revisit it regularly. If you're looking for ite
+## January 2022
+
+### Public preview - Custom security attributes
+
+**Type:** New feature
+**Service category:** Directory Management
+**Product capability:** Directory
+
+Custom security attributes enable you to define business-specific attributes that you can assign to Azure AD objects. These attributes can be used to store information, categorize objects, or enforce fine-grained access control. Custom security attributes can be used with Azure attribute-based access control. [Learn more](custom-security-attributes-overview.md).
+
++
+### Public preview - Filter groups in tokens using a substring match
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** SSO
+
+In the past, Azure AD only permitted groups to be filtered based on whether they were assigned to an application. Now, you can also use Azure AD to filter the groups included in the token. You can filter with a substring match on the display name or onPremisesSAMAccountName attributes of the group object. Only groups that the user is a member of will be included in the token, and the filter applies whether the group is emitted by its ObjectID or by its on-premises SAMAccountName or security identifier (SID). This feature can be used together with the setting to include only groups assigned to the application, if desired, to further filter the list. [Learn more](../hybrid/how-to-connect-fed-group-claims.md)
+++
+### General availability - Continuous Access Evaluation
+
+**Type:** New feature
+**Service category:** Other
+**Product capability:** Access Control
+
+With Continuous access evaluation (CAE), critical security events and policies are evaluated in real time. This includes account disable, password reset, and location change. [Learn more](../conditional-access/concept-continuous-access-evaluation.md).
+
++
+### General Availability - User management enhancements are now available
+
+**Type:** New feature
+**Service category:** User Management
+**Product capability:** User Management
+
+The Azure AD portal has been updated to make it easier to find users in the All users and Deleted users pages. Changes in the preview include:
+
+- More visible user properties including object ID, directory sync status, creation type, and identity issuer.
+- Search now allows substring search and combined search of names, emails, and object IDs.
+- Enhanced filtering by user type (member, guest, and none), directory sync status, creation type, company name, and domain name.
+- New sorting capabilities on properties like name, user principal name, creation time, and deletion date.
+- A new total users count that updates with any searches or filters.
+
+For more information, go to [User management enhancements (preview) in Azure Active Directory](../enterprise-users/users-search-enhanced.md).
+++
+### General Availability - My Apps customization of default Apps view
+
+**Type:** New feature
+**Service category:** My Apps
+**Product capability:** End User Experiences
+
+Customization of the default My Apps view is now in general availability. For more information on My Apps, go to [Sign in and start apps from the My Apps portal](https://support.microsoft.com/en-us/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+
++
+### General Availability - Audited BitLocker Recovery
+
+**Type:** New feature
+**Service category:** Device Access Management
+**Product capability:** Device Lifecycle Management
+
+BitLocker keys are sensitive security items. Audited BitLocker recovery ensures that when BitLocker keys are read, an audit log is generated so that you can trace who accesses this information for given devices. [Learn more](../devices/device-management-azure-portal.md#view-or-copy-bitlocker-keys).
+++
+### General Availability - Download a list of devices
+
+**Type:** New feature
+**Service category:** Device Registration and Management
+**Product capability:** Device Lifecycle Management
+
+Download a list of your organization's devices to a .csv file for easier reporting and management. [Learn more](../devices/device-management-azure-portal.md#download-devices).
+
++
+### New provisioning connectors in the Azure AD Application Gallery - January 2022
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** 3rd Party Integration
+
+You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
+
+- [Autodesk SSO](../saas-apps/autodesk-sso-provisioning-tutorial.md)
+- [Evercate](../saas-apps/evercate-provisioning-tutorial.md)
+- [frankli.io](../saas-apps/frankli-io-provisioning-tutorial.md)
+- [Plandisc](../saas-apps/plandisc-provisioning-tutorial.md)
+- [Swit](../saas-apps/swit-provisioning-tutorial.md)
+- [TerraTrue](../saas-apps/terratrue-provisioning-tutorial.md)
+- [TimeClock 365 SAML](../saas-apps/timeclock-365-saml-provisioning-tutorial.md)
+
+For more information about how to better secure your organization by using automated user account provisioning, go to [Automate user provisioning to SaaS applications with Azure AD](../manage-apps/user-provisioning.md).
+++
+### New Federated Apps available in Azure AD Application gallery - January 2022
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** 3rd Party Integration
+
+In January 2022, we added the following 47 new applications to our App gallery with Federation support:
+
+[Jooto](../saas-apps/jooto-tutorial.md), [Proprli](https://app.proprli.com/), [Pace Scheduler](https://www.pacescheduler.com/accounts/login/), [DRTrack](../saas-apps/drtrack-tutorial.md), [Dining Sidekick](../saas-apps/dining-sidekick-tutorial.md), [Cryotos](https://app.cryotos.com/oauth2/authorization/azure-client), [Emergency Management Systems](https://secure.emsystems.com.au/), [Manifestly Checklists](../saas-apps/manifestly-checklists-tutorial.md), [eLearnPOSH](../saas-apps/elearnposh-tutorial.md), [Scuba Analytics](../saas-apps/scuba-analytics-tutorial.md), [Athena Systems Login Platform](../saas-apps/athena-systems-login-platform-tutorial.md), [TimeTrack](../saas-apps/timetrack-tutorial.md), [MiHCM](../saas-apps/mihcm-tutorial.md), [Health Note](https://auth.healthnote.works/oauth), [Active Directory SSO for DoubleYou](../saas-apps/active-directory-sso-for-doubleyou-tutorial.md), [Emplifi platform](../saas-apps/emplifi-platform-tutorial.md), [Flexera One](../saas-apps/flexera-one-tutorial.md), [Hypothesis](https://web.hypothes.is/help/authorizing-hypothesis-from-the-azure-ad-app-gallery/), [Recurly](../saas-apps/recurly-tutorial.md), [XpressDox AU Cloud](https://au.xpressdox.com/Authentication/Login.aspx), [Active and Thriving - Perth Airport](../saas-apps/active-and-thriving-perth-airport-tutorial.md), [Zoom for Intune](https://zoom.us/), [UPWARD AGENT](https://app.upward.jp/login/), [Linux Foundation ID](https://openprofile.dev/), [Asset Planner](../saas-apps/asset-planner-tutorial.md), [Kiho](https://v3.kiho.fi/index/sso), [chezie](https://app.chezie.co/), [Excelity HCM](../saas-apps/excelity-hcm-tutorial.md), [yuccaHR](https://app.yuccahr.com/), [Blue Ocean Brain](../saas-apps/blue-ocean-brain-tutorial.md), [EchoSpan](../saas-apps/echospan-tutorial.md), [Archie](../saas-apps/archie-tutorial.md), [Equifax Workforce Solutions](../saas-apps/equifax-workforce-solutions-tutorial.md), [Palantir Foundry](../saas-apps/palantir-foundry-tutorial.md), [ATP SpotLight and ChronicX](../saas-apps/atp-spotlight-and-chronicx-tutorial.md), [DigiSign](https://app.digisign.org/selfcare/sso), [mConnect](https://mconnect.skooler.com/), [BrightHR](https://login.brighthr.com/), [Mural Identity](../saas-apps/mural-identity-tutorial.md), [NordPass SSO](https://app.nordpass.com/login%20use%20%22Log%20in%20to%20business%22%20option), [CloudClarity](https://portal.cloudclarity.app/dashboard), [Twic](../saas-apps/twic-tutorial.md), [Eduhouse Online](https://app.eduhouse.fi/palvelu/kirjaudu/microsoft), [Bealink](../saas-apps/bealink-tutorial.md), [Time Intelligence Bot](https://teams.microsoft.com/), [SentinelOne](https://sentinelone.com/)
+
+You can also find the documentation for all the applications at https://aka.ms/AppsTutorial
+
+To list your application in the Azure AD app gallery, read the details at https://aka.ms/AzureADAppRequest
+++
+### Azure AD access reviews reviewer recommendations now account for non-interactive sign-in information
+
+**Type:** Changed feature
+**Service category:** Access Reviews
+**Product capability:** Identity Governance
+
+Azure AD access reviews reviewer recommendations now account for non-interactive sign-in information, improving upon original recommendations based on interactive last sign-ins only. Reviewers can now make more accurate decisions based on the last sign-in activity of the users they're reviewing. To learn more about how to create access reviews, go to [Create an access review of groups and applications in Azure AD](../governance/create-access-review.md).
+
++
+### Risk reason for offline Azure AD Threat Intelligence risk detection
+
+**Type:** Changed feature
+**Service category:** Identity Protection
+**Product capability:** Identity Security & Protection
+
+The offline Azure AD Threat Intelligence risk detection can now have a risk reason that will help customers with the risk investigation. If a risk reason is available, it will show up as **Additional Info** in the risk details of that risk event. The information can be found in the Risk detections report. It will also be available through the additionalInfo property of the riskDetections API. [Learn more](../identity-protection/howto-identity-protection-investigate-risk.md).
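As a small, hedged sketch of pulling that property with the Microsoft Graph PowerShell SDK (the cmdlet and permission names are assumptions based on the SDK, not taken from this release note):

```powershell
# Sketch only: list recent risk detections and surface the additionalInfo reason, where present.
Connect-MgGraph -Scopes "IdentityRiskEvent.Read.All"

Get-MgRiskDetection -Top 25 |
    Select-Object DetectedDateTime, RiskEventType, RiskState, AdditionalInfo
```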
+
+
+
## December 2021

### Tenant enablement of combined security information registration for Azure Active Directory
This page is updated monthly, so revisit it regularly. If you're looking for ite
**Service category:** MFA **Product capability:** Identity Security & Protection
-We previously announced in April 2020, a new combined registration experience enabling users to register authentication methods for SSPR and multifactor authentication at the same time was generally available for existing customer to opt-in. Any Azure AD tenants created after August 2020 automatically have the default experience set to combined registration. Starting in 2022 Microsoft will be enabling the multifactor authentication and SSPR combined registration experience for existing customers. [Learn more](../authentication/concept-registration-mfa-sspr-combined.md).
+In April 2020, we announced that a new combined registration experience, which enables users to register authentication methods for SSPR and multi-factor authentication at the same time, was generally available for existing customers to opt in to. Any Azure AD tenants created after August 2020 automatically have the default experience set to combined registration. Starting in 2022, Microsoft will be enabling the multi-factor authentication and SSPR combined registration experience for existing customers. [Learn more](../authentication/concept-registration-mfa-sspr-combined.md).
We previously announced in April 2020, a new combined registration experience en
**Service category:** Microsoft Authenticator App **Product capability:** User Authentication
-To prevent accidental notification approvals, admins can now require users to enter the number displayed on the sign-in screen when approving an multifactor authentication notification in the Authenticator app. This feature adds an additional security measure to the Microsoft Authenticator app. [Learn more](../authentication/how-to-mfa-number-match.md).
+To prevent accidental notification approvals, admins can now require users to enter the number displayed on the sign-in screen when approving a multi-factor authentication notification in the Authenticator app. This feature adds an extra security measure to the Microsoft Authenticator app. [Learn more](../authentication/how-to-mfa-number-match.md).
To prevent accidental notification approvals, admins can now require users to e
**Service category:** Reporting **Product capability:** Monitoring & Reporting
-We are no longer publishing sign-in logs with the following error codes because these events are pre-authentication events that occur before our service has authenticated a user. Because these events happen before authentication, our service is not always able to correctly identify the user. If a user continues on to authenticate, the user sign-in will show up in your tenant Sign-in logs. These logs are no longer visible in the Azure portal UX, and querying these error codes in the Graph API will no longer return results.
+We're no longer publishing sign-in logs with the following error codes because these events are pre-authentication events that occur before our service has authenticated a user. Because these events happen before authentication, our service isn't always able to correctly identify the user. If a user continues on to authenticate, the user sign-in will show up in your tenant Sign-in logs. These logs are no longer visible in the Azure portal UX, and querying these error codes in the Graph API will no longer return results.
|Error code | Failure reason|
| --- | --- |
-|50058| Session information is not sufficient for single-sign-on.|
-|16000| Either multiple user identities are available for the current request or selected account is not supported for the scenario.|
+|50058| Session information isn't sufficient for single-sign-on.|
+|16000| Either multiple user identities are available for the current request or selected account isn't supported for the scenario.|
|500581| Rendering JavaScript. Fetching sessions for single-sign-on on V2 with prompt=none requires JavaScript to verify if any MSA accounts are signed in.|
|81012| The user trying to sign in to Azure AD is different from the user signed into the device.|
We are no longer publishing sign-in logs with the following error codes because
**Service category:** MFA **Product capability:** Identity Security & Protection
-We previously announced in April 2020, a new combined registration experience enabling users to register authentication methods for SSPR and multifactor authentication at the same time was generally available for existing customer to opt-in. Any Azure AD tenants created after August 2020 automatically have the default experience set to combined registration. Starting 2022, Microsoft will be enabling the MF).
+In April 2020, we announced that a new combined registration experience, which enables users to register authentication methods for SSPR and multi-factor authentication at the same time, was generally available for existing customers to opt in to. Any Azure AD tenants created after August 2020 automatically have the default experience set to combined registration. Starting in 2022, Microsoft will be enabling the multi-factor authentication and SSPR combined registration experience for existing customers.
The Public Preview feature for Azure AD Connect Cloud Sync Password writeback pr
**Service category:** Conditional Access for workload identities **Product capability:** Identity Security & Protection
-Previously, Conditional Access policies applied only to users when they access apps and services like SharePoint online or the Azure portal. This preview adds support for Conditional Access policies applied to service principals owned by the organization. You can block service principals from accessing resources from outside trusted named locations or Azure Virtual Networks. [Learn more](../conditional-access/workload-identity.md).
+Previously, Conditional Access policies applied only to users when they access apps and services like SharePoint Online or the Azure portal. This preview adds support for Conditional Access policies applied to service principals owned by the organization. You can block service principals from accessing resources from outside trusted named locations or Azure Virtual Networks. [Learn more](../conditional-access/workload-identity.md).
-### Public preview - Additional attributes available as claims
+### Public preview - Extra attributes available as claims
**Type:** Changed feature **Service category:** Enterprise Apps
Several user attributes have been added to the list of attributes available to m
**Service category:** Authentications (Logins) **Product capability:** Identity Security & Protection
-We have recently added other property to the sign-in logs called "Session Lifetime Policies Applied". This property will list all the session lifetime policies that applied to the sign-in for example, Sign-in frequency, Remember multifactor authentication and Configurable token lifetime. [Learn more](../reports-monitoring/concept-sign-ins.md#authentication-details).
+We have recently added another property to the sign-in logs, called "Session Lifetime Policies Applied". This property lists all the session lifetime policies that applied to the sign-in, for example: sign-in frequency, remember multi-factor authentication, and configurable token lifetime. [Learn more](../reports-monitoring/concept-sign-ins.md#authentication-details).
Updated "switch organizations" user interface in My Account. This visually impro
Sometimes, application developers configure their apps to require more permissions than it's possible to grant. To prevent this from happening, a limit on the total number of required permissions that can be configured for an app registration will be enforced.
-The total number of required permissions for any single application registration mustn't exceed 400 permissions, across all APIs. The change to enforce this limit will begin rolling out mid-October 2021. Applications exceeding the limit can't increase the number of permissions they are configured for. The existing limit on the number of distinct APIs for which permissions are required remains unchanged and may not exceed 50 APIs.
+The total number of required permissions for any single application registration mustn't exceed 400 permissions, across all APIs. The change to enforce this limit will begin rolling out mid-October 2021. Applications exceeding the limit can't increase the number of permissions they're configured for. The existing limit on the number of distinct APIs for which permissions are required remains unchanged and may not exceed 50 APIs.
In the Azure portal, the required permissions are listed under API permissions for the application you wish to configure. Using Microsoft Graph or Microsoft Graph PowerShell, the required permissions are listed in the requiredResourceAccess property of an [application](/graph/api/resources/application) entity. [Learn more](../enterprise-users/directory-service-limits-restrictions.md).
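A hedged sketch of checking where one app registration stands against the 400-permission limit, using Microsoft Graph PowerShell (the object ID is a placeholder):

```powershell
# Sketch only: sum the required permissions configured across all APIs for one app registration.
Connect-MgGraph -Scopes "Application.Read.All"

$app = Get-MgApplication -ApplicationId "<application-object-id>"

($app.RequiredResourceAccess |
    ForEach-Object { $_.ResourceAccess.Count } |
    Measure-Object -Sum).Sum
```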
Previously, we announced that starting October 31, 2021, Microsoft Azure Active
**Service category:** Conditional Access **Product capability:** End User Experiences
-If there's no trust relation between a home and resource tenant, a guest user would have previously been asked to re-register their device, which would break the previous registration. However, the user would end up in a registration loop because only home tenant device registration is supported. In this specific scenario, instead of this loop, we have created a new conditional access blocking page. The page tells the end user that they can't get access to conditional access protected resources as a guest user. [Learn more](../external-identities/b2b-quickstart-add-guest-users-portal.md#prerequisites).
+If there's no trust relation between a home and resource tenant, a guest user would have previously been asked to re-register their device, which would break the previous registration. However, the user would end up in a registration loop because only home tenant device registration is supported. In this specific scenario, instead of this loop, we've created a new conditional access blocking page. The page tells the end user that they can't get access to conditional access protected resources as a guest user. [Learn more](../external-identities/b2b-quickstart-add-guest-users-portal.md#prerequisites).
We've released beta MS Graph API for Azure AD access reviews. The API has method
**Product capability:** Identity Security & Protection
-The "Register or join devices" user action is generally available in Conditional access. This user action allows you to control multifactor authentication policies for Azure Active Directory (AD) device registration. Currently, this user action only allows you to enable multifactor authentication as a control when users register or join devices to Azure AD. Other controls that are dependent on or not applicable to Azure AD device registration continue to be disabled with this user action. [Learn more](../conditional-access/concept-conditional-access-cloud-apps.md#user-actions).
+The "Register or join devices" user action is generally available in Conditional access. This user action allows you to control multi-factor authentication policies for Azure Active Directory (AD) device registration. Currently, this user action only allows you to enable multi-factor authentication as a control when users register or join devices to Azure AD. Other controls that are dependent on or not applicable to Azure AD device registration continue to be disabled with this user action. [Learn more](../conditional-access/concept-conditional-access-cloud-apps.md#user-actions).
For more information about how to better secure your organization by using autom
**Product capability:** Identity Security & Protection
-To help administrators understand that their users are blocked for multifactor authentication as a result of fraud report, we have added a new audit event. This audit event is tracked when the user reports fraud. The audit log is available in addition to the existing information in the sign-in logs about fraud report. To learn how to get the audit report, see [multifactor authentication Fraud alert](../authentication/howto-mfa-mfasettings.md#fraud-alert).
+To help administrators understand that their users are blocked for multi-factor authentication as a result of a fraud report, we've added a new audit event. This audit event is tracked when the user reports fraud. The audit log is available in addition to the existing information in the sign-in logs about the fraud report. To learn how to get the audit report, see [multi-factor authentication Fraud alert](../authentication/howto-mfa-mfasettings.md#fraud-alert).
Deploying MIM for Privileged Access Management with a Windows Server 2012 R2 dom
-## July 2021
-
-### New Google sign-in integration for Azure AD B2C and B2B self-service sign-up and invited external users will stop working starting July 12, 2021
-
-**Type:** Plan for change
-**Service category:** B2B
-**Product capability:** B2B/B2C
-
-
-Previously we announced that [the exception for Embedded WebViews for Gmail authentication will expire in the second half of 2021](https://www.yammer.com/cepartners/threads/1188371962232832).
-
-On July 7, 2021, we learned from Google that some of these restrictions will apply starting **July 12, 2021**. Azure AD B2B and B2C customers who set up a new Google ID sign-in in their custom or line of business applications to invite external users or enable self-service sign-up will have the restrictions applied immediately. As a result, end-users will be met with an error screen that blocks their Gmail sign-in if the authentication is not moved to a system webview. See the docs linked below for details.
-
-Most apps use system web-view by default, and will not be impacted by this change. This only applies to customers using embedded webviews (the non-default setting.) We advise customers to move their application's authentication to system browsers instead, prior to creating any new Google integrations. To learn how to move to system browsers for Gmail authentications, read the Embedded vs System Web UI section in the [Using web browsers (MSAL.NET)](../develop/msal-net-web-browsers.md#embedded-vs-system-web-ui) documentation. All MSAL SDKs use the system web-view by default. [Learn more](../external-identities/google-federation.md#deprecation-of-web-view-sign-in-support).
---
-### Google sign-in on embedded web-views expiring September 30, 2021
-
-**Type:** Plan for change
-**Service category:** B2B
-**Product capability:** B2B/B2C
-
-
-About two months ago we announced that the exception for Embedded WebViews for Gmail authentication will expire in the second half of 2021.
-
-Recently, Google has specified the date to be **September 30, 2021**.
-
-Rolling out globally beginning September 30, 2021, Azure AD B2B guests signing in with their Gmail accounts will now be prompted to enter a code in a separate browser window to finish signing in on Microsoft Teams mobile and desktop clients. This applies to invited guests and guests who signed up using Self-Service Sign-Up.
-
-Azure AD B2C customers who have set up embedded webview Gmail authentications in their custom/line of business apps or have existing Google integrations, will no longer can let their users sign in with Gmail accounts. To mitigate this, make sure to modify your apps to use the system browser for sign-in. For more information, read the Embedded vs System Web UI section in the [Using web browsers (MSAL.NET)](../develop/msal-net-web-browsers.md#embedded-vs-system-web-ui) documentation. All MSAL SDKs use the system web-view by default.
-
-As the device login flow will start rolling out on September 30, 2021, it is likely that it may not be rolled out to your region yet (in which case, your end-users will be met with the error screen shown in the documentation until it gets deployed to your region.)
-
-For details on known impacted scenarios and what experience your users can expect, read [Add Google as an identity provider for B2B guest users](../external-identities/google-federation.md#deprecation-of-web-view-sign-in-support).
---
-### Bug fixes in My Apps
-
-**Type:** Fixed
-**Service category:** My Apps
-**Product capability:** End User Experiences
-
-- Previously, the presence of the banner recommending the use of collections caused content to scroll behind the header. This issue has been resolved.
-- Previously, there was another issue when adding apps to a collection, the order of apps in All Apps collection would get randomly reordered. This issue has also been resolved.
-
-For more information on My Apps, read [Sign in and start apps from the My Apps portal](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
---
-### Public preview - Application authentication method policies
-
-**Type:** New feature
-**Service category:** MS Graph
-**Product capability:** Developer Experience
-
-Application authentication method policies in MS Graph which allow IT admins to enforce lifetime on application password secret credential or block the use of secrets altogether. Policies can be enforced for an entire tenant as a default configuration and it can be scoped to specific applications or service principals. [Learn more](/graph/api/resources/policy-overview).
-
--
-### Public preview - Authentication Methods registration campaign to download Microsoft Authenticator
-
-**Type:** New feature
-**Service category:** Microsoft Authenticator App
-**Product capability:** User Authentication
-
-The Authenticator registration campaign helps admins to move their organizations to a more secure posture by prompting users to adopt the Microsoft Authenticator app. Prior to this feature, there was no way for an admin to push their users to set up the Authenticator app.
-
-The registration campaign comes with the ability for an admin to scope users and groups by including and excluding them from the registration campaign to ensure a smooth adoption across the organization. [Learn more](../authentication/how-to-mfa-registration-campaign.md)
-
--
-### Public preview - Separation of duties check
-
-**Type:** New feature
-**Service category:** User Access Management
-**Product capability:** Entitlement Management
-
-In Azure AD entitlement management, an administrator can define that an access package is incompatible with another access package or with a group. Users who have the incompatible memberships will be then unable to request more access. [Learn more](../governance/entitlement-management-access-package-request-policy.md#prevent-requests-from-users-with-incompatible-access-preview).
-
--
-### Public preview - Identity Protection logs in Log Analytics, Storage Accounts, and Event Hubs
-
-**Type:** New feature
-**Service category:** Identity Protection
-**Product capability:** Identity Security & Protection
-
-You can now send the risky users and risk detections logs to Azure Monitor, Storage Accounts, or Log Analytics using the Diagnostic Settings in the Azure AD blade. [Learn more](../identity-protection/howto-export-risk-data.md).
-
--
-### Public preview - Application Proxy API addition for backend SSL certificate validation
-
-**Type:** New feature
-**Service category:** App Proxy
-**Product capability:** Access Control
-
-The onPremisesPublishing resource type now includes the property, "isBackendCertificateValidationEnabled" which indicates whether backend SSL certificate validation is enabled for the application. For all new Application Proxy apps, the property will be set to true by default. For all existing apps, the property will be set to false. For more information, read the [onPremisesPublishing resource type](/graph/api/resources/onpremisespublishing?view=graph-rest-beta&preserve-view=true) api.
-
--
-### General availability - Improved Authenticator setup experience for add Azure AD account in Microsoft Authenticator app by directly signing into the app.
-
-**Type:** New feature
-**Service category:** Microsoft Authenticator App
-**Product capability:** User Authentication
-
-Users can now use their existing authentication methods to directly sign into the Microsoft Authenticator app to set up their credential. Users don't need to scan a QR Code anymore and can use a Temporary Access Pass (TAP) or Password + SMS (or other authentication method) to configure their account in the Authenticator app.
-
-This improves the user credential provisioning process for the Microsoft Authenticator app and gives the end user a self-service method to provision the app. [Learn more](https://support.microsoft.com/account-billing/add-your-work-or-school-account-to-the-microsoft-authenticator-app-43a73ab5-b4e8-446d-9e54-2a4cb8e4e93c#sign-in-with-your-credentials).
-
--
-### General availability - Set manager as reviewer in Azure AD entitlement management access packages
-
-**Type:** New feature
-**Service category:** User Access Management
-**Product capability:** Entitlement Management
-
-Access packages in Azure AD entitlement management now support setting the user's manager as the reviewer for regularly occurring access reviews. [Learn more](../governance/entitlement-management-access-reviews-create.md).
---
-### General availability - Enable external users to self-service sign-up in Azure AD using MSA accounts
-
-**Type:** New feature
-**Service category:** B2B
-**Product capability:** B2B/B2C
-
-Users can now enable external users to self-service sign-up in Azure Active Directory using Microsoft accounts. [Learn more](../external-identities/microsoft-account.md).
-
-
-
-### General availability - External Identities Self-Service Sign-Up with Email One-time Passcode
-
-**Type:** New feature
-**Service category:** B2B
-**Product capability:** B2B/B2C
-
-
-Now users can enable external users to self-service sign-up in Azure Active Directory using their email and one-time passcode. [Learn more](../external-identities/one-time-passcode.md).
-
--
-### General availability - Anomalous token
-
-**Type:** New feature
-**Service category:** Identity Protection
-**Product capability:** Identity Security & Protection
-
-Anomalous token detection is now available in Identity Protection. This feature can detect that there are abnormal characteristics in the token such as time active and authentication from unfamiliar IP address. [Learn more](../identity-protection/concept-identity-protection-risks.md#sign-in-risk).
-
--
-### General availability - Register or join devices in Conditional Access
-
-**Type:** New feature
-**Service category:** Conditional Access
-**Product capability:** Identity Security & Protection
-
-The Register or join devices user action in Conditional access is now in general availability. This user action allows you to control multifactor authentication (MFA) policies for Azure AD device registration.
-
-Currently, this user action only allows you to enable multifactor authentication as a control when users register or join devices to Azure AD. Other controls that are dependent on or not applicable to Azure AD device registration continue to be disabled with this user action. [Learn more](../conditional-access/concept-conditional-access-cloud-apps.md#user-actions).
---
-### New provisioning connectors in the Azure AD Application Gallery - July 2021
-
-**Type:** New feature
-**Service category:** App Provisioning
-**Product capability:** 3rd Party Integration
-
-You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
-
-- [Clebex](../saas-apps/clebex-provisioning-tutorial.md)
-- [Exium](../saas-apps/exium-provisioning-tutorial.md)
-- [SoSafe](../saas-apps/sosafe-provisioning-tutorial.md)
-- [Talentech](../saas-apps/talentech-provisioning-tutorial.md)
-- [Thrive LXP](../saas-apps/thrive-lxp-provisioning-tutorial.md)
-- [Vonage](../saas-apps/vonage-provisioning-tutorial.md)
-- [Zip](../saas-apps/zip-provisioning-tutorial.md)
-- [TimeClock 365](../saas-apps/timeclock-365-provisioning-tutorial.md)
-
-For more information about how to better secure your organization by using automated user account provisioning, read [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
---
-### Changes to security and Microsoft 365 group settings in Azure portal
-
-**Type:** Changed feature
-**Service category:** Group Management
-**Product capability:** Directory
-
-
-In the past, users could create security groups and Microsoft 365 groups in the Azure portal. Now users will have the ability to create groups across Azure portals, PowerShell, and API. Customers are required to verify and update the new settings have been configured for their organization. [Learn More](../enterprise-users/groups-self-service-management.md#group-settings).
-
--
-### "All Apps" collection has been renamed to "Apps"
-
-**Type:** Changed feature
-**Service category:** My Apps
-**Product capability:** End User Experiences
-
-In the My Apps portal, the collection that was called "All Apps" has been renamed to be called "Apps". As the product evolves, "Apps" is a more fitting name for this default collection. [Learn more](../manage-apps/my-apps-deployment-plan.md#plan-the-user-experience).
-
-
active-directory Identity Governance Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/identity-governance-automation.md
+
+ Title: Automate Azure AD Identity Governance tasks with Azure Automation
+description: Learn how to write PowerShell scripts in Azure Automation to interact with Azure Active Directory entitlement management and other features.
+
+Last updated: 1/20/2022
+# Automate Azure AD Identity Governance tasks via Azure Automation and Microsoft Graph
+
+[Azure Automation](/azure/automation/overview) is an Azure cloud service that allows you to automate common or repetitive systems management and processes. Microsoft Graph is the Microsoft unified API endpoint for Azure AD features that manage users, groups, access packages, access reviews, and other resources in the directory. You can manage Azure AD at scale from the PowerShell command line, using the [Microsoft Graph PowerShell SDK](/graph/powershell/get-started). You can also include the Microsoft Graph PowerShell cmdlets from a [PowerShell-based runbook in Azure Automation](/azure/automation/automation-intro), so that you can automate Azure AD tasks from a simple script.
+
+Azure Automation and the Microsoft Graph PowerShell SDK support certificate-based authentication and application permissions, so you can have Azure Automation runbooks authenticate to Azure AD without needing a user context.
+
+This article will show you how to get started using Azure Automation for Azure AD Identity Governance, by creating a simple runbook that queries entitlement management via Microsoft Graph PowerShell.
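To set expectations for where this leads, here is a minimal sketch of the kind of runbook body the article builds toward; the certificate asset name, the placeholder IDs, and the entitlement management cmdlet are assumptions for illustration, not the article's exact script:

```powershell
# Sketch of a runbook body: app-only sign-in with a certificate, then a simple entitlement management query.
Import-Module Microsoft.Graph.Authentication
Import-Module Microsoft.Graph.Identity.Governance

# In Azure Automation, Get-AutomationCertificate retrieves a certificate asset uploaded to the account.
$cert = Get-AutomationCertificate -Name "GraphCertificate"

Connect-MgGraph -TenantId "<tenant-id>" `
    -ClientId "<application-client-id>" `
    -Certificate $cert

# List the access packages defined in entitlement management.
Get-MgEntitlementManagementAccessPackage | Select-Object Id, DisplayName
```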
+
+## Create an Azure Automation account
+
+Azure Automation provides a cloud-hosted environment for [runbook execution](/azure/automation/automation-runbook-execution). Those runbooks can start automatically based on a schedule, or be triggered by webhooks or by Logic Apps.
+
+Using Azure Automation requires you to have an Azure subscription.
+
+**Prerequisite role**: Azure subscription or resource group owner
+
+1. Sign in to the Azure portal. Make sure you have access to the subscription or resource group where the Azure Automation account will be located.
+
+1. Select the subscription or resource group, and select **Create**. Type **Automation**, select the **Automation** Azure service from Microsoft, then select **Create**.
+
+1. After the Azure Automation account has been created, select **Access control (IAM)**. Then select **View** in **View access to this resource**. These users and service principals will subsequently be able to interact with Microsoft Graph through the scripts created in that Azure Automation account.
+1. Review the users and service principals who are listed there and ensure they are authorized. Remove any users who are unauthorized.
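+
+If you prefer to script the account creation rather than use the portal, the following is a minimal sketch using the Az PowerShell module; the resource group name, account name, and region shown here are placeholders, not required values.
+
+```powershell
+# Sketch: create an Automation account with Az PowerShell (assumes Connect-AzAccount has already been run)
+New-AzAutomationAccount -ResourceGroupName "my-resource-group" -Name "my-automation-account" -Location "eastus"
+```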
+
+## Create a self-signed key pair and certificate on your computer
+
+So that it can operate without needing your personal credentials, the Azure Automation account you created will need to authenticate itself to Azure AD with a certificate.
+
+If you already have a key pair for authenticating your service to Azure AD, and a certificate that you received from a certificate authority, skip to the next section.
+
+To generate a self-signed certificate, follow these steps (a scripted sketch of the same steps appears after this list):
+
+1. Follow the instructions in [how to create a self-signed certificate](../develop/howto-create-self-signed-certificate.md), option 2, to create and export a certificate with its private key.
+
+1. Display the thumbprint of the certificate.
+
+ ```powershell
+ $cert | ft Thumbprint
+ ```
+
+1. After you have exported the files, you can remove the certificate and key pair from your local user certificate store. In subsequent steps you will remove the `.pfx` and `.crt` files as well, once the certificate and private key have been uploaded to the Azure Automation and Azure AD services.
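+
+If you prefer to script the certificate creation and export rather than follow the linked instructions, the following is a minimal sketch using the built-in PKI cmdlets. The subject name, file names, and lifetime are placeholder assumptions; adjust them for your environment.
+
+```powershell
+# Sketch: create an exportable self-signed certificate in the current user's store
+$cert = New-SelfSignedCertificate -Subject "CN=AzureAutomationGovernance" -CertStoreLocation "Cert:\CurrentUser\My" -KeyExportPolicy Exportable -KeySpec Signature -KeyLength 2048 -NotAfter (Get-Date).AddMonths(12)
+
+# Export the private key (.pfx) for Azure Automation and the public certificate (.crt) for the app registration
+$pfxPassword = Read-Host -Prompt "PFX password" -AsSecureString
+Export-PfxCertificate -Cert $cert -FilePath .\AzureAutomationGovernance.pfx -Password $pfxPassword
+Export-Certificate -Cert $cert -FilePath .\AzureAutomationGovernance.crt
+
+# Display the thumbprint; you will store it later in the Automation account's Thumbprint variable
+$cert.Thumbprint
+```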
+
+## Upload the key pair to Azure Automation
+
+Your runbook in Azure Automation will retrieve the private key from the `.pfx` file, and use it for authenticating to Microsoft Graph.
+
+1. In the Azure portal for the Azure Automation account, select **Certificates** and **Add a certificate**.
+
+1. Upload the `.pfx` file created earlier, and type the password you provided when you created the file.
+
+1. After the private key is uploaded, record the certificate expiration date.
+
+1. You can now delete the `.pfx` file from your local computer. However, do not delete the `.crt` file yet, as you will need this file in a subsequent step.
+
+## Add modules for Microsoft Graph to your Azure Automation account
+
+By default, Azure Automation does not have any PowerShell modules preloaded for Microsoft Graph. You will need to add **Microsoft.Graph.Authentication**, and then any additional modules you require, from the gallery to your Automation account. Note that you will need to choose whether to use the beta or v1.0 APIs through those modules, as you cannot mix both in a single runbook. (A scripted alternative to the portal steps is sketched after this list.)
+
+1. In the Azure portal for the Azure Automation account, select **Modules** and then **Browse gallery**.
+
+1. In the Search bar, type **Microsoft.Graph.Authentication**. Select the module, select **Import**, and select **OK** to have Azure Automation begin importing the module. After you select OK, importing a module may take several minutes. Don't attempt to add more Microsoft Graph modules until the Microsoft.Graph.Authentication module import has completed, since those other modules have Microsoft.Graph.Authentication as a prerequisite.
+
+1. Return to the **Modules** list and select **Refresh**. Once the Status of the **Microsoft.Graph.Authentication** module has changed to **Available**, you can import the next module.
+
+1. If you are using the cmdlets for Azure AD identity governance features, such as entitlement management, then repeat the import process for the module **Microsoft.Graph.Identity.Governance**.
+
+1. Import other modules that your script may require. For example, if you are using Identity Protection, then you may wish to import the **Microsoft.Graph.Identity.SignIns** module.
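+
+If you manage several Automation accounts, you can also import modules from the PowerShell Gallery programmatically instead of through the portal. The following is a minimal sketch using the Az.Automation cmdlets; the resource group and account names are placeholders.
+
+```powershell
+# Sketch: import the Microsoft.Graph.Authentication module from the PowerShell Gallery into an Automation account
+New-AzAutomationModule -ResourceGroupName "my-resource-group" -AutomationAccountName "my-automation-account" -Name "Microsoft.Graph.Authentication" -ContentLinkUri "https://www.powershellgallery.com/api/v2/package/Microsoft.Graph.Authentication"
+```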
+
+## Create an app registration and assign permissions
+
+Next, you will create an app registration in Azure AD, so that Azure AD will recognize your Azure Automation runbook's certificate for authentication.
+
+**Prerequisite role**: Global Administrator or another administrator who can grant consent for applications to have application permissions
+
+1. In the Azure portal, browse to **Azure Active Directory** > **App registrations**.
+
+1. Select **New registration**.
+
+1. Type a name for the application and select **Register**.
+
+1. Once the application registration is created, take note of the **Application (client) ID** and **Directory (tenant) ID** as you will need these items later.
+
+1. Select **Certificates and Secrets** and **Upload certificate**.
+
+1. Upload the `.crt` file created earlier.
+
+1. Select **API permissions** and **Add a permission**.
+
+1. Select **Microsoft Graph** and **Application permissions**.
+
+1. Select each of the permissions that your Azure Automation account will require, then select **Add permissions**.
+
+ * If your runbook is only performing queries for entitlement management, then it can use the **EntitlementManagement.Read.All** permission.
+ * If your runbook is making changes to entitlement management, for example to create assignments, then use the **EntitlementManagement.ReadWrite.All** permission.
+ * For other APIs, ensure that the necessary permission is added. For example, for identity protection, the **IdentityRiskyUser.Read.All** permission should be added.
+
+10. Select **Grant admin consent** to give your app those permissions.
+
+## Create Azure Automation variables
+
+In this step, you will create three variables in the Azure Automation account that the runbook will use to determine how to authenticate to Azure AD. (A scripted equivalent of these steps is sketched after the list.)
+
+1. In the Azure portal, return to the Azure Automation account.
+
+1. Select **Variables**, and **Add variable**.
+
+1. Create a variable named **Thumbprint**. Type, as the value of the variable, the certificate thumbprint that was generated earlier.
+
+1. Create a variable named **ClientId**. Type, as the value of the variable, the client ID for the application registered in Azure AD.
+
+1. Create a variable named **TenantId**. Type, as the value of the variable, the tenant ID of the directory where the application was registered.
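+
+As an alternative to the portal steps above, the same variables can be created with the Az.Automation cmdlets. This is a sketch; the resource group, account name, and values are placeholders you replace with the IDs and thumbprint recorded earlier.
+
+```powershell
+# Sketch: create the three Automation variables the runbook reads at run time
+$rg = "my-resource-group"
+$account = "my-automation-account"
+New-AzAutomationVariable -ResourceGroupName $rg -AutomationAccountName $account -Name "Thumbprint" -Value "<certificate thumbprint>" -Encrypted $false
+New-AzAutomationVariable -ResourceGroupName $rg -AutomationAccountName $account -Name "ClientId" -Value "<application (client) ID>" -Encrypted $false
+New-AzAutomationVariable -ResourceGroupName $rg -AutomationAccountName $account -Name "TenantId" -Value "<directory (tenant) ID>" -Encrypted $false
+```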
+
+## Create an Azure Automation PowerShell runbook that can use Graph
+
+In this step, you will create an initial runbook. You can run this runbook to verify that authentication with the certificate created earlier is successful.
+
+1. Select **Runbooks** and **Create a runbook**.
+
+1. Type the name of the runbook, select **PowerShell** as the type of runbook to create, and select **Create**.
+
+1. Once the runbook is created, a text editing pane will appear for you to type in the PowerShell source code of the runbook.
+
+1. Type the following PowerShell into the text editor.
+
+```powershell
+# Read the authentication settings from the Automation variables and connect to Microsoft Graph
+Import-Module Microsoft.Graph.Authentication
+$ClientId = Get-AutomationVariable -Name 'ClientId'
+$TenantId = Get-AutomationVariable -Name 'TenantId'
+$Thumbprint = Get-AutomationVariable -Name 'Thumbprint'
+Connect-MgGraph -ClientId $ClientId -TenantId $TenantId -CertificateThumbprint $Thumbprint
+```
+
+5. Select **Test pane**, and select **Start**. Wait a few seconds for the Azure Automation processing of your runbook script to complete.
+
+1. If the run of your runbook is successful, then the message **Welcome to Microsoft Graph!** will appear.
+
+Now that you have verified that your runbook can authenticate to Microsoft Graph, extend your runbook by adding cmdlets for interacting with Azure AD features.
+
+## Extend the runbook to use Entitlement Management
+
+If the app registration for your runbook has the **EntitlementManagement.Read.All** or **EntitlementManagement.ReadWrite.All** permissions, then it can use the entitlement management APIs.
+
+1. For example, to get a list of Azure AD entitlement management access packages, you can update the runbook created earlier and replace its contents with the following PowerShell.
+
+```powershell
+# Connect to Microsoft Graph using the certificate referenced by the Automation variables
+Import-Module Microsoft.Graph.Authentication
+$ClientId = Get-AutomationVariable -Name 'ClientId'
+$TenantId = Get-AutomationVariable -Name 'TenantId'
+$Thumbprint = Get-AutomationVariable -Name 'Thumbprint'
+$auth = Connect-MgGraph -ClientId $ClientId -TenantId $TenantId -CertificateThumbprint $Thumbprint
+
+# The entitlement management cmdlets used below require the beta profile
+Select-MgProfile -Name beta
+Import-Module Microsoft.Graph.Identity.Governance
+
+# Retrieve all access packages and emit their IDs and display names as JSON
+$ap = Get-MgEntitlementManagementAccessPackage -All -ErrorAction Stop
+$ap | Select-Object -Property Id,DisplayName | ConvertTo-Json
+```
+
+2. Select **Test pane**, and select **Start**. Wait a few seconds for the Azure Automation processing of your runbook script to complete.
+
+3. If the run was successful, the output will be a JSON array instead of the welcome message. The JSON array includes the ID and display name of each access package returned from the query.
+
+## Parse the output of an Azure Automation account in Logic Apps (optional)
+
+Once your runbook is published, you can create a schedule in Azure Automation and link your runbook to that schedule to run automatically. Scheduling runbooks from Azure Automation is suitable for runbooks that do not need to interact with other Azure or Office 365 services.
+
+If you wish to send the output of your runbook to another service, then you may wish to consider using [Azure Logic Apps](/azure/logic-apps/logic-apps-overview) to start your Azure Automation runbook, as Logic Apps can also parse the results.
+
+1. In Azure Logic Apps, create a Logic App in the Logic Apps Designer starting with **Recurrence**.
+
+1. Add the operation **Create job** from **Azure Automation**. Authenticate to Azure AD, and select the Subscription, Resource Group, Automation Account created earlier. Select **Wait for Job**.
+
+1. Add the parameter **Runbook name** and type the name of the runbook to be started.
+
+1. Select **New step** and add the operation **Get job output**. Select the same Subscription, Resource Group, Automation Account as the previous step, and select the Dynamic value of the **Job ID** from the previous step.
+
+1. You can then add more operations to the Logic App, such as the [**Parse JSON** action](/azure/logic-apps/logic-apps-perform-data-operations#parse-json-action), that use the **Content** returned when the runbook completes.
+
+Note that in Azure Automation, a PowerShell runbook can fail to complete if it tries to write a large amount of data to the output stream at once. You can typically work around this issue by having the runbook output just the information needed by the Logic App, such as by using the `Select-Object` cmdlet with the `-Property` parameter to exclude unneeded properties, as shown in the sketch below.
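+
+The following is a minimal sketch of that approach; it reuses the access package query from the earlier example, so the same assumptions apply.
+
+```powershell
+# Keep the output small by selecting only the properties the Logic App consumes
+$results = Get-MgEntitlementManagementAccessPackage -All -ErrorAction Stop | Select-Object -Property Id, DisplayName
+
+# Emit a compact JSON payload for the Logic App's Parse JSON action
+$results | ConvertTo-Json -Compress
+```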
+
+## Plan to keep the certificate up to date
+
+If you created a self-signed certificate following the steps above for authentication, keep in mind that the certificate has a limited lifetime and will expire. You will need to regenerate the certificate and upload the new certificate before its expiration date.
+
+There are two places where you can see the expiration date in the Azure portal.
+
+* In Azure Automation, the **Certificates** screen displays the expiration date of the certificate.
+* In Azure AD, on the app registration, the **Certificates & secrets** screen displays the expiration date of the certificate used for the Azure Automation account.
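+
+You can also read the expiration date directly from a local copy of the exported public certificate. This is a minimal sketch, assuming the `.crt` file from the earlier placeholder example is still available on your computer.
+
+```powershell
+# Sketch: display the NotAfter (expiration) date of the exported certificate file
+$crt = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2(".\AzureAutomationGovernance.crt")
+$crt.NotAfter
+```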
+
+## Next steps
+
+- [Create an Automation account using the Azure portal](/azure/automation/quickstarts/create-account-portal)
+- [Manage access to resources in Active Directory entitlement management using Microsoft Graph PowerShell](/powershell/microsoftgraph/tutorial-entitlement-management?view=graph-powershell-beta)
active-directory How To Connect Health Adfs Risky Ip Workbook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-health-adfs-risky-ip-workbook.md
AD FS customers may expose password authentication endpoints to the internet to provide authentication services for end users to access SaaS applications such as Microsoft 365. In this case, it is possible for a bad actor to attempt logins against your AD FS system to guess an end user's password and get access to application resources. AD FS has provided extranet account lockout functionality to prevent these types of attacks since Windows Server 2012 R2. If you are on a lower version, we strongly recommend that you upgrade your AD FS system to Windows Server 2016. <br />
-Additionally, it is possible for a single IP address to attempt multiple logins against multiple users. In these cases, the number of attempts per user may be under the threshold for account lockout protection in AD FS. Azure AD Connect Health now provides the "Risky IP report" that detects this condition and notifies administrators when this occurs. The following are the key benefits for this report:
+Additionally, it is possible for a single IP address to attempt multiple logins against multiple users. In these cases, the number of attempts per user may be under the threshold for account lockout protection in AD FS. Azure AD Connect Health now provides the "Risky IP report" that detects this condition and notifies administrators. The following are the key benefits for this report:
- Detection of IP addresses that exceed a threshold of failed password-based logins - Supports failed logins due to bad password or due to extranet lockout state - Supports enabling alerts through Azure Alerts
Additionally, it is possible for a single IP address to attempt multiple logins
## What is in the report?
-The Risky IP report workbook is powered from data in the ADFSSignInLogs stream and has pre-existing queries to be able to quickly visualize and analyze risky IPs. The parameters can be configured and customized for threshold counts. The workbook is also configurable based on queries, and each query can be updated and modified based on the organization's needs.
+The Risky IP report workbook is powered from data in the ADFSSignInLogs stream and can quickly visualize and analyze risky IPs. The parameters can be configured and customized for threshold counts. The workbook is also configurable based on queries, and each query can be updated and modified based on the organization's needs.
The risky IP workbook analyzes data from ADFSSignInLogs to help you detect password spray or password brute force attacks. The workbook has two parts. The first part "Risky IP Analysis" identifies risky IP addresses based on designated error thresholds and detection window length. The second part provides the sign-in details and error counts for selected IPs.
Each item in the Risky IP report table shows aggregated information about failed
Filter the report by IP address or user name to see an expanded view of sign-ins details for each risky IP event.
+## Accessing the workbook
+
+To access the workbook:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Navigate to **Azure Active Directory** > **Monitoring** > **Workbooks**.
+3. Select the Risky IP report workbook.
 ## Load balancer IP addresses in the list Load balancers aggregate failed sign-in activities and hit the alert threshold. If you are seeing load balancer IP addresses, it is highly likely that your external load balancer is not sending the client IP address when it passes the request to the Web Application Proxy server. Please configure your load balancer correctly to forward the client IP address.
active-directory F5 Big Ip Header Advanced https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-header-advanced.md
The secure hybrid access solution for this scenario is made up of:
- **BIG-IP**: Reverse proxy and SAML service provider (SP) to the application, delegating authentication to the SAML IdP, before performing header-based SSO to the backend application.
-![Screenshot shows the architecture flow diagram](./media/f5-big-ip-header-advanced/flow-diagram.png)
+![Screenshot shows the architecture flow diagram](./media/f5-big-ip-easy-button-header/sp-initiated-flow.png)
| Step | Description | |:-|:--|
active-directory F5 Big Ip Kerberos Advanced https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-kerberos-advanced.md
The SHA solution for this scenario consists of the following elements:
The following image illustrates the SAML SP-initiated flow for this scenario, but IdP-initiated flow is also supported.
-![Diagram of the scenario architecture.](./media/f5-big-ip-kerberos-advanced/scenario-architecture.png)
+![Diagram of the scenario architecture.](./media/f5-big-ip-kerberos-easy-button/scenario-architecture.png)
| Step| Description | | -- |-|
active-directory F5 Big Ip Oracle Enterprise Business Suite Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-oracle-enterprise-business-suite-easy-button.md
+
+ Title: Configure F5 BIG-IP Easy Button for SSO to Oracle EBS
+description: Learn to implement SHA with header-based SSO to Oracle EBS using F5's BIG-IP Easy Button guided configuration
+ Last updated : 1/31/2022
+# Tutorial: Configure F5's BIG-IP Easy Button for SSO to Oracle EBS
+
+In this article, you'll learn to implement Secure Hybrid Access (SHA) with header-based single sign-on (SSO) to Oracle Enterprise Business Suite (EBS) using F5's BIG-IP Easy Button guided configuration.
+
+Enabling BIG-IP published services for Azure Active Directory (Azure AD) SSO provides many benefits, including:
+
+* [Improved Zero Trust governance](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) through Azure AD pre-authentication and [Conditional Access](/conditional-access/overview)
+
+* Full SSO between Azure AD and BIG-IP published services
+
+* Manage Identities and access from a single control plane, [the Azure portal](https://portal.azure.com/)
+
+To learn about all the benefits, see the article on [F5 BIG-IP and Azure AD integration](./f5-aad-integration.md) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
+
+## Scenario description
+
+For this scenario, use an **Oracle EBS application using HTTP authorization headers** to manage access to protected content.
+
+Being legacy, the application lacks modern protocols to support a direct integration with Azure AD. The application can be modernized, but it is costly, requires careful planning, and introduces risk of potential downtime. Instead, an F5 BIG-IP Application Delivery Controller is used to bridge the gap between the legacy application and the modern ID control plane, through protocol transitioning.
+
+Having a BIG-IP in front of the app enables us to overlay the service with Azure AD pre-authentication and header-based SSO, significantly improving the overall security posture of the application.
+
+## Scenario architecture
+
+The secure hybrid access solution for this scenario is made up of several components including a multi-tiered Oracle architecture:
+
+**Oracle EBS Application:** BIG-IP published service to be protected by Azure AD SHA.
+
+**Azure AD:** Security Assertion Markup Language (SAML) Identity Provider (IdP) responsible for verification of user credentials, Conditional Access (CA), and SSO to the BIG-IP.
+
+**Oracle Internet Directory (OID):** Hosts the user database. BIG-IP checks via LDAP for authorization attributes.
+
+**Oracle AccessGate:** Validates authorization attributes through back channel with OID service, before issuing EBS access cookies
+
+**BIG-IP:** Reverse proxy and SAML service provider (SP) to the application, delegating authentication to the SAML IdP before performing header-based SSO to the Oracle service.
+
+SHA for this scenario supports both SP and IdP initiated flows. The following image illustrates the SP initiated flow.
+
+![Secure hybrid access - SP initiated flow](./media/f5-big-ip-oracle/sp-initiated-flow.png)
+
+| Steps| Description |
+| -- |-|
+| 1| User connects to application endpoint (BIG-IP) |
+| 2| BIG-IP APM access policy redirects user to Azure AD (SAML IdP) |
+| 3| Azure AD pre-authenticates user and applies any enforced Conditional Access policies |
+| 4| User is redirected back to BIG-IP with issued token and claims |
+| 5| BIG-IP authenticates user and performs LDAP query for user Unique ID (UID) attribute |
+| 6| BIG-IP injects returned UID attribute as user_orclguid header in EBS session cookie request to Oracle AccessGate |
+| 7| Oracle AccessGate validates UID against Oracle Internet Directory (OID) service and issues EBS access cookie
+| 8| EBS user headers and cookie sent to application and returns the payload to the user |
+
+## Prerequisites
+
+Prior BIG-IP experience isn't necessary, but you need:
+
+* An Azure AD free subscription or above
+
+* An existing BIG-IP or [deploy a BIG-IP Virtual Edition (VE) in Azure](./f5-bigip-deployment-guide.md)
+
+* Any of the following F5 BIG-IP license SKUs
+
+ * F5 BIG-IP® Best bundle
+
+ * F5 BIG-IP Access Policy Manager™ (APM) standalone license
+
+ * F5 BIG-IP Access Policy Manager™ (APM) add-on license on an existing BIG-IP F5 BIG-IP® Local Traffic Manager™ (LTM)
+
+ * 90-day BIG-IP full feature [trial license](https://www.f5.com/trial/big-ip-trial.php).
+
+* User identities [synchronized](../hybrid/how-to-connect-sync-whatis.md) from an on-premises directory to Azure AD or created directly within Azure AD and flowed back to your on-premises directory
+
+* An account with Azure AD application admin [permissions](/azure/active-directory/users-groups-roles/directory-assign-admin-roles#application-administrator)
+
+* [SSL certificate](./f5-bigip-deployment-guide.md#ssl-profile) for publishing services over HTTPS
+
+* An existing Oracle EBS suite including Oracle AccessGate and an LDAP-enabled OID (Oracle Internet Directory)
+
+## BIG-IP configuration methods
+
+There are many methods to configure BIG-IP for this scenario, including two template-based options and an advanced configuration. This tutorial covers the latest Guided Configuration 16.1, which offers an Easy Button template. With the Easy Button, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for SHA. Deployment and policy management are handled directly between the APM's Guided Configuration wizard and Microsoft Graph. This rich integration between BIG-IP APM and Azure AD ensures that applications can quickly and easily support identity federation, SSO, and Azure AD Conditional Access, reducing administrative overhead.
+
+>[!NOTE]
+> All example strings or values referenced throughout this guide should be replaced with those for your actual environment.
+
+## Register Easy Button
+
+Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform](../develop/quickstart-register-app.md).
+
+A BIG-IP must also be registered as a client in Azure AD, before it is allowed to establish a trust in between each SAML SP instance of a BIG-IP published application, and Azure AD as the SAML IdP.
+
+1. Sign in to the [Azure AD portal](https://portal.azure.com/) with Application Administrator rights
+
+2. From the left navigation pane, select the **Azure Active Directory** service
+
+3. Under Manage, select **App registrations > New registration**
+
+4. Enter a display name for your application. For example, F5 BIG-IP Easy Button
+
+5. Specify who can use the application > **Accounts in this organizational directory only**
+
+6. Select **Register** to complete the initial app registration
+
+7. Navigate to **API permissions** and authorize the following Microsoft Graph permissions:
+
+ * Application.Read.All
+ * Application.ReadWrite.All
+ * Application.ReadWrite.OwnedBy
+ * Directory.Read.All
+ * Group.Read.All
+ * IdentityRiskyUser.Read.All
+ * Policy.Read.All
+ * Policy.ReadWrite.ApplicationConfiguration
+ * Policy.ReadWrite.ConditionalAccess
+ * User.Read.All
+
+8. Grant admin consent for your organization
+
+9. Go to **Certificates & Secrets**, generate a new **Client secret** and note it down
+
+10. Go to **Overview**, note the **Client ID** and **Tenant ID**
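+
+If you prefer to script the basic registration instead of using the portal, the following is a minimal sketch using Microsoft Graph PowerShell. It covers only the app registration and client secret; the display name is a placeholder, and the Graph permissions from step 7 still need to be added and granted admin consent.
+
+```powershell
+# Sketch: register the Easy Button client app and create a client secret
+Connect-MgGraph -Scopes "Application.ReadWrite.All"
+$app = New-MgApplication -DisplayName "F5 BIG-IP Easy Button" -SignInAudience "AzureADMyOrg"
+$secret = Add-MgApplicationPassword -ApplicationId $app.Id
+
+# Record these values for the Easy Button configuration
+$app.AppId                   # Client ID
+(Get-MgContext).TenantId     # Tenant ID
+$secret.SecretText           # Client secret (shown only once)
+```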
+
+## Configure Easy Button
+
+Initiate **Easy Button** configuration to set up a SAML Service Provider (SP) and Azure AD as an Identity Provider (IdP) for your application.
+
+1. Navigate to **Access > Guided Configuration > Microsoft Integration** and select **Azure AD Application**.
+
+ ![Screenshot for Configure Easy Button- Install the template](./media/f5-big-ip-easy-button-ldap/easy-button-template.png)
+
+2. Review the list of configuration steps and select **Next**
+
+ ![Screenshot for Configure Easy Button - List configuration steps](./media/f5-big-ip-easy-button-ldap/config-steps.png)
+
+3. Follow the sequence of steps required to publish your application.
+
+ ![Configuration steps flow](./media/f5-big-ip-easy-button-ldap/config-steps-flow.png#lightbox)
+
+### Configuration Properties
+
+The **Configuration Properties** tab creates a new application config and SSO object. Consider the **Azure Service Account Details** section to be the client application you registered in your Azure AD tenant earlier. These settings allow a BIG-IP to programmatically register a SAML application directly in your tenant, along with the properties you would normally configure manually. Easy Button does this for every BIG-IP APM service being enabled for SHA.
+
+Some of these are global settings, so they can be reused for publishing more applications, further reducing deployment time and effort.
+
+1. Provide a unique **Configuration Name** that enables an admin to easily distinguish between Easy Button configurations
+
+2. Enable **Single Sign-On (SSO) & HTTP Headers**
+
+3. Enter the **Tenant Id, Client ID**, and **Client Secret** you noted down from your registered application
+
+4. Before you select **Next**, confirm that BIG-IP can successfully connect to your tenant.
+
+ ![ Screenshot for Configuration General and Service Account properties](./media/f5-big-ip-oracle/configuration-general-and-service-account-properties.png)
+
+### Service Provider
+
+The **Service Provider** settings define the SAML SP properties for the APM instance representing the application protected through SHA.
+
+1. Enter **Host**. This is the public FQDN of the application being secured. You need a corresponding DNS record for clients to resolve this address, but using a localhost record is fine during testing
+
+2. Enter **Entity ID**. This is the identifier Azure AD will use to identify the SAML SP requesting a token
+
+ ![Screenshot for Service Provider settings](./media/f5-big-ip-oracle/service-provider-settings.png)
+
+ Next, under the optional **Security Settings**, specify whether Azure AD should encrypt issued SAML assertions. Encrypting assertions between Azure AD and the BIG-IP APM provides assurance that the token contents can't be intercepted, and that personal or corporate data can't be compromised.
+
+3. From the **Assertion Decryption Private Key** list, select **Create New**
+
+ ![Screenshot for Configure Easy Button- Create New import](./media/f5-big-ip-oracle/configure-security-create-new.png)
+
+4. Select **OK**. This opens the **Import SSL Certificate and Keys** dialog in a new tab
+
+5. Select **PKCS 12 (IIS)** to import your certificate and private key. Once provisioned close the browser tab to return to the main tab.
+
+ ![Screenshot for Configure Easy Button- Import new cert](./media/f5-big-ip-oracle/import-ssl-certificates-and-keys.png)
+
+6. Check **Enable Encrypted Assertion**
+
+7. If you have enabled encryption, select your certificate from the **Assertion Decryption Private Key** list. This is the private key for the certificate that BIG-IP APM uses to decrypt Azure AD assertions
+
+8. If you have enabled encryption, select your certificate from the **Assertion Decryption Certificate** list. This is the certificate that BIG-IP uploads to Azure AD for encrypting the issued SAML assertions.
+
+ ![Screenshot for Service Provider security settings](./media/f5-big-ip-easy-button-ldap/service-provider-security-settings.png)
+
+### Azure Active Directory
+
+This section defines all properties that you would normally use to manually configure a new BIG-IP SAML application within your Azure AD tenant. The Easy Button wizard provides a set of pre-defined application templates for Oracle PeopleSoft, Oracle E-Business Suite, Oracle JD Edwards, and SAP ERP, as well as a generic SHA template for any other apps. In this example, select **Oracle E-Business Suite > Add** to add the template for Oracle E-Business Suite.
+
+![Screenshot for Azure configuration add BIG-IP application](./media/f5-big-ip-oracle/azure-configuration-add-big-ip-application.png)
+
+#### Azure Configuration
+
+1. Enter **Display Name** of app that the BIG-IP creates in your Azure AD tenant, and the icon that the users see on [MyApps portal](https://myapplications.microsoft.com/)
+
+2. In the **Sign On URL (optional)** enter the public FQDN of the EBS application being secured, along with the default path for the Oracle EBS homepage
+
+ ![Screenshot for Azure configuration add display info](./media/f5-big-ip-oracle/azure-configuration-add-display-info.png)
+
+3. Select the refresh icon next to the **Signing Key** and **Signing Certificate** to locate the certificate you imported earlier
+
+4. Enter the certificate's password in **Signing Key Passphrase**
+
+5. Enable **Signing Option** (optional). This ensures that BIG-IP only accepts tokens and claims that are signed by Azure AD
+
+ ![Screenshot for Azure configuration - Add signing certificates info](./media/f5-big-ip-easy-button-ldap/azure-configuration-sign-certificates.png)
+
+6. **User and User Groups** are used to authorize access to the application. They are dynamically added from the tenant. **Add** a user or group that you can use later for testing, otherwise all access will be denied
+
+ ![Screenshot for Azure configuration - Add users and groups](./media/f5-big-ip-easy-button-ldap/azure-configuration-add-user-groups.png)
+
+#### User Attributes & Claims
+
+When a user successfully authenticates, Azure AD issues a SAML token with a default set of claims and attributes uniquely identifying the user. The **User Attributes & Claims** tab shows the default claims to issue for the new application. It also lets you configure more claims.
+
+![Screenshot for Azure configuration – User attributes & claims](./media/f5-big-ip-easy-button-ldap/user-attributes-claims.png)
+
+You can include additional Azure AD attributes if necessary, but the example Oracle EBS scenario only requires the default attributes.
+
+#### Additional User Attributes
+
+The **Additional User Attributes** tab can support a variety of distributed systems requiring attributes stored in other directories for session augmentation. Attributes fetched from an LDAP source can then be injected as additional SSO headers to further control access based on roles, Partner IDs, etc.
+
+1. Enable the **Advanced Settings** option
+
+2. Check the **LDAP Attributes** check box
+
+3. Select **Create New** in **Choose Authentication Server**
+
+4. Select **Use pool** or **Direct** server connection mode depending on your setup, and provide the **Server Address** of the target LDAP service. If using a single LDAP server, select **Direct**.
+
+5. Enter **Service Port** as 3060 (Default), 3161 (Secure), or any other port your Oracle LDAP service operates on
+
+6. Enter the **Base Search DN** (distinguished name) from which to search. This search DN is used to search groups across a whole directory.
+
+7. Set the **Admin DN** to the exact distinguished name for the account the APM will use to authenticate for LDAP queries, along with its password
+
+ ![Screenshot for additional user attributes](./media/f5-big-ip-oracle/additional-user-attributes.png)
+
+8. Leave all default **LDAP Schema Attributes**
+
+ ![Screenshot for LDAP schema attributes](./media/f5-big-ip-oracle/ldap-schema-attributes.png)
+
+9. Under **LDAP Query Properties**, set the **Search Dn** to the base node of the LDAP server from which to search for user objects
+
+10. Add the name of the user object attribute that must be returned from the LDAP directory. For EBS, the default is **orclguid**
+
+ ![Screenshot for LDAP query properties.png](./media/f5-big-ip-oracle/ldap-query-properties.png)
+
+#### Conditional Access Policy
+
+Conditional Access policies are enforced after Azure AD pre-authentication, to control access based on device, application, location, and risk signals.
+
+The **Available Policies** view, by default, will list all Conditional Access policies that do not include user-based actions.
+
+The **Selected Policies** view, by default, displays all policies targeting All cloud apps. These policies cannot be deselected or moved to the Available Policies list as they are enforced at a tenant level.
+
+To select a policy to be applied to the application being published:
+
+1. Select the desired policy in the **Available Policies** list
+
+2. Select the right arrow and move it to the **Selected Policies** list
+
+ The selected policies should have either an **Include** or **Exclude** option checked. If both options are checked, the policy is not enforced.
+
+ ![Screenshot for CA policies](./media/f5-big-ip-easy-button-ldap/conditional-access-policy.png)
+
+> [!NOTE]
+> The policy list is enumerated only once when first switching to this tab. A refresh button is available to manually force the wizard to query your tenant, but this button is displayed only when the application has been deployed.
+
+### Virtual Server Properties
+
+A virtual server is a BIG-IP data plane object represented by a virtual IP address listening for client requests to the application. Any received traffic is processed and evaluated against the APM profile associated with the virtual server, before being directed according to the policy results and settings.
+
+1. Enter **Destination Address**. This is any available IPv4/IPv6 address that the BIG-IP can use to receive client traffic. A corresponding record should also exist in DNS, enabling clients to resolve the external URL of your BIG-IP published application to this IP.
+
+2. Enter **Service Port** as *443* for HTTPS
+
+3. Check **Enable Redirect Port** and then enter **Redirect Port**. It redirects incoming HTTP client traffic to HTTPS
+
+4. Select **Client SSL Profile** to enable the virtual server for HTTPS so that client connections are encrypted over TLS. Select the client SSL profile you created as part of the prerequisites or leave the default if testing
+
+ ![Screenshot for Virtual server](./media/f5-big-ip-easy-button-ldap/virtual-server.png)
+
+### Pool Properties
+
+The **Application Pool tab** details the services behind a BIG-IP, represented as a pool containing one or more application servers.
+
+1. Choose from **Select a Pool**. Create a new pool or select an existing one
+
+2. Choose the **Load Balancing Method** as *Round Robin*
+
+3. Update the **Pool Servers**. Select an existing node or specify an IP and port for the servers hosting the Oracle EBS application.
+
+ ![Screenshot for Application pool](./media/f5-big-ip-oracle/application-pool.png)
+
+4. The **Access Gate Pool** specifies the servers Oracle EBS uses for mapping an SSO authenticated user to an Oracle E-Business Suite session. Update **Pool Servers** with the IP and port of the Oracle application servers hosting the application
+
+ ![Screenshot for AccessGate pool](./media/f5-big-ip-oracle/accessgate-pool.png)
+
+#### Single Sign-On & HTTP Headers
+
+The **Easy Button wizard** supports Kerberos, OAuth Bearer, and HTTP authorization headers for SSO to published applications. As the Oracle EBS application expects headers, enable **HTTP Headers** and enter the following properties.
+
+* **Header Operation:** replace
+* **Header Name:** USER_NAME
+* **Header Value:** %{session.sso.token.last.username}
+
+
+* **Header Operation:** replace
+* **Header Name:** USER_ORCLGUID
+* **Header Value:** %{session.ldap.last.attr.orclguid}
+
+ ![ Screenshot for SSO and HTTP headers](./media/f5-big-ip-oracle/sso-and-http-headers.png)
+
+>[!NOTE]
+>APM session variables defined within curly brackets are CASE sensitive. If you enter OrclGUID when the Azure AD attribute name is being defined as orclguid, it will cause an attribute mapping failure.
+
+### Session Management
+
+The BIG-IP's session management settings are used to define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and corresponding user info. Consult the [F5 documentation](https://support.f5.com/csp/article/K18390492) for details on these settings.
+
+What isn't covered here, however, is Single Log-Out (SLO) functionality, which ensures all sessions between the IdP, the BIG-IP, and the user agent are terminated as users sign off. When the Easy Button deploys a SAML application to your Azure AD tenant, it also populates the Logout Url with the APM's SLO endpoint. That way, IdP-initiated sign-outs from the Azure AD MyApps portal also terminate the session between the BIG-IP and a client.
+
+During deployment, the SAML federation metadata for the published application is imported from your tenant, providing the APM with the SAML logout endpoint for Azure AD. This helps SP-initiated sign-outs terminate the session between a client and Azure AD.
+
+## Summary
+
+Select **Deploy** to commit all settings and verify that the application has appeared in your tenant. This last step provides a breakdown of all applied settings before they're committed. Your application should now be published and accessible via SHA, either directly via its URL or through Microsoft's application portals.
+
+## Next steps
+
+From a browser, connect to the **Oracle EBS application's external URL** or select the application's icon in the [Microsoft MyApps portal](https://myapps.microsoft.com/). After authenticating to Azure AD, you'll be redirected to the BIG-IP virtual server for the application and automatically signed in through SSO.
+
+For increased security, organizations using this pattern could also consider blocking all direct access to the application, thereby forcing a strict path through the BIG-IP.
+
+## Advanced deployment
+
+There may be cases where the Guided Configuration templates lack the flexibility to achieve more specific requirements. For those scenarios, see [Advanced Configuration for Kerberos-based SSO](./f5-big-ip-kerberos-advanced.md). Alternatively, the BIG-IP gives you the option to disable **Guided Configuration's strict management mode**. This allows you to manually tweak your configurations, even though the bulk of your configurations are automated through the wizard-based templates.
+
+You can navigate to **Access > Guided Configuration** and select the **small padlock icon** on the far right of the row for your applications' configs.
+
+![Screenshot for Configure Easy Button - Strict Management](./media/f5-big-ip-oracle/strict-mode-padlock.png)
+
+At that point, changes via the wizard UI are no longer possible, but all BIG-IP objects associated with the published instance of the application will be unlocked for direct management.
+
+> [!NOTE]
+> Re-enabling strict mode and deploying a configuration will overwrite any settings performed outside of the Guided Configuration UI; therefore, we recommend the advanced configuration method for production services.
+
+## Troubleshooting
+
+There can be many factors leading to failure to access a published application. BIG-IP logging can help quickly isolate all sorts of issues with connectivity, policy violations, or misconfigured variable mappings.
+
+Start troubleshooting by increasing the log verbosity level.
+
+1. Navigate to **Access Policy > Overview > Event Logs > Settings**
+
+2. Select the row for your published application then **Edit > Access System Logs**
+
+3. Select **Debug** from the SSO list then **OK**
+
+Reproduce your issue, then inspect the logs, but remember to switch this back when finished as verbose mode generates lots of data. If you see a BIG-IP branded error immediately after successful Azure AD pre-authentication, it's possible the issue relates to SSO from Azure AD to the BIG-IP.
+
+1. Navigate to **Access > Overview > Access reports**
+
+2. Run the report for the last hour to see if the logs provide any clues. The **View session variables** link for your session will also help you understand whether the APM is receiving the expected claims from Azure AD
+
+If you don't see a BIG-IP error page, then the issue is probably more related to the backend request or SSO from the BIG-IP to the application.
+
+1. In this case, go to **Access Policy > Overview > Active Sessions** and select the link for your active session
+
+2. The **View Variables** link in this location may also help you find the root cause of SSO issues, particularly if the BIG-IP APM fails to obtain the right attributes
+
+See [BIG-IP APM variable assign examples](https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference](https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more info.
+
+The following command, run from a bash shell, validates that the APM service account used for LDAP queries can successfully authenticate and query a user object:
+
+```bash
+ldapsearch -xLLL -H 'ldap://192.168.0.58' -b "CN=oraclef5,dc=contoso,dc=lds" -s sub -D "CN=f5-apm,CN=partners,DC=contoso,DC=lds" -w 'P@55w0rd!' "(cn=testuser)"
+```
+
+For more information, visit this F5 knowledge article [Configuring LDAP remote authentication for Active Directory](https://support.f5.com/csp/article/K11072). There's also a great BIG-IP reference table to help diagnose LDAP-related issues in this [F5 knowledge article on LDAP Query](https://techdocs.f5.com/en-us/bigip-16-1-0/big-ip-access-policy-manager-authentication-methods/ldap-query.html).
active-directory Groups Activate Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/groups-activate-roles.md
na Previously updated : 10/07/2021 Last updated : 02/02/2022
If you do not require activation of a role that requires approval, you can cance
When you select **Cancel**, the request will be canceled. To activate the role again, you will have to submit a new request for activation.
+## Deactivate a role assignment
+
+When a role assignment is activated, you'll see a **Deactivate** option in the PIM portal for the role assignment. When you select **Deactivate**, there's a short time lag before the role is deactivated. Also, you can't deactivate a role assignment within five minutes after activation.
+ ## Troubleshoot ### Permissions are not granted after activating a role
active-directory Groups Assign Member Owner https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/groups-assign-member-owner.md
na Previously updated : 11/09/2021 Last updated : 02/02/2022
Azure Active Directory (Azure AD) Privileged Identity Management (PIM) can help you manage the eligibility and activation of assignments to privileged access groups in Azure AD. You can assign eligibility to members or owners of the group.
+When a role is assigned, the assignment:
+- Can't be assigned for a duration of less than five minutes
+- Can't be removed within five minutes of it being assigned
+ >[!NOTE] >Every user who is eligible for membership in or ownership of a privileged access group must have an Azure AD Premium P2 license. For more information, see [License requirements to use Privileged Identity Management](subscription-requirements.md).
active-directory Pim How To Activate Role https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-how-to-activate-role.md
Title: Activate my Azure AD roles in PIM - Azure Active Directory | Microsoft Docs
+ Title: Activate Azure AD roles in PIM - Azure Active Directory | Microsoft Docs
description: Learn how to activate Azure AD roles in Azure AD Privileged Identity Management (PIM). documentationcenter: ''
Previously updated : 10/07/2021 Last updated : 02/02/2022
-# Activate my Azure AD roles in PIM
+# Activate an Azure AD role in PIM
Azure Active Directory (Azure AD) Privileged Identity Management (PIM) simplifies how enterprises manage privileged access to resources in Azure AD and other Microsoft online services like Microsoft 365 or Microsoft Intune.
When you need to assume an Azure AD role, you can request activation by opening
![Screen to provide security verification such as a PIN code](./media/pim-resource-roles-activate-your-roles/resources-mfa-enter-code.png)
-1. After multi-factor authentication, select **Activate before proceeding**.
+1. After multifactor authentication, select **Activate before proceeding**.
![Verify my identity with MFA before role activates](./media/pim-how-to-activate-role/activate-role-mfa-banner.png)
GET https://graph.microsoft.com/beta/roleManagement/directory/roleEligibilitySch
#### HTTP response
-To save space we're showing only the response for one roles, but all eligible role assignments that you can activate will be listed.
+To save space we're showing only the response for one role, but all eligible role assignments that you can activate will be listed.
````HTTP {
You can view the status of your pending requests to activate.
## Cancel a pending request for new version
-If you do not require activation of a role that requires approval, you can cancel a pending request at any time.
+If you don't require activation of a role that requires approval, you can cancel a pending request at any time.
1. Open Azure AD Privileged Identity Management.
If you do not require activation of a role that requires approval, you can cance
1. For the role that you want to cancel, select the **Cancel** link.
- When you select Cancel, the request will be canceled. To activate the role again, you will have to submit a new request for activation.
+ When you select Cancel, the request will be canceled. To activate the role again, you'll have to submit a new request for activation.
![My request list with Cancel action highlighted](./media/pim-resource-roles-activate-your-roles/resources-my-requests-cancel.png)
+## Deactivate a role assignment
+
+When a role assignment is activated, you'll see a **Deactivate** option in the PIM portal for the role assignment. When you select **Deactivate**, there's a short time lag before the role is deactivated. Also, you can't deactivate a role assignment within five minutes after activation.
+ ## Troubleshoot portal delay ### Permissions aren't granted after activating a role
-When you activate a role in Privileged Identity Management, the activation may not instantly propagate to all portals that require the privileged role. Sometimes, even if the change is propagated, web caching in a portal may result in the change not taking effect immediately. If your activation is delayed, sign out of the portal you are trying to perform the action and then sign back in. In the Azure portal, PIM signs you out and back in automatically.
+When you activate a role in Privileged Identity Management, the activation might not instantly propagate to all portals that require the privileged role. Sometimes, even if the change is propagated, web caching in a portal may cause a delay before the change takes effect. If your activation is delayed, sign out of the portal you're trying to perform the action and then sign back in. In the Azure portal, PIM signs you out and back in automatically.
## Next steps
active-directory Pim How To Add Role To User https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-how-to-add-role-to-user.md
Previously updated : 10/07/2021 Last updated : 02/02/2022
The Azure AD Privileged Identity Management (PIM) service also allows Privileged
Privileged Identity Management support both built-in and custom Azure AD roles. For more information on Azure AD custom roles, see [Role-based access control in Azure Active Directory](../roles/custom-overview.md).
+>[!Note]
+>When a role is assigned, the assignment:
+>- Can't be assigned for a duration of less than five minutes
+>- Can't be removed within five minutes of it being assigned
+ ## Assign a role Follow these steps to make a user eligible for an Azure AD admin role.
active-directory Pim Resource Roles Activate Your Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-resource-roles-activate-your-roles.md
na Previously updated : 10/07/2021 Last updated : 02/02/2022
If you do not require activation of a role that requires approval, you can cance
![My request list with Cancel action highlighted](./media/pim-resource-roles-activate-your-roles/resources-my-requests-cancel.png)
+## Deactivate a role assignment
+
+When a role assignment is activated, you'll see a **Deactivate** option in the PIM portal for the role assignment. When you select **Deactivate**, there's a short time lag before the role is deactivated. Also, you can't deactivate a role assignment within five minutes after activation.
+ ## Troubleshoot ### Permissions are not granted after activating a role
active-directory Pim Resource Roles Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-resource-roles-assign-roles.md
na Previously updated : 09/28/2021 Last updated : 02/02/2022
Privileged Identity Management support both built-in and custom Azure roles. For
You can use the Azure attribute-based access control (Azure ABAC) preview to place resource conditions on eligible role assignments using Privileged Identity Management (PIM). With PIM, your end users must activate an eligible role assignment to get permission to perform certain actions. Using Azure attribute-based access control conditions in PIM enables you not only to limit a user's role permissions to a resource using fine-grained conditions, but also to use PIM to secure the role assignment with a time-bound setting, approval workflow, audit trail, and so on. For more information, see [Azure attribute-based access control public preview](../../role-based-access-control/conditions-overview.md).
+>[!Note]
+>When a role is assigned, the assignment:
+>- Can't be assigned for a duration of less than five minutes
+>- Can't be removed within five minutes of it being assigned
+ ## Assign a role Follow these steps to make a user eligible for an Azure resource role.
active-directory Concept Provisioning Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/concept-provisioning-logs.md
Use the following table to better understand how to resolve errors that you find
||| |Conflict, EntryConflict|Correct the conflicting attribute values in either Azure AD or the application. Or, review your matching attribute configuration if the conflicting user account was supposed to be matched and taken over. Review the [documentation](../app-provisioning/customize-application-attributes.md) for more information on configuring matching attributes.| |TooManyRequests|The target app rejected this attempt to update the user because it's overloaded and receiving too many requests. There's nothing to do. This attempt will automatically be retried. Microsoft has also been notified of this issue.|
-|InternalServerError |The target app returned an unexpected error. A service issue with the target application might be preventing this from working. This attempt will automatically be retired in 40 minutes.|
+|InternalServerError |The target app returned an unexpected error. A service issue with the target application might be preventing this from working. This attempt will automatically be retried in 40 minutes.|
|InsufficientRights, MethodNotAllowed, NotPermitted, Unauthorized| Azure AD authenticated with the target application but was not authorized to perform the update. Review any instructions that the target application has provided, along with the respective application [tutorial](../saas-apps/tutorial-list.md).| |UnprocessableEntity|The target application returned an unexpected response. The configuration of the target application might not be correct, or a service issue with the target application might be preventing this from working.|
-|WebExceptionProtocolError |An HTTP protocol error occurred in connecting to the target application. There is nothing to do. This attempt will automatically be retired in 40 minutes.|
+|WebExceptionProtocolError |An HTTP protocol error occurred in connecting to the target application. There is nothing to do. This attempt will automatically be retried in 40 minutes.|
|InvalidAnchor|A user that was previously created or matched by the provisioning service no longer exists. Ensure that the user exists. To force a new matching of all users, use the Microsoft Graph API to [restart the job](/graph/api/synchronization-synchronizationjob-restart?tabs=http&view=graph-rest-beta&preserve-view=true). <br><br>Restarting provisioning will trigger an initial cycle, which can take time to complete. Restarting provisioning also deletes the cache that the provisioning service uses to operate. That means all users and groups in the tenant will have to be evaluated again, and certain provisioning events might be dropped.| |NotImplemented | The target app returned an unexpected response. The configuration of the app might not be correct, or a service issue with the target app might be preventing this from working. Review any instructions that the target application has provided, along with the respective application [tutorial](../saas-apps/tutorial-list.md). | |MandatoryFieldsMissing, MissingValues |The user could not be created because required values are missing. Correct the missing attribute values in the source record, or review your matching attribute configuration to ensure that the required fields are not omitted. [Learn more](../app-provisioning/customize-application-attributes.md) about configuring matching attributes.|
Use the following table to better understand how to resolve errors that you find
* [Check the status of user provisioning](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) * [Problem configuring user provisioning to an Azure AD Gallery application](../app-provisioning/application-provisioning-config-problem.md)
-* [Graph API for provisioning logs](/graph/api/resources/provisioningobjectsummary)
+* [Graph API for provisioning logs](/graph/api/resources/provisioningobjectsummary)
active-directory Howto Manage Inactive User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/howto-manage-inactive-user-accounts.md
To generate a lastSignInDateTime timestamp, you need a successful sign-in. Becau
### For how long is the last sign-in retained?
-The last sign-in date is associated with the user object. The value is retained until the sign-in of the user.
+The last sign-in date is associated with the user object. The value is retained until the next sign-in of the user.
## Next steps
active-directory Cornerstone Ondemand Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/cornerstone-ondemand-provisioning-tutorial.md
This tutorial demonstrates the steps to perform in Cornerstone OnDemand and Azure Active Directory (Azure AD) to configure Azure AD to automatically provision and deprovision users or groups to Cornerstone OnDemand. > [!NOTE]
+> This Cornerstone OnDemand automatic provisioning service is deprecated and support will end soon.
> This tutorial describes a connector that's built on top of the Azure AD user provisioning service. For information on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to software-as-a-service (SaaS) applications with Azure Active Directory](../app-provisioning/user-provisioning.md). ## Prerequisites
active-directory Kronos Workforce Dimensions Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/kronos-workforce-dimensions-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Kronos Workforce Dimensions | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Kronos Workforce Dimensions'
description: Learn how to configure single sign-on between Azure Active Directory and Kronos Workforce Dimensions.
Previously updated : 07/19/2021 Last updated : 01/27/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Kronos Workforce Dimensions
+# Tutorial: Azure AD SSO integration with Kronos Workforce Dimensions
In this tutorial, you'll learn how to integrate Kronos Workforce Dimensions with Azure Active Directory (Azure AD). When you integrate Kronos Workforce Dimensions with Azure AD, you can:
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * Kronos Workforce Dimensions single sign-on (SSO) enabled subscription.
+> [!NOTE]
+> This integration is also available to use from the Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from the public cloud.
+ ## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
-Once you configure Kronos Workforce Dimensions, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
+Once you configure Kronos Workforce Dimensions, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Lucid Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/lucid-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Lucid (All Products) | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Lucid (All Products)'
description: Learn how to configure single sign-on between Azure Active Directory and Lucid (All Products).
Previously updated : 11/04/2020 Last updated : 01/27/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Lucid (All Products)
+# Tutorial: Azure AD SSO integration with Lucid (All Products)
In this tutorial, you'll learn how to integrate Lucid (All Products) with Azure Active Directory (Azure AD). When you integrate Lucid (All Products) with Azure AD, you can:
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * Lucid (All Products) single sign-on (SSO) enabled subscription.
+> [!NOTE]
+> This integration is also available to use from the Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from the public cloud.
+ ## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Lucid (All Products) supports **SP and IDP** initiated SSO
-* Lucid (All Products) supports **Just In Time** user provisioning
+* Lucid (All Products) supports **SP and IDP** initiated SSO.
+* Lucid (All Products) supports **Just In Time** user provisioning.
+ > [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant. --
-## Adding Lucid (All Products) from the gallery
+## Add Lucid (All Products) from the gallery
To configure the integration of Lucid (All Products) into Azure AD, you need to add Lucid (All Products) from the gallery to your list of managed SaaS apps.
To configure the integration of Lucid (All Products) into Azure AD, you need to
1. In the **Add from the gallery** section, type **Lucid (All Products)** in the search box. 1. Select **Lucid (All Products)** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. - ## Configure and test Azure AD SSO for Lucid (All Products) Configure and test Azure AD SSO with Lucid (All Products) using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Lucid (All Products).
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the **Lucid (All Products)** application integration page, find the **Manage** section and select **single sign-on**. 1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following step:
In the **Reply URL** text box, type a URL using the following pattern: `https://lucid.app/saml/sso/<TENANT_NAME>?idpHash=<HASH_ID>`
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up Lucid (All Products)** section, copy the appropriate URL(s) based on your requirement. ![Copy configuration URLs](common/copy-configuration-urls.png)+ ### Create an Azure AD test user In this section, you'll create a test user in the Azure portal called B.Simon.
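If you prefer to script this step instead of using the portal, the same test user can be created with the Azure CLI. This is a minimal sketch only; the UPN domain and password below are placeholders, not values from this tutorial.

```azurecli-interactive
# Minimal sketch: create the B.Simon test user with the Azure CLI.
# Replace the UPN domain with a verified domain in your tenant and choose your own strong password.
az ad user create \
  --display-name "B.Simon" \
  --user-principal-name "b.simon@contoso.onmicrosoft.com" \
  --password "<STRONG_PASSWORD>"
```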
In this section, you test your Azure AD single sign-on configuration with follow
#### SP initiated:
-1. Click on **Test this application** in Azure portal. This will redirect to Lucid (All Products) Sign on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to the Lucid (All Products) sign-on URL where you can initiate the login flow.
-1. Go to Lucid (All Products) Sign-on URL directly and initiate the login flow from there.
+* Go to Lucid (All Products) Sign-on URL directly and initiate the login flow from there.
#### IDP initiated:
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the Lucid (All Products) for which you set up the SSO
-
-You can also use Microsoft Access Panel to test the application in any mode. When you click the Lucid (All Products) tile in the Access Panel, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Lucid (All Products) for which you set up the SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Lucid (All Products) for which you set up the SSO.
+You can also use Microsoft My Apps to test the application in any mode. When you click the Lucid (All Products) tile in My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow, and if configured in IDP mode, you should be automatically signed in to the Lucid (All Products) for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next Steps
-Once you configure Lucid (All Products), you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure Lucid (All Products), you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Mondaycom Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/mondaycom-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with monday.com | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with monday.com'
description: Learn how to configure single sign-on between Azure Active Directory and monday.com.
Previously updated : 02/08/2021 Last updated : 01/28/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with monday.com
+# Tutorial: Azure AD SSO integration with monday.com
In this tutorial, you'll learn how to integrate monday.com with Azure Active Directory (Azure AD). When you integrate monday.com with Azure AD, you can:
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * monday.com single sign-on (SSO) enabled subscription.
+> [!NOTE]
+> This integration is also available to use from the Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from the public cloud.
+ ## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* monday.com supports **SP and IDP** initiated SSO
+* monday.com supports **SP and IDP** initiated SSO.
* monday.com supports [**automated** user provisioning and deprovisioning](mondaycom-provisioning-tutorial.md) (recommended).
-* monday.com supports **Just In Time** user provisioning
+* monday.com supports **Just In Time** user provisioning.
## Add monday.com from the gallery
Follow these steps to enable Azure AD SSO in the Azure portal.
c. After the metadata file is successfully uploaded, the **Identifier** and **Reply URL** values get auto populated in Basic SAML Configuration section. > [!Note]
- > If the **Identifier** and **Reply URL** values do not get populated automatically, then fill in the values manually. The **Identifier** and the **Reply URL** are the same and value is in the following pattern: `https://<your-domain>.monday.com/saml/saml_callback`
+ > If the **Identifier** and **Reply URL** values do not get populated automatically, then fill in the values manually. The **Identifier** and the **Reply URL** are the same, and the value is in the following pattern: `https://<YOUR_DOMAIN>.monday.com/saml/saml_callback`
1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
![Setup configuration](common/setup-sso.png)
-1. If you want to setup monday.com manually, open a new web browser window and sign in to monday.com as an administrator and perform the following steps:
+1. If you want to set up monday.com manually, open a new web browser window and sign in to monday.com as an administrator and perform the following steps:
-1. Go to the **Profile** on the top right corner of page and click on **Admin**.
+1. Go to the **Profile** on the top-right corner of the page and click on **Admin**.
- ![Screenshot shows the Admin profile selected.](./media/mondaycom-tutorial/configuration-1.png)
+ ![Screenshot shows the Admin profile selected.](./media/mondaycom-tutorial/admin.png)
1. Select **Security** and make sure to click on **Open** next to SAML.
- ![Screenshot shows the Security tab with the option to Open next to SAML.](./media/mondaycom-tutorial/configuration-2.png)
+ ![Screenshot shows the Security tab with the option to Open next to SAML.](./media/mondaycom-tutorial/security.png)
1. Fill in the details below from your IDP.
- ![Screenshot shows the SAML provider where you can enter information from your I D P.](./media/mondaycom-tutorial/configuration-3.png)
+ ![Screenshot shows the SAML provider where you can enter information from your I D P.](./media/mondaycom-tutorial/configuration.png)
> [!NOTE]
- > For more details refer [this](https://support.monday.com/hc/articles/360000460605-SAML-Single-Sign-on?abcb=34642) article
+ > For more details, refer to [this](https://support.monday.com/hc/articles/360000460605-SAML-Single-Sign-on?abcb=34642) article.
### Create monday.com test user
You can also use Microsoft My Apps to test the application in any mode. When you
## Next steps
-Once you configure monday.com, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure monday.com, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Oracle Cloud Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/oracle-cloud-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Oracle Cloud Infrastructure Console | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Oracle Cloud Infrastructure Console'
description: Learn how to configure single sign-on between Azure Active Directory and Oracle Cloud Infrastructure Console.
Previously updated : 10/04/2020 Last updated : 01/28/2022
-# Tutorial: Integrate Oracle Cloud Infrastructure Console with Azure Active Directory
+# Tutorial: Azure AD SSO integration with Oracle Cloud Infrastructure Console
In this tutorial, you'll learn how to integrate Oracle Cloud Infrastructure Console with Azure Active Directory (Azure AD). When you integrate Oracle Cloud Infrastructure Console with Azure AD, you can:
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * Oracle Cloud Infrastructure Console single sign-on (SSO) enabled subscription.
+> [!NOTE]
+> This integration is also available to use from the Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from the public cloud.
+ ## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
In this tutorial, you configure and test Azure AD SSO in a test environment.
* Oracle Cloud Infrastructure Console supports **SP** initiated SSO. * Oracle Cloud Infrastructure Console supports [**Automated** user provisioning and deprovisioning](oracle-cloud-infrastructure-console-provisioning-tutorial.md) (recommended).
-## Adding Oracle Cloud Infrastructure Console from the gallery
+## Add Oracle Cloud Infrastructure Console from the gallery
To configure the integration of Oracle Cloud Infrastructure Console into Azure AD, you need to add Oracle Cloud Infrastructure Console from the gallery to your list of managed SaaS apps.
Configure and test Azure AD SSO with Oracle Cloud Infrastructure Console using a
To configure and test Azure AD SSO with Oracle Cloud Infrastructure Console, perform the following steps: 1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** to enable your users to use this feature.
- 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** to test Azure AD single sign-on with B. Simon.
- 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** to enable B. Simon to use Azure AD single sign-on.
-1. **[Configure Oracle Cloud Infrastructure Console](#configure-oracle-cloud-infrastructure-console)** to configure the SSO settings on application side.
- 1. **[Create Oracle Cloud Infrastructure Console test user](#create-oracle-cloud-infrastructure-console-test-user)** to have a counterpart of B. Simon in Oracle Cloud Infrastructure Console that is linked to the Azure AD representation of user.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** to test Azure AD single sign-on with B. Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** to enable B. Simon to use Azure AD single sign-on.
+1. **[Configure Oracle Cloud Infrastructure Console SSO](#configure-oracle-cloud-infrastructure-console-sso)** to configure the SSO settings on application side.
+ 1. **[Create Oracle Cloud Infrastructure Console test user](#create-oracle-cloud-infrastructure-console-test-user)** to have a counterpart of B. Simon in Oracle Cloud Infrastructure Console that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** to verify whether the configuration works.
-### Configure Azure AD SSO
+## Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal. 1. In the Azure portal, on the **Oracle Cloud Infrastructure Console** application integration page, find the **Manage** section and select **Single sign-on**. 1. On the **Select a Single sign-on method** page, select **SAML**.
-1. On the **Set up Single Sign-On with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** page, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following steps:
> [!NOTE] > You will get the Service Provider metadata file from the **Configure Oracle Cloud Infrastructure Console Single Sign-On** section of the tutorial.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. Once the metadata file is successfully uploaded, the **Identifier** and **Reply URL** values get auto populated in **Basic SAML Configuration** section textbox. > [!NOTE]
- > If the **Identifier** and **Reply URL** values do not get auto polulated, then fill in the values manually according to your requirement.
+ > If the **Identifier** and **Reply URL** values do not get auto populated, then fill in the values manually according to your requirement.
In the **Sign-on URL** text box, type a URL using the following pattern: `https://console.<REGIONNAME>.oraclecloud.com/`
Follow these steps to enable Azure AD SSO in the Azure portal.
1. Click **Save**.
- ![image2](./media/oracle-cloud-tutorial/config07.png)
+ ![Screenshot showing image2](./media/oracle-cloud-tutorial/attributes.png)
- ![image3](./media/oracle-cloud-tutorial/config11.png)
+ ![Screenshot showing image3](./media/oracle-cloud-tutorial/claims.png)
1. Click the **pen** next to **Groups returned in claim**.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. Click **Save**.
- ![image4](./media/oracle-cloud-tutorial/config08.png)
+ ![Screenshot showing image4](./media/oracle-cloud-tutorial/groups.png)
1. On the **Set up Oracle Cloud Infrastructure Console** section, copy the appropriate URL(s) based on your requirement.
In this section, you'll enable B. Simon to use Azure single sign-on by granting
1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. 1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure Oracle Cloud Infrastructure Console
+## Configure Oracle Cloud Infrastructure Console SSO
1. In a different web browser window, sign in to Oracle Cloud Infrastructure Console as an Administrator. 1. Click on the left side of the menu and click on **Identity** then navigate to **Federation**.
- ![Configuration1](./media/oracle-cloud-tutorial/config01.png)
+ ![Screenshot showing Configuration1](./media/oracle-cloud-tutorial/menu.png)
1. Save the **Service Provider metadata file** by clicking the **Download this document** link, upload it into the **Basic SAML Configuration** section of the Azure portal, and then click on **Add Identity Provider**.
- ![Configuration2](./media/oracle-cloud-tutorial/config02.png)
+ ![Screenshot showing Configuration2](./media/oracle-cloud-tutorial/metadata.png)
1. On the **Add Identity Provider** pop-up, perform the following steps:
- ![Configuration3](./media/oracle-cloud-tutorial/config03.png)
+ ![Screenshot showing Configuration3](./media/oracle-cloud-tutorial/file.png)
1. In the **NAME** text box, enter your name.
In this section, you'll enable B. Simon to use Azure single sign-on by granting
1. Click **Continue** and on the **Edit Identity Provider** section perform the following steps:
- ![Configuration4](./media/oracle-cloud-tutorial/configure-09.png)
+ ![Screenshot showing Configuration4](./media/oracle-cloud-tutorial/mapping.png)
1. The **IDENTITY PROVIDER GROUP** should be selected as Azure AD Group Object ID. The GROUP ID should be the GUID of the group from Azure Active Directory. The group needs to be mapped to the corresponding group in the **OCI GROUP** field.
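One way to look up the group GUID mentioned in the step above is with the Azure CLI. This is a minimal sketch; the group display name is a placeholder, and depending on your CLI version the property is returned as `objectId` or `id`.

```azurecli-interactive
# Minimal sketch: look up the object ID (GUID) of an Azure AD group by display name.
# "OCI-Administrators" is a placeholder group name; newer Azure CLI versions return the
# property as "id" instead of "objectId".
az ad group show --group "OCI-Administrators" --query objectId -o tsv
```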
In this section, you'll enable B. Simon to use Azure single sign-on by granting
Oracle Cloud Infrastructure Console supports just-in-time provisioning, which is enabled by default. There is no action item for you in this section. A new user does not get created during an attempt to access the application, and there is no need to create the user manually.
-### Test SSO
+## Test SSO
-When you select the Oracle Cloud Infrastructure Console tile in the Access Panel, you will be redirected to the Oracle Cloud Infrastructure Console sign in page. Select the **IDENTITY PROVIDER** from the drop-down menu and click **Continue** as shown below to sign in. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+When you select the Oracle Cloud Infrastructure Console tile in My Apps, you will be redirected to the Oracle Cloud Infrastructure Console sign-in page. Select the **IDENTITY PROVIDER** from the drop-down menu and click **Continue** as shown below to sign in. For more information about My Apps, see [Introduction to My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-![Configuration](./media/oracle-cloud-tutorial/config10.png)
+![Screenshot showing Configuration](./media/oracle-cloud-tutorial/tenant.png)
## Next steps
-Once you configure the Oracle Cloud Infrastructure Console, you can enforce session controls, which protect against exfiltration and infiltration of your organization's sensitive data in real-time. Session controls extend from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
+Once you configure the Oracle Cloud Infrastructure Console, you can enforce session controls, which protect against exfiltration and infiltration of your organization's sensitive data in real time. Session controls extend from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Verifiable Credentials Configure Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant.md
The following diagram illustrates the Azure AD Verifiable Credentials architectu
![Diagram that illustrates the Azure AD Verifiable Credentials architecture.](media/verifiable-credentials-configure-tenant/verifiable-credentials-architecture.png)
+See a [video walkthrough](https://www.youtube.com/watch?v=8jqjHjQo-3c) of setting up the Azure AD Verifiable Credential service, including all prerequisites, like Azure AD and an Azure subscription.
+ ## Prerequisites - If you don't have an Azure subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
aks Open Service Mesh About https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/open-service-mesh-about.md
OSM provides the following capabilities and features:
- Define and execute fine grained access control policies for services.
- Monitor and debug services using observability and insights into application metrics.
- Integrate with external certificate management.
+- Integrate with existing ingress solutions such as the [Azure Application Gateway Ingress Controller][agic], [NGINX][nginx], and [Contour][contour]. For more details on how ingress works with OSM, see [Using Ingress to manage external access to services within the cluster][osm-ingress]. For an example on integrating OSM with Contour for ingress, see [Ingress with Contour][osm-contour]. For an example on integrating OSM with ingress controllers that use the `networking.k8s.io/v1` API, such as NGINX, see [Ingress with Kubernetes Nginx Ingress Controller][osm-nginx].
## Example scenarios
The OSM AKS add-on has the following limitations:
* [Iptables redirection][ip-tables-redirection] for port IP address and port range exclusion must be enabled using `kubectl patch` after installation. For more details, see [iptables redirection][ip-tables-redirection].
* Pods that are onboarded to the mesh that need access to IMDS, Azure DNS, or the Kubernetes API server must have their IP addresses added to the global list of excluded outbound IP ranges using [Global outbound IP range exclusions][global-exclusion].
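As an illustration of the second limitation above, the exclusion is applied by patching the OSM MeshConfig. This is a minimal sketch only: the `outboundIPRangeExclusionList` field name and the Azure IMDS and Azure DNS addresses shown are assumptions based on the linked OSM documentation, so confirm them for your OSM version before applying.

```azurecli-interactive
# Minimal sketch: exclude the Azure IMDS (169.254.169.254) and Azure DNS (168.63.129.16)
# endpoints from OSM's outbound traffic interception via the MeshConfig.
kubectl patch meshconfig osm-mesh-config -n kube-system --type merge \
  -p '{"spec":{"traffic":{"outboundIPRangeExclusionList":["169.254.169.254/32","168.63.129.16/32"]}}}'
```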
+## Next steps
+
+After enabling the OSM add-on using the [Azure CLI][osm-azure-cli] or a [Bicep template][osm-bicep], you can:
+* [Deploy a sample application][osm-deploy-sample-app]
+* [Onboard an existing application][osm-onboard-app]
+
+[ip-tables-redirection]: https://release-v1-0.docs.openservicemesh.io/docs/guides/traffic_management/iptables_redirection/
+[global-exclusion]: https://release-v1-0.docs.openservicemesh.io/docs/guides/traffic_management/iptables_redirection/#global-outbound-ip-range-exclusions
[osm-azure-cli]: open-service-mesh-deploy-addon-az-cli.md [osm-bicep]: open-service-mesh-deploy-addon-bicep.md
+[osm-deploy-sample-app]: https://release-v1-0.docs.openservicemesh.io/docs/getting_started/install_apps/
+[osm-onboard-app]: https://release-v1-0.docs.openservicemesh.io/docs/guides/app_onboarding/
[ip-tables-redirection]: https://docs.openservicemesh.io/docs/guides/traffic_management/iptables_redirection/
-[global-exclusion]: https://docs.openservicemesh.io/docs/guides/traffic_management/iptables_redirection/#global-outbound-ip-range-exclusions
+[global-exclusion]: https://docs.openservicemesh.io/docs/guides/traffic_management/iptables_redirection/#global-outbound-ip-range-exclusions
+[agic]: ../application-gateway/ingress-controller-overview.md
+[nginx]: https://github.com/kubernetes/ingress-nginx
+[contour]: https://projectcontour.io/
+[osm-ingress]: https://release-v1-0.docs.openservicemesh.io/docs/guides/traffic_management/ingress/
+[osm-contour]: https://release-v1-0.docs.openservicemesh.io/docs/demos/ingress_contour
+[osm-nginx]: https://release-v1-0.docs.openservicemesh.io/docs/demos/ingress_k8s_nginx
aks Open Service Mesh Azure Application Gateway Ingress https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/open-service-mesh-azure-application-gateway-ingress.md
- Title: Using Azure Application Gateway Ingress
-description: How to use Azure Application Gateway Ingress with Open Service Mesh
-- Previously updated : 8/26/2021---
-# Deploy an application managed by Open Service Mesh (OSM) using Azure Application Gateway ingress AKS add-on
-
-In this tutorial, you will:
-
-> [!div class="checklist"]
->
-> - View the current OSM cluster configuration
-> - Create the namespace(s) for OSM to manage deployed applications in the namespace(s)
-> - Onboard the namespaces to be managed by OSM
-> - Deploy the sample application
-> - Verify the application running inside the AKS cluster
-> - Create an Azure Application Gateway to be used as the ingress controller for the application
-> - Expose a service via the Azure Application Gateway ingress to the internet
-
-## Before you begin
-
-The steps detailed in this walkthrough assume that you have previously enabled the OSM AKS add-on for your AKS cluster. If not, review the article [Deploy the OSM AKS add-on](./open-service-mesh-deploy-addon-az-cli.md) before proceeding. Also, your AKS cluster needs to be version Kubernetes `1.19+` and above, have Kubernetes RBAC enabled, and have established a `kubectl` connection with the cluster (if you need help with any of these items, see the [AKS quickstart](./kubernetes-walkthrough.md)), and have installed the AKS OSM add-on.
-
-You must have the following resources installed:
-- The Azure CLI, version 2.20.0 or later
-- OSM version v0.11.1 or later
-- JSON processor "jq" version 1.6+
-
-## View and verify the current OSM cluster configuration
-
-Once the OSM add-on for AKS has been enabled on the AKS cluster, you can view the current configuration parameters in the osm-mesh-config resource. Run the following command to view the properties:
-
-```azurecli-interactive
-kubectl get meshconfig osm-mesh-config -n kube-system -o yaml
-```
-
-Output shows the current OSM MeshConfig for the cluster.
-
-```
-apiVersion: config.openservicemesh.io/v1alpha1
-kind: MeshConfig
-metadata:
- creationTimestamp: "0000-00-00A00:00:00A"
- generation: 1
- name: osm-mesh-config
- namespace: kube-system
- resourceVersion: "2494"
- uid: 6c4d67f3-c241-4aeb-bf4f-b029b08faa31
-spec:
- certificate:
- serviceCertValidityDuration: 24h
- featureFlags:
- enableEgressPolicy: true
- enableMulticlusterMode: false
- enableWASMStats: true
- observability:
- enableDebugServer: true
- osmLogLevel: info
- tracing:
- address: jaeger.osm-system.svc.cluster.local
- enable: false
- endpoint: /api/v2/spans
- port: 9411
- sidecar:
- configResyncInterval: 0s
- enablePrivilegedInitContainer: false
- envoyImage: mcr.microsoft.com/oss/envoyproxy/envoy:v1.18.3
- initContainerImage: mcr.microsoft.com/oss/openservicemesh/init:v0.9.1
- logLevel: error
- maxDataPlaneConnections: 0
- resources: {}
- traffic:
- enableEgress: true
- enablePermissiveTrafficPolicyMode: true
- inboundExternalAuthorization:
- enable: false
- failureModeAllow: false
- statPrefix: inboundExtAuthz
- timeout: 1s
- useHTTPSIngress: false
-```
-
-Notice the **enablePermissiveTrafficPolicyMode** is configured to **true**. Permissive traffic policy mode in OSM is a mode where the [SMI](https://smi-spec.io/) traffic policy enforcement is bypassed. In this mode, OSM automatically discovers services that are a part of the service mesh and programs traffic policy rules on each Envoy proxy sidecar to be able to communicate with these services.
-
-## Create namespaces for the application
-
-In this tutorial we will be using the OSM bookstore application that has the following application components:
-- `bookbuyer`
-- `bookthief`
-- `bookstore`
-- `bookwarehouse`
-
-Create namespaces for each of these application components.
-
-```azurecli-interactive
-for i in bookstore bookbuyer bookthief bookwarehouse; do kubectl create ns $i; done
-```
-
-You should see the following output:
-
-```Output
-namespace/bookstore created
-namespace/bookbuyer created
-namespace/bookthief created
-namespace/bookwarehouse created
-```
-
-## Onboard the namespaces to be managed by OSM
-
-When you add the namespaces to the OSM mesh, this will allow the OSM controller to automatically inject the Envoy sidecar proxy containers with your application. Run the following command to onboard the OSM bookstore application namespaces.
-
-```azurecli-interactive
-osm namespace add bookstore bookbuyer bookthief bookwarehouse
-```
-
-You should see the following output:
-
-```Output
-Namespace [bookstore] successfully added to mesh [osm]
-Namespace [bookbuyer] successfully added to mesh [osm]
-Namespace [bookthief] successfully added to mesh [osm]
-Namespace [bookwarehouse] successfully added to mesh [osm]
-```
-
-## Deploy the Bookstore application
-
-```azurecli-interactive
-SAMPLE_VERSION=v0.11
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-$SAMPLE_VERSION/docs/example/manifests/apps/bookbuyer.yaml
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-$SAMPLE_VERSION/docs/example/manifests/apps/bookthief.yaml
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-$SAMPLE_VERSION/docs/example/manifests/apps/bookstore.yaml
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-$SAMPLE_VERSION/docs/example/manifests/apps/bookwarehouse.yaml
-```
-
-All of the deployment outputs are summarized below.
-
-```Output
-serviceaccount/bookbuyer created
-service/bookbuyer created
-deployment.apps/bookbuyer created
-
-serviceaccount/bookthief created
-service/bookthief created
-deployment.apps/bookthief created
-
-service/bookstore created
-serviceaccount/bookstore created
-deployment.apps/bookstore created
-
-serviceaccount/bookwarehouse created
-service/bookwarehouse created
-deployment.apps/bookwarehouse created
-```
-
-## Update the `Bookbuyer` Service
-
-Update the `bookbuyer` service to the correct inbound port configuration with the following service manifest.
-
-```azurecli-interactive
-kubectl apply -f - <<EOF
-apiVersion: v1
-kind: Service
-metadata:
- name: bookbuyer
- namespace: bookbuyer
- labels:
- app: bookbuyer
-spec:
- ports:
- - port: 14001
- name: inbound-port
- selector:
- app: bookbuyer
-EOF
-```
-
-## Verify the Bookstore application
-
-As of now we have deployed the bookstore multi-container application, but it is only accessible from within the AKS cluster. Later we will add the Azure Application Gateway ingress controller to expose the application outside the AKS cluster. To verify that the application is running inside the cluster, we will use a port forward to view the `bookbuyer` component UI.
-
-First let's get the `bookbuyer` pod's name
-
-```azurecli-interactive
-kubectl get pod -n bookbuyer
-```
-
-You should see output similar to the following. Your `bookbuyer` pod will have a unique name appended.
-
-```Output
-NAME READY STATUS RESTARTS AGE
-bookbuyer-7676c7fcfb-mtnrz 2/2 Running 0 7m8s
-```
-
-Once we have the pod's name, we can now use the port-forward command to set up a tunnel from our local system to the application inside the AKS cluster. Run the following command to set up the port forward for the local system port 8080. Again use your specific `bookbuyer` pod name.
-
-```azurecli-interactive
-kubectl port-forward bookbuyer-7676c7fcfb-mtnrz -n bookbuyer 8080:14001
-```
-
-You should see output similar to this.
-
-```Output
-Forwarding from 127.0.0.1:8080 -> 14001
-Forwarding from [::1]:8080 -> 14001
-```
-
-While the port forwarding session is in place, navigate to the following url from a browser `http://localhost:8080`. You should now be able to see the `bookbuyer` application UI in the browser similar to the image below.
-
-![OSM bookbuyer app for App Gateway UI image](./media/aks-osm-addon/osm-agic-bookbuyer-img.png)
-
-## Create an Azure Application Gateway to expose the `bookbuyer` application
-
-> [!NOTE]
-> The following directions will create a new instance of the Azure Application Gateway to be used for ingress. If you have an existing Azure Application Gateway you wish to use, skip to the section for enabling the Application Gateway Ingress Controller add-on.
-
-### Deploy a new Application Gateway
-
-> [!NOTE]
-> We are referencing existing documentation for enabling the Application Gateway Ingress Controller add-on for an existing AKS cluster. Some modifications have been made to suit the OSM materials. More detailed documentation on the subject can be found [here](../application-gateway/tutorial-ingress-controller-add-on-existing.md).
-
-You'll now deploy a new Application Gateway, to simulate having an existing Application Gateway that you want to use to load balance traffic to your AKS cluster, _myCluster_. The name of the Application Gateway will be _myApplicationGateway_, but you will need to first create a public IP resource, named _myPublicIp_, and a new virtual network called _myVnet_ with address space 11.0.0.0/8, and a subnet with address space 11.1.0.0/16 called _mySubnet_, and deploy your Application Gateway in _mySubnet_ using _myPublicIp_.
-
-When using an AKS cluster and Application Gateway in separate virtual networks, the address spaces of the two virtual networks must not overlap. The default address space that an AKS cluster deploys in is 10.0.0.0/8, so we set the Application Gateway virtual network address prefix to 11.0.0.0/8.
-
-```azurecli-interactive
-az group create --name myResourceGroup --location eastus2
-az network public-ip create -n myPublicIp -g MyResourceGroup --allocation-method Static --sku Standard
-az network vnet create -n myVnet -g myResourceGroup --address-prefix 11.0.0.0/8 --subnet-name mySubnet --subnet-prefix 11.1.0.0/16
-az network application-gateway create -n myApplicationGateway -l eastus2 -g myResourceGroup --sku Standard_v2 --public-ip-address myPublicIp --vnet-name myVnet --subnet mySubnet
-```
-
-> [!NOTE]
-> Application Gateway Ingress Controller (AGIC) add-on **only** supports Application Gateway v2 SKUs (Standard and WAF), and **not** the Application Gateway v1 SKUs.
-
-### Enable the AGIC add-on for an existing AKS cluster through Azure CLI
-
-If you'd like to continue using Azure CLI, you can continue to enable the AGIC add-on in the AKS cluster you created, _myCluster_, and specify the AGIC add-on to use the existing Application Gateway you created, _myApplicationGateway_.
-
-```azurecli-interactive
-appgwId=$(az network application-gateway show -n myApplicationGateway -g myResourceGroup -o tsv --query "id")
-az aks enable-addons -n myCluster -g myResourceGroup -a ingress-appgw --appgw-id $appgwId
-```
-
-You can verify the Azure Application Gateway AKS add-on has been enabled by the following command.
-
-```azurecli-interactive
-az aks list -g osm-aks-rg -o json | jq -r .[].addonProfiles.ingressApplicationGateway.enabled
-```
-
-This command should show the output as `true`.
-
-### Peer the two virtual networks together
-
-Since we deployed the AKS cluster in its own virtual network and the Application Gateway in another virtual network, you'll need to peer the two virtual networks together in order for traffic to flow from the Application Gateway to the pods in the cluster. Peering the two virtual networks requires running the Azure CLI command two separate times, to ensure that the connection is bi-directional. The first command will create a peering connection from the Application Gateway virtual network to the AKS virtual network; the second command will create a peering connection in the other direction.
-
-```azurecli-interactive
-nodeResourceGroup=$(az aks show -n myCluster -g myResourceGroup -o tsv --query "nodeResourceGroup")
-aksVnetName=$(az network vnet list -g $nodeResourceGroup -o tsv --query "[0].name")
-
-aksVnetId=$(az network vnet show -n $aksVnetName -g $nodeResourceGroup -o tsv --query "id")
-az network vnet peering create -n AppGWtoAKSVnetPeering -g myResourceGroup --vnet-name myVnet --remote-vnet $aksVnetId --allow-vnet-access
-
-appGWVnetId=$(az network vnet show -n myVnet -g myResourceGroup -o tsv --query "id")
-az network vnet peering create -n AKStoAppGWVnetPeering -g $nodeResourceGroup --vnet-name $aksVnetName --remote-vnet $appGWVnetId --allow-vnet-access
-```
-
-## Expose the `bookbuyer` service to the internet
-
-Apply the following ingress manifest to the AKS cluster to expose the `bookbuyer` service to the internet via the Azure Application Gateway.
-
-```azurecli-interactive
-kubectl apply -f - <<EOF
-
-apiVersion: networking.k8s.io/v1
-kind: Ingress
-metadata:
- name: bookbuyer-ingress
- namespace: bookbuyer
- annotations:
- kubernetes.io/ingress.class: azure/application-gateway
-
-spec:
-
- rules:
- - host: bookbuyer.contoso.com
- http:
- paths:
- - path: /
- backend:
- serviceName: bookbuyer
- servicePort: 14001
-
- backend:
- serviceName: bookbuyer
- servicePort: 14001
-EOF
-```
-
-You should see the following output
-
-```Output
-Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
-ingress.extensions/bookbuyer-ingress created
-```
-
-Since the host name in the ingress manifest is a pseudo name used for testing, the DNS name will not be available on the internet. We can alternatively use the curl program and pass the hostname header to the Azure Application Gateway public IP address and receive a 200 code, successfully connecting us to the `bookbuyer` service.
-
-```azurecli-interactive
-appGWPIP=$(az network public-ip show -g MyResourceGroup -n myPublicIp -o tsv --query "ipAddress")
-curl -H 'Host: bookbuyer.contoso.com' http://$appGWPIP/
-```
-
-You should see the following output
-
-```Output
-<!doctype html>
-<html itemscope="" itemtype="http://schema.org/WebPage" lang="en">
- <head>
- <meta content="Bookbuyer" name="description">
- <meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
- <title>Bookbuyer</title>
- <style>
- #navbar {
- width: 100%;
- height: 50px;
- display: table;
- border-spacing: 0;
- white-space: nowrap;
- line-height: normal;
- background-color: #0078D4;
- background-position: left top;
- background-repeat-x: repeat;
- background-image: none;
- color: white;
- font: 2.2em "Fira Sans", sans-serif;
- }
- #main {
- padding: 10pt 10pt 10pt 10pt;
- font: 1.8em "Fira Sans", sans-serif;
- }
- li {
- padding: 10pt 10pt 10pt 10pt;
- font: 1.2em "Consolas", sans-serif;
- }
- </style>
- <script>
- setTimeout(function(){window.location.reload(1);}, 1500);
- </script>
- </head>
- <body bgcolor="#fff">
- <div id="navbar">
- &#128214; Bookbuyer
- </div>
- <div id="main">
- <ul>
- <li>Total books bought: <strong>5969</strong>
- <ul>
- <li>from bookstore V1: <strong>277</strong>
- <li>from bookstore V2: <strong>5692</strong>
- </ul>
- </li>
- </ul>
- </div>
-
- <br/><br/><br/><br/>
- <br/><br/><br/><br/>
- <br/><br/><br/><br/>
-
- Current Time: <strong>Fri, 26 Mar 2021 16:34:30 UTC</strong>
- </body>
-</html>
-```
-
-## Troubleshooting
-- [AGIC Troubleshooting Documentation](../application-gateway/ingress-controller-troubleshoot.md)
-- [Additional troubleshooting tools are available on AGIC's GitHub repo](https://github.com/Azure/application-gateway-kubernetes-ingress/blob/master/docs/troubleshootings/troubleshooting-installing-a-simple-application.md)
aks Open Service Mesh Deploy Addon Az Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/open-service-mesh-deploy-addon-az-cli.md
Alternatively, you can uninstall the OSM add-on and the related resources from y
## Next steps
-This article showed you how to install the OSM add-on on an AKS cluster and verify it is installed an running. To deploy a sample application on your OSM mesh, see [Manage a new application with OSM on AKS][osm-sample]
+This article showed you how to install the OSM add-on on an AKS cluster and verify it is installed and running. With the OSM add-on on your cluster, you can [Deploy a sample application][osm-deploy-sample-app] or [Onboard an existing application][osm-onboard-app] to work with your OSM mesh.
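If you want to re-check the add-on state from a script later, a minimal sketch might look like the following. The resource group and cluster names are placeholders, and the `openServiceMesh` add-on profile key and `app=osm-controller` label are assumptions to verify against your own cluster.

```azurecli-interactive
# Minimal sketch: confirm the OSM add-on is enabled and its controller pods are running.
# "myResourceGroup" and "myCluster" are placeholders for your own resource group and cluster names.
az aks show -g myResourceGroup -n myCluster \
  --query 'addonProfiles.openServiceMesh.enabled' -o tsv
kubectl get pods -n kube-system -l app=osm-controller
```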
[aks-ephemeral]: cluster-configuration.md#ephemeral-os [osm-sample]: open-service-mesh-deploy-new-application.md [osm-uninstall]: open-service-mesh-uninstall-add-on.md
-[smi]: https://smi-spec.io/
+[smi]: https://smi-spec.io/
+[osm-deploy-sample-app]: https://release-v1-0.docs.openservicemesh.io/docs/getting_started/install_apps/
+[osm-onboard-app]: https://release-v1-0.docs.openservicemesh.io/docs/guides/app_onboarding/
aks Open Service Mesh Deploy Addon Bicep https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/open-service-mesh-deploy-addon-bicep.md
az group delete --name osm-bicep-test
Alternatively, you can uninstall the OSM add-on and the related resources from your cluster. For more information, see [Uninstall the Open Service Mesh (OSM) add-on from your AKS cluster][osm-uninstall].
+## Next steps
+
+This article showed you how to install the OSM add-on on an AKS cluster and verify it is installed and running. With the OSM add-on on your cluster, you can [Deploy a sample application][osm-deploy-sample-app] or [Onboard an existing application][osm-onboard-app] to work with your OSM mesh.
+ <!-- Links --> <!-- Internal -->
Alternatively, you can uninstall the OSM add-on and the related resources from y
[az-extension-add]: /cli/azure/extension#az_extension_add [az-extension-update]: /cli/azure/extension#az_extension_update [osm-uninstall]: open-service-mesh-uninstall-add-on.md
+[osm-deploy-sample-app]: https://release-v1-0.docs.openservicemesh.io/docs/getting_started/install_apps/
+[osm-onboard-app]: https://release-v1-0.docs.openservicemesh.io/docs/guides/app_onboarding/
aks Open Service Mesh Deploy Existing Application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/open-service-mesh-deploy-existing-application.md
- Title: Onboard applications to Open Service Mesh
-description: How to onboard an application to Open Service Mesh
-- Previously updated : 8/26/2021---
-# Onboarding applications to Open Service Mesh (OSM) Azure Kubernetes Service (AKS) add-on
-
-The following guide describes how to onboard a kubernetes microservice to OSM.
-
-## Before you begin
-
-The steps detailed in this walk-through assume that you've previously enabled the OSM AKS add-on for your AKS cluster. If not, review the article [Deploy the OSM AKS add-on](./open-service-mesh-deploy-addon-az-cli.md) before proceeding. Also, your AKS cluster needs to be version Kubernetes `1.19+` and above, have Kubernetes RBAC enabled, and have established a `kubectl` connection with the cluster (if you need help with any of these items, see the [AKS quickstart](./kubernetes-walkthrough.md)), and have installed the AKS OSM add-on.
-
-You must have the following resources installed:
-- The Azure CLI, version 2.20.0 or later
-- OSM add-on version v0.11.1 or later
-- OSM CLI version v0.11.1 or later
-
-## Verify the Open Service Mesh (OSM) Permissive Traffic Mode Policy
-
-The OSM Permissive Traffic Policy mode is a mode where the [SMI](https://smi-spec.io/) traffic policy enforcement is bypassed. In this mode, OSM automatically discovers services that are a part of the service mesh and programs traffic policy rules on each Envoy proxy sidecar to be able to communicate with these services.
-
-To verify the current permissive traffic mode of OSM for your cluster, run the following command:
-
-```azurecli-interactive
-kubectl get meshconfig osm-mesh-config -n kube-system -o jsonpath='{.spec.traffic.enablePermissiveTrafficPolicyMode}{"\n"}'
-true
-```
-
-If the **enablePermissiveTrafficPolicyMode** is configured to **true**, you can safely onboard your namespaces without any disruption to your service-to-service communications. If the **enablePermissiveTrafficPolicyMode** is configured to **false**, you'll need to ensure you have the correct [SMI](https://smi-spec.io/) traffic access policy manifests deployed. You'll also need to ensure you have a service account representing each service deployed in the namespace. For more detailed information about permissive traffic mode, please visit and read the [Permissive Traffic Policy Mode](https://docs.openservicemesh.io/docs/guides/traffic_management/permissive_mode/) article.
-
-## Onboard applications with Open Service Mesh (OSM) Permissive Traffic Policy configured as True
-
-1. Refer to the [application requirements](https://docs.openservicemesh.io/docs/guides/app_onboarding/prereqs/) guide before onboarding applications.
-
-1. If an application in the mesh needs to communicate with the Kubernetes API server, the user needs to explicitly allow this either by using IP range exclusion or by creating an egress policy.
-
-1. Onboard Kubernetes Namespaces to OSM
-
- To onboard a namespace containing applications to be managed by OSM, run the `osm namespace add` command:
-
- ```console
- $ osm namespace add <namespace>
- ```
-
- By default, the `osm namespace add` command enables automatic sidecar injection for pods in the namespace.
-
- To disable automatic sidecar injection as a part of enrolling a namespace into the mesh, use `osm namespace add <namespace> --disable-sidecar-injection`.
- Once a namespace has been onboarded, pods can be enrolled in the mesh by configuring automatic sidecar injection. See the [Sidecar Injection](https://docs.openservicemesh.io/docs/guides/app_onboarding/sidecar_injection/) document for more details.
-
-1. Deploy new applications or redeploy existing applications
-
- By default, new deployments in onboarded namespaces are enabled for automatic sidecar injection. This means that when a new pod is created in a managed namespace, OSM will automatically inject the sidecar proxy to the Pod.
- Existing deployments need to be restarted so that OSM can automatically inject the sidecar proxy upon Pod re-creation. Pods managed by a deployment can be restarted using the `kubectl rollout restart deploy` command.
-
- In order to route protocol specific traffic correctly to service ports, configure the application protocol to use. Refer to the [application protocol selection guide](https://docs.openservicemesh.io/docs/guides/app_onboarding/app_protocol_selection/) to learn more.
--
-## Onboard existing deployed applications with Open Service Mesh (OSM) Permissive Traffic Policy configured as False
-
-When the OSM configuration for the permissive traffic policy is set to `false`, OSM will require explicit [SMI](https://smi-spec.io/) traffic access policies deployed for the service-to-service communication to happen within your cluster. Since OSM uses Kubernetes service accounts to implement access control policies between applications in the mesh, apply [SMI](https://smi-spec.io/) traffic access policies to authorize traffic flow between applications.
-
-For example SMI policies, please see the following examples:
- - [demo/deploy-traffic-specs.sh](https://github.com/openservicemesh/osm/blob/release-v0.11/demo/deploy-traffic-specs.sh)
- - [demo/deploy-traffic-split.sh](https://github.com/openservicemesh/osm/blob/release-v0.11/demo/deploy-traffic-split.sh)
- - [demo/deploy-traffic-target.sh](https://github.com/openservicemesh/osm/blob/release-v0.11/demo/deploy-traffic-target.sh)
--
-#### Removing Namespaces
-Namespaces can be removed from the OSM mesh with the `osm namespace remove` command:
-
-```console
-$ osm namespace remove <namespace>
-```
-
-> [!NOTE]
->
-> - The **`osm namespace remove`** command only tells OSM to stop applying updates to the sidecar proxy configurations in the namespace. It **does not** remove the proxy sidecars. This means the existing proxy configuration will continue to be used, but it will not be updated by the OSM control plane. If you wish to remove the proxies from all pods, remove the pods' namespaces from the mesh using the OSM CLI and redeploy the corresponding pods or deployments.
aks Open Service Mesh Deploy New Application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/open-service-mesh-deploy-new-application.md
- Title: Manage a new application with Open Service Mesh
-description: How to manage a new application with Open Service Mesh
-- Previously updated : 11/10/2021---
-# Manage a new application with Open Service Mesh (OSM) on Azure Kubernetes Service (AKS)
-
-This article shows you how to run a sample application on your OSM mesh running on AKS.
-
-## Prerequisites
-
-- An existing AKS cluster with the AKS OSM add-on installed. If you need to create a cluster or enable the AKS OSM add-on on an existing cluster, see [Install the Open Service Mesh (OSM) Azure Kubernetes Service (AKS) add-on using Azure CLI][osm-cli]
-- OSM mesh version v0.11.1 or later running on your cluster.
-- The Azure CLI, version 2.20.0 or later.
-- The latest version of the OSM CLI.
-
-## Verify your mesh has permissive mode enabled
-
-Use `kubectl get meshconfig osm-mesh-config` to verify *enablePermissiveTrafficPolicyMode* is *true*. For example:
-
-```azurecli-interactive
-kubectl get meshconfig osm-mesh-config -n kube-system -o=jsonpath='{$.spec.traffic.enablePermissiveTrafficPolicyMode}'
-```
-
-If permissive mode is not enabled, you can enable it using `kubectl patch meshconfig osm-mesh-config`. For example:
-
-```azurecli-interactive
-kubectl patch meshconfig osm-mesh-config -n kube-system -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":true}}}' --type=merge
-```
-
-## Create and onboard the namespaces to be managed by OSM
-
-When you add namespaces to the OSM mesh, the OSM controller automatically injects the Envoy sidecar proxy containers with applications deployed in those namespaces. Use `kubectl create ns` to create the *bookstore*, *bookbuyer*, *bookthief*, and *bookwarehouse* namespaces, then use `osm namespace add` to add those namespaces to your mesh.
-
-```azurecli-interactive
-kubectl create ns bookstore
-kubectl create ns bookbuyer
-kubectl create ns bookthief
-kubectl create ns bookwarehouse
-
-osm namespace add bookstore bookbuyer bookthief bookwarehouse
-```
-
-You should see the following output:
-
-```output
-namespace/bookstore created
-namespace/bookbuyer created
-namespace/bookthief created
-namespace/bookwarehouse created
-
-Namespace [bookstore] successfully added to mesh [osm]
-Namespace [bookbuyer] successfully added to mesh [osm]
-Namespace [bookthief] successfully added to mesh [osm]
-Namespace [bookwarehouse] successfully added to mesh [osm]
-```
-
-## Deploy the sample application to the AKS cluster
-
-Use `kubectl apply` to deploy the sample application to your cluster.
-
-```azurecli-interactive
-SAMPLE_VERSION=v0.11
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-$SAMPLE_VERSION/docs/example/manifests/apps/bookbuyer.yaml
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-$SAMPLE_VERSION/docs/example/manifests/apps/bookthief.yaml
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-$SAMPLE_VERSION/docs/example/manifests/apps/bookstore.yaml
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-$SAMPLE_VERSION/docs/example/manifests/apps/bookwarehouse.yaml
-```
-
-You should see the following output:
-
-```output
-serviceaccount/bookbuyer created
-deployment.apps/bookbuyer created
-serviceaccount/bookthief created
-deployment.apps/bookthief created
-service/bookstore created
-serviceaccount/bookstore created
-deployment.apps/bookstore created
-serviceaccount/bookwarehouse created
-service/bookwarehouse created
-deployment.apps/bookwarehouse created
-```
-
-The sample application is an example of a multi-tiered application that works well for testing service mesh functionality. The application consists of four
-
-![OSM sample application architecture](./media/aks-osm-addon/osm-bookstore-app-arch.png)
-
-Both the *bookbuyer* and *bookthief* services communicate with the *bookstore* service to retrieve books. The *bookstore* service retrieves books from the *bookwarehouse* service. This application helps demonstrate how a service mesh can be used to protect and authorize communications between the services. For example, later sections show how to disable permissive traffic mode and use SMI policies to secure access to services.
-
-## Access the bookbuyer and bookthief services using port forwarding
-
-Use `kubectl get pod` to get the name of the *bookbuyer* pod in the *bookbuyer* namespace. For example:
-
-```output
-$ kubectl get pod -n bookbuyer
-
-NAME READY STATUS RESTARTS AGE
-bookbuyer-1245678901-abcde 2/2 Running 0 7m8s
-```
-
-Open a new terminal and use `kubectl port-forward` to begin forwarding traffic between your development computer and the *bookbuyer* pod. For example:
-
-```output
-$ kubectl port-forward bookbuyer-1245678901-abcde -n bookbuyer 8080:14001
-Forwarding from 127.0.0.1:8080 -> 14001
-Forwarding from [::1]:8080 -> 14001
-```
-
-The above example shows traffic is being forwarded between port 8080 on your development computer and port 14001 on the *bookbuyer-1245678901-abcde* pod.
-
-Go to `http://localhost:8080` on a web browser and confirm you see the *bookbuyer* application. For example:
-
-![OSM bookbuyer application](./media/aks-osm-addon/osm-bookbuyer-service-ui.png)
-
-Notice the number of bought books continues to increase. Stop the port forwarding command.
-
-Use `kubectl get pod` to get the name of the *bookthief* pod in the *bookthief* namespace. For example:
-
-```output
-$ kubectl get pod -n bookthief
-
-NAME READY STATUS RESTARTS AGE
-bookthief-1245678901-abcde 2/2 Running 0 7m8s
-```
-
-Open a new terminal and use `kubectl port-forward` to begin forwarding traffic between your development computer and the *bookthief* pod. For example:
-
-```output
-$ kubectl port-forward bookthief-1245678901-abcde -n bookthief 8080:14001
-Forwarding from 127.0.0.1:8080 -> 14001
-Forwarding from [::1]:8080 -> 14001
-```
-
-The above example shows traffic is being forwarded between port 8080 on your development computer and port 14001 on the *bookthief-1245678901-abcde* pod.
-
-Go to `http://localhost:8080` on a web browser and confirm you see the *bookthief* application. For example:
-
-![OSM bookthief application](./media/aks-osm-addon/osm-bookthief-service-ui.png)
-
-Notice the number of stolen books continues to increase. Stop the port forwarding command.
-
-## Disable permissive traffic mode on your mesh
-
-When permissive traffic mode is enabled, you do not need to define explicit [SMI][smi] policies for services to communicate with other services in onboarded namespaces. For more information on permissive traffic mode in OSM, see [Permissive Traffic Policy Mode][osm-permissive-traffic-mode].
-
-In the sample application with permissive mode enabled, both the *bookbuyer* and *bookthief* services can communicate with the *bookstore* service and obtain books.
-
-Use `kubectl patch meshconfig osm-mesh-config` to disable permissive traffic mode:
-
-```azurecli-interactive
-kubectl patch meshconfig osm-mesh-config -n kube-system -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":false}}}' --type=merge
-```
-
-The following example output shows the *osm-mesh-config* has been updated:
-
-```output
-$ kubectl patch meshconfig osm-mesh-config -n kube-system -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":false}}}' --type=merge
-
-meshconfig.config.openservicemesh.io/osm-mesh-config patched
-```
-
-Repeat the steps from the previous section to forward traffic between the *bookbuyer* service and your development computer. Confirm the counter is no longer incrementing, even if you refresh the page. Stop the port forwarding command and repeat the steps to forward traffic between the *bookthief* service and your development computer. Confirm the counter is no longer incrementing even if you refresh the page. Stop the port forwarding command.
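-
-As an optional shortcut when repeating these checks, `kubectl port-forward` can also target a deployment instead of a specific pod, which avoids looking up the pod name each time. For example:
-
-```azurecli-interactive
-# Forward local port 8080 to a pod selected from the bookbuyer deployment
-kubectl port-forward deploy/bookbuyer -n bookbuyer 8080:14001
-```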
-
-## Apply an SMI traffic access policy for buying books
-
-Create `allow-bookbuyer-smi.yaml` using the following YAML:
-
-```yaml
-apiVersion: access.smi-spec.io/v1alpha3
-kind: TrafficTarget
-metadata:
- name: bookbuyer-access-bookstore
- namespace: bookstore
-spec:
- destination:
- kind: ServiceAccount
- name: bookstore
- namespace: bookstore
- rules:
- - kind: HTTPRouteGroup
- name: bookstore-service-routes
- matches:
- - buy-a-book
- - books-bought
- sources:
- - kind: ServiceAccount
- name: bookbuyer
- namespace: bookbuyer
----
-apiVersion: specs.smi-spec.io/v1alpha4
-kind: HTTPRouteGroup
-metadata:
- name: bookstore-service-routes
- namespace: bookstore
-spec:
- matches:
- - name: books-bought
- pathRegex: /books-bought
- methods:
- - GET
- headers:
- - "user-agent": ".*-http-client/*.*"
- - "client-app": "bookbuyer"
- - name: buy-a-book
- pathRegex: ".*a-book.*new"
- methods:
- - GET
- - name: update-books-bought
- pathRegex: /update-books-bought
- methods:
- - POST
----
-kind: TrafficTarget
-apiVersion: access.smi-spec.io/v1alpha3
-metadata:
- name: bookstore-access-bookwarehouse
- namespace: bookwarehouse
-spec:
- destination:
- kind: ServiceAccount
- name: bookwarehouse
- namespace: bookwarehouse
- rules:
- - kind: HTTPRouteGroup
- name: bookwarehouse-service-routes
- matches:
- - restock-books
- sources:
- - kind: ServiceAccount
- name: bookstore
- namespace: bookstore
- - kind: ServiceAccount
- name: bookstore-v2
- namespace: bookstore
----
-apiVersion: specs.smi-spec.io/v1alpha4
-kind: HTTPRouteGroup
-metadata:
- name: bookwarehouse-service-routes
- namespace: bookwarehouse
-spec:
- matches:
- - name: restock-books
- methods:
- - POST
- headers:
- - host: bookwarehouse.bookwarehouse
-```
-
-The preceding manifest creates SMI access policies that allow the *bookbuyer* service to communicate with the *bookstore* service for buying books. It also allows the *bookstore* service to communicate with the *bookwarehouse* service for restocking books.
-
-Use `kubectl apply` to apply the SMI access policies.
-
-```azurecli-interactive
-kubectl apply -f allow-bookbuyer-smi.yaml
-```
-
-The following example output shows the SMI access policies successfully applied:
-
-```output
-$ kubectl apply -f allow-bookbuyer-smi.yaml
-
-traffictarget.access.smi-spec.io/bookbuyer-access-bookstore created
-httproutegroup.specs.smi-spec.io/bookstore-service-routes created
-traffictarget.access.smi-spec.io/bookstore-access-bookwarehouse created
-httproutegroup.specs.smi-spec.io/bookwarehouse-service-routes created
-```
-
-Repeat the steps from the previous section to forward traffic between the *bookbuyer* service and your development computer. Confirm the counter is incrementing. Stop the port forwarding command and repeat the steps to forward traffic between the *bookthief* service and your development computer. Confirm the counter is not incrementing even if you refresh the page. Stop the port forwarding command.
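-
-You can also optionally list the SMI resources now present in the onboarded namespaces to confirm what was applied. The resource names should match the manifest above:
-
-```azurecli-interactive
-# List the SMI access policies created by allow-bookbuyer-smi.yaml
-kubectl get traffictarget,httproutegroup -n bookstore
-kubectl get traffictarget,httproutegroup -n bookwarehouse
-```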
-
-## Apply an SMI traffic split policy for buying books
-
-In addition to access policies, you can also use SMI to create traffic split policies. Traffic split policies allow you to configure the distribution of communications from one service to multiple services as a backend. This capability can help you test a new version of a backend service by sending a small portion of traffic to it while sending the rest of traffic to the current version of the backend service. This capability can also help progressively transition more traffic to the new version of a service and reduce traffic to the previous version over time.
-
-The following diagram shows an SMI Traffic Split policy that sends 25% of traffic to the *bookstore-v1* service and 75% of traffic to the *bookstore-v2* service.
-
-![OSM bookbuyer traffic split diagram](./media/aks-osm-addon/osm-bookbuyer-traffic-split-diagram.png)
-
-Create `bookbuyer-v2.yaml` using the following YAML:
-
-```yaml
-apiVersion: v1
-kind: Service
-metadata:
- name: bookstore-v2
- namespace: bookstore
- labels:
- app: bookstore-v2
-spec:
- ports:
- - port: 14001
- name: bookstore-port
- selector:
- app: bookstore-v2
----
-# Deploy bookstore-v2 Service Account
-apiVersion: v1
-kind: ServiceAccount
-metadata:
- name: bookstore-v2
- namespace: bookstore
----
-# Deploy bookstore-v2 Deployment
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: bookstore-v2
- namespace: bookstore
-spec:
- replicas: 1
- selector:
- matchLabels:
- app: bookstore-v2
- template:
- metadata:
- labels:
- app: bookstore-v2
- spec:
- serviceAccountName: bookstore-v2
- containers:
- - name: bookstore
- image: openservicemesh/bookstore:v0.8.0
- imagePullPolicy: Always
- ports:
- - containerPort: 14001
- name: web
- command: ["/bookstore"]
- args: ["--path", "./", "--port", "14001"]
- env:
- - name: BOOKWAREHOUSE_NAMESPACE
- value: bookwarehouse
- - name: IDENTITY
- value: bookstore-v2
----
-kind: TrafficTarget
-apiVersion: access.smi-spec.io/v1alpha3
-metadata:
- name: bookbuyer-access-bookstore-v2
- namespace: bookstore
-spec:
- destination:
- kind: ServiceAccount
- name: bookstore-v2
- namespace: bookstore
- rules:
- - kind: HTTPRouteGroup
- name: bookstore-service-routes
- matches:
- - buy-a-book
- - books-bought
- sources:
- - kind: ServiceAccount
- name: bookbuyer
- namespace: bookbuyer
-```
-
-The above creates a *bookstore-v2* service and SMI policies that allow the *bookbuyer* service to communicate with the *bookstore-v2* service for buying books. It also uses the SMI policies created in the previous section to allow the *bookstore-v2* service to communicate with the *bookwarehouse* service for restocking books.
-
-Use `kubectl apply` to deploy *bookstore-v2* and apply the SMI access policies.
-
-```azurecli-interactive
-kubectl apply -f bookbuyer-v2.yaml
-```
-
-The following example output shows the SMI access policies successfully applied:
-
-```output
-$ kubectl apply -f bookbuyer-v2.yaml
-
-service/bookstore-v2 configured
-serviceaccount/bookstore-v2 created
-deployment.apps/bookstore-v2 created
-traffictarget.access.smi-spec.io/bookstore-v2 created
-```
-
-Create `bookbuyer-split-smi.yaml` using the following YAML:
-
-```yaml
-apiVersion: split.smi-spec.io/v1alpha2
-kind: TrafficSplit
-metadata:
- name: bookstore-split
- namespace: bookstore
-spec:
- service: bookstore.bookstore
- backends:
- - service: bookstore
- weight: 25
- - service: bookstore-v2
- weight: 75
-```
-
-The above creates an SMI policy that splits traffic for the *bookstore* service. The original or v1 version of *bookstore* receives 25% of traffic and *bookstore-v2* receives 75% of traffic.
-
-Use `kubectl apply` to apply the SMI split policy.
-
-```azurecli-interactive
-kubectl apply -f bookbuyer-split-smi.yaml
-```
-
-The following example output shows the SMI access policies successfully applied:
-
-```output
-$ kubectl apply -f bookbuyer-split-smi.yaml
-
-trafficsplit.split.smi-spec.io/bookstore-split created
-```
-
-Repeat the steps from the previous section to forward traffic between the *bookbuyer* service and your development computer. Confirm the counter is incrementing for both *bookstore v1* and *bookstore v2*. Also confirm the number for *bookstore v2* is incrementing faster than for *bookstore v1*.
-
-![OSM bookbuyer books bought UI](./media/aks-osm-addon/osm-bookbuyer-traffic-split-ui.png)
-
-Stop the port forwarding command.
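-
-When you're satisfied with *bookstore-v2*, you can continue the progressive rollout by updating the weights in the TrafficSplit. The following is one possible sketch that shifts all traffic to *bookstore-v2* using `kubectl patch`; the weights shown are examples only, so adjust them to match your own rollout plan:
-
-```azurecli-interactive
-# Example only: shift 100% of bookstore traffic to bookstore-v2
-kubectl patch trafficsplit bookstore-split -n bookstore --type merge \
-  -p '{"spec":{"backends":[{"service":"bookstore","weight":0},{"service":"bookstore-v2","weight":100}]}}'
-```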
--
-[osm-cli]: open-service-mesh-deploy-addon-az-cli.md
-[osm-permissive-traffic-mode]: https://docs.openservicemesh.io/docs/guides/traffic_management/permissive_mode/
-[smi]: https://smi-spec.io/
aks Open Service Mesh Nginx Ingress https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/open-service-mesh-nginx-ingress.md
- Title: Using NGINX Ingress
-description: How to use NGINX Ingress with Open Service Mesh
-- Previously updated : 8/26/2021---
-# Deploy an application managed by Open Service Mesh (OSM) with NGINX ingress
-
-Open Service Mesh (OSM) is a lightweight, extensible, Cloud Native service mesh, allowing users to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments.
-
-In this tutorial, you will:
-
-> [!div class="checklist"]
->
-> - View the current OSM cluster configuration
-> - Create the namespace(s) for OSM to manage deployed applications in the namespace(s)
-> - Onboard the namespaces to be managed by OSM
-> - Deploy the sample application
-> - Verify the application running inside the AKS cluster
-> - Create an NGINX ingress controller used for the application
-> - Expose a service to the internet via the NGINX ingress controller
-
-## Before you begin
-
-The steps detailed in this article assume that you've created an AKS cluster (Kubernetes `1.19+` and above, with Kubernetes RBAC enabled), established a `kubectl` connection with the cluster (if you need help with any of these items, see the [AKS quickstart](./kubernetes-walkthrough.md)), and installed the AKS OSM add-on.
-
-You must have the following resources installed:
-
-- The Azure CLI, version 2.20.0 or later
-- OSM version v0.11.1 or later
-- JSON processor "jq" version 1.6+
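-
-You can optionally confirm that these tools are installed and meet the minimum versions; the exact output varies by environment:
-
-```azurecli-interactive
-# Check tool versions
-az --version
-osm version
-jq --version
-```
-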
-### View and verify the current OSM cluster configuration
-
-Once the OSM add-on for AKS has been enabled on the AKS cluster, you can view the current configuration parameters in the osm-mesh-config resource. Run the following command to view the properties:
-
-```azurecli-interactive
-kubectl get meshconfig osm-mesh-config -n kube-system -o yaml
-```
-
-Output shows the current OSM configuration for the cluster.
-
-```
-apiVersion: config.openservicemesh.io/v1alpha1
-kind: MeshConfig
-metadata:
- creationTimestamp: "0000-00-00T00:00:00Z"
- generation: 1
- name: osm-mesh-config
- namespace: kube-system
- resourceVersion: "2494"
- uid: 6c4d67f3-c241-4aeb-bf4f-b029b08faa31
-spec:
- certificate:
- serviceCertValidityDuration: 24h
- featureFlags:
- enableEgressPolicy: true
- enableMulticlusterMode: false
- enableWASMStats: true
- observability:
- enableDebugServer: true
- osmLogLevel: info
- tracing:
- address: jaeger.osm-system.svc.cluster.local
- enable: false
- endpoint: /api/v2/spans
- port: 9411
- sidecar:
- configResyncInterval: 0s
- enablePrivilegedInitContainer: false
- envoyImage: mcr.microsoft.com/oss/envoyproxy/envoy:v1.18.3
- initContainerImage: mcr.microsoft.com/oss/openservicemesh/init:v0.9.1
- logLevel: error
- maxDataPlaneConnections: 0
- resources: {}
- traffic:
- enableEgress: true
- enablePermissiveTrafficPolicyMode: true
- inboundExternalAuthorization:
- enable: false
- failureModeAllow: false
- statPrefix: inboundExtAuthz
- timeout: 1s
- useHTTPSIngress: false
-```
-
-Notice the **enablePermissiveTrafficPolicyMode** is configured to **true**. Permissive traffic policy mode in OSM is a mode where the [SMI](https://smi-spec.io/) traffic policy enforcement is bypassed. In this mode, OSM automatically discovers services that are part of the service mesh and programs traffic policy rules on each Envoy proxy sidecar so it can communicate with these services. For more detailed information about permissive traffic mode, see the [Permissive Traffic Policy Mode](https://docs.openservicemesh.io/docs/guides/traffic_management/permissive_mode/) article.
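-
-If you only need to confirm the permissive traffic policy setting rather than review the full configuration, you can optionally query that single field. This check targets the same *osm-mesh-config* resource and *kube-system* namespace shown in the output above:
-
-```azurecli-interactive
-# Returns true when permissive traffic policy mode is enabled
-kubectl get meshconfig osm-mesh-config -n kube-system -o jsonpath='{.spec.traffic.enablePermissiveTrafficPolicyMode}'
-```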
-
-## Create namespaces for the application
-
-In this tutorial we will be using the OSM `bookstore` application that has the following application components:
-
-- `bookbuyer`
-- `bookthief`
-- `bookstore`
-- `bookwarehouse`
-
-Create namespaces for each of these application components.
-
-```azurecli-interactive
-for i in bookstore bookbuyer bookthief bookwarehouse; do kubectl create ns $i; done
-```
-
-You should see the following output:
-
-```Output
-namespace/bookstore created
-namespace/bookbuyer created
-namespace/bookthief created
-namespace/bookwarehouse created
-```
-
-## Onboard the namespaces to be managed by OSM
-
-Adding the namespaces to the OSM mesh allows the OSM controller to automatically inject Envoy sidecar proxy containers into your application pods. Run the following command to onboard the OSM `bookstore` application namespaces.
-
-```azurecli-interactive
-osm namespace add bookstore bookbuyer bookthief bookwarehouse
-```
-
-You should see the following output:
-
-```Output
-Namespace [bookstore] successfully added to mesh [osm]
-Namespace [bookbuyer] successfully added to mesh [osm]
-Namespace [bookthief] successfully added to mesh [osm]
-Namespace [bookwarehouse] successfully added to mesh [osm]
-```
-
-## Deploy the Bookstore application to the AKS cluster
-
-```azurecli-interactive
-SAMPLE_VERSION=v0.11
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-$SAMPLE_VERSION/docs/example/manifests/apps/bookbuyer.yaml
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-$SAMPLE_VERSION/docs/example/manifests/apps/bookthief.yaml
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-$SAMPLE_VERSION/docs/example/manifests/apps/bookstore.yaml
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-$SAMPLE_VERSION/docs/example/manifests/apps/bookwarehouse.yaml
-```
-
-All of the deployment outputs are summarized below.
-
-```Output
-serviceaccount/bookbuyer created
-service/bookbuyer created
-deployment.apps/bookbuyer created
-
-serviceaccount/bookthief created
-service/bookthief created
-deployment.apps/bookthief created
-
-service/bookstore created
-serviceaccount/bookstore created
-deployment.apps/bookstore created
-
-serviceaccount/bookwarehouse created
-service/bookwarehouse created
-deployment.apps/bookwarehouse created
-```
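-
-Before continuing, you can optionally verify that the application pods in all four namespaces are running with the injected Envoy sidecar; the *READY* column should show *2/2*, and your pod names will differ:
-
-```azurecli-interactive
-# Check pod status across the four application namespaces
-for ns in bookstore bookbuyer bookthief bookwarehouse; do kubectl get pods -n $ns; done
-```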
-
-## Update the Bookbuyer service
-
-Update the `bookbuyer` service to use the correct inbound port configuration by applying the following service manifest.
-
-```azurecli-interactive
-kubectl apply -f - <<EOF
-apiVersion: v1
-kind: Service
-metadata:
- name: bookbuyer
- namespace: bookbuyer
- labels:
- app: bookbuyer
-spec:
- ports:
- - port: 14001
- name: inbound-port
- selector:
- app: bookbuyer
-EOF
-```
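-
-To optionally confirm the update took effect, check that the *bookbuyer* service now exposes port 14001:
-
-```azurecli-interactive
-# Verify the bookbuyer service and its inbound port
-kubectl get service bookbuyer -n bookbuyer
-```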
-
-## Verify the Bookstore application running inside the AKS cluster
-
-At this point, we have deployed the `bookstore` multi-container application, but it is only accessible from within the AKS cluster. Later we will add the NGINX ingress controller to expose the application outside the AKS cluster. To verify that the application is running inside the cluster, we will use a port forward to view the `bookbuyer` component UI.
-
-First, let's get the `bookbuyer` pod's name:
-
-```azurecli-interactive
-kubectl get pod -n bookbuyer
-```
-
-You should see output similar to the following. Your `bookbuyer` pod will have a unique name appended.
-
-```Output
-NAME READY STATUS RESTARTS AGE
-bookbuyer-7676c7fcfb-mtnrz 2/2 Running 0 7m8s
-```
-
-Once we have the pod's name, we can use the `port-forward` command to set up a tunnel from our local system to the application inside the AKS cluster. Run the following command to set up the port forward for local system port 8080, using your own `bookbuyer` pod name.
-
-```azurecli-interactive
-kubectl port-forward bookbuyer-7676c7fcfb-mtnrz -n bookbuyer 8080:14001
-```
-
-You should see output similar to the following:
-
-```Output
-Forwarding from 127.0.0.1:8080 -> 14001
-Forwarding from [::1]:8080 -> 14001
-```
-
-While the port forwarding session is in place, navigate to `http://localhost:8080` in a browser. You should now see the `bookbuyer` application UI, similar to the following image.
-
-![OSM bookbuyer app for NGINX UI image](./media/aks-osm-addon/osm-agic-bookbuyer-img.png)
-
-## Create an NGINX ingress controller in Azure Kubernetes Service (AKS)
-
-An ingress controller is a piece of software that provides reverse proxy, configurable traffic routing, and TLS termination for Kubernetes services. Kubernetes ingress resources are used to configure the ingress rules and routes for individual Kubernetes services. Using an ingress controller and ingress rules, a single IP address can be used to route traffic to multiple services in a Kubernetes cluster.
-
-We will utilize the ingress controller to expose the application managed by OSM to the internet. To create the ingress controller, use Helm to install nginx-ingress. For added redundancy, you can deploy multiple replicas of the NGINX ingress controller with the `--set controller.replicaCount` parameter. To fully benefit from running replicas of the ingress controller, make sure there's more than one node in your AKS cluster.
-
-The ingress controller will be scheduled on a Linux node. Windows Server nodes shouldn't run the ingress controller. A node selector is specified using the `--set nodeSelector` parameter to tell the Kubernetes scheduler to run the NGINX ingress controller on a Linux-based node.
-
-> [!TIP]
-> The following example creates a Kubernetes namespace for the ingress resources named _ingress-basic_. Specify a namespace for your own environment as needed.
-
-```azurecli-interactive
-# Create a namespace for your ingress resources
-kubectl create namespace ingress-basic
-
-# Add the ingress-nginx repository
-helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
-
-# Update the helm repo(s)
-helm repo update
-
-# Use Helm to deploy an NGINX ingress controller in the ingress-basic namespace
-helm install nginx-ingress ingress-nginx/ingress-nginx \
- --namespace ingress-basic \
- --set controller.replicaCount=1 \
- --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
- --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
- --set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux
-```
-
-A Kubernetes load balancer service is created for the NGINX ingress controller. A dynamic public IP address is assigned, as shown in the following example output:
-
-```Output
-$ kubectl --namespace ingress-basic get services -o wide -w nginx-ingress-ingress-nginx-controller
-
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
-nginx-ingress-ingress-nginx-controller LoadBalancer 10.0.74.133 EXTERNAL_IP 80:32486/TCP,443:30953/TCP 44s app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx
-```
-
-No ingress rules have been created yet, so the NGINX ingress controller's default 404 page is displayed if you browse to the external IP address. Ingress rules are configured in the following steps.
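-
-As an optional check of this initial state, you can send a request to the external IP address from the previous output and expect an HTTP 404 response from NGINX. Replace *EXTERNAL_IP* with your own value:
-
-```azurecli-interactive
-# Expect HTTP 404 from the NGINX default backend before any ingress rules exist
-curl -I http://EXTERNAL_IP/
-```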
-
-## Expose the bookbuyer service to the internet
-
-Apply the following ingress manifest to route traffic from the NGINX ingress controller to the *bookbuyer* service:
-
-```azurecli-interactive
-kubectl apply -f - <<EOF
-
-apiVersion: extensions/v1beta1
-kind: Ingress
-metadata:
- name: bookbuyer-ingress
- namespace: bookbuyer
- annotations:
- kubernetes.io/ingress.class: nginx
-
-spec:
-
- rules:
- - host: bookbuyer.contoso.com
- http:
- paths:
- - path: /
- backend:
- serviceName: bookbuyer
- servicePort: 14001
-
- backend:
- serviceName: bookbuyer
- servicePort: 14001
-EOF
-```
-
-You should see the following output:
-
-```Output
-Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
-ingress.extensions/bookbuyer-ingress created
-```
-
-## View the NGINX logs
-
-```azurecli-interactive
-POD=$(kubectl get pods -n ingress-basic | grep 'nginx-ingress' | awk '{print $1}')
-
-kubectl logs $POD -n ingress-basic -f
-```
-
-Output shows the NGINX ingress controller status when ingress rule has been applied successfully:
-
-```Output
-I0321 <date> 6 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-basic", Name:"nginx-ingress-ingress-nginx-controller-54cf6c8bf4-jdvrw", UID:"3ebbe5e5-50ef-481d-954d-4b82a499ebe1", APIVersion:"v1", ResourceVersion:"3272", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
-I0321 <date> 6 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"bookbuyer", Name:"bookbuyer-ingress", UID:"e1018efc-8116-493c-9999-294b4566819e", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"5460", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
-I0321 <date> 6 controller.go:146] "Configuration changes detected, backend reload required"
-I0321 <date> 6 controller.go:163] "Backend successfully reloaded"
-I0321 <date> 6 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-basic", Name:"nginx-ingress-ingress-nginx-controller-54cf6c8bf4-jdvrw", UID:"3ebbe5e5-50ef-481d-954d-4b82a499ebe1", APIVersion:"v1", ResourceVersion:"3272", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
-```
-
-## View the NGINX services and bookbuyer service externally
-
-```azurecli-interactive
-kubectl get services -n ingress-basic
-```
-
-```Output
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-nginx-ingress-ingress-nginx-controller LoadBalancer 10.0.100.23 20.193.1.74 80:31742/TCP,443:32683/TCP 4m15s
-nginx-ingress-ingress-nginx-controller-admission ClusterIP 10.0.163.98 <none> 443/TCP 4m15s
-```
-
-Since the host name in the ingress manifest is a pseudo name used for testing, the DNS name will not be available on the internet. We can instead use `curl` to send the request to the NGINX public IP address and pass the hostname in the Host header, receiving a 200 response that confirms a successful connection to the bookbuyer service.
-
-```azurecli-interactive
-curl -H 'Host: bookbuyer.contoso.com' http://EXTERNAL-IP/
-```
-
-You should see the following output:
-
-```Output
-<!doctype html>
-<html itemscope="" itemtype="http://schema.org/WebPage" lang="en">
- <head>
- <meta content="Bookbuyer" name="description">
- <meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
- <title>Bookbuyer</title>
- <style>
- #navbar {
- width: 100%;
- height: 50px;
- display: table;
- border-spacing: 0;
- white-space: nowrap;
- line-height: normal;
- background-color: #0078D4;
- background-position: left top;
- background-repeat-x: repeat;
- background-image: none;
- color: white;
- font: 2.2em "Fira Sans", sans-serif;
- }
- #main {
- padding: 10pt 10pt 10pt 10pt;
- font: 1.8em "Fira Sans", sans-serif;
- }
- li {
- padding: 10pt 10pt 10pt 10pt;
- font: 1.2em "Consolas", sans-serif;
- }
- </style>
- <script>
- setTimeout(function(){window.location.reload(1);}, 1500);
- </script>
- </head>
- <body bgcolor="#fff">
- <div id="navbar">
- &#128214; Bookbuyer
- </div>
- <div id="main">
- <ul>
- <li>Total books bought: <strong>1833</strong>
- <ul>
- <li>from bookstore V1: <strong>277</strong>
- <li>from bookstore V2: <strong>1556</strong>
- </ul>
- </li>
- </ul>
- </div>
-
- <br/><br/><br/><br/>
- <br/><br/><br/><br/>
- <br/><br/><br/><br/>
-
- Current Time: <strong>Fri, 26 Mar 2021 15:02:53 UTC</strong>
- </body>
-</html>
-```
aks Use Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-azure-ad-pod-identity.md
Azure AD pod identity supports two modes of operation:
* **Standard Mode**: In this mode, the following two components are deployed to the AKS cluster:
  * [Managed Identity Controller (MIC)](https://azure.github.io/aad-pod-identity/docs/concepts/mic/): An MIC is a Kubernetes controller that watches for changes to pods, [AzureIdentity](https://azure.github.io/aad-pod-identity/docs/concepts/azureidentity/) and [AzureIdentityBinding](https://azure.github.io/aad-pod-identity/docs/concepts/azureidentitybinding/) through the Kubernetes API Server. When it detects a relevant change, the MIC adds or deletes [AzureAssignedIdentity](https://azure.github.io/aad-pod-identity/docs/concepts/azureassignedidentity/) as needed. Specifically, when a pod is scheduled, the MIC assigns the managed identity on Azure to the underlying virtual machine scale set used by the node pool during the creation phase. When all pods using the identity are deleted, it removes the identity from the virtual machine scale set of the node pool, unless the same managed identity is used by other pods. The MIC takes similar actions when AzureIdentity or AzureIdentityBinding are created or deleted.
  * [Node Managed Identity (NMI)](https://azure.github.io/aad-pod-identity/docs/concepts/nmi/): NMI is a pod that runs as a DaemonSet on each node in the AKS cluster. NMI intercepts security token requests to the [Azure Instance Metadata Service](../virtual-machines/linux/instance-metadata-service.md?tabs=linux) on each node, redirects them to itself, validates whether the pod has access to the identity it's requesting a token for, and then fetches the token from the Azure AD tenant on behalf of the application.
-* **Managed Mode**: This mode offers only NMI. The identity needs to be manually assigned and managed by the user. For more information, see [Pod identity in managed mode](https://azure.github.io/aad-pod-identity/docs/configure/pod_identity_in_managed_mode/).
+* **Managed Mode**: This mode offers only NMI. When installed via the AKS cluster add-on, Azure manages creation of Kubernetes primitives (AzureIdentity and AzureIdentityBinding) and identity assignment in response to CLI commands by the user. Otherwise, if installed via Helm chart, the identity needs to be manually assigned and managed by the user. For more information, see [Pod identity in managed mode](https://azure.github.io/aad-pod-identity/docs/configure/pod_identity_in_managed_mode/).
When you install the Azure AD pod identity via Helm chart or YAML manifest as shown in the [Installation Guide](https://azure.github.io/aad-pod-identity/docs/getting-started/installation/), you can choose between the `standard` and `managed` mode. If you instead decide to install the Azure AD pod identity using the AKS cluster add-on as shown in this article, the setup will use the `managed` mode.
az aks update -g $MY_RESOURCE_GROUP -n $MY_CLUSTER --enable-pod-identity
## Using Kubenet network plugin with Azure Active Directory pod-managed identities > [!IMPORTANT]
-> Running aad-pod-identity in a cluster with Kubenet is not a recommended configuration because of the security implication. Please follow the mitigation steps and configure policies before enabling aad-pod-identity in a cluster with Kubenet.
+> Running aad-pod-identity in a cluster with Kubenet is not a recommended configuration due to security concerns. Default Kubenet configuration fails to prevent ARP spoofing, which could be utilized by a pod to act as another pod and gain access to an identity it's not intended to have. Please follow the mitigation steps and configure policies before enabling aad-pod-identity in a cluster with Kubenet.
### Mitigation
az aks update -g $MY_RESOURCE_GROUP -n $MY_CLUSTER --enable-pod-identity --enabl
> [!IMPORTANT] > You must have the relevant permissions (for example, Owner) on your subscription to create the identity.
-Create an identity using [az identity create][az-identity-create] and set the *IDENTITY_CLIENT_ID* and *IDENTITY_RESOURCE_ID* variables.
+Create an identity which will be used by the demo pod with [az identity create][az-identity-create] and set the *IDENTITY_CLIENT_ID* and *IDENTITY_RESOURCE_ID* variables.
```azurecli-interactive az group create --name myIdentityResourceGroup --location eastus
export IDENTITY_RESOURCE_ID="$(az identity show -g ${IDENTITY_RESOURCE_GROUP} -n
## Assign permissions for the managed identity
+The managed identity that will be assigned to the pod needs to be granted permissions that align with the actions it will be taking.
+ To run the demo, the *IDENTITY_CLIENT_ID* managed identity must have Virtual Machine Contributor permissions in the resource group that contains the virtual machine scale set of your AKS cluster. ```azurecli-interactive
az aks pod-identity add --resource-group myResourceGroup --cluster-name myAKSClu
> [!NOTE] > When you assign the pod identity by using `pod-identity add`, the Azure CLI attempts to grant the Managed Identity Operator role over the pod identity (*IDENTITY_RESOURCE_ID*) to the cluster identity.
+Azure will create an AzureIdentity resource in your cluster representing the identity in Azure, and an AzureIdentityBinding resource which connects the AzureIdentity to a selector. You can view these resources with
+
+```azurecli-interactive
+kubectl get azureidentity -n $POD_IDENTITY_NAMESPACE
+kubectl get azureidentitybinding -n $POD_IDENTITY_NAMESPACE
+```
+ ## Run a sample application
-For a pod to use an Azure AD pod-managed identity, the pod needs an *aadpodidbinding* label with a value that matches a selector from a *AzureIdentityBinding*. To run a sample application using an Azure AD pod-managed identity, create a `demo.yaml` file with the following contents. Replace *POD_IDENTITY_NAME*, *IDENTITY_CLIENT_ID*, and *IDENTITY_RESOURCE_GROUP* with the values from the previous steps. Replace *SUBSCRIPTION_ID* with your subscription ID.
+For a pod to use AAD pod-managed identity, the pod needs an *aadpodidbinding* label with a value that matches a selector from a *AzureIdentityBinding*. By default, the selector will match the name of the pod identity, but it can also be set using the `--binding-selector` option when calling `az aks pod-identity add`.
+
+To run a sample application using AAD pod-managed identity, create a `demo.yaml` file with the following contents. Replace *POD_IDENTITY_NAME*, *IDENTITY_CLIENT_ID*, and *IDENTITY_RESOURCE_GROUP* with the values from the previous steps. Replace *SUBSCRIPTION_ID* with your subscription ID.
> [!NOTE] > In the previous steps, you created the *POD_IDENTITY_NAME*, *IDENTITY_CLIENT_ID*, and *IDENTITY_RESOURCE_GROUP* variables. You can use a command such as `echo` to display the value you set for variables, for example `echo $IDENTITY_NAME`.
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-multiple-node-pools.md
az aks nodepool add \
--no-wait ```
-> [!NOTE]
-> A taint can only be set for node pools during node pool creation.
- The following example output from the [az aks nodepool list][az-aks-nodepool-list] command shows that *taintnp* is *Creating* nodes with the specified *nodeTaints*: ```console
az aks nodepool add \
--labels dept=IT costcenter=9999 \ --no-wait ```-
-> [!NOTE]
-> Label can only be set for node pools during node pool creation. Labels must also be a key/value pair and have a [valid syntax][kubernetes-label-syntax].
- The following example output from the [az aks nodepool list][az-aks-nodepool-list] command shows that *labelnp* is *Creating* nodes with the specified *nodeLabels*: ```console
analysis-services Analysis Services Async Refresh https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-async-refresh.md
description: Describes how to use the Azure Analysis Services REST API to code a
Previously updated : 04/15/2020 Last updated : 02/02/2022
analysis-services Analysis Services Bcdr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-bcdr.md
description: This article describes how Azure Analysis Services provides high av
Previously updated : 03/29/2021 Last updated : 02/02/2022
analysis-services Analysis Services Connect Excel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-connect-excel.md
description: Learn how to connect to an Azure Analysis Services server by using
Previously updated : 12/01/2020 Last updated : 02/02/2022
analysis-services Analysis Services Connect https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-connect.md
description: Learn how to connect to and get data from an Analysis Services serv
Previously updated : 12/01/2020 Last updated : 02/02/2022
analysis-services Analysis Services Database Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-database-users.md
description: Learn how to manage database roles and users on an Analysis Service
Previously updated : 04/27/2021 Last updated : 02/02/2022
analysis-services Analysis Services Datasource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-datasource.md
description: Describes data sources and connectors supported for tabular 1200 an
Previously updated : 03/29/2021 Last updated : 02/02/2022
analysis-services Analysis Services Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-gateway.md
description: An On-premises gateway is necessary if your Analysis Services serve
Previously updated : 11/09/2021 Last updated : 02/02/2022 # Connecting to on-premises data sources with On-premises data gateway
-The on-premises data gateway provides secure data transfer between on-premises data sources and your Azure Analysis Services servers in the cloud. In addition to working with multiple Azure Analysis Services servers in the same region, the latest version of the gateway also works with Azure Logic Apps, Power BI, Power Apps, and Power Automate. While the gateway you install is the same across all of these services, Azure Analysis Services and Logic Apps have some additional steps.
+The On-premises data gateway provides secure data transfer between on-premises data sources and your Azure Analysis Services servers in the cloud. In addition to working with multiple Azure Analysis Services servers in the same region, the gateway also works with Azure Logic Apps, Power BI, Power Apps, and Power Automate. While the gateway you install is the same across all of these services, Azure Analysis Services and Logic Apps have some additional steps required for successful installation.
-Information provided here is specific to how Azure Analysis Services works with the On-premises Data Gateway. To learn more about the gateway in general and how it works with other services, see [What is an on-premises data gateway?](/data-integration/gateway/service-gateway-onprem).
+Information provided here is specific to how Azure Analysis Services works with the On-premises data gateway. To learn more about the gateway in general and how it works with other services, see [What is an On-premises data gateway?](/data-integration/gateway/service-gateway-onprem).
For Azure Analysis Services, getting setup with the gateway the first time is a four-part process:
For Azure Analysis Services, getting setup with the gateway the first time is a
- **Create a gateway resource in Azure** - In this step, you create a gateway resource in Azure. -- **Connect the gateway resource to servers** - Once you have a gateway resource, you can begin connecting servers to it. You can connect multiple servers and other resources provided they are in the same region.
+- **Connect the gateway resource to servers** - Once you have a gateway resource, you can begin connecting your servers to it. You can connect multiple servers and other resources provided they are in the same region.
## Installing
-When installing for an Azure Analysis Services environment, it's important you follow the steps described in [Install and configure on-premises data gateway for Azure Analysis Services](analysis-services-gateway-install.md). This article is specific to Azure Analysis Services. It includes additional steps required to setup an On-premises data gateway resource in Azure, and connect your Azure Analysis Services server to the resource.
+When installing for an Azure Analysis Services environment, it's important you follow the steps described in [Install and configure on-premises data gateway for Azure Analysis Services](analysis-services-gateway-install.md). This article is specific to Azure Analysis Services. It includes additional steps required to setup an On-premises data gateway resource in Azure, and connect your Azure Analysis Services server to the gateway resource.
## Connecting to a gateway resource in a different subscription
analysis-services Analysis Services Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-manage-users.md
description: This article describes how Azure Analysis Services uses Azure Activ
Previously updated : 12/01/2020 Last updated : 02/02/2022
Azure Analysis Services supports [Azure AD B2B collaboration](../active-director
All client applications and tools use one or more of the Analysis Services [client libraries](/analysis-services/client-libraries?view=azure-analysis-services-current&preserve-view=true) (AMO, MSOLAP, ADOMD) to connect to a server.
-All three client libraries support both Azure AD interactive flow, and non-interactive authentication methods. The two non-interactive methods, Active Directory Password and Active Directory Integrated Authentication methods can be used in applications utilizing AMOMD and MSOLAP. These two methods never result in pop-up dialog boxes.
+All three client libraries support both Azure AD interactive flow, and non-interactive authentication methods. The two non-interactive methods, Active Directory Password and Active Directory Integrated Authentication methods can be used in applications utilizing AMOMD and MSOLAP. These two methods never result in pop-up dialog boxes for sign in.
-Client applications like Excel and Power BI Desktop, and tools like SSMS and Analysis Services projects extension for Visual Studio install the latest versions of the libraries when updated to the latest release. Power BI Desktop, SSMS, and Analysis Services projects extension are updated monthly. Excel is [updated with Microsoft 365](https://support.microsoft.com/office/when-do-i-get-the-newest-features-for-microsoft-365-da36192c-58b9-4bc9-8d51-bb6eed468516). Microsoft 365 updates are less frequent, and some organizations use the deferred channel, meaning updates are deferred up to three months.
+Client applications like Excel and Power BI Desktop, and tools like SSMS and Analysis Services projects extension for Visual Studio install the latest versions of the client libraries with regular updates. Power BI Desktop, SSMS, and Analysis Services projects extension are updated monthly. Excel is [updated with Microsoft 365](https://support.microsoft.com/office/when-do-i-get-the-newest-features-for-microsoft-365-da36192c-58b9-4bc9-8d51-bb6eed468516). Microsoft 365 updates are less frequent, and some organizations use the deferred channel, meaning updates are deferred up to three months.
-Depending on the client application or tool you use, the type of authentication and how you sign in may be different. Each application may support different features for connecting to cloud services like Azure Analysis Services.
+Depending on the client application or tools you use, the type of authentication and how you sign in may be different. Each application may support different features for connecting to cloud services like Azure Analysis Services.
-Power BI Desktop, Visual Studio, and SSMS support Active Directory Universal Authentication, an interactive method that also supports Azure AD Multi-Factor Authentication (MFA). Azure AD MFA helps safeguard access to data and applications while providing a simple sign-in process. It delivers strong authentication with several verification options (phone call, text message, smart cards with pin, or mobile app notification). Interactive MFA with Azure AD can result in a pop-up dialog box for validation. **Universal Authentication is recommended**.
+Power BI Desktop, Visual Studio, and SSMS support Active Directory Universal Authentication, an interactive method that also supports Azure AD Multi-Factor Authentication (MFA). Azure AD MFA helps safeguard access to data and applications while providing a simple sign in process. It delivers strong authentication with several verification options (phone call, text message, smart cards with pin, or mobile app notification). Interactive MFA with Azure AD can result in a pop-up dialog box for validation. **Universal Authentication is recommended**.
If signing in to Azure by using a Windows account, and Universal Authentication is not selected or available (Excel), [Active Directory Federation Services (AD FS)](/windows-server/identity/ad-fs/deployment/how-to-connect-fed-azure-adfs) is required. With Federation, Azure AD and Microsoft 365 users are authenticated using on-premises credentials and can access Azure resources.
Excel users can connect to a server by using a Windows account, an organization
## User permissions
-**Server administrators** are specific to an Azure Analysis Services server instance. They connect with tools like Azure portal, SSMS, and Visual Studio to perform tasks like adding databases and managing user roles. By default, the user that creates the server is automatically added as an Analysis Services server administrator. Other administrators can be added by using Azure portal or SSMS. Server administrators must have an account in the Azure AD tenant in the same subscription. To learn more, see [Manage server administrators](analysis-services-server-admins.md).
+**Server administrators** are specific to an Azure Analysis Services server instance. They connect with tools like Azure portal, SSMS, and Visual Studio to perform tasks like configuring settings and managing user roles. By default, the user that creates the server is automatically added as an Analysis Services server administrator. Other administrators can be added by using Azure portal or SSMS. Server administrators must have an account in the Azure AD tenant in the same subscription. To learn more, see [Manage server administrators](analysis-services-server-admins.md).
**Database users** connect to model databases by using client applications like Excel or Power BI. Users must be added to database roles. Database roles define administrator, process, or read permissions for a database. It's important to understand database users in a role with administrator permissions is different than server administrators. However, by default, server administrators are also database administrators. To learn more, see [Manage database roles and users](analysis-services-database-users.md).
analysis-services Analysis Services Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-manage.md
description: This article describes the tools used to manage administration and
Previously updated : 10/28/2019 Last updated : 02/02/2022 # Manage Analysis Services
-Once you've created an Analysis Services server in Azure, there may be some administration and management tasks you need to perform right away or sometime down the road. For example, run processing to the refresh data, control who can access the models on your server, or monitor your server's health. Some management tasks can only be performed in Azure portal, others in SQL Server Management Studio (SSMS), and some tasks can be done in either.
+Once you've created an Analysis Services server resource in Azure, there may be some administration and management tasks you need to perform right away or sometime down the road. For example, run processing to the refresh data, control who can access the models on your server, or monitor your server's health. Some management tasks can only be performed in Azure portal, others in SQL Server Management Studio (SSMS), and some tasks can be done in either.
## Azure portal [Azure portal](https://portal.azure.com/) is where you can create and delete servers, monitor server resources, change size, and manage who has access to your servers. If you're having some problems, you can also submit a support request.
To get all the latest features, and the smoothest experience when connecting to
![Connect in SSMS](./media/analysis-services-manage/aas-manage-connect-ssms.png) +
+## External open source tools
+
+**Tabular Editor** - An open-source tool for creating, maintaining, and managing tabular models using an intuitive, lightweight editor. A hierarchical view shows all objects in your tabular model. Objects are organized by display folders with support for multi-select property editing and DAX syntax highlighting. XMLA read-only is required for query operations. Read-write is required for metadata operations. To learn more, see [tabulareditor.github.io](https://tabulareditor.github.io/).
+
+**ALM Toolkit** - An open-source schema compare tool for Analysis Services tabular models and Power BI datasets, most often used for application lifecycle management (ALM) scenarios. Perform deployment across environments and retain incremental refresh historical data. Diff and merge metadata files, branches and repos. Reuse common definitions between datasets. Read-only is required for query operations. Read-write is required for metadata operations. To learn more, seeΓÇ»[alm-toolkit.com](http://alm-toolkit.com/).
+
+**DAX Studio** – An open-source tool for DAX authoring, diagnosis, performance tuning, and analysis. Features include object browsing, integrated tracing, query execution breakdowns with detailed statistics, DAX syntax highlighting and formatting. XMLA read-only is required for query operations. To learn more, see [daxstudio.org](https://daxstudio.org/).
+ ## Server administrators and database users In Azure Analysis Services, there are two types of users, server administrators and database users. Both types of users must be in your Azure Active Directory and must be specified by organizational email address or UPN. To learn more, see [Authentication and user permissions](analysis-services-manage-users.md). - ## Troubleshooting connection problems When connecting using SSMS, if you run into problems, you may need to clear the login cache. Nothing is cached to disc. To clear the cache, close and restart the connect process. ## Next steps If you haven't already deployed a tabular model to your new server, now is a good time. To learn more, see [Deploy to Azure Analysis Services](analysis-services-deploy.md).
-If you've deployed a model to your server, you're ready to connect to it using a client or browser. To learn more, see [Get data from Azure Analysis Services server](analysis-services-connect.md).
+If you've deployed a model to your server, you're ready to connect to it using a client application or tool. To learn more, see [Get data from Azure Analysis Services server](analysis-services-connect.md).
analysis-services Analysis Services Server Admins https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-server-admins.md
description: This article describes how to manage server administrators for an A
Previously updated : 2/4/2021 Last updated : 02/02/2022
analysis-services Analysis Services Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-service-principal.md
description: Learn how to create a service principal for automating Azure Analys
Previously updated : 04/27/2021 Last updated : 02/02/2022
# Automation with service principals
-Service principals are an Azure Active Directory application resource you create within your tenant to perform unattended resource and service level operations. They're a unique type of *user identity* with an application ID and password or certificate. A service principal has only those permissions necessary to perform tasks defined by the roles and permissions for which it's assigned.
+Service principals are an Azure Active Directory application resource you create within your tenant to perform unattended resource and service level operations. They're a unique type of *user identity* with an application ID and password or certificate. A service principal has only those permissions necessary to perform tasks defined by the roles and permissions for which it is assigned.
In Analysis Services, service principals are used with Azure Automation, PowerShell unattended mode, custom client applications, and web apps to automate common tasks. For example, provisioning servers, deploying models, data refresh, scale up/down, and pause/resume can all be automated by using service principals. Permissions are assigned to service principals through role membership, much like regular Azure AD UPN accounts.
analysis-services Analysis Services Vnet Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-vnet-gateway.md
description: Learn how to configure an Azure Analysis Services server to use a g
Previously updated : 04/27/2021 Last updated : 02/02/2022
This article describes the Azure Analysis Services **AlwaysUseGateway** server p
## Server access to VNet data sources
-If your data sources are accessed through a VNet, your Azure Analysis Services server must connect to those data sources as if they are on-premises, in your own environment. You can configure the **AlwaysUseGateway** server property to specify the server to access all data sources through an [On-premises gateway](analysis-services-gateway.md).
+If your data sources are accessed through a VNet, your Azure Analysis Services server must connect to those data sources as if they are on-premises, in your own environment. You must configure the **AlwaysUseGateway** server property to specify the server resource to access all data sources through an [On-premises data gateway](analysis-services-gateway.md).
-Azure SQL Managed Instance data sources run within Azure VNet with a private IP address. If public endpoint is enabled on the instance, a gateway is not required. If public endpoint is not enabled, an On-premises Data Gateway is required and the AlwaysUseGateway property must be set to true.
+Azure SQL Managed Instance data sources run within Azure VNet with a private IP address. If public endpoint is enabled on the instance, a gateway is not required. If public endpoint is not enabled, an On-premises data gateway is required and the AlwaysUseGateway property must be set to true.
> [!NOTE]
-> This property is effective only when an [On-premises Data Gateway](analysis-services-gateway.md) is installed and configured. The gateway can be on the VNet.
+> This property is effective only when an [On-premises data gateway](analysis-services-gateway.md) is installed and configured. The gateway can be on the VNet.
## Configure AlwaysUseGateway property
analysis-services Analysis Services Tutorial Pbid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/tutorials/analysis-services-tutorial-pbid.md
description: In this tutorial, learn how to get an Analysis Services server name from the Azure portal and then connect to the server by using Power BI Desktop. Previously updated : 10/12/2021 Last updated : 02/02/2022 #Customer intent: As a BI developer, I want to connect to a sample tabular model on a server and create a basic report by using the Power BI Desktop client application.
In this tutorial, you use Power BI Desktop to connect to the adventureworks samp
- [Install the newest Power BI Desktop](https://powerbi.microsoft.com/desktop). ## Sign in to the Azure portal
-In this tutorial, you sing in to the portal to get the server name only. Typically, users would get the server name from the server administrator.
+In this tutorial, you sign in to the portal to get the server name only. Typically, users would get the server name from the server administrator.
Sign in to the [portal](https://portal.azure.com/). ## Get server name
-In order to connect to your server from Power BI Desktop, you first need the server name. You can get the server name from the portal.
+In order to connect to your server from Power BI Desktop, you first need the server name.
In **Azure portal** > server > **Overview** > **Server name**, copy the server name.
In **Azure portal** > server > **Overview** > **Server name**, copy the server n
If no longer needed, do not save your report or delete the file if you did save. ## Next steps
-In this tutorial, you learned how to use Power BI Desktop to connect to a data model on a server and create a basic report. If you're not familiar with how to create a data model, see the [Adventure Works Internet Sales tabular data modeling tutorial](/analysis-services/tutorial-tabular-1400/as-adventure-works-tutorial) in the SQL Server Analysis Services docs.
+In this tutorial, you learned how to use Power BI Desktop to connect to a data model on a server and create a basic report. If you're not familiar with how to create a data model, see the [Adventure Works Internet Sales tabular data modeling tutorial](/analysis-services/tutorial-tabular-1400/as-adventure-works-tutorial) in the SQL Server Analysis Services docs.
api-management Api Management Get Started Publish Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-get-started-publish-versions.md
You can interact directly with version sets by using the Azure CLI:
To see all your version sets, run the [az apim api versionset list](/cli/azure/apim/api/versionset#az_apim_api_versionset_list) command: ```azurecli
-az apim api versionset list --resource-group apim-hello-word-resource-group \
+az apim api versionset list --resource-group apim-hello-world-resource-group \
--service-name apim-hello-world --output table ```
When the Azure portal creates a version set for you, it assigns an alphanumeric
To see details about a version set, run the [az apim api versionset show](/cli/azure/apim/api/versionset#az_apim_api_versionset_show) command: ```azurecli
-az apim api versionset show --resource-group apim-hello-word-resource-group \
+az apim api versionset show --resource-group apim-hello-world-resource-group \
--service-name apim-hello-world --version-set-id 00000000000000000000000 ```
app-service Configure Authentication File Based https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-authentication-file-based.md
The following exhausts possible configuration options within the file:
"redirectToProvider": "<default provider alias>", "excludedPaths": [ "/path1",
- "/path2"
+ "/path2",
+ "/path3/subpath/*"
] }, "httpSettings": {
app-service How To Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/how-to-migrate.md
Title: How to migrate App Service Environment v2 to App Service Environment v3
-description: Learn how to migrate your App Service Environment v2 to App Service Environment v3
+ Title: Use the migration feature to migrate App Service Environment v2 to App Service Environment v3
+description: Learn how to migrate your App Service Environment v2 to App Service Environment v3 using the migration feature
Previously updated : 2/01/2022 Last updated : 2/2/2022 zone_pivot_groups: app-service-cli-portal
-# How to migrate App Service Environment v2 to App Service Environment v3
+# Use the migration feature to migrate App Service Environment v2 to App Service Environment v3
-An App Service Environment v2 can be migrated to an [App Service Environment v3](overview.md). To learn more about the migration process and to see if your App Service Environment supports migration at this time, see the [Migration to App Service Environment v3 Overview](migrate.md).
+An App Service Environment v2 can be automatically migrated to an [App Service Environment v3](overview.md) using the migration feature. To learn more about the migration process and to see if your App Service Environment supports migration at this time, see the [Migration to App Service Environment v3 Overview](migrate.md).
> [!IMPORTANT] > It is recommended to use this feature for dev environments first before migrating any production environments to ensure there are no unexpected issues. Please provide any feedback related to this article or the feature using the buttons at the bottom of the page.
An App Service Environment v2 can be migrated to an [App Service Environment v3]
## Prerequisites
-Ensure you understand how migrating to an App Service Environment v3 will affect your applications. Review the [migration process](migrate.md#overview-of-the-migration-process) to understand the process timeline and where and when you'll need to get involved. Also review the [FAQs](migrate.md#frequently-asked-questions), which may answer some questions you currently have.
+Ensure you understand how migrating to an App Service Environment v3 will affect your applications. Review the [migration process](migrate.md#overview-of-the-migration-process-using-the-migration-feature) to understand the process timeline and where and when you'll need to get involved. Also review the [FAQs](migrate.md#frequently-asked-questions), which may answer some questions you currently have.
::: zone pivot="experience-azcli"
-The recommended experience for migration is using the [Azure portal](how-to-migrate.md?pivots=experience-azp). If you decide to use the Azure CLI to carry out the migration, you should follow the below steps in order and as written since you'll be making Azure REST API calls. The recommended way for making these API calls is by using the [Azure CLI](/cli/azure/). For information about other methods, see [Getting Started with Azure REST](/rest/api/azure/).
+The recommended experience for the migration feature is using the [Azure portal](how-to-migrate.md?pivots=experience-azp). If you decide to use the Azure CLI to carry out the migration, you should follow the steps described here in order and as written since you'll be making Azure REST API calls. The recommended way for making these API calls is by using the [Azure CLI](/cli/azure/). For information about other methods, see [Getting Started with Azure REST](/rest/api/azure/).
For this guide, [install the Azure CLI](/cli/azure/install-azure-cli) or use the [Azure Cloud Shell](https://shell.azure.com/).
ASE_ID=$(az appservice ase show --name $ASE_NAME --resource-group $ASE_RG --quer
## 2. Validate migration is supported
-The following command will check whether your App Service Environment is supported for migration. If you receive an error or if your App Service Environment is in an unhealthy or suspended state, you can't migrate at this time. If your environment [won't be supported for migration](migrate.md#supported-scenarios) or you want to migrate to App Service Environment v3 without using the migration feature, see [migration alternatives](migration-alternatives.md).
+The following command will check whether your App Service Environment is supported for migration. If you receive an error or if your App Service Environment is in an unhealthy or suspended state, you can't migrate at this time. If your environment [won't be supported for migration](migrate.md#supported-scenarios) or you want to migrate to App Service Environment v3 without using the migration feature, see the [manual migration options](migration-alternatives.md).
```azurecli az rest --method post --uri "${ASE_ID}/migrate?api-version=2021-02-01&phase=validation"
az appservice ase show --name $ASE_NAME --resource-group $ASE_RG
From the [Azure portal](https://portal.azure.com), navigate to the **Overview** page for the App Service Environment you'll be migrating. The platform will validate if migration is supported for your App Service Environment. Wait a couple seconds after the page loads for this validation to take place.
-If migration is supported for your App Service Environment, there are three ways to access the migration feature. These methods include a banner at the top of the overview page, a new item in the left-hand side menu called **Migration (preview)**, and an info box on the **Configuration** page. Select any of these methods to move on to the next step in the migration process.
+If migration is supported for your App Service Environment, there are three ways to access the migration feature. These methods include a banner at the top of the Overview page, a new item in the left-hand side menu called **Migration (preview)**, and an info box on the **Configuration** page. Select any of these methods to move on to the next step in the migration process.
![migration access points](./media/migration/portal-overview.png) ![configuration page view](./media/migration/configuration-migration-support.png)
-If you don't see these elements, your App Service Environment isn't supported for migration at this time or your environment is in an unhealthy or suspended state (which blocks migration). If your environment [won't be supported for migration](migrate.md#supported-scenarios) or you want to migrate to App Service Environment v3 without using the migration feature, see [migration alternatives](migration-alternatives.md).
+If you don't see these elements, your App Service Environment isn't supported for migration at this time or your environment is in an unhealthy or suspended state (which blocks migration). If your environment [won't be supported for migration](migrate.md#supported-scenarios) or you want to migrate to App Service Environment v3 without using the migration feature, see the [manual migration options](migration-alternatives.md).
The migration page will guide you through the series of steps to complete the migration.
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/migrate.md
Title: Migration to App Service Environment v3
-description: Overview of the migration process to App Service Environment v3
+ Title: Migrate to App Service Environment v3 by using the migration feature
+description: Overview of the migration feature for migration to App Service Environment v3
Previously updated : 1/28/2022 Last updated : 2/2/2022
-# Migration to App Service Environment v3
+# Migration to App Service Environment v3 using the migration feature
-App Service can now migrate your App Service Environment v2 to an [App Service Environment v3](overview.md). If you want to migrate an App Service Environment v1 to an App Service Environment v3, see the [migration alternatives documentation](migration-alternatives.md). App Service Environment v3 provides [advantages and feature differences](overview.md#feature-differences) over earlier versions. Make sure to review the [supported features](overview.md#feature-differences) of App Service Environment v3 before migrating to reduce the risk of an unexpected application issue.
+App Service can now automate migration of your App Service Environment v2 to an [App Service Environment v3](overview.md). If you want to migrate an App Service Environment v1 to an App Service Environment v3, see the [manual migration options documentation](migration-alternatives.md). App Service Environment v3 provides [advantages and feature differences](overview.md#feature-differences) over earlier versions. Make sure to review the [supported features](overview.md#feature-differences) of App Service Environment v3 before migrating to reduce the risk of an unexpected application issue.
> [!IMPORTANT] > It is recommended to use this feature for dev environments first before migrating any production environments to ensure there are no unexpected issues. Please provide any feedback related to this article or the feature using the buttons at the bottom of the page.
App Service can now migrate your App Service Environment v2 to an [App Service E
## Supported scenarios
-At this time, App Service Environment migrations to v3 support both [Internal Load Balancer (ILB)](create-ilb-ase.md) and [external (internet facing with public IP)](create-external-ase.md) App Service Environment v2 in the following regions:
+At this time, App Service Environment migrations to v3 using the migration feature support both [Internal Load Balancer (ILB)](create-ilb-ase.md) and [external (internet facing with public IP)](create-external-ase.md) App Service Environment v2 in the following regions:
- West Central US - Canada Central
The following scenarios aren't supported in this version of the feature:
- [Zone pinned](zone-redundancy.md) App Service Environment v2 - App Service Environment in a region not listed in the supported regions
-The migration feature doesn't plan on supporting App Service Environment v1 within a classic VNet. See [migration alternatives](migration-alternatives.md) if your App Service Environment falls into this category.
+The migration feature doesn't plan on supporting App Service Environment v1 within a classic VNet. See the [manual migration options](migration-alternatives.md) if your App Service Environment falls into this category.
The App Service platform will review your App Service Environment to confirm migration support. If your scenario doesn't pass all validation checks, you won't be able to migrate at this time using the migration feature. If your environment is in an unhealthy or suspended state, you won't be able to migrate until you make the needed updates.
-## Overview of the migration process
+## Overview of the migration process using the migration feature
Migration consists of a series of steps that must be followed in order. Key points are given for a subset of the steps. It's important to understand what will happen during these steps and how your environment and apps will be impacted. After reviewing the following information and when you're ready to migrate, follow the [step-by-step guide](how-to-migrate.md).
Once the new IPs are created, you'll have the new default outbound to the intern
### Delegate your App Service Environment subnet
-App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. If the App Service Environment's subnet isn't delegated or it's delegated to a different resource, migration will fail.
+App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. Migration will not succeed if the App Service Environment's subnet isn't delegated or it's delegated to a different resource.
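If the delegation is missing, a minimal Azure CLI sketch for adding it is shown here; the resource group, virtual network, and subnet names are placeholders you'd replace with your own.
```azurecli
# Add the Microsoft.Web/hostingEnvironments delegation to the App Service Environment subnet
# my-ase-rg, my-ase-vnet, and my-ase-subnet are placeholder names
az network vnet subnet update \
  --resource-group my-ase-rg \
  --vnet-name my-ase-vnet \
  --name my-ase-subnet \
  --delegations Microsoft.Web/hostingEnvironments
```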
### Migrate to App Service Environment v3
There's no cost to migrate your App Service Environment. You'll stop being charg
## Frequently asked questions - **What if migrating my App Service Environment is not currently supported?**
- You won't be able migrate using the migration feature at this time. If you have an unsupported environment and want to migrate immediately, see [migration alternatives](migration-alternatives.md).
+   You won't be able to migrate using the migration feature at this time. If you have an unsupported environment and want to migrate immediately, see the [manual migration options](migration-alternatives.md). This article will be updated as additional regions and supported scenarios become available.
- **Will I experience downtime during the migration?**
- Yes, you should expect about one hour of downtime during the migration step so plan accordingly. If downtime isn't an option for you, see [migration alternatives](migration-alternatives.md).
+   Yes, you should expect about one hour of downtime during the migration step, so plan accordingly. If downtime isn't an option for you, see the [manual migration options](migration-alternatives.md).
- **Will I need to do anything to my apps after the migration to get them running on the new App Service Environment?** No, all of your apps running on the old environment will be automatically migrated to the new environment and run like before. No user input is needed. - **What if my App Service Environment has a custom domain suffix?**
- You won't be able migrate using the migration feature at this time. If you have an unsupported environment and want to migrate immediately, see [migration alternatives](migration-alternatives.md).
+   You won't be able to migrate using the migration feature at this time. If you have an unsupported environment and want to migrate immediately, see the [manual migration options](migration-alternatives.md).
- **What if my App Service Environment is zone pinned?**
- Zone pinned App Service Environment is currently not a supported scenario for migration. When supported, zone pinned App Service Environments will be migrated to zone redundant App Service Environment v3.
+ Zone pinned App Service Environment is currently not a supported scenario for migration using the migration feature. When supported, zone pinned App Service Environments will be migrated to zone redundant App Service Environment v3.
- **What properties of my App Service Environment will change?** You'll now be on App Service Environment v3 so be sure to review the [features and feature differences](overview.md#feature-differences) compared to previous versions. For ILB App Service Environment, you'll keep the same ILB IP address. For internet facing App Service Environment, the public IP address and the outbound IP address will change. Note for internet facing App Service Environment, previously there was a single IP for both inbound and outbound. For App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses). - **What happens if migration fails or there is an unexpected issue during the migration?**
app-service Migration Alternatives https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/migration-alternatives.md
Title: Alternative methods for migrating to App Service Environment v3
-description: Migrate to App Service Environment v3 Without Using the Migration Feature
+ Title: Migrate to App Service Environment v3
+description: How to migrate your applications to App Service Environment v3
Previously updated : 1/28/2022 Last updated : 2/2/2022
-# Migrate to App Service Environment v3 without using the migration feature
+# Migrate to App Service Environment v3
> [!NOTE]
-> The App Service Environment v3 [migration feature](migrate.md) is now available for a set of supported environment configurations. Consider that feature which provides an automated migration path to [App Service Environment v3](overview.md).
+> The App Service Environment v3 [migration feature](migrate.md) is now available for a set of supported environment configurations in certain regions. Consider using that feature, which provides an automated migration path to [App Service Environment v3](overview.md).
>
-If you're currently using App Service Environment v1 or v2, you have the opportunity to migrate your workloads to [App Service Environment v3](overview.md). App Service Environment v3 has [advantages and feature differences](overview.md#feature-differences) that provide enhanced support for your workloads and can reduce overall costs. Consider using the [migration feature](migrate.md) if your environment falls into one of the [supported scenarios](migrate.md#supported-scenarios). If your environment isn't currently supported by the migration feature, you can wait for support if your scenario is listed in the [upcoming supported scenarios](migrate.md#migration-feature-limitations). Otherwise, you can choose to use one of the alternative migration options given in this article.
+If you're currently using App Service Environment v1 or v2, you have the opportunity to migrate your workloads to [App Service Environment v3](overview.md). App Service Environment v3 has [advantages and feature differences](overview.md#feature-differences) that provide enhanced support for your workloads and can reduce overall costs. Consider using the [migration feature](migrate.md) if your environment falls into one of the [supported scenarios](migrate.md#supported-scenarios). If your environment isn't currently supported by the migration feature, you can wait for support if your scenario is listed in the [upcoming supported scenarios](migrate.md#migration-feature-limitations). Otherwise, you can choose to use one of the manual migration options given in this article.
-If your App Service Environment [won't be supported for migration](migrate.md#migration-feature-limitations) with the migration feature, you must use one of the alternative methods to migrate to App Service Environment v3.
+If your App Service Environment [won't be supported for migration](migrate.md#migration-feature-limitations) with the migration feature, you must use one of the manual methods to migrate to App Service Environment v3.
## Prerequisites
Scenario: An existing app running on an App Service Environment v1 or App Servic
For any migration method that doesn't use the [migration feature](migrate.md), you'll need to [create the App Service Environment v3](creation.md) and a new subnet using the method of your choice. There are [feature differences](overview.md#feature-differences) between App Service Environment v1/v2 and App Service Environment v3 as well as [networking changes](networking.md) that will involve new (and for internet-facing environments, additional) IP addresses. You'll need to update any infrastructure that relies on these IPs.
-Note that multiple App Service Environments can't exist in a single subnet. If you need to use your existing subnet for your new App Service Environment v3, you'll need to delete the existing App Service Environment before you create a new one. For this scenario, the recommended migration method is to [back up your apps and then restore them](#back-up-and-restore) on the new environment after it gets created and configured. There will be application downtime during this process because of the time it takes to delete the old environment, create the new App Service Environment v3, configure any infrastructure and connected resources to work with the new environment, and deploy your apps onto the new environment.
+Note that multiple App Service Environments can't exist in a single subnet. If you need to use your existing subnet for your new App Service Environment v3, you'll need to delete the existing App Service Environment before you create a new one. For this scenario, the recommended migration method is to [back up your apps and then restore them](#back-up-and-restore) in the new environment after it gets created and configured. There will be application downtime during this process because of the time it takes to delete the old environment, create the new App Service Environment v3, configure any infrastructure and connected resources to work with the new environment, and deploy your apps onto the new environment.
### Checklist before migrating apps
To clone an app using the [Azure portal](https://www.portal.azure.com), navigate
1. Use your App Service Environment v3 name for **Region**. 1. Choose whether or not to clone your deployment source. 1. You can use an existing Windows **App Service plan** from your new environment if you created one already, or create a new one. The available Windows App Service plans in your new App Service Environment v3, if any, will be listed in the dropdown.
-1. Modify **SKU and size** as needed using one of the Isolated v2 options if creating a new App Service plan. Note App Service Environment v3 uses Isolated v2 plans, which have more memory and CPU per corresponding instance size compared to the Isolated plan. For more information, see [App Service Environment v3 pricing](overview.md#pricing).
+1. Modify **SKU and size** as needed using one of the Isolated v2 options if creating a new App Service plan. Note App Service Environment v3 uses Isolated v2 plans, which have more memory and CPU per corresponding instance size compared to the Isolated plan. For more information, see [App Service Environment v3 SKU details](overview.md#pricing).
![clone sample](./media/migration/portal-clone-sample.png)
To clone an app using the [Azure portal](https://www.portal.azure.com), navigate
If the above features don't support your apps or you're looking to take a more manual route, you have the option of deploying your apps following the same process you used for your existing App Service Environment. At this time, all deployment methods except FTP are supported on App Service Environment v3. You don't need to make updates when you deploy your apps to your new environment unless you want to make changes or take advantage of App Service Environment v3's dedicated features.
-You can export [Azure Resource Manager (ARM) templates](../../azure-resource-manager/templates/overview.md) of your existing apps, App Service plans, and any other supported resources and deploy them in your new environment. To export a template for just your app, head over to your App Service and go to **Export template** under **Automation**.
+You can export [Azure Resource Manager (ARM) templates](../../azure-resource-manager/templates/overview.md) of your existing apps, App Service plans, and any other supported resources and deploy them in or with your new environment. To export a template for just your app, head over to your App Service and go to **Export template** under **Automation**.
![export from toc](./media/migration/export-toc.png)
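If you'd rather script the export than use the portal, a sketch using the Azure CLI follows; the resource group name is a placeholder, and exporting at the resource group level captures all supported resources in that group.
```azurecli
# Export an ARM template for all supported resources in a resource group
# my-ase-apps-rg is a placeholder resource group name
az group export --name my-ase-apps-rg > ase-apps-template.json
```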
app-service Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/quickstart-python.md
To run the application locally:
:::image type="content" source="./media/quickstart-python/run-flask-app-localhost.png" alt-text="Screenshot of the Flask app running locally in a browser":::
-Having issues? [Let us know](https://aka.ms/FlaskCLIQuickstartHelp).
+Having issues? [Let us know](https://aka.ms/PythonAppServiceQuickstartFeedback).
### [Django](#tab/django)
Having issues? [Let us know](https://aka.ms/FlaskCLIQuickstartHelp).
:::image type="content" source="./media/quickstart-python/run-django-app-localhost.png" alt-text="Screenshot of the Django app running locally in a browser":::
-Having issues? [Let us know](https://aka.ms/FlaskCLIQuickstartHelp).
+Having issues? [Let us know](https://aka.ms/PythonAppServiceQuickstartFeedback).
To create Azure resources in VS Code, you must have the [Azure Tools extension p
-Having issues? [Let us know](https://aka.ms/FlaskCLIQuickstartHelp).
+Having issues? [Let us know](https://aka.ms/PythonAppServiceQuickstartFeedback).
## 3 - Deploy your application code to Azure
To deploy a web app from VS Code, you must have the [Azure Tools extension pack]
-Having issues? Refer first to the [Troubleshooting guide](/azure/app-service/configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/FlaskCLIQuickstartHelp).
+Having issues? Refer first to the [Troubleshooting guide](/azure/app-service/configure-language-python.md#troubleshooting); otherwise, [let us know](https://aka.ms/PythonAppServiceQuickstartFeedback).
## 4 - Browse to the app
The Python sample code is running a Linux container in App Service using a built
**Congratulations!** You have deployed your Python app to App Service.
-Having issues? Refer first to the [Troubleshooting guide](/azure/app-service/configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/FlaskCLIQuickstartHelp).
+Having issues? Refer first to the [Troubleshooting guide](/azure/app-service/configure-language-python.md#troubleshooting); otherwise, [let us know](https://aka.ms/PythonAppServiceQuickstartFeedback).
## 5 - Stream logs
Starting Live Log Stream
-
-Having issues? Refer first to the [Troubleshooting guide](/azure/app-service/configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/FlaskCLIQuickstartHelp).
+Having issues? Refer first to the [Troubleshooting guide](/azure/app-service/configure-language-python.md#troubleshooting); otherwise, [let us know](https://aka.ms/PythonAppServiceQuickstartFeedback).
## Clean up resources
The `--no-wait` argument allows the command to return before the operation is co
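For reference, a sketch of the kind of cleanup command this refers to is shown below; the resource group name is a placeholder for whatever group you created for the quickstart.
```azurecli
# Delete the quickstart resource group without waiting for the operation to finish
# my-quickstart-rg is a placeholder resource group name
az group delete --name my-quickstart-rg --yes --no-wait
```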
-Having issues? [Let us know](https://aka.ms/FlaskCLIQuickstartHelp).
+Having issues? [Let us know](https://aka.ms/PythonAppServiceQuickstartFeedback).
## Next steps
automation Automation Create Alert Triggered Runbook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-create-alert-triggered-runbook.md
Ensure your VM is running. Navigate to the runbook **Stop-AzureVmInResponsetoVMA
:::image type="content" source="./media/automation-create-alert-triggered-runbook/job-result-portal.png" alt-text="Showing output from job.":::
+## Common Azure VM management operations
+
+Azure Automation provides scripts for common Azure VM management operations, such as restarting, stopping, deleting, and scaling VMs up or down, in the Runbook gallery. The scripts are also available in the Azure Automation [GitHub repository](https://github.com/azureautomation). You can use these scripts as described in the preceding steps.
+
+|**Azure VM management operations** | **Details**|
+| --- | --- |
[Stop-Azure-VM-On-Alert](https://github.com/azureautomation/Stop-Azure-VM-On-Alert) | This runbook will stop an Azure Resource Manager VM in response to an Azure alert trigger. </br></br> Input is alert data with the information needed to identify which VM to stop.</br></br> The runbook must be called from an Azure alert via a webhook. </br></br> The latest version of the Az module should be added to the Automation account. </br></br> Managed identity should be enabled and contributor access to the Automation account should be given.
[Restart-Azure-VM-On-Alert](https://github.com/azureautomation/Restart-Azure-VM-On-Alert) | This runbook will restart an Azure Resource Manager VM in response to an Azure alert trigger. </br></br> Input is alert data with the information needed to identify which VM to restart.</br></br> The runbook must be called from an Azure alert via a webhook. </br></br> The latest version of the Az module should be added to the Automation account. </br></br> Managed identity should be enabled and contributor access to the Automation account should be given.
[Delete-Azure-VM-On-Alert](https://github.com/azureautomation/Delete-Azure-VM-On-Alert) | This runbook will delete an Azure Resource Manager VM in response to an Azure alert trigger. </br></br> Input is alert data with the information needed to identify which VM to delete.</br></br> The runbook must be called from an Azure alert via a webhook. </br></br> The latest version of the Az module should be added to the Automation account. </br></br> Managed identity should be enabled and contributor access to the Automation account should be given.
[ScaleDown-Azure-VM-On-Alert](https://github.com/azureautomation/ScaleDown-Azure-VM-On-Alert) | This runbook will scale down an Azure Resource Manager VM in response to an Azure alert trigger. </br></br> Input is alert data with the information needed to identify which VM to scale down.</br></br> The runbook must be called from an Azure alert via a webhook. </br></br> The latest version of the Az module should be added to the Automation account. </br></br> Managed identity should be enabled and contributor access to the Automation account should be given.
[ScaleUp-Azure-VM-On-Alert](https://github.com/azureautomation/ScaleUp-Azure-VM-On-Alert) | This runbook will scale up an Azure Resource Manager VM in response to an Azure alert trigger. </br></br> Input is alert data with the information needed to identify which VM to scale up.</br></br> The runbook must be called from an Azure alert via a webhook. </br></br> The latest version of the Az module should be added to the Automation account. </br></br> Managed identity should be enabled and contributor access to the Automation account should be given.
+ ## Next steps
-* To discover different ways to start a runbook, see [Start a runbook](./start-runbooks.md).
-* To create an activity log alert, see [Create activity log alerts](../azure-monitor/alerts/activity-log-alerts.md).
-* To learn how to create a near real-time alert, see [Create an alert rule in the Azure portal](../azure-monitor/alerts/alerts-metric.md?toc=/azure/azure-monitor/toc.json).
+* Discover different ways to start a runbook in [Start a runbook](./start-runbooks.md).
+* Create an activity log alert by following [Create activity log alerts](../azure-monitor/alerts/activity-log-alerts.md).
+* Learn how to create a near real-time alert in [Create an alert rule in the Azure portal](../azure-monitor/alerts/alerts-metric.md?toc=/azure/azure-monitor/toc.json).
azure-arc Create Complete Managed Instance Directly Connected https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-complete-managed-instance-directly-connected.md
To quickly create a Kubernetes cluster, use Azure Kubernetes Services (AKS).
1. Create a resource group, or specify an existing resource group. 1. Specify a cluster name 1. Specify a region
- 1. Under **Availability zones**, select **None**.
+ 1. Under **Availability zones**, remove all selected zones. You should not specify any zones.
1. Verify the Kubernetes version. For minimum supported version, see [Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md). 1. Under **Node size**, select a node size for your cluster based on the [Sizing guidance](sizing-guidance.md). 1. For **Scale method**, select **Manual**.
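If you prefer the Azure CLI to the portal steps above, a sketch of an equivalent cluster creation follows; the resource group, cluster name, region, node size, and node count are placeholders, and the node size should still follow the sizing guidance.
```azurecli
# Create a resource group and an AKS cluster without specifying availability zones
# arc-data-rg, arc-data-aks, eastus, and the node settings are placeholder values
az group create --name arc-data-rg --location eastus
az aks create \
  --resource-group arc-data-rg \
  --name arc-data-aks \
  --node-count 3 \
  --node-vm-size Standard_D8s_v3 \
  --generate-ssh-keys
```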
azure-arc Deploy Active Directory Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/deploy-active-directory-sql-managed-instance.md
To support Active Directory authentication on SQL Managed Instance, new spec fie
Prepare the following yaml specification to deploy a SQL Managed Instance. The fields described above should be specified in the spec. ```yaml
+apiVersion: v1
+data:
+ password: <your base64 encoded password>
+ username: <your base64 encoded username>
+kind: Secret
+metadata:
+ name: my-login-secret
+type: Opaque
+ apiVersion: sql.arcdata.microsoft.com/v2 kind: SqlManagedInstance metadata:
spec:
keytabSecret: <Keytab secret name> primary:
- type: NodePort
+ type: LoadBalancer
dnsName: <Endpoint DNS name> port: <Endpoint port number> storage:
azure-arc Tutorial Akv Secrets Provider https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/tutorial-akv-secrets-provider.md
This output should not include AKV secrets provider. If you don't have any other
## Reconciliation and Troubleshooting Azure Key Vault secrets provider extension is self-healing. All extension components that are deployed on the cluster at the time of extension installation are reconciled to their original state if somebody intentionally or unintentionally changes or deletes them. The only exception is CRDs. If the CRDs are deleted, they are not reconciled. You can bring them back by using the `az k8s-extension create` command again and providing the existing extension instance name, as shown in the sketch below.
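A sketch of that command follows; the cluster, resource group, and extension instance names are placeholders, this assumes an Arc-enabled (connected) cluster, and the extension type string shown is the one commonly used for this extension.
```azurecli
# Recreate the Azure Key Vault secrets provider extension (restores its CRDs)
# my-arc-cluster, my-arc-rg, and akvsecretsprovider are placeholder names
az k8s-extension create \
  --cluster-name my-arc-cluster \
  --resource-group my-arc-rg \
  --cluster-type connectedClusters \
  --extension-type Microsoft.AzureKeyVaultSecretsProvider \
  --name akvsecretsprovider
```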
-Some common issues and troubleshooting steps for Azure Key Vault secrets provider are captured in the open source documentation [here](https://azure.github.io/secrets-store-csi-driver-provider-azure/troubleshooting/) for your reference.
+Some common issues and troubleshooting steps for Azure Key Vault secrets provider are captured in the open source documentation [here](https://azure.github.io/secrets-store-csi-driver-provider-azure/docs/troubleshooting/) for your reference.
Additional troubleshooting steps that are specific to the Secrets Store CSI Driver Interface can be referenced [here](https://secrets-store-csi-driver.sigs.k8s.io/troubleshooting.html).
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
For usage details, see the following documents:
* [Migrate to Flux v2 Helm from Flux v1 Helm](https://fluxcd.io/docs/migration/helm-operator-migration/) * [Flux Helm controller](https://fluxcd.io/docs/components/helm/)
+### Use the GitRepository source for Helm charts
+
+If your Helm charts are stored in the `GitRepository` source that you configure as part of the `fluxConfigurations` resource, you can add an annotation to your HelmRelease yaml to indicate that the configured source should be used as the source of the Helm charts. The annotation is `clusterconfig.azure.com/use-managed-source: "true"`, and here is a usage example:
+
+```yaml
+
+apiVersion: helm.toolkit.fluxcd.io/v2beta1
+kind: HelmRelease
+metadata:
+ name: somename
+ namespace: somenamespace
+ annotations:
+ clusterconfig.azure.com/use-managed-source: "true"
+spec:
+ ...
+```
+
+By using this annotation, the HelmRelease that is deployed will be patched with a reference to the configured source. Note that only the GitRepository source is currently supported.
+ ## Migrate from Flux v1 If you've been using Flux v1 in Azure Arc-enabled Kubernetes or AKS clusters and want to migrate to using Flux v2 in the same clusters, you first need to delete the Flux v1 `sourceControlConfigurations` from the clusters. The `microsoft.flux` cluster extension won't be installed if there are `sourceControlConfigurations` resources installed in the cluster.
azure-functions Functions Bindings Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-service-bus.md
When using service bus extension version 5.x and higher, the following global co
|||| |prefetchCount|0|Gets or sets the number of messages that the message receiver can simultaneously request.| |autoCompleteMessages|true|Determines whether or not to automatically complete messages after successful execution of the function and should be used in place of the `autoComplete` configuration setting.|
-|maxAutoLockRenewalDuration|00:05:00|The maximum duration within which the message lock will be renewed automatically. This only applies for functions that receive a batch of messages.|
-|maxConcurrentCalls|16|The maximum number of concurrent calls to the callback that the message pump should initiate per scaled instance. By default, the Functions runtime processes multiple messages concurrently.|
-|maxConcurrentSessions|8|The maximum number of sessions that can be handled concurrently per scaled instance.|
-|maxMessages|1000|The maximum number of messages that will be passed to each function call. This only applies for functions that receive a batch of messages.|
-|sessionIdleTimeout|n/a|The maximum amount of time to wait for a message to be received for the currently active session. After this time has elapsed, the processor will close the session and attempt to process another session.|
+|maxAutoLockRenewalDuration|00:05:00|The maximum duration within which the message lock will be renewed automatically. This setting only applies for functions that receive a single message at a time.|
+|maxConcurrentCalls|16|The maximum number of concurrent calls to the callback that the message pump should initiate per scaled instance. By default, the Functions runtime processes multiple messages concurrently. This setting only applies for functions that receive a single message at a time.|
+|maxConcurrentSessions|8|The maximum number of sessions that can be handled concurrently per scaled instance. This setting only applies for functions that receive a single message at a time.|
+|maxMessages|1000|The maximum number of messages that will be passed to each function call. This setting only applies for functions that receive a batch of messages.|
+|sessionIdleTimeout|n/a|The maximum amount of time to wait for a message to be received for the currently active session. After this time has elapsed, the processor will close the session and attempt to process another session. This setting only applies for functions that receive a single message at a time.|
|enableCrossEntityTransactions|false|Whether or not to enable transactions that span multiple entities on a Service Bus namespace.| ### Retry settings
azure-functions Functions Develop Vs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-develop-vs.md
Visual Studio doesn't automatically upload the settings in local.settings.json w
Your code can also read the function app settings values as environment variables. For more information, see [Environment variables](functions-dotnet-class-library.md#environment-variables).
-## Configure your build output settings
-
-When building an Azure Functions project, the build tools optimize the output so that only one copy of any assemblies that are shared with the functions runtime are preserved. The result is an optimized build that saves as much space as possible. However, when you move to a more recent version of any of your project assemblies, the build tools might not know that these assemblies must be preserved. To make sure that these assemblies are preserved during the optimization process, you can specify them using `FunctionsPreservedDependencies` elements in the project (.csproj) file:
-
-```xml
- <ItemGroup>
- <FunctionsPreservedDependencies Include="Microsoft.AspNetCore.Http.dll" />
- <FunctionsPreservedDependencies Include="Microsoft.AspNetCore.Http.Extensions.dll" />
- <FunctionsPreservedDependencies Include="Microsoft.AspNetCore.Http.Features.dll" />
- </ItemGroup>
-```
- ## Configure the project for local development The Functions runtime uses an Azure Storage account internally. For all trigger types other than HTTP and webhooks, set the `Values.AzureWebJobsStorage` key to a valid Azure Storage account connection string. Your function app can also use the [Azure Storage Emulator](../storage/common/storage-use-emulator.md) for the `AzureWebJobsStorage` connection setting that's required by the project. To use the emulator, set the value of `AzureWebJobsStorage` to `UseDevelopmentStorage=true`. Change this setting to an actual storage account connection string before deployment.
azure-functions Functions Identity Based Connections Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-identity-based-connections-tutorial.md
Last updated 10/20/2021
# Tutorial: Create a function app that connects to Azure services using identities instead of secrets
-This tutorial shows you how to configure a function app using Azure Active Directory identities instead of secrets or connection strings, where possible. Using identities helps you avoid accidentally leaking sensitive secrets and can provide better visibility into how data is accessed. To learn more about identity-based connections, see [configure an identity-based connection.](functions-reference.md#configure-an-identity-based-connection).
+This tutorial shows you how to configure a function app using Azure Active Directory identities instead of secrets or connection strings, where possible. Using identities helps you avoid accidentally leaking sensitive secrets and can provide better visibility into how data is accessed. To learn more about identity-based connections, see [configure an identity-based connection](functions-reference.md#configure-an-identity-based-connection).
While the procedures shown work generally for all languages, this tutorial currently supports C# class library functions on Windows specifically.
In order to use Azure Key Vault, your app will need to have an identity that can
1. Select **Save**. It might take a minute or two for the role to show up when you refresh the role assignments list for the identity.
-The identity will now be able to read secrets stored in the vault. Later in the tutorial, you will add additional role assignments for different purposes.
+The identity will now be able to read secrets stored in the key vault. Later in the tutorial, you will add additional role assignments for different purposes.
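If you script the role assignment instead of using the portal, a rough Azure CLI sketch looks like the following; the principal ID and key vault resource ID are placeholders, and the role name is an assumption that applies when the vault uses Azure RBAC.
```azurecli
# Grant the function app's system-assigned identity read access to secrets in the vault
# <principal-object-id> and <key-vault-resource-id> are placeholders; the role name assumes Azure RBAC
az role assignment create \
  --assignee <principal-object-id> \
  --role "Key Vault Secrets User" \
  --scope <key-vault-resource-id>
```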
### Generate a template for creating a function app
Next you will update your function app to use its system-assigned identity when
| Option | Suggested value | Description | | | - | -- | | **Name** | AzureWebJobsStorage__accountName | Update the name from **AzureWebJobsStorage** to the exact name `AzureWebJobsStorage__accountName`. This setting tells the host to use the identity instead of looking for a stored secret. The new setting uses a double underscore (`__`), which is a special character in application settings. |
- | **Value** | Your account name | Update the name from the connection string to just your **AccountName**. |
+ | **Value** | Your account name | Update the name from the connection string to just your **StorageAccountName**. |
This configuration will let the system know that it should use an identity to connect to the resource.
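A rough Azure CLI equivalent of this portal change is sketched below; the function app, resource group, and storage account names are placeholders.
```azurecli
# Remove the connection-string setting and add the identity-based account name setting
# my-func-app, my-rg, and mystorageaccount are placeholder names
az functionapp config appsettings delete \
  --name my-func-app --resource-group my-rg \
  --setting-names AzureWebJobsStorage
az functionapp config appsettings set \
  --name my-func-app --resource-group my-rg \
  --settings AzureWebJobsStorage__accountName=mystorageaccount
```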
azure-functions Functions Infrastructure As Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-infrastructure-as-code.md
On Linux, the function app must have its `kind` set to `functionapp,linux`, and
} ```
-The [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](functions-app-settings.md#website_contentazurefileconnectionstring) and [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare) settings aren't supported on Linux.
+The [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](functions-app-settings.md#website_contentazurefileconnectionstring) and [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare) settings aren't supported on a Linux Consumption plan.
<a name="premium"></a> ## Deploy on Premium plan
azure-functions Functions Openapi Definition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-openapi-definition.md
To create an API Management instance linked to your function app:
![Create new API Management service](media/functions-openapi-definitions/new-apim-service-openapi.png)
-1. Choose **Create** to create the API Management instance, which may take several minutes.
+1. Choose **Export** to create the API Management instance, which may take several minutes.
1. After Azure creates the instance, it enables the **Enable Application Insights** option on the page. Select it to send logs to the same place as the function application.
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-reference-python.md
To learn more about logging, see [Monitor Azure Functions](functions-monitoring.
By default, the Functions runtime collects logs and other telemetry data generated by your functions. This telemetry ends up as traces in Application Insights. Request and dependency telemetry for certain Azure services are also collected by default by [triggers and bindings](functions-triggers-bindings.md#supported-bindings). To collect custom request and custom dependency telemetry outside of bindings, you can use the [OpenCensus Python Extensions](https://github.com/census-ecosystem/opencensus-python-extensions-azure), which sends custom telemetry data to your Application Insights instance. You can find a list of supported extensions at the [OpenCensus repository](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib). >[!NOTE]
->To use the OpenCensus Python extensions, you need to enable [Python worker extensions](#python-worker-extensions) in your function app by setting `PYTHON_ENABLE_WORKER_EXTENSIONS` to `1` in your [application settings](functions-how-to-use-azure-function-app-settings.md#settings).
+>To use the OpenCensus Python extensions, you need to enable [Python worker extensions](#python-worker-extensions) in your function app by setting `PYTHON_ENABLE_WORKER_EXTENSIONS` to `1`. You also need to switch to using the Application Insights connection string by adding the [`APPLICATIONINSIGHTS_CONNECTION_STRING`](functions-app-settings.md#applicationinsights_connection_string) setting to your [application settings](functions-how-to-use-azure-function-app-settings.md#settings), if it's not already there.
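A sketch of adding both settings with the Azure CLI follows; the app name, resource group, and connection string value are placeholders.
```azurecli
# Enable Python worker extensions and set the Application Insights connection string
# my-func-app and my-rg are placeholder names; use your own connection string value
az functionapp config appsettings set \
  --name my-func-app --resource-group my-rg \
  --settings PYTHON_ENABLE_WORKER_EXTENSIONS=1 \
  "APPLICATIONINSIGHTS_CONNECTION_STRING=<your-connection-string>"
```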
azure-functions Functions Run Local https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-run-local.md
The following considerations apply to project initialization:
+ When you don't provide a project name, the current folder is initialized.
-+ If you plan to publish your project to a custom Linux container, use the `--dockerfile` option to make sure that a Dockerfile is generated for your project. To learn more, see [Create a function on Linux using a custom image](functions-create-function-linux-custom-image.md).
++ If you plan to publish your project to a custom Linux container, use the `--docker` option to make sure that a Dockerfile is generated for your project. To learn more, see [Create a function on Linux using a custom image](functions-create-function-linux-custom-image.md). Certain languages may have additional considerations:
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Active Directory B2C](https://azure.microsoft.com/services/active-directory-b2c/) | &#x2705; | &#x2705; | | [Azure Active Directory Domain Services](https://azure.microsoft.com/services/active-directory-ds/) | &#x2705; | &#x2705; | | [Azure Active Directory Provisioning Service](../../active-directory/app-provisioning/user-provisioning.md)| &#x2705; | &#x2705; |
+| [Azure AD Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; |
| [Azure Advisor](https://azure.microsoft.com/services/advisor/) | &#x2705; | &#x2705; | | [Azure Analysis Services](https://azure.microsoft.com/services/analysis-services/) | &#x2705; | &#x2705; | | [Azure Arc-enabled Servers](../../azure-arc/servers/overview.md) | &#x2705; | &#x2705; | | [Azure Archive Storage](https://azure.microsoft.com/services/storage/archive/) | &#x2705; | &#x2705; |
-| [Azure Backup](https://azure.microsoft.com/services/backup/) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** |
+| [Azure Backup](https://azure.microsoft.com/services/backup/) | &#x2705; | &#x2705; |
| [Azure Bastion](https://azure.microsoft.com/services/azure-bastion/) | &#x2705; | &#x2705; | | [Azure Blueprints](https://azure.microsoft.com/services/blueprints/) | &#x2705; | &#x2705; | | [Azure Bot Service](/azure/bot-service/) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Database for MariaDB](https://azure.microsoft.com/services/mariadb/) | &#x2705; | &#x2705; | | [Azure Database for MySQL](https://azure.microsoft.com/services/mysql/) | &#x2705; | &#x2705; | | [Azure Database for PostgreSQL](https://azure.microsoft.com/services/postgresql/) | &#x2705; | &#x2705; |
-| [Azure Database Migration Service](https://azure.microsoft.com/services/database-migration/) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** |
+| [Azure Database Migration Service](https://azure.microsoft.com/services/database-migration/) | &#x2705; | &#x2705; |
| [Azure Databricks](https://azure.microsoft.com/services/databricks/) **&ast;&ast;** | &#x2705; | &#x2705; | | [Azure DDoS Protection](https://azure.microsoft.com/services/ddos-protection/) | &#x2705; | &#x2705; | | [Azure Dedicated HSM](https://azure.microsoft.com/services/azure-dedicated-hsm/) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Functions](https://azure.microsoft.com/services/functions/) | &#x2705; | &#x2705; | | [Azure Health Bot](/healthbot/) | &#x2705; | &#x2705; | | [Azure HDInsight](https://azure.microsoft.com/services/hdinsight/) | &#x2705; | &#x2705; |
-| [Azure Healthcare APIs](https://azure.microsoft.com/services/healthcare-apis/) (formerly Azure API for FHIR) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** |
+| [Azure Healthcare APIs](https://azure.microsoft.com/services/healthcare-apis/) (formerly Azure API for FHIR) | &#x2705; | &#x2705; |
| [Azure HPC Cache](https://azure.microsoft.com/services/hpc-cache/) | &#x2705; | &#x2705; | | [Azure Immersive Reader](https://azure.microsoft.com/services/immersive-reader/) | &#x2705; | &#x2705; | | [Azure Information Protection](https://azure.microsoft.com/services/information-protection/) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Marketplace portal](https://azuremarketplace.microsoft.com/) | &#x2705; | &#x2705; | | [Azure Maps](https://azure.microsoft.com/services/azure-maps/) | &#x2705; | &#x2705; | | [Azure Media Services](https://azure.microsoft.com/services/media-services/) | &#x2705; | &#x2705; |
-| [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** |
+| [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | &#x2705; | &#x2705; |
| [Azure Monitor](https://azure.microsoft.com/services/monitor/) (incl. [Application Insights](../../azure-monitor/app/app-insights-overview.md), [Log Analytics](../../azure-monitor/logs/data-platform-logs.md), and [Application Change Analysis](../../azure-monitor/app/change-analysis.md)) | &#x2705; | &#x2705; | | [Azure NetApp Files](https://azure.microsoft.com/services/netapp/) | &#x2705; | &#x2705; | | [Azure Open Datasets](https://azure.microsoft.com/services/open-datasets/) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Service Fabric](https://azure.microsoft.com/services/service-fabric/) | &#x2705; | &#x2705; | | [Azure Service Health](https://azure.microsoft.com/features/service-health/) | &#x2705; | &#x2705; | | [Azure Service Manager (RDFE)](/previous-versions/azure/ee460799(v=azure.100)) | &#x2705; | &#x2705; |
-| [Azure SignalR Service](https://azure.microsoft.com/services/signalr-service/) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** |
+| [Azure SignalR Service](https://azure.microsoft.com/services/signalr-service/) | &#x2705; | &#x2705; |
| [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/) | &#x2705; | &#x2705; | | [Azure Sphere](https://azure.microsoft.com/services/azure-sphere/) | &#x2705; | &#x2705; | | [Azure SQL Database](https://azure.microsoft.com/services/sql-database/) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Cloud Shell](https://azure.microsoft.com/features/cloud-shell/) | &#x2705; | &#x2705; | | [Cognitive | [Cognitive
-| [Cognitive Services Containers](../../cognitive-services/cognitive-services-container-support.md) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** |
+| [Cognitive Services Containers](../../cognitive-services/cognitive-services-container-support.md) | &#x2705; | &#x2705; |
| [Cognitive | [Cognitive | [Cognitive
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Customer Lockbox](../../security/fundamentals/customer-lockbox-overview.md) | &#x2705; | &#x2705; | | [Data Factory](https://azure.microsoft.com/services/data-factory/) | &#x2705; | &#x2705; | | [Dataverse](/powerapps/maker/common-data-service/data-platform-intro) (incl. [Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/export-to-data-lake)) | &#x2705; | &#x2705; |
-| [Dynamics 365 Chat (Omnichannel Engagement Hub)](/dynamics365/omnichannel/introduction-omnichannel) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** |
+| [Dynamics 365 Chat (Omnichannel Engagement Hub)](/dynamics365/omnichannel/introduction-omnichannel) | &#x2705; | &#x2705; |
| [Dynamics 365 Commerce](https://dynamics.microsoft.com/commerce/overview/)| &#x2705; | &#x2705; | | [Dynamics 365 Customer Service](https://dynamics.microsoft.com/customer-service/overview/)| &#x2705; | &#x2705; | | [Dynamics 365 Field Service](https://dynamics.microsoft.com/field-service/overview/)| &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Import/Export](https://azure.microsoft.com/services/storage/import-export/) | &#x2705; | &#x2705; | | [Key Vault](https://azure.microsoft.com/services/key-vault/) | &#x2705; | &#x2705; | | [Load Balancer](https://azure.microsoft.com/services/load-balancer/) | &#x2705; | &#x2705; |
-| [Microsoft 365 Defender](/microsoft-365/security/defender/) (formerly Microsoft Threat Protection) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** |
+| [Microsoft 365 Defender](/microsoft-365/security/defender/) (formerly Microsoft Threat Protection) | &#x2705; | &#x2705; |
| [Microsoft Azure Attestation](https://azure.microsoft.com/services/azure-attestation/)| &#x2705; | &#x2705; | | [Microsoft Azure Marketplace portal](https://azuremarketplace.microsoft.com/marketplace/)| &#x2705; | &#x2705; | | [Microsoft Azure portal](https://azure.microsoft.com/features/azure-portal/)| &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Microsoft Sentinel](https://azure.microsoft.com/services/azure-sentinel/) (incl. [UEBA](../../sentinel/identify-threats-with-entity-behavior-analytics.md#what-is-user-and-entity-behavior-analytics-ueba)) | &#x2705; | &#x2705; | | [Microsoft Stream](/stream/overview) | &#x2705; | &#x2705; | | [Microsoft Threat Experts](/microsoft-365/security/defender-endpoint/microsoft-threat-experts) | &#x2705; | &#x2705; |
-| [Multi-factor authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; |
| [Network Watcher](https://azure.microsoft.com/services/network-watcher/) (incl. [Traffic Analytics](../../network-watcher/traffic-analytics.md)) | &#x2705; | &#x2705; | | **Service** | **FedRAMP High** | **DoD IL2** | | [Notification Hubs](https://azure.microsoft.com/services/notification-hubs/) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Active Directory (Free and Basic)](../../active-directory/fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Active Directory (Premium P1 + P2)](../../active-directory/fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Active Directory Domain Services](https://azure.microsoft.com/services/active-directory-ds/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure AD Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Azure Advisor](https://azure.microsoft.com/services/advisor/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Analysis Services](https://azure.microsoft.com/services/analysis-services/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Archive Storage](https://azure.microsoft.com/services/storage/archive/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Backup](https://azure.microsoft.com/services/backup/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Bastion](https://azure.microsoft.com/services/azure-bastion/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Blueprints](https://azure.microsoft.com/services/blueprints/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Bot Service](/azure/bot-service/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Azure Bot Service](/azure/bot-service/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Cache for Redis](https://azure.microsoft.com/services/cache/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Cloud Services](https://azure.microsoft.com/services/cloud-services/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Cognitive Search](https://azure.microsoft.com/services/search/) (formerly Azure Search) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Database for PostgreSQL](https://azure.microsoft.com/services/postgresql/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Database Migration Service](https://azure.microsoft.com/services/database-migration/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Databricks](https://azure.microsoft.com/services/databricks/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure DDoS Protection](https://azure.microsoft.com/services/ddos-protection/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Azure DDoS Protection](https://azure.microsoft.com/services/ddos-protection/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Dedicated HSM](https://azure.microsoft.com/services/azure-dedicated-hsm/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure DevTest Labs](https://azure.microsoft.com/services/devtest-lab/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure DNS](https://azure.microsoft.com/services/dns/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure HPC Cache](https://azure.microsoft.com/services/hpc-cache/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Information Protection](https://azure.microsoft.com/services/information-protection/) **&ast;&ast;** | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Azure Lab Services](https://azure.microsoft.com/services/lab-services/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Lighthouse](https://azure.microsoft.com/services/azure-lighthouse/)| &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Policy](https://azure.microsoft.com/services/azure-policy/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Policy's guest configuration](../../governance/policy/concepts/guest-configuration.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Public IP](../../virtual-network/ip-services/public-ip-addresses.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Resource Graph](../../governance/resource-graph/overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Azure Resource Graph](../../governance/resource-graph/overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Scheduler](../../scheduler/scheduler-intro.md) (replaced by [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Service Fabric](https://azure.microsoft.com/services/service-fabric/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Stream Analytics](https://azure.microsoft.com/services/stream-analytics/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/export-to-data-lake) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Virtual Desktop](https://azure.microsoft.com/services/virtual-desktop/) (formerly Windows Virtual Desktop) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Azure Virtual Desktop](https://azure.microsoft.com/services/virtual-desktop/) (formerly Windows Virtual Desktop) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Web Application Firewall](https://azure.microsoft.com/services/web-application-firewall/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Batch](https://azure.microsoft.com/services/batch/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Cloud Shell](https://azure.microsoft.com/features/cloud-shell/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Cognitive | [Cognitive | [Cognitive
-| [Container Instances](https://azure.microsoft.com/services/container-instances/)| &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Container Instances](https://azure.microsoft.com/services/container-instances/)| &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Container Registry](https://azure.microsoft.com/services/container-registry/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Content Delivery Network](https://azure.microsoft.com/services/cdn/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Customer Lockbox](../../security/fundamentals/customer-lockbox-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Dynamics 365 Project Service Automation](/dynamics365/project-operations/psa/overview) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Dynamics 365 Sales](https://dynamics.microsoft.com/sales/overview/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Dynamics 365 Supply Chain Management](https://dynamics.microsoft.com/supply-chain-management/overview/) | &#x2705; | &#x2705; | | | |
-| [Event Grid](https://azure.microsoft.com/services/event-grid/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Event Grid](https://azure.microsoft.com/services/event-grid/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Event Hubs](https://azure.microsoft.com/services/event-hubs/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [GitHub AE](https://docs.github.com/en/github-ae@latest/admin/overview/about-github-ae) | &#x2705; | &#x2705; | &#x2705; | | | | [Import/Export](https://azure.microsoft.com/services/storage/import-export/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Microsoft Defender for Identity](/defender-for-identity/what-is) (formerly Azure Advanced Threat Protection) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Microsoft Defender for IoT](https://azure.microsoft.com/services/azure-defender-for-iot/) (formerly Azure Security for IoT) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Microsoft Graph](/graph/overview) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Microsoft Intune](/mem/intune/fundamentals/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Microsoft Intune](/mem/intune/fundamentals/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Microsoft Sentinel](https://azure.microsoft.com/services/azure-sentinel/) (formerly Azure Sentinel) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Microsoft Stream](/stream/overview) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Multi-factor authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Network Watcher](https://azure.microsoft.com/services/network-watcher/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Network Watcher Traffic Analytics](../../network-watcher/traffic-analytics.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Notification Hubs](https://azure.microsoft.com/services/notification-hubs/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
azure-government Documentation Government Plan Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-plan-security.md
The isolation of the Azure Government environment is achieved through a series o
- Physically isolated hardware - Physical barriers to the hardware using biometric devices and cameras - Conditional access (Azure RBAC, workflow)-- Specific credentials and multifactor authentication for logical access
+- Specific credentials and multi-factor authentication for logical access
- Infrastructure for Azure Government is located within the United States Within the Azure Government network, internal network system components are isolated from other system components through implementation of separate subnets and access control policies on management interfaces. Azure Government doesn't directly peer with the public internet or with the Microsoft corporate network. Azure Government directly peers to the commercial Microsoft Azure network, which has routing and transport capabilities to the Internet and the Microsoft Corporate network. Azure Government limits its exposed surface area by applying extra protections and communications capabilities of our commercial Azure network. In addition, Azure Government ExpressRoute (ER) uses peering with our customer's networks over non-Internet private circuits to route ER customer "DMZ" networks using specific Border Gateway Protocol (BGP)/AS peering as a trust boundary for application routing and associated policy enforcement.
Microsoft takes strong measures to protect your data from inappropriate access o
Microsoft engineers can be granted access to customer data using temporary credentials via **Just-in-Time (JIT)** access. There must be an incident logged in the Azure Incident Management system that describes the reason for access, approval record, what data was accessed, etc. This approach ensures that there's appropriate oversight for all access to customer data and that all JIT actions (consent and access) are logged for audit. Evidence that procedures have been established for granting temporary access for Azure personnel to customer data and applications upon appropriate approval for customer support or incident handling purposes is available from the Azure [SOC 2 Type 2 attestation report](/azure/compliance/offerings/offering-soc-2) produced by an independent third-party auditing firm.
-JIT access works with multifactor authentication that requires Microsoft engineers to use a smartcard to confirm their identity. All access to production systems is performed using Secure Admin Workstations (SAWs) that are consistent with published guidance on [securing privileged access](/security/compass/overview). Use of SAWs for access to production systems is required by Microsoft policy and compliance with this policy is closely monitored. These workstations use a fixed image with all software fully managed – only select activities are allowed and users cannot accidentally circumvent the SAW design since they don't have admin privileges on these machines. Access is permitted only with a smartcard and access to each SAW is limited to a specific set of users.
+JIT access works with multi-factor authentication that requires Microsoft engineers to use a smartcard to confirm their identity. All access to production systems is performed using Secure Admin Workstations (SAWs) that are consistent with published guidance on [securing privileged access](/security/compass/overview). Use of SAWs for access to production systems is required by Microsoft policy and compliance with this policy is closely monitored. These workstations use a fixed image with all software fully managed – only select activities are allowed and users cannot accidentally circumvent the SAW design since they don't have admin privileges on these machines. Access is permitted only with a smartcard and access to each SAW is limited to a specific set of users.
### Customer Lockbox
azure-maps Render Coverage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/render-coverage.md
Title: Render coverage | Microsoft Azure Maps
description: Learn whether Azure Maps renders various regions with detailed or simplified data. See the level it uses for raster-tile and vector-tile maps in those regions. Previously updated : 03/22/2019 Last updated : 01/14/2022
Azure Maps uses both raster tiles and vector tiles to create maps. At the lowest
However, Maps doesn't have the same level of information and accuracy for all regions. The following tables detail the level of information you can render for each region.
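As a quick illustration of the tile-based rendering these tables describe, the sketch below requests a single raster tile with PowerShell. This is only a sketch: the endpoint shape, `api-version`, and `tilesetId` value shown here are assumptions, so confirm them against the Render REST API reference before relying on them.

```powershell
# Hypothetical sketch: fetch one raster map tile from the Azure Maps Render service.
# Endpoint, api-version, and tilesetId are assumptions; verify against the Render REST reference.
$key = '<your-azure-maps-subscription-key>'
$url = "https://atlas.microsoft.com/map/tile?api-version=2.0&tilesetId=microsoft.base.road&zoom=6&x=32&y=21&subscription-key=$key"

# Save the returned PNG tile to disk for inspection.
Invoke-WebRequest -Uri $url -OutFile 'tile.png'
```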
-## Legend
+### Legend
-| Symbol | Meaning |
-|--||
-| Γ£ô | Region is represented with detailed data. |
-| ├ÿ | Region is represented with simplified data. |
--
-## Africa
--
-| Country/Region | Raster Tiles Unified | Vector Tiles Unified |
-| | :: | :: |
-| Algeria | Γ£ô | Γ£ô |
-| Angola | Γ£ô | Γ£ô |
-| Benin | Γ£ô | Γ£ô |
-| Botswana | Γ£ô | Γ£ô |
-| Burkina Faso | Γ£ô | Γ£ô |
-| Burundi | Γ£ô | Γ£ô |
-| Cabo Verde | Γ£ô | Γ£ô |
-| Cameroon | Γ£ô | Γ£ô |
-| Central African Republic | Γ£ô | ├ÿ |
-| Chad | Γ£ô | ├ÿ |
-| Comoros | Γ£ô | ├ÿ |
-| Democratic Republic of the Congo | Γ£ô | Γ£ô |
-| C├┤te d'Ivoire | Γ£ô | ├ÿ |
-| Djibouti | Γ£ô | ├ÿ |
-| Egypt | Γ£ô | Γ£ô |
-| Equatorial Guinea | Γ£ô | ├ÿ |
-| Eritrea | Γ£ô | ├ÿ |
-| Ethiopia | Γ£ô | ├ÿ |
-| Gabon | Γ£ô | Γ£ô |
-| Gambia | Γ£ô | ├ÿ |
-| Ghana | Γ£ô | Γ£ô |
-| Guinea | Γ£ô | ├ÿ |
-| Guinea-Bissau | Γ£ô | ├ÿ |
-| Kenya | Γ£ô | Γ£ô |
-| Lesotho | Γ£ô | Γ£ô |
-| Liberia | Γ£ô | ├ÿ |
-| Libya | Γ£ô | ├ÿ |
-| Madagascar | Γ£ô | ├ÿ |
-| Malawi | Γ£ô | Γ£ô |
-| Mali | Γ£ô | Γ£ô |
-| Mauritania | Γ£ô | Γ£ô |
-| Mauritius | Γ£ô | Γ£ô |
-| Mayotte | Γ£ô | Γ£ô |
-| Morocco | Γ£ô | Γ£ô |
-| Mozambique | Γ£ô | Γ£ô |
-| Namibia | Γ£ô | Γ£ô |
-| Niger | Γ£ô | Γ£ô |
-| Nigeria | Γ£ô | Γ£ô |
-| Réunion | ✓ | ✓ |
-| Rwanda | Γ£ô | Γ£ô |
-| Saint Helena, Ascension and Tristan da Cunha | Γ£ô | ├ÿ |
-| São Tomé and Príncipe | ✓ | Ø |
-| Senegal | Γ£ô | Γ£ô |
-| Sierra Leone | Γ£ô | Γ£ô |
-| Somalia | Γ£ô | Γ£ô |
-| South Africa | Γ£ô | Γ£ô |
-| South Sudan | Γ£ô | Γ£ô |
-| Sudan | Γ£ô | Γ£ô |
-| Swaziland | Γ£ô | Γ£ô |
-| United Republic of Tanzania | Γ£ô | Γ£ô |
-| Togo | Γ£ô | Γ£ô |
-| Tunisia | Γ£ô | Γ£ô |
-| Uganda | Γ£ô | Γ£ô |
-| Zambia | Γ£ô | Γ£ô |
-| Zimbabwe | Γ£ô | Γ£ô |
+| Symbol | Meaning |
+|--|-|
+| Γ£ô | Country is provided with detailed data. |
+| Γùæ | Country is provided with simplified data. |
+| Country is missing | Country data is not provided. |
## Americas
-| Country/Region | Raster Tiles Unified | Vector Tiles Unified |
-| | :: | :: |
-| Anguilla | Γ£ô | Γ£ô |
-| Antigua and Barbuda | Γ£ô | Γ£ô |
-| Argentina | Γ£ô | Γ£ô |
-| Aruba | Γ£ô | Γ£ô |
-| Bahamas | Γ£ô | Γ£ô |
-| Barbados | Γ£ô | Γ£ô |
-| Belize | Γ£ô | Γ£ô |
-| Bermuda | Γ£ô | Γ£ô |
-| Plurinational State of Bolivia | Γ£ô | Γ£ô |
-| Bonaire, Sint Eustatius, and Saba | Γ£ô | Γ£ô |
-| Brazil | Γ£ô | Γ£ô |
-| Canada | Γ£ô | Γ£ô |
-| Cayman Islands | Γ£ô | Γ£ô |
-| Chile | Γ£ô | Γ£ô |
-| Colombia | Γ£ô | Γ£ô |
-| Costa Rica | Γ£ô | Γ£ô |
-| Cuba | Γ£ô | Γ£ô |
-| Curaçao | ✓ | ✓ |
-| Dominica | Γ£ô | Γ£ô |
-| Dominican Republic | Γ£ô | Γ£ô |
-| Ecuador | Γ£ô | Γ£ô |
-| Falkland Islands (Malvinas) | Γ£ô | Γ£ô |
-| French Guiana | Γ£ô | Γ£ô |
-| Greenland | Γ£ô | ├ÿ |
-| Grenada | Γ£ô | Γ£ô |
-| Guadeloupe | Γ£ô | Γ£ô |
-| Guatemala | Γ£ô | Γ£ô |
-| Guyana | Γ£ô | Γ£ô |
-| Haiti | Γ£ô | Γ£ô |
-| Honduras | Γ£ô | Γ£ô |
-| Jamaica | Γ£ô | Γ£ô |
-| Martinique | Γ£ô | Γ£ô |
-| Mexico | Γ£ô | Γ£ô |
-| Montserrat | Γ£ô | Γ£ô |
-| Nicaragua | Γ£ô | Γ£ô |
-| Northern Mariana Islands | Γ£ô | Γ£ô |
-| Panama | Γ£ô | Γ£ô |
-| Paraguay | Γ£ô | Γ£ô |
-| Peru | Γ£ô | Γ£ô |
-| Puerto Rico | Γ£ô | Γ£ô |
-| Quebec (Canada) | Γ£ô | Γ£ô |
-| Saint Barthélemy | ✓ | ✓ |
-| Saint Kitts and Nevis | Γ£ô | Γ£ô |
-| Saint Lucia | Γ£ô | Γ£ô |
-| Saint Martin (French) | Γ£ô | Γ£ô |
-| Saint Pierre and Miquelon | Γ£ô | Γ£ô |
-| Saint Vincent and the Grenadines | Γ£ô | Γ£ô |
-| Sint Maarten (Dutch) | Γ£ô | Γ£ô |
-| South Georgia and the South Sandwich Islands | Γ£ô | Γ£ô |
-| Suriname | Γ£ô | Γ£ô |
-| Trinidad and Tobago | Γ£ô | Γ£ô |
-| Turks and Caicos Islands | Γ£ô | Γ£ô |
-| United States | Γ£ô | Γ£ô |
-| Uruguay | Γ£ô | Γ£ô |
-| Venezuela | Γ£ô | Γ£ô |
-| Virgin Islands, British | Γ£ô | Γ£ô |
-| Virgin Islands, U.S. | Γ£ô | Γ£ô |
-
-## Asia
-
-| Country/Region | Raster Tiles Unified | Vector Tiles Unified |
-| | :: | :: |
-| Afghanistan | | ├ÿ |
-| Bahrain | Γ£ô | Γ£ô |
-| Bangladesh | | ├ÿ |
-| Bhutan | | ├ÿ |
-| British Indian Ocean Territory | | ├ÿ |
-| Brunei | Γ£ô | Γ£ô |
-| Cambodia | | ├ÿ |
-| China | | ├ÿ |
-| Cocos (Keeling) Islands | | ├ÿ |
-| Democratic People's Republic of Korea | | ├ÿ |
-| Hong Kong SAR | Γ£ô | Γ£ô |
-| India | ├ÿ | Γ£ô |
-| Indonesia | Γ£ô | Γ£ô |
-| Iran | | ├ÿ |
-| Iraq | Γ£ô | Γ£ô |
-| Israel | | Γ£ô |
-| Japan | | ├ÿ |
-| Jordan | Γ£ô | Γ£ô |
-| Kazakhstan | | Γ£ô |
-| Kuwait | Γ£ô | Γ£ô |
-| Kyrgyzstan | | ├ÿ |
-| Lao People's Democratic Republic | | ├ÿ |
-| Lebanon | Γ£ô | Γ£ô |
-| Macao SAR | Γ£ô | Γ£ô |
-| Malaysia | Γ£ô | Γ£ô |
-| Maldives | | ├ÿ |
-| Mongolia | | ├ÿ |
-| Myanmar | | ├ÿ |
-| Nepal | | ├ÿ |
-| Oman | Γ£ô | Γ£ô |
-| Pakistan | | ├ÿ |
-| Philippines | Γ£ô | Γ£ô |
-| Qatar | Γ£ô | Γ£ô |
-| Republic of Korea | Γ£ô | ├ÿ |
-| Saudi Arabia | Γ£ô | Γ£ô |
-| Senkaku Islands | | Γ£ô |
-| Singapore | Γ£ô | Γ£ô|
-| Sri Lanka | | ├ÿ |
-| Syrian Arab Republic | | ├ÿ |
-| Taiwan | Γ£ô | Γ£ô |
-| Tajikistan | | ├ÿ |
-| Thailand | Γ£ô | Γ£ô |
-| Timor-Leste | | ├ÿ |
-| Turkmenistan | | ├ÿ |
-| United Arab Emirates | Γ£ô | Γ£ô |
-| United States Minor Outlying Islands | | ├ÿ |
-| Uzbekistan | | ├ÿ |
-| Vietnam | Γ£ô | Γ£ô |
-| Yemen | Γ£ô | Γ£ô |
-
-## Oceania
-
-| Country/Region | Raster Tiles Unified | Vector Tiles Unified |
-| | :: | :: |
-| American Samoa | | Γ£ô |
-| Australia | Γ£ô | Γ£ô |
-| Cook Islands | | ├ÿ |
-| Fiji | | ├ÿ |
-| French Polynesia | | ├ÿ |
-| Guam | Γ£ô | Γ£ô |
-| Kiribati | | ├ÿ |
-| Marshall Islands | | ├ÿ |
-| Micronesia | | ├ÿ |
-| Nauru | | ├ÿ |
-| New Caledonia | | ├ÿ |
-| New Zealand | Γ£ô | Γ£ô |
-| Niue | | ├ÿ |
-| Norfolk Island | | ├ÿ |
-| Palau | | ├ÿ |
-| Papua New Guinea | | ├ÿ |
-| Pitcairn | | ├ÿ |
-| Samoa | | ├ÿ |
-| Solomon Islands | | ├ÿ|
-| Tokelau | | ├ÿ |
-| Tonga | | ├ÿ |
-| Tuvalu | | ├ÿ |
-| Vanuatu | | ├ÿ |
-| Wallis and Futuna | | ├ÿ |
-
+| Country/Region | Coverage |
+|-|:--:|
+| Anguilla | Γ£ô |
+| Antigua & Barbuda | Γ£ô |
+| Argentina | Γ£ô |
+| Aruba | Γ£ô |
+| Bahamas | Γ£ô |
+| Barbados | Γ£ô |
+| Bermuda | Γ£ô |
+| Bonaire, St Eustatius & Saba | Γ£ô |
+| Brazil | Γ£ô |
+| British Virgin Islands | Γ£ô |
+| Canada | Γ£ô |
+| Cayman Islands | Γ£ô |
+| Chile | Γ£ô |
+| Clipperton Island | Γ£ô |
+| Colombia | Γ£ô |
+| Curaçao | ✓ |
+| Dominica | Γ£ô |
+| Falkland Islands | Γ£ô |
+| Grenada | Γ£ô |
+| Guadeloupe | Γ£ô |
+| Haiti | Γ£ô |
+| Jamaica | Γ£ô |
+| Martinique | Γ£ô |
+| Mexico | Γ£ô |
+| Montserrat | Γ£ô |
+| Peru | Γ£ô |
+| Puerto Rico | Γ£ô |
+| Saint Barthélemy | ✓ |
+| Saint Kitts & Nevis | Γ£ô |
+| Saint Lucia | Γ£ô |
+| Saint Martin | Γ£ô |
+| Saint Pierre & Miquelon | Γ£ô |
+| Saint Vincent & Grenadines | Γ£ô |
+| Sint Maarten | Γ£ô |
+| South Georgia & Sandwich Islands | Γ£ô |
+| Trinidad & Tobago | Γ£ô |
+| Turks & Caicos Islands | Γ£ô |
+| U.S. Virgin Islands | Γ£ô |
+| United States | Γ£ô |
+| Uruguay | Γ£ô |
+| Venezuela | Γ£ô |
+
+## Asia Pacific
+
+| Country/Region | Coverage |
+|-|:--:|
+| Australia | Γ£ô |
+| Brunei | Γ£ô |
+| Cambodia | Γ£ô |
+| Guam | Γ£ô |
+| Hong Kong | Γ£ô |
+| India | Γ£ô |
+| Indonesia | Γ£ô |
+| Laos | Γ£ô |
+| Macao | Γ£ô |
+| Malaysia | Γ£ô |
+| Myanmar | Γ£ô |
+| New Zealand | Γ£ô |
+| Philippines | Γ£ô |
+| Singapore | Γ£ô |
+| South Korea | Γùæ |
+| Taiwan | Γ£ô |
+| Thailand | Γ£ô |
+| Vietnam | Γ£ô |
## Europe
-| Country/Region | Raster Tiles Unified | Vector Tiles Unified |
-| | :: | :: |
-| Albania | Γ£ô | Γ£ô |
-| Andorra | Γ£ô | Γ£ô |
-| Armenia | Γ£ô | ├ÿ |
-| Austria | Γ£ô | Γ£ô |
-| Azerbaijan | Γ£ô | ├ÿ |
-| Belarus | ├ÿ | Γ£ô |
-| Belgium | Γ£ô | Γ£ô |
-| Bosnia-Herzegovina | Γ£ô | Γ£ô |
-| Bulgaria | Γ£ô | Γ£ô |
-| Croatia | Γ£ô | Γ£ô |
-| Cyprus | Γ£ô | Γ£ô |
-| Czech Republic | Γ£ô | Γ£ô |
-| Denmark | Γ£ô | Γ£ô |
-| Estonia | Γ£ô | Γ£ô |
-| Faroe Islands | Γ£ô | ├ÿ |
-| Finland | Γ£ô | Γ£ô |
-| France | Γ£ô | Γ£ô |
-| Georgia | Γ£ô | ├ÿ |
-| Germany | Γ£ô | Γ£ô |
-| Gibraltar | Γ£ô | Γ£ô |
-| Greece | Γ£ô | Γ£ô |
-| Guernsey | Γ£ô | Γ£ô |
-| Hungary | Γ£ô | Γ£ô |
-| Iceland | Γ£ô | Γ£ô |
-| Ireland | Γ£ô | Γ£ô |
-| Isle of Man | Γ£ô | Γ£ô |
-| Italy | Γ£ô | Γ£ô |
-| Jan Mayen | Γ£ô | Γ£ô |
-| Jersey | Γ£ô | Γ£ô |
-| Latvia | Γ£ô | Γ£ô |
-| Liechtenstein | Γ£ô | Γ£ô |
-| Lithuania | Γ£ô | Γ£ô |
-| Luxembourg | Γ£ô | Γ£ô |
-| North Macedonia | Γ£ô | Γ£ô |
-| Malta | Γ£ô | Γ£ô |
-| Moldova | Γ£ô | Γ£ô |
-| Monaco | Γ£ô | Γ£ô |
-| Montenegro | Γ£ô | Γ£ô |
-| Netherlands | Γ£ô | Γ£ô |
-| Norway | Γ£ô | Γ£ô |
-| Poland | Γ£ô | Γ£ô |
-| Portugal | Γ£ô | Γ£ô |
-| Romania | Γ£ô | Γ£ô |
-| Russian Federation | Γ£ô | Γ£ô |
-| San Marino | Γ£ô | Γ£ô |
-| Serbia | Γ£ô | Γ£ô |
-| Slovakia | Γ£ô | Γ£ô |
-| Slovenia | Γ£ô | Γ£ô |
-| Southern Kurils | Γ£ô | Γ£ô |
-| Spain | Γ£ô | Γ£ô |
-| Svalbard | Γ£ô | Γ£ô |
-| Sweden | Γ£ô | Γ£ô |
-| Switzerland | Γ£ô | Γ£ô |
-| Turkey | Γ£ô | Γ£ô |
-| Ukraine | Γ£ô | Γ£ô |
-| United Kingdom | Γ£ô | Γ£ô |
-| Vatican City | Γ£ô | Γ£ô |
-
-## Next steps
-
-For more information about Azure Maps rendering, see [Zoom levels and tile grid](zoom-levels-and-tile-grid.md).
-
-Learn about the [coverage areas for the Maps routing service](routing-coverage.md).
+| Country/Region | Coverage |
+|--|:--:|
+| Albania | Γ£ô |
+| Andorra | Γ£ô |
+| Austria | Γ£ô |
+| Belarus | Γ£ô |
+| Belgium | Γ£ô |
+| Bosnia-Herzegovina | Γ£ô |
+| Bulgaria | Γ£ô |
+| Croatia | Γ£ô |
+| Cyprus | Γ£ô |
+| Czech Republic | Γ£ô |
+| Denmark | Γ£ô |
+| Estonia | Γ£ô |
+| Finland | Γ£ô |
+| France | Γ£ô |
+| Germany | Γ£ô |
+| Gibraltar | Γ£ô |
+| Greece | Γ£ô |
+| Hungary | Γ£ô |
+| Iceland | Γ£ô |
+| Ireland | Γ£ô |
+| Italy | Γ£ô |
+| Latvia | Γ£ô |
+| Liechtenstein | Γ£ô |
+| Lithuania | Γ£ô |
+| Luxembourg | Γ£ô |
+| Macedonia | Γ£ô |
+| Malta | Γ£ô |
+| Moldova | Γ£ô |
+| Monaco | Γ£ô |
+| Montenegro | Γ£ô |
+| Netherlands | Γ£ô |
+| Norway | Γ£ô |
+| Poland | Γ£ô |
+| Portugal | Γ£ô |
+| Romania | Γ£ô |
+| Russian Federation | Γ£ô |
+| San Marino | Γ£ô |
+| Serbia | Γ£ô |
+| Slovakia | Γ£ô |
+| Slovenia | Γ£ô |
+| Spain | Γ£ô |
+| Sweden | Γ£ô |
+| Switzerland | Γ£ô |
+| Turkey | Γ£ô |
+| Ukraine | Γ£ô |
+| United Kingdom | Γ£ô |
+| Vatican City | Γ£ô |
+
+## Middle East & Africa
+
+| Country/Region | Coverage |
+||:--:|
+| Algeria | Γ£ô |
+| Angola | Γ£ô |
+| Bahrain | Γ£ô |
+| Benin | Γ£ô |
+| Botswana | Γ£ô |
+| Burkina Faso | Γ£ô |
+| Burundi | Γ£ô |
+| Cameroon | Γ£ô |
+| Congo | Γ£ô |
+| Democratic Republic of Congo | Γ£ô |
+| Egypt | Γ£ô |
+| Gabon | Γ£ô |
+| Ghana | Γ£ô |
+| Iraq | Γ£ô |
+| Jordan | Γ£ô |
+| Kenya | Γ£ô |
+| Kuwait | Γ£ô |
+| Lebanon | Γ£ô |
+| Lesotho | Γ£ô |
+| Malawi | Γ£ô |
+| Mali | Γ£ô |
+| Mauritania | Γ£ô |
+| Mauritius | Γ£ô |
+| Mayotte | Γ£ô |
+| Morocco | Γ£ô |
+| Mozambique | Γ£ô |
+| Namibia | Γ£ô |
+| Niger | Γ£ô |
+| Nigeria | Γ£ô |
+| Oman | Γ£ô |
+| Qatar | Γ£ô |
+| Reunion | Γ£ô |
+| Rwanda | Γ£ô |
+| Saudi Arabia | Γ£ô |
+| Senegal | Γ£ô |
+| South Africa | Γ£ô |
+| Swaziland | Γ£ô |
+| Tanzania | Γ£ô |
+| Togo | Γ£ô |
+| Tunisia | Γ£ô |
+| Uganda | Γ£ô |
+| United Arab Emirates | Γ£ô |
+| Yemen | Γ£ô |
+| Zambia | Γ£ô |
+| Zimbabwe | Γ£ô |
+
+## Additional information
+
+- See [Zoom levels and tile grid](zoom-levels-and-tile-grid.md) for more information about Azure Maps rendering.
+
+- For routing coverage, see [Azure Maps routing service](routing-coverage.md).
azure-maps Traffic Coverage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/traffic-coverage.md
Title: Traffic coverage | Microsoft Azure Maps
description: Learn about traffic coverage in Azure Maps. See whether information on traffic flow and incidents is available in various regions throughout the world. Previously updated : 09/22/2018 Last updated : 01/13/2022
Azure Maps provides rich traffic information in the form of traffic **flow** and **incidents**. This data can be visualized on maps or used to generate smarter routes that factor in real driving conditions.
-However, Maps doesn't have the same level of information and accuracy for all countries or regions. The following table provides information about what kind of traffic information you can request from each country or region:
+The following tables provide information about what kind of traffic information you can request from each country or region. If a market is missing in the following tables, it is not currently supported.
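To make the flow data concrete, here is a hedged PowerShell sketch that requests flow information for the road segment nearest a point. The endpoint, `api-version`, and parameter names are assumptions; check the [Traffic](/rest/api/maps/traffic) REST reference for the exact contract.

```powershell
# Hypothetical sketch: query real-time flow for the road segment nearest a point.
# Endpoint and parameter names are assumptions; see the Traffic REST reference.
$key   = '<your-azure-maps-subscription-key>'
$point = '52.41072,4.84239'   # latitude,longitude
$url   = "https://atlas.microsoft.com/traffic/flow/segment/json?api-version=1.0&style=absolute&zoom=10&query=$point&subscription-key=$key"

# The response is expected to include a flowSegmentData object with current and free-flow speeds.
(Invoke-RestMethod -Uri $url).flowSegmentData
```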
## Americas
-|Country/Region |Incidents |Flow |
-||::|::|
-|Argentina |Γ£ô |Γ£ô |
-|Brazil |Γ£ô |Γ£ô |
-|Canada |Γ£ô |Γ£ô |
-|Chile |Γ£ô |Γ£ô |
-|Colombia |Γ£ô |Γ£ô |
-|Mexico |Γ£ô |Γ£ô |
-|Peru |Γ£ô |Γ£ô |
-|United States |Γ£ô |Γ£ô |
-|+Puerto Rico |Γ£ô |Γ£ô |
-|Uruguay |Γ£ô |Γ£ô |
-
+| Country/Region | Incidents | Flow |
+|-|::|:-:|
+| Argentina | Γ£ô | Γ£ô |
+| Brazil | Γ£ô | Γ£ô |
+| Canada | Γ£ô | Γ£ô |
+| Chile | Γ£ô | Γ£ô |
+| Colombia | Γ£ô | Γ£ô |
+| Guadeloupe | Γ£ô | Γ£ô |
+| Martinique | Γ£ô | Γ£ô |
+| Mexico | Γ£ô | Γ£ô |
+| Peru | Γ£ô | Γ£ô |
+| United States | Γ£ô | Γ£ô |
+| Uruguay | Γ£ô | Γ£ô |
## Asia Pacific
-|Country/Region |Incidents |Flow |
-||::|::|
-|Australia |Γ£ô |Γ£ô |
-|Brunei |Γ£ô |Γ£ô |
-|Hong Kong SAR |Γ£ô |Γ£ô |
-|India |Γ£ô |Γ£ô |
-|Indonesia |Γ£ô |Γ£ô |
-|Kazakhstan |Γ£ô |Γ£ô |
-|Macao SAR |Γ£ô |Γ£ô |
-|Malaysia |Γ£ô |Γ£ô |
-|New Zealand |Γ£ô |Γ£ô |
-|Philippines |Γ£ô |Γ£ô |
-|Singapore |Γ£ô |Γ£ô |
-|Taiwan |Γ£ô |Γ£ô |
-|Thailand |Γ£ô |Γ£ô |
-|Vietnam |Γ£ô |Γ£ô |
-
+| Country/Region | Incidents | Flow |
+|-|::|:-:|
+| Australia | Γ£ô | Γ£ô |
+| Brunei | Γ£ô | Γ£ô |
+| Hong Kong | Γ£ô | Γ£ô |
+| India | Γ£ô | Γ£ô |
+| Indonesia | Γ£ô | Γ£ô |
+| Kazakhstan | Γ£ô | Γ£ô |
+| Macao | Γ£ô | Γ£ô |
+| Malaysia | Γ£ô | Γ£ô |
+| New Zealand | Γ£ô | Γ£ô |
+| Philippines | Γ£ô | Γ£ô |
+| Singapore | Γ£ô | Γ£ô |
+| Taiwan | Γ£ô | Γ£ô |
+| Thailand | Γ£ô | Γ£ô |
+| Vietnam | Γ£ô | Γ£ô |
## Europe
-|Country/Region |Incidents |Flow |
-||::|::|
-|Andorra |Γ£ô |Γ£ô |
-|Austria |Γ£ô |Γ£ô |
-|Belarus |Γ£ô |Γ£ô |
-|Belgium |Γ£ô |Γ£ô |
-|Bosnia and Herzegovina |Γ£ô |Γ£ô |
-|Bulgaria |Γ£ô |Γ£ô |
-|Croatia |Γ£ô |Γ£ô |
-|Czech Republic |Γ£ô |Γ£ô |
-|Denmark |Γ£ô |Γ£ô |
-|Estonia | | Γ£ô |
-|Finland |Γ£ô |Γ£ô |
-|+Åland Islands |✓ |✓ |
-|France |Γ£ô |Γ£ô |
-|Monaco |Γ£ô |Γ£ô |
-|Germany |Γ£ô |Γ£ô |
-|Greece |Γ£ô |Γ£ô |
-|Hungary |Γ£ô |Γ£ô |
-|Iceland |Γ£ô |Γ£ô |
-|Ireland |Γ£ô |Γ£ô |
-|Italy |Γ£ô |Γ£ô |
-|Kazakhstan |Γ£ô |Γ£ô |
-|Latvia |Γ£ô |Γ£ô |
-|Lesotho |Γ£ô |Γ£ô |
-|Liechtenstein |Γ£ô |Γ£ô |
-|Lithuania |Γ£ô |Γ£ô |
-|Luxembourg |Γ£ô |Γ£ô |
-|Malta |Γ£ô |Γ£ô |
-|Monaco |Γ£ô |Γ£ô |
-|Netherlands |Γ£ô |Γ£ô |
-|Norway |Γ£ô |Γ£ô |
-|Poland |Γ£ô |Γ£ô |
-|Portugal |Γ£ô |Γ£ô |
-|+Azores and Madeira |Γ£ô |Γ£ô |
-|Romania |Γ£ô |Γ£ô |
-|Russian Federation |Γ£ô |Γ£ô |
-|San Marino |Γ£ô |Γ£ô |
-|Serbia |Γ£ô |Γ£ô |
-|Slovakia |Γ£ô |Γ£ô |
-|Slovenia |Γ£ô |Γ£ô |
-|Spain |Γ£ô |Γ£ô |
-|+Andorra |Γ£ô |Γ£ô |
-|+Balearic Islands |Γ£ô |Γ£ô |
-|+Canary Islands |Γ£ô |Γ£ô |
-|Sweden |Γ£ô |Γ£ô |
-|Switzerland |Γ£ô |Γ£ô |
-|Turkey |Γ£ô |Γ£ô |
-|Ukraine |Γ£ô |Γ£ô |
-|United Kingdom |Γ£ô |Γ£ô |
-|+Gibraltar |Γ£ô |Γ£ô |
-|+Guernsey & Jersey |Γ£ô |Γ£ô |
-|+Isle of Man |Γ£ô |Γ£ô |
-|Vatican City |Γ£ô |Γ£ô |
-
+| Country/Region | Incidents | Flow |
+||::|:-:|
+| Belarus | Γ£ô | Γ£ô |
+| Belgium | Γ£ô | Γ£ô |
+| Bosnia and Herzegovina | Γ£ô | Γ£ô |
+| Bulgaria | Γ£ô | Γ£ô |
+| Croatia | Γ£ô | Γ£ô |
+| Cyprus | Γ£ô | Γ£ô |
+| Czech Republic | Γ£ô | Γ£ô |
+| Denmark | Γ£ô | Γ£ô |
+| Estonia | Γ£ô | Γ£ô |
+| Finland | Γ£ô | Γ£ô |
+| France | Γ£ô | Γ£ô |
+| Germany | Γ£ô | Γ£ô |
+| Gibraltar | Γ£ô | Γ£ô |
+| Greece | Γ£ô | Γ£ô |
+| Hungary | Γ£ô | Γ£ô |
+| Iceland | Γ£ô | Γ£ô |
+| Ireland | Γ£ô | Γ£ô |
+| Italy | Γ£ô | Γ£ô |
+| Latvia | Γ£ô | Γ£ô |
+| Liechtenstein | Γ£ô | Γ£ô |
+| Lithuania | Γ£ô | Γ£ô |
+| Luxembourg | Γ£ô | Γ£ô |
+| Malta | Γ£ô | Γ£ô |
+| Monaco | Γ£ô | Γ£ô |
+| Netherlands | Γ£ô | Γ£ô |
+| Norway | Γ£ô | Γ£ô |
+| Poland | Γ£ô | Γ£ô |
+| Portugal | Γ£ô | Γ£ô |
+| Romania | Γ£ô | Γ£ô |
+| Russian Federation | Γ£ô | Γ£ô |
+| San Marino | Γ£ô | Γ£ô |
+| Serbia | Γ£ô | Γ£ô |
+| Slovakia | Γ£ô | Γ£ô |
+| Slovenia | Γ£ô | Γ£ô |
+| Spain | Γ£ô | Γ£ô |
+| Sweden | Γ£ô | Γ£ô |
+| Switzerland | Γ£ô | Γ£ô |
+| Turkey | Γ£ô | Γ£ô |
+| Ukraine | Γ£ô | Γ£ô |
+| United Kingdom | Γ£ô | Γ£ô |
## Middle East and Africa
-|Country/Region |Incidents |Flow |
-||::|::|
-|Bahrain |Γ£ô |Γ£ô |
-|Egypt |Γ£ô |Γ£ô |
-|Israel |Γ£ô |Γ£ô |
-|Kenya |Γ£ô |Γ£ô |
-|Kuwait |Γ£ô |Γ£ô |
-|Morocco |Γ£ô |Γ£ô |
-|Mozambique |Γ£ô |Γ£ô |
-|Nigeria |Γ£ô |Γ£ô |
-|Oman |Γ£ô |Γ£ô |
-|Qatar |Γ£ô |Γ£ô |
-|Saudi Arabia |Γ£ô |Γ£ô |
-|South Africa |Γ£ô |Γ£ô |
-|United Arab Emirates |Γ£ô |Γ£ô |
-
-## Next steps
-
-For more information about Azure Maps traffic data, see the [Traffic](/rest/api/maps/traffic) reference pages.
+| Country/Region | Incidents | Flow |
+|-|::|:-:|
+| Bahrain | Γ£ô | Γ£ô |
+| Egypt | Γ£ô | Γ£ô |
+| Israel | Γ£ô | Γ£ô |
+| Kenya | Γ£ô | Γ£ô |
+| Kuwait | Γ£ô | Γ£ô |
+| Lesotho | Γ£ô | Γ£ô |
+| Morocco | Γ£ô | Γ£ô |
+| Mozambique | Γ£ô | Γ£ô |
+| Nigeria | Γ£ô | Γ£ô |
+| Oman | Γ£ô | Γ£ô |
+| Qatar | Γ£ô | Γ£ô |
+| Reunion | Γ£ô | Γ£ô |
+| Saudi Arabia | Γ£ô | Γ£ô |
+| South Africa | Γ£ô | Γ£ô |
+| United Arab Emirates | Γ£ô | Γ£ô |
+
+## Additional information
+
+For more information about incorporating Azure Maps traffic data into your mapping applications, see the [Traffic](/rest/api/maps/traffic) REST API reference.
azure-monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/agent-manage.md
The following steps demonstrate how to reconfigure the Linux agent if you decide
The agent service does not need to be restarted in order for the changes to take effect. ## Update proxy settings
-To configure the agent to communicate to the service through a proxy server or [Log Analytics gateway](./gateway.md) after deployment, use one of the following methods to complete this task.
+The Log Analytics agent (MMA) does not use the system proxy settings. You must therefore supply proxy settings when installing MMA; they are stored in the MMA configuration (registry) on the VM. To configure the agent to communicate to the service through a proxy server or [Log Analytics gateway](./gateway.md) after deployment, use one of the following methods.
### Windows agent
To configure the agent to communicate to the service through a proxy server or [
4. Click **Use a proxy server** and provide the URL and port number of the proxy server or gateway. If your proxy server or Log Analytics gateway requires authentication, type the username and password to authenticate and then click **OK**. + #### Update settings using PowerShell Copy the following sample PowerShell code, update it with information specific to your environment, and save it with a PS1 file name extension. Run the script on each computer that connects directly to the Log Analytics workspace in Azure Monitor.
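The sample script itself isn't reproduced in this digest. As a rough sketch of what such a script can look like, the snippet below assumes the Windows agent exposes the `AgentConfigManager.MgmtSvcCfg` COM object with a `SetProxyInfo` method; verify the object and method names against the sample linked from the article before using it.

```powershell
# Hypothetical sketch: set the proxy used by the Windows Log Analytics agent after installation.
# Assumes the 'AgentConfigManager.MgmtSvcCfg' COM interface and its SetProxyInfo method.
$proxyUrl = 'https://proxy.contoso.com:30443'
$cred     = Get-Credential   # omit the credential arguments below if your proxy allows anonymous access

$mma = New-Object -ComObject 'AgentConfigManager.MgmtSvcCfg'
$mma.SetProxyInfo($proxyUrl, $cred.UserName, $cred.GetNetworkCredential().Password)
```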
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/azure-monitor-agent-manage.md
We strongly recommend updating to the generally available versions listed as foll
|:|:|:|:|:| | June 2021 | General availability announced. <ul><li>All features except metrics destination now generally available</li><li>Production quality, security and compliance</li><li>Availability in all public regions</li><li>Performance and scale improvements for higher EPS</li></ul> [Learn more](https://azure.microsoft.com/updates/azure-monitor-agent-and-data-collection-rules-now-generally-available/) | 1.0.12.0 | 1.9.1.0 | | July 2021 | <ul><li>Support for direct proxies</li><li>Support for Log Analytics gateway</li></ul> [Learn more](https://azure.microsoft.com/updates/general-availability-azure-monitor-agent-and-data-collection-rules-now-support-direct-proxies-and-log-analytics-gateway/) | 1.1.1.0 | 1.10.5.0 |
-| August 2021 | Fixed issue allowing Azure Monitor Metrics as the only destination | 1.1.2.0 | 1.10.9.0<sup>1</sup> |
-| September 2021 | <ul><li>Fixed issue causing data loss on restarting the agent</li><li>Addressed regression introduced in 1.1.3.1<sup>2</sup> for Arc Windows servers</li></ul> | 1.1.3.2 | 1.12.2.0 <sup>2</sup> |
-| December 2021 | Fixed issues impacting Linux Arc-enabled servers | N/A | 1.14.7.0<sup>3</sup> |
+| August 2021 | Fixed issue allowing Azure Monitor Metrics as the only destination | 1.1.2.0 | 1.10.9.0<sup>Hotfix</sup> |
+| September 2021 | <ul><li>Fixed issue causing data loss on restarting the agent</li><li>Fixed issue for Arc Windows servers</li></ul> | 1.1.3.2<sup>Hotfix</sup> | 1.12.2.0 <sup>1</sup> |
+| December 2021 | <ul><li>Fixed issues impacting Linux Arc-enabled servers</li><li>'Heartbeat' table > 'Category' column reports "Azure Monitor Agent" in Log Analytics for Windows</li></ul> | 1.1.4.0 | 1.14.7.0<sup>2</sup> |
+| January 2022 | <ul><li>Syslog RFC compliance for Linux</li><li>Fixed issue for Linux perf counters not flowing on restart</li></ul> | Not available yet | 1.15.2.0<sup>Hotfix</sup> |
-<sup>1</sup> Do not use AMA Linux version 1.10.7.0
-<sup>2</sup> Known regression where it's not working on Arc-enabled servers
-<sup>3</sup> Bug identified wherein Linux performance counters data stops flowing on restarting/rebooting the machine(s). Fix underway and will be available in next monthly version update.
+<sup>Hotfix</sup> Do not use AMA Linux versions v1.10.7 and v1.15.1, or AMA Windows v1.1.3.1. Use the hotfixed versions listed above instead.
+<sup>1</sup> Known issue: No data collected from Linux Arc-enabled servers
+<sup>2</sup> Known issue: Linux performance counters data stops flowing on restarting/rebooting the machine(s)
## Prerequisites
To uninstall the Azure Monitor agent using the Azure portal, navigate to your vi
### Update To perform a **one time update** of the agent, you must first uninstall the existing agent version and then install the new version as described above. -
+We **recommend** enabling automatic update of the agent through the [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) feature. Navigate to your virtual machine or scale set, select the **Extensions** tab, and click **AzureMonitorWindowsAgent** or **AzureMonitorLinuxAgent**. In the dialog that opens, click **Enable automatic upgrade**.
## Using Resource Manager template
Remove-AzVMExtension -Name AMALinux -ResourceGroupName <resource-group-name> -VM
### Update on Azure virtual machines
-To perform a **one time update** of the agent, you must first uninstall the existing agent version and then install the new version as described above.
+To perform a **one time update** of the agent, you must first uninstall the existing agent version and then install the new version as described above.
+We **recommend** enabling automatic update of the agent through the [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) feature, using the following PowerShell commands.
+# [Windows](#tab/PowerShellWindows)
+```powershell
+Set-AzVMExtension -ExtensionName AMAWindows -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Publisher Microsoft.Azure.Monitor -ExtensionType AzureMonitorWindowsAgent -TypeHandlerVersion <version-number> -Location <location> -EnableAutomaticUpgrade $true
+```
+# [Linux](#tab/PowerShellLinux)
+```powershell
+Set-AzVMExtension -ExtensionName AMALinux -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Publisher Microsoft.Azure.Monitor -ExtensionType AzureMonitorLinuxAgent -TypeHandlerVersion <version-number> -Location <location> -EnableAutomaticUpgrade $true
+```
+
+
### Install on Azure Arc-enabled servers Use the following PowerShell commands to install the Azure Monitor agent on Azure Arc-enabled servers.
az vm extension delete --resource-group <resource-group-name> --vm-name <virtual
### Update on Azure virtual machines
-To perform a **one time update** of the agent, you must first uninstall the existing agent version and then install the new version as described above.
+To perform a **one time update** of the agent, you must first uninstall the existing agent version and then install the new version as described above.
+We **recommend** enabling automatic update of the agent through the [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) feature, using the following CLI commands.
+# [Windows](#tab/CLIWindows)
+```azurecli
+az vm extension set --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --vm-name <virtual-machine-name> --resource-group <resource-group-name> --enable-auto-upgrade true
+```
+# [Linux](#tab/CLILinux)
+```azurecli
+az vm extension set --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --vm-name <virtual-machine-name> --resource-group <resource-group-name> --enable-auto-upgrade true
+```
++ ### Install on Azure Arc-enabled servers Use the following CLI commands to install the Azure Monitor agent on Azure Arc-enabled servers.
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/azure-monitor-agent-overview.md
The Azure Monitor agent replaces the following legacy agents that are currently
In addition to consolidating this functionality into a single agent, the Azure Monitor agent provides the following benefits over the existing agents: - **Scope of monitoring:** Centrally configure collection for different sets of data from different sets of VMs.-- **Linux multi-homing:** Send data from Linux VMs to multiple workspaces.
+- **Multi-homing:** Send data from Windows and Linux VMs to multiple Log Analytics workspaces ("multi-homing") and/or other [supported destinations](#data-sources-and-destinations).
- **Windows event filtering:** Use XPath queries to filter which Windows events are collected (see the sketch after this list). - **Improved extension management:** The Azure Monitor agent uses a new method of handling extensibility that's more transparent and controllable than management packs and Linux plug-ins in the current Log Analytics agents.
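As a hedged illustration of the XPath filtering mentioned above, the sketch below uses `Get-WinEvent` to try an expression locally before placing it in a data collection rule; the exact XPath you need depends on the events you want to collect.

```powershell
# Sketch: validate an XPath filter against the local Application log before using it in a data collection rule.
# Level=1, 2, and 3 correspond to Critical, Error, and Warning events.
$xpath = '*[System[(Level=1 or Level=2 or Level=3)]]'
Get-WinEvent -LogName 'Application' -FilterXPath $xpath -MaxEvents 10 |
    Select-Object TimeCreated, Id, LevelDisplayName, ProviderName
```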
Azure Monitor agent is available in all public regions that support Log Analytic
## Supported operating systems For a list of the Windows and Linux operating system versions that are currently supported by the Azure Monitor agent, see [Supported operating systems](agents-overview.md#supported-operating-systems).
+## Data sources and destinations
+The following table lists the types of data you can currently collect with the Azure Monitor agent by using data collection rules, and where you can send that data. For a list of insights, solutions, and other features that use the Azure Monitor agent to collect other kinds of data, see [What is monitored by Azure Monitor?](../monitor-reference.md).
+
+The Azure Monitor agent sends data to Azure Monitor Metrics (preview) or a Log Analytics workspace supporting Azure Monitor Logs.
+
+| Data source | Destinations | Description |
+|:|:|:|
+| Performance | Azure Monitor Metrics (preview)<sup>1</sup> - Insights.virtualmachine namespace<br>Log Analytics workspace - [Perf](/azure/azure-monitor/reference/tables/perf) table | Numerical values measuring performance of different aspects of operating system and workloads |
+| Windows event logs | Log Analytics workspace - [Event](/azure/azure-monitor/reference/tables/Event) table | Information sent to the Windows event logging system |
+| Syslog | Log Analytics workspace - [Syslog](/azure/azure-monitor/reference/tables/syslog)<sup>2</sup> table | Information sent to the Linux event logging system |
+
+<sup>1</sup> [Click here](../essentials/metrics-custom-overview.md#quotas-and-limits) to review other limitations of using Azure Monitor Metrics. On Linux, using Azure Monitor Metrics as the only destination is supported in v1.10.9.0 or higher.
+<sup>2</sup> Azure Monitor Linux Agent v1.15.2 or higher supports syslog RFC formats including Cisco Meraki, Cisco ASA, Cisco FTD, Sophos XG, Juniper Networks, Corelight Zeek, CipherTrust, NXLog, McAfee and CEF (Common Event Format).
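Once data is flowing to the tables above, a quick way to confirm it is to query the workspace. The following is a minimal sketch using the `Az.OperationalInsights` module; the table, time range, and aggregation are only examples.

```powershell
# Sketch: verify agent data is arriving by querying a destination table.
# <workspace-id> is the Log Analytics workspace (customer) ID.
$workspaceId = '<workspace-id>'
$query = 'Perf | where TimeGenerated > ago(1h) | summarize avg(CounterValue) by Computer, CounterName'

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query
$result.Results | Format-Table
```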
+ ## Supported services and features The following table shows the current support for the Azure Monitor agent with other Azure services.
As such, ensure you're not collecting the same data from both agents. If you are
## Costs There's no cost for the Azure Monitor agent, but you might incur charges for the data ingested. For details on Log Analytics data collection and retention and for customer metrics, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
-## Data sources and destinations
-The following table lists the types of data you can currently collect with the Azure Monitor agent by using data collection rules and where you can send that data. For a list of insights, solutions, and other solutions that use the Azure Monitor agent to collect other kinds of data, see [What is monitored by Azure Monitor?](../monitor-reference.md).
-
-The Azure Monitor agent sends data to Azure Monitor Metrics (preview) or a Log Analytics workspace supporting Azure Monitor Logs.
-
-| Data source | Destinations | Description |
-|:|:|:|
-| Performance | Azure Monitor Metrics (preview)<sup>1</sup><br>Log Analytics workspace | Numerical values measuring performance of different aspects of operating system and workloads |
-| Windows event logs | Log Analytics workspace | Information sent to the Windows event logging system |
-| Syslog | Log Analytics workspace | Information sent to the Linux event logging system |
-
-<sup>1</sup> [Click here](../essentials/metrics-custom-overview.md#quotas-and-limits) to review other limitations of using Azure Monitor Metrics. On Linux, using Azure Monitor Metrics as the only destination is supported in v.1.10.9.0 or higher.
## Security The Azure Monitor agent doesn't require any keys but instead requires a [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#system-assigned-managed-identity). You must have a system-assigned managed identity enabled on each virtual machine before you deploy the agent.
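A minimal sketch of enabling that identity on an existing VM with Az PowerShell, using placeholder resource names, might look like this:

```powershell
# Sketch: enable a system-assigned managed identity on an existing VM before deploying the agent.
$rg     = '<resource-group-name>'
$vmName = '<virtual-machine-name>'

$vm = Get-AzVM -ResourceGroupName $rg -Name $vmName
Update-AzVM -ResourceGroupName $rg -VM $vm -IdentityType SystemAssigned
```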
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
To specify other logs and performance counters from the [currently supported dat
[![Data source custom](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-custom-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-custom-updated.png#lightbox)
-On the **Destination** tab, add one or more destinations for the data source. Windows event and Syslog data sources can only send to Azure Monitor Logs. Performance counters can send to both Azure Monitor Metrics and Azure Monitor Logs.
+On the **Destination** tab, add one or more destinations for the data source. You can select multiple destinations of the same or different types, for instance multiple Log Analytics workspaces (that is, "multi-homing"). Windows event and Syslog data sources can only send to Azure Monitor Logs. Performance counters can send to both Azure Monitor Metrics and Azure Monitor Logs.
[![Destination](media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png#lightbox)
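If you prefer to script rule creation instead of using the portal, a hedged sketch with Az PowerShell follows. It assumes the `New-AzDataCollectionRule` cmdlet and its `-RuleFile` parameter are available in your installed Az.Monitor version, and that `dcr-multi-destination.json` is a hypothetical rule definition you have authored.

```powershell
# Hypothetical sketch: create a data collection rule from a JSON definition file.
# Assumes the New-AzDataCollectionRule cmdlet is available in your Az.Monitor module version.
New-AzDataCollectionRule -Location 'eastus' `
    -ResourceGroupName '<resource-group-name>' `
    -RuleName 'myDataCollectionRule' `
    -RuleFile 'C:\samples\dcr-multi-destination.json'
```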
azure-monitor Log Analytics Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/log-analytics-agent.md
If you plan to use the Azure Automation Hybrid Runbook Worker to connect to and
### Proxy configuration
-The Windows and Linux agent supports communicating either through a proxy server or Log Analytics gateway to Azure Monitor using the HTTPS protocol. Both anonymous and basic authentication (username/password) are supported. For the Windows agent connected directly to the service, the proxy configuration is specified during installation or [after deployment](../agents/agent-manage.md#update-proxy-settings) from Control Panel or with PowerShell.
+The Windows and Linux agent supports communicating either through a proxy server or Log Analytics gateway to Azure Monitor using the HTTPS protocol. Both anonymous and basic authentication (username/password) are supported. For the Windows agent connected directly to the service, the proxy configuration is specified during installation or [after deployment](../agents/agent-manage.md#update-proxy-settings) from Control Panel or with PowerShell. The Log Analytics agent (MMA) does not use the system proxy settings; you must supply proxy settings when installing MMA, and they are stored in the MMA configuration (registry) on the VM.
For the Linux agent, the proxy server is specified during installation or [after installation](../agents/agent-manage.md#update-proxy-settings) by modifying the proxy.conf configuration file. The Linux agent proxy configuration value has the following syntax:
azure-monitor Alerts Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-activity-log.md
For example:
```
-For more information about the activity log fields, see [Azure activity log event schema](https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Fazure-monitor%2Fplatform%2Factivity-log-schema&data=02%7C01%7CNoga.Lavi%40microsoft.com%7C90b7c2308c0647c0347908d7c9a2918d%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637199572373563632&sdata=6QXLswwZgUHFXCuF%2FgOSowLzA8iOALVgvL3GMVhkYJY%3D&reserved=0).
+For more information about the activity log fields, see [Azure activity log event schema](../essentials/activity-log-schema.md).
> [!NOTE] > It might take up to 5 minutes for the new activity log alert rule to become active.
azure-monitor Azure Web Apps Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/azure-web-apps-java.md
# Application Monitoring for Azure App Service and Java
-Monitoring of your Java-based web applications running on [Azure App Services](../../app-service/index.yml) does not require any modifications to the code. This article will walk you through enabling Azure Monitor application insights monitoring as well as provide preliminary guidance for automating the process for large-scale deployments.
+Monitoring of your Java web applications running on [Azure App Services](../../app-service/index.yml) does not require any modifications to the code. This article will walk you through enabling Azure Monitor Application Insights monitoring as well as provide preliminary guidance for automating the process for large-scale deployments.
## Enable Application Insights
-The recommended way to enable application monitoring for Java application running on Azure App Services is through Azure portal. Turning on application monitoring in Azure portal will automatically instrument your application with application insights.
+The recommended way to enable application monitoring for Java applications running on Azure App Services is through the Azure portal.
+Turning on application monitoring in the Azure portal will automatically instrument your application with Application Insights, and doesn't require any code changes.
+You can apply additional configurations, and then based on your specific scenario you can [add your own custom telemetry](./java-in-process-agent.md#modify-telemetry) if needed.
### Auto-instrumentation through Azure portal
-This method requires no code change or advanced configurations, making it the easiest way to get started with monitoring for Azure App Services. You can apply additional configurations, and then based on your specific scenario you can evaluate whether more advanced monitoring through [manual instrumentation](./java-2x-get-started.md?tabs=maven) is needed.
-
-### Enable backend monitoring
-
-You can turn on monitoring for your Java apps running in Azure App Service just with one click, no code change required. Application Insights for Java is integrated with App Service on Linux - both code-based and custom containers, and with App Service on Windows - code-based apps. It is important to know how your application will be monitored. The integration adds [Application Insights Java 3.x](./java-in-process-agent.md) and you will get the telemetry auto-collected.
+You can turn on monitoring for your Java apps running in Azure App Service with just one click, no code change required.
+Application Insights for Java is integrated with Azure App Service on Linux - both code-based and custom containers, and with App Service on Windows for code-based apps.
+The integration adds [Application Insights Java 3.x](./java-in-process-agent.md) and you will get the telemetry auto-collected.
1. **Select Application Insights** in the Azure control panel for your app service, then select **Enable**.
You can turn on monitoring for your Java apps running in Azure App Service just
> [!NOTE] > When you select **OK** to create the new resource you will be prompted to **Apply monitoring settings**. Selecting **Continue** will link your new Application Insights resource to your app service, doing so will also **trigger a restart of your app service**.
- :::image type="content"source="./media/azure-web-apps/change-resource.png" alt-text="Screenshot of Change your resource dropdown.":::
+ :::image type="content"source="./media/azure-web-apps/change-resource.png" alt-text="Screenshot of Change your resource dropdown.":::
-3. This step is not required. After specifying which resource to use, you can configure the Java agent. If you do not configure the Java agent, default configurations will apply.
+3. This last step is optional. After specifying which resource to use, you can configure the Java agent. If you do not configure the Java agent, default configurations will apply.
The full [set of configurations](./java-standalone-config.md) is available; you just need to paste a valid [JSON file](./java-standalone-config.md#an-example). **Exclude the connection string and any configurations that are in preview** - you will be able to add the items that are currently in preview as they become generally available.
To enable client-side monitoring for your Java application, you need to [manuall
## Automate monitoring
-### Application settings
- In order to enable telemetry collection with Application Insights, only the following Application settings need to be set:
-|App setting name | Definition | Value |
-|--|:|-:|
-|ApplicationInsightsAgent_EXTENSION_VERSION | Controls runtime monitoring | `~2` for Windows or `~3` for Linux |
-|XDT_MicrosoftApplicationInsights_Java | Flag to control that Java agent is included | 0 or 1 only applicable in Windows
-|APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL | Only use it if you need to debug the integration of Application Insights with App Service | debug
+
+### Application settings definitions
+| App setting name | Definition | Value |
+|||:|
+| ApplicationInsightsAgent_EXTENSION_VERSION | Main extension, which controls runtime monitoring. | `~2` in Windows or `~3` in Linux. |
+| XDT_MicrosoftApplicationInsights_Java | Flag to control if Java agent is included. | 0 or 1 (only applicable in Windows). |
> [!NOTE] > Profiler and snapshot debugger are not available for Java applications
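To apply these settings outside the portal, the following is a hedged Az PowerShell sketch. `APPLICATIONINSIGHTS_CONNECTION_STRING` is an assumed setting name for wiring up the target resource and is not part of the table above; confirm the exact settings your scenario needs.

```powershell
# Sketch: set the app settings from the table above with Az PowerShell.
# Set-AzWebApp -AppSettings replaces the whole collection, so copy the existing values first.
$rg  = '<resource-group-name>'
$app = '<app-name>'

$webApp   = Get-AzWebApp -ResourceGroupName $rg -Name $app
$settings = @{}
foreach ($pair in $webApp.SiteConfig.AppSettings) { $settings[$pair.Name] = $pair.Value }

$settings['ApplicationInsightsAgent_EXTENSION_VERSION'] = '~3'   # use ~2 on Windows
$settings['XDT_MicrosoftApplicationInsights_Java']      = '1'    # Windows only
$settings['APPLICATIONINSIGHTS_CONNECTION_STRING']      = '<connection-string>'  # assumed setting name

Set-AzWebApp -ResourceGroupName $rg -Name $app -AppSettings $settings
```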
azure-monitor Azure Web Apps Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/azure-web-apps-nodejs.md
# Application Monitoring for Azure App Service and Node.js
-Enabling monitoring on your Node.js based web applications running on [Azure App Services](../../app-service/index.yml) is now easier than ever. Whereas previously you needed to manually instrument your app, the latest extension/agent is now built into the App Service image by default. This article will walk you through enabling Azure Monitor application Insights monitoring as well as provide preliminary guidance for automating the process for large-scale deployments.
+Monitoring of your Node.js web applications running on [Azure App Services](../../app-service/index.yml) does not require any modifications to the code. This article will walk you through enabling Azure Monitor Application Insights monitoring as well as provide preliminary guidance for automating the process for large-scale deployments.
+
+## Enable Application Insights
+
+The easiest way to enable application monitoring for Node.js applications running on Azure App Services is through the Azure portal.
+Turning on application monitoring in the Azure portal will automatically instrument your application with Application Insights, and doesn't require any code changes.
> [!NOTE] > If both agent-based monitoring and manual SDK-based instrumentation is detected, only the manual instrumentation settings will be honored. This is to prevent duplicate data from being sent. To learn more about this, check out the [troubleshooting section](#troubleshooting) below.
-## Enable agent-based monitoring
+### Auto-instrumentation through Azure portal
-You can monitor your Node.js apps running in Azure App Service without any code change, just with a couple of simple steps. Application insights for Node.js applications is integrated with App Service on Linux - both code-based and custom containers, and with App Service on Windows for code-based apps. The integration is in public preview. The integration adds Node.js SDK, which is in GA.
+You can turn on monitoring for your Node.js apps running in Azure App Service with just one click, no code change required.
+Application Insights for Node.js is integrated with Azure App Service on Linux - both code-based and custom containers, and with App Service on Windows for code-based apps.
+The integration is in public preview; it adds the Node.js SDK, which is generally available.
1. **Select Application Insights** in the Azure control panel for your app service, then select **Enable**.
You can monitor your Node.js apps running in Azure App Service without any code
2. Choose to create a new resource, or select an existing Application Insights resource for this application.
- > [!NOTE]
- > When you click **OK** to create the new resource you will be prompted to **Apply monitoring settings**. Selecting **Continue** will link your new Application Insights resource to your app service, doing so will also **trigger a restart of your app service**.
-
+ > [!NOTE]
+ > When you select **OK** to create the new resource you will be prompted to **Apply monitoring settings**. Selecting **Continue** will link your new Application Insights resource to your app service, doing so will also **trigger a restart of your app service**.
+ :::image type="content" source="./media/azure-web-apps/change-resource.png" alt-text="Screenshot of Change your resource dropdown.":::
-
-3. Once you have specified which resource to use, you are all set to go.
+3. Once you have specified which resource to use, you are all set to go.
:::image type="content" source="./media/azure-web-apps-nodejs/app-service-node.png" alt-text="Screenshot of instrument your application.":::
To enable client-side monitoring for your Node.js application, you need to [manu
## Automate monitoring
-In order to enable telemetry collection with Application Insights, only the Application settings need to be set:
-
+In order to enable telemetry collection with Application Insights, only the following Application settings need to be set:
### Application settings definitions
-|App setting name | Definition | Value |
-|--|:|-:|
-|ApplicationInsightsAgent_EXTENSION_VERSION | Main extension, which controls runtime monitoring. | `~2` in Windows or `~3` in Linux |
-|XDT_MicrosoftApplicationInsights_NodeJS | Flag to control if node.js Agent is included. | 0 or 1 only applicable in Windows. |
+| App setting name | Definition | Value |
+|||:|
+| ApplicationInsightsAgent_EXTENSION_VERSION | Main extension, which controls runtime monitoring. | `~2` in Windows or `~3` in Linux. |
+| XDT_MicrosoftApplicationInsights_NodeJS | Flag to control if the Node.js agent is included. | 0 or 1 (only applicable in Windows). |
+> [!NOTE]
+> Profiler and snapshot debugger are not available for Node.js applications
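As a hedged sketch (placeholder resource names; not the article's own automation steps), the same settings could be applied with the Az PowerShell module while preserving any settings already on the app:

```powershell
# Sketch only: hypothetical resource group and app names.
$app = Get-AzWebApp -ResourceGroupName "<ResourceGroupName>" -Name "<AppServiceName>"

# Set-AzWebApp -AppSettings replaces the whole collection, so copy the existing settings first.
$settings = @{}
foreach ($s in $app.SiteConfig.AppSettings) { $settings[$s.Name] = $s.Value }

$settings["ApplicationInsightsAgent_EXTENSION_VERSION"] = "~2"   # use "~3" for App Service on Linux
$settings["XDT_MicrosoftApplicationInsights_NodeJS"]    = "1"    # only applicable on Windows

Set-AzWebApp -ResourceGroupName "<ResourceGroupName>" -Name "<AppServiceName>" -AppSettings $settings
```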
[!INCLUDE [azure-web-apps-arm-automation](../../../includes/azure-monitor-app-insights-azure-web-apps-arm-automation.md)] - ## Troubleshooting Below is our step-by-step troubleshooting guide for extension/agent based monitoring for Node.js based applications running on Azure App Services.
Below is our step-by-step troubleshooting guide for extension/agent based monito
- Confirm that the `Application Insights Extension Status` is `Pre-Installed Site Extension, version 2.8.x.xxxx, is running.`
- If it is not running, follow the [enable Application Insights monitoring instructions](#enable-agent-based-monitoring).
+ If it is not running, follow the [enable Application Insights monitoring instructions](#enable-application-insights).
- Navigate to *D:\local\Temp\status.json* and open *status.json*.
Below is our step-by-step troubleshooting guide for extension/agent based monito
## Release notes
-For the latest updates and bug fixes [consult the release notes](web-app-extension-release-notes.md).
+For the latest updates and bug fixes, [consult the release notes](web-app-extension-release-notes.md).
## Next steps+ * [Monitor Azure Functions with Application Insights](monitor-functions.md). * [Enable Azure diagnostics](../agents/diagnostics-extension-to-application-insights.md) to be sent to Application Insights. * [Monitor service health metrics](../data-platform.md) to make sure your service is available and responsive.
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-in-process-agent.md
If you want to attach custom dimensions to your logs, use [Log4j 1.2 MDC](https:
For help with troubleshooting, see [Troubleshooting](java-standalone-troubleshoot.md).
+## Release notes
+
+See the [release notes](https://github.com/microsoft/ApplicationInsights-Java/releases) on GitHub.
+ ## Support To get support:
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/manage-cost-storage.md
To set the retention of a particular data type (in this example SecurityEvent) t
Valid values for `retentionInDays` are from 4 through 730.
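Before the ARMClient example that follows, here is a hedged PowerShell sketch of the same per-table call using `Invoke-AzRestMethod`. The subscription, resource group, and workspace names are placeholders, and the resource path and API version are assumptions to verify against the ARMClient example in the full article:

```powershell
# Sketch only: placeholder subscription, resource group, and workspace names; verify the path and api-version.
$path = "/subscriptions/<SubscriptionId>/resourceGroups/<ResourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<WorkspaceName>/Tables/SecurityEvent?api-version=2017-04-26-preview"

# Set SecurityEvent retention to 730 days (valid values are 4 through 730).
Invoke-AzRestMethod -Method PUT -Path $path -Payload '{"properties": {"retentionInDays": 730}}'
```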
-The `Usage` and `AzureActivity` data types can't be set with custom retention. They take on the maximum of the default workspace retention or 90 days.
- A great tool to connect directly to Azure Resource Manager to set retention by data type is the OSS tool [ARMclient](https://github.com/projectkudu/ARMClient). Learn more about ARMclient from articles by [David Ebbo](http://blog.davidebbo.com/2015/01/azure-resource-manager-client.html) and Daniel Bowbyes. Here's an example using ARMClient, setting SecurityEvent data to a 730-day retention: ```
azure-netapp-files Performance Linux Nfs Read Ahead https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/performance-linux-nfs-read-ahead.md
na Previously updated : 07/02/2021 Last updated : 02/02/2022 # Linux NFS read-ahead best practices for Azure NetApp Files
Read-ahead can be defined either dynamically per NFS mount using the following s
To show the current read-ahead value (the returned value is in KiB), run the following command:
-`$ ./readahead.sh show <mount-point>`
+`$ ./readahead.sh show <mount-point>`
To set a new value for read-ahead, run the following command:
-`$ ./readahead.sh show <mount-point> [read-ahead-kb]`
+`$ ./readahead.sh set <mount-point> [read-ahead-kb]`
### Example
azure-percept How To Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-deploy-model.md
Follow this guide to deploy a vision AI model to your Azure Percept DK from with
:::image type="content" source="./media/how-to-deploy-model/select-device.png" alt-text="Percept devices list.":::
-1. On the next page, click **Deploy a sample model** if you would like to deploy one of the pre-trained sample vision models. If you would like to deploy an existing [custom no-code vision solution](./tutorial-nocode-vision.md), click **Deploy a Custom Vision project**. If you do not see your Custom Vision projects, set project's domain to one of Compact domains on [Custom Vision portal](https://www.customvision.ai/) and train a model again. Only Compact domains support model export to edge devices.
+1. On the next page, click **Deploy a sample model** if you would like to deploy one of the pre-trained sample vision models. If you would like to deploy an existing [custom no-code vision solution](./tutorial-nocode-vision.md), click **Deploy a Custom Vision project**. If you do not see your Custom Vision projects, set the project's domain to "General (Compact)" in the [Custom Vision portal](https://www.customvision.ai/) and train the model again. Other domains are not currently supported.
:::image type="content" source="./media/how-to-deploy-model/deploy-model.png" alt-text="Model choices for deployment.":::
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/resource-name-rules.md
In the following tables, the term alphanumeric refers to:
> | namespaces | global | 6-50 | Alphanumerics and hyphens.<br><br>Start with letter. End with letter or number. | > | namespaces / AuthorizationRules | namespace | 1-50 | Alphanumerics, periods, hyphens and underscores.<br><br>Start and end with letter or number. | > | namespaces / disasterRecoveryConfigs | global | 6-50 | Alphanumerics and hyphens.<br><br>Start with letter. End with alphanumeric. |
-> | namespaces / eventhubs | namespace | 1-50 | Alphanumerics, periods, hyphens and underscores.<br><br>Start and end with letter or number. |
+> | namespaces / eventhubs | namespace | 1-256 | Alphanumerics, periods, hyphens and underscores.<br><br>Start and end with letter or number. |
> | namespaces / eventhubs / authorizationRules | event hub | 1-50 | Alphanumerics, periods, hyphens and underscores.<br><br>Start and end with letter or number. | > | namespaces / eventhubs / consumergroups | event hub | 1-50 | Alphanumerics, periods, hyphens and underscores.<br><br>Start and end with letter or number. |
azure-sql Authentication Azure Ad Only Authentication Create Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/authentication-azure-ad-only-authentication-create-server.md
The [SQL Security Manager](../../role-based-access-control/built-in-roles.md#sql
The following section provides you with examples and scripts on how to create a logical server or managed instance with an Azure AD admin set for the server or instance, and have Azure AD-only authentication enabled during server creation. For more information on the feature, see [Azure AD-only authentication](authentication-azure-ad-only-authentication.md).
-In our examples, we're enabling Azure AD-only authentication during server or managed instance creation, with a system assigned server admin and password. This will prevent server admin access when Azure AD-only authentication is enabled, and only allows the Azure AD admin to access the resource. It's optional to add parameters to the APIs to include your own server admin and password during server creation. However, the password cannot be reset until you disable Azure AD-only authentication.
+In our examples, we're enabling Azure AD-only authentication during server or managed instance creation, with a system assigned server admin and password. This will prevent server admin access when Azure AD-only authentication is enabled, and only allows the Azure AD admin to access the resource. It's optional to add parameters to the APIs to include your own server admin and password during server creation. However, the password cannot be reset until you disable Azure AD-only authentication. An example of how to use these optional parameters to specify the server admin login name is presented in the [PowerShell](?tabs=azure-powershell#azure-sql-database) tab on this page.
> [!NOTE] > To change the existing properties after server or managed instance creation, other existing APIs should be used. For more information, see [Managing Azure AD-only authentication using APIs](authentication-azure-ad-only-authentication.md#managing-azure-ad-only-authentication-using-apis) and [Configure and manage Azure AD authentication with Azure SQL](authentication-aad-configure.md).
Replace the following values in the example:
New-AzSqlServer -ResourceGroupName "<ResourceGroupName>" -Location "<Location>" -ServerName "<ServerName>" -ServerVersion "12.0" -ExternalAdminName "<AzureADAccount>" -EnableActiveDirectoryOnlyAuthentication ```
+Here is an example of specifying the server admin name (instead of letting it be created automatically) at the time of logical server creation. As mentioned earlier, this login is not usable when Azure AD-only authentication is enabled.
+
+```powershell
+$cred = Get-Credential
+New-AzSqlServer -ResourceGroupName "<ResourceGroupName>" -Location "<Location>" -ServerName "<ServerName>" -ServerVersion "12.0" -ExternalAdminName "<AzureADAccount>" -EnableActiveDirectoryOnlyAuthentication -SqlAdministratorCredentials $cred
+```
+ For more information, see [New-AzSqlServer](/powershell/module/az.sql/new-azsqlserver). # [Rest API](#tab/rest-api)
azure-sql Auto Failover Group Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/auto-failover-group-overview.md
Previously updated : 10/25/2021 Last updated : 2/2/2022 # Use auto-failover groups to enable transparent and coordinated geo-failover of multiple databases
The failover group will manage geo-failover of all databases on the primary mana
### <a name="using-read-write-listener-for-oltp-workload"></a> Use the read-write listener to connect to the primary managed instance
-For read-write workloads, use `<fog-name>.zone_id.database.windows.net` as the server name. Connections will be automatically directed to the primary. This name does not change after failover. The geo-failover involves updating the DNS record, so the client connections are redirected to the new primary only after the client DNS cache is refreshed. Because the secondary instance shares the DNS zone with the primary, the client application will be able to reconnect to it using the same server-side SAN certificate.
+For read-write workloads, use `<fog-name>.zone_id.database.windows.net` as the server name. Connections will be automatically directed to the primary. This name does not change after failover. The geo-failover involves updating the DNS record, so the client connections are redirected to the new primary only after the client DNS cache is refreshed. Because the secondary instance shares the DNS zone with the primary, the client application will be able to reconnect to it using the same server-side SAN certificate. The read-write listener and read-only listener cannot be reached via [public endpoint for managed instance](../managed-instance/public-endpoint-configure.md).
### <a name="using-read-only-listener-to-connect-to-the-secondary-instance"></a> Use the read-only listener to connect to the geo-secondary managed instance
-If you have logically isolated read-only workloads that are tolerant to data latency, you can run them on the geo-secondary. To connect directly to the geo-secondary, use `<fog-name>.secondary.<zone_id>.database.windows.net` as the server name.
+If you have logically isolated read-only workloads that are tolerant to data latency, you can run them on the geo-secondary. To connect directly to the geo-secondary, use `<fog-name>.secondary.<zone_id>.database.windows.net` as the server name. The read-write listener and read-only listener cannot be reached via [public endpoint for managed instance](../managed-instance/public-endpoint-configure.md).
> [!NOTE] > In the Business Critical tier, SQL Managed Instance supports the use of [read-only replicas](read-scale-out.md) to offload read-only query workloads, using the `ApplicationIntent=ReadOnly` parameter in the connection string. When you have configured a geo-replicated secondary, you can use this capability to connect to either a read-only replica in the primary location or in the geo-replicated location.
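As a quick sketch (the failover group name and DNS zone ID below are hypothetical), the two listener names can be built and then used as the server name in a connection string:

```powershell
# Sketch only: hypothetical failover group name and DNS zone ID.
$fogName = "fog-contoso"
$zoneId  = "1a2b3c4d5e6f"

# Read-write listener: always resolves to the current primary and doesn't change after failover.
$readWriteServer = "$fogName.$zoneId.database.windows.net"

# Read-only listener: resolves to the geo-secondary for latency-tolerant read-only workloads.
$readOnlyServer  = "$fogName.secondary.$zoneId.database.windows.net"

Write-Host "Read-write server name: $readWriteServer"
Write-Host "Read-only server name:  $readOnlyServer"
```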
When you set up a failover group between primary and secondary SQL Managed Insta
## <a name="upgrading-or-downgrading-primary-database"></a> Scale primary database
-You can scale up or scale down the primary database to a different compute size (within the same service tier) without disconnecting any geo-secondaries. WWhen scaling up, we recommend that you scale up the geo-secondary first, and then scale up the primary. When scaling down, reverse the order: scale down the primary first, and then scale down the secondary. When you scale a database to a different service tier, this recommendation is enforced.
+You can scale up or scale down the primary database to a different compute size (within the same service tier) without disconnecting any geo-secondaries. When scaling up, we recommend that you scale up the geo-secondary first, and then scale up the primary. When scaling down, reverse the order: scale down the primary first, and then scale down the secondary. When you scale a database to a different service tier, this recommendation is enforced.
This sequence is recommended specifically to avoid the problem where the geo-secondary at a lower SKU gets overloaded and must be re-seeded during an upgrade or downgrade process. You could also avoid the problem by making the primary read-only, at the expense of impacting all read-write workloads against the primary.
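A hedged sketch of the recommended scale-up order with the Az PowerShell module (the resource names and target service objective are placeholders):

```powershell
# Sketch only: hypothetical resource names and service objective.
# Scale up the geo-secondary first...
Set-AzSqlDatabase -ResourceGroupName "<SecondaryResourceGroup>" -ServerName "<SecondaryServer>" -DatabaseName "<Database>" -RequestedServiceObjectiveName "P4"

# ...then scale up the primary. Reverse the order when scaling down.
Set-AzSqlDatabase -ResourceGroupName "<PrimaryResourceGroup>" -ServerName "<PrimaryServer>" -DatabaseName "<Database>" -RequestedServiceObjectiveName "P4"
```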
azure-sql Service Tier Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tier-hyperscale.md
Previously updated : 1/14/2022 Last updated : 2/2/2022 # Hyperscale service tier
These are the current limitations to the Hyperscale service tier as of GA. We'r
| Shrink Database | DBCC SHRINKDATABASE or DBCC SHRINKFILE isn't currently supported for Hyperscale databases. | | Database integrity check | DBCC CHECKDB isn't currently supported for Hyperscale databases. DBCC CHECKTABLE ('TableName') WITH TABLOCK and DBCC CHECKFILEGROUP WITH TABLOCK may be used as a workaround. See [Data Integrity in Azure SQL Database](https://azure.microsoft.com/blog/data-integrity-in-azure-sql-database/) for details on data integrity management in Azure SQL Database. | | Elastic Jobs | Using a Hyperscale database as the Job database is not supported. However, elastic jobs can target Hyperscale databases in the same way as any other Azure SQL database. |
+|Data Sync| Using a Hyperscale database as a Hub or Sync Metadata database is not supported. However, a Hyperscale database can be a member database in a Data Sync topology. |
## Next steps
azure-sql Sql Data Sync Data Sql Server Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/sql-data-sync-data-sql-server-sql-database.md
Previously updated : 09/09/2021 Last updated : 2/2/2022 # What is SQL Data Sync for Azure?
Provisioning and deprovisioning during sync group creation, update, and deletion
- Moving servers between different subscriptions isn't supported. - If two primary keys are only different in case (e.g. Foo and foo), Data Sync won't support this scenario. - Truncating tables is not an operation supported by Data Sync (changes won't be tracked).-- Hyperscale databases are not supported.
+- Using a Hyperscale database as a Hub or Sync Metadata database is not supported. However, a Hyperscale database can be a member database in a Data Sync topology.
- Memory-optimized tables are not supported. #### Unsupported data types
azure-sql User Initiated Failover https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/user-initiated-failover.md
Last updated 02/27/2021
# User-initiated manual failover on SQL Managed Instance [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
-This article explains how to manually failover a primary node on SQL Managed Instance General Purpose (GP) and Business Critical (BC) service tiers, and how to manually failover a secondary read-only replica node on the BC service tier only.
+This article explains how to manually fail over a primary node on the SQL Managed Instance General Purpose (GP) and Business Critical (BC) service tiers, and how to manually fail over a secondary read-only replica node on the BC service tier only.
+
+> [!NOTE]
+> This article is not related to cross-region failovers on [auto-failover groups](../database/auto-failover-group-overview.md).
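As a hedged sketch of what a user-initiated failover looks like with the Az PowerShell module (the resource group and instance names are placeholders):

```powershell
# Sketch only: hypothetical resource group and managed instance names.
# Fail over the primary node (GP and BC service tiers).
Invoke-AzSqlInstanceFailover -ResourceGroupName "<ResourceGroupName>" -Name "<ManagedInstanceName>"

# Fail over a secondary read-only replica node (BC service tier only).
Invoke-AzSqlInstanceFailover -ResourceGroupName "<ResourceGroupName>" -Name "<ManagedInstanceName>" -ReadableSecondary
```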
## When to use manual failover
backup Backup Azure Delete Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-delete-vault.md
Title: Delete a Microsoft Azure Recovery Services vault description: In this article, learn how to remove dependencies and then delete an Azure Backup Recovery Services vault. Previously updated : 12/20/2021 Last updated : 01/28/2022
Choose a client:
>The following operation is destructive and can't be undone. All backup data and backup items associated with the protected server will be permanently deleted. Proceed with caution. >[!Note]
->If you're sure that all backed-up items in the vault are no longer required and want to delete them at once without reviewing, [run this PowerShell script](?tabs=powershell#script-for-delete-vault). The script will delete all backup items recursively and eventually the entire vault.
+>If you're sure that all backed-up items in the vault are no longer required and want to delete them at once without reviewing, [run this PowerShell script](./scripts/delete-recovery-services-vault.md). The script will delete all backup items recursively and eventually the entire vault.
To delete a vault, follow these steps:
Follow these steps:
Install-Module -Name Az.RecoveryServices -Repository PSGallery -Force -AllowClobber ``` -- **Step 3**: Copy the following script, change the parameters (vault name, resource group name, subscription name, and subscription ID), and run it in your PowerShell environment.
-
- The file prompts the user for authentication. Provide the user details to start the vault deletion process.
-
- Alternately, you can use Cloud Shell in Azure portal for vaults with fewer backups.
+- **Step 3**: Save the PowerShell script in .ps1 format. Then, to run the script in your PowerShell console, type `./NameOfFile.ps1`. This recursively deletes all backup items and eventually the entire Recovery Services vault.
- :::image type="content" source="./media/backup-azure-delete-vault/delete-vault-using-cloud-shell-inline.png" alt-text="Screenshot showing to delete a vault using Cloud Shell." lightbox="./media/backup-azure-delete-vault/delete-vault-using-cloud-shell-expanded.png":::
+ >[!Note]
+ >To access the PowerShell script for vault deletion, see the [PowerShell script for vault deletion](./scripts/delete-recovery-services-vault.md) article.
**Run the script in the PowerShell console**
Follow these steps:
1. Delete Disaster Recovery items 1. Remove private endpoints
-###### Script for delete vault
-
-```azurepowershell-interactive
-Connect-AzAccount
-
-$VaultName = "Vault name" #enter vault name
-$Subscription = "Subscription name" #enter Subscription name
-$ResourceGroup = "Resource group name" #enter Resource group name
-$SubscriptionId = "Subscription ID" #enter Subscription ID
-
-Select-AzSubscription $Subscription
-$VaultToDelete = Get-AzRecoveryServicesVault -Name $VaultName -ResourceGroupName $ResourceGroup
-Set-AzRecoveryServicesAsrVaultContext -Vault $VaultToDelete
-
-Set-AzRecoveryServicesVaultProperty -Vault $VaultToDelete.ID -SoftDeleteFeatureState Disable #disable soft delete
-Write-Host "Soft delete disabled for the vault" $VaultName
-$containerSoftDelete = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureVM -WorkloadType AzureVM -VaultId $VaultToDelete.ID | Where-Object {$_.DeleteState -eq "ToBeDeleted"} #fetch backup items in soft delete state
-foreach ($softitem in $containerSoftDelete)
-{
- Undo-AzRecoveryServicesBackupItemDeletion -Item $softitem -VaultId $VaultToDelete.ID -Force #undelete items in soft delete state
-}
-#Invoking API to disable enhanced security
-$azProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
-$profileClient = New-Object -TypeName Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient -ArgumentList ($azProfile)
-$accesstoken = Get-AzAccessToken
-$token = $accesstoken.Token
-$authHeader = @{
- 'Content-Type'='application/json'
- 'Authorization'='Bearer ' + $token
-}
-$body = @{properties=@{enhancedSecurityState= "Disabled"}}
-$restUri = 'https://management.azure.com/subscriptions/'+$SubscriptionId+'/resourcegroups/'+$ResourceGroup+'/providers/Microsoft.RecoveryServices/vaults/'+$VaultName+'/backupconfig/vaultconfig?api-version=2019-05-13'
-$response = Invoke-RestMethod -Uri $restUri -Headers $authHeader -Body ($body | ConvertTo-JSON -Depth 9) -Method PATCH
-
-#Fetch all protected items and servers
-$backupItemsVM = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureVM -WorkloadType AzureVM -VaultId $VaultToDelete.ID
-$backupItemsSQL = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureWorkload -WorkloadType MSSQL -VaultId $VaultToDelete.ID
-$backupItemsAFS = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureStorage -WorkloadType AzureFiles -VaultId $VaultToDelete.ID
-$backupItemsSAP = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureWorkload -WorkloadType SAPHanaDatabase -VaultId $VaultToDelete.ID
-$backupContainersSQL = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -Status Registered -VaultId $VaultToDelete.ID | Where-Object {$_.ExtendedInfo.WorkloadType -eq "SQL"}
-$protectableItemsSQL = Get-AzRecoveryServicesBackupProtectableItem -WorkloadType MSSQL -VaultId $VaultToDelete.ID | Where-Object {$_.IsAutoProtected -eq $true}
-$backupContainersSAP = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -Status Registered -VaultId $VaultToDelete.ID | Where-Object {$_.ExtendedInfo.WorkloadType -eq "SAPHana"}
-$StorageAccounts = Get-AzRecoveryServicesBackupContainer -ContainerType AzureStorage -Status Registered -VaultId $VaultToDelete.ID
-$backupServersMARS = Get-AzRecoveryServicesBackupContainer -ContainerType "Windows" -BackupManagementType MAB -VaultId $VaultToDelete.ID
-$backupServersMABS = Get-AzRecoveryServicesBackupManagementServer -VaultId $VaultToDelete.ID| Where-Object { $_.BackupManagementType -eq "AzureBackupServer" }
-$backupServersDPM = Get-AzRecoveryServicesBackupManagementServer -VaultId $VaultToDelete.ID | Where-Object { $_.BackupManagementType-eq "SCDPM" }
-$pvtendpoints = Get-AzPrivateEndpointConnection -PrivateLinkResourceId $VaultToDelete.ID
-
-foreach($item in $backupItemsVM)
- {
- Disable-AzRecoveryServicesBackupProtection -Item $item -VaultId $VaultToDelete.ID -RemoveRecoveryPoints -Force #stop backup and delete Azure VM backup items
- }
-Write-Host "Disabled and deleted Azure VM backup items"
-
-foreach($item in $backupItemsSQL)
- {
- Disable-AzRecoveryServicesBackupProtection -Item $item -VaultId $VaultToDelete.ID -RemoveRecoveryPoints -Force #stop backup and delete SQL Server in Azure VM backup items
- }
-Write-Host "Disabled and deleted SQL Server backup items"
-
-foreach($item in $protectableItems)
- {
- Disable-AzRecoveryServicesBackupAutoProtection -BackupManagementType AzureWorkload -WorkloadType MSSQL -InputItem $item -VaultId $VaultToDelete.ID #disable auto-protection for SQL
- }
-Write-Host "Disabled auto-protection and deleted SQL protectable items"
-
-foreach($item in $backupContainersSQL)
- {
- Unregister-AzRecoveryServicesBackupContainer -Container $item -Force -VaultId $VaultToDelete.ID #unregister SQL Server in Azure VM protected server
- }
-Write-Host "Deleted SQL Servers in Azure VM containers"
-
-foreach($item in $backupItemsSAP)
- {
- Disable-AzRecoveryServicesBackupProtection -Item $item -VaultId $VaultToDelete.ID -RemoveRecoveryPoints -Force #stop backup and delete SAP HANA in Azure VM backup items
- }
-Write-Host "Disabled and deleted SAP HANA backup items"
-
-foreach($item in $backupContainersSAP)
- {
- Unregister-AzRecoveryServicesBackupContainer -Container $item -Force -VaultId $VaultToDelete.ID #unregister SAP HANA in Azure VM protected server
- }
-Write-Host "Deleted SAP HANA in Azure VM containers"
-
-foreach($item in $backupItemsAFS)
- {
- Disable-AzRecoveryServicesBackupProtection -Item $item -VaultId $VaultToDelete.ID -RemoveRecoveryPoints -Force #stop backup and delete Azure File Shares backup items
- }
-Write-Host "Disabled and deleted Azure File Share backups"
-
-foreach($item in $StorageAccounts)
- {
- Unregister-AzRecoveryServicesBackupContainer -container $item -Force -VaultId $VaultToDelete.ID #unregister storage accounts
- }
-Write-Host "Unregistered Storage Accounts"
-
-foreach($item in $backupServersMARS)
- {
- Unregister-AzRecoveryServicesBackupContainer -Container $item -Force -VaultId $VaultToDelete.ID #unregister MARS servers and delete corresponding backup items
- }
-Write-Host "Deleted MARS Servers"
-
-foreach($item in $backupServersMABS)
- {
- Unregister-AzRecoveryServicesBackupManagementServer -AzureRmBackupManagementServer $item -VaultId $VaultToDelete.ID #unregister MABS servers and delete corresponding backup items
- }
-Write-Host "Deleted MAB Servers"
-
-foreach($item in $backupServersDPM)
- {
- Unregister-AzRecoveryServicesBackupManagementServer -AzureRmBackupManagementServer $item -VaultId $VaultToDelete.ID #unregister DPM servers and delete corresponding backup items
- }
-Write-Host "Deleted DPM Servers"
-
-#Deletion of ASR Items
-
-$fabricObjects = Get-AzRecoveryServicesAsrFabric
-if ($null -ne $fabricObjects) {
- # First DisableDR all VMs.
- foreach ($fabricObject in $fabricObjects) {
- $containerObjects = Get-AzRecoveryServicesAsrProtectionContainer -Fabric $fabricObject
- foreach ($containerObject in $containerObjects) {
- $protectedItems = Get-AzRecoveryServicesAsrReplicationProtectedItem -ProtectionContainer $containerObject
- # DisableDR all protected items
- foreach ($protectedItem in $protectedItems) {
- Write-Host "Triggering DisableDR(Purge) for item:" $protectedItem.Name
- Remove-AzRecoveryServicesAsrReplicationProtectedItem -InputObject $protectedItem -Force
- Write-Host "DisableDR(Purge) completed"
- }
-
- $containerMappings = Get-AzRecoveryServicesAsrProtectionContainerMapping `
- -ProtectionContainer $containerObject
- # Remove all Container Mappings
- foreach ($containerMapping in $containerMappings) {
- Write-Host "Triggering Remove Container Mapping: " $containerMapping.Name
- Remove-AzRecoveryServicesAsrProtectionContainerMapping -ProtectionContainerMapping $containerMapping -Force
- Write-Host "Removed Container Mapping."
- }
- }
- $NetworkObjects = Get-AzRecoveryServicesAsrNetwork -Fabric $fabricObject
- foreach ($networkObject in $NetworkObjects)
- {
- #Get the PrimaryNetwork
- $PrimaryNetwork = Get-AzRecoveryServicesAsrNetwork -Fabric $fabricObject -FriendlyName $networkObject
- $NetworkMappings = Get-AzRecoveryServicesAsrNetworkMapping -Network $PrimaryNetwork
- foreach ($networkMappingObject in $NetworkMappings)
- {
- #Get the Neetwork Mappings
- $NetworkMapping = Get-AzRecoveryServicesAsrNetworkMapping -Name $networkMappingObject.Name -Network $PrimaryNetwork
- Remove-AzRecoveryServicesAsrNetworkMapping -InputObject $NetworkMapping
- }
- }
- # Remove Fabric
- Write-Host "Triggering Remove Fabric:" $fabricObject.FriendlyName
- Remove-AzRecoveryServicesAsrFabric -InputObject $fabricObject -Force
- Write-Host "Removed Fabric."
- }
-}
-
-foreach($item in $pvtendpoints)
- {
- $penamesplit = $item.Name.Split(".")
- $pename = $penamesplit[0]
- Remove-AzPrivateEndpointConnection -ResourceId $item.PrivateEndpoint.Id -Force #remove private endpoint connections
- Remove-AzPrivateEndpoint -Name $pename -ResourceGroupName $ResourceGroup -Force #remove private endpoints
- }
-Write-Host "Removed Private Endpoints"
-
-#Recheck ASR items in vault
-$fabricCount = 0
-$ASRProtectedItems = 0
-$ASRPolicyMappings = 0
-$fabricObjects = Get-AzRecoveryServicesAsrFabric
-if ($null -ne $fabricObjects) {
- foreach ($fabricObject in $fabricObjects) {
- $containerObjects = Get-AzRecoveryServicesAsrProtectionContainer -Fabric $fabricObject
- foreach ($containerObject in $containerObjects) {
- $protectedItems = Get-AzRecoveryServicesAsrReplicationProtectedItem -ProtectionContainer $containerObject
- foreach ($protectedItem in $protectedItems) {
- $ASRProtectedItems++
- }
- $containerMappings = Get-AzRecoveryServicesAsrProtectionContainerMapping `
- -ProtectionContainer $containerObject
- foreach ($containerMapping in $containerMappings) {
- $ASRPolicyMappings++
- }
- }
- $fabricCount++
- }
-}
-#Recheck presence of backup items in vault
-$backupItemsVMFin = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureVM -WorkloadType AzureVM -VaultId $VaultToDelete.ID
-$backupItemsSQLFin = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureWorkload -WorkloadType MSSQL -VaultId $VaultToDelete.ID
-$backupContainersSQLFin = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -Status Registered -VaultId $VaultToDelete.ID | Where-Object {$_.ExtendedInfo.WorkloadType -eq "SQL"}
-$protectableItemsSQLFin = Get-AzRecoveryServicesBackupProtectableItem -WorkloadType MSSQL -VaultId $VaultToDelete.ID | Where-Object {$_.IsAutoProtected -eq $true}
-$backupItemsSAPFin = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureWorkload -WorkloadType SAPHanaDatabase -VaultId $VaultToDelete.ID
-$backupContainersSAPFin = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -Status Registered -VaultId $VaultToDelete.ID | Where-Object {$_.ExtendedInfo.WorkloadType -eq "SAPHana"}
-$backupItemsAFSFin = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureStorage -WorkloadType AzureFiles -VaultId $VaultToDelete.ID
-$StorageAccountsFin = Get-AzRecoveryServicesBackupContainer -ContainerType AzureStorage -Status Registered -VaultId $VaultToDelete.ID
-$backupServersMARSFin = Get-AzRecoveryServicesBackupContainer -ContainerType "Windows" -BackupManagementType MAB -VaultId $VaultToDelete.ID
-$backupServersMABSFin = Get-AzRecoveryServicesBackupManagementServer -VaultId $VaultToDelete.ID| Where-Object { $_.BackupManagementType -eq "AzureBackupServer" }
-$backupServersDPMFin = Get-AzRecoveryServicesBackupManagementServer -VaultId $VaultToDelete.ID | Where-Object { $_.BackupManagementType-eq "SCDPM" }
-$pvtendpointsFin = Get-AzPrivateEndpointConnection -PrivateLinkResourceId $VaultToDelete.ID
-Write-Host "Number of backup items left in the vault and which need to be deleted:" $backupItemsVMFin.count "Azure VMs" $backupItemsSQLFin.count "SQL Server Backup Items" $backupContainersSQLFin.count "SQL Server Backup Containers" $protectableItemsSQLFin.count "SQL Server Instances" $backupItemsSAPFin.count "SAP HANA backup items" $backupContainersSAPFin.count "SAP HANA Backup Containers" $backupItemsAFSFin.count "Azure File Shares" $StorageAccountsFin.count "Storage Accounts" $backupServersMARSFin.count "MARS Servers" $backupServersMABSFin.count "MAB Servers" $backupServersDPMFin.count "DPM Servers" $pvtendpointsFin.count "Private endpoints"
-Write-Host "Number of ASR items left in the vault and which need to be deleted:" $ASRProtectedItems "ASR protected items" $ASRPolicyMappings "ASR policy mappings" $fabricCount "ASR Fabrics" $pvtendpointsFin.count "Private endpoints. Warning: This script will only remove the replication configuration from Azure Site Recovery and not from the source. Please cleanup the source manually. Visit https://go.microsoft.com/fwlink/?linkid=2182781 to learn more"
-Remove-AzRecoveryServicesVault -Vault $VaultToDelete
-#Finish
-
-```
--
-To delete individual backup items or to write your own script, use the following PowerShell commands:
-
-To stop protection and delete the backup data:
--- If you're using SQL in Azure VMs backup and enabled autoprotection for SQL instances, first disable the autoprotection.
+To delete individual backup items or write your own script, use the following PowerShell commands:
+
+- Stop protection and delete the backup data:
+
+ If you're using SQL Server in Azure VMs backup and have enabled autoprotection for SQL instances, first disable the autoprotection.
```PowerShell Disable-AzRecoveryServicesBackupAutoProtection
To stop protection and delete the backup data:
[Learn more](/powershell/module/az.recoveryservices/disable-azrecoveryservicesbackupautoprotection) on how to disable protection for an Azure Backup-protected item. -- Stop protection and delete data for all backup-protected items in cloud (for example: IaaS VM, Azure file share, and so on):
+- Stop protection and delete data for all backup-protected items in cloud (for example, IaaS VM, Azure file share, and so on):
```PowerShell Disable-AzRecoveryServicesBackupProtection
To stop protection and delete the backup data:
[<CommonParameters>] ```
- [Learn more](/powershell/module/az.recoveryservices/disable-azrecoveryservicesbackupprotection) about disables protection for a Backup-protected item.
+ [Learn more](/powershell/module/az.recoveryservices/disable-azrecoveryservicesbackupprotection) about disabling protection for a Backup-protected item.
After deleting the backed-up data, unregister any on-premises containers and management servers.
For more information on the ARMClient command, see [ARMClient README](https://gi
## Next steps -- [Learn about Recovery Services vaults](backup-azure-recovery-services-vault-overview.md)-- [Learn about monitoring and managing Recovery Services vaults](backup-azure-manage-windows-server.md)
+- [Learn about Recovery Services vaults](backup-azure-recovery-services-vault-overview.md).
+- [Learn about monitoring and managing Recovery Services vaults](backup-azure-manage-windows-server.md).
backup Delete Recovery Services Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/scripts/delete-recovery-services-vault.md
+
+ Title: Script Sample - Delete a Recovery Services vault
+description: Learn about how to use a PowerShell script to delete a Recovery Services vault.
+ Last updated : 01/30/2022+++++
+# PowerShell script to delete a Recovery Services vault
+
+This script helps you to delete a Recovery Services vault.
+
+## How to execute the script
+
+1. Save the script from the following section on your machine with a name of your choice and the _.ps1_ extension.
+1. In the script, change the parameters (vault name, resource group name, subscription name, and subscription ID).
+1. To run it in your PowerShell environment, continue with the next steps.
+
+ Alternatively, you can use Cloud Shell in Azure portal for vaults with fewer backups.
+
+ :::image type="content" source="../media/backup-azure-delete-vault/delete-vault-using-cloud-shell-inline.png" alt-text="Screenshot showing to delete a vault using Cloud Shell." lightbox="../media/backup-azure-delete-vault/delete-vault-using-cloud-shell-expanded.png":::
+
+1. If you haven't already, upgrade to the latest version of PowerShell 7 by running the following command in the PowerShell window:
+
+ ```azurepowershell-interactive
+ iex "& { $(irm https://aka.ms/install-powershell.ps1) } -UseMSI"
+ ```
+
+1. Launch PowerShell 7 as Administrator.
+1. Before you run the script for vault deletion, run the following commands to upgrade the _Az.RecoveryServices_ module to the latest version:
+
+ ```azurepowershell-interactive
+ Uninstall-Module -Name Az.RecoveryServices
+ Set-ExecutionPolicy -ExecutionPolicy Unrestricted
+ Install-Module -Name Az.RecoveryServices -Repository PSGallery -Force -AllowClobber
+ ```
+
+1. In the PowerShell window, change the directory to the location where the file is saved, and then run the file using **./NameOfFile.ps1**.
+1. Provide authentication via browser by signing into your Azure account.
+
+The script recursively deletes all the backup items and, ultimately, the entire vault.
+
+## Script
+
+```azurepowershell-interactive
+Connect-AzAccount
+
+$VaultName = "Vault name" #enter vault name
+$Subscription = "Subscription name" #enter Subscription name
+$ResourceGroup = "Resource group name" #enter Resource group name
+$SubscriptionId = "Subscription ID" #enter Subscription ID
+
+Select-AzSubscription $Subscription
+$VaultToDelete = Get-AzRecoveryServicesVault -Name $VaultName -ResourceGroupName $ResourceGroup
+Set-AzRecoveryServicesAsrVaultContext -Vault $VaultToDelete
+
+Set-AzRecoveryServicesVaultProperty -Vault $VaultToDelete.ID -SoftDeleteFeatureState Disable #disable soft delete
+Write-Host "Soft delete disabled for the vault" $VaultName
+$containerSoftDelete = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureVM -WorkloadType AzureVM -VaultId $VaultToDelete.ID | Where-Object {$_.DeleteState -eq "ToBeDeleted"} #fetch backup items in soft delete state
+foreach ($softitem in $containerSoftDelete)
+{
+ Undo-AzRecoveryServicesBackupItemDeletion -Item $softitem -VaultId $VaultToDelete.ID -Force #undelete items in soft delete state
+}
+#Invoking API to disable enhanced security
+$azProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
+$profileClient = New-Object -TypeName Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient -ArgumentList ($azProfile)
+$accesstoken = Get-AzAccessToken
+$token = $accesstoken.Token
+$authHeader = @{
+ 'Content-Type'='application/json'
+ 'Authorization'='Bearer ' + $token
+}
+$body = @{properties=@{enhancedSecurityState= "Disabled"}}
+$restUri = 'https://management.azure.com/subscriptions/'+$SubscriptionId+'/resourcegroups/'+$ResourceGroup+'/providers/Microsoft.RecoveryServices/vaults/'+$VaultName+'/backupconfig/vaultconfig?api-version=2019-05-13' #Replace "management.azure.com" with "management.usgovcloudapi.net" if your subscription is in USGov.
+$response = Invoke-RestMethod -Uri $restUri -Headers $authHeader -Body ($body | ConvertTo-JSON -Depth 9) -Method PATCH
++
+#Fetch all protected items and servers
+$backupItemsVM = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureVM -WorkloadType AzureVM -VaultId $VaultToDelete.ID
+$backupItemsSQL = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureWorkload -WorkloadType MSSQL -VaultId $VaultToDelete.ID
+$backupItemsAFS = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureStorage -WorkloadType AzureFiles -VaultId $VaultToDelete.ID
+$backupItemsSAP = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureWorkload -WorkloadType SAPHanaDatabase -VaultId $VaultToDelete.ID
+$backupContainersSQL = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -Status Registered -VaultId $VaultToDelete.ID | Where-Object {$_.ExtendedInfo.WorkloadType -eq "SQL"}
+$protectableItemsSQL = Get-AzRecoveryServicesBackupProtectableItem -WorkloadType MSSQL -VaultId $VaultToDelete.ID | Where-Object {$_.IsAutoProtected -eq $true}
+$backupContainersSAP = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -Status Registered -VaultId $VaultToDelete.ID | Where-Object {$_.ExtendedInfo.WorkloadType -eq "SAPHana"}
+$StorageAccounts = Get-AzRecoveryServicesBackupContainer -ContainerType AzureStorage -Status Registered -VaultId $VaultToDelete.ID
+$backupServersMARS = Get-AzRecoveryServicesBackupContainer -ContainerType "Windows" -BackupManagementType MAB -VaultId $VaultToDelete.ID
+$backupServersMABS = Get-AzRecoveryServicesBackupManagementServer -VaultId $VaultToDelete.ID| Where-Object { $_.BackupManagementType -eq "AzureBackupServer" }
+$backupServersDPM = Get-AzRecoveryServicesBackupManagementServer -VaultId $VaultToDelete.ID | Where-Object { $_.BackupManagementType-eq "SCDPM" }
+$pvtendpoints = Get-AzPrivateEndpointConnection -PrivateLinkResourceId $VaultToDelete.ID
+
+foreach($item in $backupItemsVM)
+ {
+ Disable-AzRecoveryServicesBackupProtection -Item $item -VaultId $VaultToDelete.ID -RemoveRecoveryPoints -Force #stop backup and delete Azure VM backup items
+ }
+Write-Host "Disabled and deleted Azure VM backup items"
+
+foreach($item in $backupItemsSQL)
+ {
+ Disable-AzRecoveryServicesBackupProtection -Item $item -VaultId $VaultToDelete.ID -RemoveRecoveryPoints -Force #stop backup and delete SQL Server in Azure VM backup items
+ }
+Write-Host "Disabled and deleted SQL Server backup items"
+
+foreach($item in $protectableItemsSQL)
+ {
+ Disable-AzRecoveryServicesBackupAutoProtection -BackupManagementType AzureWorkload -WorkloadType MSSQL -InputItem $item -VaultId $VaultToDelete.ID #disable auto-protection for SQL
+ }
+Write-Host "Disabled auto-protection and deleted SQL protectable items"
+
+foreach($item in $backupContainersSQL)
+ {
+ Unregister-AzRecoveryServicesBackupContainer -Container $item -Force -VaultId $VaultToDelete.ID #unregister SQL Server in Azure VM protected server
+ }
+Write-Host "Deleted SQL Servers in Azure VM containers"
+
+foreach($item in $backupItemsSAP)
+ {
+ Disable-AzRecoveryServicesBackupProtection -Item $item -VaultId $VaultToDelete.ID -RemoveRecoveryPoints -Force #stop backup and delete SAP HANA in Azure VM backup items
+ }
+Write-Host "Disabled and deleted SAP HANA backup items"
+
+foreach($item in $backupContainersSAP)
+ {
+ Unregister-AzRecoveryServicesBackupContainer -Container $item -Force -VaultId $VaultToDelete.ID #unregister SAP HANA in Azure VM protected server
+ }
+Write-Host "Deleted SAP HANA in Azure VM containers"
+
+foreach($item in $backupItemsAFS)
+ {
+ Disable-AzRecoveryServicesBackupProtection -Item $item -VaultId $VaultToDelete.ID -RemoveRecoveryPoints -Force #stop backup and delete Azure File Shares backup items
+ }
+Write-Host "Disabled and deleted Azure File Share backups"
+
+foreach($item in $StorageAccounts)
+ {
+ Unregister-AzRecoveryServicesBackupContainer -container $item -Force -VaultId $VaultToDelete.ID #unregister storage accounts
+ }
+Write-Host "Unregistered Storage Accounts"
+
+foreach($item in $backupServersMARS)
+ {
+ Unregister-AzRecoveryServicesBackupContainer -Container $item -Force -VaultId $VaultToDelete.ID #unregister MARS servers and delete corresponding backup items
+ }
+Write-Host "Deleted MARS Servers"
+
+foreach($item in $backupServersMABS)
+ {
+ Unregister-AzRecoveryServicesBackupManagementServer -AzureRmBackupManagementServer $item -VaultId $VaultToDelete.ID #unregister MABS servers and delete corresponding backup items
+ }
+Write-Host "Deleted MAB Servers"
+
+foreach($item in $backupServersDPM)
+ {
+ Unregister-AzRecoveryServicesBackupManagementServer -AzureRmBackupManagementServer $item -VaultId $VaultToDelete.ID #unregister DPM servers and delete corresponding backup items
+ }
+Write-Host "Deleted DPM Servers"
+
+#Deletion of ASR Items
+
+$fabricObjects = Get-AzRecoveryServicesAsrFabric
+if ($null -ne $fabricObjects) {
+ # First DisableDR all VMs.
+ foreach ($fabricObject in $fabricObjects) {
+ $containerObjects = Get-AzRecoveryServicesAsrProtectionContainer -Fabric $fabricObject
+ foreach ($containerObject in $containerObjects) {
+ $protectedItems = Get-AzRecoveryServicesAsrReplicationProtectedItem -ProtectionContainer $containerObject
+ # DisableDR all protected items
+ foreach ($protectedItem in $protectedItems) {
+ Write-Host "Triggering DisableDR(Purge) for item:" $protectedItem.Name
+ Remove-AzRecoveryServicesAsrReplicationProtectedItem -InputObject $protectedItem -Force
+ Write-Host "DisableDR(Purge) completed"
+ }
+
+ $containerMappings = Get-AzRecoveryServicesAsrProtectionContainerMapping `
+ -ProtectionContainer $containerObject
+ # Remove all Container Mappings
+ foreach ($containerMapping in $containerMappings) {
+ Write-Host "Triggering Remove Container Mapping: " $containerMapping.Name
+ Remove-AzRecoveryServicesAsrProtectionContainerMapping -ProtectionContainerMapping $containerMapping -Force
+ Write-Host "Removed Container Mapping."
+ }
+ }
+ $NetworkObjects = Get-AzRecoveryServicesAsrNetwork -Fabric $fabricObject
+ foreach ($networkObject in $NetworkObjects)
+ {
+ #Get the PrimaryNetwork
+ $PrimaryNetwork = Get-AzRecoveryServicesAsrNetwork -Fabric $fabricObject -FriendlyName $networkObject
+ $NetworkMappings = Get-AzRecoveryServicesAsrNetworkMapping -Network $PrimaryNetwork
+ foreach ($networkMappingObject in $NetworkMappings)
+ {
+ #Get the Neetwork Mappings
+ $NetworkMapping = Get-AzRecoveryServicesAsrNetworkMapping -Name $networkMappingObject.Name -Network $PrimaryNetwork
+ Remove-AzRecoveryServicesAsrNetworkMapping -InputObject $NetworkMapping
+ }
+ }
+ # Remove Fabric
+ Write-Host "Triggering Remove Fabric:" $fabricObject.FriendlyName
+ Remove-AzRecoveryServicesAsrFabric -InputObject $fabricObject -Force
+ Write-Host "Removed Fabric."
+ }
+}
+
+foreach($item in $pvtendpoints)
+ {
+ $penamesplit = $item.Name.Split(".")
+ $pename = $penamesplit[0]
+ Remove-AzPrivateEndpointConnection -ResourceId $item.PrivateEndpoint.Id -Force #remove private endpoint connections
+ Remove-AzPrivateEndpoint -Name $pename -ResourceGroupName $ResourceGroup -Force #remove private endpoints
+ }
+Write-Host "Removed Private Endpoints"
+
+#Recheck ASR items in vault
+$fabricCount = 0
+$ASRProtectedItems = 0
+$ASRPolicyMappings = 0
+$fabricObjects = Get-AzRecoveryServicesAsrFabric
+if ($null -ne $fabricObjects) {
+ foreach ($fabricObject in $fabricObjects) {
+ $containerObjects = Get-AzRecoveryServicesAsrProtectionContainer -Fabric $fabricObject
+ foreach ($containerObject in $containerObjects) {
+ $protectedItems = Get-AzRecoveryServicesAsrReplicationProtectedItem -ProtectionContainer $containerObject
+ foreach ($protectedItem in $protectedItems) {
+ $ASRProtectedItems++
+ }
+ $containerMappings = Get-AzRecoveryServicesAsrProtectionContainerMapping `
+ -ProtectionContainer $containerObject
+ foreach ($containerMapping in $containerMappings) {
+ $ASRPolicyMappings++
+ }
+ }
+ $fabricCount++
+ }
+}
+#Recheck presence of backup items in vault
+$backupItemsVMFin = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureVM -WorkloadType AzureVM -VaultId $VaultToDelete.ID
+$backupItemsSQLFin = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureWorkload -WorkloadType MSSQL -VaultId $VaultToDelete.ID
+$backupContainersSQLFin = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -Status Registered -VaultId $VaultToDelete.ID | Where-Object {$_.ExtendedInfo.WorkloadType -eq "SQL"}
+$protectableItemsSQLFin = Get-AzRecoveryServicesBackupProtectableItem -WorkloadType MSSQL -VaultId $VaultToDelete.ID | Where-Object {$_.IsAutoProtected -eq $true}
+$backupItemsSAPFin = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureWorkload -WorkloadType SAPHanaDatabase -VaultId $VaultToDelete.ID
+$backupContainersSAPFin = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -Status Registered -VaultId $VaultToDelete.ID | Where-Object {$_.ExtendedInfo.WorkloadType -eq "SAPHana"}
+$backupItemsAFSFin = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureStorage -WorkloadType AzureFiles -VaultId $VaultToDelete.ID
+$StorageAccountsFin = Get-AzRecoveryServicesBackupContainer -ContainerType AzureStorage -Status Registered -VaultId $VaultToDelete.ID
+$backupServersMARSFin = Get-AzRecoveryServicesBackupContainer -ContainerType "Windows" -BackupManagementType MAB -VaultId $VaultToDelete.ID
+$backupServersMABSFin = Get-AzRecoveryServicesBackupManagementServer -VaultId $VaultToDelete.ID| Where-Object { $_.BackupManagementType -eq "AzureBackupServer" }
+$backupServersDPMFin = Get-AzRecoveryServicesBackupManagementServer -VaultId $VaultToDelete.ID | Where-Object { $_.BackupManagementType-eq "SCDPM" }
+$pvtendpointsFin = Get-AzPrivateEndpointConnection -PrivateLinkResourceId $VaultToDelete.ID
+Write-Host "Number of backup items left in the vault and which need to be deleted:" $backupItemsVMFin.count "Azure VMs" $backupItemsSQLFin.count "SQL Server Backup Items" $backupContainersSQLFin.count "SQL Server Backup Containers" $protectableItemsSQLFin.count "SQL Server Instances" $backupItemsSAPFin.count "SAP HANA backup items" $backupContainersSAPFin.count "SAP HANA Backup Containers" $backupItemsAFSFin.count "Azure File Shares" $StorageAccountsFin.count "Storage Accounts" $backupServersMARSFin.count "MARS Servers" $backupServersMABSFin.count "MAB Servers" $backupServersDPMFin.count "DPM Servers" $pvtendpointsFin.count "Private endpoints"
+Write-Host "Number of ASR items left in the vault and which need to be deleted:" $ASRProtectedItems "ASR protected items" $ASRPolicyMappings "ASR policy mappings" $fabricCount "ASR Fabrics" $pvtendpointsFin.count "Private endpoints. Warning: This script will only remove the replication configuration from Azure Site Recovery and not from the source. Please cleanup the source manually. Visit https://go.microsoft.com/fwlink/?linkid=2182781 to learn more"
+Remove-AzRecoveryServicesVault -Vault $VaultToDelete
+#Finish
+
+```
+
+## Next steps
+
+[Learn more](../backup-azure-delete-vault.md) about the vault deletion process.
cdn Cdn Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-features.md
The following table compares the features available with each product.
| Easy integration with Azure services, such as [Storage](cdn-create-a-storage-account-with-cdn.md), [Web Apps](cdn-add-to-web-app.md), and [Media Services](../media-services/previous/media-services-portal-manage-streaming-endpoints.md) | **&#x2713;** |**&#x2713;** |**&#x2713;** |**&#x2713;** | | Management via [REST API](/rest/api/cdn/), [.NET](cdn-app-dev-net.md), [Node.js](cdn-app-dev-node.md), or [PowerShell](cdn-manage-powershell.md) | **&#x2713;** |**&#x2713;** |**&#x2713;** |**&#x2713;** | | [Compression MIME types](./cdn-improve-performance.md) |Configurable |Configurable |Configurable |Configurable |
-| Compression encodings |gzip, brotli |gzip |gzip, deflate, bzip2, brotli |gzip, deflate, bzip2, brotli |
+| Compression encodings |gzip, brotli |gzip |gzip, deflate, bzip2 |gzip, deflate, bzip2 |
## Migration
cognitive-services Call Center Transcription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/call-center-transcription.md
Title: Call Center Transcription - Speech service
-description: A common scenario for speech-to-text is transcribing large volumes of telephony data that come from various systems, such as Interactive Voice Response (IVR). Using Speech service and the Unified speech model, a business can get high-quality transcriptions with audio capture systems.
+description: A common scenario for speech-to-text is transcribing large volumes of telephony data that come from various systems, such as interactive voice response (IVR). By using Speech service and the Unified speech model, a business can get high-quality transcriptions with audio capture systems.
# Speech service for telephony data
-Telephony data that is generated through landlines, mobile phones, and radios are typically low quality, and narrowband in the range of 8 KHz, which creates challenges when converting speech-to-text. The latest speech recognition models from the Speech service excel at transcribing this telephony data, even in cases when the data is difficult for a human to understand. These models are trained with large volumes of telephony data, and have best-in-market recognition accuracy, even in noisy environments.
+Telephony data that's generated through landlines, mobile phones, and radios is ordinarily of low quality. This data is also narrowband, in the range of 8&nbsp;kHz, which can create challenges when you're converting speech to text.
-A common scenario for speech-to-text is transcribing large volumes of telephony data that may come from various systems, such as Interactive Voice Response (IVR). The audio these systems provide can be stereo or mono, and raw with little-to-no post processing done on the signal. Using the Speech service and the Unified speech model, a business can get high-quality transcriptions, whatever systems are used to capture audio.
+The latest Speech service speech-recognition models excel at transcribing this telephony data, even when the data is difficult for a human to understand. These models are trained with large volumes of telephony data, and they have best-in-market recognition accuracy, even in noisy environments.
-Telephony data can be used to better understand your customers' needs, identify new marketing opportunities, or evaluate the performance of call center agents. After the data is transcribed, a business can use the output for purposes such as improved telemetry, identifying key phrases, or analyzing customer sentiment.
+A common scenario for speech-to-text is the transcription of large volumes of telephony data that comes from a variety of systems, such as interactive voice response (IVR). The audio that these systems provide can be stereo or mono, and raw, with little to no post-processing done on the signal. By using Speech service and the Unified speech model, your business can get high-quality transcriptions, whatever systems you use to capture audio.
-The technologies outlined in this page are by Microsoft internally for various support call processing services, both in real-time and batch mode.
+You can use telephony data to better understand your customers' needs, identify new marketing opportunities, or evaluate the performance of call center agents. After the data is transcribed, your business can use the output for improving telemetry, identifying key phrases, analyzing customer *sentiment*, and other purposes.
-Let's review some of the technology and related features the Speech service offers.
+The technologies outlined in this article are used by Microsoft internally for various support-call processing services, in both real-time and batch mode.
+
+This article discusses some of the technology and related features that Speech service offers.
> [!IMPORTANT]
-> The Speech service Unified model is trained with diverse data and offers a single-model solution to a number of scenario from Dictation to Telephony analytics.
+> The Speech service Unified model is trained with diverse data and offers a single-model solution to many scenarios, from dictation to telephony analytics.
+
+## Azure technology for call centers
-## Azure Technology for Call Centers
+Beyond the functional aspect of the Speech service features, their primary purpose, as applied to the call center, is to improve the customer experience in three separate domains:
-Beyond the functional aspect of the Speech service features, their primary purpose – when applied to the call center – is to improve the customer experience. Three clear domains exist in this regard:
+- Post-call analytics, which is essentially the batch processing of call recordings after the call.
+- Real-time analytics, which is the processing of an audio signal to extract various insights as the call is taking place (with sentiment as a prominent use case).
+- Voice assistants (bots), which either drive the dialogue between customers and the bot in an attempt to solve their issues, without agent participation, or apply AI protocols to assist the agent.
-- Post-call analytics, which is essentially batch processing of call recordings after the call.-- Real-time analytics, which is processing of the audio signal to extract various insights as the call is taking place (with sentiment being a prominent use case).-- Voice assistants (bots), either driving the dialogue between the customer and the bot in an attempt to solve the customer's issue with no agent participation, or being the application of artificial intelligence (AI) protocols to assist the agent.
+Here is an architecture diagram showing a typical implementation of a batch scenario:
+![Diagram of call center transcription architecture.](media/scenarios/call-center-transcription-architecture.png)
-A typical architecture diagram of the implementation of a batch scenario is depicted in the picture below
-![Call center transcription architecture](media/scenarios/call-center-transcription-architecture.png)
+## Components of speech analytics technology
-## Speech Analytics Technology Components
+Whether the domain is post-call or real-time, Azure offers a set of mature and emerging technologies to help improve the customer experience.
-Whether the domain is post-call or real-time, Azure offers a set of mature and emerging technologies to improve the customer experience.
+### Speech-to-text
-### Speech to text (STT)
+[Speech-to-text](speech-to-text.md) is the most sought-after feature in any call center solution. Because many of the downstream analytics processes rely on transcribed text, the word error rate (WER) metric is of utmost importance. Key challenges in call center transcription include the noise that's prevalent in the call center (for example, other agents speaking in the background), the rich variety of language locales and dialects, and the low quality of the actual telephone signal.
-[Speech-to-text](speech-to-text.md) is the most sought-after feature in any call center solution. Because many of the downstream analytics processes rely on transcribed text, the word error rate (_WER_) is of utmost importance. One of the key challenges in call center transcription is the noise that's prevalent in the call center (for example other agents speaking in the background), the rich variety of language locales and dialects as well as the low quality of the actual telephone signal. WER is highly correlated with how well the acoustic and language models are trained for a given locale, thus the ability to customize the model to your locale is important. Our latest Unified version 4.x models are the solution to both transcription accuracy and latency. Trained with tens of thousands of hours of acoustic data and billions of lexical information, Unified models are the most accurate models in the market to transcribe call center data.
+WER is highly correlated with how well the acoustic and language models are trained for a specific locale. Therefore, it's important to be able to customize the model to your locale. Our latest Unified version 4.x models are the solution to both transcription accuracy and latency. Because they're trained with tens of thousands of hours of acoustic data and billions of bits of lexical information, Unified models are the most accurate in the market for transcribing call center data.
### Sentiment
-Gauging whether the customer had a good experience is one of the most important areas of Speech analytics when applied to the call center space. Our [Batch Transcription API](batch-transcription.md) offers sentiment analysis per utterance. You can aggregate the set of values obtained as part of a call transcript to determine the sentiment of the call for both your agents and the customer.
+In the call center space, the ability to gauge whether customers have had a good experience is one of the most important areas of Speech analytics. The Microsoft [Batch Transcription API](batch-transcription.md) offers sentiment analysis per utterance. You can aggregate the set of values that are obtained as part of a call transcript to determine the sentiment of the call for both your agents and the customer.
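As a minimal sketch of what that aggregation might look like, the following C# snippet averages per-utterance positive-sentiment scores per speaker. The record shape and field names (`Speaker`, `PositiveSentiment`) are illustrative assumptions, not the actual transcription result schema.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative shape only: assumes each utterance carries a positive-sentiment score between 0 and 1.
public record Utterance(string Speaker, string Text, double PositiveSentiment);

public static class CallSentiment
{
    // Averages per-utterance sentiment per speaker to gauge the overall call experience.
    public static Dictionary<string, double> AverageBySpeaker(IEnumerable<Utterance> utterances) =>
        utterances
            .GroupBy(u => u.Speaker)
            .ToDictionary(g => g.Key, g => g.Average(u => u.PositiveSentiment));
}
```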
### Silence (non-talk)
-It is not uncommon for 35 percent of a support call to be what we call non-talk time. Some scenarios for which non-talk occurs are: agents looking up prior case history with a customer, agents using tools that allow them to access the customer's desktop and perform functions, customers sitting on hold waiting for a transfer, and so on. It is extremely important to gauge when silence is occurring in a call as there are number of important customer sensitivities that occur around these types of scenarios and where they occur in the call.
+It's not uncommon for as much as 35 percent of a support call to be what's called *non-talk time*. Some scenarios during which non-talk occurs might include:
+* Agents taking time to look up prior case history with a customer.
+* Agents using tools that allow them to access the customer's desktop and perform certain functions.
+* Customers waiting on hold for a call transfer.
+
+It's important to gauge when silence is occurring in a call, because critical customer sensitivities can result from these types of scenarios and where they occur in the call.
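As a rough illustration, if each recognized phrase in your transcript carries an offset and a duration (as batch transcription output typically does), you can estimate non-talk time by subtracting total talk time from the call length. This is a sketch under that assumption; overlapping speech would make the estimate approximate.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Each recognized phrase is assumed to carry an offset and duration within the call.
public record Phrase(TimeSpan Offset, TimeSpan Duration);

public static class NonTalkTime
{
    // Returns the fraction of the call that contains no recognized speech.
    public static double Fraction(IEnumerable<Phrase> phrases, TimeSpan callLength)
    {
        var talk = TimeSpan.FromTicks(phrases.Sum(p => p.Duration.Ticks));
        var silence = callLength - talk;
        return silence.TotalSeconds / callLength.TotalSeconds;
    }
}
```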
### Translation
-Some companies are experimenting with providing translated transcripts from foreign language support calls so that delivery managers can understand the world-wide experience of their customers. Our [translation](./speech-translation.md) capabilities are unsurpassed. We can translate audio-to-audio or audio-to-text for a large number of locales.
+Some companies are experimenting with providing translated transcripts from foreign-language support calls, so that delivery managers can understand the worldwide experience of their customers. Speech service's [translation](./speech-translation.md) capabilities are excellent, featuring audio-to-audio and audio-to-text translation for a large number of locales.
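For example, a speech-to-text translation call with the Speech SDK for C# can look like the following sketch. The subscription key, region, file name, and language codes are placeholders.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;
using Microsoft.CognitiveServices.Speech.Translation;

class TranslateCall
{
    static async Task Main()
    {
        // Placeholder key and region; replace with your own Speech resource values.
        var config = SpeechTranslationConfig.FromSubscription("YourSubscriptionKey", "YourRegion");
        config.SpeechRecognitionLanguage = "es-ES";   // language spoken on the call
        config.AddTargetLanguage("en");               // language the delivery manager reads

        using var audioInput = AudioConfig.FromWavFileInput("support-call.wav");
        using var recognizer = new TranslationRecognizer(config, audioInput);

        var result = await recognizer.RecognizeOnceAsync();
        if (result.Reason == ResultReason.TranslatedSpeech)
        {
            Console.WriteLine($"Recognized: {result.Text}");
            Console.WriteLine($"Translated: {result.Translations["en"]}");
        }
    }
}
```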
-### Text to Speech
+### Text-to-speech
-[Text-to-speech](text-to-speech.md) is another important area in implementing bots that interact with the customers. The typical pathway is that the customer speaks, their voice is transcribed to text, the text is analyzed for intents, a response is synthesized based on the recognized intent, and then an asset is either surfaced to the customer or a synthesized voice response is generated. Of course all of this has to occur quickly – thus low-latency is an important component in the success of these systems.
+[Text-to-speech](text-to-speech.md) is another important technology for bots that interact with customers. The typical pathway is that a customer speaks, the voice is transcribed to text, the text is analyzed for intents, a response is synthesized based on the recognized intent, and then an asset is either surfaced to the customer or a synthesized voice response is generated. Because this entire process must occur quickly, low latency is an important component in the success of these systems.
-Our end-to-end latency is considerably low for the various technologies involved such as [Speech-to-text](speech-to-text.md), [LUIS](https://azure.microsoft.com/services/cognitive-services/language-understanding-intelligent-service/), [Bot Framework](https://dev.botframework.com/), [Text-to-speech](text-to-speech.md).
+Speech service's end-to-end latency is considerably low for the various technologies involved, such as [speech-to-text](speech-to-text.md), [Language Understanding (LUIS)](https://azure.microsoft.com/services/cognitive-services/language-understanding-intelligent-service/), [Bot Framework](https://dev.botframework.com/), and [text-to-speech](text-to-speech.md).
-Our new voices are also indistinguishable from human voices. You can use our voices to give your bot its unique personality.
+Our new synthesized voices are also nearly indistinguishable from human voices. You can use them to give your bot its unique personality.
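For illustration, synthesizing a bot's reply with the Speech SDK for C# can be as short as the following sketch. The key, region, voice name, and reply text are placeholders.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

class BotReply
{
    static async Task Main()
    {
        var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourRegion");
        // Example neural voice; choose the voice that matches your bot's personality.
        config.SpeechSynthesisVoiceName = "en-US-JennyNeural";

        // With no AudioConfig argument, the synthesized audio plays on the default speaker.
        using var synthesizer = new SpeechSynthesizer(config);
        await synthesizer.SpeakTextAsync("Thanks for calling. How can I help you today?");
    }
}
```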
### Search
-Another staple of analytics is to identify interactions where a specific event or experience has occurred. This is typically done with one of two approaches; either an ad hoc search where the user simply types a phrase and the system responds, or a more structured query where an analyst can create a set of logical statements that identify a scenario in a call, and then each call can be indexed against that set of queries. A good search example is the ubiquitous compliance statement "this call shall be recorded for quality purposes... ". Many companies want to make sure that their agents are providing this disclaimer to customers before the call is actually recorded. Most analytics systems have the ability to trend the behaviors found by query/search algorithms, and this reporting of trends is ultimately one of the most important functions of an analytics system. Through [Cognitive services directory](https://azure.microsoft.com/services/cognitive-services/directory/search/) your end-to-end solution can be significantly enhanced with indexing and search capabilities.
+Another staple of analytics is to identify interactions where a specific event or experience has occurred. You would ordinarily do this with either of two approaches:
+* An ad hoc search, where users simply type a phrase and the system responds.
+* A more structured query where an analyst can create a set of logical statements that identify a scenario in a call, and then each call can be indexed against that set of queries.
-### Key Phrase Extraction
+A good search example is the ubiquitous compliance statement, "This call will be recorded for quality purposes." Many companies want to make sure that their agents are providing this disclaimer to customers before the call is actually recorded. Most analytics systems have the ability to trend the behaviors found by query or search algorithms, and this reporting of trends is ultimately one of the most important functions of an analytics system. Through the [Cognitive Services directory](https://azure.microsoft.com/services/cognitive-services/directory/search/), your end-to-end solution can be significantly enhanced with indexing and search capabilities.
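As a simple illustration of the ad hoc approach, the following sketch scans a set of transcribed calls for the compliance disclaimer. The method and parameter names are illustrative; a production system would typically use an indexing service rather than a linear scan.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class ComplianceSearch
{
    // Returns the IDs of calls whose transcript never contains the disclaimer phrase.
    public static IEnumerable<string> CallsMissingDisclaimer(
        IDictionary<string, string> transcriptsByCallId,
        string phrase = "this call will be recorded for quality purposes")
    {
        return transcriptsByCallId
            .Where(kvp => kvp.Value.IndexOf(phrase, StringComparison.OrdinalIgnoreCase) < 0)
            .Select(kvp => kvp.Key);
    }
}
```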
-This area is one of the more challenging analytics applications and one that is benefiting from the application of AI and machine learning. The primary scenario in this case is to infer customer intent. Why is the customer calling? What is the customer problem? Why did the customer have a negative experience? Our [Language service](https://azure.microsoft.com/services/cognitive-services/text-analytics/) provides a set of analytics out of the box for quickly upgrading your end-to-end solution for extracting those important keywords or phrases.
+### Key phrase extraction
-Let's now have a look at the batch processing and the real-time pipelines for speech recognition in a bit more detail.
+This area is one of the more challenging analytics applications, and one that benefits from the application of AI and machine learning. The primary scenario in this case is to infer customer intent. Why is the customer calling? What is the customer's problem? Why did the customer have a negative experience? [Cognitive Service for Language](https://azure.microsoft.com/services/cognitive-services/text-analytics/) provides a set of analytics out of the box for quickly upgrading your end-to-end solution for extracting those important keywords or phrases.
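For example, extracting key phrases from a transcribed utterance with the Azure.AI.TextAnalytics client library looks roughly like the following sketch. The endpoint and key are placeholders, and the exact type names can vary slightly across package versions.

```csharp
using System;
using Azure;
using Azure.AI.TextAnalytics;

class KeyPhrases
{
    static void Main()
    {
        // Placeholder endpoint and key for your Language (Text Analytics) resource.
        var client = new TextAnalyticsClient(
            new Uri("https://your-language-resource.cognitiveservices.azure.com/"),
            new AzureKeyCredential("YourLanguageKey"));

        string utterance = "My internet has been down since yesterday and I want a refund.";
        KeyPhraseCollection phrases = client.ExtractKeyPhrases(utterance);

        foreach (string phrase in phrases)
        {
            Console.WriteLine(phrase);
        }
    }
}
```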
+
+The next sections cover batch processing and the real-time pipelines for speech recognition in a bit more detail.
## Batch transcription of call center data
-For transcribing bulk audio we developed the [Batch Transcription API](batch-transcription.md). The Batch Transcription API was developed to transcribe large amounts of audio data asynchronously. With regard to transcribing call center data, our solution is based on these pillars:
+To transcribe audio in bulk, Microsoft developed the [Batch Transcription API](batch-transcription.md), which transcribes large amounts of audio data asynchronously. For transcribing call center data specifically, this solution is based on three pillars:
+
+- **Accuracy**: By applying fourth-generation Unified models, we offer high-quality transcription.
+- **Latency**: Bulk transcriptions must be performed quickly. The transcription jobs that are initiated via the [Batch Transcription API](batch-transcription.md) are queued immediately, and when the job starts running, it's performed faster than real-time transcription.
+- **Security**: We understand that calls might contain sensitive data, so security is our highest priority. To this end, our service has obtained ISO, SOC, HIPAA, and PCI certifications.
-- **Accuracy** - With fourth-generation Unified models, we offer unsurpassed transcription quality.-- **Latency** - We understand that when doing bulk transcriptions, the transcriptions are needed quickly. The transcription jobs initiated via the [Batch Transcription API](batch-transcription.md) will be queued immediately, and once the job starts running it's performed faster than real-time transcription.-- **Security** - We understand that calls may contain sensitive data. Rest assured that security is one of our highest priorities. Our service has obtained ISO, SOC, HIPAA, PCI certifications.
+Call centers generate large volumes of audio data on a daily basis. If your business stores telephony data in a central location, such as an Azure storage account, you can use the [Batch Transcription API](batch-transcription.md) to asynchronously request and receive transcriptions.
-Call centers generate large volumes of audio data on a daily basis. If your business stores telephony data in a central location, such as Azure Storage, you can use the [Batch Transcription API](batch-transcription.md) to asynchronously request and receive transcriptions.
+A typical solution uses these products and technologies:
-A typical solution uses these
+- **Speech service**: For transcribing speech-to-text. A standard subscription for Speech service is required to use the Batch Transcription API. Free subscriptions will not work.
+- **[Azure storage account](https://azure.microsoft.com/services/storage/)**: For storing telephony data and the transcripts that are returned by the Batch Transcription API. This storage account should use notifications, specifically for when new files are added. These notifications are used to trigger the transcription process.
+- **[Azure Functions](../../azure-functions/index.yml)**: For creating the shared access signature (SAS) URI for each recording, and triggering the HTTP POST request to start a transcription. Additionally, you use Azure Functions to create requests to retrieve and delete transcriptions by using the Batch Transcription API.
-- The Speech service is used to transcribe speech-to-text. A standard subscription (S0) for the Speech service is required to use the Batch Transcription API. Free subscriptions (F0) will not work.-- [Azure Storage](https://azure.microsoft.com/services/storage/) is used to store telephony data, and the transcripts returned by the Batch Transcription API. This storage account should use notifications, specifically for when new files are added. These notifications are used to trigger the transcription process.-- [Azure Functions](../../azure-functions/index.yml) is used to create the shared access signatures (SAS) URI for each recording, and trigger the HTTP POST request to start a transcription. Additionally, Azure Functions is used to create requests to retrieve and delete transcriptions using the Batch Transcription API.
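To make the flow concrete, here's a hedged sketch of the kind of request an Azure Function might send to start a batch transcription job. It targets the v3.0 REST endpoint with placeholder values; check the Batch Transcription API reference for the exact schema that your service version expects.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class StartBatchTranscription
{
    static async Task Main()
    {
        var body = JsonSerializer.Serialize(new
        {
            displayName = "Call center batch",
            locale = "en-US",
            // SAS URIs to the recordings in your storage account (placeholders).
            contentUrls = new[] { "https://yourstorage.blob.core.windows.net/calls/call1.wav?sasToken" }
        });

        using var http = new HttpClient();
        http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "YourSpeechKey");

        var response = await http.PostAsync(
            "https://yourregion.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions",
            new StringContent(body, Encoding.UTF8, "application/json"));

        // The Location header points at the transcription job that you can poll for results.
        Console.WriteLine(response.Headers.Location);
    }
}
```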
+Internally, Microsoft uses these technologies to support Microsoft customer calls in batch mode, as shown in the following diagram:
-Internally we are using the above technologies to support Microsoft customer calls in Batch mode.
## Real-time transcription for call center data
-Some businesses are required to transcribe conversations in real-time. Real-time transcription can be used to identify key-words and trigger searches for content and resources relevant to the conversation, for monitoring sentiment, to improve accessibility, or to provide translations for customers and agents who aren't native speakers.
+Some businesses are required to transcribe conversations in real time. You can use real-time transcription to identify keywords and trigger searches for content and resources that are relevant to the conversation, to monitor sentiment, to improve accessibility, or to provide translations for customers and agents who aren't native speakers.
For scenarios that require real-time transcription, we recommend using the [Speech SDK](speech-sdk.md). Currently, speech-to-text is available in [more than 20 languages](language-support.md), and the SDK is available in C++, C#, Java, Python, JavaScript, Objective-C, and Go. Samples are available in each language on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk). For the latest news and updates, see [Release notes](releasenotes.md).
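As an illustration, continuous recognition with the Speech SDK for C# follows this general pattern. The key, region, and audio source are placeholders.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

class RealTimeTranscription
{
    static async Task Main()
    {
        var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourRegion");
        using var audioInput = AudioConfig.FromDefaultMicrophoneInput();
        using var recognizer = new SpeechRecognizer(config, audioInput);

        // Print each finalized phrase as the conversation happens.
        recognizer.Recognized += (s, e) =>
        {
            if (e.Result.Reason == ResultReason.RecognizedSpeech)
            {
                Console.WriteLine(e.Result.Text);
            }
        };

        await recognizer.StartContinuousRecognitionAsync();
        Console.WriteLine("Transcribing. Press Enter to stop.");
        Console.ReadLine();
        await recognizer.StopContinuousRecognitionAsync();
    }
}
```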
-Internally we are using the above technologies to analyze in real-time Microsoft customer calls as they happen, as illustrated in the following diagram.
+Internally, Microsoft uses the previously mentioned technologies to analyze Microsoft customer calls in real time, as shown in the following diagram:
-![Batch Architecture](media/scenarios/call-center-reatime-pipeline.png)
+![Diagram showing the technologies that are used to support Microsoft customer calls in real time.](media/scenarios/call-center-reatime-pipeline.png)
-## A word on IVRs
+## About interactive voice responses
-The Speech service can be easily integrated in any solution by using either the [Speech SDK](speech-sdk.md) or the [REST API](./overview.md#reference-docs). However, call center transcription may require additional technologies. Typically, a connection between an IVR system and Azure is required. Although we do not offer such components, here is a description what a connection to an IVR entails.
+You can easily integrate Speech service into any solution by using either the [Speech SDK](speech-sdk.md) or the [REST API](./overview.md#reference-docs). However, call center transcription might require additional technologies. Ordinarily, a connection between an IVR system and Azure is required. Although we don't offer such components, the next paragraph describes what a connection to an IVR entails.
-Several IVR or telephony service products (such as Genesys or AudioCodes) offer integration capabilities that can be leveraged to enable inbound and outbound audio pass-through to an Azure service. Basically, a custom Azure service might provide a specific interface to define phone call sessions (such as Call Start or Call End) and expose a WebSocket API to receive inbound stream audio that is used with the Speech service. Outbound responses, such as conversation transcription or connections with the Bot Framework, can be synthesized with Microsoft's text-to-speech service and returned to the IVR for playback.
+Several IVR or telephony service products (such as Genesys or AudioCodes) offer integration capabilities that can be applied to enable an inbound and outbound audio pass-through to an Azure service. Basically, a custom Azure service might provide a specific interface to define phone call sessions (such as Call Start or Call End) and expose a WebSocket API to receive inbound stream audio that's used with Speech service. Outbound responses, such as a conversation transcription or connections with the Bot Framework, can be synthesized with the Microsoft text-to-speech service and returned to the IVR for playback.
-Another scenario is direct integration with Session Initiation Protocol (SIP). An Azure service connects to a SIP Server, thus getting an inbound stream and an outbound stream, which is used for the speech-to-text and text-to-speech phases. To connect to a SIP Server there are commercial software offerings, such as Ozeki SDK, or the [Microsoft Graph communications API](/graph/api/resources/communications-api-overview), that are designed to support this type of scenario for audio calls.
+Another scenario is direct integration with the Session Initiation Protocol (SIP). An Azure service connects to a SIP server to get an inbound and outbound stream, which is used for the speech-to-text and text-to-speech phases. To connect to a SIP server there are commercial software offerings, such as Ozeki SDK, or the [Microsoft Graph Communications API](/graph/api/resources/communications-api-overview), that are designed to support this type of scenario for audio calls.
## Customize existing experiences
- The Speech service works well with built-in models. However, you may want to further customize and tune the experience for your product or environment. Customization options range from acoustic model tuning to unique voice fonts for your brand. After you've built a custom model, you can use it with any of the Speech service features in real-time or batch mode.
+The Speech service works well with built-in models. However, you might want to further customize and tune the experience for your product or environment. Customization options range from acoustic model tuning to unique voice fonts for your brand. After you've built a custom model, you can use it with any of the Speech service features in real-time or batch mode.
| Speech service | Model | Description |
| -- | -- | -- |
-| Speech-to-text | [Acoustic model](./how-to-custom-speech-train-model.md) | Create a custom acoustic model for applications, tools, or devices that are used in particular environments like in a car or on a factory floor, each with specific recording conditions. Examples include accented speech, specific background noises, or using a specific microphone for recording. |
-| | [Language model](./how-to-custom-speech-train-model.md) | Create a custom language model to improve transcription of industry-specific vocabulary and grammar, such as medical terminology, or IT jargon. |
-| | [Pronunciation model](./how-to-custom-speech-train-model.md) | With a custom pronunciation model, you can define the phonetic form and display for a word or term. It's useful for handling customized terms, such as product names or acronyms. All you need to get started is a pronunciation file, which is a simple `.txt` file. |
-| Text-to-speech | [Voice font](./how-to-custom-voice-create-voice.md) | Custom voice fonts allow you to create a recognizable, one-of-a-kind voice for your brand. It only takes a small amount of data to get started. The more data that you provide, the more natural and human-like your voice font will sound. |
+| Speech-to-text | [Acoustic model](./how-to-custom-speech-train-model.md) | Create a custom acoustic model for applications, tools, or devices that are used in particular environments, such as in a car or on a factory floor, each with its own recording conditions. Examples include accented speech, background noises, or using a specific microphone for recording. |
+| | [Language model](./how-to-custom-speech-train-model.md) | Create a custom language model to improve transcription of industry-specific vocabulary and grammar, such as medical terminology or IT jargon. |
+| | [Pronunciation model](./how-to-custom-speech-train-model.md) | With a custom pronunciation model, you can define the phonetic form and display for a word or term. It's useful for handling customized terms, such as product names or acronyms. All you need to get started is a pronunciation file, which is a simple .txt file. |
+| Text-to-speech | [Voice font](./how-to-custom-voice-create-voice.md) | With custom voice fonts, you can create a recognizable, one-of-a-kind voice for your brand. It takes only a small amount of data to get started. The more data you provide, the more natural and human-like your voice font will sound. |
## Sample code
-Sample code is available on GitHub for each of the Speech service features. These samples cover common scenarios like reading audio from a file or stream, continuous and at-start recognition, and working with custom models. Use these links to view SDK and REST samples:
+Sample code is available on GitHub for each of the Speech service features. These samples cover common scenarios, such as reading audio from a file or stream, continuous and at-start recognition, and working with custom models. To view SDK and REST samples, see:
- [Speech-to-text and speech translation samples (SDK)](https://github.com/Azure-Samples/cognitive-services-speech-sdk)
- [Batch transcription samples (REST)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch)
cognitive-services Custom Commands References https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/custom-commands-references.md
Custom Commands supports the following parameter types:
* Temperature
* Url
-Every locale supports the "String" parameter type, but availability of all other types differs by locale. Custom Commands uses LUIS's prebuilt entity resolution, so the availability of a parameter type in a locale depends on LUIS's prebuilt entity support in that locale. You can find [more details on LUIS's prebuilt entity support per locale](../luis/luis-reference-prebuilt-entities.md).
+Every locale supports the "String" parameter type, but availability of all other types differs by locale. Custom Commands uses LUIS's prebuilt entity resolution, so the availability of a parameter type in a locale depends on LUIS's prebuilt entity support in that locale. You can find [more details on LUIS's prebuilt entity support per locale](../luis/luis-reference-prebuilt-entities.md). Custom LUIS entities (such as machine learned entities) are currently not supported.
Some parameter types like Number, String and DateTime support default value configuration, which you can configure from the portal.
cognitive-services How To Select Audio Input Devices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-select-audio-input-devices.md
Title: How to select an audio input device with the Speech SDK
+ Title: Select an audio input device with the Speech SDK
-description: 'Learn about selecting audio input devices in the Speech SDK (C++, C#, Python, Objective-C, Java, JavaScript) by obtaining the IDs of the audio devices connected to a system.'
+description: 'Learn about selecting audio input devices in the Speech SDK (C++, C#, Python, Objective-C, Java, and JavaScript) by obtaining the IDs of the audio devices connected to a system.'
ms.devlang: cpp, csharp, java, javascript, objective-c, python
-# How to: Select an audio input device with the Speech SDK
+# Select an audio input device with the Speech SDK
-Version 1.3.0 of the Speech SDK introduces an API to select the audio input. This article describes how to obtain the IDs of the audio devices connected to a system. These can then be used in the Speech SDK by configuring the audio device through the `AudioConfig` object:
+Version 1.3.0 of the Speech SDK introduces an API to select the audio input. This article describes how to obtain the IDs of the audio devices connected to a system. These IDs can then be used in the Speech SDK. You configure the audio device through the `AudioConfig` object:
```C++ audioConfig = AudioConfig.FromMicrophoneInput("<device id>");
audioConfig = AudioConfiguration.fromMicrophoneInput("<device id>");
``` > [!Note]
-> Microphone usage is not available for JavaScript running in Node.js
+> Microphone use isn't available for JavaScript running in Node.js.
-## Audio device IDs on Windows for Desktop applications
+## Audio device IDs on Windows for desktop applications
-Audio device [endpoint ID strings](/windows/desktop/CoreAudio/endpoint-id-strings) can be retrieved from the [`IMMDevice`](/windows/desktop/api/mmdeviceapi/nn-mmdeviceapi-immdevice) object in Windows for Desktop applications.
+Audio device [endpoint ID strings](/windows/desktop/CoreAudio/endpoint-id-strings) can be retrieved from the [`IMMDevice`](/windows/desktop/api/mmdeviceapi/nn-mmdeviceapi-immdevice) object in Windows for desktop applications.
The following code sample illustrates how to use it to enumerate audio devices in C++:
void ListEndpoints()
PROPVARIANT varName; for (ULONG i = 0; i < count; i++) {
- // Get pointer to endpoint number i.
+ // Get the pointer to endpoint number i.
hr = pCollection->Item(i, &pEndpoint); EXIT_ON_ERROR(hr);
void ListEndpoints()
STGM_READ, &pProps); EXIT_ON_ERROR(hr);
- // Initialize container for property value.
+ // Initialize the container for property value.
PropVariantInit(&varName); // Get the endpoint's friendly-name property. hr = pProps->GetValue(PKEY_Device_FriendlyName, &varName); EXIT_ON_ERROR(hr);
- // Print endpoint friendly name and endpoint ID.
+ // Print the endpoint friendly name and endpoint ID.
printf("Endpoint %d: \"%S\" (%S)\n", i, varName.pwszVal, pwszID); CoTaskMemFree(pwszID);
Exit:
} ```
-In C#, the [NAudio](https://github.com/naudio/NAudio) library can be used to access the CoreAudio API and enumerate devices as follows:
+In C#, you can use the [NAudio](https://github.com/naudio/NAudio) library to access the CoreAudio API and enumerate devices as follows:
```cs using System;
A sample device ID is `{0.0.1.00000000}.{5f23ab69-6181-4f4a-81a4-45414013aac8}`.
## Audio device IDs on UWP
-On the Universal Windows Platform (UWP), audio input devices can be obtained using the `Id()` property of the corresponding [`DeviceInformation`](/uwp/api/windows.devices.enumeration.deviceinformation) object.
+On the Universal Windows Platform (UWP), you can obtain audio input devices by using the `Id()` property of the corresponding [`DeviceInformation`](/uwp/api/windows.devices.enumeration.deviceinformation) object.
-The following code samples show how to do this in C++ and C#:
+The following code samples show how to do this step in C++ and C#:
```cpp #include <winrt/Windows.Foundation.h>
A sample device ID is `\\\\?\\SWD#MMDEVAPI#{0.0.1.00000000}.{5f23ab69-6181-4f4a-
## Audio device IDs on Linux
-The device IDs are selected using standard ALSA device IDs.
+The device IDs are selected by using standard ALSA device IDs.
The IDs of the inputs attached to the system are contained in the output of the command `arecord -L`.
-Alternatively, they can be obtained using the [ALSA C library](https://www.alsa-project.org/alsa-doc/alsa-lib/).
+Alternatively, they can be obtained by using the [ALSA C library](https://www.alsa-project.org/alsa-doc/alsa-lib/).
Sample IDs are `hw:1,0` and `hw:CARD=CC,DEV=0`.
For example, the UID for the built-in microphone is `BuiltInMicrophoneDevice`.
## Audio device IDs on iOS
-Audio device selection with the Speech SDK is not supported on iOS. However, apps using the SDK can influence audio routing through the [`AVAudioSession`](https://developer.apple.com/documentation/avfoundation/avaudiosession?language=objc) Framework.
+Audio device selection with the Speech SDK isn't supported on iOS. Apps that use the SDK can influence audio routing through the [`AVAudioSession`](https://developer.apple.com/documentation/avfoundation/avaudiosession?language=objc) Framework.
For example, the instruction
For example, the instruction
withOptions:AVAudioSessionCategoryOptionAllowBluetooth error:NULL]; ```
-enables the use of a Bluetooth headset for a speech-enabled app.
+Enables the use of a Bluetooth headset for a speech-enabled app.
## Audio device IDs in JavaScript
-In JavaScript the [MediaDevices.enumerateDevices()](https://developer.mozilla.org/docs/Web/API/MediaDevices/enumerateDevices) method can be used to enumerate the media devices and find a device ID to pass to `fromMicrophone(...)`.
+In JavaScript, the [MediaDevices.enumerateDevices()](https://developer.mozilla.org/docs/Web/API/MediaDevices/enumerateDevices) method can be used to enumerate the media devices and find a device ID to pass to `fromMicrophone(...)`.
## Next steps > [!div class="nextstepaction"]
-> [Explore our samples on GitHub](https://aka.ms/csspeech/samples)
+> [Explore samples on GitHub](https://aka.ms/csspeech/samples)
## See also
cognitive-services How To Specify Source Language https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-specify-source-language.md
Title: How to specify source language for speech to text
+ Title: Specify source language for speech to text
-description: The Speech SDK allows you to specify the source language when converting speech to text. This article describes how to use the FromConfig and SourceLanguageConfig methods to let the Speech service know the source language and provide a custom model target.
+description: The Speech SDK allows you to specify the source language when you convert speech to text. This article describes how to use the FromConfig and SourceLanguageConfig methods to let the Speech service know the source language and provide a custom model target.
ms.devlang: cpp, csharp, java, javascript, objective-c, python
-# Specify source language for speech to text
+# Specify source language for speech-to-text
-In this article, you'll learn how to specify the source language for an audio input passed to the Speech SDK for speech recognition. Additionally, example code is provided to specify a custom speech model for improved recognition.
+In this article, you'll learn how to specify the source language for an audio input passed to the Speech SDK for speech recognition. The example code that's provided specifies a custom speech model for improved recognition.
::: zone pivot="programming-language-csharp"
-## How to specify source language in C#
+## Specify source language in C#
-In the following example, the source language is provided explicitly as a parameter using `SpeechRecognizer` construct.
+In the following example, the source language is provided explicitly as a parameter by using the `SpeechRecognizer` construct:
```csharp var recognizer = new SpeechRecognizer(speechConfig, "de-DE", audioConfig); ```
-In the following example, the source language is provided using `SourceLanguageConfig`. Then, the `sourceLanguageConfig` is passed as a parameter to `SpeechRecognizer` construct.
+In the following example, the source language is provided by using `SourceLanguageConfig`. Then, `sourceLanguageConfig` is passed as a parameter to the `SpeechRecognizer` construct.
```csharp var sourceLanguageConfig = SourceLanguageConfig.FromLanguage("de-DE"); var recognizer = new SpeechRecognizer(speechConfig, sourceLanguageConfig, audioConfig); ```
-In the following example, the source language and custom endpoint are provided using `SourceLanguageConfig`. Then, the `sourceLanguageConfig` is passed as a parameter to `SpeechRecognizer` construct.
+In the following example, the source language and custom endpoint are provided by using `SourceLanguageConfig`. Then, `sourceLanguageConfig` is passed as a parameter to the `SpeechRecognizer` construct.
```csharp var sourceLanguageConfig = SourceLanguageConfig.FromLanguage("de-DE", "The Endpoint ID for your custom model.");
var recognizer = new SpeechRecognizer(speechConfig, sourceLanguageConfig, audioC
``` >[!Note]
-> `SpeechRecognitionLanguage` and `EndpointId` set methods are deprecated from the `SpeechConfig` class in C#. The use of these methods are discouraged, and shouldn't be used when constructing a `SpeechRecognizer`.
+> The `SpeechRecognitionLanguage` and `EndpointId` set methods are deprecated from the `SpeechConfig` class in C#. The use of these methods is discouraged. Don't use them when you create a `SpeechRecognizer` construct.
::: zone-end ::: zone pivot="programming-language-cpp"
+## Specify source language in C++
-## How to specify source language in C++
-
-In the following example, the source language is provided explicitly as a parameter using the `FromConfig` method.
+In the following example, the source language is provided explicitly as a parameter by using the `FromConfig` method.
```C++ auto recognizer = SpeechRecognizer::FromConfig(speechConfig, "de-DE", audioConfig); ```
-In the following example, the source language is provided using `SourceLanguageConfig`. Then, the `sourceLanguageConfig` is passed as a parameter to `FromConfig` when creating the `recognizer`.
+In the following example, the source language is provided by using `SourceLanguageConfig`. Then, `sourceLanguageConfig` is passed as a parameter to `FromConfig` when you create the `recognizer` construct.
```C++ auto sourceLanguageConfig = SourceLanguageConfig::FromLanguage("de-DE"); auto recognizer = SpeechRecognizer::FromConfig(speechConfig, sourceLanguageConfig, audioConfig); ```
-In the following example, the source language and custom endpoint are provided using `SourceLanguageConfig`. The `sourceLanguageConfig` is passed as a parameter to `FromConfig` when creating the `recognizer`.
+In the following example, the source language and custom endpoint are provided by using `SourceLanguageConfig`. Then, `sourceLanguageConfig` is passed as a parameter to `FromConfig` when you create the `recognizer` construct.
```C++ auto sourceLanguageConfig = SourceLanguageConfig::FromLanguage("de-DE", "The Endpoint ID for your custom model.");
auto recognizer = SpeechRecognizer::FromConfig(speechConfig, sourceLanguageConfi
``` >[!Note]
-> `SetSpeechRecognitionLanguage` and `SetEndpointId` are deprecated methods from the `SpeechConfig` class in C++ and Java. The use of these methods are discouraged, and shouldn't be used when constructing a `SpeechRecognizer`.
+> `SetSpeechRecognitionLanguage` and `SetEndpointId` are deprecated methods from the `SpeechConfig` class in C++ and Java. The use of these methods is discouraged. Don't use them when you create a `SpeechRecognizer` construct.
::: zone-end ::: zone pivot="programming-language-java"
-## How to specify source language in Java
+## Specify source language in Java
-In the following example, the source language is provided explicitly when creating a new `SpeechRecognizer`.
+In the following example, the source language is provided explicitly when you create a new `SpeechRecognizer` construct.
```Java SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, "de-DE", audioConfig); ```
-In the following example, the source language is provided using `SourceLanguageConfig`. Then, the `sourceLanguageConfig` is passed as a parameter when creating a new `SpeechRecognizer`.
+In the following example, the source language is provided by using `SourceLanguageConfig`. Then, `sourceLanguageConfig` is passed as a parameter when you create a new `SpeechRecognizer` construct.
```Java SourceLanguageConfig sourceLanguageConfig = SourceLanguageConfig.fromLanguage("de-DE"); SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, sourceLanguageConfig, audioConfig); ```
-In the following example, the source language and custom endpoint are provided using `SourceLanguageConfig`. Then, the `sourceLanguageConfig` is passed as a parameter when creating a new `SpeechRecognizer`.
+In the following example, the source language and custom endpoint are provided by using `SourceLanguageConfig`. Then, `sourceLanguageConfig` is passed as a parameter when you create a new `SpeechRecognizer` construct.
```Java SourceLanguageConfig sourceLanguageConfig = SourceLanguageConfig.fromLanguage("de-DE", "The Endpoint ID for your custom model.");
SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, sourceLanguageC
``` >[!Note]
-> `setSpeechRecognitionLanguage` and `setEndpointId` are deprecated methods from the `SpeechConfig` class in C++ and Java. The use of these methods are discouraged, and shouldn't be used when constructing a `SpeechRecognizer`.
+> `setSpeechRecognitionLanguage` and `setEndpointId` are deprecated methods from the `SpeechConfig` class in C++ and Java. The use of these methods is discouraged. Don't use them when you create a `SpeechRecognizer` construct.
::: zone-end ::: zone pivot="programming-language-python"
-## How to specify source language in Python
+## Specify source language in Python
-In the following example, the source language is provided explicitly as a parameter using `SpeechRecognizer` construct.
+In the following example, the source language is provided explicitly as a parameter by using the `SpeechRecognizer` construct.
```Python speech_recognizer = speechsdk.SpeechRecognizer( speech_config=speech_config, language="de-DE", audio_config=audio_config) ```
-In the following example, the source language is provided using `SourceLanguageConfig`. Then, the `SourceLanguageConfig` is passed as a parameter to `SpeechRecognizer` construct.
+In the following example, the source language is provided by using `SourceLanguageConfig`. Then, `SourceLanguageConfig` is passed as a parameter to the `SpeechRecognizer` construct.
```Python source_language_config = speechsdk.languageconfig.SourceLanguageConfig("de-DE")
speech_recognizer = speechsdk.SpeechRecognizer(
speech_config=speech_config, source_language_config=source_language_config, audio_config=audio_config) ```
-In the following example, the source language and custom endpoint are provided using `SourceLanguageConfig`. Then, the `SourceLanguageConfig` is passed as a parameter to `SpeechRecognizer` construct.
+In the following example, the source language and custom endpoint are provided by using `SourceLanguageConfig`. Then, `SourceLanguageConfig` is passed as a parameter to the `SpeechRecognizer` construct.
```Python source_language_config = speechsdk.languageconfig.SourceLanguageConfig("de-DE", "The Endpoint ID for your custom model.")
speech_recognizer = speechsdk.SpeechRecognizer(
``` >[!Note]
-> `speech_recognition_language` and `endpoint_id` properties are deprecated from the `SpeechConfig` class in Python. The use of these properties is discouraged, and they shouldn't be used when constructing a `SpeechRecognizer`.
+> The `speech_recognition_language` and `endpoint_id` properties are deprecated from the `SpeechConfig` class in Python. The use of these properties is discouraged. Don't use them when you create a `SpeechRecognizer` construct.
::: zone-end ::: zone pivot="programming-language-more"
-## How to specify source language in Javascript
+## Specify source language in JavaScript
-The first step is to create a `SpeechConfig`:
+The first step is to create a `SpeechConfig` construct:
```Javascript var speechConfig = sdk.SpeechConfig.fromSubscription("YourSubscriptionkey", "YourRegion");
If you're using a custom model for recognition, you can specify the endpoint wit
speechConfig.endpointId = "The Endpoint ID for your custom model."; ```
-## How to specify source language in Objective-C
+## Specify source language in Objective-C
-In the following example, the source language is provided explicitly as a parameter using `SPXSpeechRecognizer` construct.
+In the following example, the source language is provided explicitly as a parameter by using the `SPXSpeechRecognizer` construct.
```Objective-C SPXSpeechRecognizer* speechRecognizer = \ [[SPXSpeechRecognizer alloc] initWithSpeechConfiguration:speechConfig language:@"de-DE" audioConfiguration:audioConfig]; ```
-In the following example, the source language is provided using `SPXSourceLanguageConfiguration`. Then, the `SPXSourceLanguageConfiguration` is passed as a parameter to `SPXSpeechRecognizer` construct.
+In the following example, the source language is provided by using `SPXSourceLanguageConfiguration`. Then, `SPXSourceLanguageConfiguration` is passed as a parameter to the `SPXSpeechRecognizer` construct.
```Objective-C SPXSourceLanguageConfiguration* sourceLanguageConfig = [[SPXSourceLanguageConfiguration alloc]init:@"de-DE"];
SPXSpeechRecognizer* speechRecognizer = [[SPXSpeechRecognizer alloc] initWithSpe
audioConfiguration:audioConfig]; ```
-In the following example, the source language and custom endpoint are provided using `SPXSourceLanguageConfiguration`. Then, the `SPXSourceLanguageConfiguration` is passed as a parameter to `SPXSpeechRecognizer` construct.
+In the following example, the source language and custom endpoint are provided by using `SPXSourceLanguageConfiguration`. Then, `SPXSourceLanguageConfiguration` is passed as a parameter to the `SPXSpeechRecognizer` construct.
```Objective-C SPXSourceLanguageConfiguration* sourceLanguageConfig = \
SPXSpeechRecognizer* speechRecognizer = [[SPXSpeechRecognizer alloc] initWithSpe
``` >[!Note]
-> `speechRecognitionLanguage` and `endpointId` properties are deprecated from the `SPXSpeechConfiguration` class in Objective-C. The use of these properties is discouraged, and they shouldn't be used when constructing a `SPXSpeechRecognizer`.
+> The `speechRecognitionLanguage` and `endpointId` properties are deprecated from the `SPXSpeechConfiguration` class in Objective-C. The use of these properties is discouraged. Don't use them when you create a `SPXSpeechRecognizer` construct.
::: zone-end ## See also
-* For a list of supported languages and locales for speech to text, see [Language support](language-support.md).
+For a list of supported languages and locales for speech-to-text, see [Language support](language-support.md).
## Next steps
-* [Speech SDK reference documentation](speech-sdk.md)
+See the [Speech SDK reference documentation](speech-sdk.md).
cognitive-services How To Use Audio Input Streams https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-use-audio-input-streams.md
Title: Speech SDK audio input stream concepts
-description: An overview of the capabilities of the Speech SDK's audio input stream API.
+description: An overview of the capabilities of the Speech SDK audio input stream API.
# About the Speech SDK audio input stream API
-The Speech SDK's **Audio Input Stream** API provides a way to stream audio into the recognizers instead of using either the microphone or the input file APIs.
+The Speech SDK audio input stream API provides a way to stream audio into the recognizers instead of using either the microphone or the input file APIs.
-The following steps are required when using audio input streams:
+The following steps are required when you use audio input streams:
-- Identify the format of the audio stream. The format must be supported by the Speech SDK and the Speech service. Currently, only the following configuration is supported:
+- Identify the format of the audio stream. The format must be supported by the Speech SDK and the Azure Cognitive Services Speech service. Currently, only the following configuration is supported:
- Audio samples are in PCM format, one channel, 16 bits per sample, 8000 or 16000 samples per second (16000 or 32000 bytes per second), two block align (16 bit including padding for a sample).
+ Audio samples are:
- The corresponding code in the SDK to create the audio format looks like this:
+ - PCM format
+ - One channel
+ - 16 bits per sample, 8,000 or 16,000 samples per second (16,000 bytes or 32,000 bytes per second)
+ - Two-block aligned (16 bit including padding for a sample)
+
+ The corresponding code in the SDK to create the audio format looks like this example:
```csharp byte channels = 1;
The following steps are required when using audio input streams:
var audioFormat = AudioStreamFormat.GetWaveFormatPCM(samplesPerSecond, bitsPerSample, channels); ``` -- Make sure your code provides the RAW audio data according to these specifications. Also assure 16-bit samples arrive in little-endian format. Signed samples are also supported. If your audio source data doesn't match the supported formats, the audio must be transcoded into the required format.
+- Make sure that your code provides the RAW audio data according to these specifications. Also, make sure that 16-bit samples arrive in little-endian format. Signed samples are also supported. If your audio source data doesn't match the supported formats, the audio must be transcoded into the required format.
-- Create your own audio input stream class derived from `PullAudioInputStreamCallback`. Implement the `Read()` and `Close()` members. The exact function signature is language-dependent, but the code will look similar to this code sample:
+- Create your own audio input stream class derived from `PullAudioInputStreamCallback`. Implement the `Read()` and `Close()` members. The exact function signature is language-dependent, but the code looks similar to this code sample:
```csharp public class ContosoAudioStream : PullAudioInputStreamCallback {
The following steps are required when using audio input streams:
} public int Read(byte[] buffer, uint size) {
- // returns audio data to the caller.
- // e.g. return read(config.YYY, buffer, size);
+ // Returns audio data to the caller.
+ // E.g., return read(config.YYY, buffer, size);
} public void Close() {
- // close and cleanup resources.
+ // Close and clean up resources.
} }; ```
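Continuing the snippet above, the callback is wired into the recognizer by wrapping it in an `AudioConfig`. This is a brief sketch of that hookup; the constructor argument passed to `ContosoAudioStream` is illustrative.

```csharp
// Wrap the callback and the audio format in an AudioConfig, then hand it to the recognizer.
var contosoStream = new ContosoAudioStream(/* your stream configuration */);
var audioConfig = AudioConfig.FromStreamInput(contosoStream, audioFormat);
```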
The following steps are required when using audio input streams:
var speechConfig = SpeechConfig.FromSubscription(...); var recognizer = new SpeechRecognizer(speechConfig, audioConfig);
- // run stream through recognizer
+ // Run stream through recognizer.
var result = await recognizer.RecognizeOnceAsync(); var text = result.GetText();
cognitive-services Keyword Recognition Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/keyword-recognition-overview.md
Title: Keyword recognition - Speech service
-description: An overview of the features, capabilities, and restrictions for keyword recognition using the Speech Software Development Kit (SDK).
+description: An overview of the features, capabilities, and restrictions for keyword recognition by using the Speech Software Development Kit (SDK).
# Keyword recognition
-Keyword recognition detects a word or short phrase within a stream of audio. It's also referred to as keyword spotting.
+Keyword recognition detects a word or short phrase within a stream of audio. It's also referred to as keyword spotting.
The most common use case of keyword recognition is voice activation of virtual assistants. For example, "Hey Cortana" is the keyword for the Cortana assistant. Upon recognition of the keyword, a scenario-specific action is carried out. For virtual assistant scenarios, a common resulting action is speech recognition of audio that follows the keyword. Generally, virtual assistants are always listening. Keyword recognition acts as a privacy boundary for the user. A keyword requirement acts as a gate that prevents unrelated user audio from crossing the local device to the cloud.
-To balance accuracy, latency, and computational complexity, keyword recognition is implemented as a multi-stage system. For all stages beyond the first, audio is only processed if the stage prior to it believed to have recognized the keyword of interest.
+To balance accuracy, latency, and computational complexity, keyword recognition is implemented as a multistage system. For all stages beyond the first, audio is processed only if the prior stage is believed to have recognized the keyword of interest.
-The current system is designed with multiple stages spanning across the edge and cloud:
+The current system is designed with multiple stages that span the edge and cloud:
-![Multiple stages of keyword recognition across edge and cloud.](media/custom-keyword/kw-recognition-multi-stage.png)
+![Diagram that shows multiple stages of keyword recognition across the edge and cloud.](media/custom-keyword/kw-recognition-multi-stage.png)
Accuracy of keyword recognition is measured via the following metrics:
-* **Correct accept rate (CA)** – Measures the system's ability to recognize the keyword when it is spoken by an end-user. This is also known as the true positive rate.
-* **False accept rate (FA)** – Measures the system's ability to filter out audio that is not the keyword spoken by an end-user. This is also known as the false positive rate.
-The goal is to maximize the correct accept rate while minimizing the false accept rate. The current system is designed to detect a keyword or phrase preceded by a short amount of silence. Detecting a keyword in the middle of a sentence or utterance is not supported.
+* **Correct accept rate**: Measures the system's ability to recognize the keyword when it's spoken by a user. The correct accept rate is also known as the true positive rate.
+* **False accept rate**: Measures the system's ability to filter out audio that isn't the keyword spoken by a user. The false accept rate is also known as the false positive rate.
-## Custom Keyword for on-device models
+The goal is to maximize the correct accept rate while minimizing the false accept rate. The current system is designed to detect a keyword or phrase preceded by a short amount of silence. Detecting a keyword in the middle of a sentence or utterance isn't supported.
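To make the two metrics concrete, here's one common way to compute them from a labeled test set. The counts are illustrative, and in practice the false accept rate is often also reported as false accepts per hour of non-keyword audio.

```csharp
using System;

// Illustrative counts from a labeled keyword test set (not real data).
int keywordClips = 500;        // clips that actually contain the keyword
int correctAccepts = 480;      // keyword clips that the system accepted
int nonKeywordClips = 2000;    // clips that don't contain the keyword
int falseAccepts = 8;          // non-keyword clips that the system accepted anyway

double correctAcceptRate = (double)correctAccepts / keywordClips;  // 0.96
double falseAcceptRate = (double)falseAccepts / nonKeywordClips;   // 0.004

Console.WriteLine($"CA rate: {correctAcceptRate:P1}, FA rate: {falseAcceptRate:P2}");
```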
-The [Custom Keyword portal on Speech Studio](https://speech.microsoft.com/customkeyword) allows you to generate keyword recognition models that execute at the edge by specifying any word or short phrase. You can further personalize your keyword model by choosing the right pronunciations.
+## Custom keyword for on-device models
+
+With the [Custom Keyword portal on Speech Studio](https://speech.microsoft.com/customkeyword), you can generate keyword recognition models that execute at the edge by specifying any word or short phrase. You can further personalize your keyword model by choosing the right pronunciations.
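After you download the generated model file, using it on-device with the Speech SDK for C# looks roughly like this sketch. The file name and result handling are placeholders.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

class KeywordSpotting
{
    static async Task Main()
    {
        // Model file generated by the Custom Keyword portal (placeholder file name).
        var model = KeywordRecognitionModel.FromFile("your-keyword.table");

        using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
        using var recognizer = new KeywordRecognizer(audioConfig);

        Console.WriteLine("Listening for the keyword...");
        KeywordRecognitionResult result = await recognizer.RecognizeOnceAsync(model);

        if (result.Reason == ResultReason.RecognizedKeyword)
        {
            Console.WriteLine($"Recognized keyword: {result.Text}");
        }
    }
}
```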
### Pricing
-There's no cost to using Custom Keyword for generating models, including both Basic and Advanced models. There is also no cost for running models on-device with the Speech SDK.
+There's no cost to use custom keyword to generate models, including both Basic and Advanced models. There's also no cost to run models on-device with the Speech SDK.
### Types of models
-Custom Keyword allows you to generate two types of on-device models for any keyword.
+You can use custom keyword to generate two types of on-device models for any keyword.
| Model type | Description | | - | -- |
-| Basic | Best suited for demo or rapid prototyping purposes. Models are generated with a common base model and can take up to 15 minutes to be ready. Models may not have optimal accuracy characteristics. |
-| Advanced | Best suited for product integration purposes. Models are generated with adaptation of a common base model using simulated training data to improve accuracy characteristics. It can take up to 48 hours for models to be ready. |
+| Basic | Best suited for demo or rapid prototyping purposes. Models are generated with a common base model and can take up to 15 minutes to be ready. Models might not have optimal accuracy characteristics. |
+| Advanced | Best suited for product integration purposes. Models are generated with adaptation of a common base model by using simulated training data to improve accuracy characteristics. It can take up to 48 hours for models to be ready. |
> [!NOTE]
-> You can view a list of regions that support the **Advanced** model type in the [Keyword recognition region support](regions.md#keyword-recognition) documentation.
+> You can view a list of regions that support the **Advanced** model type in the [keyword recognition region support](regions.md#keyword-recognition) documentation.
-Neither model type requires you to upload training data. Custom Keyword fully handles data generation and model training.
+Neither model type requires you to upload training data. Custom keyword fully handles data generation and model training.
### Pronunciations
-When creating a new model, Custom Keyword automatically generates possible pronunciations of the provided keyword. You can listen to each pronunciation and choose all that closely represent the way you expect end-users to say the keyword. All other pronunciations should not be selected.
+When you create a new model, custom keyword automatically generates possible pronunciations of the provided keyword. You can listen to each pronunciation and choose all variations that closely represent the way you expect users to say the keyword. All other pronunciations shouldn't be selected.
-It is important to be deliberate about the pronunciations you select to ensure the best accuracy characteristics. For example, choosing more pronunciations than needed can lead to higher false accept rates. Choosing too few pronunciations, where not all expected variations are covered, can lead to lower correct accept rates.
+It's important to be deliberate about the pronunciations you select to ensure the best accuracy characteristics. For example, if you choose more pronunciations than you need, you might get higher false accept rates. If you choose too few pronunciations, where not all expected variations are covered, you might get lower correct accept rates.
-### Testing models
+### Test models
-Once on-device models are generated by Custom Keyword, they can be tested directly on the portal. The portal allows you to speak directly into your browser and get keyword recognition results.
+After custom keyword generates your on-device models, you can test them directly in the portal. Use the portal to speak directly into your browser and get keyword recognition results.
-## Keyword Verification
+## Keyword verification
-Keyword Verification is a cloud service that reduces the impact of false accepts from on-device models with robust models running on Azure. There is no tuning or training required for Keyword Verification to work with your keyword. Incremental model updates are continually deployed to the service to improve accuracy and latency, completely transparent to client applications.
+Keyword verification is a cloud service that reduces the impact of false accepts from on-device models with robust models running on Azure. Tuning or training isn't required for keyword verification to work with your keyword. Incremental model updates are continually deployed to the service to improve accuracy and latency and are transparent to client applications.
### Pricing
-Keyword Verification is always used in combination with Speech-to-text, and there is no cost to using Keyword Verification beyond the cost of Speech-to-text.
+Keyword verification is always used in combination with speech-to-text. There's no cost to use keyword verification beyond the cost of speech-to-text.
+
+### Keyword verification and speech-to-text
-### Keyword Verification and Speech-to-text
+When keyword verification is used, it's always in combination with speech-to-text. Both services run in parallel, which means audio is sent to both services for simultaneous processing.
-When Keyword Verification is used, it is always in combination with Speech-to-text. Both services run in parallel. This means that audio is sent to both services for simultaneous processing.
+![Diagram that shows parallel processing of keyword verification and speech-to-text.](media/custom-keyword/kw-verification-parallel-processing.png)
-![Parallel processing of Keyword Verification and Speech-to-text.](media/custom-keyword/kw-verification-parallel-processing.png)
+Running keyword verification and speech-to-text in parallel yields the following benefits:
-Running Keyword Verification and Speech-to-text in parallel yields the following benefits:
-* **No additional latency on Speech-to-text results** – Parallel execution means Keyword Verification adds no latency, and the client receives Speech-to-text results just as quickly. If Keyword Verification determines the keyword was not present in the audio, Speech-to-text processing is terminated, which protects against unnecessary Speech-to-text processing. However, network and cloud model processing increases the user-perceived latency of voice activation. For details, see [Recommendations and guidelines](keyword-recognition-guidelines.md).
-* **Forced keyword prefix in Speech-to-text results** – Speech-to-text processing will ensure that the results sent to the client are prefixed with the keyword. This allows for increased accuracy in the Speech-to-text results for speech that follows the keyword.
-* **Increased Speech-to-text timeout** – Due to the expected presence of the keyword at the beginning of audio, Speech-to-text will allow for a longer pause of up to 5 seconds after the keyword, before determining end of speech and terminating Speech-to-text processing. This ensures the end-user experience is correctly handled for both staged commands (*\<keyword> \<pause> \<command>*) and chained commands (*\<keyword> \<command>*).
+* **No added latency in speech-to-text results**: Parallel execution means that keyword verification adds no latency, and the client receives speech-to-text results just as quickly. If keyword verification determines that the keyword wasn't present in the audio, speech-to-text processing is terminated, which protects against unnecessary speech-to-text processing. Network and cloud model processing increases the user-perceived latency of voice activation. For more information, see [Recommendations and guidelines](keyword-recognition-guidelines.md).
+* **Forced keyword prefix in speech-to-text results**: Speech-to-text processing ensures that the results sent to the client are prefixed with the keyword. This behavior allows for increased accuracy in the speech-to-text results for speech that follows the keyword.
+* **Increased speech-to-text timeout**: Because of the expected presence of the keyword at the beginning of audio, speech-to-text allows for a longer pause of up to five seconds after the keyword before it determines the end of speech and terminates speech-to-text processing. This behavior ensures that the user experience is correctly handled for staged commands (*\<keyword> \<pause> \<command>*) and chained commands (*\<keyword> \<command>*).
-### Keyword Verification responses and latency considerations
+### Keyword verification responses and latency considerations
-For each request to the service, Keyword Verification will return one of two responses: Accepted or Rejected. The processing latency varies depending on the length of the keyword and the length of the audio segment expected to contain the keyword. Processing latency does not include network cost between the client and Azure Speech services.
+For each request to the service, keyword verification returns one of two responses: accepted or rejected. The processing latency varies depending on the length of the keyword and the length of the audio segment expected to contain the keyword. Processing latency doesn't include network cost between the client and Azure Speech services.
-| Keyword Verification response | Description |
+| Keyword verification response | Description |
| -- | -- |
-| Accepted | Indicates the service believed the keyword was present in the audio stream provided as part of the request. |
-| Rejected | Indicates the service believed the keyword was not present in the audio stream provided as part of the request. |
+| Accepted | Indicates the service believed that the keyword was present in the audio stream provided as part of the request. |
+| Rejected | Indicates the service believed that the keyword wasn't present in the audio stream provided as part of the request. |
+
+Rejected cases often yield higher latencies as the service processes more audio than accepted cases. By default, keyword verification processes a maximum of two seconds of audio to search for the keyword. If the keyword is determined not to be present in two seconds, the service times out and signals a rejected response to the client.
+
+### Use keyword verification with on-device models from custom keyword
-Rejected cases often yield higher latencies as the service processes more audio than accepted cases. By default, Keyword Verification will process a maximum of two seconds of audio to search for the keyword. If the keyword is determined not to be present in the two seconds, the service will time out and signal a rejected response to the client.
+The Speech SDK enables seamless use of on-device models generated by using custom keyword with keyword verification and speech-to-text. It transparently handles:
-### Using Keyword Verification with on-device models from Custom Keyword
+* Audio gating to keyword verification and speech recognition based on the outcome of an on-device model.
+* Communicating the keyword to keyword verification.
+* Communicating any other metadata to the cloud that's needed to orchestrate the end-to-end scenario.
-The Speech SDK facilitates seamless use of on-device models generated using Custom Keyword with Keyword Verification and Speech-to-text. It transparently handles:
-* Audio gating to Keyword Verification & Speech recognition based on the outcome of on-device model.
-* Communicating the keyword to the Keyword Verification service.
-* Communicating any additional metadata to the cloud for orchestrating the end-to-end scenario.
+You don't need to explicitly specify any configuration parameters. All necessary information is automatically extracted from the on-device model generated by custom keyword.
-You do not need to explicitly specify any configuration parameters. All necessary information will automatically be extracted from the on-device model generated by Custom Keyword.
+The sample and tutorials linked here show how to use the Speech SDK:
-The sample and tutorials linked below show how to use the Speech SDK:
* [Voice assistant samples on GitHub](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant) * [Tutorial: Voice enable your assistant built using Azure Bot Service with the C# Speech SDK](./tutorial-voice-enable-your-bot-speech-sdk.md)
- * [Tutorial: Create a Custom Commands application with simple voice commands](./how-to-develop-custom-commands-application.md)
+ * [Tutorial: Create a custom commands application with simple voice commands](./how-to-develop-custom-commands-application.md)
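As a concrete illustration of this flow, here's a minimal sketch of keyword-triggered recognition with the Speech SDK for Python. It assumes you've already generated a keyword model file (for example, `my-keyword.table`) in Speech Studio; the key, region, and file name are placeholders, not values from the official samples.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder values (assumptions): use your own Speech resource key, region,
# and the keyword model file downloaded from Speech Studio.
speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
keyword_model = speechsdk.KeywordRecognitionModel("my-keyword.table")

# Uses the system's default microphone.
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

# Print recognized speech; results are prefixed with the keyword.
recognizer.recognized.connect(lambda evt: print("Recognized:", evt.result.text))

# Start listening. Audio is sent to the cloud only after the on-device model
# detects the keyword; keyword verification then runs in parallel with
# speech-to-text, as described earlier.
recognizer.start_keyword_recognition(keyword_model)
input("Say the keyword followed by a command. Press Enter to stop.\n")
recognizer.stop_keyword_recognition()
```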
## Speech SDK integration and scenarios
-The Speech SDK facilitates easy use of personalized on-device keyword recognition models generated with Custom Keyword and the Keyword Verification service. To ensure your product needs can be met, the SDK supports two scenarios:
+The Speech SDK enables easy use of personalized on-device keyword recognition models generated with custom keyword and keyword verification. To ensure that your product needs can be met, the SDK supports the following two scenarios:
| Scenario | Description | Samples | | -- | -- | - |
-| End-to-end keyword recognition with Speech-to-text | Best suited for products that will use a customized on-device keyword model from Custom Keyword with Azure Speech’s Keyword Verification and Speech-to-text services. This is the most common scenario. | <ul><li>[Voice assistant sample code.](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant)</li><li>[Tutorial: Voice enable your assistant built using Azure Bot Service with the C# Speech SDK.](./tutorial-voice-enable-your-bot-speech-sdk.md)</li><li>[Tutorial: Create a Custom Commands application with simple voice commands.](./how-to-develop-custom-commands-application.md)</li></ul> |
-| Offline keyword recognition | Best suited for products without network connectivity that will use a customized on-device keyword model from Custom Keyword. | <ul><li>[C# on Windows UWP sample.](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/csharp/uwp/keyword-recognizer)</li><li>[Java on Android sample.](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/java/android/keyword-recognizer)</li></ul>
+| End-to-end keyword recognition with speech-to-text | Best suited for products that will use a customized on-device keyword model from custom keyword with Azure Speech keyword verification and speech-to-text. This scenario is the most common. | <ul><li>[Voice assistant sample code](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant)</li><li>[Tutorial: Voice enable your assistant built using Azure Bot Service with the C# Speech SDK](./tutorial-voice-enable-your-bot-speech-sdk.md)</li><li>[Tutorial: Create a custom commands application with simple voice commands](./how-to-develop-custom-commands-application.md)</li></ul> |
+| Offline keyword recognition | Best suited for products without network connectivity that will use a customized on-device keyword model from custom keyword. | <ul><li>[C# on Windows UWP sample](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/csharp/uwp/keyword-recognizer)</li><li>[Java on Android sample](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/java/android/keyword-recognizer)</li></ul>
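For the offline scenario in the preceding table, the keyword model runs entirely on-device, with no keyword verification or network connectivity. Here's a minimal sketch with the Speech SDK for Python, assuming a model file named `my-keyword.table`; the file name is a placeholder for illustration.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder file name (assumption): the .table file generated by custom keyword.
keyword_model = speechsdk.KeywordRecognitionModel("my-keyword.table")

# KeywordRecognizer runs fully on-device; no subscription key or network is required.
keyword_recognizer = speechsdk.KeywordRecognizer()
result = keyword_recognizer.recognize_once_async(keyword_model).get()

if result.reason == speechsdk.ResultReason.RecognizedKeyword:
    print("Keyword recognized:", result.text)
```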
## Next steps
-* [Read the quickstart to generate on-device keyword recognition models using Custom Keyword.](custom-keyword-basics.md)
-* [Learn more about Voice Assistants.](voice-assistants.md)
+* [Read the quickstart to generate on-device keyword recognition models using custom keyword](custom-keyword-basics.md)
+* [Learn more about voice assistants](voice-assistants.md)
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/releasenotes.md
See below for information about changes to Speech services and resources.
* Speech SDK 1.20.0 released January 2022. Updates include extended programming language support for DialogServiceConnector, Unity on Linux, enhancements to IntentRecognizer, added support for Python 3.10, and a fix to remove a 10-second delay while stopping a speech recognizer (when using a PushAudioInputStream, and no new audio is pushed in after StopContinuousRecognition is called). * Speech CLI 1.20.0 released January 2022. Updates include microphone input for Speaker recognition and expanded support for Intent recognition.
-* Speaker Recognition service is generally available (GA). With [Speaker Recognition](./speaker-recognition-overview.md) you can accurately verify and identify speakers by their unique voice characteristics.
-* Custom Neural Voice extended to support [49 locales](./language-support.md#custom-neural-voice).
-* Prebuilt Neural Voice added new [languages and variants](./language-support.md#prebuilt-neural-voices).
-* Commitment Tiers added to [pricing options](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+* Text-to-speech service updated January 2022: added 10 new languages and variants for neural text-to-speech, plus new voices in preview for en-GB, fr-FR, and de-DE.
+* Containers v3.0.0 released January 2022, with support for using containers in disconnected environments.
## Release notes
cognitive-services Speech Ssml Phonetic Sets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-ssml-phonetic-sets.md
Title: Speech phonetic alphabets - Speech service
-description: Speech service phonetic alphabet and International Phonetic Alphabet (IPA) examples.
+description: This article presents Speech service phonetic alphabet and International Phonetic Alphabet (IPA) examples.
# SSML phonetic alphabets
-Phonetic alphabets are used with the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) to improve pronunciation of Text-to-speech voices. See [Use phonemes to improve pronunciation](speech-synthesis-markup.md#use-phonemes-to-improve-pronunciation) to learn when and how to use each alphabet.
+Phonetic alphabets are used with the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) to improve the pronunciation of text-to-speech voices. To learn when and how to use each alphabet, see [Use phonemes to improve pronunciation](speech-synthesis-markup.md#use-phonemes-to-improve-pronunciation).
## Speech service phonetic alphabet
-For some locales, the Speech service defines its own phonetic alphabets that typically map to the [International Phonetic Alphabet (IPA)](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet). The 7 locales that support `sapi` are: `en-US`, `fr-FR`, `de-DE`, `es-ES`, `ja-JP`, `zh-CN`, and `zh-TW`.
+For some locales, Speech service defines its own phonetic alphabets, which ordinarily map to the [International Phonetic Alphabet (IPA)](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet). The seven locales that support the Microsoft Speech API (SAPI, or `sapi`) are en-US, fr-FR, de-DE, es-ES, ja-JP, zh-CN, and zh-TW.
You set `sapi` or `ipa` as the `alphabet` in [SSML](speech-synthesis-markup.md#use-phonemes-to-improve-pronunciation).
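As a brief illustration, the following sketch synthesizes a word with an explicit pronunciation by passing SSML that contains a `phoneme` element, using the Speech SDK for Python. The key, region, voice name, and IPA strings are placeholder examples only; for the seven locales noted above, you could set `alphabet="sapi"` instead.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials (assumptions): replace with your own key and region.
speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

# SSML with phoneme elements; the ph values are illustrative IPA transcriptions.
ssml = """
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
  <voice name="en-US-JennyNeural">
    You say <phoneme alphabet="ipa" ph="təˈmeɪtoʊ">tomato</phoneme>,
    I say <phoneme alphabet="ipa" ph="təˈmɑːtoʊ">tomato</phoneme>.
  </voice>
</speak>
"""

# Speak the SSML through the default output device.
result = synthesizer.speak_ssml_async(ssml).get()
print(result.reason)
```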
You set `sapi` or `ipa` as the `alphabet` in [SSML](speech-synthesis-markup.md#u
#### English suprasegmentals
-|Example 1 (Onset for consonant, word initial for vowel)|Example 2 (Intervocalic for consonant, word medial nucleus for vowel)|Example 3 (Coda for consonant, word final for vowel)|Comments|
+|Example&nbsp;1 (onset for consonant, word-initial for vowel)|Example&nbsp;2 (intervocalic for consonant, word-medial nucleus for vowel)|Example&nbsp;3 (coda for consonant, word-final for vowel)|Comments|
|--|--|--|--|
-| burger /b er **1** r - g ax r/ | falafel /f ax - l aa **1** - f ax l/ | guitar /g ih - t aa **1** r/ | Speech service phone set put stress after the vowel of the stressed syllable |
-| inopportune /ih **2** - n aa - p ax r - t uw 1 n/ | dissimilarity /d ih - s ih **2**- m ax - l eh 1 - r ax - t iy/ | workforce /w er 1 r k - f ao **2** r s/ | Speech service phone set put stress after the vowel of the sub-stressed syllable |
+| burger /b er **1** r - g ax r/ | falafel /f ax - l aa **1** - f ax l/ | guitar /g ih - t aa **1** r/ | The Speech service phone set puts stress after the vowel of the stressed syllable. |
+| inopportune /ih **2** - n aa - p ax r - t uw 1 n/ | dissimilarity /d ih - s ih **2**- m ax - l eh 1 - r ax - t iy/ | workforce /w er 1 r k - f ao **2** r s/ | The Speech service phone set puts stress after the vowel of the sub-stressed syllable. |
#### English vowels
-| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
+| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|-||--|--| | iy | `i` | **ea**t | f**ee**l | vall**ey** | | ih | `ɪ` | **i**f | f**i**ll | | | ey | `eɪ` | **a**te | g**a**te | d**ay** |
-| eh | `ɛ` | **e**very | p**e**t | m**eh** (rare word finally) |
-| ae | `æ` | **a**ctive | c**a**t | n**ah** (rare word finally) |
-| aa | `ɑ` | **o**bstinate | p**o**ppy | r**ah** (rare word finally) |
+| eh | `ɛ` | **e**very | p**e**t | m**eh** (rare word-final) |
+| ae | `æ` | **a**ctive | c**a**t | n**ah** (rare word-final) |
+| aa | `ɑ` | **o**bstinate | p**o**ppy | r**ah** (rare word-final) |
| ao | `ɔ` | **o**range | c**au**se | Ut**ah** | | uh | `ʊ` | b**oo**k | | | | ow | `oʊ` | **o**ld | cl**o**ne | g**o** |
You set `sapi` or `ipa` as the `alphabet` in [SSML](speech-synthesis-markup.md#u
#### English R-colored vowels
-| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
+| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|-|--|-|| | ih r | `ɪɹ` | **ear**s | t**ir**amisu | n**ear** | | eh r | `ɛɹ` | **air**plane | app**ar**ently | sc**ar**e |
You set `sapi` or `ipa` as the `alphabet` in [SSML](speech-synthesis-markup.md#u
| er r | `ɝ` | **ear**th | b**ir**d | f**ur** | | ax r | `ɚ` | | all**er**gy | supp**er** |
-#### English Semivowels
+#### English semivowels
-| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
+| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|-|||--| | w | `w` | **w**ith, s**ue**de | al**w**ays | | | y | `j` | **y**ard, f**e**w | on**i**on | | #### English aspirated oral stops
-| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
+| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|-|--|-|| | p | `p` | **p**ut | ha**pp**en | fla**p** | | b | `b` | **b**ig | num**b**er | cra**b** |
You set `sapi` or `ipa` as the `alphabet` in [SSML](speech-synthesis-markup.md#u
| k | `k` | **c**ut | sla**ck**er | Ira**q** | | g | `g` | **g**o | a**g**o | dra**g** |
-#### English Nasal stops
+#### English nasal stops
-| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
+| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|-|||-| | m | `m` | **m**at, smash | ca**m**era | roo**m** | | n | `n` | **n**o, s**n**ow | te**n**t | chicke**n** |
You set `sapi` or `ipa` as the `alphabet` in [SSML](speech-synthesis-markup.md#u
#### English fricatives
-| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
+| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|-|-||| | f | `f` | **f**ork | le**f**t | hal**f** | | v | `v` | **v**alue | e**v**ent | lo**v**e |
You set `sapi` or `ipa` as the `alphabet` in [SSML](speech-synthesis-markup.md#u
#### English affricates
-| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
+| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|-|--|--|| | ch | `tʃ` | **ch**in | fu**t**ure | atta**ch** | | jh | `dʒ` | **j**oy | ori**g**inal | oran**g**e | #### English approximants
-| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
+| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|-|--||--| | l | `l` | **l**id, g**l**ad | pa**l**ace | chi**ll** | | r | `ɹ` | **r**ed, b**r**ing | bo**rr**ow | ta**r** |
You set `sapi` or `ipa` as the `alphabet` in [SSML](speech-synthesis-markup.md#u
#### French suprasegmentals
-The Speech service phone set puts stress after the vowel of the stressed syllable, however; the `fr-FR` Speech service phone set doesn't support the IPA substress 'ˌ'. If the IPA substress is needed, you should use the IPA directly.
+The Speech service phone set puts stress after the vowel of the stressed syllable. However, the `fr-FR` Speech service phone set doesn't support the IPA substress 'ˌ'. If the IPA substress is needed, you should use the IPA directly.
#### French vowels
-| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
+| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|-||--|--| | a | `a` | **a**rbre | p**a**tte | ir**a** | | aa | `ɑ` | | p**â**te | p**a**s | | aa ~ | `ɑ̃` | **en**fant | enf**an**t | t**em**ps | | ax | `ə` | | p**e**tite | l**e** | | eh | `ɛ` | **e**lle | p**e**rdu | ét**ai**t |
-| eu | `ø` | **œu**fs | cr**eu**ser | qu**eu** |
+| eu | `ø` | **œu**fs | cr**eu**ser | qu**eu**e |
| ey | `e` | ému | crétin | ôté | | eh ~ | `ɛ̃` | **im**portant | p**ein**ture | mat**in** | | iy | `i` | **i**dée | pet**i**te | am**i** |
The Speech service phone set puts stress after the vowel of the stressed syllabl
#### French consonants
-| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
+| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|-|-||-| | b | `b` | **b**ête | ha**b**ille | ro**b**e | | d | `d` | **d**ire | ron**d**eur | chau**d**e | | f | `f` | **f**emme | su**ff**ixe | bo**f** | | g | `g` | **g**auche | é**g**ale | ba**gu**e |
-| ng | `ŋ` | | | [<sup>1</sup>](#fr-1)park**ing** |
+| ng | `ŋ` | | | park**ing**[<sup>1</sup>](#fr-1) |
| hy | `ɥ` | h**u**ile | n**u**ire | | | k | `k` | **c**arte | é**c**aille | be**c** | | l | `l` | **l**ong | é**l**ire | ba**l** |
The Speech service phone set puts stress after the vowel of the stressed syllabl
| | `z‿` | | | di**x** | <a id="fr-1"></a>
-**1** *Only for some foreign words.*
+**1** *Only for some foreign words*.
> [!TIP] > The `fr-FR` Speech service phone set doesn't support the following French liaisons: `n‿`, `t‿`, and `z‿`. If they're needed, consider using the IPA directly.
The Speech service phone set puts stress after the vowel of the stressed syllabl
#### German suprasegmentals
-| Example 1 (Onset for consonant, word initial for vowel) | Example 2 (Intervocalic for consonant, word medial nucleus for vowel) | Example 3 (Coda for consonant, word final for vowel) | Comments |
+| Example&nbsp;1 (onset for consonant, word-initial for vowel) | Example&nbsp;2 (intervocalic for consonant, word-medial nucleus for vowel) | Example&nbsp;3 (coda for consonant, word-final for vowel) | Comments |
|--|--|--|--| | anders /a **1** n - d ax r s/ | Multiplikationszeichen /m uh l - t iy - p l iy - k a - ts y ow **1** n s - ts ay - c n/ | Biologie /b iy - ow - l ow - g iy **1**/ | The Speech service phone set puts stress after the vowel of the stressed syllable |
-| Allgemeinwissen /a **2** l - g ax - m ay 1 n - v ih - s n/ | Abfallentsorgungsfirma /a 1 p - f a l - ^ eh n t - z oh **2** ax r - g uh ng s - f ih ax r - m a/ | Computertomographie /k oh m - p y uw 1 - t ax r - t ow - m ow - g r a - f iy **2**/ | Speech service phone set put stress after the vowel of the sub-stressed syllable |
+| Allgemeinwissen /a **2** l - g ax - m ay 1 n - v ih - s n/ | Abfallentsorgungsfirma /a 1 p - f a l - ^ eh n t - z oh **2** ax r - g uh ng s - f ih ax r - m a/ | Computertomographie /k oh m - p y uw 1 - t ax r - t ow - m ow - g r a - f iy **2**/ | The Speech service phone set puts stress after the vowel of the sub-stressed syllable |
#### German vowels
-| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
+| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|--|||| | a: | `aː` | **A**ber | Maßst**a**b | Schem**a** | | a | `a` | **A**bfall | B**a**ch | Agath**a** | | oh | `ɔ` | **O**sten | Pf**o**sten | |
-| eh: | `ɛː` | **Ä**hnlichkeit | B**ä**r | [<sup>1</sup>](#de-v-1)Fasci**ae** |
+| eh: | `ɛː` | **Ä**hnlichkeit | B**ä**r | Fasci**ae**[<sup>1</sup>](#de-v-1) |
| eh | `ɛ` | **ä**ndern | Proz**e**nt | Amygdal**ae** |
-| ax | `ə` | [<sup>2</sup>](#de-v-2)'v**e**rstauen | Aach**e**n | Frag**e** |
+| ax | `ə` | 'v**e**rstauen[<sup>2</sup>](#de-v-2) | Aach**e**n | Frag**e** |
| iy | `iː` | **I**ran | abb**ie**gt | Relativitätstheor**ie** | | ih | `ɪ` | **I**nnung | s**i**ngen | Wood**y** | | eu | `øː` | **Ö**sen | abl**ö**sten | Malm**ö** |
The Speech service phone set puts stress after the vowel of the stressed syllabl
| uy | `ʏ` | **ü**ppig | S**y**stem | | <a id="de-v-1"></a>
-**1** *Only in words of foreign origin, such as: Fasci**ae**.*<br>
+**1** *Only in words of foreign origin, such as Fasci**ae***.<br>
<a id="de-v-2"></a>
-**2** *Word-intially only in words of foreign origin such as **A**ppointment. Syllable-initially in: 'v**e**rstauen.*
+**2** *Word-initial only in words of foreign origin, such as **A**ppointment. Syllable-initial in 'v**e**rstauen*.
#### German diphthong
-| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
+| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|-|--|--|--| | ay | `ai` | **ei**nsam | Unabhängigk**ei**t | Abt**ei** | | aw | `au` | **au**ßen | abb**au**st | St**au** |
The Speech service phone set puts stress after the vowel of the stressed syllabl
#### German semivowels
-| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
+| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|-|--|--|| | ax r | `ɐ` | | abänd**er**n | lock**er** | #### German consonants
-| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
+| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|--|--|--|--|
-| b | `b` | **B**ank | | [<sup>1</sup>](#de-c-1)Pu**b** |
-| c | `ç` | **Ch**emie | mögli**ch**st | [<sup>2</sup>](#de-c-2)i**ch** |
-| d | `d` | **d**anken | [<sup>3</sup>](#de-c-3)Len**d**l | [<sup>4</sup>](#de-c-4)Clau**d**e |
-| jh | `ʤ` | **J**eff | gemana**g**t | [<sup>5</sup>](#de-c-5)Chan**g**e |
+| b | `b` | **B**ank | | Pu**b**[<sup>1</sup>](#de-c-1) |
+| c | `ç` | **Ch**emie | mögli**ch**st | i**ch**[<sup>2</sup>](#de-c-2) |
+| d | `d` | **d**anken | Len**d**l[<sup>3</sup>](#de-c-3) | Clau**d**e[<sup>4</sup>](#de-c-4) |
+| jh | `ʤ` | **J**eff | gemana**g**t | Chan**g**e[<sup>5</sup>](#de-c-5) |
| f | `f` | **F**ahrtdauer | angri**ff**slustig | abbruchrei**f** |
-| g | `g` | **g**ut | [<sup>6</sup>](#de-c-6)Gre**g** | |
+| g | `g` | **g**ut | Gre**g**[<sup>6</sup>](#de-c-6) | |
| h | `h` | **H**ausanbau | | | | y | `j` | **J**od | Reakt**i**on | hu**i** | | k | `k` | **K**oma | Aspe**k**t | Flec**k** | | l | `l` | **l**au | ähne**l**n | zuvie**l** | | m | `m` | **M**ut | A**m**t | Leh**m** | | n | `n` | **n**un | u**n**d | Huh**n** |
-| ng | `ŋ` | [<sup>7</sup>](#de-c-7)**Ng**uyen | Schwa**nk** | R**ing** |
+| ng | `ŋ` | **Ng**uyen[<sup>7</sup>](#de-c-7) | Schwa**nk** | R**ing** |
| p | `p` | **P**artner | abru**p**t | Ti**p** | | pf | `pf` | **Pf**erd | dam**pf**t | To**pf** | | r | `ʀ`, `r`, `ʁ` | **R**eise | knu**rr**t | Haa**r** |
-| s | `s` | [<sup>8</sup>](#de-c-8)**S**taccato | bi**s**t | mie**s** |
+| s | `s` | **S**taccato[<sup>8</sup>](#de-c-8) | bi**s**t | mie**s** |
| sh | `ʃ` | **Sch**ule | mi**sch**t | lappi**sch** | | t | `t` | **T**raum | S**t**raße | Mu**t** | | ts | `ts` | **Z**ug | Ar**z**t | Wit**z** | | ch | `tʃ` | **Tsch**echien | aufgepu**tsch**t | bundesdeu**tsch** |
-| v | `v` | **w**inken | Q**u**alle | [<sup>9</sup>](#de-c-9)Gr**oo**ve |
-| x | [<sup>10</sup>](#de-c-10)`x`,[<sup>11</sup>](#de-c-11)`ç` | [<sup>12</sup>](#de-c-12)Ba**ch**erach | Ma**ch**t mögli**ch**st | Schma**ch** 'i**ch** |
+| v | `v` | **w**inken | Q**u**alle | Gr**oo**ve[<sup>9</sup>](#de-c-9) |
+| x | `x`[<sup>10</sup>](#de-c-10), `ç`[<sup>11</sup>](#de-c-11) | Ba**ch**erach[<sup>12</sup>](#de-c-12) | Ma**ch**t mögli**ch**st | Schma**ch** 'i**ch** |
| z | `z` | **s**uper | | | | zh | `ʒ` | **G**enre | B**re**ezinski | Edvi**g**e | <a id="de-c-1"></a>
-**1** *Only in words of foreign origin, such as: Pu**b**.*<br>
+**1** *Only in words of foreign origin, such as Pu**b***.<br>
<a id="de-c-2"></a>
-**2** *Soft "ch" after "e" and "i"*<br>
+**2** *Soft "ch" after "e" and "i"*.<br>
<a id="de-c-3"></a>
-**3** *Only in words of foreign origin, such as: Len**d**l.*<br>
+**3** *Only in words of foreign origin, such as Len**d**l*.<br>
<a id="de-c-4"></a>
-**4** *Only in words of foreign origin such as: Clau**d**e.*<br>
+**4** *Only in words of foreign origin, such as Clau**d**e*.<br>
<a id="de-c-5"></a>
-**5** *Only in words of foreign origin such as: Chan**g**e.*<br>
+**5** *Only in words of foreign origin, such as Chan**g**e*.<br>
<a id="de-c-6"></a>
-**6** *Word-terminally only in words of foreign origin such as Gre**g**.*<br>
+**6** *Word-terminally only in words of foreign origin, such as Gre**g***.<br>
<a id="de-c-7"></a>
-**7** *Only in words of foreign origin such as: **Ng**uyen.*<br>
+**7** *Only in words of foreign origin, such as **Ng**uyen*.<br>
<a id="de-c-8"></a>
-**8** *Only in words of foreign origin such as: **S**taccato.*<br>
+**8** *Only in words of foreign origin, such as **S**taccato*.<br>
<a id="de-c-9"></a>
-**9** *Only in words of foreign origin, such as: Gr**oo**ve.*<br>
+**9** *Only in words of foreign origin, such as Gr**oo**ve*.<br>
<a id="de-c-10"></a>
-**10** *The IPA `x` is a hard "ch" after all non-front vowels (a, aa, oh, ow, uh, uw and the diphthong aw).*<br>
+**10** *The IPA `x` is a hard "ch" after all non-front vowels (a, aa, oh, ow, uh, uw, and the diphthong aw)*.<br>
<a id="de-c-11"></a>
-**11** *The IPA `ç` is a soft 'ch' after front vowels (ih, iy, eh, ae, uy, ue, oe, eu also in diphthongs ay, oy) and consonants*<br>
+**11** *The IPA `ç` is a soft "ch" after front vowels (ih, iy, eh, ae, uy, ue, oe, eu, and diphthongs ay, oy) and consonants*.<br>
<a id="de-c-12"></a>
-**12** *Word-initially only in words of foreign origin, such as: **J**uan. Syllable-initially also in words like: Ba**ch**erach.*<br>
+**12** *Word-initial only in words of foreign origin, such as **J**uan. Syllable-initial also in words such as Ba**ch**erach*.<br>
#### German oral consonants
-| `sapi` | `ipa` | Example 1 |
+| `sapi` | `ipa` | Example |
|--|-|--| | ^ | `ʔ` | beachtlich /b ax - ^ a 1 x t - l ih c/ | > [!NOTE]
-> We need to add a [gs\] phone between two distinct vowels, except the two vowels are a genuine diphthong. This oral consonant is a glottal stop, for more information, see [glottal stop](http://en.wikipedia.org/wiki/Glottal_stop).
+> We need to add a [gs\] phone between two distinct vowels, except when the two vowels are a genuine diphthong. This oral consonant is a glottal stop. For more information, see [glottal stop](http://en.wikipedia.org/wiki/Glottal_stop).
### [es-ES](#tab/es-ES) #### Spanish vowels
-| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
+| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|-|--||--| | a | `a` | **a**lto | c**a**ntar | cas**a** | | i | `i` | **i**bérica | av**i**spa | tax**i** |
The Speech service phone set puts stress after the vowel of the stressed syllabl
#### Spanish consonants
-| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
+| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|--|||-|-| | b | `b` | **b**aobab | | am**b** | | | `β` | | bao**b**ab | baoba**b** |
The Speech service phone set puts stress after the vowel of the stressed syllabl
| x | `x` | **j**ota | a**j**o | relo**j** | > [!TIP]
-> The `es-ES` Speech service phone set doesn't support the following Spanish IPA, `β`, `ð`, and `ɣ`. If they are needed, you should consider using the IPA directly.
+> The `es-ES` Speech service phone set doesn't support the following Spanish IPA: `β`, `ð`, and `ɣ`. If they're needed, consider using the IPA directly.
### [zh-CN](#tab/zh-CN)
The Speech service phone set for `zh-TW` is based on the native phone [Bopomofo]
#### Tone
-| Speech service tone | Bopomofo tone | Example (word) | Speech service phones | Bopomofo | Pinyin (拼音) |
+| Speech service tone | Bopomofo tone | Example (word) | Speech service phones | Bopomofo | Pinyin (拼音) |
|||-|--|-|-| | ˉ | empty | 偵 | ㄓㄣˉ | ㄓㄣ | zhēn | | ˊ | ˊ | 察 | ㄔㄚˊ | ㄔㄚˊ | chá |
The Speech service phone set for `zh-TW` is based on the native phone [Bopomofo]
| ˋ | ˋ | 望 | ㄨㄤˋ | ㄨㄤˋ | wàng | | ˙ | ˙ | 影子 | 一ㄥˇ ㄗ˙ | 一ㄥˇ ㄗ˙ | yǐng zi |
-#### Example
+#### Examples
| Character | `sapi` | |--|-|
The Speech service phone set for `ja-JP` is based on the native phone [Kana](htt
| `ˈ` | `ˈ` mainstress | | `+` | `ˌ` substress |
-#### Example
+#### Examples
| Character | `sapi` | `ipa` | |--||-|
The Speech service phone set for `ja-JP` is based on the native phone [Kana](htt
## International Phonetic Alphabet
-For the locales below, the Speech service uses the [International Phonetic Alphabet (IPA)](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet).
+For the following locales, Speech service uses the [International Phonetic Alphabet (IPA)](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet).
You set `ipa` as the `alphabet` in [SSML](speech-synthesis-markup.md#use-phonemes-to-improve-pronunciation).
-These locales all use the same IPA stress and syllables described here.
+These locales all use the IPA stress and syllable symbols that are listed here:
|`ipa` | Symbol | |-|-|
These locales all use the same IPA stress and syllables described here.
| `.` | Syllable boundary |
-Select a tab for the IPA phonemes specific to each locale.
+Select a tab to view the IPA phonemes that are specific to each locale.
### [ca-ES](#tab/ca-ES)
-| `ipa` | Example 1 | Example 2 | Example 3 |
+| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|-|-||-| | `a` | **a**men | am**a**ro | est**à** | | `ɔ` | **o**dre | ofert**o**ri | microt**ò** |
Select a tab for the IPA phonemes specific to each locale.
#### Vowels
-| `ipa` | Example 1 | Example 2 | Example 3 |
+| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|-||--|-| | `ɑː` | | f**a**st | br**a** | | `æ` | | f**a**t | |
Select a tab for the IPA phonemes specific to each locale.
#### Consonants
-| `ipa` | Example 1 | Example 2 | Example 3 |
+| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|-||--|-| | `b ` | **b**ike | ri**bb**on | ri**b** | | `tʃ ` | **ch**allenge | na**t**ure | ri**ch** |
Select a tab for the IPA phonemes specific to each locale.
#### Vowels
-| `ipa` | Example 1 | Example 2 | Example 3|
+| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3|
|-||-|-| | `ɑ` | **a**zúcar | tom**a**te | rop**a** | | `e` | **e**so | rem**e**ro | am**é** |
Select a tab for the IPA phonemes specific to each locale.
#### Consonants
-| `ipa` | Example 1 | Example 2 | Example 3|
+| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3|
|-||-|-| | `b` | **b**ote | | | | `β` | ór**b**ita | envol**v**ente | |
Select a tab for the IPA phonemes specific to each locale.
#### Vowels
-| `ipa` | Example 1 | Example 2 | Example 3 |
+| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|-||--|--| | `a` | **a**mo | s**a**no | scort**a** | | `ai` | **ai**cs | abb**ai**no | m**ai** |
Select a tab for the IPA phonemes specific to each locale.
#### Consonants
-| `ipa` | Example 1 | Example 2 | Example 3 |
+| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|-||--|--| | `b` | **b**ene | e**b**anista | Euroclu**b** | | `bː` | | go**bb**a | |
Select a tab for the IPA phonemes specific to each locale.
### [pt-BR](#tab/pt-BR)
-#### VOWELS
+#### Vowels
-| `ipa` | Example 1 | Example 2 | Example 3 |
+| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|-|--||--| | `i` | **i**lha | f**i**car | com**i** | | `ĩ` | **in**tacto | p**in**tar | aberd**een** |
Select a tab for the IPA phonemes specific to each locale.
#### Consonants
-| `ipa` | Example 1 | Example 2 | Example 3 |
+| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|-|--||--| | `w̃` | | | atualizaçã**o** | | `w` | **w**ashington | ág**u**a | uso**u** |
Select a tab for the IPA phonemes specific to each locale.
### [pt-PT](#tab/pt-PT)
-| `ipa` | Example 1 | Example 2 | Example 3 |
+| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|-|-|--|| | `a` | **á**bdito | consul**a**r | medir**á** | | `ɐ` | **a**bacaxi | dom**a**ção | long**a** |
Select a tab for the IPA phonemes specific to each locale.
### [ru-RU](#tab/ru-RU)
-#### VOWELS
+#### Vowels
-| `ipa` | Example 1 | Example 2 | Example 3 |
+| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|-||-|-| | `a` | **а**дрес | р**а**дость | бед**а** | | `ʌ` | **о**блаков | з**а**стенчивость | внучк**а** |
Select a tab for the IPA phonemes specific to each locale.
| `ɔ` | **о**крик | м**о**т | весл**о** | | `u` | **у**жин | к**у**ст | пойд**у** |
-#### CONSONANT
+#### Consonants
-| `ipa` | Example 1 | Example 2 | Example 3 |
+| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
|-||-|-| | `p` | **п**рофессор | по**п**лавок | укро**п** | | `pʲ` | **П**етербург | осле**п**ительно | сте**пь** |
cognitive-services Speech Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-studio-overview.md
# What is Speech Studio?
-[Speech Studio](https://speech.microsoft.com) is a set of UI-based tools for building and integrating features from Azure Cognitive Services Speech in your applications. You create projects in Speech Studio using a no-code approach, and then reference those assets in your applications using the [Speech SDK](speech-sdk.md), [Speech CLI](spx-overview.md), or REST APIs.
+[Speech Studio](https://speech.microsoft.com) is a set of UI-based tools for building and integrating features from Azure Cognitive Services Speech service in your applications. You create projects in Speech Studio by using a no-code approach, and then reference those assets in your applications by using the [Speech SDK](speech-sdk.md), the [Speech CLI](spx-overview.md), or the REST APIs.
-## Set up your Azure account
+## Prerequisites
-You need to have an Azure account and add a Speech resource before you can use [Speech Studio](https://speech.microsoft.com). If you don't have an account and resource, [try the Speech service for free](overview.md#try-the-speech-service-for-free).
+Before you can begin using [Speech Studio](https://speech.microsoft.com), you need to have an Azure account and a Speech resource. If you don't already have an account and a resource, [try Speech service for free](overview.md#try-the-speech-service-for-free).
-After you create an Azure account and a Speech service resource:
+After you've created an Azure account and a Speech service resource, do the following:
-1. Sign in to the [Speech Studio](https://speech.microsoft.com) with your Azure account.
-1. Select a Speech resource in your subscription. You can change the resources anytime in "Settings" in the top menu.
+1. Sign in to [Speech Studio](https://speech.microsoft.com) with your Azure account.
+1. In your Speech Studio subscription, select a Speech resource. You can change the resource at any time by selecting **Settings** at the top of the pane.
## Speech Studio features
-The following Speech service features are available as project types in Speech Studio.
+In Speech Studio, the following Speech service features are available as project types:
+
+* **Real-time speech-to-text**: Quickly test speech-to-text by dragging and dropping audio files, without having to write any code. This is a demo tool for seeing how speech-to-text works on your audio samples. To explore the full functionality, see [What is speech-to-text?](speech-to-text.md).
+
+* **Custom Speech**: Create speech recognition models that are tailored to specific vocabulary sets and styles of speaking. In contrast to the base speech recognition model, Custom Speech models become part of your unique competitive advantage because they're not publicly accessible. To get started with uploading sample audio to create a Custom Speech model, see [Prepare data for Custom Speech](how-to-custom-speech-test-and-train.md).
+
+* **Pronunciation assessment**: Evaluate speech pronunciation and give speakers feedback on the accuracy and fluency of spoken audio. Speech Studio provides a sandbox for testing this feature quickly, without code. To use the feature with the Speech SDK in your applications, see the [Pronunciation assessment](how-to-pronunciation-assessment.md) article.
-* **Real-time speech-to-text**: Quickly test speech-to-text by dragging and dropping audio files without using any code. This is a demo tool for seeing how speech-to-text works on your audio samples, but see the [overview](speech-to-text.md) for speech-to-text to explore the full functionality that's available.
-* **Custom Speech**: Custom Speech allows you to create speech recognition models that are tailored to specific vocabulary sets and styles of speaking. In contrast to using a base speech recognition model, Custom Speech models become part of your unique competitive advantage because they are not publicly accessible. See the [quickstart](how-to-custom-speech-test-and-train.md) to get started with uploading sample audio to create a Custom Speech model.
-* **Pronunciation Assessment**: Pronunciation assessment evaluates speech pronunciation and gives speakers feedback on the accuracy and fluency of spoken audio. Speech Studio provides a sandbox for testing this feature quickly with no code, but see the [how-to](how-to-pronunciation-assessment.md) article for using the feature with the Speech SDK in your applications.
* **Voice Gallery**: Build apps and services that speak naturally. Choose from more than 170 voices in over 70 languages and variants. Bring your scenarios to life with highly expressive and human-like neural voices.
-* **Custom Voice**: Custom Voice allows you to create custom, one-of-a-kind voices for text-to-speech. You supply audio files and create matching transcriptions in Speech Studio, and then use the custom voices in your applications. See the [how-to](how-to-custom-voice-create-voice.md) article on creating and using custom voices via endpoints.
-* **Audio Content Creation**: [Audio Content Creation](how-to-audio-content-creation.md) is an easy-to-use tool that lets you build highly natural audio content for a variety of scenarios, like audiobooks, news broadcasts, video narrations, and chat bots. Speech Studio allows you to export your created audio files to use in your applications.
-* **Custom Keyword**: A Custom Keyword is a word or short phrase that allows your product to be voice-activated. You create a Custom Keyword in Speech Studio, and then generate a binary file to [use with the Speech SDK](custom-keyword-basics.md) in your applications.
-* **Custom Commands**: Custom Commands makes it easy to build rich voice commanding apps optimized for voice-first interaction experiences. It provides a code-free authoring experience in Speech Studio, an automatic hosting model, and relatively lower complexity, helping you focus on building the best solution for your voice commanding scenarios. See the [how-to](how-to-develop-custom-commands-application.md) guide for building Custom Commands applications, and also see the guide for [integrating your Custom Commands application with the Speech SDK](how-to-custom-commands-setup-speech-sdk.md).
+
+* **Custom Voice**: Create custom, one-of-a-kind voices for text-to-speech. You supply audio files and create matching transcriptions in Speech Studio, and then use the custom voices in your applications. To create and use custom voices via endpoints, see [Create and use your voice model](how-to-custom-voice-create-voice.md).
+
+* **Audio Content Creation**: Build highly natural audio content for a variety of scenarios, such as audiobooks, news broadcasts, video narrations, and chat bots, with the easy-to-use [Audio Content Creation](how-to-audio-content-creation.md) tool. With Speech Studio, you can export these audio files to use in your applications.
+
+* **Custom Keyword**: A custom keyword is a word or short phrase that you can use to voice-activate a product. You create a custom keyword in Speech Studio, and then generate a binary file to [use with the Speech SDK](custom-keyword-basics.md) in your applications.
+
+* **Custom Commands**: Easily build rich, voice-command apps that are optimized for voice-first interaction experiences. Custom Commands provides a code-free authoring experience in Speech Studio, an automatic hosting model, and relatively lower complexity. The feature helps you focus on building the best solution for your voice-command scenarios. For more information, see the [Develop Custom Commands applications](how-to-develop-custom-commands-application.md) guide. Also see [Integrate with a client application by using the Speech SDK](how-to-custom-commands-setup-speech-sdk.md).
## Next steps
cognitive-services Speech Translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-translation.md
Title: Speech translation overview - Speech service
-description: Speech translation allows you to add end-to-end, real-time, multi-language translation of speech to your applications, tools, and devices. The same API can be used for both speech-to-speech and speech-to-text translation. This article is an overview of the benefits and capabilities of the speech translation service.
+description: With speech translation, you can add end-to-end, real-time, multi-language translation of speech to your applications, tools, and devices.
keywords: speech translation
# What is speech translation?
-In this overview, you learn about the benefits and capabilities of the speech translation service, which enables real-time, [multi-language speech-to-speech](language-support.md#speech-translation) and speech-to-text translation of audio streams. With the Speech SDK, your applications, tools, and devices have access to source transcriptions and translation outputs for provided audio. Interim transcription and translation results are returned as speech is detected, and final results can be converted into synthesized speech.
+In this article, you learn about the benefits and capabilities of the speech translation service, which enables real-time, multi-language speech-to-speech and speech-to-text translation of audio streams. By using the Speech SDK, you can give your applications, tools, and devices access to source transcriptions and translation outputs for the provided audio. Interim transcription and translation results are returned as speech is detected, and the final results can be converted into synthesized speech.
+
+For a list of languages that the Speech Translation API supports, see the "Speech translation" section of [Language and voice support for the Speech service](language-support.md#speech-translation).
## Core features
In this overview, you learn about the benefits and capabilities of the speech tr
* Support for translation to multiple target languages. * Interim recognition and translation results.
-## Get started
+## Before you begin
-See the [quickstart](get-started-speech-translation.md) to get started with speech translation. The speech translation service is available via the [Speech SDK](speech-sdk.md) and the [Speech CLI](spx-overview.md).
+As your first step, see [Get started with speech translation](get-started-speech-translation.md). The speech translation service is available via the [Speech SDK](speech-sdk.md) and the [Speech CLI](spx-overview.md).
## Sample code
-Sample code for the Speech SDK is available on GitHub. These samples cover common scenarios like reading audio from a file or stream, continuous and at-start recognition/translation, and working with custom models.
-
-* [Speech-to-text and translation samples (SDK)](https://github.com/Azure-Samples/cognitive-services-speech-sdk)
+You'll find [Speech SDK speech-to-text and translation samples](https://github.com/Azure-Samples/cognitive-services-speech-sdk) on GitHub. These samples cover common scenarios, such as reading audio from a file or stream, continuous and at-start recognition and translation, and working with custom models.
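As a complement to those samples, here's a minimal sketch of at-start speech translation with the Speech SDK for Python, translating English microphone input into French. The key, region, and language choices are placeholder assumptions for illustration.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials (assumptions): replace with your own key and region.
translation_config = speechsdk.translation.SpeechTranslationConfig(
    subscription="YOUR_KEY", region="YOUR_REGION")
translation_config.speech_recognition_language = "en-US"
translation_config.add_target_language("fr")

# Uses the default microphone; pass an AudioConfig to read from a file instead.
recognizer = speechsdk.translation.TranslationRecognizer(
    translation_config=translation_config)

# At-start recognition: returns after the first utterance is detected.
result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.TranslatedSpeech:
    print("Recognized:", result.text)
    print("Translated:", result.translations["fr"])
```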
## Migration guides
-If your applications, tools, or products are using the [Translator Speech API](./how-to-migrate-from-translator-speech-api.md), we've created guides to help you migrate to the Speech service.
-
-* [Migrate from the Translator Speech API to the Speech service](how-to-migrate-from-translator-speech-api.md)
+If your applications, tools, or products are using the [Translator Speech API](./how-to-migrate-from-translator-speech-api.md), see [Migrate from the Translator Speech API to Speech service](how-to-migrate-from-translator-speech-api.md).
## Reference docs
If your applications, tools, or products are using the [Translator Speech API](.
## Next steps
-* Complete the speech translation [quickstart](get-started-speech-translation.md)
-* [Get a Speech service subscription key for free](overview.md#try-the-speech-service-for-free)
-* [Get the Speech SDK](speech-sdk.md)
+* Read the [Get started with speech translation](get-started-speech-translation.md) quickstart article.
+* Get a [Speech service subscription key for free](overview.md#try-the-speech-service-for-free).
+* Get the [Speech SDK](speech-sdk.md).
cognitive-services Spx Basics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/spx-basics.md
Title: "Speech CLI quickstart - Speech service"
+ Title: "Quickstart: The Speech CLI - Speech service"
-description: Get started with the Azure Speech CLI. You can interact with Speech services like speech to text, text to speech, and speech translation without writing code.
+description: By using the Azure Speech CLI, you can interact with speech-to-text, text-to-speech, and speech translation without having to write code.
# Get started with the Azure Speech CLI
-In this article, you'll learn how to use the Azure Speech CLI (command-line interface) to access Speech services like speech to text, text to speech, and speech translation without writing code. The Speech CLI is production ready and can be used to automate simple workflows in the Speech service, using `.bat` or shell scripts.
+In this article, you'll learn how to use the Azure Speech CLI (also called SPX) to access Speech services such as speech-to-text, text-to-speech, and speech translation, without having to write any code. The Speech CLI is production ready, and you can use it to automate simple workflows in the Speech service by using `.bat` or shell scripts.
-This article assumes that you have working knowledge of the command prompt, terminal, or PowerShell.
+This article assumes that you have working knowledge of the Command Prompt window, terminal, or PowerShell.
> [!NOTE] > In PowerShell, the [stop-parsing token](/powershell/module/microsoft.powershell.core/about/about_special_characters#stop-parsing-token) (`--%`) should follow `spx`. For example, run `spx --% config @region` to view the current region config value.
This article assumes that you have working knowledge of the command prompt, term
[!INCLUDE [](includes/spx-setup.md)]
-## Create subscription config
+## Create a subscription configuration
# [Terminal](#tab/terminal)
-You need an Azure subscription key and region identifier (ex. `eastus`, `westus`) to get started. See the [Speech service overview](overview.md#find-keys-and-locationregion) documentation for steps to get these credentials.
+To get started, you need an Azure subscription key and region identifier (for example, `eastus`, `westus`). To learn how to get these credentials, see the [Speech service overview](overview.md#find-keys-and-locationregion) documentation.
-You run the following commands in a terminal to configure your subscription key and region identifier.
+To configure your subscription key and region identifier, run the following commands:
```console spx config @key --set SUBSCRIPTION-KEY spx config @region --set REGION ```
-The key and region are stored for future Speech CLI commands. Run the following commands to view the current configuration.
+The key and region are stored for future Speech CLI commands. To view the current configuration, run the following commands:
```console spx config @key spx config @region ```
-As needed, include the `clear` option to remove either stored value.
+As needed, include the `clear` option to remove either stored value:
```console spx config @key --clear
spx config @region --clear
# [PowerShell](#tab/powershell)
-You need an Azure subscription key and region identifier (ex. `eastus`, `westus`) to get started. See the [Speech service overview](overview.md#find-keys-and-locationregion) documentation for steps to get these credentials.
+To get started, you need an Azure subscription key and region identifier (for example, `eastus`, `westus`). To learn how to get these credentials, see the [Speech service overview](overview.md#find-keys-and-locationregion) documentation.
-You run the following commands in PowerShell to configure your subscription key and region identifier.
+To configure your subscription key and region identifier, run the following commands in PowerShell:
```powershell spx --% config @key --set SUBSCRIPTION-KEY spx --% config @region --set REGION ```
-The key and region are stored for future Speech CLI commands. Run the following commands to view the current configuration.
+The key and region are stored for future SPX commands. To view the current configuration, run the following commands:
```powershell spx --% config @key spx --% config @region ```
-As needed, include the `clear` option to remove either stored value.
+As needed, include the `clear` option to remove either stored value:
```powershell spx --% config @key --clear
spx --% config @region --clear
## Basic usage
-This section shows a few basic SPX commands that are often useful for first-time testing and experimentation. Start by viewing the help built in to the tool by running the following command.
+This section shows a few basic SPX commands that are often useful for first-time testing and experimentation. Start by viewing the help that's built into the tool by running the following command:
```console spx ```
-You can search help topics by keyword. For example, run the following command to see a list of Speech CLI usage examples:
+You can search help topics by keyword. For example, to see a list of Speech CLI usage examples, run the following command:
```console spx help find --topics "examples" ```
-Run the following command to see options for the recognize command:
+To see options for the recognize command, run the following command:
```console spx help recognize
spx help recognize
Additional help commands are listed in the console output. You can enter these commands to get detailed help about subcommands.
-## Speech to text (speech recognition)
+## Speech-to-text (speech recognition)
-You run this command to convert speech to text (speech recognition) using your system's default microphone.
+To convert speech to text (speech recognition) by using your system's default microphone, run the following command:
```console spx recognize --microphone ```
-After entering the command, SPX will begin listening for audio on the current active input device, and stop when you press **ENTER**. The spoken audio is then recognized and converted to text in the console output.
+After you run the command, SPX begins listening for audio on the current active input device. It stops listening when you select **Enter**. The spoken audio is then recognized and converted to text in the console output.
-With the Speech CLI, you can also recognize speech from an audio file.
+With the Speech CLI, you can also recognize speech from an audio file. Run the following command:
```console spx recognize --file /path/to/file.wav ``` > [!NOTE]
-> If you are using a Docker container, `--microphone` will not work.
+> If you're using a Docker container, `--microphone` will not work.
>
-> If you're recognizing speech from an audio file in a Docker container, make sure that the audio file is located in the directory that you mounted in the previous step.
+> If you're recognizing speech from an audio file in a Docker container, make sure that the audio file is located in the directory that you mounted previously.
> [!TIP]
-> If you get stuck or want to learn more about the Speech CLI's recognition options, you can run ```spx help recognize```.
+> If you get stuck or want to learn more about the Speech CLI recognition options, you can run ```spx help recognize```.
-## Text to speech (speech synthesis)
+## Text-to-speech (speech synthesis)
-Running the following command will take text as input, and output the synthesized speech to the current active output device (for example, your computer speakers).
+The following command takes text as input and then outputs the synthesized speech to the current active output device (for example, your computer speakers).
```console spx synthesize --text "Testing synthesis using the Speech CLI" --speakers ```
-You can also save the synthesized output to file. In this example, we'll create a file named `my-sample.wav` in the directory that the command is run.
+You can also save the synthesized output to a file. In this example, let's create a file named *my-sample.wav* in the directory where you're running the command.
```console spx synthesize --text "Enjoy using the Speech CLI." --audio output my-sample.wav ```
-These examples presume that you're testing in English. However, we support speech synthesis in many languages. You can pull down a full list of voices with this command, or by visiting the [language support page](./language-support.md).
+These examples presume that you're testing in English. However, Speech service supports speech synthesis in many languages. You can pull down a full list of voices either by running the following command or by visiting the [language support page](./language-support.md).
```console spx synthesize --voices ```
-Here's how you use one of the voices you've discovered.
+Here's a command for using one of the voices you've discovered.
```console spx synthesize --text "Bienvenue chez moi." --voice fr-CA-Caroline --speakers ``` > [!TIP]
-> If you get stuck or want to learn more about the Speech CLI's recognition options, you can run ```spx help synthesize```.
+> If you get stuck or want to learn more about the Speech CLI synthesis options, you can run ```spx help synthesize```.
-## Speech to text translation
+## Speech-to-text translation
-With the Speech CLI, you can also do speech to text translation. Run this command to capture audio from your default microphone, and output the translation as text. Keep in mind that you need to supply the `source` and `target` language with the `translate` command.
+With the Speech CLI, you can also do speech-to-text translation. Run the following command to capture audio from your default microphone and output the translation as text. Keep in mind that you need to supply the `source` and `target` language with the `translate` command.
```console spx translate --microphone --source en-US --target ru-RU ```
-When translating into multiple languages, separate language codes with `;`.
+When you're translating into multiple languages, separate the language codes with a semicolon (`;`).
```console spx translate --microphone --source en-US --target ru-RU;fr-FR;es-ES
spx translate --file /some/file/path/input.wav --source en-US --target ru-RU --o
``` > [!NOTE]
-> See the [language and locale article](language-support.md) for a list of all supported languages with their corresponding locale codes.
+> For a list of all supported languages and their corresponding locale codes, see [Language and voice support for the Speech service](language-support.md).
> [!TIP]
-> If you get stuck or want to learn more about the Speech CLI's recognition options, you can run ```spx help translate```.
+> If you get stuck or want to learn more about the Speech CLI translation options, you can run ```spx help translate```.
## Next steps
-* [Install GStreamer to use Speech CLI with MP3 and other formats](./how-to-use-codec-compressed-audio-input-streams.md)
-* [Speech CLI configuration options](./spx-data-store-configuration.md)
+* [Install GStreamer to use the Speech CLI with MP3 and other formats](./how-to-use-codec-compressed-audio-input-streams.md)
+* [Configuration options for the Speech CLI](./spx-data-store-configuration.md)
* [Batch operations with the Speech CLI](./spx-batch-operations.md)
cognitive-services Spx Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/spx-overview.md
Title: The Azure Speech CLI
-description: The Speech CLI is a command-line tool for using the Speech service without writing any code. The Speech CLI requires minimal setup, and it's easy to immediately start experimenting with key features of the Speech service to see if your use-cases can be met.
+description: In this article, you learn about the Speech CLI, a command-line tool for using Speech service without having to write any code.
# What is the Speech CLI?
-The Speech CLI is a command-line tool for using the Speech service without writing any code. The Speech CLI requires minimal setup, and it's easy to immediately start experimenting with key features of the Speech service to see if your use-cases can be met. Within minutes, you can run simple test workflows like batch speech-recognition from a directory of files, or text-to-speech on a collection of strings from a file. Beyond simple workflows, the Speech CLI is production-ready and can be scaled up to run larger processes using automated `.bat` or shell scripts.
+The Speech CLI is a command-line tool for using Speech service without having to write any code. The Speech CLI requires minimal setup. You can easily use it to experiment with key features of Speech service and see how it works with your use cases. Within minutes, you can run simple test workflows, such as batch speech-recognition from a directory of files or text-to-speech on a collection of strings from a file. Beyond simple workflows, the Speech CLI is production-ready, and you can scale it up to run larger processes by using automated `.bat` or shell scripts.
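For example, here's a minimal shell-script sketch of such a batch workflow. It assumes a local folder named `audio` that contains `.wav` files (a hypothetical name) and reuses the `spx recognize --file` form shown in the quickstart, collecting the console output in one transcript file:
```console
# Illustrative sketch only: transcribe every .wav file in the audio folder
# and append each result to transcripts.txt.
for f in audio/*.wav; do
  spx recognize --file "$f" >> transcripts.txt
done
```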
-Most features in the Speech SDK are available in the Speech CLI, and some advanced features and customizations are simplified in the Speech CLI. Consider the following guidance to decide when to use the Speech CLI or the Speech SDK.
+Most features in the Speech SDK are available in the Speech CLI, and some advanced features and customizations are simplified in the Speech CLI. As you're deciding when to use the Speech CLI or the Speech SDK, consider the following guidance.
Use the Speech CLI when:
-* You want to experiment with Speech service features with minimal setup and no code
-* You have relatively simple requirements for a production application using the Speech service
+* You want to experiment with Speech service features with minimal setup and without having to write code.
+* You have relatively simple requirements for a production application that uses Speech service.
Use the Speech SDK when:
-* You want to integrate Speech service functionality within a specific language or platform (for example, C#, Python, C++)
-* You have complex requirements that may require advanced service requests, or developing custom behavior including response streaming
+* You want to integrate Speech service functionality within a specific language or platform (for example, C#, Python, or C++).
+* You have complex requirements that might require advanced service requests.
+* You're developing custom behavior, including response streaming.
## Core features
-* Speech recognition - Convert speech-to-text either from audio files or directly from a microphone, or transcribe a recorded conversation.
+* **Speech recognition**: Convert speech to text either from audio files or directly from a microphone, or transcribe a recorded conversation.
-* Speech synthesis - Convert text-to-speech using either input from text files, or input directly from the command line. Customize speech output characteristics using [SSML configurations](speech-synthesis-markup.md), and [neural voices](speech-synthesis-markup.md#prebuilt-neural-voices-and-custom-neural-voices).
+* **Speech synthesis**: Convert text to speech either by using input from text files or by inputting directly from the command line. Customize speech output characteristics by using [Speech Synthesis Markup Language (SSML) configurations](speech-synthesis-markup.md), and [neural voices](speech-synthesis-markup.md#prebuilt-neural-voices-and-custom-neural-voices).
-* Speech translation - Translate audio in a source language to text or audio in a target language.
+* **Speech translation**: Translate audio in a source language to text or audio in a target language.
-* Run on Azure compute resources - Send Speech CLI commands to run on an Azure remote compute resource using `spx webjob`.
+* **Run on Azure compute resources**: Send Speech CLI commands to run on an Azure remote compute resource by using `spx webjob`.
## Get started
-To get started with the Speech CLI, see the [quickstart](spx-basics.md). This article shows you how to run some basic commands, and also shows slightly more advanced commands for running batch operations for speech-to-text and text-to-speech. After reading the basics article, you should have enough of an understanding of the syntax to start writing some custom commands, or automating simple Speech service operations.
+To get started with the Speech CLI, see the [quickstart](spx-basics.md). This article shows you how to run some basic commands. It also gives you slightly more advanced commands for running batch operations for speech-to-text and text-to-speech. After you've read the basics article, you should understand the syntax well enough to start writing some custom commands or automate simple Speech service operations.
## Next steps -- Get started with the [Speech CLI quickstart](spx-basics.md)-- [Configure your data store](./spx-data-store-configuration.md)-- Learn how to [run batch operations with the Speech CLI](./spx-batch-operations.md)
+- [Get started with the Azure Speech CLI](spx-basics.md)
+- [Speech CLI configuration options](./spx-data-store-configuration.md)
+- [Speech CLI batch operations](./spx-batch-operations.md)
cognitive-services Voice Assistants https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/voice-assistants.md
Title: Voice assistants - Speech service
-description: An overview of the features, capabilities, and restrictions for voice assistants using the Speech Software Development Kit (SDK).
+description: An overview of the features, capabilities, and restrictions for voice assistants with the Speech SDK.
# What is a voice assistant?
-Voice assistants using the Speech service empowers developers to create natural, human-like conversational interfaces for their applications and experiences.
+By using voice assistants with the Speech service, developers can create natural, human-like, conversational interfaces for their applications and experiences.
-The voice assistant service provides fast, reliable interaction between a device and an assistant implementation that uses either (1) [Direct Line Speech](direct-line-speech.md) (via Azure Bot Service) for adding voice capabilities to your bots, or, (2) Custom Commands for voice commanding scenarios.
+The voice assistant service provides fast, reliable interaction between a device and an assistant implementation that uses either [Direct Line Speech](direct-line-speech.md) (via Azure Bot Service) for adding voice capabilities to your bots or Custom Commands for voice-command scenarios.
-## Choosing an assistant solution
+## Choose an assistant solution
-The first step to creating a voice assistant is to decide what it should do. The Speech service provides multiple, complementary solutions for crafting your assistant interactions. You can add voice in and voice out capabilities to your flexible and versatile bot built using Azure Bot Service with the [Direct Line Speech](direct-line-speech.md) channel, or leverage the simplicity of authoring a [Custom Commands](custom-commands.md) app for straightforward voice commanding scenarios.
+The first step in creating a voice assistant is to decide what you want it to do. Speech service provides multiple, complementary solutions for crafting assistant interactions. For flexibility and versatility, you can add voice in and voice out capabilities to a bot by using Azure Bot Service with the [Direct Line Speech](direct-line-speech.md) channel, or you can simply author a [Custom Commands](custom-commands.md) app for more straightforward voice-command scenarios.
-| If you want... | Then consider... | For example... |
+| If you want... | Consider using... | Examples |
|-||-| |Open-ended conversation with robust skills integration and full deployment control | Azure Bot Service bot with [Direct Line Speech](direct-line-speech.md) channel | <ul><li>"I need to go to Seattle"</li><li>"What kind of pizza can I order?"</li></ul>
-|Voice commanding or simple task-oriented conversations with simplified authoring and hosting | [Custom Commands](custom-commands.md) | <ul><li>"Turn on the overhead light"</li><li>"Make it 5 degrees warmer"</li><li>Other samples [available here](https://speech.microsoft.com/customcommands)</li></ul>
+|Voice-command or simple task-oriented conversations with simplified authoring and hosting | [Custom Commands](custom-commands.md) | <ul><li>"Turn on the overhead light"</li><li>"Make it 5 degrees warmer"</li><li>More examples at [Speech Studio](https://speech.microsoft.com/customcommands)</li></ul>
-We recommend [Direct Line Speech](direct-line-speech.md) as the best default choice if you aren't yet sure what you'd like your assistant to handle. It offers integration with a rich set of tools and authoring aids such as the [Virtual Assistant Solution and Enterprise Template](/azure/bot-service/bot-builder-enterprise-template-overview) and the [QnA Maker service](../qnamaker/overview/overview.md) to build on common patterns and use your existing knowledge sources.
+If you aren't yet sure what you want your assistant to do, we recommend [Direct Line Speech](direct-line-speech.md) as the best option. It offers integration with a rich set of tools and authoring aids, such as the [Virtual Assistant solution and enterprise template](/azure/bot-service/bot-builder-enterprise-template-overview) and the [QnA Maker service](../qnamaker/overview/overview.md), to build on common patterns and use your existing knowledge sources.
-[Custom Commands](custom-commands.md) makes it easy to build rich voice commanding apps optimized for voice-first interaction experiences. It provides a unified authoring experience, an automatic hosting model, and relatively lower complexity, helping you focus on building the best solution for your voice commanding scenarios.
+If you want to keep it simpler for now, [Custom Commands](custom-commands.md) makes it easy to build rich, voice-command apps that are optimized for voice-first interaction. Custom Commands provides a unified authoring experience, an automatic hosting model, and relatively lower complexity, all of which can help you focus on building the best solution for your voice-command scenario.
- ![Comparison of assistant solutions](media/voice-assistants/assistant-solution-comparison.png "Comparison of assistant solutions")
+ ![Screenshot of a graph comparing the relative complexity and flexibility of the two voice assistant solutions.](media/voice-assistants/assistant-solution-comparison.png)
+## Reference architecture for building a voice assistant by using the Speech SDK
-## Reference Architecture for building a voice assistant using the Speech SDK
-
- ![Conceptual diagram of the voice assistant orchestration service flow](media/voice-assistants/overview.png "The voice assistant flow")
+ ![Conceptual diagram of the voice assistant orchestration service flow.](media/voice-assistants/overview.png)
## Core features
Whether you choose [Direct Line Speech](direct-line-speech.md) or [Custom Comman
| Category | Features | |-|-|
-|[Custom keyword](./custom-keyword-basics.md) | Users can start conversations with assistants with a custom keyword like "Hey Contoso." An app does this with a custom keyword engine in the Speech SDK, which can be configured with a custom keyword [that you can generate here](./custom-keyword-basics.md). Voice assistants can use service-side keyword verification to improve the accuracy of the keyword activation (versus the device alone).
-|[Speech to text](speech-to-text.md) | Voice assistants convert real-time audio into recognized text using [Speech-to-text](speech-to-text.md) from the Speech service. This text is available, as it's transcribed, to both your assistant implementation and your client application.
-|[Text to speech](text-to-speech.md) | Textual responses from your assistant are synthesized using [Text-to-speech](text-to-speech.md) from the Speech service. This synthesis is then made available to your client application as an audio stream. Microsoft offers the ability to build your own custom, high-quality Neural TTS voice that gives a voice to your brand. To learn more, [contact us](mailto:mstts@microsoft.com).
+|[Custom keyword](./custom-keyword-basics.md) | Users can start conversations with assistants by using a custom keyword such as "Hey Contoso." An app does this with a custom keyword engine in the Speech SDK, which you can configure by going to [Get started with custom keywords](./custom-keyword-basics.md). Voice assistants can use service-side keyword verification to improve the accuracy of the keyword activation (versus using the device alone).
+|[Speech-to-text](speech-to-text.md) | Voice assistants convert real-time audio into recognized text by using [speech-to-text](speech-to-text.md) from the Speech service. This text is available, as it's transcribed, to both your assistant implementation and your client application.
+|[Text-to-speech](text-to-speech.md) | Textual responses from your assistant are synthesized through [text-to-speech](text-to-speech.md) from the Speech service. This synthesis is then made available to your client application as an audio stream. Microsoft offers the ability to build your own custom, high-quality Neural Text to Speech (Neural TTS) voice that gives a voice to your brand. To learn more, [contact us](mailto:mstts@microsoft.com).
-## Getting started with voice assistants
+## Get started with voice assistants
-We offer quickstarts designed to have you running code in less than 10 minutes. This table includes a list of voice assistant quickstarts, organized by language.
+We offer the following quickstart articles, organized by programming language, that are designed to have you running code in less than 10 minutes:
-* [Quickstart: Create a custom voice assistant using Direct Line Speech](quickstarts/voice-assistants.md)
-* [Quickstart: Build a voice commanding app using Custom Commands](quickstart-custom-commands-application.md)
+* [Quickstart: Create a custom voice assistant by using Direct Line Speech](quickstarts/voice-assistants.md)
+* [Quickstart: Build a voice-command app by using Custom Commands](quickstart-custom-commands-application.md)
-## Sample code and Tutorials
+## Sample code and tutorials
-Sample code for creating a voice assistant is available on GitHub. These samples cover the client application for connecting to your assistant in several popular programming languages.
+Sample code for creating a voice assistant is available on GitHub. The samples cover the client application for connecting to your assistant in several popular programming languages.
* [Voice assistant samples on GitHub](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant)
-* [Tutorial: Voice enable your assistant built using Azure Bot Service with the C# Speech SDK](tutorial-voice-enable-your-bot-speech-sdk.md)
+* [Tutorial: Voice-enable an assistant that's built by using Azure Bot Service with the C# Speech SDK](tutorial-voice-enable-your-bot-speech-sdk.md)
* [Tutorial: Create a Custom Commands application with simple voice commands](./how-to-develop-custom-commands-application.md) ## Customization
-Voice assistants built using Azure Speech services can use the full range of customization options.
+Voice assistants that you build by using Speech service can use a full range of customization options.
* [Custom Speech](./custom-speech-overview.md) * [Custom Voice](how-to-custom-voice.md) * [Custom Keyword](keyword-recognition-overview.md) > [!NOTE]
-> Customization options vary by language/locale (see [Supported languages](language-support.md)).
+> Customization options vary by language and locale. To learn more, see [Supported languages](language-support.md).
## Next steps
cognitive-services Get Started With Document Translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/get-started-with-document-translation.md
Previously updated : 09/16/2021 Last updated : 02/02/2022 recommendations: false ms.devlang: csharp, golang, java, javascript, python
Operation-Location | https://<<span>NAME-OF-YOUR-RESOURCE>.cognitiveservices.a
} }
-}
``` ### [Node.js](#tab/javascript)
cognitive-services Start Translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/start-translation.md
Previously updated : 06/22/2021 Last updated : 02/01/2022 # Start translation
Source of the input documents.
| | | | | |filter|DocumentFilter[]|False|DocumentFilter[] listed below.| |filter.prefix|string|False|A case-sensitive prefix string to filter documents in the source path for translation. For example, when using an Azure storage blob Uri, use the prefix to restrict sub folders for translation.|
-|filter.suffix|string|False|A case-sensitive suffix string to filter documents in the source path for translation. This is most often use for file extensions.|
-|language|string|False|Language code If none is specified, we will perform auto detect on the document.|
+|filter.suffix|string|False|A case-sensitive suffix string to filter documents in the source path for translation. It's most often used for file extensions.|
+|language|string|False|Language code. If none is specified, we'll auto-detect the language of the document.|
|sourceUrl|string|True|Location of the folder / container or single file with your documents.| |storageSource|StorageSource|False|StorageSource listed below.| |storageSource.AzureBlob|string|False||
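For illustration, a source object that combines these fields might look like the following sketch. The storage URL, SAS token, folder prefix, and file extension are placeholders, not values from this reference:
```json
{
  "source": {
    "sourceUrl": "https://<storage-account>.blob.core.windows.net/<source-container>?<SAS-token>",
    "filter": {
      "prefix": "marketing/",
      "suffix": ".docx"
    },
    "language": "en"
  }
}
```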
Destination for the finished translated documents.
|category|string|False|Category / custom system for translation request.| |glossaries|Glossary[]|False|Glossary listed below. List of Glossary.| |glossaries.format|string|False|Format.|
-|glossaries.glossaryUrl|string|True (if using glossaries)|Location of the glossary. We will use the file extension to extract the formatting if the format parameter isn't supplied. If the translation language pair isn't present in the glossary, it won't be applied.|
+|glossaries.glossaryUrl|string|True (if using glossaries)|Location of the glossary. We'll use the file extension to extract the formatting if the format parameter isn't supplied. If the translation language pair isn't present in the glossary, it won't be applied.|
|glossaries.storageSource|StorageSource|False|StorageSource listed above.| |glossaries.version|string|False|Optional Version. If not specified, default is used.| |targetUrl|string|True|Location of the folder / container with your documents.|
Destination for the finished translated documents.
The following are examples of batch requests.
+> [!NOTE]
+> In the following examples, limited access has been granted to the contents of an Azure Storage container by [using a shared access signature (SAS)](/azure/storage/common/storage-sas-overview) token.
+ **Translating all documents in a container** ```json
The following are examples of batch requests.
**Translating all documents in a container applying glossaries**
-Ensure you have created glossary URL & SAS token for the specific blob/document (not for the container)
- ```json { "inputs": [
Ensure you have created glossary URL & SAS token for the specific blob/document
**Translating specific folder in a container**
-Ensure you have specified the folder name (case sensitive) as prefix in filter ΓÇô though the SAS token is still for the container.
+Make sure you've specified the folder name (case sensitive) as the prefix in the filter.
```json {
Ensure you have specified the folder name (case sensitive) as prefix in filter
**Translating specific document in a container**
-* Ensure you have specified "storageType": "File"
-* Ensure you have created source URL & SAS token for the specific blob/document (not for the container)
-* Ensure you have specified the target filename as part of the target URL ΓÇô though the SAS token is still for the container.
-* Sample request below shows a single document getting translated into two target languages
+* Specify "storageType": "File"
+* Create source URL & SAS token for the specific blob/document.
+* Specify the target filename as part of the target URL, though the SAS token is still for the container.
+
+The following sample request shows a single document translated into two target languages.
```json {
The following are the possible HTTP status codes that a request returns.
| | | |202|Accepted. Successful request and the batch request are created by the service. The header Operation-Location will indicate a status URL with the operation ID. Headers: Operation-Location: string| |400|Bad Request. Invalid request. Check input parameters.|
-|401|Unauthorized. Please check your credentials.|
+|401|Unauthorized. Check your credentials.|
|429|Request rate is too high.| |500|Internal Server Error.|
-|503|Service is currently unavailable. Please try again later.|
+|503|Service is currently unavailable. Try again later.|
|Other Status Codes|<ul><li>Too many requests</li><li>Server temporary unavailable</li></ul>| ## Error response
The following are the possible HTTP status codes that a request returns.
| | | | |code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>| |message|string|Gets high-level error message.|
-|innerError|InnerTranslationError|New Inner Error format which conforms to Cognitive Services API Guidelines. This contains required properties ErrorCode, message and optional properties target, details(key value pair), inner error(this can be nested).|
+|innerError|InnerTranslationError|New Inner Error format that conforms to Cognitive Services API Guidelines. It contains the required properties ErrorCode and message, and the optional properties target, details (key-value pair), and innerError (which can be nested).|
|inner.Errorcode|string|Gets code error string.| |innerError.message|string|Gets high-level error message.|
-|innerError.target|string|Gets the source of the error. For example it would be "documents" or "document id" in case of invalid document.|
+|innerError.target|string|Gets the source of the error. For example, it would be "documents" or "document ID" if the document is invalid.|
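For illustration, an error response that uses these properties might look like the following sketch. The specific codes, messages, and target are hypothetical:
```json
{
  "error": {
    "code": "InvalidRequest",
    "message": "The request is invalid.",
    "innerError": {
      "code": "InvalidDocument",
      "message": "The requested document wasn't found.",
      "target": "documents"
    }
  }
}
```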
## Examples
Follow our quickstart to learn more about using Document Translation and the cli
> [!div class="nextstepaction"] > [Get started with Document Translation](../get-started-with-document-translation.md)+
cognitive-services Rest Api Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/reference/rest-api-guide.md
Text Translation is a cloud-based feature of the Azure Translator service and is
|[**detect**](v3-0-detect.md) | **POST** | Identify the source language. | |[**breakSentence**](v3-0-break-sentence.md) | **POST** | Returns an array of integers representing the length of sentences in a source text. | | [**dictionary/lookup**](v3-0-dictionary-lookup.md) | **POST** | Returns alternatives for single word translations. |
-| [**dictionary/examples**](v3-0-dictionary-lookup.md) | **POST** | Returns how a term is used in context. |
+| [**dictionary/examples**](v3-0-dictionary-examples.md) | **POST** | Returns how a term is used in context. |
> [!div class="nextstepaction"] > [Create a Translator resource in the Azure portal.](../translator-how-to-signup.md)
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/named-entity-recognition/quickstart.md
Previously updated : 11/02/2021 Last updated : 02/02/2022 ms.devlang: csharp, java, javascript, python
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/personally-identifiable-information/quickstart.md
Previously updated : 11/19/2021 Last updated : 02/02/2022 ms.devlang: csharp, java, javascript, python
communication-services Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/known-issues.md
A number of specific Android devices fail to join calls and meetings. The device
### iOS 15.1 users joining group calls or Microsoft Teams meetings.
-* Low volume. Known regression introduced by Apple with the release of iOS 15.1. Related webkit bug [here](https://bugs.webkit.org/show_bug.cgi?id=230902).
* Sometimes when incoming PSTN is received the tab with the call or meeting will hang. Related webkit bugs [here](https://bugs.webkit.org/show_bug.cgi?id=233707) and [here](https://bugs.webkit.org/show_bug.cgi?id=233708#c0). ### Device mutes and incoming video stops rendering when certain interruptions occur on iOS Safari.
To recover from all these cases, the user must go back to the application to unm
Occasionally, microphone or camera devices won't be released on time, and that can cause issues with the original call. For example, if the user tries to unmute while watching a YouTube video, or if a PSTN call is active simultaneously.
+Incoming video streams won't stop rendering if the user is on iOS 15.2+ and is using SDK version 1.4.1-beta.1+. However, the unmute/start video steps are still required to restart outgoing audio and video.
+ ### iOS with Safari crashes and refreshes the page if a user tries to switch from front camera to back camera. ACS Calling SDK version 1.2.3-beta.1 introduced a bug that affects all of the calls made from iOS Safari. The problem occurs when a user tries to switch the camera video stream from front to back. Switching the camera causes the Safari browser to crash and reload the page.
connectors Connectors Sftp Ssh https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/connectors-sftp-ssh.md
Title: Connect to SFTP server with SSH
-description: Automate tasks that monitor, create, manage, send, and receive files for an SFTP server by using SSH and Azure Logic Apps
+description: Automate tasks that monitor, create, manage, send, and receive files for an SFTP server by using SSH and Azure Logic Apps.
ms.suite: integration -- Previously updated : 01/12/2022++ Last updated : 02/02/2022 tags: connectors
For differences between the SFTP-SSH connector and the SFTP connector, review th
* OpenText GXS * Globalscape * SFTP for Azure Blob Storage
+ * FileMage Gateway
-* SFTP-SSH actions that support [chunking](../logic-apps/logic-apps-handle-large-messages.md) can handle files up to 1 GB, while SFTP-SSH actions that don't support chunking can handle files up to 50 MB. The default chunk size is 15 MB. However, this size can dynamically change, starting from 5 MB and gradually increasing to the 50-MB maximum. Dynamic sizing is based on factors such as network latency, server response time, and so on.
-
- > [!NOTE]
- > For logic apps in an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md),
- > this connector's ISE-labeled version requires chunking to use the [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) instead.
-
- You can override this adaptive behavior when you [specify a constant chunk size](#change-chunk-size) to use instead. This size can range from 5 MB to 50 MB. For example, suppose you have a 45-MB file and a network that can that support that file size without latency. Adaptive chunking results in several calls, rather that one call. To reduce the number of calls, you can try setting a 50-MB chunk size. In different scenario, if your logic app is timing out, for example, when using 15-MB chunks, you can try reducing the size to 5 MB.
-
- Chunk size is associated with a connection. This attribute means you can use the same connection for both actions that support chunking and actions that don't support chunking. In this case, the chunk size for actions that don't support chunking ranges from 5 MB to 50 MB. This table shows which SFTP-SSH actions support chunking:
+* The following SFTP-SSH actions support [chunking](../logic-apps/logic-apps-handle-large-messages.md):
| Action | Chunking support | Override chunk size support | |--||--|
For differences between the SFTP-SSH connector and the SFTP connector, review th
| **Update file** | No | Not applicable | ||||
-* SFTP-SSH triggers don't support message chunking. When requesting file content, triggers select only files that are 15 MB or smaller. To get files larger than 15 MB, follow this pattern instead:
+ SFTP-SSH actions that support chunking can handle files up to 1 GB, while SFTP-SSH actions that don't support chunking can handle files up to 50 MB. The default chunk size is 15 MB. However, this size can dynamically change, starting from 5 MB and gradually increasing to the 50-MB maximum. Dynamic sizing is based on factors such as network latency, server response time, and so on.
+
+ > [!NOTE]
+ > For logic apps in an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md),
+ > this connector's ISE-labeled version requires chunking to use the [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) instead.
+
+  You can override this adaptive behavior when you [specify a constant chunk size](#change-chunk-size) to use instead. This size can range from 5 MB to 50 MB. For example, suppose you have a 45-MB file and a network that can support that file size without latency. Adaptive chunking results in several calls, rather than one call. To reduce the number of calls, you can try setting a 50-MB chunk size. In a different scenario, if your logic app is timing out, for example, when using 15-MB chunks, you can try reducing the size to 5 MB.
+
+ Chunk size is associated with a connection. This attribute means you can use the same connection for both actions that support chunking and actions that don't support chunking. In this case, the chunk size for actions that don't support chunking ranges from 5 MB to 50 MB.
+
+* SFTP-SSH triggers don't support message chunking. When triggers request file content, they select only files that are 15 MB or smaller. To get files larger than 15 MB, follow this pattern instead:
1. Use an SFTP-SSH trigger that returns only file properties. These triggers have names that include the description, **(properties only)**.
The following list describes key SFTP-SSH capabilities that differ from the SFTP
## Prerequisites
-* An Azure subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* Your SFTP server address and account credentials, so your workflow can access your SFTP account. You also need access to an SSH private key and the SSH private key password. To upload large files using chunking, you need both read and write access for the root folder on your SFTP server. Otherwise, you get a "401 Unauthorized" error. The SFTP-SSH connector supports both private key authentication and password authentication. However, the SFTP-SSH connector supports *only* these private key formats, encryption algorithms, fingerprints, and key exchange algorithms: * **Private key formats**: RSA (Rivest Shamir Adleman) and DSA (Digital Signature Algorithm) keys in both OpenSSH and ssh.com formats. If your private key is in PuTTY (.ppk) file format, first [convert the key to the OpenSSH (.pem) file format](#convert-to-openssh).
- * **Encryption algorithms**: DES-EDE3-CBC, DES-EDE3-CFB, DES-CBC, AES-128-CBC, AES-192-CBC, and AES-256-CBC
+ * **Encryption algorithms**: Review [Encryption Method - SSH.NET](https://github.com/sshnet/SSH.NET#encryption-method).
* **Fingerprint**: MD5
- * **Key exchange algorithms**: curve25519-sha256, curve25519-sha256@libssh.org, ecdh-sha2-nistp256, ecdh-sha2-nistp384, ecdh-sha2-nistp521, diffie-hellman-group-exchange-sha256, diffie-hellman-group-exchange-sha1, diffie-hellman-group16-sha512, diffie-hellman-group14-sha256, diffie-hellman-group14-sha1, and diffie-hellman-group1-sha1
+ * **Key exchange algorithms**: Review [Key Exchange Method - SSH.NET](https://github.com/sshnet/SSH.NET#key-exchange-method).
After you add an SFTP-SSH trigger or action to your workflow, you have to provide connection information for your SFTP server. When you provide your SSH private key for this connection, ***don't manually enter or edit the key***, which might cause the connection to fail. Instead, make sure that you ***copy the key*** from your SSH private key file, and ***paste*** that key into the connection details. For more information, see the [Connect to SFTP with SSH](#connect) section later in this article.
When a trigger finds a new file, the trigger checks that the new file is complet
### Trigger recurrence shift and drift
-Connection-based triggers where you need to create a connection first, such as the SFTP-SSH trigger, differ from built-in triggers that run natively in Azure Logic Apps, such as the [Recurrence trigger](../connectors/connectors-native-recurrence.md). In recurring connection-based triggers, the recurrence schedule isn't the only driver that controls execution, and the time zone only determines the initial start time. Subsequent runs depend on the recurrence schedule, the last trigger execution, *and* other factors that might cause run times to drift or produce unexpected behavior. For example, unexpected behavior can include failure to maintain the specified schedule when daylight saving time (DST) starts and ends. To make sure that the recurrence time doesn't shift when DST takes effect, manually adjust the recurrence. That way, your workflow continues to run at the expected time. Otherwise, the start time shifts one hour forward when DST starts and one hour backward when DST ends. For more information, see [Recurrence for connection-based triggers](../connectors/apis-list.md#recurrence-for-connection-based-triggers).
+Connection-based triggers where you need to create a connection first, such as the SFTP-SSH trigger, differ from built-in triggers that run natively in Azure Logic Apps, such as the [Recurrence trigger](../connectors/connectors-native-recurrence.md). In connection-based recurrence triggers, the schedule isn't the only driver that controls execution, and the time zone only determines the initial start time. Subsequent runs depend on the recurrence schedule, the last trigger execution, *and* other factors that might cause run times to drift or produce unexpected behavior. For example, unexpected behavior can include failure to maintain the specified schedule when daylight saving time (DST) starts and ends. To make sure that the recurrence time doesn't shift when DST takes effect, manually adjust the recurrence. That way, your workflow continues to run at the expected time. Otherwise, the start time shifts one hour forward when DST starts and one hour backward when DST ends. For more information, see [Recurrence for connection-based triggers](../connectors/apis-list.md#recurrence-for-connection-based-triggers).
<a name="convert-to-openssh"></a>
If this trigger problem happens, remove the files from the folder that the trigg
To create a file on your SFTP server, you can use the SFTP-SSH **Create file** action. When this action creates the file, the Logic Apps service also automatically calls your SFTP server to get the file's metadata. However, if you move the newly created file before the Logic Apps service can make the call to get the metadata, you get a `404` error message, `'A reference was made to a file or folder which does not exist'`. To skip reading the file's metadata after file creation, follow the steps to [add and set the **Get all file metadata** property to **No**](#file-does-not-exist).
+> [!IMPORTANT]
+> If you use chunking with SFTP-SSH operations that create files on your SFTP server,
+> these operations create temporary `.partial` and `.lock` files. These files help
+> the operations use chunking. Don't remove or change these files. Otherwise,
+> the file operations fail. When the operations finish, they delete the temporary files.
+ <a name="connect"></a> ## Connect to SFTP with SSH
container-apps Microservices Dapr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-apps/microservices-dapr.md
Get the storage account key with the following command:
STORAGE_ACCOUNT_KEY=`az storage account keys list --resource-group $RESOURCE_GROUP --account-name $STORAGE_ACCOUNT --query '[0].value' --out tsv` ```
-```bash
-echo $STORAGE_ACCOUNT_KEY
-```
- # [PowerShell](#tab/powershell) ```powershell $STORAGE_ACCOUNT_KEY=(Get-AzStorageAccountKey -ResourceGroupName $RESOURCE_GROUP -AccountName $STORAGE_ACCOUNT)| Where-Object -Property KeyName -Contains 'key1' | Select-Object -ExpandProperty Value ```
-```powershell
-echo $STORAGE_ACCOUNT_KEY
-```
Create a config file named *components.yaml* with the properties that you source
# should be securely stored. For more information, see # https://docs.dapr.io/operations/components/component-secrets - name: accountName
- value: <YOUR_STORAGE_ACCOUNT_NAME>
+ secretRef: storage-account-name
- name: accountKey
- value: <YOUR_STORAGE_ACCOUNT_KEY>
+ secretRef: storage-account-key
- name: containerName
- value: <YOUR_STORAGE_CONTAINER_NAME>
+ value: mycontainer
```
-To use this file, make sure to replace the placeholder values between the `<>` brackets with your own values.
+To use this file, make sure to replace the value of `containerName` with your own value if you have changed the `STORAGE_ACCOUNT_CONTAINER` variable from its original value, `mycontainer`.
> [!NOTE] > Container Apps does not currently support the native [Dapr components schema](https://docs.dapr.io/operations/components/component-schema/). The above example uses the supported schema.
->
-> In a production-grade application, follow [secret management](https://docs.dapr.io/operations/components/component-secrets) instructions to securely manage your secrets.
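For reference, here's a sketch of what the full *components.yaml* might look like with these secret references. Only the metadata entries come from the fragment above; the component name (`statestore`), type, and version are assumptions for illustration, so confirm the exact supported schema in the Container Apps documentation:
```yaml
# Illustrative sketch only - confirm the supported schema before using.
- name: statestore                   # assumed component name
  type: state.azure.blobstorage      # Dapr Azure Blob Storage state store
  version: v1
  metadata:
  - name: accountName
    secretRef: storage-account-name  # resolved from the --secrets parameter
  - name: accountKey
    secretRef: storage-account-key   # resolved from the --secrets parameter
  - name: containerName
    value: mycontainer
```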
## Deploy the service application (HTTP web server)
az containerapp create \
--enable-dapr \ --dapr-app-port 3000 \ --dapr-app-id nodeapp \
+ --secrets "storage-account-name=${STORAGE_ACCOUNT},storage-account-key=${STORAGE_ACCOUNT_KEY}" \
--dapr-components ./components.yaml ```
az containerapp create `
--enable-dapr ` --dapr-app-port 3000 ` --dapr-app-id nodeapp `
+ --secrets "storage-account-name=${STORAGE_ACCOUNT},storage-account-key=${STORAGE_ACCOUNT_KEY}" `
--dapr-components ./components.yaml ```
Remove-AzResourceGroup -Name $RESOURCE_GROUP -Force
This command deletes the resource group that includes all of the resources created in this tutorial.
- [!NOTE]
+
+> [!NOTE]
> Since `pythonapp` continuously makes calls to `nodeapp` with messages that get persisted into your configured state store, it is important to complete these cleanup steps to avoid ongoing billable operations. > [!TIP]
container-apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-apps/overview.md
Applications built on Azure Container Apps can dynamically scale based on the fo
Azure Container Apps enables executing application code packaged in any container and is unopinionated about runtime or programming model. With Container Apps, you enjoy the benefits of running containers while leaving behind the concerns of managing cloud infrastructure and complex container orchestrators.
+## Features
+ With Azure Container Apps, you can: - [**Run multiple container revisions**](application-lifecycle-management.md) and manage the container app's application lifecycle.
With Azure Container Apps, you can:
<sup>1</sup> Applications that [scale on CPU or memory load](scale-app.md) can't scale to zero.
+## Introductory video
+
+> [!VIDEO https://www.youtube.com/embed/b3dopSTnSRg]
+ ### Next steps > [!div class="nextstepaction"]
container-registry Buffer Gate Public Content https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/buffer-gate-public-content.md
Title: Manage public content in private container registry
description: Practices and workflows in Azure Container Registry to manage dependencies on public images from Docker Hub and other public content -+ Last updated 02/01/2022
container-registry Container Registry Troubleshoot Login https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-troubleshoot-login.md
May include one or more of the following:
* Unable to login to registry and you receive error `unauthorized: authentication required` or `unauthorized: Application not registered with AAD` * Unable to login to registry and you receive Azure CLI error `Could not connect to the registry login server` * Unable to push or pull images and you receive Docker error `unauthorized: authentication required`
+* Unable to access a registry using `az acr login` and you receive error `CONNECTIVITY_REFRESH_TOKEN_ERROR. Access to registry was denied. Response code: 403.Unable to get admin user credentials with message: Admin user is disabled.Unable to authenticate using AAD or admin login credentials.`
* Unable to access registry from Azure Kubernetes Service, Azure DevOps, or another Azure service * Unable to access registry and you receive error `Error response from daemon: login attempt failed with status: 403 Forbidden` - See [Troubleshoot network issues with registry](container-registry-troubleshoot-access.md) * Unable to access or view registry settings in Azure portal or manage registry using the Azure CLI
May include one or more of the following:
* Docker isn't configured properly in your environment - [solution](#check-docker-configuration) * The registry doesn't exist or the name is incorrect - [solution](#specify-correct-registry-name) * The registry credentials aren't valid - [solution](#confirm-credentials-to-access-registry)
+* Public access to the registry is disabled, or public network access rules on the registry prevent access - [solution](container-registry-troubleshoot-access.md#configure-public-access-to-registry)
* The credentials aren't authorized for push, pull, or Azure Resource Manager operations - [solution](#confirm-credentials-are-authorized-to-access-registry) * The credentials are expired - [solution](#check-that-credentials-arent-expired)
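For example, the following commands illustrate a few of these fixes. They assume a registry named `myregistry` (and a Premium SKU for the network setting); adjust the names for your environment:
```azurecli
# Sign in to the registry with your Azure AD identity.
az acr login --name myregistry

# If you plan to authenticate with the admin user, make sure it's enabled.
az acr update --name myregistry --admin-enabled true

# If public network access to the registry was disabled, re-enable it
# (or configure private access as described in the linked article).
az acr update --name myregistry --public-network-enabled true
```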
cosmos-db Account Databases Containers Items https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/account-databases-containers-items.md
An Azure Cosmos container has a set of system-defined properties. Depending on w
|TimeToLive | User-configurable | Provides the ability to delete items automatically from a container after a set time period. For details, see [Time to Live](time-to-live.md). | Yes | No | No | No | Yes | |changeFeedPolicy | User-configurable | Used to read changes made to items in a container. For details, see [Change feed](change-feed.md). | Yes | No | No | No | Yes | |uniqueKeyPolicy | User-configurable | Used to ensure the uniqueness of one or more values in a logical partition. For more information, see [Unique key constraints](unique-keys.md). | Yes | No | No | No | Yes |
+|AnalyticalTimeToLive | User-configurable | Provides the ability to delete items automatically from a container's analytical store after a set time period. For details, see [analytical store Time to Live](analytical-store-introduction.md). | Yes | No | Yes | No | No |
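For example, here's a sketch of setting this property when you create a container by using the Azure CLI. The account, resource group, database, and container names are placeholders:
```azurecli
az cosmosdb sql container create \
  --account-name my-cosmos-account \
  --resource-group my-resource-group \
  --database-name my-database \
  --name my-container \
  --partition-key-path "/id" \
  --analytical-storage-ttl -1
```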
### Operations on an Azure Cosmos container
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/analytical-store-introduction.md
Analytical store partitioning is completely independent of partitioning in
## Security
-* **Authentication with the analytical store** is the same as the transactional store for a given database. You can use primary, secondary, or read-only keys for authentication. You can leverage linked service in Synapse Studio to prevent pasting the Azure Cosmos DB keys in the Spark notebooks. For Azure Synapse SQL serverless, you can use SQL credentials to also prevent pasting the Azure Cosmos DB keys in the SQL notebooks. The Access to these Linked Services or to these SQL credentials are available to anyone who has access to the workspace.
+* **Authentication with the analytical store** is the same as the transactional store for a given database. You can use primary, secondary, or read-only keys for authentication. You can leverage a linked service in Synapse Studio to avoid pasting the Azure Cosmos DB keys in Spark notebooks. For Azure Synapse SQL serverless, you can use SQL credentials to avoid pasting the Azure Cosmos DB keys in SQL notebooks. Access to these linked services or SQL credentials is available to anyone who has access to the workspace. Note that the Azure Cosmos DB read-only key can also be used.
* **Network isolation using private endpoints** - You can control network access to the data in the transactional and analytical stores independently. Network isolation is done using separate managed private endpoints for each store, within managed virtual networks in Azure Synapse workspaces. To learn more, see how to [Configure private endpoints for analytical store](analytical-store-private-endpoints.md) article.
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/introduction.md
Build fast with open source APIs, multiple SDKs, schemaless data and no-ETL anal
- Deeply integrated with key Azure services used in modern (cloud-native) app development including Azure Functions, IoT Hub, AKS (Azure Kubernetes Service), App Service, and more. - Choose from multiple database APIs including the native Core (SQL) API, API for MongoDB, Cassandra API, Gremlin API, and Table API. - Build apps on Core (SQL) API using the languages of your choice with SDKs for .NET, Java, Node.js and Python. Or your choice of drivers for any of the other database APIs.-- Run no-ETL analytics over the near-real time operational data stored in Azure Cosmos DB with Azure Synapse Analytics. - Change feed makes it easy to track and manage changes to database containers and create triggered events with Azure Functions. - Azure Cosmos DB's schema-less service automatically indexes all your data, regardless of the data model, to deliver blazing fast queries.
End-to-end database management, with serverless and automatic scaling matching y
- Serverless model offers spiky workloads automatic and responsive service to manage traffic bursts on demand. - Autoscale provisioned throughput automatically and instantly scales capacity for unpredictable workloads, while maintaining [SLAs](https://azure.microsoft.com/support/legal/sla/cosmos-db).
+### Azure Synapse Link for Azure Cosmos DB
+
+[Azure Synapse Link for Azure Cosmos DB](synapse-link.md) is a cloud-native hybrid transactional and analytical processing (HTAP) capability that enables near real time analytics over operational data in Azure Cosmos DB. Azure Synapse Link creates a tight seamless integration between Azure Cosmos DB and Azure Synapse Analytics.
+
+- Reduced analytics complexity with no ETL jobs to manage.
+- Near real-time insights into your operational data.
+- No impact on operational workloads.
+- Optimized for large-scale analytics workloads.
+- Cost effective.
+- Analytics for locally available, globally distributed, multi-region writes.
+- Native integration with Azure Synapse Analytics.
++ ## Solutions that benefit from Azure Cosmos DB Any [web, mobile, gaming, and IoT application](use-cases.md) that needs to handle massive amounts of data, reads, and writes at a [global scale](distribute-data-globally.md) with near-real response times for a variety of data will benefit from Cosmos DB's [guaranteed high availability](https://azure.microsoft.com/support/legal/sl#web-and-mobile-applications).
cosmos-db Mongodb Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb/mongodb-introduction.md
The Azure Cosmos DB API for MongoDB makes it easy to use Cosmos DB as if it were
The API for MongoDB has numerous added benefits of being built on [Azure Cosmos DB](../introduction.md) when compared to service offerings such as MongoDB Atlas: * **Instantaneous scalability**: By enabling the [Autoscale](../provision-throughput-autoscale.md) feature, your database can scale up/down with zero warmup period.
-* **Automatic and transparent sharding**: The API for MongoDB manages all of the infrastructure for you. This includes sharding and the number of shards, unlike other MongoDB offerings such as MongoDB Atlas, which require your to specify and manage sharding to horizontally scale. This gives you more time to focus on developing applications for your users.
+* **Automatic and transparent sharding**: The API for MongoDB manages all of the infrastructure for you. This includes sharding and the number of shards, unlike other MongoDB offerings such as MongoDB Atlas, which require you to specify and manage sharding to horizontally scale. This gives you more time to focus on developing applications for your users.
* **Five 9's of availability**: [99.999% availability](../high-availability.md) is easily configurable to ensure your data is always there for you.
-* **Cost efficient, granular, unlimited scalability**: Sharded collections can scale to any size, unlike other MongoDB service offerings. API for MongoDB users are running databases with over 600TB of storage today. Scaling is done in a cost-efficient manner, since unlike other MongoDB service offering, the Cosmos DB platform can scale in increments as small as 1/100th of a VM due to economies of scale and resource governance.
+* **Cost efficient, granular, unlimited scalability**: Sharded collections can scale to any size, unlike other MongoDB service offerings. API for MongoDB users are running databases with over 600TB of storage today. Scaling is done in a cost-efficient manner, since unlike other MongoDB service offerings, the Cosmos DB platform can scale in increments as small as 1/100th of a VM due to economies of scale and resource governance.
* **Serverless deployments**: Unlike MongoDB Atlas, the API for MongoDB is a cloud native database that offers a [serverless capacity mode](../serverless.md). With [Serverless](../serverless.md), you are only charged per operation, and don't pay for the database when you don't use it. * **Free Tier**: With Azure Cosmos DB free tier, you'll get the first 1000 RU/s and 25 GB of storage in your account for free forever, applied at the account level. * **Upgrades take seconds**: All API versions are contained within one codebase, making version changes as simple as [flipping a switch](upgrade-mongodb-version.md), with zero downtime.
Azure Cosmos DB API for MongoDB is compatible with the following MongoDB server
- [Version 3.6](feature-support-36.md) - [Version 3.2](feature-support-32.md)
-All the API for MongoDB versions run on the same codebase, making upgrades a simple task that can be completed in seconds with zero downtime. Azure Cosmos DB simply flips a few feature flags to go from one version to another. The feature flags also enable continued support for older API versions such as 3.2 and 3.6. You can choose the server version that works best for you.
+All API for MongoDB versions run on the same codebase, making upgrades a simple task that can be completed in seconds with zero downtime. Azure Cosmos DB simply flips a few feature flags to go from one version to another. The feature flags also enable continued support for older API versions such as 3.2 and 3.6. You can choose the server version that works best for you.
:::image type="content" source="./media/mongodb-introduction/cosmosdb-mongodb.png" alt-text="Azure Cosmos DB's API for MongoDB" border="false":::
Trying to do capacity planning for a migration to Azure Cosmos DB? You can use i
* Connect to a Cosmos account using [Robo 3T](connect-using-robomongo.md). * Learn how to [Configure read preferences for globally distributed apps](tutorial-global-distribution-mongodb.md). * Find the solutions to commonly found errors in our [Troubleshooting guide](error-codes-solutions.md)
+* Configure near real time analytics with [Azure Synapse Link for Azure Cosmos DB](../configure-synapse-link.md)
<sup>Note: This article describes a feature of Azure Cosmos DB that provides wire protocol compatibility with MongoDB databases. Microsoft does not run MongoDB databases to provide this service. Azure Cosmos DB is not affiliated with MongoDB, Inc.</sup>
cosmos-db Performance Tips Java Sdk V4 Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/performance-tips-java-sdk-v4-sql.md
For a variety of reasons, you may want or need to add logging in a thread which
* ***Configure an async logger***
-The latency of a synchronous logger necessarily factors into the overall latency calculation of your request-generating thread. An async logger such as [log4j2](https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Flogging.apache.org%2Flog4j%2Flog4j-2.3%2Fmanual%2Fasync.html&data=02%7C01%7CCosmosDBPerformanceInternal%40service.microsoft.com%7C36fd15dea8384bfe9b6b08d7c0cf2113%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637189868158267433&sdata=%2B9xfJ%2BWE%2F0CyKRPu9AmXkUrT3d3uNA9GdmwvalV3EOg%3D&reserved=0) is recommended to decouple logging overhead from your high-performance application threads.
+The latency of a synchronous logger necessarily factors into the overall latency calculation of your request-generating thread. An async logger such as [log4j2](https://logging.apache.org/log4j/log4j-2.3/manual/async.html) is recommended to decouple logging overhead from your high-performance application threads.
* ***Disable netty's logging***
data-lake-store Data Lake Store Get Started Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-store/data-lake-store-get-started-python.md
To work with Data Lake Storage Gen1 using Python, you need to install three modu
Use the following commands to install the modules. ```console
+pip install azure-identity
pip install azure-mgmt-resource
pip install azure-mgmt-datalake-store
pip install azure-datalake-store
pip install azure-datalake-store
2. Add the following snippet to import the required modules ```python
- ## Use this only for Azure AD service-to-service authentication
- from azure.common.credentials import ServicePrincipalCredentials
-
- ## Use this only for Azure AD end-user authentication
- from azure.common.credentials import UserPassCredentials
-
- ## Use this only for Azure AD multi-factor authentication
- from msrestazure.azure_active_directory import AADTokenCredentials
+ # Acquire a credential object for the app identity. When running in the cloud,
+ # DefaultAzureCredential uses the app's managed identity (MSI) or user-assigned service principal.
+ # When run locally, DefaultAzureCredential relies on environment variables named
+ # AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, and AZURE_TENANT_ID.
+ from azure.identity import DefaultAzureCredential
## Required for Data Lake Storage Gen1 account management from azure.mgmt.datalake.store import DataLakeStoreAccountManagementClient
- from azure.mgmt.datalake.store.models import DataLakeStoreAccount
+ from azure.mgmt.datalake.store.models import CreateDataLakeStoreAccountParameters
## Required for Data Lake Storage Gen1 filesystem management from azure.datalake.store import core, lib, multithread # Common Azure imports
+ import adal
from azure.mgmt.resource.resources import ResourceManagementClient from azure.mgmt.resource.resources.models import ResourceGroup
- ## Use these as needed for your application
+ # Use these as needed for your application
import logging, getpass, pprint, uuid, time ```
subscriptionId = 'FILL-IN-HERE'
adlsAccountName = 'FILL-IN-HERE' resourceGroup = 'FILL-IN-HERE' location = 'eastus2'
+credential = DefaultAzureCredential()
## Create Data Lake Storage Gen1 account management client object
-adlsAcctClient = DataLakeStoreAccountManagementClient(armCreds, subscriptionId)
+adlsAcctClient = DataLakeStoreAccountManagementClient(credential, subscription_id=subscriptionId)
## Create a Data Lake Storage Gen1 account
-adlsAcctResult = adlsAcctClient.account.create(
+adlsAcctResult = adlsAcctClient.accounts.begin_create(
resourceGroup, adlsAccountName,
- DataLakeStoreAccount(
+ CreateDataLakeStoreAccountParameters(
location=location )
-).wait()
+)
```
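Note that `begin_create` in the updated management SDK returns a poller rather than blocking. If your script needs to wait for provisioning to finish and use the resulting account object, a small sketch (reusing the variable names above) would be:

```python
## begin_create returns an LROPoller; result() blocks until the
## long-running create operation completes and returns the account model.
adls_account = adlsAcctResult.result()
print(adls_account.name)
```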
adlsAcctResult = adlsAcctClient.account.create(
```python ## List the existing Data Lake Storage Gen1 accounts
-result_list_response = adlsAcctClient.account.list()
+result_list_response = adlsAcctClient.accounts.list()
result_list = list(result_list_response) for items in result_list: print(items)
for items in result_list:
```python ## Delete an existing Data Lake Storage Gen1 account
-adlsAcctClient.account.delete(adlsAccountName)
+adlsAcctClient.accounts.begin_delete(resourceGroup, adlsAccountName)
```
adlsAcctClient.account.delete(adlsAccountName)
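The filesystem modules imported earlier (`core`, `lib`, and `multithread` from `azure-datalake-store`) handle data operations rather than account management. As an illustrative sketch only, with placeholder tenant, client, directory, and file values you would replace, basic filesystem calls could look like this:

```python
from azure.datalake.store import core, lib, multithread

## Authenticate with an Azure AD service principal (placeholder values).
adlCreds = lib.auth(
    tenant_id='FILL-IN-HERE',
    client_id='FILL-IN-HERE',
    client_secret='FILL-IN-HERE')

## Create a filesystem client for the Data Lake Storage Gen1 account.
adlsFileSystemClient = core.AzureDLFileSystem(adlCreds, store_name=adlsAccountName)

## Create a directory, upload a local file into it, and list the contents.
adlsFileSystemClient.mkdir('/mysampledirectory')
multithread.ADLUploader(
    adlsFileSystemClient,
    lpath='mysamplefile.txt',
    rpath='/mysampledirectory/mysamplefile.txt',
    overwrite=True)
print(adlsFileSystemClient.ls('/mysampledirectory'))
```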
## See also * [azure-datalake-store Python (Filesystem) reference](/python/api/azure-datalake-store/azure.datalake.store.core)
-* [Open Source Big Data applications compatible with Azure Data Lake Storage Gen1](data-lake-store-compatible-oss-other-applications.md)
+* [Open Source Big Data applications compatible with Azure Data Lake Storage Gen1](data-lake-store-compatible-oss-other-applications.md)
data-share How To Share From Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/how-to-share-from-sql.md
Previously updated : 12/17/2021 Last updated : 02/02/2022 # Share and receive data from Azure SQL Database and Azure Synapse Analytics [!INCLUDE [appliesto-sql](includes/appliesto-sql.md)]
-Azure Data Share allows you to securely share data snapshots from your Azure SQL Database and Azure Synapse Analytics resources, to other Azure subscriptions. Including Azure subscriptions outside your tenant. This article will guide you through what kinds of data can be shared, how to prepare you environment, how to create a share, and how to receive shared data.
+[Azure Data Share](overview.md) allows you to securely share data snapshots from your Azure SQL Database and Azure Synapse Analytics resources to other Azure subscriptions, including Azure subscriptions outside your tenant.
+
+This article describes sharing data from **Azure SQL Database** and **Azure Synapse Analytics**, but Azure Data Share also allows sharing from these other kinds of resources:
+
+- [Azure Storage](how-to-share-from-storage.md)
+- [Azure Data Explorer](/data-explorer/data-share.md)
+
+This article will guide you through:
+
+- [What kinds of data can be shared](#whats-supported)
+- [How to prepare your environment](#prerequisites-to-share-data)
+- [How to create a share](#create-a-share)
+- [How to receive shared data](#receive-shared-data)
You can use the table of contents to jump to the section you need, or continue with this article to follow the process from start to finish.
Azure Data Share supports sharing full data snapshots from several SQL resources
> [!NOTE] > Currently, Azure Data Share does not support sharing from these resources:
-> * Azure Synapse Analytics (workspace) serverless SQL pool
-> * Azure SQL databases with Always Encrypted configured
+>
+> - Azure Synapse Analytics (workspace) serverless SQL pool
+> - Azure SQL databases with Always Encrypted configured
-### Receive shared data
+### Receive data
Data consumers can choose to accept shared data into several Azure resources:
-* Azure Data Lake Storage Gen2
-* Azure Blob Storage
-* Azure SQL Database
-* Azure Synapse Analytics
+- Azure Data Lake Storage Gen2
+- Azure Blob Storage
+- Azure SQL Database
+- Azure Synapse Analytics
Shared data in **Azure Data Lake Storage Gen 2** or **Azure Blob Storage** can be stored as a csv or parquet file. Full data snapshots overwrite the contents of the target file if it already exists.
-Shared data in **Azure SQL Database** and **Azure Synapse Analytics** is stored in tables. If the target table doesn't already exist, Azure Data Share creates the SQL table with the source schema. If a target table with the same name already exists, it will be dropped and overwritten with the latest full snapshot.
+Shared data in **Azure SQL Database** and **Azure Synapse Analytics** is stored in tables. If the target table doesn't already exist, Azure Data Share creates the SQL table with the source schema. If a target table with the same name already exists, it will be dropped and overwritten with the latest full snapshot.
->[!NOTE]
+>[!NOTE]
> For source SQL tables with dynamic data masking, data will appear masked on the recipient side. ### Supported data types
-When you share data from a SQL source, the following mappings are used from SQL Server data types to Azure Data Share interim data types during the snapshot process.
+
+When you share data from a SQL source, the following mappings are used from SQL Server data types to Azure Data Share interim data types during the snapshot process.
>[!NOTE]
-> 1. For data types that map to the Decimal interim type, currently snapshot supports precision up to 28. If you have data that requires precision larger than 28, consider converting to a string.
-> 1. If you are sharing data from Azure SQL database to Azure Synapse Analytics, not all data types are supported. Refer to [Table data types in dedicated SQL pool](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-data-types.md) for details.
+>
+> 1. For data types that map to the Decimal interim type, currently snapshot supports precision up to 28. If you have data that requires precision larger than 28, consider converting to a string.
+> 1. If you are sharing data from Azure SQL database to Azure Synapse Analytics, not all data types are supported. Refer to [Table data types in dedicated SQL pool](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-data-types.md) for details.
| SQL Server data type | Azure Data Share interim data type | |: |: |
When you share data from a SQL source, the following mappings are used from SQL
| varchar |String, Char[] | | xml |String | -
-## Prerequisites to share data
+## Prerequisites to share data
To share data snapshots from your Azure SQL resources, you first need to prepare your environment. You'll need:
-* An Azure subscription: If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
-* An [Azure SQL Database](../azure-sql/database/single-database-create-quickstart.md) or [Azure Synapse Analytics (formerly Azure SQL DW)](../synapse-analytics/get-started-create-workspace.md) with tables and views that you want to share.
-* [An Azure Data Share account](share-your-data-portal.md#create-a-data-share-account).
-* Your data recipient's Azure sign-in e-mail address (using their e-mail alias won't work).
-* If your Azure SQL resource is in a different Azure subscription than your Azure Data Share account, register the [Microsoft.DataShare resource provider](concepts-roles-permissions.md#resource-provider-registration) in the subscription where your source Azure SQL resource is located.
+- An Azure subscription: If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- An [Azure SQL Database](../azure-sql/database/single-database-create-quickstart.md) or [Azure Synapse Analytics (formerly Azure SQL DW)](../synapse-analytics/get-started-create-workspace.md) with tables and views that you want to share.
+- [An Azure Data Share account](share-your-data-portal.md#create-a-data-share-account).
+- Your data recipient's Azure sign-in e-mail address (using their e-mail alias won't work).
+- If your Azure SQL resource is in a different Azure subscription than your Azure Data Share account, register the [Microsoft.DataShare resource provider](concepts-roles-permissions.md#resource-provider-registration) in the subscription where your source Azure SQL resource is located.
-There are also source-specific prerequisites for sharing. Select your data share source and follow the steps:
+### Source-specific prerequisites
-* [Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW)](#prerequisitesforsharingazuresqlorsynapse)
-* [Azure Synapse Analytics (workspace) SQL pool](#prerequisitesforsharingazuresynapseworkspace)
+There are also prerequisites for sharing that depend on where your data is coming from. Select your data share source and follow the steps:
+
+- [Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW)](#prerequisitesforsharingazuresqlorsynapse)
+- [Azure Synapse Analytics (workspace) SQL pool](#prerequisitesforsharingazuresynapseworkspace)
<a id="prerequisitesforsharingazuresqlorsynapse"></a> ### Prerequisites for sharing from Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW) You can use one of these methods to authenticate with Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW):
-* [Azure Active Directory authentication](#azure-active-directory-authentication)
-* [SQL authentication](#sql-authentication)
+
+- [Azure Active Directory authentication](#azure-active-directory-authentication)
+- [SQL authentication](#sql-authentication)
#### Azure Active Directory authentication These prerequisites cover the authentication you'll need so Azure Data Share can connect with your Azure SQL Database:
-* You'll need permission to write to the databases on SQL server: *Microsoft.Sql/servers/databases/write*. This permission exists in the Azure RBAC **Contributor** role.
-* SQL Server **Azure Active Directory Admin** permissions.
-* SQL Server Firewall access:
+- You'll need permission to write to the databases on SQL server: *Microsoft.Sql/servers/databases/write*. This permission exists in the Azure RBAC **Contributor** role.
+- SQL Server **Azure Active Directory Admin** permissions.
+- SQL Server Firewall access:
1. In the [Azure portal](https://portal.azure.com/), navigate to your SQL server. Select *Firewalls and virtual networks* from left navigation. 1. Select **Yes** for *Allow Azure services and resources to access this server*. 1. Select **+Add client IP**. Client IP address can change, so you may need to add your client IP again next time you share data from the portal.
These prerequisites cover the authentication you'll need so Azure Data Share can
You can follow the [step by step demo video](https://youtu.be/hIE-TjJD8Dc) to configure authentication, or complete each of these prerequisites:
-* Permission to write to the databases on SQL server: *Microsoft.Sql/servers/databases/write*. This permission exists in the **Contributor** role.
-* Permission for the Azure Data Share resource's managed identity to access the database:
+- Permission to write to the databases on SQL server: *Microsoft.Sql/servers/databases/write*. This permission exists in the **Contributor** role.
+- Permission for the Azure Data Share resource's managed identity to access the database:
1. In the [Azure portal](https://portal.azure.com/), navigate to the SQL server and set yourself as the **Azure Active Directory Admin**.
- 1. Connect to the Azure SQL Database/Data Warehouse using the [Query Editor](../azure-sql/database/connect-query-portal.md#connect-using-azure-active-directory) or SQL Server Management Studio with Azure Active Directory authentication.
- 1. Execute the following script to add the Data Share resource-Managed Identity as a db_datareader. Connect using Active Directory and not SQL Server authentication.
-
+ 1. Connect to the Azure SQL Database/Data Warehouse using the [Query Editor](../azure-sql/database/connect-query-portal.md#connect-using-azure-active-directory) or SQL Server Management Studio with Azure Active Directory authentication.
+ 1. Execute the following script to add the Data Share resource-Managed Identity as a db_datareader. Connect using Active Directory and not SQL Server authentication.
+ ```sql create user "<share_acct_name>" from external provider; exec sp_addrolemember db_datareader, "<share_acct_name>";
- ```
+ ```
+ > [!Note]
- > The *<share_acc_name>* is the name of your Data Share resource.
+ > The *<share_acct_name>* is the name of your Data Share resource.
-* An Azure SQL Database User with **'db_datareader'** access to navigate and select the tables or views you wish to share.
+- An Azure SQL Database User with **'db_datareader'** access to navigate and select the tables or views you wish to share.
-* SQL Server Firewall access:
+- SQL Server Firewall access:
1. In the [Azure portal](https://portal.azure.com/), navigate to SQL server. Select *Firewalls and virtual networks* from left navigation. 1. Select **Yes** for *Allow Azure services and resources to access this server*. 1. Select **+Add client IP**. Client IP address can change, so you may need to add your client IP again next time you share data from the portal.
- 1. Select **Save**.
+ 1. Select **Save**.
<a id="prerequisitesforsharingazuresynapseworkspace"></a> ### Prerequisites for sharing from Azure Synapse Analytics (workspace) SQL pool
-* Permission to write to the SQL pool in Synapse workspace: *Microsoft.Synapse/workspaces/sqlPools/write*. This permission exists in the **Contributor** role.
-* Permission for the Data Share resource's managed identity to access Synapse workspace SQL pool:
+- Permission to write to the SQL pool in Synapse workspace: *Microsoft.Synapse/workspaces/sqlPools/write*. This permission exists in the **Contributor** role.
+- Permission for the Data Share resource's managed identity to access Synapse workspace SQL pool:
1. In the [Azure portal](https://portal.azure.com/), navigate to your Synapse workspace. Select **SQL Active Directory admin** from left navigation and set yourself as the **Azure Active Directory admin**. 1. Open the Synapse Studio, select **Manage** from the left navigation. Select **Access control** under Security. Assign yourself the **SQL admin** or **Workspace admin** role.
- 1. Select **Develop** from the left navigation in the Synapse Studio. Execute the following script in SQL pool to add the Data Share resource-Managed Identity as a db_datareader.
-
+ 1. Select **Develop** from the left navigation in the Synapse Studio. Execute the following script in SQL pool to add the Data Share resource-Managed Identity as a db_datareader.
+ ```sql create user "<share_acct_name>" from external provider; exec sp_addrolemember db_datareader, "<share_acct_name>";
- ```
+ ```
+ > [!Note] > The *<share_acct_name>* is the name of your Data Share resource.
-* Synapse workspace Firewall access:
+- Synapse workspace Firewall access:
1. In the [Azure portal](https://portal.azure.com/), navigate to Synapse workspace. Select **Firewalls** from left navigation. 1. Select **ON** for **Allow Azure services and resources to access this workspace**. 1. Select **+Add client IP**. Client IP address can change, so you may need to add your client IP again next time you share data from the portal.
- 1. Select **Save**.
+ 1. Select **Save**.
## Create a share 1. Navigate to your Data Share Overview page.
- ![Share your data](./media/share-receive-data.png "Share your data")
+ ![Share your data](./media/share-receive-data.png "Share your data")
1. Select **Start sharing your data**.
-1. Select **Create**.
+1. Select **Create**.
-1. Fill out the details for your share. Specify a name, share type, description of share contents, and terms of use (optional).
+1. Fill out the details for your share. Specify a name, share type, description of share contents, and terms of use (optional).
- ![EnterShareDetails](./media/enter-share-details.png "Enter Share details")
+ ![EnterShareDetails](./media/enter-share-details.png "Enter Share details")
1. Select **Continue**.
-1. To add Datasets to your share, select **Add Datasets**.
+1. To add Datasets to your share, select **Add Datasets**.
![Add Datasets to your share](./media/datasets.png "Datasets")
-1. Select the dataset type that you would like to add. There will be a different list of dataset types depending on the share type (snapshot or in-place) you selected in the previous step.
+1. Select the dataset type that you would like to add. There will be a different list of dataset types depending on the share type (snapshot or in-place) you selected in the previous step.
- ![AddDatasets](./media/add-datasets.png "Add Datasets")
+ ![AddDatasets](./media/add-datasets.png "Add Datasets")
-1. Select your SQL server or Synapse workspace. If you're using Azure Active Directory authentication and the checkbox **Allow Data Share to run the above 'create user' SQL script on my behalf** appears, check the checkbox. If you're using SQL authentication, provide credentials, and be sure you have followed the prerequisites so that you have permissions.
+1. Select your SQL server or Synapse workspace. If you're using Azure Active Directory authentication and the checkbox **Allow Data Share to run the above 'create user' SQL script on my behalf** appears, check the checkbox. If you're using SQL authentication, provide credentials, and be sure you've followed the prerequisites so that you have permissions.
- Select **Next** to navigate to the object you would like to share and select 'Add Datasets'. You can select tables and views from Azure SQL Database and Azure Synapse Analytics (formerly Azure SQL DW), or tables from Azure Synapse Analytics (workspace) dedicated SQL pool.
+ Select **Next** to navigate to the object you would like to share and select 'Add Datasets'. You can select tables and views from Azure SQL Database and Azure Synapse Analytics (formerly Azure SQL DW), or tables from Azure Synapse Analytics (workspace) dedicated SQL pool.
- ![SelectDatasets](./media/select-datasets-sql.png "Select Datasets")
+ ![SelectDatasets](./media/select-datasets-sql.png "Select Datasets")
-1. In the Recipients tab, enter in the email addresses of your Data Consumer by selecting '+ Add Recipient'. The email address needs to be recipient's Azure sign-in email.
+1. In the Recipients tab, enter the email addresses of your Data Consumer by selecting '+ Add Recipient'. The email address needs to be the recipient's Azure sign-in email.
- ![AddRecipients](./media/add-recipient.png "Add recipients")
+ ![AddRecipients](./media/add-recipient.png "Add recipients")
1. Select **Continue**.
-1. If you have selected snapshot share type, you can configure snapshot schedule to provide updates of your data to your data consumer.
+1. If you have selected the snapshot share type, you can configure a snapshot schedule to provide updates of your data to your data consumer.
- ![EnableSnapshots](./media/enable-snapshots.png "Enable snapshots")
+ ![EnableSnapshots](./media/enable-snapshots.png "Enable snapshots")
-1. Select a start time and recurrence interval.
+1. Select a start time and recurrence interval.
1. Select **Continue**. 1. In the Review + Create tab, review your Package Contents, Settings, Recipients, and Synchronization Settings. Select **Create**.
-Your Azure Data Share has now been created and the recipient of your Data Share can now accept your invitation.
+Your Azure Data Share has now been created and the recipient of your Data Share can now accept your invitation.
## Prerequisites to receive data+ Before you can accept a data share invitation, you need to prepare your environment. Confirm that all prerequisites are complete before accepting a data share invitation:
-* Azure Subscription: If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
-* A Data Share invitation: An invitation from Microsoft Azure with a subject titled "Azure Data Share invitation from **<yourdataprovider@domain.com>**".
-* Register the [Microsoft.DataShare resource provider](concepts-roles-permissions.md#resource-provider-registration) in the Azure subscription where you will create a Data Share resource and the Azure subscription where your target Azure data stores are located.
-* You'll need a resource in Azure to store the shared data. You can use these kinds of resources:
- - [Azure Storage](../storage/common/storage-account-create.md)
- - [Azure SQL Database](../azure-sql/database/single-database-create-quickstart.md)
- - [Azure Synapse Analytics (formerly Azure SQL DW)](../synapse-analytics/get-started-create-workspace.md)
- - [Azure Synapse Analytics (workspace) dedicated SQL pool](../synapse-analytics/get-started-analyze-sql-pool.md)
+- Azure Subscription: If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- A Data Share invitation: An invitation from Microsoft Azure with a subject titled "Azure Data Share invitation from **<yourdataprovider@domain.com>**".
+- Register the [Microsoft.DataShare resource provider](concepts-roles-permissions.md#resource-provider-registration) in the Azure subscription where you'll create a Data Share resource and the Azure subscription where your target Azure data stores are located.
+- You'll need a resource in Azure to store the shared data. You can use these kinds of resources:
+ - [Azure Storage](../storage/common/storage-account-create.md)
+ - [Azure SQL Database](../azure-sql/database/single-database-create-quickstart.md)
+ - [Azure Synapse Analytics (formerly Azure SQL DW)](../synapse-analytics/get-started-create-workspace.md)
+ - [Azure Synapse Analytics (workspace) dedicated SQL pool](../synapse-analytics/get-started-analyze-sql-pool.md)
-There are also prerequisites for the resource where the received data will be stored.
+There are also prerequisites for the resource where the received data will be stored.
Select your resource type and follow the steps:
-* [Azure Storage prerequisites](#prerequisites-for-target-storage-account)
-* [Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW) prerequisites](#prerequisitesforreceivingtoazuresqlorsynapse)
-* [Azure Synapse Analytics (workspace) SQL pool prerequisites](#prerequisitesforreceivingtoazuresynapseworkspacepool)
+- [Azure Storage prerequisites](#prerequisites-for-target-storage-account)
+- [Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW) prerequisites](#prerequisitesforreceivingtoazuresqlorsynapse)
+- [Azure Synapse Analytics (workspace) SQL pool prerequisites](#prerequisitesforreceivingtoazuresynapseworkspacepool)
### Prerequisites for target storage account+ If you choose to receive data into Azure Storage, complete these prerequisites before accepting a data share:
-* An [Azure Storage account](../storage/common/storage-account-create.md).
-* Permission to write to the storage account: *Microsoft.Storage/storageAccounts/write*. This permission exists in the Azure RBAC **Contributor** role.
-* Permission to add role assignment of the Data Share resource's managed identity to the storage account: which is present in *Microsoft.Authorization/role assignments/write*. This permission exists in the Azure RBAC **Owner** role.
+- An [Azure Storage account](../storage/common/storage-account-create.md).
+- Permission to write to the storage account: *Microsoft.Storage/storageAccounts/write*. This permission exists in the Azure RBAC **Contributor** role.
+- Permission to add the role assignment of the Data Share resource's managed identity to the storage account. This permission is in *Microsoft.Authorization/role assignments/write* and exists in the Azure RBAC **Owner** role.
<a id="prerequisitesforreceivingtoazuresqlorsynapse"></a>
-### Prerequisites for receiving data into Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW)
+### Prerequisites for receiving data into Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW)
For a SQL server where you're the **Azure Active Directory admin** of the SQL server, complete these prerequisites before accepting a data share:
-* An [Azure SQL Database](../azure-sql/database/single-database-create-quickstart.md) or [Azure Synapse Analytics (formerly Azure SQL DW)](../synapse-analytics/get-started-create-workspace.md).
-* Permission to write to the databases on SQL server: *Microsoft.Sql/servers/databases/write*. This permission exists in the Azure RBAC **Contributor** role.
-* SQL Server Firewall access:
+- An [Azure SQL Database](../azure-sql/database/single-database-create-quickstart.md) or [Azure Synapse Analytics (formerly Azure SQL DW)](../synapse-analytics/get-started-create-workspace.md).
+- Permission to write to the databases on SQL server: *Microsoft.Sql/servers/databases/write*. This permission exists in the Azure RBAC **Contributor** role.
+- SQL Server Firewall access:
1. In the [Azure portal](https://portal.azure.com/), navigate to your SQL server. Select **Firewalls and virtual networks** from left navigation. 1. Select **Yes** for *Allow Azure services and resources to access this server*. 1. Select **+Add client IP**. Client IP address can change, so you may need to add your client IP again next time you share data from the portal.
- 1. Select **Save**.
-
-For a SQL server where you're **not** the **Azure Active Directory admin**, complete these prerequisites before accepting a data share:
+ 1. Select **Save**.
+
+For a SQL server where you're **not** the **Azure Active Directory admin**, complete these prerequisites before accepting a data share:
You can follow the [step by step demo video](https://youtu.be/aeGISgK1xro), or the steps below to configure prerequisites.
-* An [Azure SQL Database](../azure-sql/database/single-database-create-quickstart.md) or [Azure Synapse Analytics (formerly Azure SQL DW)](../synapse-analytics/get-started-create-workspace.md).
-* Permission to write to databases on the SQL server: *Microsoft.Sql/servers/databases/write*. This permission exists in the Azure RBAC **Contributor** role.
-* Permission for the Data Share resource's managed identity to access the Azure SQL Database or Azure Synapse Analytics:
+- An [Azure SQL Database](../azure-sql/database/single-database-create-quickstart.md) or [Azure Synapse Analytics (formerly Azure SQL DW)](../synapse-analytics/get-started-create-workspace.md).
+- Permission to write to databases on the SQL server: *Microsoft.Sql/servers/databases/write*. This permission exists in the Azure RBAC **Contributor** role.
+- Permission for the Data Share resource's managed identity to access the Azure SQL Database or Azure Synapse Analytics:
1. In the [Azure portal](https://portal.azure.com/), navigate to the SQL server and set yourself as the **Azure Active Directory Admin**.
- 1. Connect to the Azure SQL Database/Data Warehouse using the [Query Editor](../azure-sql/database/connect-query-portal.md#connect-using-azure-active-directory) or SQL Server Management Studio with Azure Active Directory authentication.
+ 1. Connect to the Azure SQL Database/Data Warehouse using the [Query Editor](../azure-sql/database/connect-query-portal.md#connect-using-azure-active-directory) or SQL Server Management Studio with Azure Active Directory authentication.
1. Execute the following script to add the Data Share Managed Identity as a 'db_datareader, db_datawriter, db_ddladmin'. ```sql
You can follow the [step by step demo video](https://youtu.be/aeGISgK1xro), or t
exec sp_addrolemember db_datareader, "<share_acc_name>"; exec sp_addrolemember db_datawriter, "<share_acc_name>"; exec sp_addrolemember db_ddladmin, "<share_acc_name>";
- ```
+ ```
+ > [!Note]
- > The *<share_acc_name>* is the name of your Data Share resource.
+ > The *<share_acc_name>* is the name of your Data Share resource.
-* SQL Server Firewall access:
+- SQL Server Firewall access:
1. In the [Azure portal](https://portal.azure.com/), navigate to the SQL server and select **Firewalls and virtual networks**. 1. Select **Yes** for **Allow Azure services and resources to access this server**. 1. Select **+Add client IP**. Client IP address can change, so you may need to add your client IP again next time you share data from the portal.
- 1. Select **Save**.
-
+ 1. Select **Save**.
+ <a id="prerequisitesforreceivingtoazuresynapseworkspacepool"></a> ### Prerequisites for receiving data into Azure Synapse Analytics (workspace) SQL pool
-* An Azure Synapse Analytics (workspace) dedicated SQL pool. Receiving data into serverless SQL pool is not currently supported.
-* Permission to write to the SQL pool in Synapse workspace: *Microsoft.Synapse/workspaces/sqlPools/write*. This permission exists in the Azure RBAC **Contributor** role.
-* Permission for the Data Share resource's managed identity to access the Synapse workspace SQL pool:
- 1. In the [Azure portal](https://portal.azure.com/), navigate to Synapse workspace.
+- An Azure Synapse Analytics (workspace) dedicated SQL pool. Receiving data into serverless SQL pool isn't currently supported.
+- Permission to write to the SQL pool in Synapse workspace: *Microsoft.Synapse/workspaces/sqlPools/write*. This permission exists in the Azure RBAC **Contributor** role.
+- Permission for the Data Share resource's managed identity to access the Synapse workspace SQL pool:
+ 1. In the [Azure portal](https://portal.azure.com/), navigate to Synapse workspace.
1. Select SQL Active Directory admin from left navigation and set yourself as the **Azure Active Directory admin**. 1. Open Synapse Studio, select **Manage** from the left navigation. Select **Access control** under Security. Assign yourself the **SQL admin** or **Workspace admin** role.
- 1. In Synapse Studio, select **Develop** from the left navigation. Execute the following script in SQL pool to add the Data Share resource-Managed Identity as a 'db_datareader, db_datawriter, db_ddladmin'.
-
+ 1. In Synapse Studio, select **Develop** from the left navigation. Execute the following script in SQL pool to add the Data Share resource-Managed Identity as a 'db_datareader, db_datawriter, db_ddladmin'.
+ ```sql create user "<share_acc_name>" from external provider; exec sp_addrolemember db_datareader, "<share_acc_name>"; exec sp_addrolemember db_datawriter, "<share_acc_name>"; exec sp_addrolemember db_ddladmin, "<share_acc_name>";
- ```
+ ```
+ > [!Note] > The *<share_acc_name>* is the name of your Data Share resource.
-* Synapse workspace Firewall access:
+- Synapse workspace Firewall access:
1. In the [Azure portal](https://portal.azure.com/), navigate to Synapse workspace. Select *Firewalls* from left navigation. 1. Select **ON** for **Allow Azure services and resources to access this workspace**.
- 1. Select **+Add client IP**. Client IP address is subject to change. This process might need to be repeated the next time you are sharing SQL data from Azure portal.
- 1. Select **Save**.
+ 1. Select **+Add client IP**. Client IP address is subject to change. This process might need to be repeated the next time you're sharing SQL data from Azure portal.
+ 1. Select **Save**.
## Receive shared data ### Open invitation
-You can open invitation from email or directly from the [Azure portal](https://portal.azure.com/).
+You can open invitation from email or directly from the [Azure portal](https://portal.azure.com/).
-To open an invitation from email, check your inbox for an invitation from your data provider. The invitation is from Microsoft Azure, titled **Azure Data Share invitation from <yourdataprovider@domain.com>**. Select **View invitation** to see your invitation in Azure.
+To open an invitation from email, check your inbox for an invitation from your data provider. The invitation is from Microsoft Azure, titled **Azure Data Share invitation from <yourdataprovider@domain.com>**. Select **View invitation** to see your invitation in Azure.
To open an invitation from Azure portal directly, search for **Data Share Invitations** in the Azure portal, which takes you to the list of Data Share invitations. If you're a guest user on a tenant, you'll need to verify your email address for the tenant before viewing a Data Share invitation for the first time. Once verified, your email is valid for 12 months.
-![List of Invitations](./media/invitations.png "List of invitations")
+![List of Invitations](./media/invitations.png "List of invitations")
-Then, select the share you would like to view.
+Then, select the share you would like to view.
### Accept invitation
-1. Make sure all fields are reviewed, including the **Terms of Use**. If you agree to the terms of use, you'll be required to check the box to indicate you agree.
- ![Terms of use](./media/terms-of-use.png "Terms of use")
+1. Make sure all fields are reviewed, including the **Terms of Use**. If you agree to the terms of use, you'll be required to check the box to indicate you agree.
+
+ ![Terms of use](./media/terms-of-use.png "Terms of use")
-1. Under *Target Data Share Account*, select the Subscription and Resource Group that you'll be deploying your Data Share into.
+1. Under *Target Data Share Account*, select the Subscription and Resource Group that you'll be deploying your Data Share into.
-1. For the **Data Share Account** field, select **Create new** if you don't have an existing Data Share account. Otherwise, select an existing Data Share account that you'd like to accept your data share into.
+1. For the **Data Share Account** field, select **Create new** if you don't have an existing Data Share account. Otherwise, select an existing Data Share account that you'd like to accept your data share into.
-1. For the **Received Share Name** field, you may leave the default specified by the data provide, or specify a new name for the received share.
+1. For the **Received Share Name** field, you may leave the default specified by the data provider, or specify a new name for the received share.
-1. Once you've agreed to the terms of use and specified a Data Share account to manage your received share, Select **Accept and configure**. A share subscription will be created.
+1. Once you've agreed to the terms of use and specified a Data Share account to manage your received share, select **Accept and configure**. A share subscription will be created.
- ![Accept options](./media/accept-options.png "Accept options")
+ ![Accept options](./media/accept-options.png "Accept options")
-If you don't want to accept the invitation, Select *Reject*.
+If you don't want to accept the invitation, select *Reject*.
### Configure received share+ Follow the steps below to configure where you want to receive data.
-1. Select **Datasets** tab. Check the box next to the dataset you'd like to assign a destination to. Select **+ Map to target** to choose a target data store.
+1. Select the **Datasets** tab. Check the box next to the dataset you'd like to assign a destination to. Select **+ Map to target** to choose a target data store.
- ![Map to target](./media/dataset-map-target.png "Map to target")
+ ![Map to target](./media/dataset-map-target.png "Map to target")
1. Select the target resource to store the shared data. Any data files or tables in the target data store with the same path and name will be overwritten. If you're receiving data into a SQL store and the **Allow Data Share to run the above 'create user' SQL script on my behalf** checkbox appears, check the checkbox. Otherwise, follow the instructions in the prerequisites to run the script that appears on the screen. This will give the Data Share resource write permission to your target SQL DB.
- ![Target storage account](./media/dataset-map-target-sql.png "Target Data Store")
+ ![Target storage account](./media/dataset-map-target-sql.png "Target Data Store")
-1. For snapshot-based sharing, if the data provider has created a snapshot schedule to provide regular updates to the data, you can also enable snapshot schedule by selecting the **Snapshot Schedule** tab. Check the box next to the snapshot schedule and select **+ Enable**.
+1. For snapshot-based sharing, if the data provider has created a snapshot schedule to provide regular updates to the data, you can also enable snapshot schedule by selecting the **Snapshot Schedule** tab. Check the box next to the snapshot schedule and select **+ Enable**.
> [!NOTE] > The first scheduled snapshot will start within one minute of the schedule time and the next snapshots will start within seconds of the scheduled time.
Follow the steps below to configure where you want to receive data.
![Enable snapshot schedule](./media/enable-snapshot-schedule.png "Enable snapshot schedule") ### Trigger a snapshot+ These steps only apply to snapshot-based sharing.
-1. You can trigger a snapshot by selecting **Details** tab followed by **Trigger snapshot**. Here, you can trigger a full snapshot of your data. If it's your first time receiving data from your data provider, select full copy. When a snapshot is executing, the next snapshots will not start until the previous one is complete.
+1. You can trigger a snapshot by selecting **Details** tab followed by **Trigger snapshot**. Here, you can trigger a full snapshot of your data. If it's your first time receiving data from your data provider, select full copy. When a snapshot is executing, the next snapshots won't start until the previous one is complete.
- ![Trigger snapshot](./media/trigger-snapshot.png "Trigger snapshot")
+ ![Trigger snapshot](./media/trigger-snapshot.png "Trigger snapshot")
-1. When the last run status is *successful*, go to target data store to view the received data. Select **Datasets**, and select the link in the Target Path.
+1. When the last run status is *successful*, go to target data store to view the received data. Select **Datasets**, and select the link in the Target Path.
- ![Consumer datasets](./media/consumer-datasets.png "Consumer dataset mapping")
+ ![Consumer datasets](./media/consumer-datasets.png "Consumer dataset mapping")
### View history
-This step only applies to snapshot-based sharing. To view history of your snapshots, select **History** tab. Here you'll find history of all snapshots that were generated for the past 30 days.
+
+This step only applies to snapshot-based sharing. To view history of your snapshots, select **History** tab. Here you'll find history of all snapshots that were generated for the past 30 days.
## Snapshot performance
-SQL snapshot performance is impacted by many factors. It is always recommended to conduct your own performance testing. Below are some example factors impacting performance.
-* Source or destination data store input/output operations per second (IOPS) and bandwidth.
-* Hardware configuration (For example: vCores, memory, DWU) of the source and target SQL data store.
-* Concurrent access to the source and target data stores. If you are sharing multiple tables and views from the same SQL data store, or receive multiple tables and views into the same SQL data store, performance will be impacted.
-* Network bandwidth between the source and destination data stores, and location of source and target data stores.
-* Size of the tables and views being shared. SQL snapshot sharing does a full copy of the entire table. If the size of the table grows over time, snapshot will take longer.
+SQL snapshot performance is impacted by many factors. It's always recommended to conduct your own performance testing. Below are some example factors impacting performance.
+
+- Source or destination data store input/output operations per second (IOPS) and bandwidth.
+- Hardware configuration (For example: vCores, memory, DWU) of the source and target SQL data store.
+- Concurrent access to the source and target data stores. If you're sharing multiple tables and views from the same SQL data store, or receive multiple tables and views into the same SQL data store, performance will be impacted.
+- Network bandwidth between the source and destination data stores, and location of source and target data stores.
+- Size of the tables and views being shared. SQL snapshot sharing does a full copy of the entire table. If the size of the table grows over time, snapshot will take longer.
For large tables where incremental updates are desired, you can export updates to a storage account and use the storage account's incremental sharing capability for faster performance. ## Troubleshoot snapshot failure
-The most common cause of snapshot failure is that Data Share does not have permission to the source or target data store. In order to grant Data Share permission to the source or target Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW), you must run the provided SQL script when connecting to the SQL database using Azure Active Directory authentication. To troubleshoot other SQL snapshot failures, refer to [Troubleshoot snapshot failure](data-share-troubleshoot.md#snapshots).
+
+The most common cause of snapshot failure is that Data Share doesn't have permission to the source or target data store. In order to grant Data Share permission to the source or target Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW), you must run the provided SQL script when connecting to the SQL database using Azure Active Directory authentication. To troubleshoot other SQL snapshot failures, refer to [Troubleshoot snapshot failure](data-share-troubleshoot.md#snapshots).
## Next steps
-You have learned how to share and receive data from SQL sources using Azure Data Share service. To learn more about sharing from other data sources, continue to [supported data stores](supported-data-stores.md).
+
+You've learned how to share and receive data from SQL sources using Azure Data Share service. To learn more about sharing from other data sources, continue to [supported data stores](supported-data-stores.md).
data-share How To Share From Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/how-to-share-from-storage.md
Previously updated : 09/10/2021 Last updated : 02/02/2022 + # Share and receive data from Azure Blob Storage and Azure Data Lake Storage [!INCLUDE[appliesto-storage](includes/appliesto-storage.md)]
-Azure Data Share supports snapshot-based sharing from a storage account. This article explains how to share and receive data from Azure Blob Storage, Azure Data Lake Storage Gen1, and Azure Data Lake Storage Gen2.
+[Azure Data Share](overview.md) allows you to securely share data snapshots from your Azure storage resources to other Azure subscriptions, including Azure subscriptions outside your tenant.
-Azure Data Share supports the sharing of files, folders, and file systems from Azure Data Lake Gen1 and Azure Data Lake Gen2. It also supports the sharing of blobs, folders, and containers from Azure Blob Storage. You can share block, append, or page blobs, and they are received as block blobs. Data shared from these sources can be received by Azure Data Lake Gen2 or Azure Blob Storage.
+This article describes sharing data from **Azure Blob Storage**, **Azure Data Lake Storage Gen1**, and **Azure Data Lake Storage Gen2**.
+However, Azure Data Share also allows sharing from these other kinds of resources:
-When file systems, containers, or folders are shared in snapshot-based sharing, data consumers can choose to make a full copy of the share data. Or they can use the incremental snapshot capability to copy only new or updated files. The incremental snapshot capability is based on the last modified time of the files.
+- [Azure SQL Database and Azure Synapse Analytics](how-to-share-from-sql.md)
+- [Azure Data Explorer](/data-explorer/data-share.md)
-Existing files that have the same name are overwritten during a snapshot. A file that is deleted from the source isn't deleted on the target. Empty subfolders at the source aren't copied over to the target.
+This article will guide you through:
-## Share data
+- [What kinds of data can be shared](#whats-supported)
+- [How to prepare your environment](#prerequisites-to-share-data)
+- [How to create a share](#create-a-share)
+- [How to receive shared data](#receive-shared-data)
-Use the information in the following sections to share data by using Azure Data Share.
-### Prerequisites to share data
+You can use the table of contents to jump to the section you need, or continue with this article to follow the process from start to finish.
-* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
-* Find your recipient's Azure sign-in email address. The recipient's email alias won't work for your purposes.
-* If the source Azure data store is in a different Azure subscription than the one where you'll create the Data Share resource, register the [Microsoft.DataShare resource provider](concepts-roles-permissions.md#resource-provider-registration) in the subscription where the Azure data store is located.
+## What's supported
-### Prerequisites for the source storage account
+Azure Data Share supports sharing data from Azure Data Lake Gen1, Azure Data Lake Gen2, and Azure storage.
-* An Azure Storage account. If you don't already have an account, [create one](../storage/common/storage-account-create.md).
-* Permission to write to the storage account. Write permission is in *Microsoft.Storage/storageAccounts/write*. It's part of the Contributor role.
-* Permission to add role assignment to the storage account. This permission is in *Microsoft.Authorization/role assignments/write*. It's part of the Owner role.
+|Resource type | Sharable resource |
+|-|--
+|Azure Data Lake Gen1 and Gen2 |Files |
+||Folders|
+||File systems|
+|Azure Storage |*Blobs |
+||Folders|
+||Containers|
-### Sign in to the Azure portal
+>[!NOTE]
+> *Block, append, and page blobs are all supported. However, when they are shared they will be received as **block blobs**.
-Sign in to the [Azure portal](https://portal.azure.com/).
+Data shared from these sources can be received by Azure Data Lake Gen2 or Azure Blob Storage.
-### Create a Data Share account
+### Share behavior
-Create an Azure Data Share resource in an Azure resource group.
+For file systems, containers, or folders, you can choose to make full or incremental snapshots of your data.
-1. In the upper-left corner of the portal, open the menu and then select **Create a resource** (+).
+A **full snapshot** copies all specified files and folders at every snapshot.
-1. Search for *Data Share*.
+An **incremental snapshot** copies only new or updated files, based on the last modified time of the files.
-1. Select **Data Share** and **Create**.
+Existing files that have the same name are overwritten during a snapshot. A file that is deleted from the source isn't deleted on the target. Empty subfolders at the source aren't copied over to the target.
-1. Provide the basic details of your Azure Data Share resource:
+## Prerequisites to share data
- **Setting** | **Suggested value** | **Field description**
- ||||
- | Subscription | Your subscription | Select an Azure subscription for your data share account.|
- | Resource group | *test-resource-group* | Use an existing resource group or create a resource group. |
- | Location | *East US 2* | Select a region for your data share account.
- | Name | *datashareaccount* | Name your data share account. |
- | | |
+- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- [An Azure Data Share account](share-your-data-portal.md#create-a-data-share-account).
+- Your data recipient's Azure sign-in e-mail address (using their e-mail alias won't work).
+- If your source Azure Storage account is in a different Azure subscription than your Azure Data Share account, register the [Microsoft.DataShare resource provider](concepts-roles-permissions.md#resource-provider-registration) in the subscription where your source Azure Storage account is located.
-1. Select **Review + create** > **Create** to provision your data share account. Provisioning a new data share account typically takes about 2 minutes.
+### Prerequisites for the source storage account
-1. When the deployment finishes, select **Go to resource**.
+- An Azure Storage account. If you don't already have an account, [create one](../storage/common/storage-account-create.md).
+- Permission to write to the storage account. Write permission is in *Microsoft.Storage/storageAccounts/write*. It's part of the Contributor role.
+- Permission to add role assignment to the storage account. This permission is in *Microsoft.Authorization/role assignments/write*. It's part of the Owner role.
### Create a share
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+ 1. Go to your data share **Overview** page. :::image type="content" source="./media/share-receive-data.png" alt-text="Screenshot showing the data share overview."::: 1. Select **Start sharing your data**.
-1. Select **Create**.
+1. Select **Create**.
-1. Provide the details for your share. Specify a name, share type, description of share contents, and terms of use (optional).
+1. Provide the details for your share. Specify a name, share type, description of share contents, and terms of use (optional).
- ![Screenshot showing data share details.](./media/enter-share-details.png "Enter the data share details.")
+ ![Screenshot showing data share details.](./media/enter-share-details.png "Enter the data share details.")
1. Select **Continue**.
-1. To add datasets to your share, select **Add Datasets**.
+1. To add datasets to your share, select **Add Datasets**.
![Screenshot showing how to add datasets to your share.](./media/datasets.png "Datasets.")
-1. Select a dataset type to add. The list of dataset types depends on whether you selected snapshot-based sharing or in-place sharing in the previous step.
+1. Select a dataset type to add. The list of dataset types depends on whether you selected snapshot-based sharing or in-place sharing in the previous step.
- ![Screenshot showing where to select a dataset type.](./media/add-datasets.png "Add datasets.")
+ ![Screenshot showing where to select a dataset type.](./media/add-datasets.png "Add datasets.")
-1. Go to the object you want to share. Then select **Add Datasets**.
+1. Go to the object you want to share. Then select **Add Datasets**.
- ![Screenshot showing how to select an object to share.](./media/select-datasets.png "Select datasets.")
+ ![Screenshot showing how to select an object to share.](./media/select-datasets.png "Select datasets.")
-1. On the **Recipients** tab, add the email address of your data consumer by selecting **Add Recipient**.
+1. On the **Recipients** tab, add the email address of your data consumer by selecting **Add Recipient**.
- ![Screenshot showing how to add recipient email addresses.](./media/add-recipient.png "Add recipients.")
+ ![Screenshot showing how to add recipient email addresses.](./media/add-recipient.png "Add recipients.")
1. Select **Continue**.
-1. If you selected a snapshot share type, you can set up the snapshot schedule to update your data for the data consumer.
+1. If you selected a snapshot share type, you can set up the snapshot schedule to update your data for the data consumer.
- ![Screenshot showing the snapshot schedule settings.](./media/enable-snapshots.png "Enable snapshots.")
+ ![Screenshot showing the snapshot schedule settings.](./media/enable-snapshots.png "Enable snapshots.")
-1. Select a start time and recurrence interval.
+1. Select a start time and recurrence interval.
1. Select **Continue**. 1. On the **Review + Create** tab, review your package contents, settings, recipients, and synchronization settings. Then select **Create**.
-You've now created your Azure data share. The recipient of your data share can accept your invitation.
+You've now created your Azure data share. The recipient of your data share can accept your invitation.
-## Receive data
+## Prerequisites to receive data
-The following sections describe how to receive shared data.
-### Prerequisites to receive data
-Before you accept a data share invitation, make sure you have the following prerequisites:
+Before you accept a data share invitation, make sure you have the following prerequisites:
-* An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/).
-* An invitation from Azure. The email subject should be "Azure Data Share invitation from *\<yourdataprovider\@domain.com>*".
-* A registered [Microsoft.DataShare resource provider](concepts-roles-permissions.md#resource-provider-registration) in:
- * The Azure subscription where you'll create a Data Share resource.
- * The Azure subscription where your target Azure data stores are located.
+- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/).
+- An invitation from Azure. The email subject should be "Azure Data Share invitation from *\<yourdataprovider\@domain.com>*".
+- A registered [Microsoft.DataShare resource provider](concepts-roles-permissions.md#resource-provider-registration) in:
+ - The Azure subscription where you'll create a Data Share resource.
+ - The Azure subscription where your target Azure data stores are located.
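If you'd rather handle the resource provider registration from the command line than the portal, a minimal Azure CLI sketch looks like this (the subscription ID is a placeholder; run it once for the subscription that hosts the Data Share resource and once for the subscription that holds your target data stores):

```azurecli
# Select the subscription to register (placeholder ID).
az account set --subscription "<subscription-id>"

# Register the Microsoft.DataShare resource provider.
az provider register --namespace Microsoft.DataShare

# Confirm the registration state reports "Registered".
az provider show --namespace Microsoft.DataShare --query registrationState
```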
### Prerequisites for a target storage account
-* An Azure Storage account. If you don't already have one, [create an account](../storage/common/storage-account-create.md).
-* Permission to write to the storage account. This permission is in *Microsoft.Storage/storageAccounts/write*. It's part of the Contributor role.
-* Permission to add role assignment to the storage account. This assignment is in *Microsoft.Authorization/role assignments/write*. It's part of the Owner role.
-
-### Sign in to the Azure portal
+- An Azure Storage account. If you don't already have one, [create an account](../storage/common/storage-account-create.md).
+- Permission to write to the storage account. This permission is in *Microsoft.Storage/storageAccounts/write*. It's part of the Contributor role.
+- Permission to add role assignment to the storage account. This assignment is in *Microsoft.Authorization/role assignments/write*. It's part of the Owner role.
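If you prefer to prepare the target storage account from the command line, here's a rough Azure CLI sketch; the account, resource group, and user names are placeholders, and the role assignment simply mirrors the Contributor permission called out above:

```azurecli
# Create a target storage account if you don't already have one.
az storage account create \
    --name mytargetstorage \
    --resource-group myResourceGroup \
    --location eastus \
    --sku Standard_LRS

# Grant the signed-in user Contributor on the account so shared data can be written to it.
az role assignment create \
    --assignee "user@contoso.com" \
    --role "Contributor" \
    --scope "$(az storage account show --name mytargetstorage --resource-group myResourceGroup --query id --output tsv)"
```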
-Sign in to the [Azure portal](https://portal.azure.com/).
+## Receive shared data
### Open an invitation
-You can open an invitation from email or directly from the Azure portal.
+You can open an invitation from email or directly from the [Azure portal](https://portal.azure.com/).
-1. To open an invitation from email, check your inbox for an invitation from your data provider. The invitation from Microsoft Azure is titled "Azure Data Share invitation from *\<yourdataprovider\@domain.com>*". Select **View invitation** to see your invitation in Azure.
+1. To open an invitation from email, check your inbox for an invitation from your data provider. The invitation from Microsoft Azure is titled "Azure Data Share invitation from *\<yourdataprovider\@domain.com>*". Select **View invitation** to see your invitation in Azure.
To open an invitation from the Azure portal, search for *Data Share invitations*. You see a list of Data Share invitations.
- If you are a guest user of a tenant, you will be asked to verify your email address for the tenant prior to viewing Data Share invitation for the first time. Once verified, it is valid for 12 months.
+ If you're a guest user of a tenant, you'll be asked to verify your email address for the tenant prior to viewing Data Share invitation for the first time. Once verified, it's valid for 12 months.
- ![Screenshot showing the list of invitations in the Azure portal.](./media/invitations.png "List of invitations.")
+ ![Screenshot showing the list of invitations in the Azure portal.](./media/invitations.png "List of invitations.")
-1. Select the share you want to view.
+1. Select the share you want to view.
### Accept an invitation
-1. Review all of the fields, including the **Terms of use**. If you agree to the terms, select the check box.
- ![Screenshot showing the Terms of use area.](./media/terms-of-use.png "Terms of use.")
+1. Review all of the fields, including the **Terms of use**. If you agree to the terms, select the check box.
+
+ ![Screenshot showing the Terms of use area.](./media/terms-of-use.png "Terms of use.")
1. Under **Target Data Share account**, select the subscription and resource group where you'll deploy your Data Share. Then fill in the following fields:
- * In the **Data share account** field, select **Create new** if you don't have a Data Share account. Otherwise, select an existing Data Share account that will accept your data share.
+ - In the **Data share account** field, select **Create new** if you don't have a Data Share account. Otherwise, select an existing Data Share account that will accept your data share.
- * In the **Received share name** field, either leave the default that the data provider specified or specify a new name for the received share.
+ - In the **Received share name** field, either leave the default that the data provider specified or specify a new name for the received share.
-1. Select **Accept and configure**. A share subscription is created.
+1. Select **Accept and configure**. A share subscription is created.
- ![Screenshot showing where to accept the configuration options.](./media/accept-options.png "Accept options")
+ ![Screenshot showing where to accept the configuration options.](./media/accept-options.png "Accept options")
- The received share appears in your Data Share account.
+ The received share appears in your Data Share account.
- If you don't want to accept the invitation, select **Reject**.
+ If you don't want to accept the invitation, select **Reject**.
### Configure a received share
-Follow the steps in this section to configure a location to receive data.
-1. On the **Datasets** tab, select the check box next to the dataset where you want to assign a destination. Select **Map to target** to choose a target data store.
+1. On the **Datasets** tab, select the check box next to the dataset where you want to assign a destination. Select **Map to target** to choose a target data store.
- ![Screenshot showing how to map to a target.](./media/dataset-map-target.png "Map to target.")
+ ![Screenshot showing how to map to a target.](./media/dataset-map-target.png "Map to target.")
-1. Select a target data store for the data. Files in the target data store that have the same path and name as files in the received data will be overwritten.
+1. Select a target data store for the data. Files in the target data store that have the same path and name as files in the received data will be overwritten.
- ![Screenshot showing where to select a target storage account.](./media/map-target.png "Target storage.")
+ ![Screenshot showing where to select a target storage account.](./media/map-target.png "Target storage.")
-1. For snapshot-based sharing, if the data provider uses a snapshot schedule to regularly update the data, you can enable the schedule from the **Snapshot Schedule** tab. Select the box next to the snapshot schedule. Then select **Enable**. Note that the first scheduled snapshot will start within one minute of the schedule time and subsequent snapshots will start within seconds of the scheduled time.
+1. For snapshot-based sharing, if the data provider uses a snapshot schedule to regularly update the data, you can enable the schedule from the **Snapshot Schedule** tab. Select the box next to the snapshot schedule. Then select **Enable**. The first scheduled snapshot will start within one minute of the schedule time and subsequent snapshots will start within seconds of the scheduled time.
   ![Screenshot showing how to enable a snapshot schedule.](./media/enable-snapshot-schedule.png "Enable snapshot schedule.")
### Trigger a snapshot
+
The steps in this section apply only to snapshot-based sharing.
-1. You can trigger a snapshot from the **Details** tab. On the tab, select **Trigger snapshot**. You can choose to trigger a full snapshot or incremental snapshot of your data. If you're receiving data from your data provider for the first time, select **Full copy**. When a snapshot is executing, subsequent snapshots will not start until the previous one complete.
+1. You can trigger a snapshot from the **Details** tab. On the tab, select **Trigger snapshot**. You can choose to trigger a full snapshot or incremental snapshot of your data. If you're receiving data from your data provider for the first time, select **Full copy**. When a snapshot is executing, subsequent snapshots won't start until the previous one completes.
- ![Screenshot showing the Trigger snapshot selection.](./media/trigger-snapshot.png "Trigger snapshot.")
+ ![Screenshot showing the Trigger snapshot selection.](./media/trigger-snapshot.png "Trigger snapshot.")
-1. When the last run status is *successful*, go to the target data store to view the received data. Select **Datasets**, and then select the target path link.
+1. When the last run status is *successful*, go to the target data store to view the received data. Select **Datasets**, and then select the target path link.
- ![Screenshot showing a consumer dataset mapping.](./media/consumer-datasets.png "Consumer dataset mapping.")
+ ![Screenshot showing a consumer dataset mapping.](./media/consumer-datasets.png "Consumer dataset mapping.")
### View history
-You can view the history of your snapshots only in snapshot-based sharing. To view the history, open the **History** tab. Here you see the history of all of the snapshots that were generated in the past 30 days.
+
+You can view the history of your snapshots only in snapshot-based sharing. To view the history, open the **History** tab. Here you see the history of all of the snapshots that were generated in the past 30 days.
## Storage snapshot performance
-Storage snapshot performance is impacted by a number of factors in addition to number of files and size of the shared data. It is always recommended to conduct your own performance testing. Below are some example factors impacting performance.
-* Concurrent access to the source and target data stores.
-* Location of source and target data stores.
-* For incremental snapshot, the number of files in the shared dataset can impact the time it takes to find the list of files with last modified time after the last successful snapshot.
+Storage snapshot performance is impacted by many factors in addition to the number of files and the size of the shared data. We recommend that you conduct your own performance testing. The following are some example factors that affect performance.
+- Concurrent access to the source and target data stores.
+- Location of source and target data stores.
+- For incremental snapshot, the number of files in the shared dataset can impact the time it takes to find the list of files with last modified time after the last successful snapshot.
## Next steps
-You've learned how to share and receive data from a storage account by using the Azure Data Share service. To learn about sharing from other data sources, see [Supported data stores](supported-data-stores.md).
+
+You've learned how to share and receive data from a storage account by using the Azure Data Share service. To learn about sharing from other data sources, see the [supported data stores](supported-data-stores.md).
dms Migration Using Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/migration-using-azure-data-studio.md
The workflow of the migration process is illustrated below.
:::image type="content" source="media/migration-using-azure-data-studio/architecture-ads-sql-migration.png" alt-text="Diagram of architecture for database migration using Azure Data Studio with DMS":::
-1. **Source SQL Server**: SQL Server instance on-premises, private cloud, or any public cloud virtual machine. All editions of SQL Server 2008 and above are supported.
+1. **Source SQL Server**: SQL Server instance on-premises, private cloud, or any public cloud virtual machine. All editions of SQL Server 2016 and above are supported.
1. **Target Azure SQL**: Supported Azure SQL targets are Azure SQL Managed Instance or SQL Server on Azure Virtual Machines (registered with SQL IaaS Agent extension in [Full management mode](../azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management.md#management-modes))
1. **Network File Share**: Server Message Block (SMB) network file share where backup files are stored for the database(s) to be migrated. Azure Storage blob containers and Azure Storage file share are also supported.
1. **Azure Data Studio**: Download and install the [Azure SQL Migration extension in Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension).
Azure Database Migration Service prerequisites that are common across all suppor
- Server roles
- Server audit
- Automating migrations with Azure Data Studio using PowerShell / CLI isn't supported.
+- SQL Server 2014 and below are not supported.
- Migrating to Azure SQL Database isn't supported.
- Azure storage accounts secured by specific firewall rules or configured with a private endpoint are not supported for migrations.
- You can't use an existing self-hosted integration runtime created from Azure Data Factory for database migrations with DMS. Initially, the self-hosted integration runtime should be created using the Azure SQL Migration extension in Azure Data Studio and can be reused for further database migrations.
dms Tutorial Sql Server Managed Instance Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-managed-instance-offline-ads.md
To complete this tutorial, you need to:
* Have an Azure account that is assigned to one of the built-in roles listed below:
    - Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share).
    - Owner or Contributor role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
- - Owner or Contributor role for the Azure subscription.
+ - Owner or Contributor role for the Azure subscription (required if creating a new DMS service).
* Create a SQL Managed Instance by following the detail in the article [Create a SQL Managed Instance in the Azure portal](../azure-sql/managed-instance/instance-create-quickstart.md). * Ensure that the logins used to connect the source SQL Server are members of the *sysadmin* server role or have `CONTROL SERVER` permission. * Provide an SMB network share, Azure storage account file share, or Azure storage account blob container that contains your full database backup files and subsequent transaction log backup files, which Azure Database Migration Service can use for database migration.
dms Tutorial Sql Server Managed Instance Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-managed-instance-online-ads.md
To complete this tutorial, you need to:
* Have an Azure account that is assigned to one of the built-in roles listed below:
    - Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share).
    - Owner or Contributor role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
- - Owner or Contributor role for the Azure subscription.
+ - Owner or Contributor role for the Azure subscription (required if creating a new DMS service).
* Create a target [Azure SQL Managed Instance](../azure-sql/managed-instance/instance-create-quickstart.md). * Ensure that the logins used to connect the source SQL Server are members of the *sysadmin* server role or have `CONTROL SERVER` permission. * Use one of the following storage options for the full database and transaction log backup files:
dms Tutorial Sql Server To Virtual Machine Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-to-virtual-machine-offline-ads.md
Last updated 10/05/2021
# Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machine offline using Azure Data Studio with DMS (Preview)
-Use the Azure SQL Migration extension in Azure Data Studio to migrate the databases from a SQL Server instance to a [SQL Server on Azure Virtual Machine](../azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md) with minimal downtime. For methods that may require some manual effort, see the article [SQL Server instance migration to SQL Server on Azure Virtual Machine](../azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview.md).
+Use the Azure SQL Migration extension in Azure Data Studio to migrate the databases from a SQL Server instance (SQL Server 2016 and above) to a [SQL Server on Azure Virtual Machine](../azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md) with minimal downtime. For methods that may require some manual effort, see the article [SQL Server instance migration to SQL Server on Azure Virtual Machine](../azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview.md).
In this tutorial, you migrate the **Adventureworks** database from an on-premises instance of SQL Server to a SQL Server on Azure Virtual Machine with the offline migration method by using Azure Data Studio with Azure Database Migration Service.
dms Tutorial Sql Server To Virtual Machine Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-to-virtual-machine-online-ads.md
Last updated 10/05/2021
# Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machine online using Azure Data Studio with DMS (Preview)
-Use the Azure SQL Migration extension in Azure Data Studio to migrate the databases from a SQL Server instance to a [SQL Server on Azure Virtual Machine](../azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md) with minimal downtime. For methods that may require some manual effort, see the article [SQL Server instance migration to SQL Server on Azure Virtual Machine](../azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview.md).
+Use the Azure SQL Migration extension in Azure Data Studio to migrate the databases from a SQL Server instance (SQL Server 2016 and above) to a [SQL Server on Azure Virtual Machine](../azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md) with minimal downtime. For methods that may require some manual effort, see the article [SQL Server instance migration to SQL Server on Azure Virtual Machine](../azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview.md).
In this tutorial, you migrate the **Adventureworks** database from an on-premises instance of SQL Server to a SQL Server on Azure Virtual Machine with minimal downtime by using Azure Data Studio with Azure Database Migration Service.
dns Private Dns Autoregistration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dns/private-dns-autoregistration.md
# What is the auto registration feature in Azure DNS private zones?
-The Azure DNS private zones auto registration feature manages DNS records for virtual machines deployed in a virtual network. When you [link a virtual network](./private-dns-virtual-network-links.md) with a private DNS zone with this setting enabled. A DNS record gets created for each virtual machine deployed in the virtual network.
+The Azure DNS private zones auto registration feature manages DNS records for virtual machines deployed in a virtual network. When you [link a virtual network](./private-dns-virtual-network-links.md) with a private DNS zone with this setting enabled, a DNS record gets created for each virtual machine deployed in the virtual network.
For each virtual machine, an A record and a PTR record are created. DNS records for newly deployed virtual machines are also automatically created in the linked private DNS zone. When a virtual machine gets deleted, any associated DNS records also get deleted from the private DNS zone.
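If you manage the link from the command line instead of the portal, a hedged Azure CLI sketch of creating a virtual network link with auto registration turned on looks like this (the resource group, zone, link, and virtual network names are placeholders):

```azurecli
# Link a virtual network to a private DNS zone with auto registration enabled.
az network private-dns link vnet create \
    --resource-group myResourceGroup \
    --zone-name private.contoso.com \
    --name myVNetLink \
    --virtual-network myVNet \
    --registration-enabled true
```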
To enable auto registration, select the checkbox for "Enable auto registration"
* Read about some common [private zone scenarios](./private-dns-scenarios.md) that can be realized with private zones in Azure DNS.
-* For common questions and answers about private zones in Azure DNS, including specific behavior you can expect for certain kinds of operations, see [Private DNS FAQ](./dns-faq-private.yml).
+* For common questions and answers about private zones in Azure DNS, including specific behavior you can expect for certain kinds of operations, see [Private DNS FAQ](./dns-faq-private.yml).
event-hubs Event Hubs Capture Enable Through Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-capture-enable-through-portal.md
You can configure Capture at the event hub creation time using the [Azure portal
For more information, see the [Event Hubs Capture overview][capture-overview]. > [!IMPORTANT]
-> The destination storage (Azure Storage or Azure Data Lake Storage) account must be in the same subscription as the event hub.
+> - The destination storage (Azure Storage or Azure Data Lake Storage) account must be in the same subscription as the event hub.
+> - Event Hubs doesn't support capturing events in a **premium** storage account.
+ ## Capture data to Azure Storage
When you create an event hub, you can enable Capture by clicking the **On** butt
The default time window is 5 minutes. The minimum value is 1, the maximum 15. The **Size** window has a range of 10-500 MB.
+You can enable or disable emitting empty files when no events occur during the Capture window.
+ ![Time window for capture][1]
-> [!NOTE]
-> You can enable or disable emitting empty files when no events occur during the Capture window.
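If you'd rather script the Capture settings than configure them in the portal, the following Azure CLI sketch shows the general shape. It assumes an existing namespace, storage account, and blob container; all names are placeholders, the interval is in seconds, the size limit is in bytes, and the exact parameter names can vary with your CLI version:

```azurecli
# Create an event hub that captures every 5 minutes or 300 MB, whichever comes first,
# and skips writing empty files when no events arrive in a window.
az eventhubs eventhub create \
    --resource-group myResourceGroup \
    --namespace-name mynamespace \
    --name myeventhub \
    --enable-capture true \
    --capture-interval 300 \
    --capture-size-limit 314572800 \
    --destination-name EventHubArchive.AzureBlockBlob \
    --storage-account mystorageaccount \
    --blob-container capturecontainer \
    --skip-empty-archives true
```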
## Capture data to Azure Data Lake Storage Gen 2
event-hubs Event Hubs Capture Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-capture-overview.md
Azure Event Hubs enables you to automatically capture the streaming data in Even
Event Hubs Capture enables you to process real-time and batch-based pipelines on the same stream. This means you can build solutions that grow with your needs over time. Whether you're building batch-based systems today with an eye towards future real-time processing, or you want to add an efficient cold path to an existing real-time solution, Event Hubs Capture makes working with streaming data easier. > [!IMPORTANT]
-> The destination storage (Azure Storage or Azure Data Lake Storage) account must be in the same subscription as the event hub.
+> - The destination storage (Azure Storage or Azure Data Lake Storage) account must be in the same subscription as the event hub.
+> - Event Hubs doesn't support capturing events in a **premium** storage account.
## How Event Hubs Capture works
event-hubs Event Hubs Premium Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-premium-overview.md
Title: Overview of Event Hubs Premium description: This article provides an overview of Azure Event Hubs Premium, which offers multi-tenant deployments of Event Hubs for high-end streaming needs. Previously updated : 10/20/2021 Last updated : 02/02/2022 # Overview of Event Hubs Premium
+The Event Hubs Premium (premium tier) is designed for high-end streaming scenarios that require elastic, superior performance with predictable latency. The performance is achieved by providing reserved compute, memory, and storage resources, which minimize cross-tenant interference in a managed multi-tenant PaaS environment.
-The Event Hubs Premium tier is designed for high-end streaming scenarios that require elastic, superior performance with predictable latency. The performance is achieved by providing reserved compute, memory, and storage resources, which minimize cross-tenant interference in a managed multi-tenant PaaS environment.
+It replicates events to three replicas, distributed across Azure availability zones where available. All replicas are synchronously flushed to the underlying fast storage before the send operation is reported as completed. Events that aren't read immediately or that need to be re-read later can be retained up to 90 days, transparently held in an availability-zone redundant storage tier.
-Event Hubs Premium introduces a new, two-tier, native-code log engine that provides far more predictable and much lower send and passthrough latencies than the prior generation, without any durability compromises. Event Hubs Premium replicates every event to three replicas, distributed across Azure availability zones where available, and all replicas are synchronously flushed to the underlying fast storage before the send operation is reported as completed. Events that are not read immediately or that need to be re-read later can be retained up to 90 days, transparently held in an availability-zone redundant storage tier. Events in both the fast storage and retention storage tiers are encrypted; in Event Hubs Premium, the encryption keys can be supplied by you.
+In addition to these storage-related features and all capabilities and protocol support of the standard tier, the isolation model of the premium tier enables features like [dynamic partition scale-up](dynamically-add-partitions.md). You also get far more generous [quota allocations](event-hubs-quotas.md). Event Hubs Capture is included at no extra cost.
-In addition to these storage-related features and all capabilities and protocol support of the Event Hubs Standard offering, the isolation model of Event Hubs Premium enables new features like dynamic partition scale-up and yet-to-be-added future capabilities. You also get far more generous quota allocations. Event Hubs Capture is included at no extra cost.
-
-The Premium offering is billed by [Processing Units (PUs)](event-hubs-scalability.md#processing-units) which correspond to a share of isolated resources (CPU, Memory, and Storage) in the underlying infrastructure.
-
-In comparison to Dedicated offering, since Event Hubs Premium provides isolation inside a very large multi-tenant environment that can shift resources quickly, it can scale far more elastically and quicker and PUs can be dynamically adjusted. Therefore, Event Hubs Premium will often be a more cost effective option for mid-range (<120MB/sec) throughput requirements, especially with changing loads throughout the day or week, when compared to Event Hubs Dedicated.
> [!NOTE]
-> Please note that Event Hubs Premium will only support TLS 1.2 or greater .
+> Event Hubs Premium supports TLS 1.2 or greater.
-For the extra robustness gained by availability-zone support, the minimal deployment scale for Event Hubs Dedicated is 8 Capacity Units (CU), but you will have availability zone support in Event Hubs Premium from the first PU in all AZ regions.
+## Why premium?
+The premium tier offers three compelling benefits for customers who require better isolation in a multitenant environment with low latency and high throughput data ingestion needs.
-You can purchase 1, 2, 4, 8 and 16 Processing Units for each namespace. Since Event Hubs Premium is a capacity-based offering, the achievable throughput is not set by a throttle as it is in Event Hubs Standard, but depends on the work you ask Event Hubs to do, similar to Event Hubs Dedicated. The effective ingest and stream throughput per PU will depend on various factors, including:
+### Superior performance with the new two-tier storage engine
+The premium tier uses a new two-tier log storage engine that drastically improves the data ingress performance with substantially reduced overall latency without compromising the durability guarantees.
-* Number of producers and consumers
-* Payload size
-* Partition count
-* Egress request rate
-* Usage of Event Hubs Capture, Schema Registry, and other advanced features
+### Better isolation and predictability
+The premium tier offers an isolated compute and memory capacity to achieve more predictable latency and far reduced *noisy neighbor* impact risk in a multi-tenant deployment.
-Refer the [comparison between Event Hubs SKUs](event-hubs-quotas.md) for more details.
+It implements a *cluster in cluster* model in its multitenant clusters to provide predictability and performance while retaining all the benefits of a managed multitenant PaaS environment.
+### Cost savings and scalability
+As the premium tier is a multitenant offering, it can dynamically scale more flexibly and very quickly. Capacity is allocated in processing units (PUs), each of which represents an isolated pod of CPU and memory inside the cluster. The number of those pods can be scaled up or down per namespace. Therefore, the premium tier is a low-cost option for messaging scenarios with an overall throughput requirement that is less than 120 MB/s but higher than what you can achieve with the standard SKU.
-> [!NOTE]
-> All Event Hubs namespaces are enabled for the Apache Kafka RPC protocol by default can be used by your existing Kafka based applications. Having Kafka enabled on your cluster does not affect your non-Kafka use cases; there is no option or need to disable Kafka on a cluster.
+## Premium vs. dedicated tiers
+In comparison to the dedicated offering, the premium tier provides the following benefits:
-## Why Premium?
+- Isolation inside a very large multi-tenant environment that can shift resources quickly
+- Scale far more elastically and quicker
+- PUs can be dynamically adjusted
-Premium Event Hubs offers three compelling benefits for customers who require better isolation in a multitenant environment with low latency and high throughput data ingestion needs.
+Therefore, the premium tier is often a more cost-effective option for mid-range (<120 MB/sec) throughput requirements, especially with changing loads throughout the day or week, when compared to the dedicated tier.
-#### Superior performance with the new two-tier storage engine
+For the extra robustness gained by availability-zone support, the minimal deployment scale for the dedicated tier is 8 capacity units (CU), but you'll have availability zone support in the premium tier from the first PU in all availability zone regions.
-Event Hubs premium uses a new two-tier log storage engine that drastically improves the data ingress performance with substantially reduced overall latency and latency jitter without compromising the durability guarantees.
+You can purchase 1, 2, 4, 8, and 16 processing units for each namespace. As the premium tier is a capacity-based offering, the achievable throughput isn't set by a throttle as it is in the standard tier, but depends on the work you ask Event Hubs to do, similar to the dedicated tier. The effective ingest and stream throughput per PU will depend on various factors, including:
-#### Better isolation and predictability
+* Number of producers and consumers
+* Payload size
+* Partition count
+* Egress request rate
+* Usage of Event Hubs Capture, Schema Registry, and other advanced features
-Event Hubs premium offers an isolated compute and memory capacity to achieve more predictable latency and far reduced *noisy neighbor* impact risk in a multi-tenant deployment.
+For more information, see [comparison between Event Hubs SKUs](event-hubs-quotas.md).
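As a rough illustration of how PUs are assigned, the following Azure CLI sketch creates a premium namespace with 2 PUs; the resource group and namespace names are placeholders, and the `--capacity` value can be adjusted later as your workload changes:

```azurecli
# Create an Event Hubs premium namespace with 2 processing units (PUs).
az eventhubs namespace create \
    --resource-group myResourceGroup \
    --name mypremiumnamespace \
    --location eastus \
    --sku Premium \
    --capacity 2
```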
-Event Hubs premium implements a *Cluster in Cluster* model in its multitenant clusters to provide predictability and performance while retaining all the benefits of a managed multitenant PaaS environment.
+## Encryption of events
+Azure Event Hubs provides encryption of data at rest with Azure Storage Service Encryption (Azure SSE). The Event Hubs service uses Azure Storage to store the data. All the data that's stored with Azure Storage is encrypted using Microsoft-managed keys. If you use your own key (also referred to as Bring Your Own Key (BYOK) or customer-managed key), the data is still encrypted using the Microsoft-managed key, but in addition the Microsoft-managed key will be encrypted using the customer-managed key. This feature enables you to create, rotate, disable, and revoke access to customer-managed keys that are used for encrypting Microsoft-managed keys. Enabling the BYOK feature is a one time setup process on your namespace. For more information, see [Configure customer-managed keys for encrypting Azure Event Hubs data at rest](configure-customer-managed-key.md).
+> [!NOTE]
+> All Event Hubs namespaces are enabled for the Apache Kafka RPC protocol by default and can be used by your existing Kafka-based applications. Having Kafka enabled on your cluster does not affect your non-Kafka use cases; there is no option or need to disable Kafka on a cluster.
-#### Cost savings and scalability
-As Event Hubs Premium is a multitenant offering, it can dynamically scale more flexibly and very quickly. Capacity is allocated in Processing Units that allocate isolated pods of CPU/Memory inside the cluster. The number of those pods can be scaled up/down per namespace. Therefore, Event Hubs Premium is a low-cost option for messaging scenarios with the overall throughput range that is less than 120 MB/s but higher than what you can achieve with the standard SKU.
## Quotas and limits
The premium tier offers all the features of the standard plan, but with better performance, isolation and more generous quotas. For more quotas and limits, see [Event Hubs quotas and limits](event-hubs-quotas.md)
+## Pricing
+
+The Premium offering is billed by [Processing Units (PUs)](event-hubs-scalability.md#processing-units) which correspond to a share of isolated resources (CPU, Memory, and Storage) in the underlying infrastructure.
## FAQs
For more quotas and limits, see [Event Hubs quotas and limits](event-hubs-quotas
## Next steps
-You can start using Event Hubs Premium via [Azure portal](https://portal.azure.com/#create/Microsoft.EventHub). Refer [Event Hubs Premium pricing](https://azure.microsoft.com/pricing/details/event-hubs/) for more details on pricing and [Event Hubs FAQ](event-hubs-faq.yml) to find answers to some frequently asked questions about Event Hubs.
+See the following articles:
+
+- [Create an event hub](event-hubs-create.md). Select **Premium** for **Pricing tier**.
+- [Event Hubs Premium pricing](https://azure.microsoft.com/pricing/details/event-hubs/) for more details on pricing
+- [Event Hubs FAQ](event-hubs-faq.yml) to find answers to some frequently asked questions about Event Hubs.
expressroute Designing For Disaster Recovery With Expressroute Privatepeering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/designing-for-disaster-recovery-with-expressroute-privatepeering.md
The Scenario 2 is illustrated in the following diagram. In the diagram, green li
[![9]][9]
-The solution is illustrated in the following diagram. As illustrated, you can architect the scenario either using more specific route (Option 1) or AS-path prepend (Option 2) to influence VNet path selection. To influence on-premises network route selection for Azure bound traffic, you need configure the interconnection between the on-premises location as less preferable. Howe you configure the interconnection link as preferable depends on the routing protocol used within the on-premises network. You can use local preference with iBGP or metric with IGP (OSPF or IS-IS).
+The solution is illustrated in the following diagram. As illustrated, you can architect the scenario either using more specific route (Option 1) or AS-path prepend (Option 2) to influence VNet path selection. To influence on-premises network route selection for Azure bound traffic, you need configure the interconnection between the on-premises location as less preferable. How you configure the interconnection link as preferable depends on the routing protocol used within the on-premises network. You can use local preference with iBGP or metric with IGP (OSPF or IS-IS).
[![10]][10]
frontdoor Create Front Door Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/create-front-door-cli.md
az group delete \
az group delete \
    --name myRGFDEast
```
+
+## Next steps
+
+Advance to the next article to learn how to add a custom domain to your Front Door.
+> [!div class="nextstepaction"]
+> [Add a custom domain](how-to-add-custom-domain.md)
hdinsight Hdinsight Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-component-versioning.md
Basic support does not include the following:
Microsoft does not encourage creating analytics pipelines or solutions on clusters in basic support. We recommend migrating existing clusters to the most recent fully supported version.
+## HDInsight 3.6 to 4.0 Migration Guides
+- [Migrate Apache Spark 2.1 and 2.2 workloads to 2.3 and 2.4](spark/migrate-versions.md).
+- [Migrate Azure HDInsight 3.6 Hive workloads to HDInsight 4.0](interactive-query/apache-hive-migrate-workloads.md).
+- [Migrate Apache Kafka workloads to Azure HDInsight 4.0](kafk).
+- [Migrate an Apache HBase cluster to a new version](hbase/apache-hbase-migrate-new-version.md).
## Release notes
For additional release notes on the latest versions of HDInsight, see [HDInsight release notes](hdinsight-release-notes.md).
hdinsight Hdinsight Ubuntu 1804 Qa https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-ubuntu-1804-qa.md
This article provides more details for HDInsight Ubuntu 18.04 OS update and pote
HDInsight has started rolling out the new HDInsight 4.0 cluster image running on Ubuntu 18.04 in May 2021. Newly created HDInsight 4.0 clusters will run on Ubuntu 18.04 by default once available. Existing clusters on Ubuntu 16.04 will run as is with full support.
-HDInsight 3.6 will continue to run on Ubuntu 16.04. It will reach the end of standard support by 30 June 2021, and will change to Basic support starting on 1 July 2021. For more information about dates and support options, see [Azure HDInsight versions](./hdinsight-component-versioning.md). Ubuntu 18.04 won't be supported for HDInsight 3.6. If you'd like to use Ubuntu 18.04, you'll need to migrate your clusters to HDInsight 4.0.
+HDInsight 3.6 will continue to run on Ubuntu 16.04. It will reach the end of standard support by 30 June 2021, and will change to Basic support starting on 1 July 2021. For more information about dates and support options, see [Azure HDInsight versions](./hdinsight-component-versioning.md). Ubuntu 18.04 won't be supported for HDInsight 3.6. If you'd like to use Ubuntu 18.04, you'll need to migrate your clusters to HDInsight 4.0. Spark 3.0 with HDInsight 4.0 is available only on Ubuntu 16.04. Spark 3.1 with HDInsight 4.0 will be shipping soon and will be available on Ubuntu 18.04.
Drop and recreate your clusters if you'd like to move existing clusters to Ubuntu 18.04. Plan to create or recreate your cluster.
healthcare-apis Fhir Features Supported https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/fhir-features-supported.md
Below is a summary of the supported RESTful capabilities. For more information o
| update with optimistic locking | Yes | Yes |
| update (conditional) | Yes | Yes |
| patch | Yes | Yes | Support for [JSON Patch](https://www.hl7.org/fhir/http.html#patch) only. We have included a workaround to use JSON Patch in a bundle in [this PR](https://github.com/microsoft/fhir-server/pull/2143).|
-| patch (conditional) | Yes | Yes |
+| patch (conditional) | Yes | Yes | Support for [JSON Patch](https://www.hl7.org/fhir/http.html#patch) only. We have included a workaround to use JSON Patch in a bundle in [this PR](https://github.com/microsoft/fhir-server/pull/2143).
| history | Yes | Yes |
| create | Yes | Yes | Support both POST/PUT |
| create (conditional) | Yes | Yes | Issue [#1382](https://github.com/microsoft/fhir-server/issues/1382) |
All the operations that are supported that extend the REST API.
| Search parameter type | Azure API for FHIR | FHIR service in Healthcare APIs| Comment | ||--|--||
-| [$export](../../healthcare-apis/data-transformation/export-data.md) (whole system) | Yes | Yes | Supports system, group, and patient. |
+| [$export](../../healthcare-apis/data-transformation/export-data.md) | Yes | Yes | Supports system, group, and patient. |
| [$convert-data](convert-data.md) | Yes | Yes | | | [$validate](validation-against-profiles.md) | Yes | Yes | | | [$member-match](tutorial-member-match.md) | Yes | Yes | |
Currently, the allowed actions for a given role are applied *globally* on the AP
## Service limits
-* [**Request Units (RUs)**](../../cosmos-db/concepts-limits.md) - You can configure up to 10,000 RUs in the portal for Azure API for FHIR. You will need a minimum of 400 RUs or 40 RUs/GB, whichever is larger. If you need more than 10,000 RUs, you can put in a support ticket to have the RUs increased. The maximum available is 1,000,000.
+* [**Request Units (RUs)**](../../cosmos-db/concepts-limits.md) - You can configure up to 10,000 RUs in the portal for Azure API for FHIR. You will need a minimum of 400 RUs or 40 RUs/GB, whichever is larger. If you need more than 10,000 RUs, you can put in a support ticket to have the RUs increased. The maximum available is 1,000,000. In addition, we support [autoscaling of RUs](autoscale-azure-api-fhir.md).
* **Bundle size** - Each bundle is limited to 500 items. * **Data size** - Data/Documents must each be slightly less than 2 MB.
-* **Subscription Limit** - By default, each subscription is limited to a maximum of 10 FHIR Server Instances. If you need more instances per subscription, open a support ticket and provide details about your needs.
-
-* **Concurrent connections and Instances** - By default, you have 15 concurrent connections on two instances in the cluster (for a total of 30 concurrent requests). If you need more concurrent requests, open a support ticket and provide details about your needs.
+* **Subscription Limit** - By default, each subscription is limited to a maximum of 10 FHIR server instances. If you need more instances per subscription, open a support ticket and provide details about your needs.
## Next steps
healthcare-apis How To Do Custom Search https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/how-to-do-custom-search.md
To update a search parameter, use `PUT` to create a new version of the search pa
> If you don't know the ID for your search parameter, you can search for it. Using `GET {{FHIR_URL}}/SearchParameter` will return all custom search parameters, and you can scroll through the list to find the search parameter you need. You could also limit the search by name. With the example below, you could search by name using `USCoreRace`: `GET {{FHIR_URL}}/SearchParameter?name=USCoreRace`.
```rest
-PUT {{FHIR_ULR}}/SearchParameter/{SearchParameter ID}
+PUT {{FHIR_URL}}/SearchParameter/{SearchParameter ID}
{ "resourceType" : "SearchParameter",
healthcare-apis Fhir Service Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/fhir-service-autoscale.md
Previously updated : 11/16/2021 Last updated : 2/2/2022
The autoscale feature for the FHIR service is designed to provide optimized serv
## How does FHIR service autoscale work?
-When transaction workloads are high, the autoscale feature increases computing resources automatically. When transaction workloads are low, it decreases computing resources accordingly.
+The autoscale feature adjusts computing resources automatically to optimize the overall service scalability. It requires no action from customers.
-The autoscale feature adjusts computing resources automatically to optimize the overall service scalability. Whether you are performing read requests that include simple queries like getting patient information using a patient ID, and advanced queries like getting all `DiagnosticReport` resources for patients whose name is "Sarah", or you're creating or updating FHIR resources, the autoscale feature manages the dynamics and complexity of resource allocation to ensure high scalability.
-
-The autoscale feature is part of the managed service and requires no action from customers. However, customers are encouraged to share their feedback to help improve the feature. Customers can also raise a support ticket to address any scalability issue they may have experienced.
+When transaction workloads are high, the autoscale feature increases computing resources automatically. When transaction workloads are low, it decreases computing resources accordingly. Whether you are performing read requests that include simple queries like getting patient information using a patient ID, and advanced queries like getting all DiagnosticReport resources for patients whose name is "Sarah", or you're creating or updating FHIR resources, the autoscale feature manages the dynamics and complexity of resource allocation to ensure high scalability.
### What is the cost of the FHIR service autoscale?
healthcare-apis How To Do Custom Search https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/how-to-do-custom-search.md
To update a search parameter, use `PUT` to create a new version of the search pa
> If you don't know the ID for your search parameter, you can search for it. Using `GET {{FHIR_URL}}/SearchParameter` will return all custom search parameters, and you can scroll through the list to find the search parameter you need. You could also limit the search by name. With the example below, you could search by name using `USCoreRace`: `GET {{FHIR_URL}}/SearchParameter?name=USCoreRace`.
```rest
-PUT {{FHIR_ULR}}/SearchParameter/{SearchParameter ID}
+PUT {{FHIR_URL}}/SearchParameter/{SearchParameter ID}
{ "resourceType" : "SearchParameter",
healthcare-apis Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/workspace-overview.md
Previously updated : 07/12/2021 Last updated : 2/2/2022
One or more workspaces can be created in a resource group from the Azure portal,
A workspace can't be deleted unless all child service instances within the workspace have been deleted. This feature helps prevent any accidental deletion of service instances. However, when a workspace resource group is deleted, all the workspaces and child service instances within the workspace resource group get deleted.
+Workspace names can be re-used in the same Azure subscription, but not in a different Azure subscription, after deletion. However, when the move operation is supported and enabled, a workspace and its child resources can be moved from one subscription to another subscription if certain requirements are met. One requirement is that the two subscriptions must be part of the same Azure Active Directory (Azure AD) tenant. Another requirement is that the Private Link configuration is not enabled. Names for FHIR services, DICOM services, and IoT connectors can be re-used in the same or different subscription after deletion if there is no collision with the URLs of any existing services.
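When the move operation is supported for your workspace, one way to sketch it from the command line is the generic `az resource move` command; whether it applies to a particular Healthcare APIs workspace depends on the move support described above, and every ID below is a placeholder:

```azurecli
# Move a workspace (and, where supported, its child resources) to another subscription.
az resource move \
    --destination-subscription-id "<target-subscription-id>" \
    --destination-group myTargetResourceGroup \
    --ids "/subscriptions/<source-subscription-id>/resourceGroups/mySourceGroup/providers/Microsoft.HealthcareApis/workspaces/myworkspace"
```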
+
## Workspace and Azure region selection
When you create a workspace, it must be configured for an Azure region, which can be the same as or different from the resource group. The region cannot be changed after the workspace is created. Within each workspace, all Healthcare APIs services (FHIR service, DICOM service, and IoT Connector service) must be created in the region of the workspace and cannot be moved to a different workspace.
to. For more information, see [Azure RBAC](../role-based-access-control/index.ym
To start working with the Azure Healthcare APIs, follow the 5-minute quickstart to deploy a workspace.
>[!div class="nextstepaction"]
->[Deploy workspace in the Azure portal](healthcare-apis-quickstart.md)
+>[Deploy workspace in the Azure portal](healthcare-apis-quickstart.md)
iot-central Concepts Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-quotas-limits.md
There are various quotas and limits that apply to IoT Central applications. IoT Central applications internally use multiple Azure services such as IoT Hub and the Device Provisioning Service (DPS), and these services also have quotas and limits. Where relevant, quotas and limits in the underlying services are called out in this article. > [!NOTE]
-> The quotas and limits described in this article apply to the new multiple IoT hub architecture. Currently, there are a few legacy IoT Central applications that were created before April 2021 that haven't yet been migrated to the multiple IoT hub architecture. Use the `az iot central device manual-failover` command to check if your application still uses a single IoT hub.
+> The quotas and limits described in this article apply to the new multiple IoT hub architecture. Currently, there are a few legacy IoT Central applications that were created before April 2021 that haven't yet been migrated to the multiple IoT hub architecture. Use the `az iot central device manual-failover` command in the [Azure CLI](/cli/azure/?view=azure-cli-latest&preserve-view=true) to check if your application still uses a single IoT hub. This triggers an IoT hub failover if your application uses the multiple IoT hub architecture. It returns an error if your application uses the older architecture.
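For reference, a hedged sketch of that check from the Azure CLI (with the IoT Central extension installed) looks like the following; the application and device IDs are placeholders:

```azurecli
# Trigger a manual failover to confirm the application uses the multiple IoT hub architecture.
az iot central device manual-failover \
    --app-id <your-app-id> \
    --device-id <your-device-id>
```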
## Devices
There are various quotas and limits that apply to IoT Central applications. IoT
| Item | Quota or limit | Notes |
| - | -- | -- |
| Number of device templates in an application | 1,000 | For performance reasons, you shouldn't exceed this limit. |
-| Number of telemetry capabilities in a device template | 300 | For performance reasons, you shouldn't exceed this limit. |
+| Number of capabilities in a device template | 300 | For performance reasons, you shouldn't exceed this limit. |
## Device groups
iot-central Howto Create Organizations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-create-organizations.md
After you've created your organization hierarchy you can use organizations in ar
## Default organization
-You can set an organization as the default organization to use in your application. The default organization becomes the default option whenever you choose an organization, such as when you add a new user to your IoT Central application.
+> [!TIP]
+> This is a personal preference that only applies to you.
+
+You can set an organization as the default organization to use in your application as a personal preference. The default organization becomes the default option whenever you choose an organization, such as when you add a new user or add a device to your IoT Central application.
To set the default organization, select **Settings** on the top menu bar: :::image type="content" source="media/howto-create-organization/set-default-organization.png" alt-text="Screenshot that shows how to set your default organization.":::
-> [!TIP]
-> This is a personal preference that only applies to you.
## Add organizations to an existing application
When you start adding organizations, all existing devices, users, and experience
## Limits
-To following limits apply to organizations:
+The following limits apply to organizations:
- The hierarchy can be no more than five levels deep.
- The total number of organizations cannot be more than 200. Each node in the hierarchy counts as an organization.
+
## Next steps
Now that you've learned how to manage Azure IoT Central organizations, the suggested next step is to learn how to [Export IoT data to cloud destinations using data export](howto-export-data.md).
iot-central Howto Transform Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-transform-data.md
In this scenario, an IoT Edge module transforms the data from downstream devices
1. **Verify**: Send data from a downstream device to the gateway and verify the transformed device data reaches your IoT Central application.
-In the example described in the following sections, the downstream device sends CSV data in the following format to the IoT Edge gateway device:
+In the example described in the following sections, the downstream device sends JSON data in the following format to the IoT Edge gateway device:
-```csv
-"<temperature >, <pressure>, <humidity>"
+```json
+{
+ "device": {
+ "deviceId": "<downstream-deviceid>"
+ },
+ "measurements": {
+ "temp": <temperature>,
+ "pressure": <pressure>,
+ "humidity": <humidity>,
+ "scale": "celsius",
+ }
+}
```
-You want to use an IoT Edge module to transform the data to the following JSON format before it's sent to IoT Central:
+You want to use an IoT Edge module to transform the data and convert the temperature value from `Celsius` to `Fahrenheit` before sending it to IoT Central:
```json {
You want to use an IoT Edge module to transform the data to the following JSON f
"temp": <temperature>, "pressure": <pressure>, "humidity": <humidity>,
+ "scale": "fahrenheit"
} } ```
To create a container registry:
1. Open the [Azure Cloud Shell](https://shell.azure.com/) and sign in to your Azure subscription.
+1. Select the **Bash** shell.
+ 1. Run the following commands to create an Azure container registry: ```azurecli
To create a container registry:
az acr credential show -n $REGISTRY_NAME ```
- Make a note of the `username` and `password` values, you use them later.
+ Make a note of the `username` and `password` values, you use them later. You only need one of the passwords shown in the command output.
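If you want to confirm the credentials work before you use them in the deployment manifest, a quick hedged check from a shell with Docker installed looks like this; the registry name is the one you chose earlier, and the username and password come from the previous command's output:

```bash
# Log in to the container registry with the admin credentials you just retrieved.
docker login $REGISTRY_NAME.azurecr.io \
    --username <username-from-previous-step> \
    --password <one-of-the-passwords>
```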
To build the custom module in the [Azure Cloud Shell](https://shell.azure.com/):
-1. In the [Azure Cloud Shell](https://shell.azure.com/), navigate to a suitable folder.
+1. In the [Azure Cloud Shell](https://shell.azure.com/), create a new folder and navigate to it by running the following commands:
+
+ ```azurecli
+ mkdir yournewfolder
+ cd yournewfolder
+ ```
+ 1. To clone the GitHub repository that contains the module source code, run the following command: ```azurecli
To register a gateway device in IoT Central:
1. In your IoT Central application, navigate to the **Devices** page.
-1. Select **IoT Edge gateway device** and select **Create a device**. Enter *IoT Edge gateway device* as the device name, enter *gateway-01* as the device ID, make sure **IoT Edge gateway device** is selected as the device template. Select **Create**.
+1. Select **IoT Edge gateway device** and select **Create a device**. Enter *IoT Edge gateway device* as the device name, enter *gateway-01* as the device ID, make sure **IoT Edge gateway device** is selected as the device template and **No** is selected as **Simulate this device?**. Select **Create**.
1. In the list of devices, click on the **IoT Edge gateway device**, and then select **Connect**.
To register a downstream device in IoT Central:
1. In your IoT Central application, navigate to the **Devices** page.
-1. Don't select a device template. Select **+ New**. Enter *Downstream 01* as the device name, enter *downstream-01* as the device ID, make sure that the device template is **Unassigned**. Select **Create**.
+1. Don't select a device template. Select **+ New**. Enter *Downstream 01* as the device name, enter *downstream-01* as the device ID, make sure that the device template is **Unassigned** and **No** is selected as **Simulate this device?**. Select **Create**.
1. In the list of devices, click on the **Downstream 01**, and then select **Connect**.
For convenience, this article uses Azure virtual machines to run the gateway and
Select **Review + Create**, and then **Create**. It takes a couple of minutes to create the virtual machines in the **ingress-scenario** resource group.
-To check that the IoT Edge device is running correctly:
+To check that the IoT Edge gateway device is running correctly:
1. Open your IoT Central application. Then navigate to the **IoT Edge Gateway device** on the list of devices on the **Devices** page.
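In addition to checking in IoT Central, you can confirm on the gateway VM itself that the runtime and its modules are up; a hedged sketch over SSH is:

```bash
# On the IoT Edge gateway VM, list the deployed modules and confirm they're reported as running.
sudo iotedge list
```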
To generate the demo certificates and install them on your gateway device:
The example shown above assumes you're signed in as **AzureUser** and created a device CA certificated called "mycacert".
-1. Save the changes and restart the IoT Edge runtime:
+1. Save the changes and run the following command to verify that the *config.yaml* file is correct:
+
+ ```bash
+ sudo iotedge check
+ ```
+
+1. Restart the IoT Edge runtime:
```bash sudo systemctl restart iotedge
To connect a downstream device to the IoT Edge gateway device:
npm run-script start ```
+   While the `sudo apt install nodejs npm node-typescript` command runs, you might be asked to confirm the installation. Press `Y` if prompted.
+ 1. Enter the device ID, scope ID, and SAS key for the downstream device you created previously. For the hostname, enter `edgegateway`. The output from the command looks like: ```output
To verify the scenario is running, navigate to your **IoT Edge gateway device**
{"temperature":85.21208,"pressure":59.97321,"humidity":77.718124,"scale":"farenheit"} ```
-Because the IoT Edge device is transforming the data from the downstream device, the telemetry is associated with the gateway device in IoT Central. To visualize the transformed telemetry, create a view in the **IoT Edge gateway device** template and republish it.
+The temperature is sent in Fahrenheit. Because the IoT Edge device is transforming the data from the downstream device, the telemetry is associated with the gateway device in IoT Central. To visualize the transformed telemetry, create a view in the **IoT Edge gateway device** template and republish it.
## Data transformation at egress
iot-central Overview Iot Central Operator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-operator.md
To manage individual devices, use device views to set device and cloud propertie
To manage devices in bulk, create and schedule jobs. Jobs can update properties and run commands on multiple devices. To learn more, see [Create and run a job in your Azure IoT Central application](howto-manage-devices-in-bulk.md).
+To manage IoT Edge devices, [create and edit deployment manifests](concepts-iot-edge.md#iot-edge-deployment-manifests-and-iot-central-device-templates) and deploy them onto the device directly from IoT Central. You can also run commands on modules from within IoT Central.
+ If your IoT Central application uses *organizations*, an administrator controls which devices you have access to.

## Troubleshoot and remediate issues
iot-central Overview Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central.md
Title: What is Azure IoT Central | Microsoft Docs
-description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions and helps to reduce the burden and cost of IoT management operations, and development. This article provides an overview of the features of Azure IoT Central.
+description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. It helps to reduce the burden and cost of IoT management operations, and development. This article provides an overview of the features of Azure IoT Central.
Last updated 12/22/2021
# What is Azure IoT Central?
-IoT Central is an IoT application platform that reduces the burden and cost of developing, managing, and maintaining enterprise-grade IoT solutions. Choosing to build with IoT Central gives you the opportunity to focus time, money, and energy on transforming your business with IoT data, rather than just maintaining and updating a complex and continually evolving IoT infrastructure.
+IoT Central is an IoT application platform that reduces the burden and cost of developing, managing, and maintaining enterprise-grade IoT solutions. If you choose to build with IoT Central, you'll have the opportunity to focus time, money, and energy on transforming your business with IoT data, rather than just maintaining and updating a complex and continually evolving IoT infrastructure.
The web UI lets you quickly connect devices, monitor device conditions, create rules, and manage millions of devices and their data throughout their life cycle. Furthermore, it enables you to act on device insights by extending IoT intelligence into line-of-business applications.
-This article outlines, for IoT Central:
+This article provides an overview of IoT Central and describes its core functionality.
-- How to create your application.
-- How to connect your devices to your application.
-- How to integrate your application with other services.
-- How to administer your application.
-- The typical user roles associated with a project.
-- Pricing options.
-
-## Create your IoT Central application
+## Create an IoT Central application
[Quickly deploy a new IoT Central application](quick-deploy-iot-central.md) and then customize it to your specific requirements. Application templates in Azure IoT Central are a tool to help you kickstart your IoT solution development. You can use app templates for everything from getting a feel for what is possible, to fully customizing your application to resell to your customers.
Start with a generic _application template_ or with one of the industry-focused
- [Retail](../retail/tutorial-in-store-analytics-create-app.md)
- [Energy](../energy/tutorial-smart-meter-app.md)
- [Government](../government/tutorial-connected-waste-management.md)
-- [Healthcare](../healthcare/tutorial-continuous-patient-monitoring.md).
+- [Healthcare](../healthcare/tutorial-continuous-patient-monitoring.md)
See the [Create a new application](quick-deploy-iot-central.md) quickstart for a walk-through of how to create your first application.

## Connect devices
-After creating your application, the first step is to create and connect devices. Every device connected to IoT Central uses a _device template_. A device template is the blueprint that defines the characteristics and behavior of a type of device such as the:
+After you create your application, the next step is to create and connect devices. Every device connected to IoT Central uses a _device template_. A device template is the blueprint that defines the characteristics and behavior of a type of device such as the:
- Telemetry it sends. Examples include temperature and humidity. Telemetry is streaming data.
- Business properties that an operator can modify. Examples include a customer address and a last serviced date.
- Device properties that are set by a device and are read-only in the application. For example, the state of a valve as either open or shut.
-- Properties, that an operator sets, that determine the behavior of the device. For example, a target temperature for the device.
-- Commands, that an operator can call, that run on a device. For example, a command to remotely reboot a device.
+- Properties that an operator sets and that determine the behavior of the device. For example, a target temperature for the device.
+- Commands that an operator can call and that run on a device. For example, a command to remotely reboot a device. A minimal device model sketch illustrating these capability types follows this list.
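As a hedged illustration only (not taken from the article), the telemetry, property, and command capability types map onto a device model written in DTDL; the interface ID and names below are invented:

```json
{
  "@context": "dtmi:dtdl:context;2",
  "@id": "dtmi:contoso:exampleThermostat;1",
  "@type": "Interface",
  "displayName": "Example thermostat",
  "contents": [
    { "@type": "Telemetry", "name": "temperature", "schema": "double" },
    { "@type": "Property", "name": "targetTemperature", "schema": "double", "writable": true },
    { "@type": "Command", "name": "reboot" }
  ]
}
```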
Every [device template](howto-set-up-template.md) includes:
iot-central Quick Configure Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/quick-configure-rules.md
In this quickstart, you create an IoT Central rule that sends an email when some
## Prerequisites
-Before you begin, you should complete the previous quickstart [Create and use an Azure IoT Central application](./quick-deploy-iot-central.md) to connect the **IoT Plug and Play** smartphone app to your IoT Central application.
+Before you begin, you should complete the previous quickstart [Connect your first device](./quick-deploy-iot-central.md). It shows you how to create an Azure IoT Central application and connect the **IoT Plug and Play** smartphone app to it.
## Create a telemetry-based rule
When the phone is lying on its back, the **z** value is greater than `9`, when t
1. In the **Target devices** section, select **IoT Plug and Play mobile** as the **Device template**. This option filters the devices the rule applies to by device template type. You can add more filter criteria by selecting **+ Filter**.
-1. In the **Conditions** section, you define what triggers your rule. Use the following information to define a single condition based on accelerometer z-axis telemetry. This rule uses aggregation so you receive a maximum of one email for each device every five minutes:
+1. In the **Conditions** section, you define what triggers your rule. Use the following information to define a single condition based on accelerometer z-axis telemetry. This rule uses aggregation, so you receive a maximum of one email for each device every five minutes:
| Field | Value | |||
iot-central Tutorial Water Quality Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/government/tutorial-water-quality-monitoring.md
The water quality monitoring application you created from the application templa
:::image type="content" source="media/tutorial-waterqualitymonitoring/water-quality-monitor-device1.png" alt-text="Select device 1":::
-1. On the **Cloud Properties** tab, change the **Acidity (pH) threshold** value from **8** to **9** and select **Save**.
+1. On the **Cloud Properties** tab, change the **Acidity (pH) threshold** value to **9** and select **Save**.
1. Explore the **Device Properties** tab and the **Device Dashboard** tab. > [!NOTE]
iot-central Tutorial In Store Analytics Create App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-in-store-analytics-create-app.md
Last updated 12/20/2021
For many retailers, environmental conditions within their stores are a key differentiator from their competitors. Retailers want to maintain pleasant conditions within their stores for the benefit of their customers.
-You can use the IoT Central in-store analytics condition monitoring application template to build an end-to-end solution. The application template lets you digitally connect to and monitor a retail store environment using of different kinds of sensor devices. These sensor devices generate telemetry that you can convert into business insights to help the retailer reduce operating costs and create a great experience for their customers.
+You can use the IoT Central in-store analytics condition monitoring application template to build an end-to-end solution. The application template lets you digitally connect to and monitor a retail store environment using different kinds of sensor devices. These sensor devices generate telemetry that you can convert into business insights to help the retailer reduce operating costs and create a great experience for their customers.
Use the application template to:
Use the IoT Central *in-store analytics* application template and the guidance i
:::image type="content" source="media/tutorial-in-store-analytics-create-app/store-analytics-architecture-frame.png" alt-text="Azure IoT Central Store Analytics.":::
-- Set of IoT sensors sending telemetry data to a gateway device.
-- Gateway devices sending telemetry and aggregated insights to IoT Central.
-- Continuous data export to the desired Azure service for manipulation.
-- Data can be structured in the desired format and sent to a storage service.
-- Business applications can query data and generate insights that power retail operations.
+1. Set of IoT sensors sending telemetry data to a gateway device.
+1. Gateway devices sending telemetry and aggregated insights to IoT Central.
+1. Continuous data export to the desired Azure service for manipulation.
+1. Data can be structured in the desired format and sent to a storage service.
+1. Business applications can query data and generate insights that power retail operations.
## Condition monitoring sensors
To create a custom theme:
To update the application image:
-1. Select **Administration > Application settings**.
+1. Select **Administration > Your Application**.
1. Use the **Select image** button to choose an image to upload as the application image. This image appears on the application tile in the **My Apps** page of the IoT Central application manager.
To update the application image:
### Create device templates
-You can create device templates that enable you and the application operators to configure and manage devices. You create a template by building a custom one, by importing an existing template file, or by importing a template from the Azure IoT device catalog. After you create and customize a device template, use it to connect real devices to your application. Optionally, use a device template to generate simulated devices for testing.
+You can create device templates that enable you and the application operators to configure and manage devices. You can create a template by building a custom one, by importing an existing template file, or by importing a template from the Azure IoT device catalog. After you create and customize a device template, use it to connect real devices to your application. Optionally, use a device template to generate simulated devices for testing.
The **In-store analytics - checkout** application template has device templates for several devices. There are device templates for two of the three devices you use in the application. The RuuviTag device template isn't included in the **In-store analytics - checkout** application template. In this section, you add a device template for RuuviTag sensors to your application.
To add a RuuviTag device template to your application:
1. Find and select the **RuuviTag Multisensor** device template in the Azure IoT device catalog.
-1. Select **Next: Customize**.
+1. Select **Next: Review**.
:::image type="content" source="media/tutorial-in-store-analytics-create-app/ruuvitag-device-template.png" alt-text="Screenshot that highlights the Next: Customize button.":::
To customize the built-in interfaces of the RuuviTag device template:
1. Select **Customize** in the RuuviTag device template menu.
-1. Scroll in the list of capabilities and find the `humidity` telemetry type. It's the row item with the editable **Display name** value of *humidity*.
+1. Scroll in the list of capabilities and find the `RelativeHumidity` telemetry type. It's the row item with the editable **Display name** value of *RelativeHumidity*.
-In the following steps, you customize the `humidity` telemetry type for the RuuviTag sensors. Optionally, customize some of the other telemetry types.
+In the following steps, you customize the `RelativeHumidity` telemetry type for the RuuviTag sensors. Optionally, customize some of the other telemetry types.
-For the `humidity` telemetry type, make the following changes:
+For the `RelativeHumidity` telemetry type, make the following changes:
1. Select the **Expand** control to expand the schema details for the row.
-1. Update the **Display Name** value from *humidity* to a custom value such as *Relative humidity*.
+1. Update the **Display Name** value from *RelativeHumidity* to a custom value such as *Humidity*.
-1. Change the **Semantic Type** option from *None* to *Humidity*. Optionally, set schema values for the humidity telemetry type in the expanded schema view. Schema settings allow you to create detailed validation requirements for the data that your sensors track. For example, you could set minimum and maximum operating range values for a given interface.
+1. Change the **Semantic Type** option from *Relative humidity* to *Humidity*. Optionally, set schema values for the humidity telemetry type in the expanded schema view. Schema settings allow you to create detailed validation requirements for the data that your sensors track. For example, you could set minimum and maximum operating range values for a given interface.
1. Select **Save** to save your changes.
To create a rule:
1. Enter *Humidity level* as the name of the rule.
-1. Choose the RuuviTag device template in **Scopes**. The rule you define will apply to all sensors based on that template. Optionally, you could create a filter that would apply the rule only to a defined subset of the sensors.
+1. Choose the RuuviTag device template in **Target devices**. The rule you define will apply to all sensors based on that template. Optionally, you could create a filter that would apply the rule only to a defined subset of the sensors.
-1. Choose `Relative humidity` as the **Telemetry**. It's the device capability that you customized in a previous step.
+1. Choose `Humidity` as the **Telemetry**. It's the device capability that you customized in a previous step.
1. Choose `Is greater than` as the **Operator**.
iot-develop Quickstart Devkit Microchip Atsame54 Xpro https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-devkit-microchip-atsame54-xpro.md
Keep Termite open to monitor device output in the following steps.
* IAR Embedded Workbench for ARM (EW for ARM). You can download and install a [14-day free trial of IAR EW for ARM](https://www.iar.com/products/architectures/arm/iar-embedded-workbench-for-arm/).
-* Download the [Azure_RTOS_6.1_ATSAME54-XPRO_IAR_Samples_2020_10_10.zip](https://github.com/azure-rtos/samples/releases/download/rel_6.1_pnp_beta/Azure_RTOS_6.1_PnP_ATSAME54-XPRO_IAR_Sample_2021_03_18.zip) file and extract it to a working directory. Choose a directory with a short path to avoid compiler errors when you build.
+* Download the [Azure_RTOS_6.1_ATSAME54-XPRO_IAR_Samples_2020_10_10.zip](https://github.com/azure-rtos/samples/releases/download/v6.1_rel/Azure_RTOS_6.1_ATSAME54-XPRO_IAR_Samples_2021_11_03.zip) file and extract it to a working directory. Choose a directory with a short path to avoid compiler errors when you build.
[!INCLUDE [iot-develop-embedded-create-central-app-with-device](../../includes/iot-develop-embedded-create-central-app-with-device.md)]
Keep Termite open to monitor device output in the following steps.
* [MPLAB XC32/32++ Compiler 2.4.0 or later](https://www.microchip.com/mplab/compilers).
-* Download the [Azure_RTOS_6.1_ATSAME54-XPRO_MPLab_Samples_2020_10_10.zip](https://github.com/azure-rtos/samples/releases/download/rel_6.1_pnp_beta/Azure_RTOS_6.1_PnP_ATSAME54-XPRO_MPLab_Sample_2021_03_18.zip) file and unzip it to a working directory. Choose a directory with a short path to avoid compiler errors when you build.
+* Download the [Azure_RTOS_6.1_ATSAME54-XPRO_MPLab_Samples_2020_10_10.zip](https://github.com/azure-rtos/samples/releases/download/v6.1_rel/Azure_RTOS_6.1_ATSAME54-XPRO_MPLab_Samples_2021_11_03.zip) file and unzip it to a working directory. Choose a directory with a short path to avoid compiler errors when you build.
[!INCLUDE [iot-develop-embedded-create-central-app-with-device](../../includes/iot-develop-embedded-create-central-app-with-device.md)]
iot-edge How To Manage Device Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-manage-device-certificates.md
For more information about the function of the different certificates on an IoT
For these two automatically generated certificates, you have the option of setting a flag in the config file to configure the number of days for the lifetime of the certificates.

>[!NOTE]
->There is a third auto-generated certificate that the IoT Edge security manager creates, the **IoT Edge hub server certificate**. This certificate always has a 90 day lifetime, but is automatically renewed before expiring. The auto-generated CA lifetime value set in the config file doesn't affect this certificate.
+>There is a third auto-generated certificate that the IoT Edge security manager creates, the **IoT Edge hub server certificate**. This certificate always has a 30 day lifetime, but is automatically renewed before expiring. The auto-generated CA lifetime value set in the config file doesn't affect this certificate.
Upon expiry after the specified number of days, IoT Edge has to be restarted to regenerate the device CA certificate. The device CA certificate won't be renewed automatically.
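As a hedged illustration (not part of the article), on an IoT Edge 1.1 device the lifetime flag is typically set in the `certificates` section of *config.yaml*; the key name below is an assumption, so check the configuration reference for your installed IoT Edge version:

```yaml
certificates:
  # Assumed key name - number of days the auto-generated device CA certificate is valid
  auto_generated_ca_lifetime_days: 90
```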
iot-hub Iot Hub Device Streams Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-device-streams-overview.md
Two sides of each stream (on the device and service side) use the IoT Hub SDK to
Use the links below to learn more about device streams. > [!div class="nextstepaction"]
-> [Device streams on IoT show (Channel 9)](https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fchannel9.msdn.com%2FShows%2FInternet-of-Things-Show%2FAzure-IoT-Hub-Device-Streams&data=02%7C01%7Crezas%40microsoft.com%7Cc3486254a89a43edea7c08d67a88bcea%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636831125031268909&sdata=S6u9qiehBN4tmgII637uJeVubUll0IZ4p2ddtG5pDBc%3D&reserved=0)
+> [Azure IoT Hub Device Streams Video](/shows/Internet-of-Things-Show/Azure-IoT-Hub-Device-Streams)
lighthouse Managed Services Offers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/concepts/managed-services-offers.md
Title: Managed Service offers in Azure Marketplace description: Offer your Azure Lighthouse management services to customers through Managed Services offers in Azure Marketplace. Previously updated : 09/08/2021 Last updated : 02/02/2022
This article describes the **Managed Service** offer type in [Azure Marketplace]
Managed Service offers streamline the process of onboarding customers to Azure Lighthouse. When a customer purchases an offer in Azure Marketplace, they'll be able to specify which subscriptions and/or resource groups should be onboarded.
-For each offer, you define the access that users in your organization will have to work on resources in the customer tenant. This is done through a manifest that specifies the Azure Active Directory (Azure AD) users, groups, and service principals that will have access to customer resources, along with [roles](tenants-users-roles.md) that define their level of access.
+For each offer, you define the access that users in your organization will have to work on resources in the customer tenant. This is done through a manifest that specifies the Azure Active Directory (Azure AD) users, groups, and service principals that will have access to customer resources, along with [roles](tenants-users-roles.md#role-support-for-azure-lighthouse) that define their level of access.
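As a hedged sketch (not taken from the article), each authorization in that manifest pairs an Azure AD object ID with a built-in role definition ID. The principal values below are placeholders, and the role definition ID shown is assumed to be the built-in Contributor role:

```json
{
  "principalId": "00000000-0000-0000-0000-000000000000",
  "principalIdDisplayName": "Contoso managed services team",
  "roleDefinitionId": "b24988ac-6180-42a0-ab88-20f7382dd24c"
}
```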
> [!NOTE] > Managed Service offers may not be available in Azure Government and other national clouds.
-## Public and private offers
+## Public and private plans
Each Managed Service offer includes one or more plans. Plans can be either private or public.
-If you want to limit your offer to specific customers, you can publish a private plan. When you do so, the plan can only be purchased for the specific subscription IDs that you provide. For more info, see [Private offers](../../marketplace/private-offers.md).
+If you want to limit your offer to specific customers, you can publish a private plan. When you do so, the plan can only be purchased for the specific subscription IDs that you provide. For more info, see [Private plans](../../marketplace/private-plans.md).
> [!NOTE]
-> Private offers are not supported with subscriptions established through a reseller of the Cloud Solution Provider (CSP) program.
+> Private plans are not supported with subscriptions established through a reseller of the Cloud Solution Provider (CSP) program.
Public plans let you promote your services to new customers. These are usually more appropriate when you only require limited access to the customer's tenant. Once you've established a relationship with a customer, if they decide to grant your organization additional access, you can do so either by publishing a new private plan for that customer only, or by [onboarding them for further access using Azure Resource Manager templates](../how-to/onboard-customer.md).
lighthouse Publish Managed Services Offers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/how-to/publish-managed-services-offers.md
Title: Publish a Managed Service offer to Azure Marketplace description: Learn how to publish a Managed Service offer that onboards customers to Azure Lighthouse. Previously updated : 08/10/2021 Last updated : 02/02/2022 # Publish a Managed Service offer to Azure Marketplace
-In this article, you'll learn how to publish a public or private Managed Service offer to [Azure Marketplace](https://azuremarketplace.microsoft.com) using the [Commercial Marketplace](../../marketplace/overview.md) program in Partner Center. Customers who purchase the offer will then delegate subscriptions or resource groups, allowing you to manage them through [Azure Lighthouse](../overview.md).
+In this article, you'll learn how to publish a public or private Managed Service offer to [Azure Marketplace](https://azuremarketplace.microsoft.com) using the [commercial marketplace](../../marketplace/overview.md) program in Partner Center. Customers who purchase the offer will then delegate subscriptions or resource groups, allowing you to manage them through [Azure Lighthouse](../overview.md).
## Publishing requirements
-You need to have a valid [account in Partner Center](../../marketplace/create-account.md) to create and publish offers. If you don't have an account already, the [sign-up process](https://aka.ms/joinmarketplace) will lead you through the steps of creating an account in Partner Center and enrolling in the Commercial Marketplace program.
+You need to have a valid [account in Partner Center](../../marketplace/create-account.md) to create and publish offers. If you don't have an account already, the [sign-up process](https://aka.ms/joinmarketplace) will lead you through the steps of creating an account in Partner Center and enrolling in the commercial marketplace program.
Per the [Managed Service offer certification requirements](/legal/marketplace/certification-policies#700-managed-services), you must have a [Silver or Gold Cloud Platform competency level](/partner-center/learn-about-competencies) or be an [Azure Expert MSP](https://partner.microsoft.com/membership/azure-expert-msp) in order to publish a Managed Service offer. You must also [enter a lead destination that will create a record in your CRM system](../../marketplace/plan-managed-service-offer.md#customer-leads) each time a customer deploys your offer.
The following table can help determine whether to onboard customers by publishin
|Requires [Partner Center account](../../marketplace/create-account.md) |Yes |No | |Requires [Silver or Gold Cloud Platform competency level](/partner-center/learn-about-competencies) or [Azure Expert MSP](https://partner.microsoft.com/membership/azure-expert-msp) |Yes |No | |Available to new customers through Azure Marketplace |Yes |No |
-|Can limit offer to specific customers |Yes (only with private offers, which can't be used with subscriptions established through a reseller of the Cloud Solution Provider (CSP) program) |Yes |
+|Can limit offer to specific customers |Yes (only with private plans, which can't be used with subscriptions established through a reseller of the Cloud Solution Provider (CSP) program) |Yes |
|Requires customer acceptance in Azure portal |Yes |No | |Can use automation to onboard multiple subscriptions, resource groups, or customers |No |Yes | |Immediate access to new built-in roles and Azure Lighthouse features |Not always (generally available after some delay) |Yes |
The following table can help determine whether to onboard customers by publishin
For detailed instructions about how to create your offer, including all of the information and assets you'll need to provide, see [Create a Managed Service offer](../../marketplace/create-managed-service-offer.md).
-To learn about the general publishing process, review the [Commercial Marketplace documentation](../../marketplace/overview.md). You should also review the [commercial marketplace certification policies](/legal/marketplace/certification-policies), particularly the [Managed Services](/legal/marketplace/certification-policies#700-managed-services) section.
+To learn about the general publishing process, review the [commercial marketplace documentation](../../marketplace/overview.md). You should also review the [commercial marketplace certification policies](/legal/marketplace/certification-policies), particularly the [Managed Services](/legal/marketplace/certification-policies#700-managed-services) section.
Once a customer adds your offer, they will be able to delegate one or more subscriptions or resource groups, which will then be [onboarded to Azure Lighthouse](#the-customer-onboarding-process).
Once a customer adds your offer, they will be able to delegate one or more subsc
## Publish your offer
-Once you've completed all of the sections, your next step is to publish the offer to Azure Marketplace. Select the **Publish** button to initiate the process of making your offer live. More info about this process can be found [here](../../marketplace/review-publish-offer.md).
+Once you've completed all of the sections, your next step is to publish the offer. After you initiate the publishing process, your offer will go through several validation and publishing steps. For more information, see [Review and publish an offer to the commercial marketplace](../../marketplace/review-publish-offer.md).
-You can [publish an updated version of your offer](../../marketplace/update-existing-offer.md) at any time. For example, you may want to add a new role definition to a previously-published offer. When you do so, customers who have already added the offer will see an icon in the [**Service providers**](view-manage-service-providers.md) page in the Azure portal that lets them know an update is available. Each customer will be able to [review the changes](view-manage-service-providers.md#update-service-provider-offers) and decide whether they want to update to the new version.
+You can [publish an updated version of your offer](../../marketplace/update-existing-offer.md) at any time. For example, you may want to add a new role definition to a previously-published offer. When you do so, customers who have already added the offer will see an icon in the [**Service providers**](view-manage-service-providers.md) page in the Azure portal that lets them know an update is available. Each customer will be able to [review the changes and update to the new version](view-manage-service-providers.md#update-service-provider-offers).
## The customer onboarding process
-After a customer adds your offer, they'll be able to [delegate one or more specific subscriptions or resource groups](view-manage-service-providers.md#delegate-resources), which will then be onboarded to Azure Lighthouse. If a customer has accepted an offer but has not yet delegated any resources, they'll see a note at the top of the **Provider offers** section of the [**Service providers**](view-manage-service-providers.md) page in the Azure portal.
+After a customer adds your offer, they can [delegate one or more specific subscriptions or resource groups](view-manage-service-providers.md#delegate-resources), which will be onboarded to Azure Lighthouse. If a customer has accepted an offer but has not yet delegated any resources, they'll see a note at the top of the **Provider offers** section of the [**Service providers**](view-manage-service-providers.md) page in the Azure portal.
> [!IMPORTANT] > Delegation must be done by a non-guest account in the customer's tenant who has a role with the `Microsoft.Authorization/roleAssignments/write` permission, such as [Owner](../../role-based-access-control/built-in-roles.md#owner), for the subscription being onboarded (or which contains the resource groups that are being onboarded). To find users who can delegate the subscription, a user in the customer's tenant can select the subscription in the Azure portal, open **Access control (IAM)**, and [view all users with the Owner role](../../role-based-access-control/role-assignments-list-portal.md#list-owners-of-a-subscription).
-Once the customer delegates a subscription (or one or more resource groups within a subscription), the **Microsoft.ManagedServices** resource provider will be registered for that subscription, and users in your tenant will be able to access the delegated resources according to the authorizations in your offer.
+Once the customer delegates a subscription (or one or more resource groups within a subscription), the **Microsoft.ManagedServices** resource provider will be registered for that subscription, and users in your tenant will be able to access the delegated resources according to the authorizations that you defined in your offer.
> [!NOTE] > To delegate additional subscriptions or resource groups to the same offer at a later time, the customer will need to [manually register the **Microsoft.ManagedServices** resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider) on each subscription before delegating.
If you publish an updated version of your offer, the customer can [review the ch
## Next steps
-- Learn about the [Commercial Marketplace](../../marketplace/overview.md).
+- Learn about the [commercial marketplace](../../marketplace/overview.md).
- [Link your partner ID](partner-earned-credit.md) to track your impact across customer engagements. - Learn about [cross-tenant management experiences](../concepts/cross-tenant-management-experience.md). - [View and manage customers](view-manage-customers.md) by going to **My customers** in the Azure portal.
load-balancer Load Balancer Custom Probe Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-custom-probe-overview.md
Azure Monitor logs are not available for both public and internal Basic Load Bal
- HTTPS probes do not support mutual authentication with a client certificate. - You should assume Health probes will fail when TCP timestamps are enabled. - A basic SKU load balancer health probe isn't supported with a virtual machine scale set.
+- HTTP probes do not support probing on the following ports due to security concerns: 19, 21, 25, 70, 110, 119, 143, 220, 993.
## Next steps
load-balancer Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/whats-new.md
You can also find the latest Azure Load Balancer updates and subscribe to the RS
| Feature | Support for moves across resource groups | Standard Load Balancer and Standard Public IP support for [resource group moves](https://azure.microsoft.com/updates/standard-resource-group-move/). | October 2020 | | Feature | [Cross-region load balancing with Global tier on Standard LB](https://azure.microsoft.com/updates/preview-azure-load-balancer-now-supports-crossregion-load-balancing/) | Azure Load Balancer supports Cross Region Load Balancing. Previously, Standard Load Balancer had a regional scope. With this release, you can load balance across multiple Azure regions via a single, static, global anycast Public IP address. | September 2020 | | Feature| Azure Load Balancer Insights using Azure Monitor | Built as part of Azure Monitor for Networks, customers now have topological maps for all their Load Balancer configurations and health dashboards for their Standard Load Balancers preconfigured with metrics in the Azure portal. [Get started and learn more](https://azure.microsoft.com/blog/introducing-azure-load-balancer-insights-using-azure-monitor-for-networks/) | June 2020 |
-| Validation | Addition of validation for HA ports | A validation was added to ensure that HA port rules and non HA port rules are only configurable when Floating IP is enabled. Previously, the this configuration would go through, but not work as intended. No change to functionality was made. You can learn more [here](load-balancer-ha-ports-overview.md#limitations)| June 2020 |
+| Validation | Addition of validation for HA ports | A validation was added to ensure that HA port rules and non HA port rules are only configurable when Floating IP is enabled. Previously, this configuration would go through, but not work as intended. No change to functionality was made. You can learn more [here](load-balancer-ha-ports-overview.md#limitations)| June 2020 |
| Feature| IPv6 support for Azure Load Balancer (generally available) | You can have IPv6 addresses as your frontend for your Azure Load Balancers. Learn how to [create a dual stack application here](./virtual-network-ipv4-ipv6-dual-stack-standard-load-balancer-powershell.md) |April 2020| | Feature| TCP Resets on Idle Timeout (generally available)| Use TCP resets to create a more predictable application behavior. [Learn more](load-balancer-tcp-reset.md)| February 2020 |
The product group is actively working on resolutions for the following known iss
|Issue |Description |Mitigation |
| - | - | - |
| IP based LB outbound IP | IP based LB leverages Azure's Default Outbound Access IP for outbound when no outbound rules are configured | In order to prevent outbound access from this IP, please leverage Outbound rules or a NAT Gateway for a predictable IP address and to prevent SNAT port exhaustion |
+| numberOfProbes, "Unhealthy threshold" | The health probe configuration property numberOfProbes, shown as "Unhealthy threshold" in the portal, is not respected. Load Balancer health probes mark an instance up or down immediately after one probe, regardless of the property's configured value. | To reflect the current behavior, set the value of numberOfProbes ("Unhealthy threshold" in the portal) to 1. A hedged CLI sketch follows this table. |
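A hedged workaround sketch (not from the article); the command and parameter names are assumptions, so verify them with `az network lb probe update --help` for your CLI version:

```azurecli
# Assumed syntax: set the probe's unhealthy threshold (numberOfProbes) to 1
az network lb probe update \
  --resource-group MyResourceGroup \
  --lb-name MyLoadBalancer \
  --name MyHealthProbe \
  --threshold 1
```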
logic-apps Manage Logic Apps With Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/manage-logic-apps-with-azure-portal.md
To stop the trigger from firing the next time when the trigger condition is met,
1. Save your changes. This step resets your trigger's current state. 1. [Reactivate your logic app](#disable-enable-single-logic-app).
+* When a workflow is disabled, you can still resubmit runs.
+ <a name="disable-enable-single-logic-app"></a> ### Disable or enable a single logic app
To stop the trigger from firing the next time when the trigger condition is met,
1. To confirm whether your operation succeeded or failed, on the main Azure toolbar, open the **Notifications** list (bell icon).
-> [!NOTE]
-> When a logic app workflow is disabled, you can still resubmit runs.
- <a name="disable-or-enable-multiple-logic-apps"></a> ### Disable or enable multiple logic apps
logic-apps Manage Logic Apps With Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/manage-logic-apps-with-visual-studio.md
Title: Edit and manage logic apps by using Visual Studio with Cloud Explorer
description: Edit, update, manage, add to source control, and deploy logic apps by using Visual Studio with Cloud Explorer ms.suite: integration--++ Last updated 01/28/2022
To stop the trigger from firing the next time when the trigger condition is met,
1. Save your changes. This step resets your trigger's current state. 1. [Reactivate your logic app](#enable-logic-apps).
+* When a workflow is disabled, you can still resubmit runs.
+ <a name="disable-logic-apps"></a> ### Disable logic apps
In Cloud Explorer, open your logic app's shortcut menu, and select **Disable**.
![Disable your logic app in Cloud Explorer](./media/manage-logic-apps-with-visual-studio/disable-logic-app-cloud-explorer.png)
-> [!NOTE]
-> When a logic app workflow is disabled, you can still resubmit runs.
- <a name="enable-logic-apps"></a> ### Enable logic apps
logic-apps Quickstart Create Logic Apps Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/quickstart-create-logic-apps-visual-studio-code.md
ms.suite: integration
Previously updated : 05/25/2021 Last updated : 02/02/2022 #Customer intent: As a developer, I want to create my first automated workflow by using Azure Logic Apps while working in Visual Studio Code
Before you start, make sure that you have these items:
* Basic knowledge about [logic app workflow definitions](../logic-apps/logic-apps-workflow-definition-language.md) and their structure as described with JSON
- If you're new to Logic Apps, try this [quickstart](../logic-apps/quickstart-create-first-logic-app-workflow.md), which creates your first logic apps in the Azure portal and focuses more on the basic concepts.
+ If you're new to Azure Logic Apps, try this [quickstart](../logic-apps/quickstart-create-first-logic-app-workflow.md), which creates your first logic apps in the Azure portal and focuses more on the basic concepts.
* Access to the web for signing in to Azure and your Azure subscription
Before you start, make sure that you have these items:
For more information, see [Extension Marketplace](https://code.visualstudio.com/docs/editor/extension-gallery). To contribute to this extension's open-source version, visit the [Azure Logic Apps extension for Visual Studio Code on GitHub](https://github.com/Microsoft/vscode-azurelogicapps).
-* If your logic app needs to communicate through a firewall that limits traffic to specific IP addresses, that firewall needs to allow access for *both* the [inbound](logic-apps-limits-and-config.md#inbound) and [outbound](logic-apps-limits-and-config.md#outbound) IP addresses used by the Logic Apps service or runtime in the Azure region where your logic app exists. If your logic app also uses [managed connectors](../connectors/managed.md), such as the Office 365 Outlook connector or SQL connector, or uses [custom connectors](/connectors/custom-connectors/), the firewall also needs to allow access for *all* the [managed connector outbound IP addresses](logic-apps-limits-and-config.md#outbound) in your logic app's Azure region.
+* If your logic app needs to communicate through a firewall that limits traffic to specific IP addresses, that firewall needs to allow access for *both* the [inbound](logic-apps-limits-and-config.md#inbound) and [outbound](logic-apps-limits-and-config.md#outbound) IP addresses used by Azure Logic Apps or runtime in the Azure region where your logic app exists. If your logic app also uses [managed connectors](../connectors/managed.md), such as the Office 365 Outlook connector or SQL connector, or uses [custom connectors](/connectors/custom-connectors/), the firewall also needs to allow access for *all* the [managed connector outbound IP addresses](logic-apps-limits-and-config.md#outbound) in your logic app's Azure region.
<a name="access-azure"></a>
In Visual Studio Code, you can open and review the earlier versions for your log
In Visual Studio Code, if you edit a published logic app and save your changes, you *overwrite* your already deployed app. To avoid breaking your logic app in production and minimize disruption, disable your logic app first. You can then reactivate your logic app after you've confirmed that your logic app still works.
-> [!NOTE]
-> Disabling a logic app affects workflow instances in the following ways:
->
-> * The Logic Apps service continues all in-progress and pending runs until they finish. Based on the volume or backlog, this process might take time to complete.
->
-> * The Logic Apps service doesn't create or run new workflow instances.
->
-> * The trigger won't fire the next time that its conditions are met. However, the trigger state remembers the point at which the logic app was stopped. So, if you reactivate the logic app, the trigger fires for all the unprocessed items since the last run.
->
-> To stop the trigger from firing on unprocessed items since the last run, clear the trigger's state before you reactivate the logic app:
->
-> 1. In the logic app, edit any part of the workflow's trigger.
-> 1. Save your changes. This step resets your trigger's current state.
-> 1. Reactivate your logic app.
+* Azure Logic Apps continues all in-progress and pending runs until they finish. Based on the volume or backlog, this process might take time to complete.
+
+* Azure Logic Apps doesn't create or run new workflow instances.
+
+* The trigger won't fire the next time that its conditions are met.
+
+* The trigger state remembers the point at which the logic app was stopped. So, if you reactivate the logic app, the trigger fires for all the unprocessed items since the last run.
+
+ To stop the trigger from firing on unprocessed items since the last run, clear the trigger's state before you reactivate the logic app:
+
+ 1. In the logic app, edit any part of the workflow's trigger.
+ 1. Save your changes. This step resets your trigger's current state.
+ 1. Reactivate your logic app.
+
+* When a workflow is disabled, you can still resubmit runs.
1. If you haven't signed in to your Azure account and subscription yet from inside Visual Studio Code, follow the [previous steps to sign in now](#access-azure).
In Visual Studio Code, if you edit a published logic app and save your changes,
Deleting a logic app affects workflow instances in the following ways:
-* The Logic Apps service makes a best effort to cancel any in-progress and pending runs.
+* Azure Logic Apps makes a best effort to cancel any in-progress and pending runs.
Even with a large volume or backlog, most runs are canceled before they finish or start. However, the cancellation process might take time to complete. Meanwhile, some runs might get picked up for execution while the service works through the cancellation process.
-* The Logic Apps service doesn't create or run new workflow instances.
+* Azure Logic Apps doesn't create or run new workflow instances.
* If you delete a workflow and then recreate the same workflow, the recreated workflow won't have the same metadata as the deleted workflow. You have to resave any workflow that called the deleted workflow. That way, the caller gets the correct information for the recreated workflow. Otherwise, calls to the recreated workflow fail with an `Unauthorized` error. This behavior also applies to workflows that use artifacts in integration accounts and workflows that call Azure functions.
machine-learning Concept Differential Privacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-differential-privacy.md
As the amount of data that an organization collects and uses for analyses increa
Differential privacy is a set of systems and practices that help keep the data of individuals safe and private. In machine learning solutions, differential privacy may be required for regulatory compliance.
-> [!div class="mx-imgBorder"]
-> ![Differential privacy machine learning process](./media/concept-differential-privacy/differential-privacy-machine-learning.jpg)
In traditional scenarios, raw data is stored in files and databases. When users analyze data, they typically use the raw data. This is a concern because it might infringe on an individual's privacy. Differential privacy tries to deal with this problem by adding "noise" or randomness to the data so that users can't identify any individual data points. At the least, such a system provides plausible deniability. Therefore, the privacy of individuals is preserved with limited impact on the accuracy of the data.
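As an illustrative sketch of the "add noise" idea described above (not part of the article, and not the SmartNoise implementation), Laplace noise calibrated to a query's sensitivity and a privacy budget epsilon can be added to an aggregate before it's released:

```python
import numpy as np

def laplace_count(true_count: int, epsilon: float) -> float:
    """Release a count with Laplace noise; the sensitivity of a counting query is 1."""
    sensitivity = 1.0
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Example: releasing a noisy count with a privacy budget of epsilon = 0.5
print(laplace_count(true_count=1000, epsilon=0.5))
```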
Learn more about differential privacy in machine learning:
- [How to build a differentially private system](how-to-differential-privacy.md) in Azure Machine Learning.
+ - To learn more about the components of SmartNoise, check out the GitHub repositories for [SmartNoise Core](https://github.com/opendifferentialprivacy/smartnoise-core), [SmartNoise SDK](https://github.com/opendifferentialprivacy/smartnoise-sdk), and [SmartNoise samples](https://github.com/opendifferentialprivacy/smartnoise-samples).
machine-learning How To Compute Cluster Instance Os Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-compute-cluster-instance-os-upgrade.md
- Title: Upgrade host OS for compute cluster and instance -
-description: Upgrade the host OS for compute cluster and compute instance from Ubuntu 16.04 LTS to 18.04 LTS.
------ Previously updated : 03/03/2021-----
-# Upgrade compute instance and compute cluster host OS
-
-Azure Machine Learning __compute cluster__ and __compute instance__ are managed compute infrastructure. As a managed service, Microsoft manages the host OS and the packages and software versions that are installed.
-
-The host OS for compute cluster and compute instance has been Ubuntu 16.04 LTS. On **April 30, 2021**, Ubuntu is ending support for 16.04. Starting on __March 15, 2021__, Microsoft will automatically update the host OS to Ubuntu 18.04 LTS. Updating to 18.04 will ensure continued security updates and support from the Ubuntu community. This update will be rolled out across Azure regions and will be available in all regions by __April 09, 2021__. For more information on Ubuntu ending support for 16.04, see the [Ubuntu release blog](https://wiki.ubuntu.com/Releases).
-
-> [!TIP]
-> * The host OS is not the OS version you might specify for an [environment](how-to-use-environments.md) when training or deploying a model. Environments run inside Docker. Docker runs on the host OS.
-> * If you are currently using Ubuntu 16.04 based environments for training or deployment, Microsoft recommends that you switch to using Ubuntu 18.04 based images. For more information, see [How to use environments](how-to-use-environments.md) and the [Azure Machine Learning containers repository](https://github.com/Azure/AzureML-Containers/tree/master/base).
-> * When using an Azure Machine Learning compute instance based on Ubuntu 18.04, the default Python version is _Python 3.8_.
-## Creating new resources
-
-Compute cluster or compute instances created after __April 09, 2021__ use Ubuntu 18.04 LTS as the host OS by default. You cannot select a different host OS.
-
-## Upgrade existing resources
-
-If you have existing compute clusters or compute instances created before __March 15, 2021__, you need to take action to upgrade the host OS to Ubuntu 18.04. Depending on the region you access Azure Machine Learning from, we recommend you take these actions after __April 09, 2021__ to ensure our changes have rolled out to all regions:
-
-* __Azure Machine Learning compute cluster__:
-
- * If the cluster is configured with __min nodes = 0__, it will automatically be upgraded when all jobs are completed and it reduces to zero nodes.
- * If __min nodes > 0__, temporarily change the minimum nodes to zero and allow the cluster to reduce to zero nodes.
-
- For more information on changing the minimum nodes, see the [az ml computetarget update amlcompute](/cli/azure/ml(v1)/computetarget/update#az_ml_computetarget_update_amlcompute) Azure CLI command, or the [AmlCompute.update()](/python/api/azureml-core/azureml.core.compute.amlcompute.amlcompute#update-min-nodes-none--max-nodes-none--idle-seconds-before-scaledown-none-) SDK reference.
-
-* __Azure Machine Learning compute instance__: Create a new compute instance (which will use Ubuntu 18.04) and delete the old instance.
-
- * Any notebook stored in the workspace file share, data stores, of datasets will be accessible from the new compute instance.
- * If you have created custom conda environments, you can export those environments from the existing instance and import on the new instance. For information on conda export and import, see [Conda documentation](https://docs.conda.io/) at docs.conda.io.
-
- For more information, see the [What is compute instance](concept-compute-instance.md) and [Create and manage an Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md) articles
-
-## Check host OS version
-
-For information on checking the host OS version, see the Ubuntu community wiki page on [checking your Ubuntu version](https://help.ubuntu.com/community/CheckingYourUbuntuVersion).
-
-> [!TIP]
-> To use the `lsb_release -a` command from the wiki, you can [use a terminal session on a compute instance](how-to-access-terminal.md).
-## Next steps
-
-If you have any further questions or concerns, contact us at [ubuntu18azureml@service.microsoft.com](mailto:ubuntu18azureml@service.microsoft.com).
machine-learning How To Network Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-network-security-overview.md
-- Previously updated : 12/07/2021++ Last updated : 02/02/2022
In this section, you learn how Azure Machine Learning securely communicates betw
1. Azure Batch service receives the job from the workspace. It then submits the training job to the compute environment through the public load balancer for the compute resource.
-1. The compute resource receives the job and begins training. The compute resource accesses secure storage accounts to download training files and upload output.
+1. The compute resource receives the job and begins training. The compute resource uses information stored in key vault to access storage accounts to download training files and upload output.
### Limitations

- Azure Compute Instance and Azure Compute Clusters must be in the same VNet, region, and subscription as the workspace and its associated resources.
If you need to use a custom DNS solution for your virtual network, you must add
For more information on the required domain names and IP addresses, see [how to use a workspace with a custom DNS server](how-to-custom-dns.md).
+## Microsoft Sentinel
+
+Microsoft Sentinel is a security solution that can integrate with Azure Machine Learning. For example, using Jupyter notebooks provided through Azure Machine Learning. For more information, see [Use Jupyter notebooks to hunt for security threats](/azure/sentinel/notebooks).
+
+### Public access
+
+Microsoft Sentinel can automatically create a workspace for you if you are OK with a public endpoint. In this configuration, the security operations center (SOC) analysts and system administrators connect to notebooks in your workspace through Sentinel.
+
+For information on this process, see [Create an Azure ML workspace from Microsoft Sentinel](/azure/sentinel/notebooks?tabs=public-endpoint#create-an-azure-ml-workspace-from-microsoft-sentinel)
++
+### Private endpoint
+
+If you want to secure your workspace and associated resources in a VNet, you must create the Azure Machine Learning workspace first. You must also create a virtual machine 'jump box' in the same VNet as your workspace, and enable Azure Bastion connectivity to it. Similar to the public configuration, SOC analysts and administrators can connect using Microsoft Sentinel, but some operations must be performed using Azure Bastion to connect to the VM.
+
+For more information on this configuration, see [Create an Azure ML workspace from Microsoft Sentinel](/azure/sentinel/notebooks?tabs=private-endpoint#create-an-azure-ml-workspace-from-microsoft-sentinel)
++ ## Next steps This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series:
machine-learning How To Setup Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-setup-authentication.md
Previously updated : 10/21/2021 Last updated : 02/02/2022
You can use a service principal for Azure CLI commands. For more information, se
### Use a service principal with the REST API (preview)
-The service principal can also be used to authenticate to the Azure Machine Learning [REST API](/rest/api/azureml/) (preview). You use the Azure Active Directory [client credentials grant flow](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md), which allow service-to-service calls for headless authentication in automated workflows. The examples are implemented with the [ADAL library](../active-directory/azuread-dev/active-directory-authentication-libraries.md) in both Python and Node.js, but you can also use any open-source library that supports OpenID Connect 1.0.
+The service principal can also be used to authenticate to the Azure Machine Learning [REST API](/rest/api/azureml/) (preview). You use the Azure Active Directory [client credentials grant flow](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md), which allows service-to-service calls for headless authentication in automated workflows.
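The change above removes the ADAL-based samples without a replacement. As a hedged illustration only (not part of the updated article), a client credentials token can be acquired with the newer MSAL library for Python; the tenant ID, client ID, and secret below are placeholders:

```python
# Hedged sketch: acquire a client credentials token with MSAL Python.
# Install the library with: pip install msal
import msal

authority = "https://login.microsoftonline.com/your-tenant-id"
app = msal.ConfidentialClientApplication(
    client_id="your-client-id",
    client_credential="your-client-secret",
    authority=authority,
)

# Request a token for Azure Resource Manager, then pass it as a bearer token in API calls
result = app.acquire_token_for_client(scopes=["https://management.azure.com/.default"])
print(result["access_token"])
```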
-> [!NOTE]
-> MSAL.js is a newer library than ADAL, but you cannot do service-to-service authentication using client credentials with MSAL.js, since it is primarily a client-side library intended
-> for interactive/UI authentication tied to a specific user. We recommend using ADAL as shown below to build automated workflows with the REST API.
-
-#### Node.js
-
-Use the following steps to generate an auth token using Node.js. In your environment, run `npm install adal-node`. Then, use your `tenantId`, `clientId`, and `clientSecret` from the service principal you created in the steps above as values for the matching variables in the following script.
-
-```javascript
-const adal = require('adal-node').AuthenticationContext;
-
-const authorityHostUrl = 'https://login.microsoftonline.com/';
-const tenantId = 'your-tenant-id';
-const authorityUrl = authorityHostUrl + tenantId;
-const clientId = 'your-client-id';
-const clientSecret = 'your-client-secret';
-const resource = 'https://management.azure.com/';
-
-const context = new adal(authorityUrl);
-
-context.acquireTokenWithClientCredentials(
- resource,
- clientId,
- clientSecret,
- (err, tokenResponse) => {
- if (err) {
- console.log(`Token generation failed due to ${err}`);
- } else {
- console.dir(tokenResponse, { depth: null, colors: true });
- }
- }
-);
-```
-
-The variable `tokenResponse` is an object that includes the token and associated metadata such as expiration time. Tokens are valid for 1 hour, and can be refreshed by running the same call again to retrieve a new token. The following snippet is a sample response.
-
-```javascript
-{
- tokenType: 'Bearer',
- expiresIn: 3599,
- expiresOn: 2019-12-17T19:15:56.326Z,
- resource: 'https://management.azure.com/',
- accessToken: "random-oauth-token",
- isMRRT: true,
- _clientId: 'your-client-id',
- _authority: 'https://login.microsoftonline.com/your-tenant-id'
-}
-```
-
-Use the `accessToken` property to fetch the auth token. See the [REST API documentation](https://github.com/microsoft/MLOps/tree/master/examples/AzureML-REST-API) for examples on how to use the token to make API calls.
-
-#### Python
-
-Use the following steps to generate an auth token using Python. In your environment, run `pip install adal`. Then, use your `tenantId`, `clientId`, and `clientSecret` from the service principal you created in the steps above as values for the appropriate variables in the following script.
-
-```python
-from adal import AuthenticationContext
-
-client_id = "your-client-id"
-client_secret = "your-client-secret"
-resource_url = "https://login.microsoftonline.com"
-tenant_id = "your-tenant-id"
-authority = "{}/{}".format(resource_url, tenant_id)
-
-auth_context = AuthenticationContext(authority)
-token_response = auth_context.acquire_token_with_client_credentials("https://management.azure.com/", client_id, client_secret)
-print(token_response)
-```
-
-The variable `token_response` is a dictionary that includes the token and associated metadata such as expiration time. Tokens are valid for 1 hour, and can be refreshed by running the same call again to retrieve a new token. The following snippet is a sample response.
-
-```python
-{
- 'tokenType': 'Bearer',
- 'expiresIn': 3599,
- 'expiresOn': '2019-12-17 19:47:15.150205',
- 'resource': 'https://management.azure.com/',
- 'accessToken': 'random-oauth-token',
- 'isMRRT': True,
- '_clientId': 'your-client-id',
- '_authority': 'https://login.microsoftonline.com/your-tenant-id'
-}
-```
-
-Use `token_response["accessToken"]` to fetch the auth token. See the [REST API documentation](https://github.com/microsoft/MLOps/tree/master/examples/AzureML-REST-API) for examples on how to use the token to make API calls.
-
-#### Java
-
-In Java, retrieve the bearer token using a standard REST call:
-
-```java
-String tenantId = "your-tenant-id";
-String clientId = "your-client-id";
-String clientSecret = "your-client-secret";
-String resourceManagerUrl = "https://management.azure.com";
-
-HttpRequest tokenAuthenticationRequest = tokenAuthenticationRequest(tenantId, clientId, clientSecret, resourceManagerUrl);
-
-HttpClient client = HttpClient.newBuilder().build();
-Gson gson = new Gson();
-HttpResponse<String> response = client.send(tokenAuthenticationRequest, HttpResponse.BodyHandlers.ofString());
-if (response.statusCode() == 200)
-{
-    // Deserialize the JSON token response into the helper class below
-    AuthenticationBody body = gson.fromJson(response.body(), AuthenticationBody.class);
-
- // ... etc ...
-}
-// ... etc ...
-
-static HttpRequest tokenAuthenticationRequest(String tenantId, String clientId, String clientSecret, String resourceManagerUrl){
- String authUrl = String.format("https://login.microsoftonline.com/%s/oauth2/token", tenantId);
- String clientIdParam = String.format("client_id=%s", clientId);
- String resourceParam = String.format("resource=%s", resourceManagerUrl);
- String clientSecretParam = String.format("client_secret=%s", clientSecret);
-
- String bodyString = String.format("grant_type=client_credentials&%s&%s&%s", clientIdParam, resourceParam, clientSecretParam);
-
- HttpRequest request = HttpRequest.newBuilder()
- .uri(URI.create(authUrl))
- .POST(HttpRequest.BodyPublishers.ofString(bodyString))
- .build();
- return request;
-}
-
-class AuthenticationBody {
- String access_token;
- String token_type;
- int expires_in;
- String scope;
- String refresh_token;
- String id_token;
-
- AuthenticationBody() {}
-}
-```
+> [!IMPORTANT]
+> If you are currently using Azure Active Directory Authentication Library (ADAL) to get credentials, we recommend that you [Migrate to the Microsoft Authentication Library (MSAL)](/azure/active-directory/develop/msal-migration). ADAL support is scheduled to end on June 30, 2022.
-The preceding code would have to handle exceptions and status codes other than `200 OK`, but shows the pattern:
+For information and samples on authenticating with MSAL, see the following articles:
-- Use the client ID and secret to validate that your program should have access-- Use your tenant ID to specify where `login.microsoftonline.com` should be looking-- Use Azure Resource Manager as the source of the authorization token
+* JavaScript - [How to migrate a Javascript app from ADAL.js to MSAL.js](/azure/active-directory/develop/msal-compare-msal-js-and-adal-js).
+* Node.js - [How to migrate a Node.js app from ADAL to MSAL](/azure/active-directory/develop/msal-node-migration).
+* Python - [ADAL to MSAL migration guide for Python](/azure/active-directory/develop/migrate-python-adal-msal).
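As a minimal sketch of the client credentials flow with MSAL Python (not taken from the migration guides above), the following example acquires a token for the Azure Resource Manager endpoint, which can then be sent as a bearer token to the Azure Machine Learning REST API. The tenant ID, client ID, and client secret values are placeholders for your own service principal.

```python
# Minimal sketch: client credentials flow with MSAL Python (pip install msal).
# The tenant/client values below are placeholders for your own service principal.
import msal

tenant_id = "your-tenant-id"
client_id = "your-client-id"
client_secret = "your-client-secret"

app = msal.ConfidentialClientApplication(
    client_id,
    authority=f"https://login.microsoftonline.com/{tenant_id}",
    client_credential=client_secret,
)

# ".default" requests the application permissions configured for the resource.
result = app.acquire_token_for_client(scopes=["https://management.azure.com/.default"])

if "access_token" in result:
    print(result["access_token"])  # Send as "Authorization: Bearer <token>" in REST calls
else:
    print(result.get("error"), result.get("error_description"))
```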
## Use managed identity authentication
machine-learning How To Use Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-mlflow.md
ws = Workspace.from_config()
mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri()) ```
->[!NOTE]
->The tracking URI is valid up to an hour or less. If you restart your script after some idle time, use the get_mlflow_tracking_uri API to get a new URI.
- Set the MLflow experiment name with `set_experiment()` and start your training run with `start_run()`. Then use `log_metric()` to activate the MLflow logging API and begin logging your training run metrics. ```Python
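# Hedged sketch (not the article's original snippet): name the experiment,
# start a run, and log a metric with the MLflow APIs mentioned above.
import mlflow

mlflow.set_experiment("experiment-with-mlflow")  # placeholder experiment name

with mlflow.start_run():
    mlflow.log_metric("accuracy", 0.91)  # placeholder metric value
```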
marketplace Create Managed Service Offer Plans https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/create-managed-service-offer-plans.md
Previously updated : 07/12/2021 Last updated : 02/02/2022 # Create plans for a Managed Service offer
To add up to 10,000 subscription IDs with a .CSV file:
## Technical configuration
-This section creates a manifest with authorization information for managing customer resources. This information is required in order to enable [Azure delegated resource management](../lighthouse/concepts/architecture.md).
+This section creates a manifest with authorization information for Azure Active Directory (Azure AD) user accounts. This information is required in order to enable access to the customer's resources through [Azure Lighthouse](../lighthouse/overview.md).
Review [Tenants, roles, and users in Azure Lighthouse scenarios](../lighthouse/concepts/tenants-users-roles.md#best-practices-for-defining-users-and-roles) to understand which roles are supported and the best practices for defining your authorizations.
Review [Tenants, roles, and users in Azure Lighthouse scenarios](../lighthouse/c
### Manifest 1. Under **Manifest**, provide a **Version** for the manifest. Use the format n.n.n (for example, 1.2.5).
-2. Enter your **Tenant ID**. This is a GUID associated with the Azure Active Directory (Azure AD) tenant ID of your organization; that is, the managing tenant from which you will access your customers' resources. If you don't have this handy, you can find it by hovering over your account name on the upper right-hand side of the Azure portal, or by selecting **Switch directory**.
+2. Enter your **Tenant ID**. This is a GUID associated with the Azure AD tenant ID of your organization; that is, the managing tenant from which you will access your customers' resources. If you don't have this handy, you can find it by hovering over your account name on the upper right-hand side of the Azure portal, or by selecting **Switch directory**.
If you publish a new version of your offer and need to create an updated manifest, select **+ New manifest**. Be sure to increase the version number from the previous manifest version.
marketplace Plan Managed Service Offer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/plan-managed-service-offer.md
Previously updated : 12/06/2021 Last updated : 02/02/2022 # Plan a Managed Service offer This article introduces the requirements for publishing a Managed Service offer to the commercial marketplace using Partner Center.
-Managed Services are Azure Marketplace offers that enable cross-tenant and multi-tenant management with Azure Lighthouse. To learn more, see [What is Azure Lighthouse?](../lighthouse/overview.md) When a customer purchases a Managed Service offer, theyΓÇÖre able to delegate one or more subscription or resource group. You can then work on those resources by using the [Azure delegated resource management](../lighthouse/concepts/architecture.md) capabilities of Azure Lighthouse.
+Managed Services are Azure Marketplace offers that enable cross-tenant and multi-tenant management with Azure Lighthouse. To learn more, see [What is Azure Lighthouse?](../lighthouse/overview.md) When a customer purchases a Managed Service offer, they're able to delegate one or more subscriptions or resource groups. You can then work on those resources by using [Azure Lighthouse](../lighthouse/overview.md).
## Eligibility requirements
marketplace Power Bi Visual Properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/power-bi-visual-properties.md
Previously updated : 09/21/2021 Last updated : 02/02/2022 # Configure Power BI visual offer properties This page lets you define the [categories](./categories.md) used to group your offer on Microsoft AppSource, the legal contracts that support your offer, and support documentation.
-## General info
--- Select up to three **[Categories](./categories.md)** for grouping your offer into the appropriate marketplace search areas.-- Select up to two **Industries** industries which will be used to display your offer when customers filter their search on industries in the online store.
+## Categories
+
+Select up to three **[Categories](./categories.md)** for grouping your offer into the appropriate marketplace search areas. This table shows the categories that are available for Power BI Visuals.
+
+| Category | Description |
+| | - |
+| All | All the different types of visuals that are certified for use within your organization. |
+| Change over time | These visuals are used to display the changing trend of measures over time. |
+| Comparison | These visuals are used to compare categories by their measures. |
+| Correlation | These visuals show the degree to which two or more variables are correlated. |
+| Distribution | These visuals show how the values of a variable are distributed. |
+| Flow | These visuals show the dynamic relationships, or flow between variables. |
+| Infographics | These visuals present information graphically, so it's easier to understand. |
+| Maps | Visualize your data in map form. |
+| Part-to-Whole | These visuals are used to display the parts of a variable in relation to the whole. |
+| R visuals | These visuals require R script to run. |
+| KPI | These visuals are used to display key performance indicators. |
+| Filters | Narrow down the data within a report by using filters. |
+| Narratives | Use narratives to tell a story with text and data. |
+| Other | More specialized visuals to discover. |
+|||
+
+## Industries
+
+Select up to two **Industries**, which will be used to display your offer when customers filter their search by industry in the online store. This table shows the industries available for Power BI Visuals.
+
+| Industry |
+| |
+| Automotive |
+| Defense & Intelligence |
+| Distribution |
+| Education |
+| Energy |
+| Financial Services |
+| Government |
+| Healthcare |
+| Hospitality & Travel |
+| Manufacturing |
+| Media & Communications |
+| Nonprofit & IGO |
+| Professional services |
+| Retail |
+|||
## Legal and support info
Select **Save draft** before continuing to the next tab in the left-nav menu, **
## Next steps -- [**Offer listing**](power-bi-visual-offer-listing.md)
+- [**Offer listing**](power-bi-visual-offer-listing.md)
marketplace Test Publish Saas Offer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/test-publish-saas-offer.md
Previously updated : 09/27/2021 Last updated : 02/01/2022 # How to test and publish a SaaS offer to the commercial marketplace
Use the following steps to preview your offer.
1. On the **Offer overview** page, select a preview link under the **Go live** button.
-1. To validate the end-to-end purchase and setup flow, purchase the plans in your offer while it's in preview. First, notify Microsoft with a [support ticket](https://aka.ms/marketplacesupport) to ensure we don't process a charge.
+1. To validate the end-to-end purchase flow, purchase plans using the _preview URL_ generated during the _Publisher Sign off_ phase of publishing. Note that the customer account used for the purchase will be billed and invoiced. The publisher payout occurs when the [criteria](/partner-center/payment-thresholds-methods-timeframes) are met and is paid per the [payout schedule](/partner-center/payout-policy-details), with the agency fee deducted from the purchase price.
1. If your SaaS offer supports [metered billing using the commercial marketplace metering service](./partner-center-portal/saas-metered-billing.md), review and follow the testing best practices detailed in [Marketplace metered billing APIs](marketplace-metering-service-apis.md#development-and-testing-best-practices).
openshift Howto Restrict Egress https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/openshift/howto-restrict-egress.md
The following FQDN / application rules are required:
| **`registry.redhat.io`** | **HTTPS:443** | Mandatory for core add-ons. This is used by the cluster to download core components such as dev tools, operator-based add-ons, and Red Hat provided container images. | **`mirror.openshift.com`** | **HTTPS:443** | This is required in the VDI environment or your laptop to access mirrored installation content and images. It's required in the cluster to download platform release signatures to know what images to pull from quay.io. | | **`api.openshift.com`** | **HTTPS:443** | Required by the cluster to check if there are available updates before downloading the image signatures. |
-| **`arosvc.azurecr.io`** | **HTTPS:443** | Internal Private registry for ARO Operators. Required if you do not allow the service-endpoints Microsoft.ContainerRegistry on your subnets. |
+| **`arosvc.azurecr.io`** | **HTTPS:443** | Global Internal Private registry for ARO Operators. Required if you do not allow the service-endpoints Microsoft.ContainerRegistry on your subnets. |
+| **`arosvc.$REGION.data.azurecr.io`** | **HTTPS:443** | Regional Internal Private registry for ARO Operators. Required if you do not allow the service-endpoints Microsoft.ContainerRegistry on your subnets. |
| **`management.azure.com`** | **HTTPS:443** | This is used by the cluster to access Azure APIs. | | **`login.microsoftonline.com`** | **HTTPS:443** | This is used by the cluster for authentication to Azure. | | **`gcs.prod.monitoring.core.windows.net`** | **HTTPS:443** | This is used for Microsoft Geneva Monitoring so that the ARO team can monitor the customer's cluster(s). |
payment-hsm Certification Compliance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/payment-hsm/certification-compliance.md
+
+ Title: Azure Payment HSM certification and compliance
+description: Information on Azure Payment HSM certification and compliance
+++
+tags: azure-resource-manager
+++ Last updated : 01/25/2022+++
+# Certification and compliance
+
+Thales payShield 10K HSMs are certified to FIPS 140-2 Level 3 and PCI HSM v3.
+
+The Azure Payment HSM service is currently undergoing PCI DSS and PCI 3DS audit assessment.
+
+Azure Payment HSM can be deployed as part of a validated PCI P2PE and PCI PIN component or solution. Microsoft can provide evidence to help customers meet their P2PE and PIN certification requirements.
+
+## Next steps
+
+- Learn more about [Azure Payment HSM](overview.md)
+- See some common [deployment scenarios](deployment-scenarios.md)
+- Read the [frequently asked questions](faq.yml)
payment-hsm Deployment Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/payment-hsm/deployment-scenarios.md
+
+ Title: Azure Payment HSM deployment scenarios
+description: Azure HSM deployment scenarios for high availability deployment and disaster recovery deployment
+++
+tags: azure-resource-manager
+++ Last updated : 01/25/2022++++
+# Deployment scenarios
+
+Microsoft deploys payment hardware security modules (HSMs) in stamps within a region and across multiple regions to enable high availability (HA) and disaster recovery. In a region, HSMs are deployed across different stamps to prevent a single rack failure, and customers must provision two devices from two separate stamps to achieve high availability. For disaster recovery, customers must provision HSM devices in an alternative region.
+
+Thales doesn't provide customers with a payShield SDK that supports HA over a cluster (a collection of HSMs initialized with the same LMK). However, customers typically use the Thales payShield devices as stateless servers, so no synchronization is required between HSMs during application runtime. Customers handle HA in their own client; one implementation is to load balance between the healthy HSMs connected to the application. Customers are responsible for implementing high availability by provisioning multiple devices, load balancing across them, and backing up keys with any available backup mechanism.
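As a rough illustration of that client-side approach (an assumed design sketch, not a Thales or Microsoft implementation), the following example tries a list of HSM endpoints in order and returns the first healthy TCP connection. The host addresses and port are hypothetical placeholders.

```python
# Hedged sketch of client-side failover across two payment HSM endpoints.
# The host addresses and port below are hypothetical placeholders.
import socket

HSM_ENDPOINTS = [("10.0.1.4", 1500), ("10.0.2.4", 1500)]  # for example: stamp 1, stamp 2

def connect_to_healthy_hsm(endpoints, timeout=3.0):
    """Return a socket connected to the first reachable HSM, or raise if none respond."""
    last_error = None
    for host, port in endpoints:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError as err:
            last_error = err  # this HSM is unhealthy or unreachable; try the next one
    raise ConnectionError(f"No HSM endpoint is reachable: {last_error}")
```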
+
+## Recommended high availability deployment
++
+For high availability, customers must allocate HSMs between stamp 1 and stamp 2 (in other words, no two HSMs from the same stamp).
+
+## Recommended disaster recovery deployment
++
+This scenario caters to regional-level failure. The usual strategy is to completely switch the application stack (and its HSMs), rather than trying to reach an HSM in Region 2 from application in Region 1 due to latency.
+
+## Next steps
+
+- Learn more about [Azure Payment HSM](overview.md)
+- Find out how to [get started with Azure Payment HSM](getting-started.md)
+- Learn about [Certification and compliance](certification-compliance.md)
+- Read the [frequently asked questions](faq.yml)
payment-hsm Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/payment-hsm/getting-started.md
+
+ Title: Getting started with Azure Payment HSM
+description: Information to begin using Azure Payment HSM
+++
+tags: azure-resource-manager
+++ Last updated : 01/25/2022+++
+# Getting started with Azure Payment HSM
+
+To get started with Azure Payment HSM (preview), contact your Microsoft sales representative and request access [via email](mailto:paymentHSMRequest@microsoft.com). Upon approval, you'll be provided with onboarding instructions.
+
+## Availability
+
+The Azure Public Preview is currently available in **East US** and **North Europe**.
+
+## Prerequisites
+
+Azure Payment HSM customers must have:
+
+- Access to the Thales Customer Portal (Customer ID)
+- Thales smart cards and card reader for payShield Manager
+
+## Cost
+
+The HSM devices are charged based on the service pricing page. All other Azure resources, such as networking and virtual machines, incur regular Azure costs as well.
+
+## payShield customization considerations
+
+If you are using payShield on-premises today with custom firmware, a porting exercise is required to update the firmware to a version compatible with the Azure deployment. Contact your Thales account manager to request a quote.
+
+Ensure that the following information is provided:
+- Customization hardware platform (e.g., payShield 9000 or payShield 10K)
+- Customization firmware number
+
+## Support
+
+There is no service-level agreement (SLA) for this public preview. Use of this service for production workloads isn't supported.
+
+The HSM base firmware installed in public preview is Thales payShield 10K base software version 1.4a 1.8.3.
+
+Microsoft will provide support for hardware issues, networking issues, and provisioning issues. Support tickets can be created from the Azure portal. Select **Dedicated HSM** as the Service Type, and mention "payment HSM" in the summary field, with a severity case of B or C.
+
+Support through engineering escalation is only available during business hours: Monday - Friday, 9 AM - 5 PM PST.
+
+Thales provides application-level support, such as client software, HSM configuration, and backup.
+
+Customers are responsible for applying payShield security patches and upgrading payShield firmware for their provisioned HSMs. Thales payShield 10K versions prior to 1.4a 1.8.3 aren't supported.
+
+Microsoft will apply payShield security patches to unallocated HSMs.
+
+## Next steps
+
+- Learn more about [Azure Payment HSM](overview.md)
+- See some common [deployment scenarios](deployment-scenarios.md)
+- Learn about [Certification and compliance](certification-compliance.md)
+- Read the [frequently asked questions](faq.yml)
++
payment-hsm Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/payment-hsm/overview.md
+
+ Title: What is Azure Payment HSM?
+description: Learn how Azure Payment HSM is an Azure service that provide cryptographic key operations for real-time, critical payment transactions
++
+tags: azure-resource-manager
++++ Last updated : 01/20/2022++++
+# What is Azure Payment HSM?
+
+Azure Payment HSM Service is a "BareMetal" service delivered using [Thales payShield 10K payment hardware security modules (HSM)](https://cpl.thalesgroup.com/encryption/hardware-security-modules/payment-hsms/payshield-10k) to provide cryptographic key operations for real-time, critical payment transactions in the Azure cloud. Azure Payment HSM is designed specifically to help a service provider and an individual financial institution accelerate their payment system's digital transformation strategy and adopt the public cloud. It meets the most stringent security, audit compliance, low latency, and high-performance requirements by the Payment Card Industry (PCI).
+
+Payment HSMs are provisioned and connected directly to users' virtual network, and HSMs are under users' sole administration control. HSMs can be easily provisioned as a pair of devices and configured for high availability. Users of the service utilize [Thales payShield Manager](https://cpl.thalesgroup.com/encryption/hardware-security-modules/payment-hsms/payshield-manager) for secure remote access to the HSMs as part of their Azure-based subscription. Multiple subscription options are available to satisfy a broad range of performance and multiple application requirements that can be upgraded quickly in line with end-user business growth. The Azure Payment HSM service offers a top performance level of 2500 CPS.
+
+Azure Payment HSM is a highly specialized service. Therefore, we recommend that you fully understand the key concepts, including [pricing](https://azure.microsoft.com/services/azure-payment-hsm/) and [support](getting-started.md#support).
+
+## Why use Azure Payment HSM?
+
+Momentum is building as financial institutions move some or all of their payment applications to the cloud. This entails a migration from the legacy on-premises (on-prem) applications and HSMs to a cloud-based infrastructure that isn't generally under their direct control. Often it means a subscription service rather than perpetual ownership of physical equipment and software. Corporate initiatives for efficiency and a scaled-down physical presence are the drivers for this. Conversely, with cloud-native organizations, the adoption of cloud-first without any on-premise presence is their fundamental business model. Whatever the reason, end users of a cloud-based payment infrastructure expect reduced IT complexity, streamlined security compliance, and flexibility to scale their solution seamlessly as their business grows.
+
+The cloud offers significant benefits, but the challenges of migrating a legacy on-premises payment application (involving payment HSMs) to the cloud must be addressed. Some of these are:
+
+- Shared responsibility and trust: what potential loss of control in some areas is acceptable?
+- Latency: how can an efficient, high-performance link between the application and HSM be achieved?
+- Performing everything remotely: what existing processes and procedures may need to be adapted?
+- Security certifications and audit compliance: how will current stringent requirements be fulfilled?
+
+Azure Payment HSM addresses these challenges and delivers a compelling value proposition to users of the service through the following features.
+
+### Enhanced security and compliance
+
+End users of the service can leverage Microsoft security and compliance investments to increase their security posture. Microsoft maintains PCI DSS and PCI 3DS compliant Azure data centers, including those which house Azure Payment HSM solutions. The Azure Payment HSM solution can be deployed as part of a validated PCI P2PE / PCI PIN component or solution, helping to simplify ongoing security audit compliance. Thales payShield 10K HSMs deployed in the security infrastructure are certified to FIPS 140-2 Level 3 and PCI HSM v3.
+
+### Customer-managed HSM in Azure
+
+The Azure Payment HSM is a part of a subscription service that offers single-tenant HSMs for the service customer to have complete administrative control and exclusive access to the HSM. The customer could be a payment service provider acting on behalf of multiple financial institutions or a financial institution that wishes to directly access the Azure Payment HSM service. Once the HSM is allocated to a customer, Microsoft has no access to customer data. Likewise, when the HSM is no longer required, customer data is zeroized and erased as soon as the HSM is released to ensure complete privacy and security is maintained. The customer is responsible for ensuring sufficient HSM subscriptions are active to meet their requirements for backup, disaster recovery, and resilience to achieve the same performance available on their on-premise HSMs.
+
+### Accelerate digital transformation and innovation in cloud
+
+For existing Thales payShield customers wishing to add a cloud option, the Azure Payment HSM solution offers native access to a payment HSM in Azure for "lift and shift" while still experiencing the low latency they're accustomed to via their on-premise payShield HSMs. The solution also offers high-performance transactions for mission-critical payment applications. Consequently, customers can continue their digital transformation strategy by leveraging technology innovation in the cloud. Existing Thales payShield customers can utilize their existing remote management solutions (payShield Manager and payShield TMD together with associated smart card readers and smart cards as appropriate) to work with the Azure Payment HSM service. Customers new to payShield can source the hardware accessories from Thales or one of its partners before deploying their HSM as part of the subscription service.
+
+## Typical use cases
+
+With benefits including low latency and the ability to quickly add more HSM capacity as required, the cloud service is a perfect fit for a broad range of use cases, including:
+Payment processing
+- Card & mobile payment authorization
+- PIN & EMV cryptogram validation
+- 3D-Secure authentication
+
+Payment credential issuing
+- Cards
+- Mobile secure elements
+- Wearables
+- Connected devices
+- Host card emulation (HCE) applications
+
+Securing keys & authentication data
+- POS, mPOS & SPOC key management
+- Remote key loading (for ATM & POS/mPOS devices)
+- PIN generation & printing
+- PIN routing
+
+Sensitive data protection
+- Point-to-point encryption (P2PE)
+- Security tokenization (for PCI DSS compliance)
+- EMV payment tokenization
+
+## Suitable for both existing and new payment HSM users
+
+The solution provides clear benefits for both Payment HSM users with a legacy on-premises HSM footprint and those new payment ecosystem entrants with no legacy infrastructure to support and who may choose a cloud-native approach from the outset.
+
+Benefits for existing on-premises HSM users
+- Requires no modifications to payment applications or HSM software to migrate existing applications to the Azure solution
+- Enables more flexibility and efficiency in HSM utilization
+- Simplifies HSM sharing between multiple teams, geographically dispersed
+- Reduces physical HSM footprint in their legacy data centers
+- Improves cash flow for new projects
+
+Benefits for new payment participants
+- Avoids introduction of on-premises HSM infrastructure
+- Lowers upfront investment via the Azure subscription model
+- Offers access to latest certified hardware and software on-demand
+
+## Glossary
+
+| Term | Definition |
+|||
+| 3DS | 3D Secure |
+| ATM | Automated Teller Machine |
+| EMV | Europay Mastercard Visa |
+| FIPS | Federal Information Processing Standards |
+| HCE | Host Card Emulation |
+| HSM | Hardware Security Module |
+| mPOS | Mobile Point of Sale |
+| P2PE | Point-to-Point Encryption |
+| PCI | Payment Card Industry |
+| PIN | Personal Identification Number |
+| POS | Point of Sale |
+| SPOC | Software-based PIN Entry on Commercial off the Shelf (COTS) Solutions |
+| TMD | payShield Trusted Management Device |
+
+## Next steps
+
+- Learn more about [Azure Payment HSM](overview.md)
+- Find out how to [get started with Azure Payment HSM](getting-started.md)
+- See some common [deployment scenarios](deployment-scenarios.md)
+- Learn about [Certification and compliance](certification-compliance.md)
+- Read the [frequently asked questions](faq.yml)
purview Azure Purview Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/azure-purview-connector-overview.md
+
+ Title: Azure Purview supported data sources and file types
+description: This article provides details about supported data sources, file types, and functionalities in Azure Purview.
+++++ Last updated : 01/24/2022+++
+# Supported data sources and file types
+
+This article discusses currently supported data sources, file types, and scanning concepts in Azure Purview.
+
+## Azure Purview data sources
+
+The table below shows the supported capabilities for each data source. Select the data source, or the feature, to learn more.
+
+|**Category**| **Data Store** |**Technical metadata** |**Classification** |**Lineage** | **Access Policy** |
+|||||||
+| Azure | [Azure Blob Storage](register-scan-azure-blob-storage-source.md)| [Yes](register-scan-azure-blob-storage-source.md#register) | [Yes](register-scan-azure-blob-storage-source.md#scan)| Limited* | [Yes](how-to-access-policies-storage.md) |
+|| [Azure Cosmos DB](register-scan-azure-cosmos-database.md)| [Yes](register-scan-azure-cosmos-database.md#register) | [Yes](register-scan-azure-cosmos-database.md#scan)|No*|No|
+|| [Azure Data Explorer](register-scan-azure-data-explorer.md)| [Yes](register-scan-azure-data-explorer.md#register) | [Yes](register-scan-azure-data-explorer.md#scan)| No* | No |
+|| [Azure Data Lake Storage Gen1](register-scan-adls-gen1.md)| [Yes](register-scan-adls-gen1.md#register) | [Yes](register-scan-adls-gen1.md#scan)| Limited* | No |
+|| [Azure Data Lake Storage Gen2](register-scan-adls-gen2.md)| [Yes](register-scan-adls-gen2.md#register) | [Yes](register-scan-adls-gen2.md#scan)| Limited* | [Yes](how-to-access-policies-storage.md) |
+|| [Azure Database for MySQL](register-scan-azure-mysql-database.md) | [Yes](register-scan-azure-mysql-database.md#register) | [Yes](register-scan-azure-mysql-database.md#scan) | No* | No |
+|| [Azure Database for PostgreSQL](register-scan-azure-postgresql.md) | [Yes](register-scan-azure-postgresql.md#register) | [Yes](register-scan-azure-postgresql.md#scan) | No* | No |
+|| [Azure Dedicated SQL pool (formerly SQL DW)](register-scan-azure-synapse-analytics.md)| [Yes](register-scan-azure-synapse-analytics.md#register) | [Yes](register-scan-azure-synapse-analytics.md#scan)| No* | No |
+|| [Azure Files](register-scan-azure-files-storage-source.md)|[Yes](register-scan-azure-files-storage-source.md#register) | [Yes](register-scan-azure-files-storage-source.md#scan) | Limited* | No |
+|| [Azure SQL Database](register-scan-azure-sql-database.md)| [Yes](register-scan-azure-sql-database.md#register) |[Yes](register-scan-azure-sql-database.md#scan)| No* | No |
+|| [Azure SQL Database Managed Instance](register-scan-azure-sql-database-managed-instance.md)| [Yes](register-scan-azure-sql-database-managed-instance.md#scan) | [Yes](register-scan-azure-sql-database-managed-instance.md#scan) | No* | No |
+|| [Azure Synapse Analytics (Workspace)](register-scan-synapse-workspace.md)| [Yes](register-scan-synapse-workspace.md#register) | [Yes](register-scan-synapse-workspace.md#scan)| [Yes - Synapse pipelines](how-to-lineage-azure-synapse-analytics.md)| No|
+|Database| [Amazon RDS](register-scan-amazon-rds.md) | [Yes](register-scan-amazon-rds.md#register-an-amazon-rds-data-source) | [Yes](register-scan-amazon-rds.md#scan-an-amazon-rds-database) | No | No |
+|| [Cassandra](register-scan-cassandra-source.md)|[Yes](register-scan-cassandra-source.md#register) | No | [Yes](register-scan-cassandra-source.md#lineage)| No|
+|| [Db2](register-scan-db2.md) | [Yes](register-scan-db2.md#register) | No | [Yes](register-scan-db2.md#lineage) | No |
+|| [Google BigQuery](register-scan-google-bigquery-source.md)| [Yes](register-scan-google-bigquery-source.md#register)| No | [Yes](register-scan-google-bigquery-source.md#lineage)| No|
+|| [Hive Metastore Database](register-scan-hive-metastore-source.md) | [Yes](register-scan-hive-metastore-source.md#register) | No | [Yes*](register-scan-hive-metastore-source.md#lineage) | No|
+|| [MySQL](register-scan-mysql.md) | [Yes](register-scan-mysql.md#register) | No | [Yes](register-scan-mysql.md#scan) | No |
+|| [Oracle](register-scan-oracle-source.md) | [Yes](register-scan-oracle-source.md#register)| No | [Yes*](register-scan-oracle-source.md#lineage) | No|
+|| [PostgreSQL](register-scan-postgresql.md) | [Yes](register-scan-postgresql.md#register) | No | [Yes](register-scan-postgresql.md#lineage) | No |
+|| [SAP HANA](register-scan-sap-hana.md) | [Yes](register-scan-sap-hana.md#register) | No | No | No |
+|| [Snowflake](register-scan-snowflake.md) | [Yes](register-scan-snowflake.md#register) | No | [Yes](register-scan-snowflake.md#lineage) | No |
+|| [SQL Server](register-scan-on-premises-sql-server.md)| [Yes](register-scan-on-premises-sql-server.md#register) |[Yes](register-scan-on-premises-sql-server.md#scan) | No* | No|
+|| [Teradata](register-scan-teradata-source.md)| [Yes](register-scan-teradata-source.md#register)| No | [Yes*](register-scan-teradata-source.md#lineage) | No|
+|File|[Amazon S3](register-scan-amazon-s3.md)|[Yes](register-scan-amazon-s3.md)| [Yes](register-scan-amazon-s3.md)| Limited* | No|
+|Services and apps| [Erwin](register-scan-erwin-source.md)| [Yes](register-scan-erwin-source.md#register)| No | [Yes](register-scan-erwin-source.md#lineage)| No|
+|| [Looker](register-scan-looker-source.md)| [Yes](register-scan-looker-source.md#register)| No | [Yes](register-scan-looker-source.md#lineage)| No|
+|| [Power BI](register-scan-power-bi-tenant.md)| [Yes](register-scan-power-bi-tenant.md#register)| No | [Yes](how-to-lineage-powerbi.md)| No|
+|| [Salesforce](register-scan-salesforce.md) | [Yes](register-scan-salesforce.md#register) | No | No | No |
+|| [SAP ECC](register-scan-sapecc-source.md)| [Yes](register-scan-sapecc-source.md#register) | No | [Yes*](register-scan-sapecc-source.md#lineage) | No|
+|| [SAP S/4HANA](register-scan-saps4hana-source.md) | [Yes](register-scan-saps4hana-source.md#register)| No | [Yes*](register-scan-saps4hana-source.md#lineage) | No|
+
+\* Besides the lineage on assets within the data source, lineage is also supported if the dataset is used as a source/sink in [Data Factory](how-to-link-azure-data-factory.md) or a [Synapse pipeline](how-to-lineage-azure-synapse-analytics.md).
+
+> [!NOTE]
+> Currently, Azure Purview can't scan an asset that has `/`, `\`, or `#` in its name. To scope your scan and avoid scanning assets that have those characters in the asset name, use the example in [Register and scan an Azure SQL Database](register-scan-azure-sql-database.md#creating-the-scan).
+
+## Scan regions
+The following is a list of all the Azure data source (data center) regions where the Azure Purview scanner runs. If your Azure data source is in a region outside of this list, the scanner will run in the region of your Azure Purview instance.
+
+### Azure Purview scanner regions
+
+- Australia East
+- Australia Southeast
+- Brazil South
+- Canada Central
+- Central India
+- Central US
+- East Asia
+- East US
+- East US 2
+- France Central
+- Japan East
+- Korea Central
+- North Central US
+- North Europe
+- South Africa North
+- South Central US
+- Southeast Asia
+- UAE North
+- UK South
+- West Central US
+- West Europe
+- West US
+- West US 2
+
+## File types supported for scanning
+
+The following file types are supported for scanning, for schema extraction, and classification where applicable:
+
+- Structured file formats supported by extension: AVRO, ORC, PARQUET, CSV, JSON, PSV, SSV, TSV, TXT, XML, GZIP
+ > [!Note]
+ > * Azure Purview scanner only supports schema extraction for the structured file types listed above.
+ > * For AVRO, ORC, and PARQUET file types, Azure Purview scanner does not support schema extraction for files that contain complex data types (for example, MAP, LIST, STRUCT).
+ > * Azure Purview scanner supports scanning snappy compressed PARQUET types for schema extraction and classification.
+ > * For GZIP file types, the GZIP must be mapped to a single csv file within.
+ > Gzip files are subject to System and Custom Classification rules. We currently don't support scanning a gzip file mapped to multiple files within, or any file type other than csv.
+ > * For delimited file types(CSV, PSV, SSV, TSV, TXT), we do not support data type detection. The data type will be listed as "string" for all columns.
+- Document file formats supported by extension: DOC, DOCM, DOCX, DOT, ODP, ODS, ODT, PDF, POT, PPS, PPSX, PPT, PPTM, PPTX, XLC, XLS, XLSB, XLSM, XLSX, XLT
+- Azure Purview also supports custom file extensions and custom parsers.
+
+## Nested data
+
+Currently, nested data is only supported for JSON content.
+
+For all [system supported file types](#file-types-supported-for-scanning), if there is nested JSON content in a column, then the scanner parses the nested JSON data and surfaces it within the schema tab of the asset.
+
+Nested data, or nested schema parsing, is not supported in SQL. A column with nested data will be reported and classified as is, and subdata will not be parsed.
+
+## Sampling within a file
+
+In Azure Purview terminology,
+- L1 scan: Extracts basic information and metadata like file name, size, and fully qualified name
+- L2 scan: Extracts schema for structured file types and database tables
+- L3 scan: Extracts schema where applicable and subjects the sampled file to system and custom classification rules
+
+For all structured file formats, Azure Purview scanner samples files in the following way:
+
+- For structured file types, it samples the top 128 rows in each column or the first 1 MB, whichever is lower.
+- For document file formats, it samples the first 20 MB of each file.
+ - If a document file is larger than 20 MB, then it is not subject to a deep scan (subject to classification). In that case, Azure Purview captures only basic meta data like file name and fully qualified name.
+- For **tabular data sources (SQL, Cosmos DB)**, it samples the top 128 rows.
+
+## Resource set file sampling
+
+A folder or group of partition files is detected as a *resource set* in Azure Purview, if it matches with a system resource set policy or a customer defined resource set policy. If a resource set is detected, then Azure Purview will sample each folder that it contains. Learn more about resource sets [here](concept-resource-sets.md).
+
+File sampling for resource sets by file types:
+
+- **Delimited files (CSV, PSV, SSV, TSV)** - 1 in 100 files are sampled (L3 scan) within a folder or group of partition files that are considered a 'Resource set'
+- **Data Lake file types (Parquet, Avro, Orc)** - 1 in 18446744073709551615 (long max) files are sampled (L3 scan) within a folder or group of partition files that are considered a *resource set*
+- **Other structured file types (JSON, XML, TXT)** - 1 in 100 files are sampled (L3 scan) within a folder or group of partition files that are considered a 'Resource set'
+- **SQL objects and CosmosDB entities** - Each file is L3 scanned.
+- **Document file types** - Each file is L3 scanned. Resource set patterns don't apply to these file types.
+
+## Classification
+
+All 206 system classification rules apply to structured file formats. Only the MCE classification rules apply to document file types (not the data scan native regex patterns or bloom filter-based detection). For more information on supported classifications, see [Supported classifications in Azure Purview](supported-classifications.md).
+
+## Next steps
+
+- [Register and scan Azure Blob storage source](register-scan-azure-blob-storage-source.md)
+- [Scans and ingestion in Azure Purview](concept-scans-and-ingestion.md)
+- [Manage data sources in Azure Purview](manage-data-sources.md)
purview Catalog Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-conditional-access.md
The following steps show how to configure Azure Purview to enforce a Conditional
## Prerequisites -- When multi-factor authentication is enabled, to login to Azure Purview Studio, you must perform multi-factor authentication.
+- When multi-factor authentication is enabled, to sign in to Azure Purview Studio, you must perform multi-factor authentication.
## Configure conditional access
The following steps show how to configure Azure Purview to enforce a Conditional
:::image type="content" source="media/catalog-conditional-access/conditional-access-blade.png" alt-text="Screenshot that shows Conditional Access blade"lightbox="media/catalog-conditional-access/conditional-access-blade.png":::
-2. In the **Conditional Access-Policies** blade, click **New policy**, provide a name, and then click **Configure rules**.
-3. Under **Assignments**, select **Users and groups**, check **Select users and groups**, and then select the user or group for Conditional Access. Click **Select**, and then click **Done** to accept your selection.
+1. In the **Conditional Access-Policies** menu, select **New policy**, provide a name, and then select **Configure rules**.
+1. Under **Assignments**, select **Users and groups**, check **Select users and groups**, and then select the user or group for Conditional Access. Select **Select**, and then select **Done** to accept your selection.
:::image type="content" source="media/catalog-conditional-access/select-users-and-groups.png" alt-text="Screenshot that shows User and Group selection"lightbox="media/catalog-conditional-access/select-users-and-groups.png":::
-4. Select **Cloud apps**, click **Select apps**. You see all apps available for Conditional Access. Select **Azure Purview**, at the bottom click **Select**, and then click **Done**.
-
+1. Select **Cloud apps**, select **Select apps**. You see all apps available for Conditional Access. Select **Azure Purview**, at the bottom select **Select**, and then select **Done**.
+
:::image type="content" source="media/catalog-conditional-access/select-azure-purview.png" alt-text="Screenshot that shows Applications selection"lightbox="media/catalog-conditional-access/select-azure-purview.png":::
-5. Select **Access controls**, select **Grant**, and then check the policy you want to apply. For this example, we select **Require multi-factor authentication**.
+1. Select **Access controls**, select **Grant**, and then check the policy you want to apply. For this example, we select **Require multi-factor authentication**.
:::image type="content" source="media/catalog-conditional-access/grant-access.png" alt-text="Screenshot that shows Grant access tab"lightbox="media/catalog-conditional-access/grant-access.png":::
-6. Set **Enable policy** to **On** and click **Create**.
+1. Set **Enable policy** to **On** and select **Create**.
## Next steps -- [Use Azure Purview Studio](/use-purview-studio.md)
+- [Use Azure Purview Studio](use-azure-purview-studio.md)
purview Catalog Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-permissions.md
Title: Understand access and permissions
-description: This article gives an overview permissions, access control, and collections in Azure Purview. Role-based access control (RBAC) is managed within Azure Purview itself, so this guide will cover the basics to secure your information.
+description: This article gives an overview of permissions, access control, and collections in Azure Purview. Role-based access control (RBAC) is managed within Azure Purview itself, so this guide will cover the basics to secure your information.
Azure Purview uses **Collections** to organize and manage access across its sour
## Collections
-A collection is a tool Azure Purview uses to group assets, sources, and other artifacts into a hierarchy for discoverability and to manage access control. All access to Azure Purview's resources are managed from collections in the Azure Purview account itself.
+A collection is a tool Azure Purview uses to group assets, sources, and other artifacts into a hierarchy for discoverability and to manage access control. All access to Azure Purview's resources is managed from collections in the Azure Purview account itself.
> [!NOTE] > As of November 8th, 2021, ***Insights*** is accessible to Data Curators. Data Readers do not have access to Insights.
All other users can only access information within the Azure Purview account if
Users can only be added to a collection by a collection admin, or through permissions inheritance. The permissions of a parent collection are automatically inherited by its subcollections. However, you can choose to [restrict permission inheritance](how-to-create-and-manage-collections.md#restrict-inheritance) on any collection. If you do this, its subcollections will no longer inherit permissions from the parent and will need to be added directly, though collection admins that are automatically inherited from a parent collection can't be removed.
-You can assign Azure Purview roles to users, security groups and service principals from your Azure Active Directory which is associated with your purview account's subscription.
+You can assign Azure Purview roles to users, security groups, and service principals from the Azure Active Directory tenant that is associated with your Azure Purview account's subscription.
## Assign permissions to your users
For full instructions, see our [how-to guide for adding role assignments](how-to
Now that you have a base understanding of collections, and access control, follow the guides below to create and manage those collections, or get started with registering sources into your Azure Purview Resource. - [How to create and manage collections](how-to-create-and-manage-collections.md)-- [Azure Purview supported data sources](purview-connector-overview.md)
+- [Azure Purview supported data sources](azure-purview-connector-overview.md)
purview Concept Best Practices Classification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-best-practices-classification.md
Here are some considerations to bear in mind as you're defining classifications:
* Describe the phases in the data preparation processes (raw zone, landing zone, and so on) and assign the classifications to specific assets to mark the phase in the process. * With Azure Purview, you can assign classifications at the asset or column level automatically by including relevant classifications in the scan rule, or you can assign them manually after you ingest the metadata into Azure Purview.
-* For automatic assignment, see [Supported data stores in Azure Purview](./purview-connector-overview.md).
+* For automatic assignment, see [Supported data stores in Azure Purview](./azure-purview-connector-overview.md).
* Before you scan your data sources in Azure Purview, it is important to understand your data and configure the appropriate scan rule set for it (for example, by selecting relevant system classification, custom classifications, or a combination of both), because it could affect your scan performance. For more information, see [Supported classifications in Azure Purview](./supported-classifications.md). * The Azure Purview scanner applies data sampling rules for deep scans (subject to classification) for both system and custom classifications. The sampling rule is based on the type of data sources. For more information, see the "Sampling within a file" section in [Supported data sources and file types in Azure Purview](./sources-and-scans.md#sampling-within-a-file).
purview Concept Best Practices Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-best-practices-security.md
This article provides best practices for common security requirements in Azure P
:::image type="content" source="media/concept-best-practices/security-defense-in-depth.png" alt-text="Screenshot that shows defense in depth in Azure Purview." :::
-Before applying these recommendations to your environment, you should consult your security team as some may not be applicable to your security requirements.
+Before applying these recommendations to your environment, you should consult your security team as some may not be applicable to your security requirements.
-## Network security
+## Network security
-Azure Purview is a Platform as a Service (PaaS) solution in Azure. You can enable the following network security capabilities for your Azure Purview accounts:
+Azure Purview is a Platform as a Service (PaaS) solution in Azure. You can enable the following network security capabilities for your Azure Purview accounts:
-- Enable [end-to-end network isolation](catalog-private-link-end-to-end.md) using Private Link Service.
+- Enable [end-to-end network isolation](catalog-private-link-end-to-end.md) using Private Link Service.
- Use [Azure Purview Firewall](catalog-private-link-end-to-end.md#firewalls-to-restrict-public-access) to disable Public access.-- Deploy [Network Security Group (NSG) rules](#use-network-security-groups) for subnets where Azure data sources private endpoints, Azure Purview private endpoints and self-hosted runtime VMs are deployed.
+- Deploy [Network Security Group (NSG) rules](#use-network-security-groups) for subnets where Azure data source private endpoints, Azure Purview private endpoints, and self-hosted runtime VMs are deployed.
- Implement Azure Purview with private endpoints managed by a Network Virtual Appliance, such as [Azure Firewall](../firewall/overview.md) for network inspection and network filtering. :::image type="content" source="media/concept-best-practices/security-networking.png" alt-text="Screenshot that shows Azure Purview account in a network."lightbox="media/concept-best-practices/security-networking.png":::
The Azure Purview _account_ private endpoint is used to add another layer of sec
The Azure Purview _portal_ private endpoint is required to enable connectivity to Azure Purview Studio using a private network.
-Azure Purview can scan data sources in Azure or an on-premises environment by using ingestion private endpoints.
+Azure Purview can scan data sources in Azure or an on-premises environment by using ingestion private endpoints.
- For scanning Azure _platform as a service_ data sources, review [Support matrix for scanning data sources through ingestion private endpoint](catalog-private-link.md#support-matrix-for-scanning-data-sources-through-ingestion-private-endpoint). - If you are deploying Azure Purview with end-to-end network isolation, to scan Azure data sources, these data sources must be also configured with private endpoints.
You can disable Azure Purview Public access to cut off access to the Azure Purvi
- Review [known limitations](catalog-private-link-troubleshoot.md). - To scan Azure platform as a service data sources, review [Support matrix for scanning data sources through ingestion private endpoint](catalog-private-link.md#support-matrix-for-scanning-data-sources-through-ingestion-private-endpoint). - Azure data sources must be also configured with private endpoints.-- To scan data sources you must use a self-hosted integration runtime.
+- To scan data sources, you must use a self-hosted integration runtime.
For more information, see [Firewalls to restrict public access](catalog-private-link-end-to-end.md#firewalls-to-restrict-public-access).
The following NSG rules are required on **self-hosted integration runtime VMs**
|Outbound | Self-hosted integration runtime VMs' private IP addresses or subnets | * | Service tag: `KeyVault` | 443 | Any | Allow |
-The following NSG rules are required on for **Azure Purview account, portal and ingestion private endpoints**:
+The following NSG rules are required on for **Azure Purview account, portal and ingestion private endpoints**:
|Direction |Source |Source port range |Destination |Destination port |Protocol |Action | |||||||| |Inbound | Self-hosted integration runtime VMs' private IP addresses or subnets | * | Azure Purview account and ingestion private endpoint IP addresses or subnets | 443 | Any | Allow | |Inbound | Management machines' private IP addresses or subnets | * | Azure Purview account and ingestion private endpoint IP addresses or subnets | 443 | Any | Allow |
-For more information, see [Self-hosted integration runtime networking requirements](manage-integration-runtimes.md#networking-requirements).
+For more information, see [Self-hosted integration runtime networking requirements](manage-integration-runtimes.md#networking-requirements).
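As a hedged sketch only (the subscription, resource group, NSG name, rule name, and address prefixes are assumed placeholders), one of the inbound 443 rules from the table above could be created with the azure-mgmt-network SDK roughly like this:

```python
# Hedged sketch: create one inbound 443 NSG rule from the table above with azure-mgmt-network.
# Subscription ID, resource group, NSG name, and address prefixes are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"
network_client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

poller = network_client.security_rules.begin_create_or_update(
    resource_group_name="<resource-group>",
    network_security_group_name="<nsg-name>",
    security_rule_name="AllowShirToPurviewPrivateEndpoints",
    security_rule_parameters={
        "direction": "Inbound",
        "access": "Allow",
        "protocol": "*",
        "priority": 200,
        "source_address_prefix": "<self-hosted-integration-runtime-subnet>",
        "source_port_range": "*",
        "destination_address_prefix": "<purview-private-endpoint-subnet>",
        "destination_port_range": "443",
    },
)
print(poller.result().name)  # rule is created or updated when the poller completes
```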
-## Access management
+## Access management
Identity and Access Management provides the basis of a large percentage of security assurance. It enables access based on identity authentication and authorization controls in cloud services. These controls protect data and resources and decide which requests should be permitted.
-Related to roles and access management in Azure Purview, you can apply the following security best practices:
+Related to roles and access management in Azure Purview, you can apply the following security best practices:
- Define roles and responsibilities to manage Azure Purview in control plane and data plane: - Define roles and tasks required to deploy and manage Azure Purview inside an Azure subscription. - Define roles and task needed to perform data management and governance using Azure Purview. -- Assign roles to Azure Active Directory groups instead of assigning roles to individual users.
+- Assign roles to Azure Active Directory groups instead of assigning roles to individual users.
- Use Azure [Active Directory Entitlement Management](../active-directory/governance/entitlement-management-overview.md) to map user access to Azure AD groups using Access Packages. - Enforce multi-factor authentication for Azure Purview users, especially, for users with privileged roles such as collection admins, data source admins or data curators.
-### Manage an Azure Purview account in control plane and data plane
+### Manage an Azure Purview account in control plane and data plane
Control plane refers to all operations related to Azure deployment and management of Azure Purview inside Azure Resource Manager.
-Data plane refers to all operations, related to interacting with Azure Purview inside Data Map and Data Catalog.
+Data plane refers to all operations, related to interacting with Azure Purview inside Data Map and Data Catalog.
-You can assign control plane and data plane roles to users, security groups and service principals from your Azure Active Directory tenant which is associated to Azure Purview instance's Azure subscription.
+You can assign control plane and data plane roles to users, security groups, and service principals from the Azure Active Directory tenant that is associated with the Azure Purview instance's Azure subscription.
Examples of control plane operations and data plane operations: |Task |Scope |Recommended role |What roles to use? | ||||| |Deploy an Azure Purview account | Control plane | Azure subscription owner or contributor | Azure RBAC roles |
-|Setup a Private Endpoint for Azure Purview | Control plane | Contributor  | Azure RBAC roles |
+|Set up a Private Endpoint for Azure Purview | Control plane | Contributor  | Azure RBAC roles |
|Delete an Azure Purview account | Control plane | Contributor | Azure RBAC roles |
|View Azure Purview metrics to get current capacity units | Control plane | Reader | Azure RBAC roles |
|Create a collection | Data plane | Collection Admin | Azure Purview roles |
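
For the control-plane rows above, access is granted through Azure RBAC. The following sketch (illustrative only; the principal object ID and scope are placeholders, and the GUID shown is the well-known built-in Contributor role definition, which you should verify for your environment) assigns Contributor at subscription scope with the Azure SDK for Python:

```python
import uuid
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<subscription-id>"
auth_client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

scope = f"/subscriptions/{subscription_id}"  # subscription-level assignment
contributor_role = (
    f"{scope}/providers/Microsoft.Authorization/roleDefinitions/"
    "b24988ac-6180-42a0-ab88-20f7382dd24c"   # built-in Contributor role
)

# Grant Contributor so the principal can deploy or delete Azure Purview accounts.
auth_client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # role assignment name must be a new GUID
    RoleAssignmentCreateParameters(
        role_definition_id=contributor_role,
        principal_id="<object-id-of-an-azure-ad-group>",
    ),
)
```

Data plane roles such as Collection Admin are assigned on collections inside Azure Purview rather than through Azure RBAC.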
Follow [Azure role-based access recommendations](../role-based-access-control/be
To gain access to Azure Purview, users must be authenticated and authorized. Authentication is the process of proving the user is who they claim to be. Authorization refers to controlling access inside Azure Purview assigned on collections.
-We use Azure Active Directory to provide authentication and authorization mechanisms for Azure Purview inside Collections. You can assign Azure Purview roles to the following security principals from your Azure Active Directory tenant which is associated with Azure subscription where your Azure Purview instance is hosted:
+We use Azure Active Directory to provide authentication and authorization mechanisms for Azure Purview inside Collections. You can assign Azure Purview roles to the following security principals from the Azure Active Directory tenant that is associated with the Azure subscription where your Azure Purview instance is hosted:
- Users and guest users (if they are already added into your Azure AD tenant)
- Security groups
For more information, see [Integrate Azure Purview with Azure security products]
### Secure metadata extraction and storage
-Azure Purview is a data governance solution in cloud. You can register and scan different data sources from various data systems from your on-premises, Azure, or multi-cloud environments into Azure Purview. While data source is registered and scanned in Azure Purview, the actual data and data sources stay in their original locations, only metadata is extracted from data sources and stored in Azure Purview Data Map which means, you do not need to move data out of the region or their original location to extract the metadata into Azure Purview.
+Azure Purview is a data governance solution in the cloud. You can register and scan different data sources from various data systems from your on-premises, Azure, or multi-cloud environments into Azure Purview. When a data source is registered and scanned in Azure Purview, the actual data and data sources stay in their original locations; only metadata is extracted from data sources and stored in the Azure Purview Data Map. This means you do not need to move data out of the region or its original location to extract the metadata into Azure Purview.
-When an Azure Purview account is deployed, in addition, a managed resource group is also deployed in your Azure subscription. A managed Azure Storage Account and a Managed Event Hub are deployed inside this resource group. The managed storage account is used to ingest metadata from data sources during the scan. Since these resources are consumed by the Azure Purview they cannot be accessed by any other users or principals, except the Azure Purview account. This is because an Azure role-based access control (RBAC) deny assignment is added automatically for all principals to this resource group at the time of Azure Purview account deployment, preventing any CRUD operations on these resources if they are not initiated from Azure Purview.
+When an Azure Purview account is deployed, a managed resource group is also deployed in your Azure subscription. A managed Azure Storage account and a managed Event Hubs namespace are deployed inside this resource group. The managed storage account is used to ingest metadata from data sources during the scan. Because these resources are consumed by Azure Purview, they cannot be accessed by any other users or principals, except the Azure Purview account. This is because an Azure role-based access control (RBAC) deny assignment is added automatically for all principals to this resource group at the time of Azure Purview account deployment, preventing any CRUD operations on these resources if they are not initiated from Azure Purview.
### Where is metadata stored?
For more information, see [Encrypt sensitive data at rest](/security/benchmark/a
## Credential management
-To extract metadata from a data source system into Azure Purview Data Map, it is required to register and scan the data source systems in Azure Purview Data Map. To automate this process, we have made available [connectors](purview-connector-overview.md) for different data source systems in Azure Purview to simplify the registration and scanning process.
+To extract metadata from a data source system into the Azure Purview Data Map, you must register and scan the data source system. To automate this process, we have made available [connectors](azure-purview-connector-overview.md) for different data source systems in Azure Purview to simplify the registration and scanning process.
To connect to a data source, Azure Purview requires a credential with read-only access to the data source system.
As a general rule, you can use the following options to set up integration runti
|Multi-cloud | Azure runtime or self-hosted integration runtime based on data source types | Supported credential options vary based on data source types |
|Power BI tenant | Azure Runtime | Azure Purview Managed Identity |
-Use [this guide](purview-connector-overview.md) to read more about each connector and their supported authentication options.
+Use [this guide](azure-purview-connector-overview.md) to read more about each connector and their supported authentication options.
-## Additional recommendations
+## Other recommendations
### Define required number of Azure Purview accounts for your organization
For self-hosted integration runtime VMs deployed as virtual machines in Azure, f
- Lock down inbound traffic to your VMs using Network Security Groups and [Azure Defender access Just-in-Time](../defender-for-cloud/just-in-time-access-usage.md).
- Install antivirus or antimalware.
- Deploy Azure Defender to get insights around any potential anomaly on the VMs.
-- Limit the number of software in the self-hosted integration runtime VMs. Although it is not a mandatory requirement to have a dedicated VM for a self-hosted runtime for Azure Purview, we highly suggest using dedicated VMs especially for production environments.
-- Monitor the VMs using [Azure Monitor for VMs](../azure-monitor/vm/vminsights-overview.md). By using Log analytics agent you can capture telemetry such as performance metrics to adjust required capacity for your VMs.
-- By integrating virtual machines with Microsoft Defender for Cloud, you can you prevent, detect, and respond to threats .
+- Limit the amount of software in the self-hosted integration runtime VMs. Although it is not a mandatory requirement to have a dedicated VM for a self-hosted runtime for Azure Purview, we highly suggest using dedicated VMs especially for production environments.
+- Monitor the VMs using [Azure Monitor for VMs](../azure-monitor/vm/vminsights-overview.md). By using the Log Analytics agent, you can capture data such as performance metrics to adjust the required capacity for your VMs.
+- By integrating virtual machines with Microsoft Defender for Cloud, you can prevent, detect, and respond to threats.
- Keep your machines current. You can enable Automatic Windows Update or use [Update Management in Azure Automation](../automation/update-management/overview.md) to manage operating system level updates for the OS.
-- Use multiple machines for greater resilience and availability. You can deploy and register multiple self-hosted integration runtime to distribute the scans across multiple self-hosted integration runtime machines or deploy the self-hosted integration runtime on a Virtual Machine Scale Set for higher redundancy and scalability.
-- Optionally, you can plan to enable Azure backup from your self-hosted integration runtime VMs to increase the recovery time of a self-hosted integration runtime VM in case of a VM level disaster.
+- Use multiple machines for greater resilience and availability. You can deploy and register multiple self-hosted integration runtimes to distribute the scans across multiple self-hosted integration runtime machines or deploy the self-hosted integration runtime on a Virtual Machine Scale Set for higher redundancy and scalability.
+- Optionally, you can plan to enable Azure backup from your self-hosted integration runtime VMs to increase the recovery time of a self-hosted integration runtime VM if there is a VM level disaster.
## Next steps

- [Azure Purview accounts architectures and best practices](concept-best-practices-accounts.md)
purview Concept Classification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-classification.md
Custom classification rules can be based on a *regular expression* pattern or *d
* [Read about classification best practices](concept-best-practices-classification.md)
* [Create custom classifications](create-a-custom-classification-and-classification-rule.md)
* [Apply classifications](apply-classifications.md)
-* [Use the Azure Purview Studio](use-purview-studio.md)
+* [Use the Azure Purview Studio](use-azure-purview-studio.md)
purview Concept Data Lineage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-data-lineage.md
Lineage is a critical feature of the Azure Purview Data Catalog to support quali
* [Quickstart: Create an Azure Purview account in the Azure portal](create-catalog-portal.md) * [Quickstart: Create an Azure Purview account using Azure PowerShell/Azure CLI](create-catalog-powershell.md)
-* [Use the Azure Purview Studio](use-purview-studio.md)
+* [Use the Azure Purview Studio](use-azure-purview-studio.md)
purview Create Azure Purview Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/create-azure-purview-dotnet.md
+
+ Title: 'Quickstart: Create Azure Purview Account using .NET SDK'
+description: Create an Azure Purview Account using .NET SDK.
+++
+ms.devlang: csharp
+ Last updated : 09/27/2021++
+# Quickstart: Create an Azure Purview account using .NET SDK
+
+In this quickstart, you'll use the [.NET SDK](/dotnet/api/overview/azure/purviewresourceprovider) to create an Azure Purview account.
+
+Azure Purview is a data governance service that helps you manage and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, Azure Purview creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end-to-end lineage. Data consumers are able to discover data across your organization, and data administrators are able to audit, secure, and ensure right use of your data.
+
+For more information about Azure Purview, [see our overview page](overview.md). For more information about deploying Azure Purview across your organization, [see our deployment best practices](deployment-best-practices.md).
++
+### Visual Studio
+
+The walkthrough in this article uses Visual Studio 2019. The procedures for Visual Studio 2013, 2015, or 2017 may differ slightly.
+
+### Azure .NET SDK
+
+Download and install [Azure .NET SDK](https://azure.microsoft.com/downloads/) on your machine.
+
+## Create an application in Azure Active Directory
+
+1. In [Create an Azure Active Directory application](../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal), create an application that represents the .NET application you are creating in this tutorial. For the sign-on URL, you can provide a dummy URL as shown in the article (`https://contoso.org/exampleapp`).
+1. In [Get values for signing in](../active-directory/develop/howto-create-service-principal-portal.md#get-tenant-and-app-id-values-for-signing-in), get the **application ID** and **tenant ID**, and note down these values that you use later in this tutorial.
+1. In [Certificates and secrets](../active-directory/develop/howto-create-service-principal-portal.md#authentication-two-options), get the **authentication key**, and note down this value that you use later in this tutorial.
+1. In [Assign the application to a role](../active-directory/develop/howto-create-service-principal-portal.md#assign-a-role-to-the-application), assign the application to the **Contributor** role at the subscription level so that the application can create Azure Purview accounts in the subscription.
+
+## Create a Visual Studio project
+
+Next, create a C# .NET console application in Visual Studio:
+
+1. Launch **Visual Studio**.
+2. In the Start window, select **Create a new project** > **Console App (.NET Framework)**. .NET version 4.5.2 or above is required.
+3. In **Project name**, enter **PurviewQuickStart**.
+4. Select **Create** to create the project.
+
+## Install NuGet packages
+
+1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console**.
+2. In the **Package Manager Console** pane, run the following commands to install packages. For more information, see the [Microsoft.Azure.Management.Purview NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Management.Purview/).
+
+ ```powershell
+ Install-Package Microsoft.Azure.Management.Purview
+ Install-Package Microsoft.Azure.Management.ResourceManager -IncludePrerelease
+ Install-Package Microsoft.IdentityModel.Clients.ActiveDirectory
+ ```
+
+## Create an Azure Purview client
+
+1. Open **Program.cs**, include the following statements to add references to namespaces.
+
+ ```csharp
+ using System;
+ using System.Collections.Generic;
+ using System.Linq;
+ using Microsoft.Rest;
+ using Microsoft.Rest.Serialization;
+ using Microsoft.Azure.Management.ResourceManager;
+ using Microsoft.Azure.Management.Purview;
+ using Microsoft.Azure.Management.Purview.Models;
+ using Microsoft.IdentityModel.Clients.ActiveDirectory;
+ ```
+
+2. Add the following code to the **Main** method that sets the variables. Replace the placeholders with your own values. For a list of Azure regions in which Azure Purview is currently available, search on **Azure Purview** and select the regions that interest you on the following page: [Products available by region](https://azure.microsoft.com/global-infrastructure/services/).
+
+ ```csharp
+ // Set variables
+ string tenantID = "<your tenant ID>";
+ string applicationId = "<your application ID>";
+ string authenticationKey = "<your authentication key for the application>";
+ string subscriptionId = "<your subscription ID where the Azure Purview account will be created>";
+ string resourceGroup = "<your resource group where the Azure Purview account will be created>";
+ string region = "<the location of your resource group>";
+ string purviewAccountName =
+ "<specify the name of purview account to create. It must be globally unique.>";
+ ```
+
+3. Add the following code to the **Main** method that creates an instance of **PurviewManagementClient** class. You use this object to create an Azure Purview Account.
+
+ ```csharp
+ // Authenticate and create a purview management client
+ var context = new AuthenticationContext("https://login.windows.net/" + tenantID);
+ ClientCredential cc = new ClientCredential(applicationId, authenticationKey);
+ AuthenticationResult result = context.AcquireTokenAsync(
+ "https://management.azure.com/", cc).Result;
+ ServiceClientCredentials cred = new TokenCredentials(result.AccessToken);
+ var client = new PurviewManagementClient(cred)
+ {
+ SubscriptionId = subscriptionId
+ };
+ ```
+
+## Create an Azure Purview account
+
+Add the following code to the **Main** method that creates an **Azure Purview account**.
+
+```csharp
+// Create an Azure Purview account
+Console.WriteLine("Creating Azure Purview Account " + purviewAccountName + "...");
+Account account = new Account()
+{
+    Location = region,
+    Identity = new Identity(type: "SystemAssigned"),
+    Sku = new AccountSku(name: "Standard", capacity: 4)
+};
+try
+{
+    client.Accounts.CreateOrUpdate(resourceGroup, purviewAccountName, account);
+    Console.WriteLine(client.Accounts.Get(resourceGroup, purviewAccountName).ProvisioningState);
+}
+catch (ErrorResponseModelException purviewException)
+{
+    Console.WriteLine(purviewException.StackTrace);
+}
+Console.WriteLine(
+    SafeJsonConvert.SerializeObject(account, client.SerializationSettings));
+while (client.Accounts.Get(resourceGroup, purviewAccountName).ProvisioningState == "PendingCreation")
+{
+    System.Threading.Thread.Sleep(1000);
+}
+Console.WriteLine("\nPress any key to exit...");
+Console.ReadKey();
+```
+
+## Run the code
+
+Build and start the application, then verify the execution.
+
+The console prints the progress of creating the Azure Purview account.
+
+### Sample output
+
+```console
+Creating Azure Purview Account testpurview...
+Succeeded
+{
+ "sku": {
+ "capacity": 4,
+ "name": "Standard"
+ },
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "location": "southcentralus"
+}
+
+Press any key to exit...
+```
+
+## Verify the output
+
+Go to the **Azure Purview accounts** page in the [Azure portal](https://portal.azure.com) and verify the account created using the above code.
+
+## Delete Azure Purview account
+
+To programmatically delete an Azure Purview Account, add the following lines of code to the program:
+
+```csharp
+Console.WriteLine("Deleting the Azure Purview Account");
+client.Accounts.Delete(resourceGroup, purviewAccountName);
+```
+
+## Check if Azure Purview account name is available
+
+To check the availability of an Azure Purview account name, use the following code:
+
+```csharp
+CheckNameAvailabilityRequest checkNameAvailabilityRequest = new CheckNameAvailabilityRequest()
+{
+ Name = purviewAccountName,
+ Type = "Microsoft.Purview/accounts"
+};
+Console.WriteLine("Check Azure Purview account name");
+Console.WriteLine(client.Accounts.CheckNameAvailability(checkNameAvailabilityRequest).NameAvailable);
+```
+
+The above code will print 'True' if the name is available and 'False' if the name is not available.
+
+## Next steps
+
+The code in this tutorial creates an Azure Purview account, deletes it, and checks whether an account name is available. You can now download the .NET SDK and learn about other resource provider actions you can perform for an Azure Purview account.
+
+Follow these next articles to learn how to navigate the Azure Purview Studio, create a collection, and grant access to Azure Purview.
+
+* [How to use the Azure Purview Studio](use-azure-purview-studio.md)
+* [Create a collection](quickstart-create-collection.md)
+* [Add users to your Azure Purview account](catalog-permissions.md)
purview Create Azure Purview Portal Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/create-azure-purview-portal-faq.md
+
+ Title: Create an Azure Policy exception for Azure Purview
+description: This article describes how to create an Azure Policy exception for Azure Purview while leaving existing Policies in place to maintain security.
++++ Last updated : 08/26/2021++
+# Create an Azure Policy exception for Azure Purview
+
+Many subscriptions have [Azure Policies](../governance/policy/overview.md) in place that restrict the creation of some resources. This is to maintain subscription security and cleanliness. However, Azure Purview accounts deploy two other Azure resources when they are created: an Azure Storage account and an Event Hubs namespace. When you [create an Azure Purview account](create-catalog-portal.md), these resources will be deployed. They will be managed by Azure, so you don't need to maintain them, but you will need to deploy them.
+
+To maintain your policies in your subscription, but still allow the creation of these managed resources, you can create a policy exception.
+
+## Create a policy exception for Azure Purview
+
+1. Navigate to the [Azure portal](https://portal.azure.com) and search for **Policy**
+
+ :::image type="content" source="media/create-purview-portal-faq/search-for-policy.png" alt-text="Screenshot showing the Azure portal search bar, searching for Policy keyword.":::
+
+1. Follow [Create a custom policy definition](../governance/policy/tutorials/create-custom-policy-definition.md) or modify an existing policy to add two exceptions with the `not` operator and the `resourceBypass` tag:
+
+ ```json
+ {
+ "mode": "All",
+ "policyRule": {
+ "if": {
+ "anyOf": [
+ {
+ "allOf": [
+ {
+ "field": "type",
+ "equals": "Microsoft.Storage/storageAccounts"
+ },
+ {
+ "not": {
+ "field": "tags['<resourceBypass>']",
+ "exists": true
+ }
+ }]
+ },
+ {
+ "allOf": [
+ {
+ "field": "type",
+ "equals": "Microsoft.EventHub/namespaces"
+ },
+ {
+ "not": {
+ "field": "tags['<resourceBypass>']",
+ "exists": true
+ }
+ }]
+ }]
+ },
+ "then": {
+ "effect": "deny"
+ }
+ },
+ "parameters": {}
+ }
+ ```
+
+ > [!Note]
+ > The tag could be anything besides `resourceBypass`, and it's up to you to define its value when creating Azure Purview in later steps, as long as the policy can detect the tag.
+
+ :::image type="content" source="media/create-catalog-portal/policy-definition.png" alt-text="Screenshot showing how to create policy definition.":::
+
+1. [Create a policy assignment](../governance/policy/assign-policy-portal.md) using the custom policy created.
+
+ :::image type="content" source="media/create-catalog-portal/policy-assignment.png" alt-text="Screenshot showing how to create policy assignment" lightbox="./media/create-catalog-portal/policy-assignment.png":::
+
+> [!Note]
+> If you have **Azure Policy** and need to add an exception as in **Prerequisites**, you need to add the correct tag. For example, you can add the `resourceBypass` tag:
+> :::image type="content" source="media/create-catalog-portal/add-purview-tag.png" alt-text="Add tag to Azure Purview account.":::
+
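+As a sketch of how that tag can be supplied when the account is created programmatically (using the `azure-mgmt-purview` Python SDK shown in the related quickstarts; the tag value and resource names here are placeholders you choose, not values required by Azure Purview):
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.purview import PurviewManagementClient
+from azure.mgmt.purview.models import Account, AccountSku, Identity
+
+purview_client = PurviewManagementClient(DefaultAzureCredential(), "<subscription-id>")
+
+# Tag the Azure Purview account with the tag your policy exception checks for (as in the note above).
+purview_resource = Account(
+    location="<region>",
+    identity=Identity(type="SystemAssigned"),
+    sku=AccountSku(name="Standard", capacity=4),
+    tags={"resourceBypass": "true"},
+)
+
+purview_client.accounts.begin_create_or_update(
+    "<resource-group>", "<purview-account-name>", purview_resource
+).result()
+```
+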
+## Next steps
+
+To set up Azure Purview by using Private Link, see [Use private endpoints for your Azure Purview account](./catalog-private-link.md).
purview Create Azure Purview Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/create-azure-purview-python.md
+
+ Title: 'Quickstart: Create an Azure Purview account using Python'
+description: Create an Azure Purview account using Python.
+++
+ms.devlang: python
+ Last updated : 09/27/2021+++
+# Quickstart: Create an Azure Purview account using Python
+
+In this quickstart, you'll create an Azure Purview account programmatically using Python. [Python reference for Azure Purview](/python/api/azure-mgmt-purview/) is available, but this article will take you through all the steps needed to create an account with Python.
+
+Azure Purview is a data governance service that helps you manage and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, Azure Purview creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end-to-end lineage. Data consumers are able to discover data across your organization, and data administrators are able to audit, secure, and ensure right use of your data.
+
+For more information about Azure Purview, [see our overview page](overview.md). For more information about deploying Azure Purview across your organization, [see our deployment best practices](deployment-best-practices.md).
++
+## Install the Python package
+
+1. Open a terminal or command prompt with administrator privileges.
+2. First, install the Python package for Azure management resources:
+
+ ```bash
+ pip install azure-mgmt-resource
+ ```
+
+3. To install the Python package for Azure Purview, run the following command:
+
+ ```bash
+ pip install azure-mgmt-purview
+ ```
+
+ The [Python SDK for Azure Purview](https://github.com/Azure/azure-sdk-for-python) supports Python 2.7, 3.3, 3.4, 3.5, 3.6 and 3.7.
+
+4. To install the Python package for Azure Identity authentication, run the following command:
+
+ ```bash
+ pip install azure-identity
+ ```
+
+ > [!NOTE]
+ > The "azure-identity" package might have conflicts with "azure-cli" on some common dependencies. If you meet any authentication issue, remove "azure-cli" and its dependencies, or use a clean machine without installing "azure-cli" package.
+
+## Create a purview client
+
+1. Create a file named **purview.py**. Add the following statements to add references to namespaces.
+
+ ```python
+ from azure.identity import ClientSecretCredential
+ from azure.mgmt.resource import ResourceManagementClient
+ from azure.mgmt.purview import PurviewManagementClient
+ from azure.mgmt.purview.models import *
+ from datetime import datetime, timedelta
+ import time
+ ```
+
+2. Add the following code to the **main** method that creates an instance of the PurviewManagementClient class. You'll use this object to create a purview account, delete purview accounts, check name availability, and perform other resource provider operations.
+
+ ```python
+ def main():
+
+ # Azure subscription ID
+ subscription_id = '<subscription ID>'
+
+ # This program creates this resource group. If it's an existing resource group, comment out the code that creates the resource group
+ rg_name = '<resource group>'
+
+ # The purview name. It must be globally unique.
+ purview_name = '<purview account name>'
+
+ # Location name, where Azure Purview account must be created.
+ location = '<location name>'
+
+ # Specify your Active Directory client ID, client secret, and tenant ID
+ credentials = ClientSecretCredential(client_id='<service principal ID>', client_secret='<service principal key>', tenant_id='<tenant ID>')
+    resource_client = ResourceManagementClient(credentials, subscription_id)
+ purview_client = PurviewManagementClient(credentials, subscription_id)
+ ```
+
+## Create a purview account
+
+1. Add the following code to the **main** method that creates an **Azure Purview account**. If your resource group already exists, comment out the first `create_or_update` statement.
+
+ ```python
+    # Create the resource group.
+    # Comment out the next two lines if the resource group already exists.
+    rg_params = {'location': location}
+    resource_client.resource_groups.create_or_update(rg_name, rg_params)
+
+    # Create an Azure Purview account
+    identity = Identity(type="SystemAssigned")
+    sku = AccountSku(name='Standard', capacity=4)
+    purview_resource = Account(identity=identity, sku=sku, location=location)
+
+    try:
+        pa = (purview_client.accounts.begin_create_or_update(rg_name, purview_name, purview_resource)).result()
+        print("location:", pa.location, " Azure Purview Account Name: ", pa.name, " Id: ", pa.id, " tags: ", pa.tags)
+    except:
+        print("Error")
+
+    while getattr(pa, 'provisioning_state') != "Succeeded":
+        pa = purview_client.accounts.get(rg_name, purview_name)
+        print(getattr(pa, 'provisioning_state'))
+        if getattr(pa, 'provisioning_state') == "Failed":
+            print("Error in creating Azure Purview account")
+            break
+        time.sleep(30)
+ ```
+
+2. Now, add the following statement to invoke the **main** method when the program is run:
+
+ ```python
+ # Start the main method
+ main()
+ ```
+
+## Full script
+
+Here's the full Python code:
+
+```python
+def main():
+
+    from azure.identity import ClientSecretCredential
+    from azure.mgmt.resource import ResourceManagementClient
+    from azure.mgmt.purview import PurviewManagementClient
+    from azure.mgmt.purview.models import *
+    from datetime import datetime, timedelta
+    import time
+
+    # Azure subscription ID
+    subscription_id = '<subscription ID>'
+
+    # This program creates this resource group. If it's an existing resource group, comment out the code that creates the resource group
+    rg_name = '<resource group>'
+
+    # The purview name. It must be globally unique.
+    purview_name = '<purview account name>'
+
+    # Location where the resource group and Azure Purview account are created.
+    location = 'southcentralus'
+
+    # Specify your Active Directory client ID, client secret, and tenant ID
+    credentials = ClientSecretCredential(client_id='<service principal ID>', client_secret='<service principal key>', tenant_id='<tenant ID>')
+    resource_client = ResourceManagementClient(credentials, subscription_id)
+    purview_client = PurviewManagementClient(credentials, subscription_id)
+
+    # Create the resource group.
+    # Comment out the next two lines if the resource group already exists.
+    rg_params = {'location': location}
+    resource_client.resource_groups.create_or_update(rg_name, rg_params)
+
+    # Create an Azure Purview account
+    identity = Identity(type="SystemAssigned")
+    sku = AccountSku(name='Standard', capacity=4)
+    purview_resource = Account(identity=identity, sku=sku, location=location)
+
+    try:
+        pa = (purview_client.accounts.begin_create_or_update(rg_name, purview_name, purview_resource)).result()
+        print("location:", pa.location, " Azure Purview Account Name: ", purview_name, " Id: ", pa.id, " tags: ", pa.tags)
+    except:
+        print("Error in submitting job to create account")
+
+    while getattr(pa, 'provisioning_state') != "Succeeded":
+        pa = purview_client.accounts.get(rg_name, purview_name)
+        print(getattr(pa, 'provisioning_state'))
+        if getattr(pa, 'provisioning_state') == "Failed":
+            print("Error in creating Azure Purview account")
+            break
+        time.sleep(30)
+
+# Start the main method
+main()
+```
+
+## Run the code
+
+Build and start the application. The console prints the progress of Azure Purview account creation. Wait until it's completed.
+Here's the sample output:
+
+```console
+location: southcentralus Azure Purview Account Name: purviewpython7 Id: /subscriptions/8c2c7b23-848d-40fe-b817-690d79ad9dfd/resourceGroups/Demo_Catalog/providers/Microsoft.Purview/accounts/purviewpython7 tags: None
+Creating
+Creating
+Succeeded
+```
+
+## Verify the output
+
+Go to the **Azure Purview accounts** page in the Azure portal and verify the account created using the above code.
+
+## Delete Azure Purview account
+
+To delete the Azure Purview account, add the following code to the program, then run it:
+
+```python
+pa = purview_client.accounts.begin_delete(rg_name, purview_name).result()
+```
+
+## Next steps
+
+The code in this tutorial creates and deletes an Azure Purview account. You can now download the Python SDK and learn about other resource provider actions you can perform for an Azure Purview account.
+
+Follow these next articles to learn how to navigate the Azure Purview Studio, create a collection, and grant access to Azure Purview.
+
+* [How to use the Azure Purview Studio](use-azure-purview-studio.md)
+* [Create a collection](quickstart-create-collection.md)
+* [Add users to your Azure Purview account](catalog-permissions.md)
purview Create Catalog Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/create-catalog-portal.md
This quickstart describes the steps to create an Azure Purview account in the Azure portal and get started on the process of classifying, securing, and discovering your data in Azure Purview!
-Azure Purview is a data governance service that helps you manage and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, Azure Purview creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end to end linage. Data consumers are able to discover data across your organization, and data administrators are able to audit, secure, and ensure right use of your data.
+Azure Purview is a data governance service that helps you manage and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, Azure Purview creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end-to-end lineage. Data consumers are able to discover data across your organization, and data administrators are able to audit, secure, and ensure right use of your data.
For more information about Azure Purview, [see our overview page](overview.md). For more information about deploying Azure Purview across your organization, [see our deployment best practices](deployment-best-practices.md).
In this quickstart, you learned how to create an Azure Purview account and how t
Next, you can create a user-assigned managed identity (UAMI) that will enable your new Azure Purview account to authenticate directly with resources using Azure Active Directory (Azure AD) authentication.
-To create a UAMI follow our [guide to create a user-assigned managed identity](manage-credentials.md#create-a-user-assigned-managed-identity).
+To create a UAMI, follow our [guide to create a user-assigned managed identity](manage-credentials.md#create-a-user-assigned-managed-identity).
Follow these next articles to learn how to navigate the Azure Purview Studio, create a collection, and grant access to Azure Purview:
-* [Using the Azure Purview Studio](use-purview-studio.md)
+* [Using the Azure Purview Studio](use-azure-purview-studio.md)
* [Create a collection](quickstart-create-collection.md) * [Add users to your Azure Purview account](catalog-permissions.md)
purview Create Catalog Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/create-catalog-powershell.md
In this Quickstart, you'll create an Azure Purview account using Azure PowerShell/Azure CLI. [PowerShell reference for Azure Purview](/powershell/module/az.purview/) is available, but this article will take you through all the steps needed to create an account with PowerShell.
-Azure Purview is a data governance service that helps you manage and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, Azure Purview creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end to end linage. Data consumers are able to discover data across your organization, and data administrators are able to audit, secure, and ensure right use of your data.
+Azure Purview is a data governance service that helps you manage and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, Azure Purview creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end-to-end lineage. Data consumers are able to discover data across your organization, and data administrators are able to audit, secure, and ensure right use of your data.
For more information about Azure Purview, [see our overview page](overview.md). For more information about deploying Azure Purview across your organization, [see our deployment best practices](deployment-best-practices.md).
In this quickstart, you learned how to create an Azure Purview account.
Follow these next articles to learn how to navigate the Azure Purview Studio, create a collection, and grant access to Azure Purview.
-* [How to use the Azure Purview Studio](use-purview-studio.md)
+* [How to use the Azure Purview Studio](use-azure-purview-studio.md)
* [Add users to your Azure Purview account](catalog-permissions.md) * [Create a collection](quickstart-create-collection.md)
purview How To Create And Manage Collections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-create-and-manage-collections.md
In order to create and manage collections in Azure Purview, you will need to be
:::image type="content" source="./media/how-to-create-and-manage-collections/role-assignments.png" alt-text="Screenshot of Azure Purview studio window, opened to the Data Map, with the role assignments tab highlighted." border="true":::
-1. To create a collection, you'll need to be in the collection admin list under role assignments. If you created the Azure Purview resource, you should be listed as a collection admin under the root collection already. If not, you'll need to contact the collection admin to grant you permission.
+1. To create a collection, you'll need to be in the collection admin list under role assignments. If you created the Azure Purview resource, you should be listed as a collection admin under the root collection already. If not, you'll need to contact the collection admin to grant you permission.
:::image type="content" source="./media/how-to-create-and-manage-collections/collection-admins.png" alt-text="Screenshot of Azure Purview studio window, opened to the Data Map, with the collection admin section highlighted." border="true":::
You'll need to be a collection admin in order to create a collection. If you are
1. Select **+ Add a collection**. Again, note that only [collection admins](#check-permissions) can manage collections.
- :::image type="content" source="./media/how-to-create-and-manage-collections/select-add-a-collection.png" alt-text="Screenshot of Azure Purview studio window, showing the new collection window, with the add a collection buttons highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/select-add-a-collection.png" alt-text="Screenshot of Azure Purview studio window, showing the new collection window, with the 'Add a collection' button highlighted." border="true":::
1. In the right panel, enter the collection name and description. If needed you can also add users or groups as collection admins to the new collection. 1. Select **Create**.
You'll need to be a collection admin in order to delete a collection. If you are
## Add roles and restrict access through collections
-Since permissions are managed through collections in Azure Purview, it is important to understand the roles and what permissions they will give your users. A user granted permissions on a collection will have access to sources and assets associated with that collection, as well as inherit permissions to subcollections. Inheritance [can be restricted](#restrict-inheritance), but is allowed by default.
+Since permissions are managed through collections in Azure Purview, it is important to understand the roles and what permissions they will give your users. A user granted permissions on a collection will have access to sources and assets associated with that collection, and inherit permissions to subcollections. Inheritance [can be restricted](#restrict-inheritance), but is allowed by default.
The following guide will discuss the roles, how to manage them, and permissions inheritance.
All assigned roles apply to sources, assets, and other objects within the collec
1. Type in the textbox to search for users you want to add to the role member. Select **X** to remove members you don't want to add.
- :::image type="content" source="./media/how-to-create-and-manage-collections/search-user-permissions.png" alt-text="Screenshot of Azure Purview studio collection collection admin window with the search bar highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/search-user-permissions.png" alt-text="Screenshot of Azure Purview studio collection admin window with the search bar highlighted." border="true":::
1. Select **OK** to save your changes, and you will see the new users reflected in the role assignments list.
Once you restrict inheritance, you will need to add users directly to the restri
## Register source to a collection
-1. Select **Register** or register icon on collection node to register a data source. Note that only data source admin can register sources.
+1. Select **Register** or register icon on collection node to register a data source. Only a data source admin can register sources.
:::image type="content" source="./media/how-to-create-and-manage-collections/register-by-collection.png" alt-text="Screenshot of the data map Azure Purview studio window with the register button highlighted both at the top of the page and under a collection."border="true":::
-1. Fill in the data source name, and other source information. It lists all the collections which you have scan permission on the bottom of the form. You can select one collection. All assets under this source will belong to the collection you select.
+1. Fill in the data source name and other source information. The form lists all the collections where you have scan permission at the bottom. You can select one collection. All assets under this source will belong to the collection you select.
:::image type="content" source="./media/how-to-create-and-manage-collections/register-source.png" alt-text="Screenshot of the source registration window."border="true":::
Once you restrict inheritance, you will need to add users directly to the restri
:::image type="content" source="./media/how-to-create-and-manage-collections/new-scan.png" alt-text="Screenshot of a source Azure Purview studio window with the new scan button highlighted."border="true"::: 1. Similarly, at the bottom of the form, you can select a collection, and all assets scanned will be included in the collection.
-Note that the collections listed here are restricted to subcollections of the data source collection.
+The collections listed here are restricted to subcollections of the data source collection.
:::image type="content" source="./media/how-to-create-and-manage-collections/scan-under-collection.png" alt-text="Screenshot of a new scan window with the collection dropdown highlighted."border="true":::
Assets and sources are also associated with collections. During a scan, if the s
:::image type="content" source="./media/how-to-create-and-manage-collections/collection-path.png" alt-text="Screenshot of Azure Purview studio asset window, with the collection path highlighted." border="true"::: 1. Permissions in asset details page:
- 1. Please check the collection based permission model by following the [add roles and restricting access on collections guide above](#add-roles-and-restrict-access-through-collections).
- 1. If you don't have read permission on a collection, the assets under that collection will not be listed in search results. If you get the direct URL of one asset and open it, you will see the no access page. In this case please contact your Azure Purview admin to grant you the access. You can select the **Refresh** button to check the permission again.
+ 1. Check the collection-based permission model by following the [add roles and restricting access on collections guide above](#add-roles-and-restrict-access-through-collections).
+ 1. If you don't have read permission on a collection, the assets under that collection will not be listed in search results. If you get the direct URL of one asset and open it, you will see the no access page. Contact your Azure Purview admin to grant you the access. You can select the **Refresh** button to check the permission again.
:::image type="content" source="./media/how-to-create-and-manage-collections/no-access.png" alt-text="Screenshot of Azure Purview studio asset window where the user has no permissions, and has no access to information or options." border="true":::
Assets and sources are also associated with collections. During a scan, if the s
:::image type="content" source="./media/how-to-create-and-manage-collections/move-asset.png" alt-text="Screenshot of Azure Purview studio asset window with the collection path highlighted and the ellipsis button next to collection path selected." border="true"::: 1. Select the **Move to another collection** button.
-1. In the right side panel, choose the target collection you want move to. Note that you can only see the collections where you have write permissions. The asset can also only be added to the subcollections of the data source collection.
+1. In the right side panel, choose the target collection you want to move to. You can only see the collections where you have write permissions. The asset can also only be added to the subcollections of the data source collection.
:::image type="content" source="./media/how-to-create-and-manage-collections/move-select-collection.png" alt-text="Screenshot of Azure Purview studio pop-up window with the select a collection dropdown menu highlighted." border="true":::
Assets and sources are also associated with collections. During a scan, if the s
:::image type="content" source="./media/how-to-create-and-manage-collections/by-collection-view.png" alt-text="Screenshot of the asset Azure Purview studio window with the by collection tab selected."border="true":::
-1. On the next page, the search results of the assets under selected collection will be show up. You can narrow the results by selecting the facet filters. Or you can see the assets under other collections by selecting the sub/related collection names.
+1. On the next page, the search results of the assets under selected collection will be shown. You can narrow the results by selecting the facet filters. Or you can see the assets under other collections by selecting the sub/related collection names.
:::image type="content" source="./media/how-to-create-and-manage-collections/search-results-by-collection.png" alt-text="Screenshot of the catalog Azure Purview studio window with the by collection tab selected."border="true":::
Now that you have a collection, you can follow these guides below to add resourc
* [Manage data sources](manage-data-sources.md)
-* [Supported data sources](purview-connector-overview.md)
+* [Supported data sources](azure-purview-connector-overview.md)
* [Scan and ingestion](concept-scans-and-ingestion.md)
purview How To Create Import Export Glossary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-create-import-export-glossary.md
You should be able to export terms from glossary as long as the selected terms b
## Next steps
-* For more information about glossary terms, see the [glossary reference](reference-purview-glossary.md)
+* For more information about glossary terms, see the [glossary reference](reference-azure-purview-glossary.md)
purview How To Data Owner Policy Authoring Generic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-data-owner-policy-authoring-generic.md
Title: Authoring and publishing data owner access policies description: Step-by-step guide on how a data owner can author and publish access policies in Azure Purview-+ Previously updated : 1/28/2022 Last updated : 2/2/2022
purview Manage Integration Runtimes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/manage-integration-runtimes.md
Installation of the self-hosted integration runtime on a domain controller isn't
- If the host machine hibernates, the self-hosted integration runtime doesn't respond to data requests. Configure an appropriate power plan on the computer before you install the self-hosted integration runtime. If the machine is configured to hibernate, the self-hosted integration runtime installer prompts with a message.
- You must be an administrator on the machine to successfully install and configure the self-hosted integration runtime.
- Scan runs happen with a specific frequency per the schedule you've set up. Processor and RAM usage on the machine follows the same pattern with peak and idle times. Resource usage also depends heavily on the amount of data that is scanned. When multiple scan jobs are in progress, you'll see resource usage go up during peak times.
-- Scanning some data sources requires additional setup on the self-hosted integration runtime machine. For example, JDK, Visual C++ Redistributable, or specific driver. Refer to [each source article](purview-connector-overview.md) for prerequisite details.
+- Scanning some data sources requires additional setup on the self-hosted integration runtime machine. For example, JDK, Visual C++ Redistributable, or specific driver. Refer to [each source article](azure-purview-connector-overview.md) for prerequisite details.
> [!IMPORTANT] > If you use the Self-Hosted Integration runtime to scan Parquet files, you need to install the **64-bit JRE 8 (Java Runtime Environment) or OpenJDK** on your IR machine. Check our [Java Runtime Environment section at the bottom of the page](#java-runtime-environment-installation) for an installation guide.
purview Quickstart Create Collection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/quickstart-create-collection.md
Collections are Azure Purview's tool to manage ownership and access control acro
## Check permissions
-In order to create and manage collections in Azure Purview, you will need to be a **Collection Admin** within Azure Purview. We can check these permissions in the [Azure Purview Studio](use-purview-studio.md). You can find the studio by going to your Azure Purview resource in the [Azure portal](https://portal.azure.com), and selecting the **Open Azure Purview Studio** tile on the overview page.
+In order to create and manage collections in Azure Purview, you will need to be a **Collection Admin** within Azure Purview. We can check these permissions in the [Azure Purview Studio](use-azure-purview-studio.md). You can find the studio by going to your Azure Purview resource in the [Azure portal](https://portal.azure.com), and selecting the **Open Azure Purview Studio** tile on the overview page.
1. Select Data Map > Collections from the left pane to open collection management page.
In order to create and manage collections in Azure Purview, you will need to be
## Create a collection in the portal
-To create your collection, we'll start in the [Azure Purview Studio](use-purview-studio.md). You can find the studio by going to your Azure Purview resource in the Azure portal and selecting the **Open Azure Purview Studio** tile on the overview page.
+To create your collection, we'll start in the [Azure Purview Studio](use-azure-purview-studio.md). You can find the studio by going to your Azure Purview resource in the Azure portal and selecting the **Open Azure Purview Studio** tile on the overview page.
1. Select Data Map > Collections from the left pane to open collection management page.
purview Reference Azure Purview Glossary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/reference-azure-purview-glossary.md
+
+ Title: Azure Purview product glossary
+description: A glossary defining the terminology used throughout Azure Purview
++++ Last updated : 08/16/2021+
+# Azure Purview product glossary
+
+Below is a glossary of terminology used throughout Azure Purview.
+
+## Annotation
+Information that is associated with data assets in Azure Purview, for example, glossary terms and classifications. After they are applied, annotations can be used within Search to aid in the discovery of the data assets.
+## Approved
+The state given to any request that has been accepted as satisfactory by the designated individual or group who has authority to change the state of the request.
+## Asset
+Any single object that is stored within an Azure Purview data catalog.
+> [!NOTE]
+> A single object in the catalog could potentially represent many objects in storage, for example, a resource set is an asset but it's made up of many partition files in storage.
+## Azure Information Protection
+A cloud solution that supports labeling of documents and emails to classify and protect information. Labeled items can be protected by encryption, marked with a watermark, or restricted to specific actions or users; the label is bound to the item. This cloud-based solution relies on Azure Rights Management Service (RMS) for enforcing restrictions.
+## Business glossary
+A searchable list of specialized terms that an organization uses to describe key business words and their definitions. Using a business glossary can provide consistent data usage across the organization.
+## Classification report
+A report that shows key classification details about the scanned data.
+## Classification
+A type of annotation used to identify an attribute of an asset or a column such as "Age", "Email Address", and "Street Address". These attributes can be assigned during scans or added manually.
+## Classification rule
+A classification rule is a set of conditions that determine how scanned data should be classified when content matches the specified pattern.
+## Classified asset
+An asset where Azure Purview extracts schema and applies classifications during an automated scan. The scan rule set determines which assets get classified. If the asset is considered a candidate for classification and no classifications are applied during scan time, an asset is still considered a classified asset.
+## Column pattern
+A regular expression included in a classification rule that represents the column names that you want to match.
+## Contact
+An individual who is associated with an entity in the data catalog.
+## Control plane operation
+Operations that manage resources in your subscription, such as role-based access control and Azure Policy, which are sent to the Azure Resource Manager endpoint.
+## Credential
+A verification of identity or tool used in an access control system. Credentials can be used to authenticate an individual or group to grant access to a data asset.
+## Data catalog
+Azure Purview features that enable customers to view and manage the metadata for assets in their data estate.
+## Data map
+Azure Purview features that enable customers to manage their data estate, such as scanning, lineage, and movement.
+## Data pattern
+A regular expression that represents the data that is stored in a data field. For example, a data pattern for employee ID could be Employee{GUID}.
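+
+As a small illustration of how such a pattern behaves (a sketch only; the regular expression and sample values below are invented for this example and are not an Azure Purview API):
+
+```python
+import re
+
+# A regular expression approximating the "Employee{GUID}" data pattern described above.
+employee_id_pattern = re.compile(r"^Employee\{[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}\}$")
+
+samples = [
+    "Employee{12345678-abcd-4ef0-9876-1234567890ab}",  # matches the pattern
+    "Manager{12345678-abcd-4ef0-9876-1234567890ab}",   # does not match
+]
+
+for value in samples:
+    print(value, "->", bool(employee_id_pattern.match(value)))
+```
+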
+## Data plane operation
+An operation within a specific Azure Purview instance, such as editing an asset or creating a glossary term. Each instance has predefined roles, such as "data reader" and "data curator" that control which data plane operations a user can perform.
+## Discovered asset
+An asset that Azure Purview identifies in a data source during the scanning process. The number of discovered assets includes all files or tables before resource set grouping.
+## Distinct match threshold
+The total number of distinct data values that need to be found in a column before the scanner runs the data pattern on it. For example, a distinct match threshold of eight for employee ID requires that there are at least eight unique data values among the sampled values in the column that match the data pattern set for employee ID.
+## Expert
+An individual within an organization who understands the full context of a data asset or glossary term.
+## Full scan
+A scan that processes all assets within a selected scope of a data source.
+## Fully Qualified Name (FQN)
+A path that defines the location of an asset within its data source.
+## Glossary term
+An entry in the Business glossary that defines a concept specific to an organization. Glossary terms can contain information on synonyms, acronyms, and related terms.
+## Incremental scan
+A scan that detects and processes assets that have been created, modified, or deleted since the previous successful scan. To run an incremental scan, at least one full scan must be completed on the source.
+## Ingested asset
+An asset that has been scanned, classified (when applicable), and added to the Azure Purview data map. Ingested assets are discoverable and consumable within the data catalog through automated scanning or external connections, such as Azure Data Factory and Azure Synapse.
+## Insights
+An area within Azure Purview where you can view reports that summarize information about your data.
+## Integration runtime
+The compute infrastructure used to scan in a data source.
+## Lineage
+How data transforms and flows as it moves from its origin to its destination. Understanding this flow across the data estate helps organizations see the history of their data, and aid in troubleshooting or impact analysis.
+## Management Center
+An area within Azure Purview where you can manage connections, users, roles, and credentials.
+## Minimum match threshold
+The minimum percentage of matches among the distinct data values in a column that must be found by the scanner for a classification to be applied.
+
+For example, a minimum match threshold of 60% for employee ID requires that 60% of all distinct values among the sampled data in a column match the data pattern set for employee ID. If the scanner samples 128 values in a column and finds 60 distinct values in that column, then at least 36 of the distinct values (60%) must match the employee ID data pattern for the classification to be applied.
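+
+The following sketch restates that arithmetic in code (for illustration only; this is not how Azure Purview is configured, and the numbers come from the example above):
+
+```python
+# Worked example for the minimum match threshold described above.
+sampled_values = 128           # values the scanner samples in the column
+distinct_values = 60           # distinct values found among the samples
+minimum_match_threshold = 0.60
+
+# Distinct values that must match the employee ID data pattern for the classification to apply.
+required_matches = int(distinct_values * minimum_match_threshold)
+print(required_matches)  # 36
+```
+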
+## On-premises data
+Data that is in a data center controlled by a customer, for example, not in the cloud or software as a service (SaaS).
+## Owner
+An individual or group in charge of managing a data asset.
+## Pattern rule
+A configuration that overrides how Azure Purview groups assets as resource sets and displays them within the catalog.
+## Azure Purview instance
+A single Azure Purview resource.
+## Registered source
+A source that has been added to an Azure Purview instance and is now managed as a part of the Data catalog.
+## Related terms
+Glossary terms that are linked to other terms within the organization.
+## Resource set
+A single asset that represents many partitioned files or objects in storage. For example, Azure Purview stores partitioned Apache Spark output as a single resource set instead of unique assets for each individual file.
+## Role
+Permissions assigned to a user within an Azure Purview instance. Roles, such as Azure Purview Data Curator or Azure Purview Data Reader, determine what can be done within the product.
+## Scan
+An Azure Purview process that examines a source or set of sources and ingests its metadata into the data catalog. Scans can be run manually or on a schedule using a scan trigger.
+## Scan ruleset
+A set of rules that define which data types and classifications a scan ingests into a catalog.
+## Scan trigger
+A schedule that determines the recurrence of when a scan runs.
+## Search
+A data discovery feature of Azure Purview that returns a list of assets that match a keyword.
+## Search relevance
+The scoring of data assets that determines the order in which search results are returned. Multiple factors determine an asset's relevance score.
+## Self-hosted integration runtime
+An integration runtime installed on an on-premises machine or virtual machine inside a private network that is used to connect to data on-premises or in a private network.
+## Sensitivity label
+Annotations that classify and protect an organization's data. Azure Purview integrates with Microsoft Information Protection for creation of sensitivity labels.
+## Sensitivity label report
+A summary of which sensitivity labels are applied across the data estate.
+## Service
+A product that provides standalone functionality and is available to customers by subscription or license.
+## Source
+A system where data is stored. Sources can be hosted in various places such as a cloud or on-premises. You register and scan sources so that you can manage them in Azure Purview.
+## Source type
+A categorization of the registered sources used in an Azure Purview instance, for example, Azure SQL Database, Azure Blob Storage, Amazon S3, or SAP ECC.
+## Steward
+An individual who defines the standards for a glossary term. They are responsible for maintaining quality standards, nomenclature, and rules for the assigned entity.
+## Term template
+A definition of attributes included in a glossary term. Users can either use the system-defined term template or create their own to include custom attributes.
+## Next steps
+
+To get started with Azure Purview, see [Quickstart: Create an Azure Purview account](create-catalog-portal.md).
purview Tutorial Azure Purview Tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/tutorial-azure-purview-tools.md
+
+ Title: Learn about Azure Purview open-source tools and utilities
+description: This tutorial lists various tools and utilities available in Azure Purview and discusses their usage.
+++++ Last updated : 10/10/2021
+# Customer Intent: As an Azure Purview administrator, I want to kickstart and be up and running with Azure Purview service in a matter of minutes; additionally, I want to perform and set up automations, batch-mode API executions and scripts that help me run Azure Purview smoothly and effectively for the long-term on a regular basis.
++
+# Azure Purview open-source tools and utilities
+
+This article lists several open-source tools and utilities (command-line, Python, and PowerShell interfaces) that help you get started with the Azure Purview service in a matter of minutes. These tools were authored and developed through a collective effort of the Azure Purview product group and the open-source community. Their objective is to make learning, starting up, regular usage, and long-term adoption of Azure Purview fast and easy.
+
+### Intended audience
+
+- Azure Purview community including customers, developers, ISVs, partners, evangelists, and enthusiasts.
+
+- The Azure Purview catalog is based on [Apache Atlas](https://atlas.apache.org/) and extends full support for Apache Atlas APIs. We welcome the Apache Atlas community, enthusiasts, and developers to build on and evangelize Azure Purview.
+
+### Azure Purview customer journey stages
+
+- *Azure Purview Learners*: Learners who are starting fresh with the Azure Purview service and are keen to understand and explore how a multi-cloud unified data governance solution works. This group includes users who want to compare Azure Purview with competing solutions in the data governance market and try it before adopting it for long-term use.
+
+- *Azure Purview Innovators*: Innovators who are keen to understand existing and upcoming Azure Purview features, and to ideate and conceptualize new ones. They are adept at building and developing solutions for customers and have forward-looking ideas for the next generation of data governance products.
+
+- *Azure Purview Enthusiasts/Evangelists*: Enthusiasts who are a combination of Learners and Innovators. They have developed a solid understanding and knowledge of Azure Purview and are therefore upbeat about its adoption. They can help evangelize Azure Purview as a service and educate other Azure Purview users and prospective customers across the globe.
+
+- *Azure Purview Adopters*: Adopters who have moved beyond starting up and exploring Azure Purview and have been using it smoothly for more than a few months.
+
+- *Azure Purview Long-Term Regular Users*: Long-term users who have been using Azure Purview for more than one year and are confident and comfortable with the most advanced Azure Purview use cases on the Azure portal and in Azure Purview Studio. They also have thorough knowledge of the Azure Purview REST APIs and the other use cases supported through those APIs.
++
+## Azure Purview open-source tools and utilities list
+
+1. [Purview-API-via-PowerShell](https://github.com/Azure/Azure-Purview-API-PowerShell/blob/main/README.md)
+
+ - **Recommended customer journey stages**: *Learners, Innovators, Enthusiasts, Adopters, Long-Term Regular Users*
+ - **Description**: This utility is based on and covers the entire set of [Azure Purview REST API Reference](/rest/api/purview/) Microsoft Docs. [Download & Install from PowerShell Gallery](https://aka.ms/purview-api-ps). It helps you execute all the documented Azure Purview REST APIs through a fast and easy-to-use PowerShell interface. Use and automate Azure Purview APIs for regular and long-term usage via command-line and scripted methods. This is an alternative for customers who want to run bulk tasks in an automated manner, in batch mode, or as scheduled cron jobs, rather than through the GUI of the Azure portal and Azure Purview Studio. Detailed documentation, a sample usage guide, self-help, and examples are available on [GitHub:Azure-Purview-API-PowerShell](https://github.com/Azure/Azure-Purview-API-PowerShell).
+
+1. [Purview-Starter-Kit](https://aka.ms/PurviewKickstart)
+
+ - **Recommended customer journey stages**: *Learners, Innovators, Enthusiasts*
+ - **Description**: PowerShell script to perform the initial setup of an Azure Purview account. Useful for anyone who wants to set up one or more new Azure Purview accounts in less than 5 minutes.
+
+1. [Azure Purview Lab](https://aka.ms/purviewlab)
+
+ - **Recommended customer journey stages**: *Learners, Innovators, Enthusiasts*
+ - **Description**: A hands-on lab that introduces the features of Azure Purview and helps you learn the concepts in a practical way, where you execute each step yourself to develop a solid understanding of Azure Purview.
+
+1. [Azure Purview CLI](https://aka.ms/purviewcli)
+
+ - **Recommended customer journey stages**: *Innovators, Enthusiasts, Adopters, Long-Term Regular Users*
+ - **Description**: A Python-based tool to execute the Azure Purview APIs, similar to [Purview-API-via-PowerShell](https://aka.ms/purview-api-ps) but with less functionality than the PowerShell-based framework.
+
+1. [Azure Purview Demo](https://aka.ms/pvdemo)
+
+ - **Recommended customer journey stages**: *Learners, Innovators, Enthusiasts*
+ - **Description**: An Azure Resource Manager (ARM) template-based tool that automatically sets up and deploys a new Azure Purview account quickly and securely with a single command. It is similar to [Purview-Starter-Kit](https://aka.ms/PurviewKickstart), but it additionally deploys a few pre-configured data sources: an Azure SQL Database, an Azure Data Lake Storage Gen2 account, an Azure Data Factory, and an Azure Synapse Analytics workspace.
+
+1. [PyApacheAtlas: Interface between Azure Purview and Apache Atlas](https://github.com/wjohnson/pyapacheatlas) using Atlas APIs
+
+ - **Recommended customer journey stages**: *Innovators, Enthusiasts, Adopters, Long-Term Regular Users*
+ - **Description**: A Python package to work with Azure Purview and the Apache Atlas API. It supports bulk loading, custom lineage, and more from a Pythonic set of classes, and it provides an Excel template for low-code uploads. A hedged usage sketch is shown after this list.
+
+1. [Azure Purview Event Hubs Notifications Reader](https://github.com/Azure/Azure-Purview-API-PowerShell/blob/main/purview_atlas_eventhub_sample.py)
+
+ - **Recommended customer journey stages**: *Innovators, Enthusiasts, Adopters, Long-Term Regular Users*
+ - **Description**: This tool demonstrates how to read Azure Purview's Event Hubs and catch real-time Kafka notifications from the Event Hubs in Atlas Notifications (https://atlas.apache.org/2.0.0/Notifications.html) format. It also generates, on the fly, a CSV file of the entities and assets discovered live during a scan, along with any other notifications of interest that Azure Purview generates. A hedged consumer sketch is shown after this list.
++
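+As a hedged illustration of the PyApacheAtlas package above, the following sketch uploads a single custom entity. The account name, service principal values, type name, and qualified name are placeholders, and exact call signatures can vary between package versions; see the PyApacheAtlas repo for authoritative samples.
+
+```python
+from pyapacheatlas.auth import ServicePrincipalAuthentication
+from pyapacheatlas.core import PurviewClient, AtlasEntity
+
+auth = ServicePrincipalAuthentication(
+    tenant_id="<tenant-id>", client_id="<client-id>", client_secret="<client-secret>")
+client = PurviewClient(account_name="<purview-account-name>", authentication=auth)
+
+# A negative guid marks the entity as new; Purview assigns the real guid on upload.
+entity = AtlasEntity(
+    name="sales_daily",
+    typeName="azure_sql_table",                        # any type known to your catalog
+    qualified_name="mssql://contoso.example/sales_daily",
+    guid=-100)
+print(client.upload_entities(batch=[entity]))
+```
+
+Similarly, for the Event Hubs notifications reader above, a minimal consumer can be written with the azure-eventhub package. The connection string is a placeholder, and the hub name `atlas_entities` is an assumption based on the Apache Atlas notification topics; check your Azure Purview account's Kafka configuration for the actual values.
+
+```python
+from azure.eventhub import EventHubConsumerClient
+
+def on_event(partition_context, event):
+    # Each event body is JSON in the Apache Atlas notification format.
+    print(event.body_as_str())
+
+consumer = EventHubConsumerClient.from_connection_string(
+    conn_str="<event-hubs-namespace-connection-string>",
+    consumer_group="$Default",
+    eventhub_name="atlas_entities")
+with consumer:
+    consumer.receive(on_event=on_event, starting_position="-1")  # read from the beginning
+```
+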
+## Feedback and disclaimer
+
+None of the tools comes with an express warranty from Microsoft verifying its efficacy or guaranteeing its functionality. They are certified to be free of malicious activity and viruses, and are guaranteed not to collect any private or sensitive data.
+
+For feedback or questions about efficacy and functionality during usage, contact the respective tool owners and authors by using the contact details in the corresponding GitHub repo.
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Purview-API-PowerShell](https://aka.ms/purview-api-ps)
purview Tutorial Data Owner Policies Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/tutorial-data-owner-policies-resource-group.md
Title: Resource group and subscription access provisioning by data owner description: Step-by-step guide showing how a data owner can create access policies to resource groups or subscriptions.-+ Previously updated : 1/28/2022 Last updated : 2/2/2022
purview Tutorial Data Owner Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/tutorial-data-owner-policies-storage.md
Title: Access provisioning by data owner to Azure Storage datasets description: Step-by-step guide showing how data owners can create access policies to datasets in Azure Storage-+ Previously updated : 1/28/2022 Last updated : 2/2/2022
purview Tutorial Register Scan On Premises Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/tutorial-register-scan-on-premises-sql-server.md
Collections in Azure Purview are used to organize assets and sources into a cust
### Check permissions
-To create and manage collections in Azure Purview, you'll need to be a **Collection Admin** within Azure Purview. We can check these permissions in the [Azure Purview Studio](use-purview-studio.md).
+To create and manage collections in Azure Purview, you'll need to be a **Collection Admin** within Azure Purview. We can check these permissions in the [Azure Purview Studio](use-azure-purview-studio.md).
1. Select **Data Map > Collections** from the left pane to open the collection management page.
purview Use Azure Purview Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/use-azure-purview-studio.md
+
+ Title: Use the Azure Purview Studio
+description: This article describes how to use Azure Purview Studio.
++++ Last updated : 09/27/2021++
+# Use Azure Purview Studio
+
+This article gives an overview of some of the main features of Azure Purview.
+
+## Prerequisites
+
+* An active Azure Purview account has already been created in the Azure portal, and the user has permissions to access [Azure Purview Studio](https://web.purview.azure.com/resource/).
+
+## Launch Azure Purview account
+
+* To launch your Azure Purview account, go to **Azure Purview accounts** in the Azure portal, select the account you want to open, and then launch Azure Purview Studio.
+
+ :::image type="content" source="./media/use-purview-studio/open-purview-studio.png" alt-text="Screenshot of Azure Purview window in Azure portal, with Azure Purview Studio button highlighted." border="true":::
+
+* Another way to launch your Azure Purview account is to go to `https://web.purview.azure.com`, select **Azure Active Directory** and an account name, and then launch the account.
+
+## Home page
+
+**Home** is the starting page for the Azure Purview client.
++
+The following list summarizes the main features of **Home page**. Each number in the list corresponds to a highlighted number in the preceding screenshot.
+
+1. Friendly name of the catalog. You can set the catalog name in **Management** > **Account information**.
+
+2. Catalog analytics shows the number of:
+
+ * Data sources
+ * Assets
+ * Glossary terms
+
+3. The search box allows you to search for data assets across the data catalog.
+
+4. The quick access buttons give access to frequently used functions of the application. The buttons that are presented depend on the role assigned to your user account at the root collection.
+
+ * For *collection admin*, the available button is **Knowledge center**.
+ * For *data curator*, the buttons are **Browse assets**, **Manage glossary**, and **Knowledge center**.
+ * For *data reader*, the buttons are **Browse assets**, **View glossary**, and **Knowledge center**.
+ * For *data source admin* + *data curator*, the buttons are **Browse assets**, **Manage glossary**, and **Knowledge center**.
+ * For *data source admin* + *data reader*, the buttons are **Browse assets**, **View glossary**, and **Knowledge center**.
+
+ > [!NOTE]
+ > For more information about Azure Purview roles, see [Access control in Azure Purview](catalog-permissions.md).
+
+5. The left navigation bar helps you locate the main pages of the application.
+6. The **Recently accessed** tab shows a list of recently accessed data assets. For information about accessing assets, see [Search the Data Catalog](how-to-search-catalog.md) and [Browse by asset type](how-to-browse-catalog.md). The **My items** tab is a list of data assets owned by the logged-on user.
+7. **Links** contains links to region status, documentation, pricing, overview, and Azure Purview status.
+8. The top navigation bar contains release notes and updates, the option to change the Purview account, notifications, help, and feedback.
+
+## Knowledge center
+
+Knowledge center is where you can find all the videos and tutorials related to Azure Purview.
+
+## Guided tours
+
+Each page in Azure Purview Studio has guided tours that give an overview of the page. To start a guided tour, select **Help** on the top bar, and then select **Guided tours**.
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Add a security principal](tutorial-scan-data.md)
search Search Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-indexer-overview.md
Previously updated : 01/19/2022 Last updated : 02/01/2022 # Indexers in Azure Cognitive Search
Indexers crawl data stores on Azure and outside of Azure.
+ [Snowflake](search-how-to-index-power-query-data-sources.md) (in preview) + [Azure SQL Managed Instance](search-howto-connecting-azure-sql-mi-to-azure-search-using-indexers.md) + [SQL Server on Azure Virtual Machines](search-howto-connecting-azure-sql-iaas-to-azure-search-using-indexers.md)++ [Azure Files](search-file-storage-integration.md) (in preview) Indexers accept flattened row sets, such as a table or view, or items in a container or folder. In most cases, it creates one search document per row, record, or item.
Now that you've been introduced to indexers, a next step is to review indexer pr
+ [Reset and run indexers](search-howto-run-reset-indexers.md) + [Schedule indexers](search-howto-schedule-indexers.md) + [Define field mappings](search-indexer-field-mappings.md)
-+ [Monitor indexer status](search-howto-monitor-indexers.md)
++ [Monitor indexer status](search-howto-monitor-indexers.md)
security Security Code Analysis Customize https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/develop/security-code-analysis-customize.md
description: This article describes customizing the tasks in the Microsoft Secur
Previously updated : 03/22/2021 Last updated : 01/31/2022
# Configure and customize the build tasks > [!Note]
-> Effective March 1, 2022, the Microsoft Security Code Analysis (MSCA) extension will be retired. Existing MSCA customers will retain their access to MSCA through March 1, 2022. Please refer to the [OWASP Source Code Analysis Tools](https://owasp.org/www-community/Source_Code_Analysis_Tools) for alternative options in Azure DevOps. For customers planning to migrate to GitHub, you can check out [GitHub Advanced Security](https://docs.github.com/github/getting-started-with-github/about-github-advanced-security).
+> Effective July 1, 2022, the Microsoft Security Code Analysis (MSCA) extension will be retired. Existing MSCA customers will retain their access to MSCA through July 1, 2022. Please refer to the [OWASP Source Code Analysis Tools](https://owasp.org/www-community/Source_Code_Analysis_Tools) for alternative options in Azure DevOps. For customers planning to migrate to GitHub, you can check out [GitHub Advanced Security](https://docs.github.com/github/getting-started-with-github/about-github-advanced-security).
This article describes in detail the configuration options available in each of the build tasks. The article starts with the tasks for security code analysis tools. It ends with the post-processing tasks.
security Security Code Analysis Onboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/develop/security-code-analysis-onboard.md
description: Learn how to onboard and install the Microsoft Security Code Analys
Previously updated : 03/22/2021 Last updated : 01/31/2022
# Onboarding and installing > [!Note]
-> Effective March 1, 2022, the Microsoft Security Code Analysis (MSCA) extension will be retired. Existing MSCA customers will retain their access to MSCA through March 1, 2022. Please refer to the [OWASP Source Code Analysis Tools](https://owasp.org/www-community/Source_Code_Analysis_Tools) for alternative options in Azure DevOps. For customers planning to migrate to GitHub, you can check out [GitHub Advanced Security](https://docs.github.com/github/getting-started-with-github/about-github-advanced-security).
+> Effective July 1, 2022, the Microsoft Security Code Analysis (MSCA) extension will be retired. Existing MSCA customers will retain their access to MSCA through July 1, 2022. Please refer to the [OWASP Source Code Analysis Tools](https://owasp.org/www-community/Source_Code_Analysis_Tools) for alternative options in Azure DevOps. For customers planning to migrate to GitHub, you can check out [GitHub Advanced Security](https://docs.github.com/github/getting-started-with-github/about-github-advanced-security).
Prerequisites to getting started with Microsoft Security Code Analysis:
security Security Code Analysis Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/develop/security-code-analysis-overview.md
description: Learn about the Microsoft Security Code Analysis extension. With th
Previously updated : 03/22/2021 Last updated : 01/31/2022
# About Microsoft Security Code Analysis > [!Note]
-> Effective March 1, 2022, the Microsoft Security Code Analysis (MSCA) extension will be retired. Existing MSCA customers will retain their access to MSCA through March 1, 2022. Please refer to the [OWASP Source Code Analysis Tools](https://owasp.org/www-community/Source_Code_Analysis_Tools) for alternative options in Azure DevOps. For customers planning to migrate to GitHub, you can check out [GitHub Advanced Security](https://docs.github.com/github/getting-started-with-github/about-github-advanced-security).
+> Effective July 1, 2022, the Microsoft Security Code Analysis (MSCA) extension will be retired. Existing MSCA customers will retain their access to MSCA through July 1, 2022. Please refer to the [OWASP Source Code Analysis Tools](https://owasp.org/www-community/Source_Code_Analysis_Tools) for alternative options in Azure DevOps. For customers planning to migrate to GitHub, you can check out [GitHub Advanced Security](https://docs.github.com/github/getting-started-with-github/about-github-advanced-security).
With the Microsoft Security Code Analysis extension, teams can add security code analysis to their Azure DevOps continuous integration and delivery (CI/CD) pipelines. This analysis is recommended by the [Secure Development Lifecycle (SDL)](https://www.microsoft.com/securityengineering/sdl/practices) experts at Microsoft.
security Security Code Analysis Releases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/develop/security-code-analysis-releases.md
description: This article describes upcoming releases for the Microsoft Security
Previously updated : 03/22/2021 Last updated : 01/31/2022
# Microsoft Security Code Analysis releases and roadmap > [!Note]
-> Effective March 1, 2022, the Microsoft Security Code Analysis (MSCA) extension will be retired. Existing MSCA customers will retain their access to MSCA through March 1, 2022. Please refer to the [OWASP Source Code Analysis Tools](https://owasp.org/www-community/Source_Code_Analysis_Tools) for alternative options in Azure DevOps. For customers planning to migrate to GitHub, you can check out [GitHub Advanced Security](https://docs.github.com/github/getting-started-with-github/about-github-advanced-security).
+> Effective July 1, 2022, the Microsoft Security Code Analysis (MSCA) extension will be retired. Existing MSCA customers will retain their access to MSCA through July 1, 2022. Please refer to the [OWASP Source Code Analysis Tools](https://owasp.org/www-community/Source_Code_Analysis_Tools) for alternative options in Azure DevOps. For customers planning to migrate to GitHub, you can check out [GitHub Advanced Security](https://docs.github.com/github/getting-started-with-github/about-github-advanced-security).
Microsoft Security Code Analysis team in partnership with Developer Support is proud to announce recent and upcoming enhancements to our MSCA extension.
security Yaml Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/develop/yaml-configuration.md
description: This article describes lists YAML configuration options for customi
Previously updated : 03/22/2021 Last updated : 01/31/2022
# YAML configuration options to customize the build tasks > [!Note]
-> Effective March 1, 2022, the Microsoft Security Code Analysis (MSCA) extension will be retired. Existing MSCA customers will retain their access to MSCA through March 1, 2022. Please refer to the [OWASP Source Code Analysis Tools](https://owasp.org/www-community/Source_Code_Analysis_Tools) for alternative options in Azure DevOps. For customers planning to migrate to GitHub, you can check out [GitHub Advanced Security](https://docs.github.com/github/getting-started-with-github/about-github-advanced-security).
+> Effective July 1, 2022, the Microsoft Security Code Analysis (MSCA) extension will be retired. Existing MSCA customers will retain their access to MSCA through July 1, 2022. Please refer to the [OWASP Source Code Analysis Tools](https://owasp.org/www-community/Source_Code_Analysis_Tools) for alternative options in Azure DevOps. For customers planning to migrate to GitHub, you can check out [GitHub Advanced Security](https://docs.github.com/github/getting-started-with-github/about-github-advanced-security).
This article lists all YAML configuration options available in each of the build tasks. The article starts with the tasks for security code analysis tools. It ends with the post-processing tasks.
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/feature-availability.md
The following tables display the current Microsoft Sentinel feature availability
| - [Entity insights](../../sentinel/enable-entity-behavior-analytics.md) | GA | Public Preview | |- [SOC incident audit metrics](../../sentinel/manage-soc-with-incident-metrics.md) | GA | GA | | - [Incident advanced search](../../sentinel/investigate-cases.md#search-for-incidents) |GA |GA |
+| - [Microsoft 365 Defender incident integration](../../sentinel/microsoft-365-defender-sentinel-integration.md#incident-integration) |Public Preview |Public Preview|
| - [Microsoft Teams integrations](../../sentinel/collaborate-in-microsoft-teams.md) |Public Preview |Not Available | |- [Bring Your Own ML (BYO-ML)](../../sentinel/bring-your-own-ml.md) | Public Preview | Public Preview | | **Notebooks** | | |
security Key Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/key-management.md
+
+ Title: Overview of Key Management in Azure
+description: This article provides an overview of Key Management in Azure.
+
+documentationcenter: na
+++++
+ na
+ Last updated : 01/25/2022+++
+# Key management in Azure
+
+In Azure, encryption keys can be either platform managed or customer managed.
+
+Platform-managed keys (PMKs) are encryption keys that are generated, stored, and managed entirely by Azure. Customers do not interact with PMKs. The keys used for [Azure Data Encryption-at-Rest](encryption-atrest.md), for instance, are PMKs by default.
+
+Customer-managed keys (CMK), on the other hand, are those that can be read, created, deleted, updated, and/or administered by one or more customers. Keys stored in a customer-owned key vault or hardware security module (HSM) are CMKs. Bring Your Own Key (BYOK) is a CMK scenario in which a customer imports (brings) keys from an outside storage location into an Azure key management service (see the [Azure Key Vault: Bring your own key specification](../../key-vault/keys/byok-specification.md)).
+
+A specific kind of customer-managed key is the "key encryption key" (KEK). A KEK is a master key that controls access to one or more encryption keys that are themselves encrypted.
+
+Customer-managed keys can be stored on-premises or, more commonly, in a cloud key management service.
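+
+As a rough illustration of the KEK pattern (not a prescribed implementation), the following sketch uses a Key Vault key as a KEK to wrap a locally generated data encryption key, using the azure-keyvault-keys and azure-identity packages. The vault URL and key name are placeholders.
+
+```python
+import os
+from azure.identity import DefaultAzureCredential
+from azure.keyvault.keys import KeyClient
+from azure.keyvault.keys.crypto import CryptographyClient, KeyWrapAlgorithm
+
+credential = DefaultAzureCredential()
+key_client = KeyClient(vault_url="https://<vault-name>.vault.azure.net", credential=credential)
+kek = key_client.get_key("<kek-name>")                  # the key encryption key held in Key Vault
+
+crypto = CryptographyClient(kek, credential=credential)
+dek = os.urandom(32)                                    # locally generated AES-256 data encryption key
+wrapped = crypto.wrap_key(KeyWrapAlgorithm.rsa_oaep_256, dek)
+# Persist wrapped.encrypted_key with the encrypted data; unwrap_key() recovers the DEK later.
+```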
+
+## Azure key management services
+
+Azure offers several options for storing and managing your keys in the cloud, including Azure Key Vault, Azure Managed HSM, Dedicated HSM, and Payments HSM. These options differ in terms of their FIPS compliance level, management overhead, and intended applications.
+
+**Azure Key Vault (Standard Tier)**: A FIPS 140-2 Level 1 validated multi-tenant cloud key management service that can also be used to store secrets and certificates. Keys stored in Azure Key Vault are software-protected and can be used for encryption-at-rest and custom applications. Key Vault provides a modern API and the widest breadth of regional deployments and integrations with Azure Services. For more information, see [About Azure Key Vault](../../key-vault/general/overview.md).
+
+**Azure Key Vault (Premium Tier)**: A FIPS 140-2 Level 2 validated multi-tenant HSM offering that can be used to store keys in a secure hardware boundary. Microsoft manages and operates the underlying HSM, and keys stored in Azure Key Vault Premium can be used for encryption-at-rest and custom applications. Key Vault Premium also provides a modern API and the widest breadth of regional deployments and integrations with Azure Services. For more information, see [About Azure Key Vault](../../key-vault/general/overview.md).
+
+**Azure Managed HSM**: A FIPS 140-2 Level 3 validated single-tenant HSM offering that gives customers full control of an HSM for encryption-at-rest, Keyless SSL, and custom applications. Customers receive a pool of three HSM partitions (together acting as one logical, highly available HSM appliance) fronted by a service that exposes crypto functionality through the Key Vault API. Microsoft handles the provisioning, patching, maintenance, and hardware failover of the HSMs, but does not have access to the keys themselves, because the service executes within Azure's Confidential Compute Infrastructure. Managed HSM is integrated with the Azure SQL, Azure Storage, and Azure Information Protection PaaS services and offers support for Keyless TLS with F5 and Nginx. For more information, see [What is Azure Key Vault Managed HSM?](../../key-vault/managed-hsm/overview.md)
+
+**Azure Dedicated HSM**: A FIPS 140-2 Level 3 validated bare metal HSM offering that lets customers lease a general-purpose HSM appliance that resides in Microsoft datacenters. The customer has complete ownership of the HSM device and is responsible for patching and updating the firmware when required. Microsoft has no permissions on the device or access to the key material, and Dedicated HSM is not integrated with any Azure PaaS offerings. Customers can interact with the HSM using the PKCS#11, JCE/JCA, and KSP/CNG APIs. This offering is most useful for legacy lift-and-shift workloads, PKI, SSL offloading and Keyless TLS (supported integrations include F5, Nginx, Apache, Palo Alto, IBM GW, and more), OpenSSL applications, Oracle TDE, and Azure SQL TDE IaaS. For more information, see [What is Azure Dedicated HSM?](../../dedicated-hsm/overview.md)
+
+**Azure Payments HSM**: A FIPS 140-2 Level 3, PCI HSM v3 validated bare metal offering that lets customers lease a payment HSM appliance in Microsoft datacenters for payments operations, including payment processing, payment credential issuing, securing keys and authentication data, and sensitive data protection. The service is currently undergoing PCI DSS and PCI 3DS audits. Azure Payment HSM offers single-tenant HSMs for customers to have complete administrative control and exclusive access to the HSM. Once the HSM is allocated to a customer, Microsoft has no access to customer data. Likewise, when the HSM is no longer required, customer data is zeroized and erased as soon as the HSM is released, to ensure complete privacy and security is maintained. This offering is currently in public preview. For more information, see [About Azure Payment HSM](../../payment-hsm/overview.md).
+
+### Pricing
+
+The Azure Key Vault Standard and Premium tiers are billed on a transactional basis, with an additional monthly per-key charge for premium hardware-backed keys. Managed HSM, Dedicated HSM, and Payments HSM do not charge on a transactional basis; instead they are always-in-use devices that are billed at a fixed hourly rate. For detailed pricing information, see [Key Vault pricing](https://azure.microsoft.com/pricing/details/key-vault), [Dedicated HSM pricing](https://azure.microsoft.com/pricing/details/azure-dedicated-hsm), and [Payment HSM pricing](https://azure.microsoft.com/pricing/details/payment-hsm).
+
+### Service Limits
+
+Managed HSM, Dedicated HSM, and Payments HSM offer dedicated capacity. Key Vault Standard and Premium are multi-tenant offerings and have throttling limits. For service limits, see [Key Vault service limits](../../key-vault/general/service-limits.md).
+
+### Encryption-At-Rest
+
+Azure Key Vault and Azure Key Vault Managed HSM have integrations with Azure services and Microsoft 365 for customer-managed keys, meaning customers may use their own keys in Azure Key Vault and Azure Key Vault Managed HSM for encryption-at-rest of data stored in these services. Dedicated HSM and Payments HSM are infrastructure-as-a-service offerings and do not offer integrations with Azure services. For an overview of encryption-at-rest with Azure Key Vault and Managed HSM, see [Azure Data Encryption-at-Rest](encryption-atrest.md).
+
+### APIs
+
+Dedicated HSM and Payments HSM support the PKCS#11, JCE/JCA, and KSP/CNG APIs, but Azure Key Vault and Managed HSM do not. Azure Key Vault and Managed HSM use the Azure Key Vault REST API and offer SDK support. For more information on the Azure Key Vault API, see [Azure Key Vault REST API Reference](/rest/api/keyvault/).
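+
+For example, a minimal sketch of the SDK support mentioned above, using the Python packages azure-keyvault-keys and azure-identity to create and inspect a key, might look like the following. The vault URL and key name are placeholders; for Managed HSM, the URL would point to the HSM endpoint instead.
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.keyvault.keys import KeyClient
+
+client = KeyClient(vault_url="https://<vault-name>.vault.azure.net",
+                   credential=DefaultAzureCredential())
+key = client.create_rsa_key("app-encryption-key", size=3072)   # wraps the REST "create key" operation
+print(key.id, key.key_type)                                    # e.g. https://.../keys/app-encryption-key/<version> RSA
+```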
security Ransomware Detect Respond https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/ransomware-detect-respond.md
+
+ Title: Detect and respond to ransomware attacks
+description: Detect and respond to ransomware attacks
+++++ Last updated : 01/10/2022+++
+# Detect and respond to ransomware attacks
+
+There are several potential triggers that may indicate a ransomware incident. Unlike many other types of malware, most will be higher-confidence triggers (where little additional investigation or analysis should be required prior to the declaration of an incident) rather than lower-confidence triggers (where more investigation or analysis would likely be required before an incident should be declared).
+
+In general, such infections are obvious from basic system behavior, the absence of key system or user files, and the demand for ransom. In this case, the analyst should consider whether to immediately declare and escalate the incident, including taking any automated actions to mitigate the attack.
+
+## Detecting ransomware attacks
+
+Microsoft Defender for Cloud provides high-quality threat detection and response capabilities, also called Extended Detection and Response (XDR).
+
+Ensure rapid detection and remediation of common attacks on VMs, SQL Servers, Web applications, and identity.
+
+- **Prioritize Common Entry Points** - Ransomware (and other) operators favor Endpoint/Email/Identity + Remote Desktop Protocol (RDP)
+ - **Integrated XDR** - Use integrated Extended Detection and Response (XDR) tools like Microsoft [Defender for Cloud](https://azure.microsoft.com/services/azure-defender/) to provide high quality alerts and minimize friction and manual steps during response
+ - **Brute Force** - Monitor for brute-force attempts like [password spray](/defender-for-identity/compromised-credentials-alerts)
+- **Monitor for Adversary Disabling Security** - as this is often part of a Human Operated Ransomware (HumOR) attack chain
+ - **Event Logs Clearing** - especially the Security Event log and PowerShell Operational logs
+ - **Disabling of security tools/controls** (associated with some groups)
+- **Don't Ignore Commodity Malware** - Ransomware attackers regularly purchase access to target organizations from dark markets
+- **Integrate outside experts** - into processes to supplement expertise, such as the [Microsoft Detection and Response Team (DART)](https://aka.ms/dart).
+- **Rapidly isolate** compromised computers using [Defender for Endpoint](/windows/security/threat-protection/microsoft-defender-atp/respond-machine-alerts#isolate-devices-from-the-network) in on-premises deployment.
+
+## Responding to ransomware attacks
+
+### Incident declaration
+
+Once a successful ransomware infection has been confirmed, the analyst should verify whether it represents a new incident or whether it may be related to an existing incident. Look for currently open tickets that indicate similar incidents. If one exists, update the current incident ticket with the new information in the ticketing system. If this is a new incident, an incident should be declared in the relevant ticketing system and escalated to the appropriate teams or providers to contain and mitigate the incident. Be mindful that managing ransomware incidents may require actions taken by multiple IT and security teams. Where possible, ensure that the ticket is clearly identified as a ransomware incident to guide the workflow.
+
+### Containment/Mitigation
+
+In general, various server/endpoint antimalware, email antimalware and network protection solutions should be configured to automatically contain and mitigate known ransomware. There may be cases, however, where the specific ransomware variant has been able to bypass such protections and successfully infect target systems.
+
+Microsoft provides extensive resources to help update your incident response processes on the [Top Azure Security Best Practices](/cloud-adoption-framework/secure/security-top-10#4-process-update-incident-response-processes-for-cloud).
+
+The following are recommended actions to contain or mitigate a declared incident involving ransomware where automated actions taken by antimalware systems have been unsuccessful (a hedged sketch of automating one containment step follows the list):
+
+1. Engage antimalware vendors through standard support processes
+1. Manually add hashes and other information associated with malware to antimalware systems
+1. Apply antimalware vendor updates
+1. Contain affected systems until they can be remediated
+1. Disable compromised accounts
+1. Perform root cause analysis
+1. Apply relevant patches and configuration changes on affected systems
+1. Block ransomware communications using internal and external controls
+1. Purge cached content
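+
+For the containment step above ("Contain affected systems until they can be remediated"), a minimal sketch of isolating a compromised Azure VM by deallocating it with the azure-mgmt-compute SDK could look like the following. The subscription, resource group, and VM names are placeholders, and your incident response runbook may call for a different containment action, such as network isolation instead of shutdown.
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.compute import ComputeManagementClient
+
+compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id="<subscription-id>")
+poller = compute.virtual_machines.begin_deallocate("<resource-group>", "<vm-name>")
+poller.result()   # blocks until the VM is stopped; a deallocated VM no longer has network access
+```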
+
+## Road to recovery
+
+The Microsoft Detection and Response Team will help protect you from attacks.
+
+Understanding and fixing the fundamental security issues that led to the compromise in the first place should be a priority for ransomware victims.
+
+Integrate outside experts into processes to supplement expertise, such as the [Microsoft Detection and Response Team (DART)](https://aka.ms/dart). The DART engages with customers around the world, helping to protect and harden against attacks before they occur, as well as investigating and remediating when an attack has occurred.
+
+Customers can engage our security experts directly from within the Microsoft 365 Defender portal for timely and accurate response. Experts provide insights needed to better understand the complex threats affecting your organization, from alert inquiries, potentially compromised devices, root cause of a suspicious network connection, to additional threat intelligence regarding ongoing advanced persistent threat campaigns.
+
+Microsoft is ready to assist your company in returning to safe operations.
+
+Microsoft performs hundreds of compromise recoveries and has a tried-and-true methodology. Not only will it get you to a more secure position, it affords you the opportunity to consider your long-term strategy rather than reacting to the situation.
+
+Microsoft provides Rapid Ransomware Recovery services. Under this engagement, assistance is provided in areas such as restoration of identity services, remediation and hardening, and monitoring deployment, to help victims of ransomware attacks return to normal business in the shortest possible time frame.
+
+Our Rapid Ransomware Recovery services are treated as "Confidential" for the duration of the engagement. Rapid Ransomware Recovery engagements are exclusively delivered by the Compromise Recovery Security Practice (CRSP) team, part of the Azure Cloud & AI Domain. For more information, you can contact CRSP at [Request contact about Azure security](https://azure.microsoft.com/overview/meet-with-an-azure-specialist/).
+
+## What's next
+
+See the white paper: [Azure defenses for ransomware attack whitepaper](https://azure.microsoft.com/resources/azure-defenses-for-ransomware-attack).
+
+Other articles in this series:
+
+- [Ransomware protection in Azure](ransomware-protection.md)
+- [Prepare for a ransomware attack](ransomware-prepare.md)
+- [Azure features and resources that help you protect, detect, and respond](ransomware-features-resources.md)
+++++++++++
security Ransomware Features Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/ransomware-features-resources.md
+
+ Title: Azure features & resources that help you protect, detect, and respond
+description: Azure features & resources that help you protect, detect, and respond
+++++ Last updated : 01/10/2022+++
+# Azure features & resources that help you protect, detect, and respond
+
+Microsoft has invested in Azure native security capabilities that organizations can leverage to defeat ransomware attack techniques found in both high-volume, everyday attacks, and sophisticated targeted attacks.
+
+Key capabilities include:
+- **Native Threat Detection**: Microsoft Defender for Cloud provides high-quality threat detection and response capabilities, also called Extended Detection and Response (XDR). This helps you:
+ - Avoid wasting time and talent of scarce security resources to build custom alerts using raw activity logs.
+ - Ensure effective security monitoring, which often enables security teams to rapidly approve use of Azure services.
+- **Passwordless and Multi-factor authentication**: Azure Active Directory MFA, Azure AD Authenticator App, and Windows Hello provide these capabilities. This helps protect accounts against commonly seen password attacks (which account for 99.9% of the volume of identity attacks we see in Azure AD). While no security is perfect, eliminating password-only attack vectors dramatically lowers the ransomware attack risk to Azure resources.
+- **Native Firewall and Network Security**: Microsoft built native DDoS attack mitigations, Firewall, Web Application Firewall, and many other controls into Azure. These security-as-a-service capabilities help simplify the configuration and implementation of security controls, and they give organizations the choice of using native services or virtual appliance versions of familiar vendor capabilities to simplify their Azure security.
+
+## Microsoft Defender for Cloud
+
+Microsoft Defender for Cloud is a built-in tool that provides threat protection for workloads running in Azure, on-premises, and in other clouds. It protects your hybrid data, cloud native services, and servers from ransomware and other threats; and integrates with your existing security workflows like your SIEM solution and Microsoft's vast threat intelligence to streamline threat mitigation.
+
+Microsoft Defender for Cloud delivers protection for all resources from directly within the Azure experience and extends protection to on-premises and multi-cloud virtual machines and SQL databases using Azure Arc:
+- Protects Azure services
+- Protects hybrid workloads
+- Streamlines security with AI and automation
+- Detects and blocks advanced malware and threats for Linux and Windows servers on any cloud
+- Protects cloud-native services from threats
+- Protects data services against ransomware attacks
+- Protects your managed and unmanaged IoT and OT devices, with continuous asset discovery, vulnerability management, and threat monitoring
+
+Microsoft Defender for Cloud provides you with the tools to detect and block ransomware, advanced malware, and threats for your resources.
+
+Keeping your resources safe is a joint effort between your cloud provider, Azure, and you, the customer. You have to make sure your workloads are secure as you move to the cloud. At the same time, when you move to IaaS (infrastructure as a service), there is more customer responsibility than there was in PaaS (platform as a service) and SaaS (software as a service). Microsoft Defender for Cloud provides you the tools needed to harden your network, secure your services, and make sure you're on top of your security posture.
+
+Microsoft Defender for Cloud is a unified infrastructure security management system that strengthens the security posture of your data centers and provides advanced threat protection across your hybrid workloads in the cloud whether they're in Azure or not - as well as on premises.
+
+Microsoft Defender for Cloud's threat protection enables you to detect and prevent threats at the Infrastructure as a Service (IaaS) layer, non-Azure servers as well as for Platforms as a Service (PaaS) in Azure.
+
+Microsoft Defender for Cloud's threat protection includes fusion kill-chain analysis, which automatically correlates alerts in your environment based on cyber kill-chain analysis, to help you better understand the full story of an attack campaign, where it started, and what kind of impact it had on your resources.
+
+Key Features:
+- Continuous security assessment: Identify Windows and Linux machines with missing security updates or insecure OS settings and vulnerable Azure configurations. Add optional watchlists or events you want to monitor.
+- Actionable recommendations: Remediate security vulnerabilities quickly with prioritized, actionable security recommendations.
+- Centralized policy management: Ensure compliance with company or regulatory security requirements by centrally managing security policies across all your hybrid cloud workloads.
+- Industry's most extensive threat intelligence: Tap into the Microsoft Intelligent Security Graph, which uses trillions of signals from Microsoft services and systems around the globe to identify new and evolving threats.
+- Advanced analytics and machine learning: Use built-in behavioral analytics and machine learning to identify known attack patterns and post-breach activity.
+- Adaptive application control: Block malware and other unwanted applications by applying allowlist recommendations adapted to your specific workloads and powered by machine learning.
+- Prioritized alerts and attack timelines: Focus on the most critical threats first with prioritized alerts and incidents that are mapped into a single attack campaign.
+- Streamlined investigation: Quickly investigate the scope and impact of an attack with a visual, interactive experience. Use ad hoc queries for deeper exploration of security data.
+- Automation and orchestration: Automate common security workflows to address threats quickly using built-in integration with Azure Logic Apps. Create security playbooks that can route alerts to existing ticketing system or trigger incident response actions.
+
+## Microsoft Sentinel
+
+Microsoft Sentinel helps to create a complete view of a kill chain
+
+With Microsoft Sentinel, you can connect to any of your security sources by using built-in connectors and industry standards, and then take advantage of artificial intelligence to correlate multiple low-fidelity signals spanning multiple sources. The result is a complete view of the ransomware kill chain and prioritized alerts, so that defenders can accelerate their time to evict adversaries.
+
+Microsoft Sentinel is your bird's-eye view across the enterprise, alleviating the stress of increasingly sophisticated attacks, increasing volumes of alerts, and long resolution time frames.
+
+Collect data at cloud scale across all users, devices, applications, and infrastructure, both on-premises and in multiple clouds.
+
+Detect previously undetected threats, and minimize [false positives](../../sentinel/false-positives.md) using Microsoft's analytics and unparalleled threat intelligence.
+
+Investigate threats with artificial intelligence, and hunt for suspicious activities at scale, tapping into years of cybersecurity work at Microsoft.
+
+Respond to incidents rapidly with built-in orchestration and automation of common tasks.
+
+## Native threat prevention with Microsoft Defender for Cloud
+
+Microsoft Defender for Cloud scans virtual machines across an Azure subscription and makes a recommendation to deploy endpoint protection where an existing solution is not detected. This recommendation can be accessed via the Recommendations section:
++
+Microsoft Defender for Cloud provides security alerts and advanced threat protection for virtual machines, SQL databases, containers, web applications, your network, and more. When Microsoft Defender for Cloud detects a threat in any area of your environment, it generates a security alert. These alerts describe details of the affected resources, suggested remediation steps, and in some cases an option to trigger a logic app in response.
+
+This alert is an example of a detected Petya ransomware alert:
++
+### Azure native backup solution protects Your data
+
+One important way that organizations can help protect against losses in a ransomware attack is to have a backup of business-critical information in case other defenses fail. Because ransomware attackers have invested heavily in neutralizing backup applications and operating system features like volume shadow copy, it is critical to have backups that are inaccessible to a malicious attacker. With a flexible business continuity and disaster recovery solution and industry-leading data protection and security tools, the Azure cloud offers secure services to protect your data:
+
+- **Azure Backup**: The Azure Backup service provides a simple, secure, and cost-effective solution to back up your Azure VMs. Currently, Azure Backup supports backing up all the disks (OS and data disks) in a VM by using the backup solution for Azure virtual machines.
+- **Azure Disaster Recovery**: With disaster recovery from on-premises to the cloud, or from one cloud to another, you can avoid downtime and keep your applications up and running.
+- **Built-in Security and Management in Azure**: To be successful in the cloud era, enterprises must have visibility, metrics, and controls on every component to pinpoint issues efficiently and to optimize and scale effectively, while having assurance that security, compliance, and policies are in place.
+
+### Guaranteed and Protected Access to Your Data
+
+Azure has extensive experience managing global datacenters, which are backed by Microsoft's $15 billion infrastructure investment and are under continuous evaluation and improvement.
+
+Key Features:
+- Azure comes with locally redundant storage (LRS), where data is stored locally, as well as geo-redundant storage (GRS), where data is also stored in a second region
+- All data stored on Azure is protected by an advanced encryption process, and all of Microsoft's datacenters have two-tier authentication, proxy card access readers, and biometric scanners
+- Azure has more certifications than any other public cloud provider on the market, including ISO 27001, HIPAA, FedRAMP, SOC 1, SOC 2, and many international specifications
+
+## Additional resources
+
+- [Microsoft Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/)
+- [Build great solutions with the Microsoft Azure Well-Architected Framework](/learn/paths/azure-well-architected-framework/)
+- [Azure Top Security Best Practices](/azure/cloud-adoption-framework/get-started/security#step-1-establish-essential-security-practices)
+- [Security Baselines](https://techcommunity.microsoft.com/t5/microsoft-security-baselines/bg-p/Microsoft-Security-Baselines)
+- [Microsoft Azure Resource Center](https://azure.microsoft.com/resources/)
+- [Azure Migration Guide](/azure/cloud-adoption-framework/migrate/azure-migration-guide/)
+- [Security Compliance Management](/azure/cloud-adoption-framework/organize/cloud-security-compliance-management)
+- [Azure Security Control ΓÇô Incident Response](/security/benchmark/azure/security-controls-v3-incident-response)
+- [Zero Trust Guidance Center](/security/zero-trust/)
+- [Azure Web Application Firewall](/azure/web-application-firewall/ag/application-gateway-crs-rulegroups-rules?tabs=owasp32)
+- [Azure VPN gateway](/azure/vpn-gateway/openvpn-azure-ad-tenant#enable-authentication)
+- [Azure Active Directory Multi-Factor Authentication (MFA)](/azure/active-directory/authentication/howto-mfa-userstates)
+- [Azure AD Identity Protection](/azure/active-directory/authentication/concept-password-ban-bad)
+- [Azure AD Conditional Access](/azure/active-directory/conditional-access/overview)
+- [Microsoft Defender for Cloud documentation](/azure/defender-for-cloud/)
+
+## Conclusion
+
+Microsoft focuses heavily on both security of our cloud and providing you the security controls you need to protect your cloud workloads. As a leader in cybersecurity, we embrace our responsibility to make the world a safer place. This is reflected in our comprehensive approach to ransomware prevention and detection in our security framework, designs, products, legal efforts, industry partnerships, and services.
+
+We look forward to partnering with you in addressing ransomware protection, detection, and prevention in a holistic manner.
+
+Connect with us:
+- [AskAzureSecurity@microsoft.com](mailto:AskAzureSecurity&#64;microsoft.com)
+- [www.microsoft.com/services](https://www.microsoft.com/en-us/msservices)
+
+For detailed information on how Microsoft secures our cloud, visit the [service trust portal](https://servicetrust.microsoft.com/).
+
+## What's Next
+
+See the white paper: [Azure defenses for ransomware attack whitepaper](https://azure.microsoft.com/resources/azure-defenses-for-ransomware-attack).
+
+Other articles in this series:
+
+- [Ransomware protection in Azure](ransomware-protection.md)
+- [Prepare for a ransomware attack](ransomware-prepare.md)
+- [Detect and respond to ransomware attack](ransomware-detect-respond.md)
++
security Ransomware Prepare https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/ransomware-prepare.md
+
+ Title: Prepare for a ransomware attack
+description: Prepare for a ransomware attack
+++++ Last updated : 01/10/2022+++
+# Prepare for a ransomware attack
+
+## Adopt a Cybersecurity framework
+
+A good place to start is to adopt the [Azure Security Benchmark](/security/benchmark/azure/) to secure the Azure environment. The Azure Security Benchmark is the Azure security control framework, based on industry security control frameworks such as NIST SP 800-53 and CIS Controls v7.1.
++
+The Azure Security Benchmark provides organizations guidance on how to configure Azure and Azure Services and implement the security controls. Organizations can use [Microsoft Defender for Cloud](../../defender-for-cloud/index.yml) to monitor their live Azure environment status with all the Azure Security Benchmark controls.
+
+Ultimately, the Framework is aimed at reducing and better managing cybersecurity risks.
+
+| Azure Security Benchmark stack |
+|--|
+| [Network&nbsp;security&nbsp;(NS)](/security/benchmark/azure/security-controls-v3-network-security) |
+| [Identity&nbsp;Management&nbsp;(IM)](/security/benchmark/azure/security-controls-v3-identity-management) |
+| [Privileged&nbsp;Access&nbsp;(PA)](/security/benchmark/azure/security-controls-v3-privileged-access) |
+| [Data&nbsp;Protection&nbsp;(DP)](/security/benchmark/azure/security-controls-v3-data-protection) |
+| [Asset&nbsp;Management&nbsp;(AM)](/security/benchmark/azure/security-controls-v3-asset-management) |
+| [Logging&nbsp;and&nbsp;Threat&nbsp;Detection (LT)](/security/benchmark/azure/security-controls-v2-logging-threat-detection) |
+| [Incident&nbsp;Response&nbsp;(IR)](/security/benchmark/azure/security-controls-v3-incident-response) |
+| [Posture&nbsp;and&nbsp;Vulnerability&nbsp;Management&nbsp;(PV)](/security/benchmark/azure/security-controls-v3-posture-vulnerability-management) |
+| [Endpoint&nbsp;Security&nbsp;(ES)](/security/benchmark/azure/security-controls-v3-endpoint-security) |
+| [Backup&nbsp;and&nbsp;Recovery&nbsp;(BR)](/security/benchmark/azure/security-controls-v3-backup-recovery) |
+| [DevOps&nbsp;Security&nbsp;(DS)](/security/benchmark/azure/security-controls-v3-devops-security) |
+| [Governance&nbsp;and&nbsp;Strategy&nbsp;(GS)](/security/benchmark/azure/security-controls-v3-governance-strategy) |
+
+## Prioritize mitigation
+
+Based on our experience with ransomware attacks, we've found that prioritization should focus on: 1) prepare, 2) limit, 3) prevent. This may seem counterintuitive, since most people want to prevent an attack and move on. Unfortunately, we must assume breach (a key Zero Trust principle) and focus on reliably mitigating the most damage first. This prioritization is critical because of the high likelihood of a worst-case scenario with ransomware. While it's not a pleasant truth to accept, we're facing creative and motivated human attackers who are adept at finding a way to control the complex real-world environments in which we operate. Against that reality, it's important to prepare for the worst and establish frameworks to contain and prevent attackers' ability to get what they're after.
+
+While these priorities should govern what to do first, we encourage organizations to run as many steps in parallel as possible (including pulling quick wins forward from step 1 whenever you can).
+
+## Make it harder to get in
+
+Prevent a ransomware attacker from entering your environment and rapidly respond to incidents to remove attacker access before they can steal and encrypt data. This will cause attackers to fail earlier and more often, undermining the profit of their attacks. While prevention is the preferred outcome, it is a continuous journey, and it may not be possible to achieve 100% prevention and rapid response across a real-world organization (a complex multi-platform, multi-cloud estate with distributed IT responsibilities).
+
+To achieve this, organizations should identify and execute quick wins to strengthen security controls to prevent entry and rapidly detect/evict attackers while implementing a sustained program that helps them stay secure. Microsoft recommends organizations follow the principles outlined in the Zero Trust strategy [here](https://aka.ms/zerotrust). Specifically, against Ransomware, organizations should prioritize:
+- Improving security hygiene by focusing efforts on attack surface reduction and threat and vulnerability management for assets in their estate.
+- Implementing Protection, Detection and Response controls for their digital assets that can protect against commodity and advanced threats, provide visibility and alerting on attacker activity and respond to active threats.
+
+## Limit scope of damage
+
+Ensure you have strong controls (prevent, detect, respond) for privileged accounts like IT admins and other roles with control of business-critical systems. This slows and/or blocks attackers from gaining complete access to your resources to steal and encrypt them. Taking away the attackers' ability to use IT admin accounts as a shortcut to resources drastically lowers the chances that they succeed at attacking you, demanding payment, and profiting.
+
+Organizations should have elevated security for privileged accounts (tightly protect, closely monitor, and rapidly respond to incidents related to these roles). See Microsoft's [Security rapid modernization plan](https://aka.ms/sparoadmap), which covers:
+- End to End Session Security (including multifactor authentication (MFA) for admins)
+- Protect and Monitor Identity Systems
+- Mitigate Lateral Traversal
+- Rapid Threat Response
+
+## Prepare for the worst
+
+Plan for the worst-case scenario and expect that it will happen (at all levels of the organization). Doing so helps both your organization and the others in the world that you depend on:
+
+- Limit damage for the worst-case scenario - While restoring all systems from backups is highly disruptive to business, it is more effective and efficient than trying to recover using (low quality) attacker-provided decryption tools after paying to get the key. Note: Paying is an uncertain path - you have no formal or legal guarantee that the key works on all files, that the tools will work effectively, or that the attacker (who may be an amateur affiliate using a professional's toolkit) will act in good faith.
+- Limit the financial return for attackers - If an organization can restore business operations without paying the attackers, the attack has effectively failed and resulted in zero return on investment (ROI) for the attackers. This makes it less likely that they will target the organization in the future (and deprives them of additional funding to attack others).
+
+The attackers may still attempt to extort the organization through data disclosure or abusing/selling the stolen data, but this gives them less leverage than if they have the only access path to your data and systems.
+
+To realize this, organizations should ensure they:
+- Register Risk - Add ransomware to the risk register as a high-likelihood and high-impact scenario. Track mitigation status via the Enterprise Risk Management (ERM) assessment cycle.
+- Define and Backup Critical Business Assets - Define the systems required for critical business operations and automatically back them up on a regular schedule (including correct backup of critical dependencies like Active Directory).
+  Protect backups against deliberate erasure and encryption with offline storage, immutable storage, and/or out-of-band steps (MFA or PIN) before modifying/erasing online backups.
+- Test 'Recover from Zero' Scenario - Test to ensure that your business continuity / disaster recovery (BC/DR) processes can rapidly bring critical business operations online from zero functionality (all systems down). Conduct practice exercise(s) to validate cross-team processes and technical procedures, including out-of-band employee and customer communications (assume all email/chat/etc. is down).
+ It is critical to protect (or print) the supporting documents and systems required for recovery, including restoration procedure documents, CMDBs, network diagrams, SolarWinds instances, etc. Attackers destroy these regularly.
+- Reduce on-premises exposure - by moving data to cloud services with automatic backup and self-service rollback (a minimal example of enabling blob soft delete follows this list).
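
As a minimal, illustrative sketch of the "self-service rollback" idea, the following Python snippet enables a delete-retention (soft delete) policy on a storage account's blob service using the `azure-storage-blob` and `azure-identity` packages. The account URL and the 14-day retention period are placeholder assumptions, and soft delete complements, rather than replaces, isolated or immutable backups.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient, RetentionPolicy

# Placeholder account URL - replace with your own storage account.
account_url = "https://<storage-account>.blob.core.windows.net"

service = BlobServiceClient(account_url, credential=DefaultAzureCredential())

# Keep deleted blobs recoverable for 14 days (illustrative value).
service.set_service_properties(
    delete_retention_policy=RetentionPolicy(enabled=True, days=14)
)

# Confirm the policy took effect.
props = service.get_service_properties()
print(props["delete_retention_policy"])
```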
+
+## Promote awareness and ensure there is no knowledge gap
+
+There are a number of activities that may be undertaken to prepare for potential ransomware incidents.
+
+### Educate end users on the dangers of ransomware
+
+As most ransomware variants rely on end users to install the ransomware or to connect to compromised websites, all end users should be educated about the dangers. This would typically be part of annual security awareness training as well as ad hoc training available through the company's learning management systems. The awareness training should also extend to the company's customers via the company's portals or other appropriate channels.
+
+### Educate security operations center (SOC) analysts and others on how to respond to ransomware incidents
+
+SOC analysts and others involved in ransomware incidents should know the fundamentals of malicious software and ransomware specifically. They should be aware of major variants/families of ransomware, along with some of their typical characteristics. Customer call center staff should also be aware of how to handle ransomware reports from the company's end users and customers.
+
+## Ensure that you have appropriate technical controls in place
+
+A wide variety of technical controls should be in place to protect, detect, and respond to ransomware incidents, with a strong emphasis on prevention. At a minimum, SOC analysts should have access to the telemetry generated by antimalware systems in the company, understand what preventive measures are in place, understand the infrastructure targeted by ransomware, and be able to assist the company teams to take appropriate action. (A small example query against storage resource logs follows the tools list below.)
+
+This should include some or all of the following essential tools:
+
+- Detective and preventive tools
+ - Enterprise server antimalware product suites (such as Microsoft Defender for Cloud)
+ - Network antimalware solutions (such as Azure Anti-malware)
+ - Security data analytics platforms (such as Azure Monitor, Sentinel)
+ - Next generation intrusion detection and prevention systems
+ - Next generation firewall (NGFW)
+
+- Malware analysis and response toolkits
+ - Automated malware analysis systems with support for most major end-user and server operating systems in the organization
+ - Static and dynamic malware analysis tools
+ - Digital forensics software and hardware
+ - Non-organizational Internet access (for example, a 4G dongle)
+ - For maximum effectiveness, SOC analysts should have extensive access to almost all antimalware platforms through their native interfaces, in addition to unified telemetry within the security data analysis platforms. Azure native antimalware for Azure Cloud Services and Virtual Machines provides step-by-step guides on how to accomplish this.
+- Enrichment and intelligence sources
+ - Online and offline threat and malware intelligence sources (such as Microsoft Sentinel, Azure Network Watcher)
+ - Active Directory and other authentication systems (and related logs)
+ - Internal Configuration Management Databases (CMDBs) containing endpoint device info
+
+- Data protection
+ - Implement data protection to ensure rapid and reliable recovery from a ransomware attack and to block some attack techniques.
+ - Designate Protected Folders - to make it more difficult for unauthorized applications to modify the data in these folders.
+ - Review Permissions - to reduce risk from broad access enabling ransomware
+ - Discover broad write/delete permissions on fileshares, SharePoint, and other solutions
+ - Reduce broad permissions while meeting business collaboration requirements
+ - Audit and monitor to ensure broad permissions don't reappear
+ - Secure backups
+ - Ensure critical systems are backed up and backups are protected against deliberate attacker erasure/encryption.
+ - Back up all critical systems automatically on a regular schedule
+ - Ensure Rapid Recovery of business operations by regularly exercising business continuity / disaster recovery (BC/DR) plan
+ - Protect backups against deliberate erasure and encryption
+ - Strong Protection - Require out-of-band steps (like MUA/MFA) before modifying online backups such as Azure Backup
+ - Strongest Protection - Isolate backups from online/production workloads to enhance the protection of backup data.
+ - Protect supporting documents required for recovery such as restoration procedure documents, CMDB, and network diagrams
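
To show how the security data analytics platforms mentioned above might be used in practice, here is a small, hypothetical Python sketch that uses the `azure-monitor-query` package to look for unusually large bursts of blob deletions in storage resource logs collected in a Log Analytics workspace. The workspace ID, table, threshold, and time window are illustrative assumptions, not values from the original guidance.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Placeholder workspace ID - replace with your Log Analytics workspace.
workspace_id = "<log-analytics-workspace-id>"

client = LogsQueryClient(DefaultAzureCredential())

# Flag callers that deleted an unusually large number of blobs in the last day
# (error handling for partial results is omitted for brevity).
query = """
StorageBlobLogs
| where TimeGenerated > ago(1d) and OperationName == "DeleteBlob"
| summarize Deletes = count() by RequesterUpn, CallerIpAddress
| where Deletes > 100
| order by Deletes desc
"""

response = client.query_workspace(workspace_id, query, timespan=timedelta(days=1))
for table in response.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```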
+
+## Establish an incident handling process
+
+Ensure your organization undertakes a number of activities roughly following the incident response steps and guidance described in the US National Institute of Standards and Technology (NIST) Computer Security Incident Handling Guide (Special Publication 800-61r2) to prepare for potential ransomware incidents. These steps include:
+
+1. **Preparation**: This stage describes the various measures that should be put into place prior to an incident. This may include both technical preparations (such as the implementation of suitable security controls and other technologies) and non-technical preparations (such as the preparation of processes and procedures).
+1. **Triggers / Detection**: This stage describes how this type of incident may be detected and what triggers may be available that should be used to initiate either further investigation or the declaration of an incident. These are generally separated into high-confidence and low-confidence triggers.
+1. **Investigation / Analysis**: This stage describes the activities that should be undertaken to investigate and analyze available data when it isn't clear that an incident has occurred, with the goal of either confirming that an incident should be declared or concluding that an incident hasn't occurred.
+1. **Incident Declaration**: This stage covers the steps that must be taken to declare an incident, typically with the raising of a ticket within the enterprise incident management (ticketing) system and directing the ticket to the appropriate personnel for further evaluation and action.
+1. **Containment / Mitigation**: This stage covers the steps that may be taken either by the Security Operations Center (SOC), or by others, to contain or mitigate (stop) the incident from continuing to occur or limiting the effect of the incident using available tools, techniques, and procedures.
+1. **Remediation / Recovery**: This stage covers the steps that may be taken to remediate or recover from damage that was caused by the incident before it was contained and mitigated.
+1. **Post-Incident Activity**: This stage covers the activities that should be performed once the incident has been closed. This can include capturing the final narrative associated with the incident as well as identifying lessons learned.
++
+## Prepare for a quick recovery
+
+Ensure that you have appropriate processes and procedures in place. Almost all ransomware incidents result in the need to restore compromised systems, so appropriate and tested backup and restore processes and procedures should be in place for most systems. There should also be suitable containment strategies in place, with procedures to stop ransomware from spreading and to recover from ransomware attacks.
+
+Ensure that you have well-documented procedures for engaging any third-party support, particularly support from threat intelligence providers, antimalware solution providers, and malware analysis providers. These contacts may be useful if the ransomware variant has known weaknesses or if decryption tools are available.
+
+The Azure platform provides backup and recovery options through Azure Backup, as well as backup capabilities built into various data services and workloads (a minimal data-plane snapshot sketch follows the lists below).
+
+Isolated backups with [Azure Backup](../../backup/backup-azure-security-feature.md#prevent-attacks)
+- Azure Virtual Machines
+- Databases in Azure VMs: SQL, SAP HANA
+- Azure Database for PostgreSQL
+- On-prem Windows Servers (back up to cloud using MARS agent)
+
+Local (operational) backups with Azure Backup
+- Azure Files
+- Azure Blobs
+- Azure Disks
+
+Built-in backups from Azure services
+- Data services like Azure database services (SQL, MySQL, MariaDB, PostgreSQL), Azure Cosmos DB, and Azure NetApp Files (ANF) offer built-in backup capabilities
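
As a minimal, data-plane illustration that complements the Azure Backup options above (and does not replace them), the following Python sketch takes an ad-hoc snapshot of a blob and lists the available recovery points with the `azure-storage-blob` package. The account, container, and blob names are hypothetical placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# Placeholder names - replace with your own account, container, and blob.
account_url = "https://<storage-account>.blob.core.windows.net"
service = BlobServiceClient(account_url, credential=DefaultAzureCredential())

blob = service.get_blob_client(container="backups", blob="config/app-settings.json")

# Take an ad-hoc, point-in-time snapshot of the blob.
snapshot = blob.create_snapshot()
print("Snapshot created:", snapshot["snapshot"])

# Enumerate the blob and its snapshots to confirm recovery points exist.
container = service.get_container_client("backups")
for item in container.list_blobs(name_starts_with="config/", include=["snapshots"]):
    print(item.name, item.snapshot)
```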
+
+## What's Next
+
+See the white paper: [Azure defenses for ransomware attack whitepaper](https://azure.microsoft.com/resources/azure-defenses-for-ransomware-attack).
+
+Other articles in this series:
+
+- [Ransomware protection in Azure](ransomware-protection.md)
+- [Detect and respond to ransomware attack](ransomware-detect-respond.md)
+- [Azure features and resources that help you protect, detect, and respond](ransomware-features-resources.md)
+++
security Ransomware Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/ransomware-protection.md
+
+ Title: Ransomware protection in Azure
+description: Ransomware protection in Azure
+++++ Last updated : 01/10/2022+++
+# Ransomware protection in Azure
+
+Ransomware and extortion are a high profit, low-cost business, which has a debilitating impact on targeted organizations, national security, economic security, and public health and safety. What started as simple, single-PC ransomware has grown to include a variety of extortion techniques directed at all types of corporate networks and cloud platforms.
+
+To ensure customers running on Azure are protected against ransomware attacks, Microsoft has invested heavily in the security of our cloud platforms and provides the security controls you need to protect your Azure cloud workloads.
+
+By leveraging Azure native ransomware protections and implementing the best practices recommended in this article, you're taking measures that ensure your organization is optimally positioned to prevent, protect, and detect potential ransomware attacks on your Azure assets.
+
+This article lays out key Azure native capabilities and defenses for ransomware attacks and guidance on how to proactively leverage these to protect your assets on Azure cloud.
+
+## A growing threat
+
+Ransomware attacks have become one of the biggest security challenges facing businesses today. When successful, ransomware attacks can cripple a business's core IT infrastructure and cause destruction that could have a debilitating impact on the physical or economic security or safety of a business. Ransomware attacks target businesses of all types. This requires that all businesses take preventive measures to ensure protection.
+
+Recent trends on the number of attacks are quite alarming. While 2020 wasn't a good year for ransomware attacks on businesses, 2021 started on a bad trajectory. On May 7, the Colonial Pipeline (Colonial) attack temporarily halted services such as pipeline transportation of diesel, gasoline, and jet fuel. Colonial shut down the critical fuel network supplying the populous eastern states.
+
+Historically, cyberattacks were seen as a sophisticated set of actions targeting particular industries, which left the remaining industries believing they were outside the scope of cybercrime, and without context about which cybersecurity threats they should prepare for. Ransomware represents a major shift in this threat landscape, and it's made cyberattacks a very real and omnipresent danger for everyone. Encrypted and lost files and threatening ransom notes have now become the top-of-mind fear for most executive teams.
+
+Ransomware's economic model capitalizes on the misperception that a ransomware attack is solely a malware incident, whereas in reality ransomware is a breach involving human adversaries attacking a network.
+
+For many organizations, the cost to rebuild from scratch after a ransomware incident far outweighs the original ransom demanded. With a limited understanding of the threat landscape and how ransomware operates, paying the ransom seems like the better business decision to return to operations. However, the real damage is often done when the cybercriminal exfiltrates files for release or sale, while leaving backdoors in the network for future criminal activity, and these risks persist whether or not the ransom is paid.
+
+## What is ransomware
+
+Ransomware is a type of malware that infects a computer and restricts a user's access to the infected system or specific files in order to extort them for money. After the target system has been compromised, it typically locks out most interaction and displays an on-screen alert, usually stating that the system has been locked or that all of the files have been encrypted. It then demands that a substantial ransom be paid before the system is released or the files are decrypted.
+
+Ransomware will typically exploit the weaknesses or vulnerabilities in your organization's IT systems or infrastructures to succeed. The attacks are so obvious that it does not take much investigation to confirm that your business has been attacked or that an incident should be declared. The exception would be a spam email that demands ransom in exchange for supposedly compromising materials. In this case, these types of incidents should be dealt with as spam unless the email contains highly specific information.
+
+Any business or organization that operates an IT system with data in it can be attacked. Although individuals can be targeted in a ransomware attack, most attacks are targeted at businesses. While the Colonial ransomware attack of May 2021 drew considerable public attention, our Detection and Response Team (DART)'s ransomware engagement data shows that the energy sector represents one of the most targeted sectors, along with the financial, healthcare, and entertainment sectors. And despite continued promises not to attack hospitals or healthcare companies during a pandemic, healthcare remains the number one target of human-operated ransomware.
++
+## How your assets are targeted
+
+When attacking cloud infrastructure, adversaries often attack multiple resources to try to obtain access to customer data or company secrets. The cloud "kill chain" model explains how attackers attempt to gain access to any of your resources running in the public cloud through a four-step process: exposure, access, lateral movement, and actions.
+
+1. Exposure is where attackers look for opportunities to gain access to your infrastructure. For example, attackers know customer-facing applications must be open for legitimate users to access them. Those applications are exposed to the Internet and therefore susceptible to attacks.
+1. Attackers will try to exploit an exposure to gain access to your public cloud infrastructure. This can be done through compromised user credentials, compromised instances, or misconfigured resources.
+1. During the lateral movement stage, attackers discover what resources they have access to and what the scope of that access is. Successful attacks on instances give attackers access to databases and other sensitive information. The attacker then searches for additional credentials. Our Microsoft Defender for Cloud data shows that without a security tool to quickly notify you of the attack, it takes organizations on average 101 days to discover a breach. Meanwhile, in just 24-48 hours after a breach, the attacker will usually have complete control of the network.
+1. The actions an attacker takes after lateral movement are largely dependent on the resources they were able to gain access to during the lateral movement phase. Attackers can take actions that cause data exfiltration or data loss, or they can launch other attacks. For enterprises, the average financial impact of data loss is now reaching $1.23 million.
++
+## Why attacks succeed
+
+There are several reasons why ransomware attacks succeed. Businesses that are vulnerable often fall victim to ransomware attacks. The following are some of the critical success factors for attackers:
+
+- The attack surface has increased as more and more businesses offer more services through digital outlets
+- There's a considerable ease of obtaining off-the-shelf malware, Ransomware-as-a-Service (RaaS)
+- The option to use cryptocurrency for blackmail payments has opened new avenues for exploit
+- Expansion of computers and their usage in different workplaces (local school districts, police departments, police squad cars, etc.), each of which is a potential access point for malware, expanding the potential attack surface
+- Prevalence of old, outdated, and antiquated infrastructure systems and software
+- Poor patch-management regimens
+- Outdated or very old operating systems that are close to or have gone beyond end-of-support dates
+- Lack of resources to modernize the IT footprint
+- Knowledge gap
+- Lack of skilled staff and overdependency on key personnel
+- Poor security architecture
+
+Attackers use different techniques, such as Remote Desktop Protocol (RDP) brute-force attacks, to exploit vulnerabilities.
++
+## Should you pay?
+
+There are varying opinions on what the best option is when confronted with this vexing demand. The Federal Bureau of Investigation (FBI) advises victims not to pay ransom but to instead be vigilant and take proactive measures to secure their data before an attack. They contend that paying doesn't guarantee that locked systems and encrypted data will be released again. The FBI says another reason not to pay is that payments to cybercriminals incentivize them to continue to attack organizations.
+
+Nevertheless, some victims elect to pay the ransom demand even though system and data access isn't guaranteed after paying the ransom. By paying, such organizations take the calculated risk to pay in hopes of getting back their system and data and quickly resuming normal operations. Part of the calculation is reduction in collateral costs such as lost productivity, decreased revenue over time, exposure of sensitive data, and potential reputational damage.
+
+The best way to avoid paying a ransom is not to fall victim in the first place by implementing preventive measures and having tool saturation to protect your organization from every step that the attacker takes, wholly or incrementally, to hack into your system. In addition, having the ability to recover impacted assets ensures restoration of business operations in a timely fashion. Azure has a robust set of tools to guide you all the way.
+
+### What is the typical cost to a business?
+
+The impact of a ransomware attack on any organization is difficult to quantify accurately. However, depending on the scope and type, the impact is multi-dimensional and is broadly expressed in:
+- Loss of data access
+- Business operation disruption
+- Financial loss
+- Intellectual property theft
+- Compromised customer trust and a tarnished reputation
+
+Colonial Pipeline paid about $4.4 million in ransom to have their data released. This doesn't include the cost of downtime, lost productivity, lost sales, and the cost of restoring services. More broadly, a significant impact is the "knock-on effect" of impacting high numbers of businesses and organizations of all kinds including towns and cities in their local areas. The financial impact is also staggering. According to Microsoft, the global cost associated with ransomware recovery is projected to exceed $20 billion in 2021.
++
+## Next steps
+
+See the white paper: [Azure defenses for ransomware attack whitepaper](https://azure.microsoft.com/resources/azure-defenses-for-ransomware-attack).
+
+Other articles in this series:
+- [Prepare for a ransomware attack](ransomware-prepare.md)
+- [Detect and respond to ransomware attack](ransomware-detect-respond.md)
+- [Azure features and resources that help you protect, detect, and respond](ransomware-features-resources.md)
++
sentinel Iot Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/iot-solution.md
This playbook opens a ticket in ServiceNow each time a new Engineering Workstati
For more information, see:
+- [Defending Critical Infrastructure with the Microsoft Sentinel: IT/OT Threat Monitoring Solution](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/defending-critical-infrastructure-with-the-microsoft-sentinel-it/ba-p/3061184)
- [Microsoft Defender for IoT documentation](../defender-for-iot/index.yml) - [Microsoft Defender for IoT solution](sentinel-solutions-catalog.md#microsoft)-- [Microsoft Defender for IoT data connector](data-connectors-reference.md#microsoft-defender-for-iot)
+- [Microsoft Defender for IoT data connector](data-connectors-reference.md#microsoft-defender-for-iot)
sentinel Sentinel Solutions Catalog https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/sentinel-solutions-catalog.md
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|Name |Includes |Categories |Supported by | ||||| |**Apache Log4j Vulnerability Detection** | Analytics rules, hunting queries | Application, Security - Threat Protection, Security - Vulnerability Management | Microsoft|
-|**Cybersecurity Maturity Model Certification (CMMC)** | Analytics rules, workbook, playbook | Compliance | Microsoft|
+|**Cybersecurity Maturity Model Certification (CMMC)** | [Analytics rules, workbook, playbook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/what-s-new-cybersecurity-maturity-model-certification-cmmc/ba-p/2111184) | Compliance | Microsoft|
| **IoT/OT Threat Monitoring with Defender for IoT** | [Analytics rules, playbooks, workbook](iot-solution.md) | Internet of Things (IoT), Security - Threat Protection | Microsoft | |**Maturity Model for Event Log Management M2131** | [Analytics rules, hunting queries, playbooks, workbook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/modernize-log-management-with-the-maturity-model-for-event-log/ba-p/3072842) | Compliance | Microsoft|
-|**Microsoft Insider Risk Management** (IRM) |[Data connector](data-connectors-reference.md#microsoft-365-insider-risk-management-irm-preview), workbook, analytics rules, hunting queries, playbook |Security - Insider threat | Microsoft|
+|**Microsoft Insider Risk Management** (IRM) |[Data connector](data-connectors-reference.md#microsoft-365-insider-risk-management-irm-preview), [workbook, analytics rules, hunting queries, playbook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/announcing-the-microsoft-sentinel-microsoft-insider-risk/ba-p/2955786) |Security - Insider threat | Microsoft|
| **Microsoft Sentinel Deception** | [Workbooks, analytics rules, watchlists](monitor-key-vault-honeytokens.md) | Security - Threat Protection |Microsoft |
-|**Zero Trust** (TIC3.0) |[Analytics rules, playbook, workbooks](https://techcommunity.microsoft.com/t5/public-sector-blog/announcing-the-azure-sentinel-zero-trust-tic3-0-workbook/ba-p/2313761) |Identity, Security - Others |Microsoft |
+|**Zero Trust** (TIC3.0) |[Analytics rules, playbook, workbooks](/security/zero-trust/integrate/sentinel-solution) |Identity, Security - Others |Microsoft |
| | | | | ## Arista Networks
sentinel Top Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/top-workbooks.md
Access workbooks in Microsoft Sentinel under **Threat Management** > **Workbooks
|**Azure AD Audit logs** | Uses Azure Active Directory audit logs to provide insights into Azure AD scenarios. <br><br>For more information, see [Quickstart: Get started with Microsoft Sentinel](get-visibility.md). | |**Azure AD Audit, Activity and Sign-in logs** | Provides insights into Azure Active Directory Audit, Activity, and Sign-in data with one workbook. Shows activity such as sign-ins by location, device, failure reason, user action, and more. <br><br> This workbook can be used by both Security and Azure administrators. | |**Azure AD Sign-in logs** | Uses the Azure AD sign-in logs to provide insights into Azure AD scenarios. |
-|**Cybersecurity Maturity Model Certification (CMMC)** | Provides a mechanism for viewing log queries aligned to CMMC controls across the Microsoft portfolio, including Microsoft security offerings, Office 365, Teams, Intune, Azure Virtual Desktop, and so on. <br><br>For more information, see [Cybersecurity Maturity Model Certification (CMMC) Workbook in Public Preview](https://techcommunity.microsoft.com/t5/azure-sentinel/what-s-new-cybersecurity-maturity-model-certification-cmmc/ba-p/2111184).|
+| **Azure Security Benchmark** | Provides a single pane of glass for gathering and managing data to address Azure Security Benchmark control requirements, aggregating data from 25+ Microsoft security products. <br><br>For more information, see our [TechCommunity blog](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/what-s-new-azure-security-benchmark-workbook-preview/ba-p/2865930). |
+|**Cybersecurity Maturity Model Certification (CMMC)** | Provides a mechanism for viewing log queries aligned to CMMC controls across the Microsoft portfolio, including Microsoft security offerings, Office 365, Teams, Intune, Azure Virtual Desktop, and so on. <br><br>For more information, see our [TechCommunity blog](https://techcommunity.microsoft.com/t5/azure-sentinel/what-s-new-cybersecurity-maturity-model-certification-cmmc/ba-p/2111184).|
|**Data collection health monitoring** / **Usage monitoring** | Provides insights into your workspace's data ingestion status, such as ingestion size, latency, and number of logs per source. View monitors and detect anomalies to help you determine your workspaces data collection health. <br><br>For more information, see [Monitor the health of your data connectors with this Microsoft Sentinel workbook](monitor-data-connector-health.md). | |**Event Analyzer** | Enables you to explore, audit, and speed up Windows Event Log analysis, including all event details and attributes, such as security, application, system, setup, directory service, DNS, and so on. | |**Exchange Online** |Provides insights into Microsoft Exchange online by tracing and analyzing all Exchange operations and user activities. |
Access workbooks in Microsoft Sentinel under **Threat Management** > **Workbooks
|**Office 365** | Provides insights into Office 365 by tracing and analyzing all operations and activities. Drill down into SharePoint, OneDrive, Teams, and Exchange data. | |**Security Alerts** | Provides a Security Alerts dashboard for alerts in your Microsoft Sentinel environment. <br><br>For more information, see [Automatically create incidents from Microsoft security alerts](create-incidents-from-alerts.md). | |**Security Operations Efficiency** | Intended for security operations center (SOC) managers to view overall efficiency metrics and measures regarding the performance of their team. <br><br>For more information, see [Manage your SOC better with incident metrics](manage-soc-with-incident-metrics.md). |
-|**Threat Intelligence** | Provides insights into threat indicators, including type and severity of threats, threat activity over time, and correlation with other data sources, including Office 365 and firewalls. <br><br>For more information, see [Understand threat intelligence in Microsoft Sentinel](understand-threat-intelligence.md). |
+|**Threat Intelligence** | Provides insights into threat indicators, including type and severity of threats, threat activity over time, and correlation with other data sources, including Office 365 and firewalls. <br><br>For more information, see [Understand threat intelligence in Microsoft Sentinel](understand-threat-intelligence.md) and our [TechCommunity blog](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/what-s-new-azure-sentinel-threat-intelligence-workbook/ba-p/2858265). |
|**Zero Trust (TIC3.0)** | Provides an automated visualization of Zero Trust principles, cross-walked to the [Trusted Internet Connections framework](https://www.cisa.gov/trusted-internet-connections). <br><br>For more information, see the [Zero Trust (TIC 3.0) workbook announcement blog](https://techcommunity.microsoft.com/t5/public-sector-blog/announcing-the-azure-sentinel-zero-trust-tic3-0-workbook/ba-p/2313761). |
service-bus-messaging Service Bus Azure And Service Bus Queues Compared Contrasted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-azure-and-service-bus-queues-compared-contrasted.md
As a solution architect/developer, **you should consider using Service Bus queue
* Your queue size won't grow larger than 80 GB. * You want to use the AMQP 1.0 standards-based messaging protocol. For more information about AMQP, see [Service Bus AMQP Overview](service-bus-amqp-overview.md). * You envision an eventual migration from queue-based point-to-point communication to a publish-subscribe messaging pattern. This pattern enables integration of additional receivers (subscribers). Each receiver receives independent copies of either some or all messages sent to the queue.
-* Your messaging solution needs to support the "At-Most-Once" delivery guarantee without the need for you to build the additional infrastructure components.
+* Your messaging solution needs to support the "At-Most-Once" and the "At-Least-Once" delivery guarantees without the need for you to build the additional infrastructure components.
* Your solution needs to publish and consume batches of messages. ## Compare Storage queues and Service Bus queues
static-web-apps Publish Devops https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/publish-devops.md
In this tutorial, you learn to:
9. Paste in the deployment token in the _Value_ box.
- :::image type="content" source="media/publish-devops/variable-token.png" alt-text="Variable token":::
+ :::image type="content" source="media/publish-devops/yaml-token.png" alt-text="Variable token" lightbox="media/publish-devops/yaml-token.png":::
10. Select **Keep this value secret**.
In this tutorial, you learn to:
13. Select **Save and run** to open the _Save and run_ dialog.
- :::image type="content" source="media/publish-devops/save-and-run.png" alt-text="Pipeline":::
+ :::image type="content" source="media/publish-devops/yaml-save.png" alt-text="Pipeline" lightbox="media/publish-devops/yaml-save.png":::
14. Select **Save and run** to run the pipeline.
storage Data Lake Storage Acl Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-acl-dotnet.md
This article shows you how to use .NET to get, set, and update the access contro
ACL inheritance is already available for new child items that are created under a parent directory. But you can also add, update, and remove ACLs recursively on the existing child items of a parent directory without having to make these changes individually for each child item.
-[Package (NuGet)](https://www.nuget.org/packages/Azure.Storage.Files.DataLake) | [Samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Azure.Storage.Files.DataLake) | [Recursive ACL Sample](https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Frecursiveaclpr.blob.core.windows.net%2Fprivatedrop%2FRecursive-Acl-Sample-Net.zip%3Fsv%3D2019-02-02%26st%3D2020-08-24T07%253A45%253A28Z%26se%3D2021-09-25T07%253A45%253A00Z%26sr%3Db%26sp%3Dr%26sig%3D2GI3f0KaKMZbTi89AgtyGg%252BJePgNSsHKCL68V6I5W3s%253D&data=02%7C01%7Cnormesta%40microsoft.com%7C6eae76c57d224fb6de8908d848525330%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637338865714571853&sdata=%2FWom8iI3DSDMSw%2FfYvAaQ69zbAoqXNTQ39Q9yVMnASA%3D&reserved=0) | [API reference](/dotnet/api/azure.storage.files.datalake) | [Gen1 to Gen2 mapping](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Azure.Storage.Files.DataLake/GEN1_GEN2_MAPPING.md) | [Give Feedback](https://github.com/Azure/azure-sdk-for-net/issues)
+[Package (NuGet)](https://www.nuget.org/packages/Azure.Storage.Files.DataLake) | [Samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Azure.Storage.Files.DataLake) | [API reference](/dotnet/api/azure.storage.files.datalake) | [Gen1 to Gen2 mapping](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Azure.Storage.Files.DataLake/GEN1_GEN2_MAPPING.md) | [Give Feedback](https://github.com/Azure/azure-sdk-for-net/issues)
## Prerequisites
This example sets the ACL of a directory named `my-parent-directory`. This metho
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/ACL_DataLake.cs" id="Snippet_SetACLRecursively":::
-To see an example that sets ACLs recursively in batches by specifying a batch size, see the .NET [sample](https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Frecursiveaclpr.blob.core.windows.net%2Fprivatedrop%2FRecursive-Acl-Sample-Net.zip%3Fsv%3D2019-02-02%26st%3D2020-08-24T07%253A45%253A28Z%26se%3D2021-09-25T07%253A45%253A00Z%26sr%3Db%26sp%3Dr%26sig%3D2GI3f0KaKMZbTi89AgtyGg%252BJePgNSsHKCL68V6I5W3s%253D&data=02%7C01%7Cnormesta%40microsoft.com%7C6eae76c57d224fb6de8908d848525330%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637338865714571853&sdata=%2FWom8iI3DSDMSw%2FfYvAaQ69zbAoqXNTQ39Q9yVMnASA%3D&reserved=0).
- ## Update ACLs When you *update* an ACL, you modify the ACL instead of replacing the ACL. For example, you can add a new security principal to the ACL without affecting other security principals listed in the ACL. To replace the ACL instead of update it, see the [Set ACLs](#set-acls) section of this article.
This example updates an ACL entry with write permission. This method accepts a b
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/ACL_DataLake.cs" id="Snippet_UpdateACLsRecursively":::
-To see an example that updates ACLs recursively in batches by specifying a batch size, see the .NET [sample](https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Frecursiveaclpr.blob.core.windows.net%2Fprivatedrop%2FRecursive-Acl-Sample-Net.zip%3Fsv%3D2019-02-02%26st%3D2020-08-24T07%253A45%253A28Z%26se%3D2021-09-25T07%253A45%253A00Z%26sr%3Db%26sp%3Dr%26sig%3D2GI3f0KaKMZbTi89AgtyGg%252BJePgNSsHKCL68V6I5W3s%253D&data=02%7C01%7Cnormesta%40microsoft.com%7C6eae76c57d224fb6de8908d848525330%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637338865714571853&sdata=%2FWom8iI3DSDMSw%2FfYvAaQ69zbAoqXNTQ39Q9yVMnASA%3D&reserved=0).
- ## Remove ACL entries You can remove one or more ACL entries. This section shows you how to:
This example removes an ACL entry from the ACL of the directory named `my-parent
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/ACL_DataLake.cs" id="Snippet_RemoveACLRecursively":::
-To see an example that removes ACLs recursively in batches by specifying a batch size, see the .NET [sample](https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Frecursiveaclpr.blob.core.windows.net%2Fprivatedrop%2FRecursive-Acl-Sample-Net.zip%3Fsv%3D2019-02-02%26st%3D2020-08-24T07%253A45%253A28Z%26se%3D2021-09-25T07%253A45%253A00Z%26sr%3Db%26sp%3Dr%26sig%3D2GI3f0KaKMZbTi89AgtyGg%252BJePgNSsHKCL68V6I5W3s%253D&data=02%7C01%7Cnormesta%40microsoft.com%7C6eae76c57d224fb6de8908d848525330%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637338865714571853&sdata=%2FWom8iI3DSDMSw%2FfYvAaQ69zbAoqXNTQ39Q9yVMnASA%3D&reserved=0).
- ## Recover from failures You might encounter runtime or permission errors when modifying ACLs recursively. For runtime errors, restart the process from the beginning. Permission errors can occur if the security principal doesn't have sufficient permission to modify the ACL of a directory or file that is in the directory hierarchy being modified. Address the permission issue, and then choose to either resume the process from the point of failure by using a continuation token, or restart the process from beginning. You don't have to use the continuation token if you prefer to restart from the beginning. You can reapply ACL entries without any negative impact.
This example returns a continuation token in the event of a failure. The applica
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/ACL_DataLake.cs" id="Snippet_ResumeContinuationToken":::
-To see an example that sets ACLs recursively in batches by specifying a batch size, see the .NET [sample](https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Frecursiveaclpr.blob.core.windows.net%2Fprivatedrop%2FRecursive-Acl-Sample-Net.zip%3Fsv%3D2019-02-02%26st%3D2020-08-24T07%253A45%253A28Z%26se%3D2021-09-25T07%253A45%253A00Z%26sr%3Db%26sp%3Dr%26sig%3D2GI3f0KaKMZbTi89AgtyGg%252BJePgNSsHKCL68V6I5W3s%253D&data=02%7C01%7Cnormesta%40microsoft.com%7C6eae76c57d224fb6de8908d848525330%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637338865714571853&sdata=%2FWom8iI3DSDMSw%2FfYvAaQ69zbAoqXNTQ39Q9yVMnASA%3D&reserved=0).
- If you want the process to complete uninterrupted by permission errors, you can specify that. To ensure that the process completes uninterrupted, pass in an **AccessControlChangedOptions** object and set the **ContinueOnFailure** property of that object to ``true``.
This example sets ACL entries recursively. If this code encounters a permission
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/ACL_DataLake.cs" id="Snippet_ContinueOnFailure":::
-To see an example that sets ACLs recursively in batches by specifying a batch size, see the .NET [sample](https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Frecursiveaclpr.blob.core.windows.net%2Fprivatedrop%2FRecursive-Acl-Sample-Net.zip%3Fsv%3D2019-02-02%26st%3D2020-08-24T07%253A45%253A28Z%26se%3D2021-09-25T07%253A45%253A00Z%26sr%3Db%26sp%3Dr%26sig%3D2GI3f0KaKMZbTi89AgtyGg%252BJePgNSsHKCL68V6I5W3s%253D&data=02%7C01%7Cnormesta%40microsoft.com%7C6eae76c57d224fb6de8908d848525330%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637338865714571853&sdata=%2FWom8iI3DSDMSw%2FfYvAaQ69zbAoqXNTQ39Q9yVMnASA%3D&reserved=0).
- [!INCLUDE [updated-for-az](../../../includes/recursive-acl-best-practices.md)] ## See also
storage Monitor Blob Storage Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/monitor-blob-storage-reference.md
Azure Storage supports following dimensions for metrics in Azure Monitor.
For the metrics supporting dimensions, you need to specify the dimension value to see the corresponding metrics values. For example, if you look at **Transactions** value for successful responses, you need to filter the **ResponseType** dimension with **Success**. If you look at **BlobCount** value for Block Blob, you need to filter the **BlobType** dimension with **BlockBlob**.
-## Resource logs (preview)
+<a id="resource-logs-preview"></a>
-> [!NOTE]
-> Azure Storage logs in Azure Monitor is in public preview, and is available for preview testing in all public and US Government cloud regions. This preview enables logs for blobs (including Azure Data Lake Storage Gen2), files, queues, tables, premium storage accounts in general-purpose v1 and general-purpose v2 storage accounts. Classic storage accounts are not supported.
+## Resource logs
The following table lists the properties for Azure Storage resource logs when they're collected in Azure Monitor Logs or Azure Storage. The properties describe the operation, the service, and the type of authorization that was used to perform the operation.
storage Monitor Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/monitor-blob-storage.md
When you have critical applications and business processes that rely on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes the monitoring data that's generated by Azure Blob Storage and how you can use the features of Azure Monitor to analyze alerts on this data.
-> [!NOTE]
-> Azure Storage logs in Azure Monitor is in public preview and is available for preview testing in all public and US Government cloud regions. This preview enables logs for blobs (which includes Azure Data Lake Storage Gen2), files, queues, and tables. This feature is available for all storage accounts that are created with the Azure Resource Manager deployment model. For more information, see [Storage account overview](../common/storage-account-overview.md).
- ## Monitor overview The **Overview** page in the Azure portal for each Blob storage resource includes a brief view of the resource usage, such as requests and hourly billing. This information is useful, but only a small amount of the monitoring data is available. Some of this data is collected automatically and is available for analysis as soon as you create the resource. You can enable additional types of data collection with some configuration.
You can create a diagnostic setting by using the Azure portal, PowerShell, the A
For general guidance, see [Create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
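
In addition to the portal, PowerShell, and Azure CLI options, a diagnostic setting can also be created programmatically. The following Python sketch uses the `azure-mgmt-monitor` package to route blob service logs to a Log Analytics workspace; the resource IDs, setting name, and category choices are illustrative assumptions rather than values from this article.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import DiagnosticSettingsResource, LogSettings, MetricSettings

subscription_id = "<subscription-id>"

# Resource URI of the blob service (note the /blobServices/default suffix).
blob_service_uri = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Storage/storageAccounts/<storage-account>/blobServices/default"
)

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

# Send read, write, and delete logs plus transaction metrics to a workspace.
client.diagnostic_settings.create_or_update(
    resource_uri=blob_service_uri,
    name="send-blob-logs-to-workspace",
    parameters=DiagnosticSettingsResource(
        workspace_id="<log-analytics-workspace-resource-id>",
        logs=[
            LogSettings(category="StorageRead", enabled=True),
            LogSettings(category="StorageWrite", enabled=True),
            LogSettings(category="StorageDelete", enabled=True),
        ],
        metrics=[MetricSettings(category="Transaction", enabled=True)],
    ),
)
```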
-> [!NOTE]
-> Azure Storage logs in Azure Monitor is in public preview and is available for preview testing in all public and US Government cloud regions. This preview enables logs for blobs (which includes Azure Data Lake Storage Gen2), files, queues, and tables. This feature is available for all storage accounts that are created with the Azure Resource Manager deployment model. For more information, see [Storage account overview](../common/storage-account-overview.md).
- ### [Azure portal](#tab/azure-portal) 1. Sign in to the Azure portal. 2. Navigate to your storage account.
-3. In the **Monitoring** section, click **Diagnostic settings (preview)**.
+3. In the **Monitoring** section, click **Diagnostic settings**.
> [!div class="mx-imgBorder"] > ![portal - Diagnostics logs](media/monitor-blob-storage/diagnostic-logs-settings-pane.png)
You can access resource logs either as a blob in a storage account, as event dat
For a detailed reference of the fields that appear in these logs, see [Azure Blob Storage monitoring data reference](monitor-blob-storage-reference.md).
-> [!NOTE]
-> Azure Storage logs in Azure Monitor is in public preview and is available for preview testing in all public and US Government cloud regions. This preview enables logs for blobs (which includes Azure Data Lake Storage Gen2), files, queues, tables, premium storage accounts in general-purpose v1, and general-purpose v2 storage accounts. Classic storage accounts aren't supported.
- Log entries are created only if there are requests made against the service endpoint. For example, if a storage account has activity in its blob endpoint but not in its table or queue endpoints, only logs that pertain to the blob service are created. Azure Storage logs contain detailed information about successful and failed requests to a storage service. This information can be used to monitor individual requests and to diagnose issues with a storage service. Requests are logged on a best-effort basis. ### Log authenticated requests
This table shows how this feature is supported in your account and the impact on
| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> | |--|--|--|--|--|
-| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![No](../media/icons/no-icon.png) |
-| Premium block blobs | ![Yes](../media/icons/yes-icon.png) <sup>2</sup>|![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![No](../media/icons/no-icon.png) |
+| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![No](../media/icons/no-icon.png) |
+| Premium block blobs | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![No](../media/icons/no-icon.png) |
### Metrics in Azure Monitor
storage Soft Delete Blob Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/soft-delete-blob-overview.md
For premium storage accounts, soft-deleted snapshots don't count toward the per-
You can restore soft-deleted blobs or directories (in a hierarchical namespace) by calling the [Undelete Blob](/rest/api/storageservices/undelete-blob) operation within the retention period. The **Undelete Blob** operation restores a blob and any soft-deleted snapshots associated with it. Any snapshots that were deleted during the retention period are restored.
-In accounts that have a hierarchical namespace, the **Undelete Blob** operation can also be used to restore a soft-deleted directory and all its contents. If you rename a directory that contains soft deleted blobs, those soft deleted blobs become disconnected from the directory. If you want to restore those blobs, you'll have to revert the name of the directory back to its original name or create a separate directory that uses the original directory name. Otherwise, you'll receive an error when you attempt to restore those soft deleted blobs. You cannot access the contents of a soft-deleted directory until after the directory has been undeleted.
+In accounts that have a hierarchical namespace, the **Undelete Blob** operation can also be used to restore a soft-deleted directory and all its contents. If you rename a directory that contains soft-deleted blobs, those soft-deleted blobs become disconnected from the directory. If you want to restore those blobs, you'll have to revert the name of the directory back to its original name or create a separate directory that uses the original directory name. Otherwise, you'll receive an error when you attempt to restore those soft-deleted blobs. You also cannot restore a directory or a blob to a file path that already contains an active directory or blob of that name. For example, if you delete a.txt (1) and upload a new file also named a.txt (2), you cannot restore the soft-deleted a.txt (1) until the active a.txt (2) has either been deleted or renamed. You cannot access the contents of a soft-deleted directory until after the directory has been undeleted.
Calling **Undelete Blob** on a blob that isn't soft-deleted will restore any soft-deleted snapshots that are associated with the blob. If the blob has no snapshots and isn't soft-deleted, then calling **Undelete Blob** has no effect.
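
As a rough illustration (not part of the original article), the following Python sketch uses the `azure-storage-blob` package to list currently soft-deleted blobs in a container and call the Undelete Blob operation on each of them. The account and container names are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# Placeholder names - replace with your own account and container.
account_url = "https://<storage-account>.blob.core.windows.net"
service = BlobServiceClient(account_url, credential=DefaultAzureCredential())
container = service.get_container_client("example-container")

# Find blobs that are currently soft-deleted and still within the retention period.
for item in container.list_blobs(include=["deleted"]):
    if item.deleted:
        # Undelete restores the blob and any soft-deleted snapshots associated with it.
        container.get_blob_client(item.name).undelete_blob()
        print("Restored:", item.name)
```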
storage Storage Feature Support In Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-feature-support-in-storage-accounts.md
The items that appear in these tables will change over time as support continues
| [Last access time tracking for lifecycle management](lifecycle-management-overview.md#move-data-based-on-last-accessed-time) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) | | [Lifecycle management policies (delete blob)](./lifecycle-management-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | | [Lifecycle management policies (tiering)](./lifecycle-management-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Logging in Azure Monitor](./monitor-blob-storage.md) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![No](../media/icons/no-icon.png) |
+| [Logging in Azure Monitor](./monitor-blob-storage.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![No](../media/icons/no-icon.png) |
| [Metrics in Azure Monitor](./monitor-blob-storage.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | | [Object replication for block blobs](object-replication-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | | [Page blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-page-blobs) | ![Yes](../media/icons/yes-icon.png) |![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
The items that appear in these tables will change over time as support continues
| [Last access time tracking for lifecycle management](lifecycle-management-overview.md#move-data-based-on-last-accessed-time) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) | | [Lifecycle management policies (delete blob)](./lifecycle-management-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | | [Lifecycle management policies (tiering)](./lifecycle-management-overview.md) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Logging in Azure Monitor](./monitor-blob-storage.md) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![No](../media/icons/no-icon.png) |
+| [Logging in Azure Monitor](./monitor-blob-storage.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![No](../media/icons/no-icon.png) |
| [Metrics in Azure Monitor](./monitor-blob-storage.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | | [Object replication for block blobs](object-replication-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | | [Page blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-page-blobs) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
storage Storage Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-private-endpoints.md
You can use [private endpoints](../../private-link/private-endpoint-overview.md) for your Azure Storage accounts to allow clients on a virtual network (VNet) to securely access data over a [Private Link](../../private-link/private-link-overview.md). The private endpoint uses a separate IP address from the VNet address space for each storage account service. Network traffic between the clients on the VNet and the storage account traverses over the VNet and a private link on the Microsoft backbone network, eliminating exposure from the public internet.
+> [!NOTE]
+> Private endpoints are not available for general-purpose v1 storage accounts.
+
Using private endpoints for your storage account enables you to:

- Secure your storage account by configuring the storage firewall to block all connections on the public endpoint for the storage service.
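For reference, a private endpoint that targets the blob service of a storage account can also be created with Azure PowerShell. The following is a minimal sketch; the resource group, storage account, virtual network, and subnet names are placeholders, and the `Az.Storage` and `Az.Network` modules are assumed to be installed:

```powershell
# Look up the storage account and the subnet that will host the private endpoint.
$storage = Get-AzStorageAccount -ResourceGroupName "myRG" -Name "mystorageacct"
$vnet    = Get-AzVirtualNetwork -ResourceGroupName "myRG" -Name "myVNet"
$subnet  = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "default"

# Connection to the blob sub-resource; use "file", "queue", or "table" for the other services.
$connection = New-AzPrivateLinkServiceConnection -Name "mystorage-blob-plsc" `
    -PrivateLinkServiceId $storage.Id -GroupId "blob"

# Create the private endpoint in the chosen subnet.
New-AzPrivateEndpoint -ResourceGroupName "myRG" -Name "mystorage-blob-pe" `
    -Location $storage.Location -Subnet $subnet -PrivateLinkServiceConnection $connection
```

Because each storage service uses its own IP address from the VNet address space, you create one private endpoint per service (blob, file, queue, or table) that clients need to reach privately.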
storage Storage Files Monitoring Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-monitoring-reference.md
Azure Files supports the following dimensions for metrics in Azure Monitor.
[!INCLUDE [Metrics dimensions](../../../includes/azure-storage-account-metrics-dimensions.md)]
-## Resource logs (preview)
+<a id="resource-logs-preview"></a>
-> [!NOTE]
-> Azure Storage logs in Azure Monitor is in public preview, and is available for preview testing in all public and US Government cloud regions. This preview enables logs for blobs (including Azure Data Lake Storage Gen2), files, queues, tables, premium storage accounts in general-purpose v1 and general-purpose v2 storage accounts. Classic storage accounts are not supported.
+## Resource logs
The following table lists the properties for Azure Storage resource logs when they're collected in Azure Monitor Logs or Azure Storage. The properties describe the operation, the service, and the type of authorization that was used to perform the operation.
storage Storage Files Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-monitoring.md
To get the list of SMB and REST operations that are logged, see [Storage logged
You can create a diagnostic setting by using the Azure portal, PowerShell, the Azure CLI, an Azure Resource Manager template, or Azure Policy.
-> [!NOTE]
-> Azure Storage logs in Azure Monitor is in public preview and is available for preview testing in all public and US Government cloud regions. This preview enables logs for blobs (which includes Azure Data Lake Storage Gen2), files, queues,and tables. This feature is available for all storage accounts that are created with the Azure Resource Manager deployment model. See [Storage account overview](../common/storage-account-overview.md).
- For general guidance, see [Create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md). ### [Azure portal](#tab/azure-portal)
For general guidance, see [Create diagnostic setting to collect platform logs an
2. Navigate to your storage account.
-3. In the **Monitoring** section, click **Diagnostic settings (preview)**.
+3. In the **Monitoring** section, click **Diagnostic settings**.
> [!div class="mx-imgBorder"] > ![portal - Diagnostics logs](media/storage-files-monitoring/diagnostic-logs-settings-pane.png)
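If you prefer scripting this step, the same diagnostic setting can be created with Azure PowerShell. This is a sketch only; the resource IDs are placeholders, and the cmdlet and parameter names follow recent `Az.Monitor` versions (older versions used `Set-AzDiagnosticSetting` with different parameters):

```powershell
# Placeholder resource IDs for the file service endpoint and a Log Analytics workspace.
$resourceId  = "/subscriptions/<subscription-id>/resourceGroups/myRG/providers/Microsoft.Storage/storageAccounts/mystorageacct/fileServices/default"
$workspaceId = "/subscriptions/<subscription-id>/resourceGroups/myRG/providers/Microsoft.OperationalInsights/workspaces/myWorkspace"

# Enable the read, write, and delete log categories and route them to the workspace.
$logs = @(
    New-AzDiagnosticSettingLogSettingsObject -Enabled $true -Category StorageRead
    New-AzDiagnosticSettingLogSettingsObject -Enabled $true -Category StorageWrite
    New-AzDiagnosticSettingLogSettingsObject -Enabled $true -Category StorageDelete
)

New-AzDiagnosticSetting -Name "files-logs" -ResourceId $resourceId -WorkspaceId $workspaceId -Log $logs
```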
You can access resource logs either as a blob in a storage account, as event dat
To get the list of SMB and REST operations that are logged, see [Storage logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages) and [Azure Files monitoring data reference](storage-files-monitoring-reference.md).
-> [!NOTE]
-> Azure Storage logs in Azure Monitor is in public preview and is available for preview testing in all public and US Government cloud regions. This preview enables logs for blobs (which includes Azure Data Lake Storage Gen2), files, queues, tables, premium storage accounts in general-purpose v1, and general-purpose v2 storage accounts. Classic storage accounts aren't supported.
- Log entries are created only if there are requests made against the service endpoint. For example, if a storage account has activity in its file endpoint but not in its table or queue endpoints, only logs that pertain to the Azure File service are created. Azure Storage logs contain detailed information about successful and failed requests to a storage service. This information can be used to monitor individual requests and to diagnose issues with a storage service. Requests are logged on a best-effort basis. ### Log authenticated requests
storage Monitor Queue Storage Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/queues/monitor-queue-storage-reference.md
Azure Storage supports the following dimensions for metrics in Azure Monitor.
[!INCLUDE [Metrics dimensions](../../../includes/azure-storage-account-metrics-dimensions.md)]
-## Resource logs (preview)
+<a id="resource-logs-preview"></a>
-> [!NOTE]
-> Azure Storage logs in Azure Monitor is in public preview, and is available for preview testing in all public and US Government cloud regions. This preview enables logs for blobs (including Azure Data Lake Storage Gen2), files, queues, tables, premium storage accounts in general-purpose v1 and general-purpose v2 storage accounts. Classic storage accounts are not supported.
+## Resource logs
The following table lists the properties for Azure Storage resource logs when they're collected in Azure Monitor Logs or Azure Storage. The properties describe the operation, the service, and the type of authorization that was used to perform the operation.
storage Monitor Queue Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/queues/monitor-queue-storage.md
When you have critical applications and business processes that rely on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes the monitoring data that's generated by Azure Queue Storage and how you can use the features of Azure Monitor to analyze and alert on this data.
-> [!NOTE]
-> Azure Storage logs in Azure Monitor is in public preview and is available for preview testing in all public and US Government cloud regions. This preview enables logs for blobs (which includes Azure Data Lake Storage Gen2), files, queues, and tables. This feature is available for all storage accounts that are created with the Azure Resource Manager deployment model. See [Storage account overview](../common/storage-account-overview.md).
- ## Monitor overview The **Overview** page in the Azure portal for each Queue Storage resource includes a brief view of the resource usage, such as requests and hourly billing. This information is useful, but only a small amount of the monitoring data is available. Some of this data is collected automatically and is available for analysis as soon as you create the resource. You can enable additional types of data collection with some configuration.
You can create a diagnostic setting by using the Azure portal, PowerShell, the A
For general guidance, see [Create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
-> [!NOTE]
-> Azure Storage logs in Azure Monitor is in public preview and is available for preview testing in all public and US Government cloud regions. This preview enables logs for blobs (which includes Azure Data Lake Storage Gen2), files, queues, and tables. This feature is available for all storage accounts that are created with the Azure Resource Manager deployment model. See [Storage account overview](../common/storage-account-overview.md).
- ### [Azure portal](#tab/azure-portal) 1. Sign in to the Azure portal. 2. Navigate to your storage account.
-3. In the **Monitoring** section, click **Diagnostic settings (preview)**.
+3. In the **Monitoring** section, click **Diagnostic settings**.
> [!div class="mx-imgBorder"] > ![portal - Diagnostics logs](media/monitor-queue-storage/diagnostic-logs-settings-pane.png)
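To verify what's already configured before adding a new setting, you can list the diagnostic settings attached to the queue service with Azure PowerShell. A sketch, with a placeholder resource ID:

```powershell
# Placeholder resource ID for the queue service endpoint of the storage account.
$resourceId = "/subscriptions/<subscription-id>/resourceGroups/myRG/providers/Microsoft.Storage/storageAccounts/mystorageacct/queueServices/default"

# List any diagnostic settings that already target the queue service.
Get-AzDiagnosticSetting -ResourceId $resourceId
```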
You can access resource logs either as a queue in a storage account, as event da
For a detailed reference of the fields that appear in these logs, see [Azure Queue Storage monitoring data reference](monitor-queue-storage-reference.md).
-> [!NOTE]
-> Azure Storage logs in Azure Monitor is in public preview and is available for preview testing in all public and US Government cloud regions. This preview enables logs for blobs (which includes Azure Data Lake Storage Gen2), files, queues, tables, premium storage accounts in general-purpose v1, and general-purpose v2 storage accounts. Classic storage accounts aren't supported.
- Log entries are created only if there are requests made against the service endpoint. For example, if a storage account has activity in its queue endpoint but not in its table or blob endpoints, only logs that pertain to Queue Storage are created. Azure Storage logs contain detailed information about successful and failed requests to a storage service. This information can be used to monitor individual requests and to diagnose issues with a storage service. Requests are logged on a best-effort basis. ### Log authenticated requests
storage Monitor Table Storage Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/tables/monitor-table-storage-reference.md
Azure Storage supports the following dimensions for metrics in Azure Monitor.
[!INCLUDE [Metrics dimensions](../../../includes/azure-storage-account-metrics-dimensions.md)]
-## Resource logs (preview)
+<a id="resource-logs-preview"></a>
-> [!NOTE]
-> Azure Storage logs in Azure Monitor is in public preview, and is available for preview testing in all public and US Government cloud regions. This preview enables logs for blobs (including Azure Data Lake Storage Gen2), files, queues, tables, premium storage accounts in general-purpose v1 and general-purpose v2 storage accounts. Classic storage accounts are not supported.
+## Resource logs
The following table lists the properties for Azure Storage resource logs when they're collected in Azure Monitor Logs or Azure Storage. The properties describe the operation, the service, and the type of authorization that was used to perform the operation.
storage Monitor Table Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/tables/monitor-table-storage.md
When you have critical applications and business processes that rely on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes the monitoring data that's generated by Azure Table storage and how you can use the features of Azure Monitor to analyze and alert on this data.
-> [!NOTE]
-> Azure Storage logs in Azure Monitor is in public preview and is available for preview testing in all public and US Government cloud regions. This preview enables logs for blobs (which includes Azure Data Lake Storage Gen2), files, queues,and tables. This feature is available for all storage accounts that are created with the Azure Resource Manager deployment model. See [Storage account overview](../common/storage-account-overview.md).
- ## Monitor overview The **Overview** page in the Azure portal for each Table storage resource includes a brief view of the resource usage, such as requests and hourly billing. This information is useful, but only a small amount of the monitoring data is available. Some of this data is collected automatically and is available for analysis as soon as you create the resource. You can enable additional types of data collection with some configuration.
You can create a diagnostic setting by using the Azure portal, PowerShell, the A
For general guidance, see [Create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
-> [!NOTE]
-> Azure Storage logs in Azure Monitor is in public preview and is available for preview testing inall public and US Government cloud regions. This preview enables logs for blobs (which includes Azure Data Lake Storage Gen2), files, queues,and tables. This feature is available for all storage accounts that are created with the Azure Resource Manager deployment model. See [Storage account overview](../common/storage-account-overview.md).
- ### [Azure portal](#tab/azure-portal) 1. Sign in to the Azure portal. 2. Navigate to your storage account.
-3. In the **Monitoring** section, click **Diagnostic settings (preview)**.
+3. In the **Monitoring** section, click **Diagnostic settings**.
> [!div class="mx-imgBorder"] > ![portal - Diagnostics logs](media/monitor-table-storage/diagnostic-logs-settings-pane.png)
You can access resource logs either as a blob in a storage account, as event dat
For a detailed reference of the fields that appear in these logs, see [Azure Table storage monitoring data reference](monitor-table-storage-reference.md).
-> [!NOTE]
-> Azure Storage logs in Azure Monitor is in public preview and is available for preview testing in all public and US Government cloud regions. This preview enables logs for blobs (which includes Azure Data Lake Storage Gen2), files, queues, tables, premium storage accounts in general-purpose v1, and general-purpose v2 storage accounts. Classic storage accounts aren't supported.
- Log entries are created only if there are requests made against the service endpoint. For example, if a storage account has activity in its table endpoint but not in its blob or queue endpoints, only logs that pertain to the table service are created. Azure Storage logs contain detailed information about successful and failed requests to a storage service. This information can be used to monitor individual requests and to diagnose issues with a storage service. Requests are logged on a best-effort basis. ### Log authenticated requests
storsimple Storsimple Virtual Array Update 13 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storsimple/storsimple-virtual-array-update-13-release-notes.md
Update 1.3 corresponds to software version 10.0.10319.0.
## What's new in Update 1.3
-This update contains the following improvements:KB4540725
+This update contains the following improvements:
- Transport Layer Security (TLS) 1.2 is a mandatory update and must be installed. From this release onward, TLS 1.2 becomes the standard protocol for all Azure portal communication.
stream-analytics Stream Analytics Real Time Fraud Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/stream-analytics-real-time-fraud-detection.md
Before you start, make sure you have completed the following steps:
Sign in to the [Azure portal](https://portal.azure.com).
-## Create an Azure Event Hub
+## Create an event hub
Before Stream Analytics can analyze the fraudulent calls data stream, the data needs to be sent to Azure. In this tutorial, you will send data to Azure by using [Azure Event Hubs](../event-hubs/event-hubs-about.md).
-Use the following steps to create an Event Hub and send call data to that Event Hub:
+Use the following steps to create an event hub and send call data to that event hub:
1. Sign in to the [Azure portal](https://portal.azure.com/). 2. Select **Create a resource** > **Internet of Things** > **Event Hubs**.
- ![Create an Azure Event Hub in the portal](media/stream-analytics-real-time-fraud-detection/find-event-hub-resource.png)
+ ![Create an event hub in the Azure portal.](media/stream-analytics-real-time-fraud-detection/find-event-hub-resource.png)
+ 3. Fill out the **Create Namespace** pane with the following values: |**Setting** |**Suggested value** |**Description** |
Use the following steps to create an Event Hub and send call data to that Event
4. Use default options on the remaining settings and select **Review + create**. Then select **Create** to start the deployment.
- ![Create event hub namespace in Azure portal](media/stream-analytics-real-time-fraud-detection/create-event-hub-namespace.png)
+ ![Create event hub namespace in the Azure portal](media/stream-analytics-real-time-fraud-detection/create-event-hub-namespace.png)
5. When the namespace has finished deploying, go to **All resources** and find *asaTutorialEventHub* in the list of Azure resources. Select *asaTutorialEventHub* to open it.
-6. Next select **+Event Hub** and enter a **Name** for the Event Hub. Set the **Partition Count** to 2. Use the default options in the remaining settings and select **Create**. Then wait for the deployment to succeed.
+6. Next select **+Event Hub** and enter a **Name** for the event hub. Set the **Partition Count** to 2. Use the default options in the remaining settings and select **Create**. Then wait for the deployment to succeed.
- ![Event Hub configuration in Azure portal](media/stream-analytics-real-time-fraud-detection/create-event-hub-portal.png)
+ ![Event hub configuration in the Azure portal](media/stream-analytics-real-time-fraud-detection/create-event-hub-portal.png)
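The namespace and event hub can also be created with Azure PowerShell instead of the portal. The sketch below reuses the tutorial's names and assumes the resource group already exists; parameter names can vary slightly between `Az.EventHub` versions:

```powershell
# Create the Event Hubs namespace used in the tutorial.
New-AzEventHubNamespace -ResourceGroupName "MyASADemoRG" -Name "asaTutorialEventHub" `
    -Location "westus2" -SkuName "Standard"

# Create the event hub with two partitions inside that namespace.
New-AzEventHub -ResourceGroupName "MyASADemoRG" -NamespaceName "asaTutorialEventHub" `
    -Name "MyEventHub" -PartitionCount 2
```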
### Grant access to the event hub and get a connection string
Now that you have a stream of call events, you can create a Stream Analytics job
|Subscription | \<Your subscription\> | Select an Azure subscription where you want to create the job. | |Resource group | MyASADemoRG | Select **Use existing** and enter a new resource-group name for your account. | |Location | West US2 | Location where the job can be deployed. It's recommended to place the job and the event hub in the same region for best performance and so that you don't pay to transfer data between regions. |
- |Hosting environment | Cloud | Stream Analytics jobs can be deployed to cloud or edge. Cloud allows you to deploy to Azure Cloud, and Edge allows you to deploy to an IoT Edge device. |
+ |Hosting environment | Cloud | Stream Analytics jobs can be deployed to cloud or edge. **Cloud** allows you to deploy to Azure Cloud, and **Edge** allows you to deploy to an IoT Edge device. |
|Streaming units | 1 | Streaming units represent the computing resources that are required to execute a job. By default, this value is set to 1. To learn about scaling streaming units, see [understanding and adjusting streaming units](stream-analytics-streaming-unit-consumption.md) article. | 4. Use default options on the remaining settings, select **Create**, and wait for the deployment to succeed.
The next step is to define an input source for the job to read data using the ev
|Input alias | CallStream | Provide a friendly name to identify your input. Input alias can contain alphanumeric characters, hyphens, and underscores only and must be 3-63 characters long. | |Subscription | \<Your subscription\> | Select the Azure subscription where you created the event hub. The event hub can be in same or a different subscription as the Stream Analytics job. | |Event hub namespace | asaTutorialEventHub | Select the event hub namespace you created in the previous section. All the event hub namespaces available in your current subscription are listed in the dropdown. |
- |Event Hub name | MyEventHub | Select the event hub you created in the previous section. All the event hubs available in your current subscription are listed in the dropdown. |
- |Event Hub policy name | MyPolicy | Select the event hub shared access policy you created in the previous section. All the event hubs policies available in your current subscription are listed in the dropdown. |
+ |Event hub name | MyEventHub | Select the event hub you created in the previous section. All the event hubs available in your current subscription are listed in the dropdown. |
+ |Event hub policy name | MyPolicy | Select the event hub shared access policy you created in the previous section. All the event hubs policies available in your current subscription are listed in the dropdown. |
4. Use default options on the remaining settings and select **Save**. ![Configure Azure Stream Analytics input](media/stream-analytics-real-time-fraud-detection/configure-stream-analytics-input.png)
+## Create a consumer group
+
+We recommend that you use a distinct consumer group for each Stream Analytics job. If no consumer group is specified, the Stream Analytics job uses the $Default consumer group. When a job contains a self-join or has multiple inputs, some inputs might later be read by more than one reader, which affects the number of readers in a single consumer group.
+
+To add a new consumer group:
+
+1. In the Azure portal, go to your Event Hubs instance.
+
+1. In the left menu, under **Entities**, select **Consumer groups**.
+
+1. Select **+ Consumer group**.
+
+1. In **Name**, enter a name for your new consumer group. For example, *MyConsumerGroup*.
+
+1. Select **Create**.
+
+ :::image type="content" source="media/stream-analytics-real-time-fraud-detection/create-consumer-group.png" alt-text="Screenshot that shows creating a new consumer group.":::
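If you're scripting the setup instead, the same consumer group can be added with Azure PowerShell. A sketch using the names from this tutorial:

```powershell
# Add a dedicated consumer group for the Stream Analytics job to read from.
New-AzEventHubConsumerGroup -ResourceGroupName "MyASADemoRG" -NamespaceName "asaTutorialEventHub" `
    -EventHubName "MyEventHub" -Name "MyConsumerGroup"
```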
+ ## Configure job output The last step is to define an output sink where the job can write the transformed data. In this tutorial, you output and visualize data with Power BI.
If you want to archive every event, you can use a pass-through query to read all
3. Select **Test query**.
- The Stream Analytics job runs the query against the sample data from the input and displays the output at the bottom of the window. The results indicate that the Event Hub and the Streaming Analytics job are configured correctly.
+ The Stream Analytics job runs the query against the sample data from the input and displays the output at the bottom of the window. The results indicate that the event hub and the Stream Analytics job are configured correctly.
:::image type="content" source="media/stream-analytics-real-time-fraud-detection/sample-output-passthrough.png" alt-text="Sample output from test query":::
When you use a join with streaming data, the join must provide some limits on ho
![View results in Power BI dashboard](media/stream-analytics-real-time-fraud-detection/power-bi-results-dashboard.png)
-## Embedding your Power BI Dashboard in a Web Application
+## Embedding your Power BI Dashboard in a web application
For this part of the tutorial, you'll use a sample [ASP.NET](https://asp.net/) web application created by the Power BI team to embed your dashboard. For more information about embedding dashboards, see [embedding with Power BI](/power-bi/developer/embedding) article.
synapse-analytics Synapse Spark Sql Pool Import Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/synapse-spark-sql-pool-import-export.md
See the [Scala API reference](https://synapsesql.blob.core.windows.net/docs/1.0.
```scala import com.microsoft.spark.sqlanalytics.utils.Constants
- import com.microsoft.spark.sqlanalytics.SqlAnalyticsConnector._
+ import org.apache.spark.sql.SqlAnalyticsConnector._
val df = spark.read. option(Constants.SERVER, "servername.database.windows.net").
See the [Scala API reference](https://synapsesql.blob.core.windows.net/docs/1.0.
```scala import com.microsoft.spark.sqlanalytics.utils.Constants
- import com.microsoft.spark.sqlanalytics.SqlAnalyticsConnector._
+ import org.apache.spark.sql.SqlAnalyticsConnector._
val df = spark.sql("select * from tmpview")
See the [Scala API reference](https://synapsesql.blob.core.windows.net/docs/1.0.
```scala import com.microsoft.spark.sqlanalytics.utils.Constants
- import com.microsoft.spark.sqlanalytics.SqlAnalyticsConnector._
+ import org.apache.spark.sql.SqlAnalyticsConnector._
val df = spark.read. option(Constants.SERVER, "servername.database.windows.net").
See the [Scala API reference](https://synapsesql.blob.core.windows.net/docs/1.0.
```scala %%spark import com.microsoft.spark.sqlanalytics.utils.Constants
- import com.microsoft.spark.sqlanalytics.SqlAnalyticsConnector._
+ import org.apache.spark.sql.SqlAnalyticsConnector._
val df = spark.sqlContext.sql("select * from tempview")
synapse-analytics Design Elt Data Loading https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/design-elt-data-loading.md
You might need to prepare and clean the data in your storage account before load
### Define the tables
-You must first defined the table(s) you are loading to in your dedicated SQL pool when using the COPY statement.
+You must first define the table(s) you are loading to in your dedicated SQL pool when using the COPY statement.
If you are using PolyBase, you need to define external tables in your dedicated SQL pool before loading. PolyBase uses external tables to define and access the data in Azure Storage. An external table is similar to a database view. The external table contains the table schema and points to data that is stored outside the dedicated SQL pool.
Use the following SQL data type mapping when loading Parquet files:
| INT64 | DECIMAL | decimal | | INT64 | TIME (MILLIS) | time | | INT64 | TIMESTAMP (MILLIS) | datetime2 |
-| [Complex type](https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fapache%2Fparquet-format%2Fblob%2Fmaster%2FLogicalTypes.md%23lists&data=02\|01\|kevin%40microsoft.com\|19f74d93f5ca45a6b73c08d7d7f5f111\|72f988bf86f141af91ab2d7cd011db47\|1\|0\|637215323617803168&sdata=6Luk047sK26ijTzfvKMYc%2FNu%2Fz0AlLCX8lKKTI%2F8B5o%3D&reserved=0) | LIST | varchar(max) |
-| [Complex type](https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fapache%2Fparquet-format%2Fblob%2Fmaster%2FLogicalTypes.md%23maps&data=02\|01\|kevin%40microsoft.com\|19f74d93f5ca45a6b73c08d7d7f5f111\|72f988bf86f141af91ab2d7cd011db47\|1\|0\|637215323617803168&sdata=FiThqXxjgmZBVRyigHzfh5V7Z%2BPZHjud2IkUUM43I7o%3D&reserved=0) | MAP | varchar(max) |
+| [Complex type](https://github.com/apache/parquet-format/blob/master/LogicalTypes.md#lists) | LIST | varchar(max) |
+| [Complex type](https://github.com/apache/parquet-format/blob/master/LogicalTypes.md#maps) | MAP | varchar(max) |
>[!IMPORTANT] >- SQL dedicated pools do not currently support Parquet data types with MICROS and NANOS precision.
virtual-desktop Fslogix Office App Rule Editor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/fslogix-office-app-rule-editor.md
Next, you'll need to create and prepare a VHD image to use the Rule Editor on:
1. Open a command prompt as an administrator and run the following command: ```cmd
- taskkill /F /IM MicrosoftEdge.exe /T
+ taskkill /F /IM MSEdge.exe /T
``` >[!NOTE]
Now that you've prepared your image, you'll need to configure the Rule Editor an
## Next steps
-If you want to learn more about FSLogix, check out our [FSLogix documentation](/fslogix/).
+If you want to learn more about FSLogix, check out our [FSLogix documentation](/fslogix/).
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/whats-new.md
Title: What's new in Azure Virtual Desktop? - Azure
description: New features and product updates for Azure Virtual Desktop. Previously updated : 01/05/2022 Last updated : 02/03/2022
Here's what's changed in the Azure Virtual Desktop Agent:
Curious about the latest updates for FSLogix? Check out [What's new at FSLogix](/fslogix/whats-new).
+## January 2022
+
+Here's what changed in January 2022:
+
+### FSLogix version 2201 public preview
+
+FSLogix version 2201 is now in public preview. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/the-fslogix-2201-public-preview-is-now-available/td-p/3070794) or [the FSLogix release notes](/fslogix/whats-new#fslogix-2201-public-preview-29804843478).
+
+### Migration tool now generally available
+
+The PowerShell commands that migrate metadata from Azure Virtual Desktop (classic) to Azure Virtual Desktop are now generally available. To learn more about migrating your existing deployment, see [Migrate automatically from Azure Virtual Desktop (classic)](automatic-migration.md) or [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/tool-to-migrate-from-azure-virtual-desktop-classic-to-arm/m-p/3094856#M8527).
+
+### Increased application group limit
+
+We've increased the number of Azure Virtual Desktop application groups you can have on each Azure Active Directory (Azure AD) tenant from 200 to 500. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/increase-in-avd-application-group-limit-to-500/m-p/3094678).
+
+### Updates to required URLs
+
+We've updated the required URL list for Azure Virtual Desktop to accommodate Azure Virtual Desktop agent traffic. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/important-new-changes-in-required-urls/m-p/3094897#M8529).
+ ## December 2021 Here's what changed in December 2021:
You can now automatically create trusted launch virtual machines through the hos
### Azure Active Directory Join VMs with FSLogix profiles on Azure Files
-Azure Active Directory (Azure AD)-joined session hosts for FSLogix profiles on Azure Files in Windows 10 and 11 multi-session is now in public preview. We've updated Azure Files to use a Kerberos protocol for Azure AD that lets you secure folders in the file share to individual users. This new feature also allows FSLogix to function within your deployment without an Active Directory Domain Controller. For more information, check out [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/announcing-public-preview-of-fslogix-profiles-for-azure-ad/ba-p/3019855).
+Azure AD-joined session hosts for FSLogix profiles on Azure Files in Windows 10 and 11 multi-session is now in public preview. We've updated Azure Files to use a Kerberos protocol for Azure AD that lets you secure folders in the file share to individual users. This new feature also allows FSLogix to function within your deployment without an Active Directory Domain Controller. For more information, check out [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/announcing-public-preview-of-fslogix-profiles-for-azure-ad/ba-p/3019855).
### Azure Virtual Desktop pricing calculator updates
virtual-machines Classic Vm Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/classic-vm-deprecation.md
Azure Cloud Services (classic) retirement was announced in August 2021 [here](ht
- [Microsoft Q&A](/answers/topics/azure-virtual-machines-migration.html): Microsoft and community support for migration. -- [Azure Migration Support](https://ms.portal.azure.com/#create/Microsoft.Support/Parameters/{"pesId":"6f16735c-b0ae-b275-ad3a-03479cfa1396","supportTopicId":"1135e3d0-20e2-aec5-4ef0-55fd3dae2d58"}): Dedicated support team for technical assistance during migration. Customers without technical support can use [free support capability](https://ms.portal.azure.com/#create/Microsoft.Support/Parameters/%7B%0A%20%20%20%20%22pesId%22%3A%20%22f3dc5421-79ef-1efa-41a5-42bf3cbb52c6%22%2C%0A%20%20%20%20%22supportTopicId%22%3A%20%22794bb734-af1b-e2d5-a757-dac7438009ab%22%2C%0A%20%20%20%20%22contextInfo%22%3A%20%22Migrate%20IAAS%20resources%20from%20Classic%20%28ASM%29%20to%20Azure%20Resource%20Manager%20%28ARM%29%22%2C%0A%20%20%20%20%22caller%22%3A%20%22NoSupportPlanASM2ARM%22%2C%0A%20%20%20%20%22severity%22%3A%20%222%22%0A%7D) provided specifically for this migration.
+- [Azure Migration Support](https://portal.azure.com/#create/Microsoft.Support/Parameters/{"pesId":"6f16735c-b0ae-b275-ad3a-03479cfa1396","supportTopicId":"1135e3d0-20e2-aec5-4ef0-55fd3dae2d58"}): Dedicated support team for technical assistance during migration. Customers without technical support can use [free support capability](https://portal.azure.com/#create/Microsoft.Support/Parameters/%7B%0A%20%20%20%20%22pesId%22%3A%20%22f3dc5421-79ef-1efa-41a5-42bf3cbb52c6%22%2C%0A%20%20%20%20%22supportTopicId%22%3A%20%22794bb734-af1b-e2d5-a757-dac7438009ab%22%2C%0A%20%20%20%20%22contextInfo%22%3A%20%22Migrate%20IAAS%20resources%20from%20Classic%20%28ASM%29%20to%20Azure%20Resource%20Manager%20%28ARM%29%22%2C%0A%20%20%20%20%22caller%22%3A%20%22NoSupportPlanASM2ARM%22%2C%0A%20%20%20%20%22severity%22%3A%20%222%22%0A%7D) provided specifically for this migration.
-- [Microsoft Fast Track](https://www.microsoft.com/fasttrack): Fast track can assist eligible customers with planning & execution for this migration. [Nominate yourself](https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fazure.microsoft.com%2Fen-us%2Fprograms%2Fazure-fasttrack%2F%23nomination&data=02%7C01%7CTanmay.Gore%40microsoft.com%7C3e75bbf3617944ec663a08d85c058340%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637360526032558561&sdata=CxWTVQQPVWNwEqDZKktXzNV74pX91uyJ8dY8YecIgGc%3D&reserved=0) for DC Migration Program.
+- [Microsoft Fast Track](https://www.microsoft.com/fasttrack): Fast track can assist eligible customers with planning & execution for this migration. [Nominate yourself](https://azure.microsoft.com/programs/azure-fasttrack/#nominations) for DC Migration Program.
- If your company/organization has partnered with Microsoft or works with Microsoft representatives (like cloud solution architects (CSAs) or technical account managers (TAMs)), please work with them for additional resources for migration.
virtual-machines Agent Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/agent-windows.md
The Windows Guest Agent Package is broken into two parts:
To boot a VM you must have the PA installed on the VM, however the WinGA does not need to be installed. At VM deploy time, you can select not to install the WinGA. The following example shows how to select the *provisionVmAgent* option with an Azure Resource Manager template: ```json
-"resources": [{
-"name": "[parameters('virtualMachineName')]",
-"type": "Microsoft.Compute/virtualMachines",
-"apiVersion": "2016-04-30-preview",
-"location": "[parameters('location')]",
-"dependsOn": ["[concat('Microsoft.Network/networkInterfaces/', parameters('networkInterfaceName'))]"],
-"properties": {
- "osProfile": {
- "computerName": "[parameters('virtualMachineName')]",
- "adminUsername": "[parameters('adminUsername')]",
- "adminPassword": "[parameters('adminPassword')]",
- "windowsConfiguration": {
- "provisionVmAgent": "false"
+{
+  "resources": [{
+    "name": "[parameters('virtualMachineName')]",
+    "type": "Microsoft.Compute/virtualMachines",
+    "apiVersion": "2016-04-30-preview",
+    "location": "[parameters('location')]",
+    "dependsOn": ["[concat('Microsoft.Network/networkInterfaces/', parameters('networkInterfaceName'))]"],
+    "properties": {
+      "osProfile": {
+        "computerName": "[parameters('virtualMachineName')]",
+        "adminUsername": "[parameters('adminUsername')]",
+        "adminPassword": "[parameters('adminPassword')]",
+        "windowsConfiguration": {
+          "provisionVmAgent": "false"
+        }
+      }
+    }
+  }]
} ```
OSProfile :
EnableAutomaticUpdates : True ```
-The following script can be used to return a concise list of VM names and the state of the VM Agent:
+The following script can be used to return a concise list of VM names (running Windows OS) and the state of the VM Agent:
```powershell $vms = Get-AzVM
foreach ($vm in $vms) {
} ```
+The following script can be used to return a concise list of VM names (running Linux OS) and the state of the VM Agent:
+
+```powershell
+$vms = Get-AzVM
+
+foreach ($vm in $vms) {
+ $agent = $vm | Select -ExpandProperty OSProfile | Select -ExpandProperty Linuxconfiguration | Select ProvisionVMAgent
+ Write-Host $vm.Name $agent.ProvisionVMAgent
+}
+```
+
### Manual Detection

When logged in to a Windows VM, Task Manager can be used to examine running processes. To check for the Azure VM Agent, open Task Manager, click the *Details* tab, and look for a process named **WindowsAzureGuestAgent.exe**. The presence of this process indicates that the VM agent is installed.
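The same check can be scripted from inside the VM. This is a small sketch; it only confirms that the agent process and its Windows service are present, and the service name can vary between agent versions:

```powershell
# Returns the process if the Azure VM Agent is running; returns nothing if it isn't.
Get-Process -Name WindowsAzureGuestAgent -ErrorAction SilentlyContinue

# The agent also registers a Windows service that you can query.
Get-Service -Name WindowsAzureGuestAgent -ErrorAction SilentlyContinue
```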
virtual-machines Generalize https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/generalize.md
Make sure the server roles running on the machine are supported by Sysprep. For
> > If you plan to run Sysprep before uploading your virtual hard disk (VHD) to Azure for the first time, make sure you have [prepared your VM](./windows/prepare-for-upload-vhd-image.md). >
+> We do not support a custom answer file in the Sysprep step, so you should not use the "/unattend:_answerfile_" switch with your Sysprep command.
> To generalize your Windows VM, follow these steps:
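For reference, the generalize step that those instructions walk through typically comes down to running Sysprep from an elevated session on the VM, without any `/unattend` switch, as in this sketch:

```powershell
# Generalize the image, prepare OOBE, and shut the VM down so it can be captured.
& "$env:SystemRoot\System32\Sysprep\Sysprep.exe" /generalize /oobe /shutdown
```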
virtual-machines Redhat Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/redhat-create-upload-vhd.md
This section assumes that you have already obtained an ISO file from the Red Hat
* Use a cloud-init directive baked into the image that will do this every time the VM is created. ```console
- echo 'DefaultEnvironment="CLOUD_CFG=/etc/cloud/cloud.cfg.d/00-azure-swap.cfg"' >> /etc/systemd/system.conf
+ echo 'DefaultEnvironment="CLOUD_CFG=/etc/cloud/cloud.cfg.d/00-azure-swap.cfg"' >> /etc/systemd/system.conf
cat > /etc/cloud/cloud.cfg.d/00-azure-swap.cfg << EOF #cloud-config # Generated by Azure cloud image build
This section assumes that you have already obtained an ISO file from the Red Hat
filesystem: swap mounts: - ["ephemeral0.1", "/mnt"]
- - ["ephemeral0.2", "none", "swap", "sw,nofail,x-systemd.requires=cloud-init.service", "0", "0"]
+ - ["ephemeral0.2", "none", "swap", "sw,nofail,x-systemd.requires=cloud-init.service,x-systemd.device-timeout=2", "0", "0"]
EOF ``` 1. If you want to unregister the subscription, run the following command:
This section assumes that you have already obtained an ISO file from the Red Hat
* Use a cloud-init directive baked into the image that will do this every time the VM is created. ```console
- echo 'DefaultEnvironment="CLOUD_CFG=/etc/cloud/cloud.cfg.d/00-azure-swap.cfg"' >> /etc/systemd/system.conf
+ echo 'DefaultEnvironment="CLOUD_CFG=/etc/cloud/cloud.cfg.d/00-azure-swap.cfg"' >> /etc/systemd/system.conf
cat > /etc/cloud/cloud.cfg.d/00-azure-swap.cfg << EOF #cloud-config # Generated by Azure cloud image build
This section assumes that you have already obtained an ISO file from the Red Hat
filesystem: swap mounts: - ["ephemeral0.1", "/mnt"]
- - ["ephemeral0.2", "none", "swap", "sw,nofail,x-systemd.requires=cloud-init.service", "0", "0"]
+ - ["ephemeral0.2", "none", "swap", "sw,nofail,x-systemd.device-timeout=2,x-systemd.requires=cloud-init.service", "0", "0"]
EOF ``` 1. If you want to unregister the subscription, run the following command:
This section shows you how to prepare a RHEL 7 distro from an ISO using a kickst
ResourceDisk.EnableSwap=n # Configure swap using cloud-init
+ echo 'DefaultEnvironment="CLOUD_CFG=/etc/cloud/cloud.cfg.d/00-azure-swap.cfg"' >> /etc/systemd/system.conf
cat > /etc/cloud/cloud.cfg.d/00-azure-swap.cfg << EOF #cloud-config # Generated by Azure cloud image build
This section shows you how to prepare a RHEL 7 distro from an ISO using a kickst
filesystem: swap mounts: - ["ephemeral0.1", "/mnt"]
- - ["ephemeral0.2", "none", "swap", "sw,nofail,x-systemd.requires=cloud-init.service", "0", "0"]
+ - ["ephemeral0.2", "none", "swap", "sw,nofail,x-systemd.device-timeout=2,x-systemd.requires=cloud-init.service", "0", "0"]
EOF # Set the cmdline
virtual-machines Os Compatibility Matrix Hana Large Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/os-compatibility-matrix-hana-large-instance.md
| SLES 12 SP5 | Available | S384, S384m, S384xm, S384xxm, S576m, S576xm, S768m, S768xm, S896m, S960m | | SLES 15 SP1 | Available | S384, S384m, S384xm, S384xxm, S576m, S576xm, S768m, S768xm, S896m, S960m | | RHEL 7.6 | Available | S384, S384m, S384xm, S384xxm, S576m, S576xm, S768m, S768xm, S896m, S960m |
+ | RHEL 7.9 | Available | S384, S384m, S384xm, S384xxm, S576m, S576xm, S768m, S768xm, S896m, S960m |
## Next steps
Learn more about:
- [Available SKUs](hana-available-skus.md) - [Upgrading the operating system](os-upgrade-hana-large-instance.md) - [Supported scenarios for HANA Large Instances](hana-supported-scenario.md)
-
+
vpn-gateway Tutorial Site To Site Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/tutorial-site-to-site-portal.md
Previously updated : 07/21/2021 Last updated : 02/02/2022
In this tutorial, you learn how to:
* An Azure account with an active subscription. If you don't have one, [create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). * Make sure you have a compatible VPN device and someone who is able to configure it. For more information about compatible VPN devices and device configuration, see [About VPN Devices](vpn-gateway-about-vpn-devices.md). * Verify that you have an externally facing public IPv4 address for your VPN device.
-* If you are unfamiliar with the IP address ranges located in your on-premises network configuration, you need to coordinate with someone who can provide those details for you. When you create this configuration, you must specify the IP address range prefixes that Azure will route to your on-premises location. None of the subnets of your on-premises network can over lap with the virtual network subnets that you want to connect to.
+* If you're unfamiliar with the IP address ranges located in your on-premises network configuration, you need to coordinate with someone who can provide those details for you. When you create this configuration, you must specify the IP address range prefixes that Azure will route to your on-premises location. None of the subnets of your on-premises network can overlap with the virtual network subnets that you want to connect to.
## <a name="CreatVNet"></a>Create a virtual network
You can view the gateway public IP address on the **Overview** page for your gat
:::image type="content" source="./media/tutorial-create-gateway-portal/address.png" alt-text="Overview page":::
-To see additional information about the public IP address object, click the name/IP address link next to **Public IP address**.
+To see additional information about the public IP address object, select the name/IP address link next to **Public IP address**.
## <a name="LocalNetworkGateway"></a>Create a local network gateway
-The local network gateway is a specific object that represents your on-premises location (the site) for routing purposes. You give the site a name by which Azure can refer to it, then specify the IP address of the on-premises VPN device to which you will create a connection. You also specify the IP address prefixes that will be routed through the VPN gateway to the VPN device. The address prefixes you specify are the prefixes located on your on-premises network. If your on-premises network changes or you need to change the public IP address for the VPN device, you can easily update the values later.
+The local network gateway is a specific object that represents your on-premises location (the site) for routing purposes. You give the site a name by which Azure can refer to it, then specify the IP address of the on-premises VPN device to which you'll create a connection. You also specify the IP address prefixes that will be routed through the VPN gateway to the VPN device. The address prefixes you specify are the prefixes located on your on-premises network. If your on-premises network changes or you need to change the public IP address for the VPN device, you can easily update the values later.
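For reference, the equivalent Azure PowerShell is sketched below; the site name, resource group, on-premises VPN device IP, and address prefixes are placeholder values:

```powershell
# The gateway IP address is the public IPv4 address of your on-premises VPN device.
New-AzLocalNetworkGateway -Name "Site1" -ResourceGroupName "TestRG1" -Location "East US" `
    -GatewayIpAddress "203.0.113.5" -AddressPrefix @("10.1.0.0/24", "10.2.0.0/24")
```

If the device's public IP address or the on-premises prefixes change later, you can update the same local network gateway object instead of recreating it.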
Create a local network gateway using the following values:
Create a connection using the following values:
### <a name="addconnect"></a>Add additional connections to the gateway
-You can add additional connections, provided that none of the address spaces overlap between connections.
+You can add additional connections if none of the address spaces overlap between connections.
1. To add an additional connection, navigate to the VPN gateway, then select **Connections** to open the Connections page. 1. Select **+Add** to add your connection. Adjust the connection type to reflect either VNet-to-VNet (if connecting to another VNet gateway), or Site-to-site.
-1. If you are connecting using Site-to-site and you have not already created a local network gateway for the site you want to connect to, you can create a new one.
+1. If you're connecting using Site-to-site and you haven't already created a local network gateway for the site you want to connect to, you can create a new one.
1. Specify the shared key that you want to use, then select **OK** to create the connection. ### <a name="resize"></a>Resize a gateway SKU
-There are specific rules regarding resizing vs. changing a gateway SKU. In this section, we will resize the SKU. For more information, see [Gateway settings - resizing and changing SKUs](vpn-gateway-about-vpn-gateway-settings.md#resizechange).
+There are specific rules regarding resizing vs. changing a gateway SKU. In this section, we'll resize the SKU. For more information, see [Gateway settings - resizing and changing SKUs](vpn-gateway-about-vpn-gateway-settings.md#resizechange).
[!INCLUDE [resize a gateway](../../includes/vpn-gateway-resize-gw-portal-include.md)] ### <a name="reset"></a>Reset a gateway
-Resetting an Azure VPN gateway is helpful if you lose cross-premises VPN connectivity on one or more Site-to-Site VPN tunnels. In this situation, your on-premises VPN devices are all working correctly, but are not able to establish IPsec tunnels with the Azure VPN gateways.
+Resetting an Azure VPN gateway is helpful if you lose cross-premises VPN connectivity on one or more Site-to-Site VPN tunnels. In this situation, your on-premises VPN devices are all working correctly, but aren't able to establish IPsec tunnels with the Azure VPN gateways.
[!INCLUDE [reset a gateway](../../includes/vpn-gateway-reset-gw-portal-include.md)] ### <a name="additional"></a>Additional configuration considerations
-S2S configurations can be customized in a variety of ways. For more information, see the following articles:
+S2S configurations can be customized in a variety of ways. For more information, see the following articles:
* For information about BGP, see the [BGP Overview](vpn-gateway-bgp-overview.md) and [How to configure BGP](vpn-gateway-bgp-resource-manager-ps.md). * For information about forced tunneling, see [About forced tunneling](vpn-gateway-forced-tunneling-rm.md).
these resources using the following steps:
## Next steps
-Once you have configured a S2S connection, you can add a P2S connection to the same gateway.
+Once you've configured an S2S connection, you can add a P2S connection to the same gateway.
> [!div class="nextstepaction"] > [Point-to-Site VPN connections](vpn-gateway-howto-point-to-site-resource-manager-portal.md)
vpn-gateway Vpn Gateway About Skus Legacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/vpn-gateway-about-skus-legacy.md
You can view legacy gateway pricing in the **Virtual Network Gateways** section,
## <a name="resize"></a>Resize a gateway
-You can resize your gateway to a gateway SKU within the same SKU family. For example, if you have a Standard SKU, you can resize to a HighPerformance SKU. However, you can't resize your VPN gateway between the old SKUs and the new SKU families. For example, you can't go from a Standard SKU to a VpnGw2 SKU, or a Basic SKU to VpnGw1.
+With the exception of the Basic SKU, you can resize your gateway to a gateway SKU within the same SKU family. For example, if you have a Standard SKU, you can resize to a HighPerformance SKU. However, you can't resize your VPN gateway between the old SKUs and the new SKU families. For example, you can't go from a Standard SKU to a VpnGw2 SKU, or a Basic SKU to VpnGw1.
### Resource Manager
vpn-gateway Vpn Gateway About Vpn Gateway Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md
az network vnet-gateway create --name VNet1GW --public-ip-address VNet1GWPIP --r
If you have a VPN gateway and you want to use a different gateway SKU, your options are to either resize your gateway SKU, or to change to another SKU. When you change to another gateway SKU, you delete the existing gateway entirely and build a new one. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU. In comparison, when you resize a gateway SKU, there is not much downtime because you do not have to delete and rebuild the gateway. If you have the option to resize your gateway SKU, rather than change it, you will want to do that. However, there are rules regarding resizing: 1. With the exception of the Basic SKU, you can resize a VPN gateway SKU to another VPN gateway SKU within the same generation (Generation1 or Generation2). For example, VpnGw1 of Generation1 can be resized to VpnGw2 of Generation1 but not to VpnGw2 of Generation2.
-2. When working with the old gateway SKUs, you can resize between Basic, Standard, and HighPerformance SKUs.
+2. When working with the old gateway SKUs, you can resize between Standard and HighPerformance SKUs.
3. You **cannot** resize from Basic/Standard/HighPerformance SKUs to VpnGw SKUs. You must instead [change](#change) to the new SKUs.

#### <a name="resizegwsku"></a>To resize a gateway
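As a sketch, resizing within the same SKU family can be done with Azure PowerShell; the gateway and resource group names below are placeholders, and the target SKU must respect the rules above:

```powershell
# Resize an existing gateway to another SKU in the same family and generation.
$gateway = Get-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "TestRG1"
Resize-AzVirtualNetworkGateway -VirtualNetworkGateway $gateway -GatewaySku "VpnGw2"
```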