Updates from: 06/22/2021 03:07:05
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Custom Policy Developer Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-policy-developer-notes.md
Previously updated : 06/07/2021 Last updated : 06/21/2021
Azure Active Directory B2C [user flows and custom policies](user-flow-overview.m
## Terms for features in public preview - We encourage you to use public preview features for evaluation purposes only.-- Service level agreements (SLAs) don't apply to public preview features.
+- [Service level agreements (SLAs)](https://azure.microsoft.com/support/legal/sla/active-directory-b2c) don't apply to public preview features.
- Support requests for public preview features can be submitted through regular support channels. ## User flows
active-directory-b2c Tenant Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/tenant-management.md
Azure AD B2C relies on the Azure AD platform. The following Azure AD features can b
| [Custom domain names](../active-directory/fundamentals/add-custom-domain.md) | You can use Azure AD custom domains for administrative accounts only. | [Consumer accounts](user-overview.md#consumer-user) can sign in with a username, phone number, or any email address. You can use [custom domains](custom-domain.md) in your redirect URLs.|
| [Conditional Access](../active-directory/conditional-access/overview.md) | Fully supported for administrative and user accounts. | A subset of Azure AD Conditional Access features is supported with [consumer accounts](user-overview.md#consumer-user). Learn how to configure Azure AD B2C [conditional access](conditional-access-user-flow.md).|
| [Premium P1](https://azure.microsoft.com/pricing/details/active-directory) | Fully supported for Azure AD premium P1 features. For example, [Password Protection](../active-directory/authentication/concept-password-ban-bad.md), [Hybrid Identities](../active-directory/hybrid/whatis-hybrid-identity.md), [Conditional Access](../active-directory/roles/permissions-reference.md#), [Dynamic groups](../active-directory/enterprise-users/groups-create-rule.md), and more. | A subset of Azure AD Conditional Access features is supported with [consumer accounts](user-overview.md#consumer-user). Learn how to configure Azure AD B2C [Conditional Access](conditional-access-user-flow.md).|
-| [Premium P2](https://azure.microsoft.com/pricing/details/active-directory.md) | Fully supported for Azure AD premium P2 features. For example, [Identity Protection](../active-directory/identity-protection/overview-identity-protection.md), and [Identity Governance](../active-directory/governance/identity-governance-overview.md). | A subset of Azure AD Identity Protection features is supported with [consumer accounts](user-overview.md#consumer-user). Learn how to [Investigate risk with Identity Protection](identity-protection-investigate-risk.md) and configure Azure AD B2C [Conditional Access](conditional-access-user-flow.md). |
+| [Premium P2](https://azure.microsoft.com/pricing/details/active-directory/) | Fully supported for Azure AD premium P2 features. For example, [Identity Protection](../active-directory/identity-protection/overview-identity-protection.md), and [Identity Governance](../active-directory/governance/identity-governance-overview.md). | A subset of Azure AD Identity Protection features is supported with [consumer accounts](user-overview.md#consumer-user). Learn how to [Investigate risk with Identity Protection](identity-protection-investigate-risk.md) and configure Azure AD B2C [Conditional Access](conditional-access-user-flow.md). |
## Other Azure resources in your tenant
To get your Azure AD B2C tenant ID, follow these steps:
## Next steps - [Create an Azure Active Directory B2C tenant in the Azure portal](tutorial-create-tenant.md)-
active-directory Location Condition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/location-condition.md
Previously updated : 06/07/2021 Last updated : 06/21/2021
For the next 24 hours, if the user is still accessing the resource and granted t
A Conditional Access policy with GPS-based named locations in report-only mode prompts users to share their GPS location, even though they are not blocked from signing in.
+> [!IMPORTANT]
+> Users may receive prompts every hour letting them know that Azure AD is checking their location in the Authenticator app. The preview should only be used to protect very sensitive apps where this behavior is acceptable or where access needs to be restricted to a specific country.
+ #### Include unknown countries/regions Some IP addresses are not mapped to a specific country or region, including all IPv6 addresses. To capture these IP locations, check the box **Include unknown countries/regions** when defining a geographic location. This option allows you to choose if these IP addresses should be included in the named location. Use this setting when the policy using the named location should apply to unknown locations.
active-directory Quickstart V2 Windows Desktop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-windows-desktop.md
using Microsoft.Identity.Client;
Then, initialize MSAL using the following code: ```csharp
-public static IPublicClientApplication PublicClientApp;
-PublicClientApplicationBuilder.Create(ClientId)
+IPublicClientApplication publicClientApp = PublicClientApplicationBuilder.Create(ClientId)
    .WithRedirectUri("https://login.microsoftonline.com/common/oauth2/nativeclient")
    .WithAuthority(AzureCloudInstance.AzurePublic, Tenant)
    .Build();
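// Once built, the client can acquire tokens for the signed-in user. A minimal
// sketch, not part of the original quickstart snippet: the "user.read" scope
// below is an assumed example.
string[] scopes = { "user.read" };
AuthenticationResult authResult = await publicClientApp
    .AcquireTokenInteractive(scopes)
    .ExecuteAsync();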
active-directory V2 Oauth2 Device Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth2-device-code.md
POST https://login.microsoftonline.com/{tenant}/oauth2/v2.0/devicecode
Content-Type: application/x-www-form-urlencoded client_id=6731de76-14a6-49ae-97bc-6eba6914391e
-scope=user.read%20openid%20profile
+&scope=user.read%20openid%20profile
```
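For illustration, here is the same device authorization request issued from plain C#. This is a minimal sketch: it assumes the `common` endpoint in place of `{tenant}` and reuses the sample client ID and scopes above; it isn't tied to any SDK.

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class DeviceCodeRequest
{
    static async Task Main()
    {
        var http = new HttpClient();
        var body = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["client_id"] = "6731de76-14a6-49ae-97bc-6eba6914391e", // sample client ID from the article
            ["scope"] = "user.read openid profile",
        });
        var response = await http.PostAsync(
            "https://login.microsoftonline.com/common/oauth2/v2.0/devicecode", body);

        // The JSON response carries device_code, user_code, verification_uri,
        // expires_in, and the polling interval.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```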
While the user is authenticating at the `verification_uri`, the client should be
POST https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token Content-Type: application/x-www-form-urlencoded
-grant_type: urn:ietf:params:oauth:grant-type:device_code
-client_id: 6731de76-14a6-49ae-97bc-6eba6914391e
-device_code: GMMhmHCXhWEzkobqIHGG_EnNYYsAkukHspeYUk9E8...
+grant_type=urn:ietf:params:oauth:grant-type:device_code
+&client_id=6731de76-14a6-49ae-97bc-6eba6914391e
+&device_code=GMMhmHCXhWEzkobqIHGG_EnNYYsAkukHspeYUk9E8...
``` | Parameter | Required | Description|
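While polling, the token endpoint returns an `authorization_pending` error until the user completes sign-in. Below is a minimal C# polling loop under the same assumptions as the sketch above; the 5-second delay stands in for the `interval` value returned by the devicecode endpoint.

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

class DeviceCodePoller
{
    static async Task Main()
    {
        var http = new HttpClient();
        while (true)
        {
            var body = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["grant_type"] = "urn:ietf:params:oauth:grant-type:device_code",
                ["client_id"] = "6731de76-14a6-49ae-97bc-6eba6914391e",          // sample client ID
                ["device_code"] = "GMMhmHCXhWEzkobqIHGG_EnNYYsAkukHspeYUk9E8...", // from the devicecode response
            });
            var response = await http.PostAsync(
                "https://login.microsoftonline.com/common/oauth2/v2.0/token", body);
            using var json = JsonDocument.Parse(await response.Content.ReadAsStringAsync());

            if (response.IsSuccessStatusCode)
            {
                Console.WriteLine(json.RootElement.GetProperty("access_token").GetString());
                break;
            }
            // authorization_pending: the user hasn't finished signing in yet; keep polling.
            if (json.RootElement.GetProperty("error").GetString() != "authorization_pending")
                throw new Exception(json.RootElement.ToString());
            await Task.Delay(TimeSpan.FromSeconds(5));
        }
    }
}
```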
active-directory B2b Quickstart Add Guest Users Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/b2b-quickstart-add-guest-users-portal.md
Previously updated : 08/05/2020 Last updated : 06/18/2020
If you don't have an Azure subscription, create a [free account](https://azure
To complete the scenario in this tutorial, you need:
+ - A role that allows you to create users in your tenant directory, like the Global Administrator role or any of the limited administrator directory roles such as guest inviter or user administrator.
- A valid email account that you can add to your tenant directory, and that you can use to receive the test invitation email. ## Add a new guest user in Azure AD
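The portal flow in this quickstart can also be driven programmatically through the Microsoft Graph invitation API. The following is a minimal sketch, assuming the Microsoft.Graph and Azure.Identity packages and an account holding the `User.Invite.All` permission (or the Guest inviter role); the guest address and redirect URL are placeholders, and none of this is part of the quickstart itself.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Identity;
using Microsoft.Graph;

class InviteGuest
{
    static async Task Main()
    {
        // InteractiveBrowserCredential signs in the inviting admin.
        var credential = new InteractiveBrowserCredential();
        var graphClient = new GraphServiceClient(credential, new[] { "User.Invite.All" });

        var invitation = new Invitation
        {
            InvitedUserEmailAddress = "guest@example.com",      // placeholder guest address
            InviteRedirectUrl = "https://myapps.microsoft.com", // where the guest lands after redeeming
            SendInvitationMessage = true,                       // send the invitation email
        };

        Invitation result = await graphClient.Invitations.Request().AddAsync(invitation);
        Console.WriteLine(result.InviteRedeemUrl);
    }
}
```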
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new-archive.md
For more information, see [Upgrade to monthly active users billing model](../../
In October 2019, we've added these 35 new apps with Federation support to the app gallery:
-[In Case of Crisis – Mobile](../saas-apps/in-case-of-crisis-mobile-tutorial.md), [Juno Journey](../saas-apps/juno-journey-tutorial.md), [ExponentHR](../saas-apps/exponenthr-tutorial.md), [Tact](https://www.tact.ai/products/tact-assistant), [OpusCapita Cash Management](https://appsource.microsoft.com/product/web-apps/opuscapitagroupoy-1036255.opuscapita-cm), [Salestim](https://www.salestim.com/), [Learnster](../saas-apps/learnster-tutorial.md), [Dynatrace](../saas-apps/dynatrace-tutorial.md), [HunchBuzz](https://login.hunchbuzz.com/integrations/azure/process), [Freshworks](../saas-apps/freshworks-tutorial.md), [eCornell](../saas-apps/ecornell-tutorial.md), [ShipHazmat](../saas-apps/shiphazmat-tutorial.md), [Netskope Cloud Security](../saas-apps/netskope-cloud-security-tutorial.md), [Contentful](../saas-apps/contentful-tutorial.md), [Bindtuning](https://bindtuning.com/login), [HireVue Coordinate – Europe](https://www.hirevue.com/), [HireVue Coordinate - USOnly](https://www.hirevue.com/), [HireVue Coordinate - US](https://www.hirevue.com/), [WittyParrot Knowledge Box](https://wittyapi.wittyparrot.com/wittyparrot/api/provision/trail/signup), [Cloudmore](../saas-apps/cloudmore-tutorial.md), [Visit.org](../saas-apps/visitorg-tutorial.md), [Cambium Xirrus EasyPass Portal](https://login.xirrus.com/azure-signup), [Paylocity](../saas-apps/paylocity-tutorial.md), [Mail Luck!](../saas-apps/mail-luck-tutorial.md), [Teamie](https://theteamie.com/), [Velocity for Teams](https://velocity.peakup.org/teams/login), [SIGNL4](https://account.signl4.com/manage), [EAB Navigate IMPL](../saas-apps/eab-navigate-impl-tutorial.md), [ScreenMeet](https://console.screenmeet.com/), [Omega Point](https://pi.ompnt.com/), [Speaking Email for Intune (iPhone)](https://speaking.email/FAQ/98/email-access-via-microsoft-intune), [Speaking Email for Office 365 Direct (iPhone/Android)](https://speaking.email/FAQ/126/email-access-via-microsoft-office-365-direct), [ExactCare SSO](../saas-apps/exactcare-sso-tutorial.md), [iHealthHome Care Navigation System](https://ihealthnav.com/account/signin), [Qubie](https://qubie.azurewebsites.net/static/adminTab/authorize.html)
+[In Case of Crisis – Mobile](../saas-apps/in-case-of-crisis-mobile-tutorial.md), [Juno Journey](../saas-apps/juno-journey-tutorial.md), [ExponentHR](../saas-apps/exponenthr-tutorial.md), [Tact](https://www.tact.ai/products/tact-assistant), [OpusCapita Cash Management](https://appsource.microsoft.com/product/web-apps/opuscapitagroupoy-1036255.opuscapita-cm), [Salestim](https://www.salestim.com/), [Learnster](../saas-apps/learnster-tutorial.md), [Dynatrace](../saas-apps/dynatrace-tutorial.md), [HunchBuzz](https://login.hunchbuzz.com/integrations/azure/process), [Freshworks](../saas-apps/freshworks-tutorial.md), [eCornell](../saas-apps/ecornell-tutorial.md), [ShipHazmat](../saas-apps/shiphazmat-tutorial.md), [Netskope Cloud Security](../saas-apps/netskope-cloud-security-tutorial.md), [Contentful](../saas-apps/contentful-tutorial.md), [Bindtuning](https://bindtuning.com/login), [HireVue Coordinate – Europe](https://www.hirevue.com/), [HireVue Coordinate - USOnly](https://www.hirevue.com/), [HireVue Coordinate - US](https://www.hirevue.com/), [WittyParrot Knowledge Box](https://wittyapi.wittyparrot.com/wittyparrot/api/provision/trail/signup), [Cloudmore](../saas-apps/cloudmore-tutorial.md), [Visit.org](../saas-apps/visitorg-tutorial.md), [Cambium Xirrus EasyPass Portal](https://login.xirrus.com/azure-signup), [Paylocity](../saas-apps/paylocity-tutorial.md), [Mail Luck!](../saas-apps/mail-luck-tutorial.md), [Teamie](https://theteamie.com/), [Velocity for Teams](https://velocity.peakup.org/teams/login), [SIGNL4](https://account.signl4.com/manage), [EAB Navigate IMPL](../saas-apps/eab-navigate-impl-tutorial.md), [ScreenMeet](https://console.screenmeet.com/), [Omega Point](https://pi.ompnt.com/), [Speaking Email for Intune (iPhone)](https://speaking.email/FAQ/98/email-access-via-microsoft-intune), [Speaking Email for Office 365 Direct (iPhone/Android)](https://speaking.email/FAQ/126/email-access-via-microsoft-office-365-direct), [ExactCare SSO](../saas-apps/exactcare-sso-tutorial.md), [iHealthHome Care Navigation System](https://ihealthnav.com/account/signin), [Qubie](https://qubie.azurewebsites.net/static/adminTab/)
For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md).
active-directory What Is App Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/what-is-app-provisioning.md
Previously updated : 10/30/2020- Last updated : 06/21/2021+
active-directory What Is Hr Driven Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/what-is-hr-driven-provisioning.md
Last updated 10/30/2020-+
active-directory What Is Identity Lifecycle Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/what-is-identity-lifecycle-management.md
Last updated 10/30/2020-+
active-directory What Is Inter Directory Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/what-is-inter-directory-provisioning.md
Last updated 10/30/2020-+
active-directory What Is Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/what-is-provisioning.md
Last updated 10/30/2020-+
active-directory How To Connect Azure Ad Trust https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-azure-ad-trust.md
Title: Azure AD Connect - Manage AD FS trust with Azure AD using Azure AD Connect | Microsoft Docs description: Operational details of Azure AD trust handling by Azure AD connect.
-keywords: AD FS, ADFS, AD FS management, AAD Connect, Connect, Azure AD, trust, AAD, claim, claim, claim rules, issuance, transform, rules, backup, restore
documentationcenter: ''
## Overview
-Azure AD Connect can manage federation between on-premises Active Directory Federation Service (AD FS) and Azure AD. This article provides an overview of:
+When you federate your on-premises environment with Azure AD, you establish a trust relationship between the on-premises identity provider and Azure AD. Azure AD Connect can manage federation between on-premises Active Directory Federation Service (AD FS) and Azure AD. This article provides an overview of:
* The various settings configured on the trust by Azure AD Connect * The issuance transform rules (claim rules) set by Azure AD Connect * How to back-up and restore your claim rules between upgrades and configuration updates.
+* Best practice for securing and monitoring the AD FS trust with Azure AD
## Settings controlled by Azure AD Connect
You can restore the issuance transform rules using the suggested steps below
> [!NOTE] > Make sure that your additional rules do not conflict with the rules configured by Azure AD Connect.
+## Best practice for securing and monitoring the AD FS trust with Azure AD
+When you federate your AD FS with Azure AD, it is critical that the federation configuration (the trust relationship configured between AD FS and Azure AD) is monitored closely, and any unusual or suspicious activity is captured. To do so, we recommend setting up alerts so that you are notified whenever any changes are made to the federation configuration. To learn how to set up alerts, see [Monitor changes to federation configuration](how-to-connect-monitor-federation-changes.md).
+++ ## Next steps * [Manage and customize Active Directory Federation Services using Azure AD Connect](how-to-connect-fed-management.md)
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
na ms.devlang: na Previously updated : 02/16/2021 Last updated : 06/21/2021
We recommend that you harden your Azure AD Connect server to decrease the securi
- Ensure every machine has a unique local administrator password. For more information, see how [Local Administrator Password Solution (LAPS)](https://support.microsoft.com/help/3062591/microsoft-security-advisory-local-administrator-password-solution-laps) can configure unique random passwords on each workstation and server and store them in Active Directory, protected by an ACL. Only eligible authorized users can read or request the reset of these local administrator account passwords. You can obtain LAPS for use on workstations and servers from the [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=46899). Additional guidance for operating an environment with LAPS and privileged access workstations (PAWs) can be found in [Operational standards based on clean source principle](/windows-server/identity/securing-privileged-access/securing-privileged-access-reference-material#operational-standards-based-on-clean-source-principle). - Implement dedicated [privileged access workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/) for all personnel with privileged access to your organization's information systems. - Follow these [additional guidelines](/windows-server/identity/ad-ds/plan/security-best-practices/reducing-the-active-directory-attack-surface) to reduce the attack surface of your Active Directory environment.
+- Follow the guidance in [Monitor changes to federation configuration](how-to-connect-monitor-federation-changes.md) to set up alerts that monitor changes to the trust established between your IdP and Azure AD.
### SQL Server used by Azure AD Connect
active-directory How To Connect Monitor Federation Changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-monitor-federation-changes.md
+
+ Title: Monitor changes to federation configuration in Azure AD | Microsoft Docs
+description: This article explains how to monitor changes to your federation configuration with Azure AD.
+
+documentationcenter: ''
+++++ Last updated : 06/21/2021+++++
+# Monitor changes to federation configuration in your Azure AD
+
+When you federate your on-premises environment with Azure AD, you establish a trust relationship between the on-premises identity provider and Azure AD.
+
+Due to this established trust, Azure AD honors the security token issued by the on-premises identity provider after authentication, to grant access to resources protected by Azure AD.
+
+Therefore, it's critical that this trust (federation configuration) is monitored closely, and any unusual or suspicious activity is captured.
+
+To monitor the trust relationship, we recommend you set up alerts to be notified when changes are made to the federation configuration.
++
+## Set up alerts to monitor the trust relationship
+
+Follow these steps to set up alerts to monitor the trust relationship:
+
+1. [Configure Azure AD audit logs](../../active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md) to flow to an Azure Log Analytics Workspace.
+2. [Create an alert rule](https://docs.microsoft.com/azure/azure-monitor/alerts/alerts-log) that triggers based on an Azure AD log query.
+3. [Add an action group](https://docs.microsoft.com/azure/azure-monitor/alerts/action-groups) to the alert rule that gets notified when the alert condition is met.
+
+After the environment is configured, the data flows as follows:
+
+1. Azure AD logs are populated according to the activity in the tenant.
+2. The log information flows to the Azure Log Analytics workspace.
+3. A background job from Azure Monitor executes the log query based on the alert rule you configured in step 2 above.
+ ```kusto
+ AuditLogs
+ | extend TargetResource = parse_json(TargetResources)
+ | where ActivityDisplayName contains "Set federation settings on domain" or ActivityDisplayName contains "Set domain authentication"
+ | project TimeGenerated, SourceSystem, TargetResource[0].displayName, AADTenantId, OperationName, InitiatedBy, Result, ActivityDisplayName, ActivityDateTime, Type
+ ```
+
+ 4. If the result of the query matches the alert logic (that is, the number of results is greater than or equal to 1), then the action group kicks in. Let's assume that it kicked in, so the flow continues in step 5.
+ 5. Notification is sent to the action group selected while configuring the alert.
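Before wiring the query into an alert rule, you can sanity-check it against the workspace from code. The following is a minimal sketch, assuming the Azure.Monitor.Query and Azure.Identity client libraries and a Log Analytics workspace ID (neither is prescribed by this article):

```csharp
using System;
using System.Threading.Tasks;
using Azure;
using Azure.Identity;
using Azure.Monitor.Query;
using Azure.Monitor.Query.Models;

class FederationChangeQuery
{
    static async Task Main()
    {
        var client = new LogsQueryClient(new DefaultAzureCredential());

        // Same audit-log query the alert rule runs, trimmed to a few columns.
        string query = @"AuditLogs
| where ActivityDisplayName contains ""Set federation settings on domain""
    or ActivityDisplayName contains ""Set domain authentication""
| project TimeGenerated, OperationName, InitiatedBy, Result";

        Response<LogsQueryResult> result = await client.QueryWorkspaceAsync(
            "<LOG_ANALYTICS_WORKSPACE_ID>",           // placeholder workspace ID
            query,
            new QueryTimeRange(TimeSpan.FromDays(1)));

        // One or more rows means the alert condition (results >= 1) would fire.
        Console.WriteLine($"Federation configuration changes: {result.Value.Table.Rows.Count}");
    }
}
```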
++
+## Next steps
+
+- [Integrate Azure AD logs with Azure Monitor logs](../../active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)
+- [Create, view, and manage log alerts using Azure Monitor](https://docs.microsoft.com/azure/azure-monitor/alerts/alerts-log)
+- [Manage AD FS trust with Azure AD using Azure AD Connect](how-to-connect-azure-ad-trust.md)
+- [Best practices for securing Active Directory Federation Services](https://docs.microsoft.com/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs)
active-directory Concept Identity Protection Risks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/concept-identity-protection-risks.md
Previously updated : 05/27/2021 Last updated : 06/15/2021
These risks can be calculated in real-time or calculated offline using Microsoft
| Anonymous IP address | Real-time | This risk detection type indicates sign-ins from an anonymous IP address (for example, Tor browser or anonymous VPN). These IP addresses are typically used by actors who want to hide their login telemetry (IP address, location, device, etc.) for potentially malicious intent. |
| Atypical travel | Offline | This risk detection type identifies two sign-ins originating from geographically distant locations, where at least one of the locations may also be atypical for the user, given past behavior. Among several other factors, this machine learning algorithm takes into account the time between the two sign-ins and the time it would have taken for the user to travel from the first location to the second, indicating that a different user is using the same credentials. <br><br> The algorithm ignores obvious "false positives" contributing to the impossible travel conditions, such as VPNs and locations regularly used by other users in the organization. The system has an initial learning period of the earliest of 14 days or 10 logins, during which it learns a new user's sign-in behavior. |
| Malware linked IP address | Offline | This risk detection type indicates sign-ins from IP addresses infected with malware that is known to actively communicate with a bot server. This detection is determined by correlating IP addresses of the user's device against IP addresses that were in contact with a bot server while the bot server was active. |
+| Suspicious browser | Offline | Suspicious browser detection indicates anomalous behavior based on suspicious sign-in activity across multiple tenants from different countries in the same browser. |
| Unfamiliar sign-in properties | Real-time | This risk detection type considers past sign-in history (IP, Latitude / Longitude and ASN) to look for anomalous sign-ins. The system stores information about previous locations used by a user, and considers these "familiar" locations. The risk detection is triggered when the sign-in occurs from a location that's not already in the list of familiar locations. Newly created users will be in "learning mode" for a period of time in which unfamiliar sign-in properties risk detections will be turned off while our algorithms learn the user's behavior. The learning mode duration is dynamic and depends on how much time it takes the algorithm to gather enough information about the user's sign-in patterns. The minimum duration is five days. A user can go back into learning mode after a long period of inactivity. The system also ignores sign-ins from familiar devices, and locations that are geographically close to a familiar location. <br><br> We also run this detection for basic authentication (or legacy protocols). Because these protocols do not have modern properties such as client ID, there is limited telemetry to reduce false positives. We recommend that customers move to modern authentication. |
| Admin confirmed user compromised | Offline | This detection indicates an admin has selected 'Confirm user compromised' in the Risky users UI or using riskyUsers API. To see which admin has confirmed this user compromised, check the user's risk history (via UI or API). |
| Malicious IP address | Offline | This detection indicates sign-in from a malicious IP address. An IP address is considered malicious based on high failure rates because of invalid credentials received from the IP address or other IP reputation sources. |
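The "Admin confirmed user compromised" detection above can also be triggered programmatically through the riskyUsers API. Below is a minimal sketch of the raw Microsoft Graph call, assuming you already hold an access token with the `IdentityRiskyUser.ReadWrite.All` permission (token acquisition is omitted, and the IDs are placeholders):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class ConfirmCompromised
{
    static async Task Main()
    {
        var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", "<ACCESS_TOKEN>"); // placeholder token

        // Flag one or more users as compromised; Identity Protection raises
        // their user risk level to high.
        var body = new StringContent(
            @"{ ""userIds"": [ ""<USER_OBJECT_ID>"" ] }",              // placeholder object ID
            Encoding.UTF8, "application/json");

        var response = await http.PostAsync(
            "https://graph.microsoft.com/v1.0/identityProtection/riskyUsers/confirmCompromised",
            body);
        Console.WriteLine(response.StatusCode); // 204 NoContent on success
    }
}
```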
Risk detections like leaked credentials require the presence of password hashes
#### Where does Microsoft find leaked credentials?
-Microsoft finds leaked credentials in a variety of places, including:
+Microsoft finds leaked credentials in various places, including:
- Public paste sites such as pastebin.com and paste.ca where bad actors typically post such material. This location is most bad actors' first stop on their hunt to find stolen credentials. - Law enforcement agencies.
Microsoft finds leaked credentials in a variety of places, including:
#### Why aren't I seeing any leaked credentials?
-Leaked credentials are processed anytime Microsoft finds a new, publicly available batch. Due to the sensitive nature, the leaked credentials are deleted shortly after processing. Only new leaked credentials found after you enable password hash synchronization (PHS) will be processed against your tenant. Verifying against previously found credential pairs is not performed.
+Leaked credentials are processed anytime Microsoft finds a new, publicly available batch. Because of the sensitive nature, the leaked credentials are deleted shortly after processing. Only new leaked credentials found after you enable password hash synchronization (PHS) will be processed against your tenant. Verifying against previously found credential pairs isn't done.
#### I haven't seen any leaked credential risk events for quite some time?
Credentials are processed immediately after they have been found, normally in mu
### Locations
-Location in risk detections are determined by IP address lookup.
+Location in risk detections is determined by IP address lookup.
## Next steps
active-directory Overview Identity Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/overview-identity-protection.md
Previously updated : 01/05/2021 Last updated : 06/15/2021
Identity Protection is a tool that allows organizations to accomplish three key tasks: -- Automate the detection and remediation of identity-based risks.-- Investigate risks using data in the portal.-- Export risk detection data to third-party utilities for further analysis.
+- [Automate the detection and remediation of identity-based risks](howto-identity-protection-configure-risk-policies.md).
+- [Investigate risks](howto-identity-protection-investigate-risk.md) using data in the portal.
+- [Export risk detection data to your SIEM](../../sentinel/connect-azure-ad-identity-protection.md).
Identity Protection uses the learnings Microsoft has acquired from its position in organizations with Azure AD, the consumer space with Microsoft Accounts, and in gaming with Xbox to protect your users. Microsoft analyzes 6.5 trillion signals per day to identify and protect customers from threats.
In his [blog post in October of 2018](https://techcommunity.microsoft.com/t5/Azu
## Risk detection and remediation
-Identity Protection identifies risks in the following classifications:
-
-| Risk detection type | Description |
-| | |
-| Anonymous IP address | Sign in from an anonymous IP address (for example: Tor browser, anonymizer VPNs). |
-| Atypical travel | Sign in from an atypical location based on the user's recent sign-ins. |
-| Malware linked IP address | Sign in from a malware linked IP address. |
-| Unfamiliar sign-in properties | Sign in with properties we've not seen recently for the given user. |
-| Leaked Credentials | Indicates that the user's valid credentials have been leaked. |
-| Password spray | Indicates that multiple usernames are being attacked using common passwords in a unified, brute-force manner. |
-| Azure AD threat intelligence | Microsoft's internal and external threat intelligence sources have identified a known attack pattern. |
-| New country | This detection is discovered by [Microsoft Cloud App Security (MCAS)](/cloud-app-security/anomaly-detection-policy#activity-from-infrequent-country). |
-| Activity from anonymous IP address | This detection is discovered by [Microsoft Cloud App Security (MCAS)](/cloud-app-security/anomaly-detection-policy#activity-from-anonymous-ip-addresses). |
-| Suspicious inbox forwarding | This detection is discovered by [Microsoft Cloud App Security (MCAS)](/cloud-app-security/anomaly-detection-policy#suspicious-inbox-forwarding). |
-
-More detail on these risks and how/when they are calculated can be found in the article, [What is risk](concept-identity-protection-risks.md).
+Identity Protection identifies risks of many types, including:
+
+- Anonymous IP address use
+- Atypical travel
+- Malware linked IP address
+- Unfamiliar sign-in properties
+- Leaked credentials
+- Password spray
+- and more...
+
+More detail on these and other risks, including how and when they're calculated, can be found in the article [What is risk](concept-identity-protection-risks.md).
The risk signals can trigger remediation efforts such as requiring users to perform Azure AD Multi-Factor Authentication or reset their password using self-service password reset, or blocking access until an administrator takes action.
active-directory Add Application Portal Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/add-application-portal-configure.md
To edit the application properties:
| Enabled for users to sign in? | User assignment required? | Visible to users? | Behavior for users who have either been assigned to the app or not. |
|---|---|---|---|
| Yes | Yes | Yes | Assigned users can see the app and sign in.<br>Unassigned users cannot see the app and cannot sign in. |
- | Yes | Yes | No | Assigned uses cannot see the app but they can sign in.<br>Unassigned users cannot see the app and cannot sign in. |
+ | Yes | Yes | No | Assigned users cannot see the app but they can sign in.<br>Unassigned users cannot see the app and cannot sign in. |
| Yes | No | Yes | Assigned users can see the app and sign in.<br>Unassigned users cannot see the app but can sign in. |
| Yes | No | No | Assigned users cannot see the app but can sign in.<br>Unassigned users cannot see the app but can sign in. |
| No | Yes | Yes | Assigned users cannot see the app and cannot sign in.<br>Unassigned users cannot see the app and cannot sign in. |
If you're not going to continue with the quickstart series, then consider deleti
Advance to the next article to learn how to assign users to the app. > [!div class="nextstepaction"]
-> [Assign users to an app](add-application-portal-assign-users.md)
+> [Assign users to an app](add-application-portal-assign-users.md)
active-directory Astra Schedule Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/astra-schedule-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://www.aaiscloud.com/<CUSTOMER_INSTANCE>` > [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-On URL. Contact [Astra Schedule Client support team](mailto:cloudoperations@aais.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-On URL. Contact [Astra Schedule Client support team](https://help.adastra.live) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
active-directory Cisco Anyconnect https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/cisco-anyconnect.md
Follow these steps to enable Azure AD SSO in the Azure portal.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Set up single sign-on with SAML** page, enter the values for the following fields:
+1. On the **Set up single sign-on with SAML** page, enter the values for the following fields (note that the values are case-sensitive):
- a. In the **Identifier** text box, type a URL using the following pattern:
- `< YOUR CISCO ANYCONNECT VPN VALUE >`
+ 1. In the **Identifier** text box, type a URL using the following pattern:
+ `https://*.YourCiscoServer.com/saml/sp/metadata/TGTGroup`
- b. In the **Reply URL** text box, type a URL using the following pattern:
- `< YOUR CISCO ANYCONNECT VPN VALUE >`
+ 1. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://YOUR_CISCO_ANYCONNECT_FQDN/+CSCOE+/saml/sp/acs?tgname=TGTGroup`
> [!NOTE]
    - > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Cisco AnyConnect Client support team](https://www.cisco.com/c/en/us/support/index.html) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
    + > For clarification about these values, contact the [Cisco AnyConnect Client support team](https://www.cisco.com/c/en/us/support/index.html) (Cisco TAC). Update these values with the actual Identifier and Reply URL provided by Cisco TAC. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate file and save it on your computer.
In this section, you test your Azure AD single sign-on configuration with follow
* Click on Test this application in Azure portal and you should be automatically signed in to the Cisco AnyConnect for which you set up the SSO * You can use Microsoft Access Panel. When you click the Cisco AnyConnect tile in the Access Panel, you should be automatically signed in to the Cisco AnyConnect for which you set up the SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
-## Next Steps
+## Next steps
-Once you configure Cisco AnyConnect you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
+Once you configure Cisco AnyConnect you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Darwinbox Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/darwinbox-tutorial.md
Previously updated : 08/23/2019 Last updated : 06/18/2021
In this tutorial, you'll learn how to integrate Darwinbox with Azure Active Dire
* Control in Azure AD who has access to Darwinbox. * Enable your users to be automatically signed-in to Darwinbox with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
## Prerequisites
To get started, you need the following items:
> [!NOTE] > This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud. - ## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Darwinbox supports **SP** initiated SSO
+* Darwinbox supports **SP** initiated SSO.
-## Adding Darwinbox from the gallery
+## Add Darwinbox from the gallery
To configure the integration of Darwinbox into Azure AD, you need to add Darwinbox from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add a new application, select **New application**. 1. In the **Add from the gallery** section, type **Darwinbox** in the search box. 1. Select **Darwinbox** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant. -
-## Configure and test Azure AD single sign-on for Darwinbox
+## Configure and test Azure AD SSO for Darwinbox
Configure and test Azure AD SSO with Darwinbox using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Darwinbox.
-To configure and test Azure AD SSO with Darwinbox, complete the following building blocks:
+To configure and test Azure AD SSO with Darwinbox, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with Darwinbox, complete the following buildi
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Darwinbox** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Darwinbox** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, enter the values for the following fields:
-
- 1. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<SUBDOMAIN>.darwinbox.in/`
+1. On the **Basic SAML Configuration** section, perform the following steps:
1. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
- `https://<SUBDOMAIN>.darwinbox.in/adfs/module.php/saml/sp/metadata.php/<CUSTOMID>`
+ `https://<SUBDOMAIN>.darwinbox.in/adfs/module.php/saml/sp/metadata.php/<CUSTOM_ID>`
+
+ 1. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.darwinbox.in/`
> [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [Darwinbox Client support team](https://darwinbox.com/contact-us.php) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [Darwinbox Client support team](https://darwinbox.com/contact-us.php) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Darwinbox**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, you create a user called B.Simon in Darwinbox. Work with [Darw
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to Darwinbox Sign-on URL where you can initiate the login flow.
+
+* Go to Darwinbox Sign-on URL directly and initiate the login flow from there.
-When you click the Darwinbox tile in the Access Panel, you should be automatically signed in to the Darwinbox for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* You can use Microsoft My Apps. When you click the Darwinbox tile in My Apps, you're redirected to the Darwinbox Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
## Test SSO for Darwinbox (Mobile) 1. Open the Darwinbox mobile application. Click on **Enter Organization URL**, enter your organization URL in the textbox, and click on the arrow button.
- ![Screenshot that shows the "Darwinbox" mobile app with the "Enter Organization U R L" selected, and an example organization and "Arrow" button highlighted.](media/darwinbox-tutorial/DarwinboxMobile01.jpg)
+ ![Screenshot that shows the "Darwinbox" mobile app with the "Enter Organization U R L" selected, and an example organization and "Arrow" button highlighted.](media/darwinbox-tutorial/login.png)
1. If you have multiple domains, click on your domain.
- ![Screenshot that shows the "Choose your domain" screen with an example domain selected.](media/darwinbox-tutorial/DarwinboxMobile02.jpg)
+ ![Screenshot that shows the "Choose your domain" screen with an example domain selected.](media/darwinbox-tutorial/domain.png)
1. Enter your Azure AD email into the Darwinbox application and click **Next**.
- ![Screenshot that shows the "Sign in" screen with the "Next" button highlighted.](media/darwinbox-tutorial/DarwinboxMobile03.jpg)
+ ![Screenshot that shows the "Sign in" screen with the "Next" button highlighted.](media/darwinbox-tutorial/email.png)
1. Enter your Azure AD password into the Darwinbox application and click **Sign in**.
- ![Screenshot that shows the "Enter password" screen with the "Next" button highlighted.](media/darwinbox-tutorial/DarwinboxMobile04.jpg)
+ ![Screenshot that shows the "Sign into options" screen with the "Next" button highlighted.](media/darwinbox-tutorial/account.png)
1. Finally, after successful sign-in, the application home page is displayed.
- ![Darwinbox mobile app](media/darwinbox-tutorial/DarwinboxMobile05.jpg)
-
-## Additional resources
--- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)--- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+ ![Darwinbox mobile app](media/darwinbox-tutorial/application.png)
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Darwinbox with Azure AD](https://aad.portal.azure.com/)
+Once you configure Darwinbox you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Documo Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/documo-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure.
+1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure. If your Documo account has a custom domain, you must also have a custom API domain for SSO to work. Replace the default values with your custom API domain, for example, `https://mycustomapidomain.com` and `https://mycustomapidomain.com/assert`.
1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- In the **Sign-on URL** text box, type the URL:
+ In the **Sign-on URL** text box, type the URL:
`https://app.documo.com/sso` 1. Click **Save**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
d. Enter the value in the **Field Name in SAML Token containing Identity email** text box.
- e. Open the downloaded **Federation Metadata XML** from the Azure portal into Notepad and paste the content into the **Signer Certificate** textbox.
   + e. Open the downloaded **Federation Metadata XML** from the Azure portal into Notepad. Find the `<X509Certificate>` tag and paste its content into the **Signer Certificate** textbox (or extract it programmatically, as in the sketch after these steps).
f. Click **Submit**.
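Step e asks you to pull the base64 certificate out of the federation metadata by hand. As an alternative, here's a minimal C# sketch that extracts it; the file name is a placeholder, and it relies on SAML metadata storing the signing certificate in the XML digital signature namespace:

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

class ExtractSigningCert
{
    static void Main()
    {
        // Load the federation metadata downloaded from the Azure portal.
        XDocument metadata = XDocument.Load("FederationMetadata.xml"); // placeholder path
        XNamespace ds = "http://www.w3.org/2000/09/xmldsig#";

        // The first <X509Certificate> element holds the base64-encoded signing certificate.
        string cert = metadata.Descendants(ds + "X509Certificate").First().Value;
        Console.WriteLine(cert);
    }
}
```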
active-directory Fabric Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/fabric-tutorial.md
To configure and test Azure AD SSO with Fabric, perform the following steps:
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on. 1. **[Configure Fabric SSO](#configure-fabric-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Fabric test user](#create-fabric-test-user)** - to have a counterpart of B.Simon in Fabric that is linked to the Azure AD representation of user.
+ 1. **[Create Fabric roles](#create-fabric-roles)** - to have a counterpart of B.Simon in Fabric that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+1. In the **Basic SAML Configuration** section, enter the values for the following fields:
- a. In the **Identifier** text box, type a URL using the following pattern:
- `http://<HOSTNAME>/primary`
+ 1. In the **Identifier** text box, type a URL using the following pattern:
+ `https://<HOSTNAME>`
- b. In the **Reply URL** text box, type a URL using the following pattern:
- `https://<HOSTNAME>:<PORT>/api/authenticate`
+ 1. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<HOSTNAME>:<PORT>/api/authenticate`
- c. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<HOSTNAME>:<PORT>`
+ 1. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<HOSTNAME>:<PORT>`
> [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Fabric Client support team](mailto:support@k2view.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact K2View COE team to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. ![The Certificate download link](common/certificatebase64.png)
-1. On the **Set up Fabric** section, copy the appropriate URL(s) based on your requirement.
+1. In the **Set up Fabric** section, copy the appropriate URL(s) based on your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)
+1. In the **Token encryption** section, select **Import Certificate** and upload the Fabric certificate file. Contact the K2View COE team to get it.
+ ### Create an Azure AD test user In this section, you'll create a test user in the Azure portal called B.Simon.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Fabric SSO
-To configure single sign-on on **Fabric** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Fabric support team](mailto:support@k2view.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on the **Fabric** side, send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the K2View COE support team. The team configures the setting so that the SAML SSO connection is set properly on both sides.
+
+For more information, see *Fabric SAML Configuration* and *Azure AD SAML Setup Guide* in the [K2view Knowledge Base](https://support.k2view.com/knowledge-base.html).
-### Create Fabric test user
+### Create Fabric roles
-In this section, you create a user called Britta Simon in Fabric. Work with [Fabric support team](mailto:support@k2view.com) to add the users in the Fabric platform. Users must be created and activated before you use single sign-on.
+Work with the K2View COE support team to set Fabric roles that are matched to the Azure AD groups, and which are relevant to the users who are going to use Fabric. You'll provide the Fabric team the group IDs, because they are sent in the SAML response.
## Test SSO In this section, you test your Azure AD single sign-on configuration with following options.
-* Click on **Test this application** in Azure portal. This will redirect to Fabric Sign-on URL where you can initiate the login flow.
+* In the Azure portal, select **Test this application**. You'll be redirected to the Fabric sign-on URL, where you can initiate the login flow.
-* Go to Fabric Sign-on URL directly and initiate the login flow from there.
+* Go to the Fabric sign-on URL directly and initiate the login flow from there.
-* You can use Microsoft My Apps. When you click the Fabric tile in the My Apps, this will redirect to Fabric Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+* You can use Microsoft My Apps. When you select the **Fabric** tile in the My Apps portal, you'll be redirected to the Fabric sign-on URL. For more information about the My Apps portal, see [Introduction to the My Apps portal](../user-help/my-apps-portal-end-user-access.md).
## Next steps
active-directory Idrive360 Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/idrive360-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with IDrive360 | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and IDrive360.
++++++++ Last updated : 06/18/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with IDrive360
+
+In this tutorial, you'll learn how to integrate IDrive360 with Azure Active Directory (Azure AD). When you integrate IDrive360 with Azure AD, you can:
+
+* Control in Azure AD who has access to IDrive360.
+* Enable your users to be automatically signed-in to IDrive360 with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* IDrive360 single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* IDrive360 supports **SP and IDP** initiated SSO.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Add IDrive360 from the gallery
+
+To configure the integration of IDrive360 into Azure AD, you need to add IDrive360 from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **IDrive360** in the search box.
+1. Select **IDrive360** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for IDrive360
+
+Configure and test Azure AD SSO with IDrive360 using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in IDrive360.
+
+To configure and test Azure AD SSO with IDrive360, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure IDrive360 SSO](#configure-idrive360-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create IDrive360 test user](#create-idrive360-test-user)** - to have a counterpart of B.Simon in IDrive360 that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **IDrive360** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure.
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://www.idrive360.com/enterprise/sso`
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (PEM)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificate-base64-download.png)
+
+1. On the **Set up IDrive360** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
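+
+If you prefer to script this step, the same test user can be created with the Azure CLI. This is a minimal sketch, not part of the portal flow above; the UPN domain and password are placeholders that you must replace with values valid for your tenant.
+
+```azurecli-interactive
+# Create the B.Simon test user (placeholder UPN domain and password; replace both)
+az ad user create \
+    --display-name "B.Simon" \
+    --user-principal-name "B.Simon@contoso.com" \
+    --password "<strong-password>"
+```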
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to IDrive360.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **IDrive360**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
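+
+If you want to automate this assignment instead, it can be done through Microsoft Graph. This is a sketch only: `<sp-object-id>` and `<user-object-id>` are placeholders for the service principal and user object IDs, and the all-zeros `appRoleId` refers to the Default Access role.
+
+```azurecli-interactive
+# Assign the user to the app's Default Access role via Microsoft Graph (placeholder IDs)
+az rest --method post \
+    --uri "https://graph.microsoft.com/v1.0/servicePrincipals/<sp-object-id>/appRoleAssignedTo" \
+    --body '{"principalId": "<user-object-id>", "resourceId": "<sp-object-id>", "appRoleId": "00000000-0000-0000-0000-000000000000"}'
+```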
+
+## Configure IDrive360 SSO
+
+1. Log in to your IDrive360 company site as an administrator.
+
+2. Go to **Settings** > **Single Sign-on (SSO)** and perform the following steps.
+
+ ![Single Sign-on](./media/idrive360-tutorial/settings.png "Single Sign-on")
+
+ a. In the **SSO Name** textbox, type a valid name.
+
+    b. In the **Issuer URL** textbox, paste the **Azure AD Identifier** value, which you copied from the Azure portal.
+
+    c. In the **SSO Endpoint** textbox, paste the **Login URL** value, which you copied from the Azure portal.
+
+    d. Click **Upload Certificate** to upload the **Certificate (PEM)** file, which you downloaded from the Azure portal.
+
+ e. Click **Configure Single Sign-On**.
+
+### Create IDrive360 test user
+
+1. In a different web browser window, sign in to your IDrive360 company site as an administrator.
+
+2. Navigate to the **Users** tab and click **Add User**.
+
+ ![Users](./media/idrive360-tutorial/add-user.png "Users")
+
+3. In the **Create new user(s)** section, perform the following steps.
+
+ ![Create Users](./media/idrive360-tutorial/new-user.png "Create Users")
+
+    a. Enter a valid **Email Address** in the **Email** textbox.
+
+ b. Click **Create**.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the IDrive360 Sign-on URL where you can initiate the login flow.
+
+* Go to the IDrive360 Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the IDrive360 for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the IDrive360 tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the IDrive360 for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure IDrive360, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Intacct Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/intacct-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, enter the values for the following fields:
- In the **Reply URL** text box, type a URL:
- `https://www.intacct.com/ia/acct/sso_response.phtml`
+ In the **Reply URL** text box, add the following URLs:
+ `https://www.intacct.com/ia/acct/sso_response.phtml` (Select as the default.)
+ `https://www.p-02.intacct.com/ia/acct/sso_response.phtml`
+ `https://www.p-03.intacct.com/ia/acct/sso_response.phtml`
+ `https://www.p-04.intacct.com/ia/acct/sso_response.phtml`
+ `https://www.p-05.intacct.com/ia/acct/sso_response.phtml`
-1. Sage Intacct application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. Click **Edit** icon to open User Attributes dialog..
+1. The Sage Intacct application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. Click the **Edit** icon to open the User Attributes dialog.
![image](common/edit-attribute.png)
In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
-Once you configure Sage Intacct you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
+Once you configure Sage Intacct, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Iprova Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/iprova-tutorial.md
To configure and test Azure AD SSO with Zenya, perform the following steps:
In this section, you retrieve information from Zenya to configure Azure AD single sign-on.
-1. Open a web browser, and go to the **SAML2 info** page in Zenya by using the following URL patterns:
+1. Open a web browser and go to the **SAML2 info** page in Zenya by using the following URL patterns:
- `https://<SUBDOMAIN>.iprova.nl/saml2info`
- `https://<SUBDOMAIN>.iprova.be/saml2info`
- `https://<SUBDOMAIN>.iprova.eu/saml2info`
+ `https://<SUBDOMAIN>.zenya.work/saml2info`
+ `https://<SUBDOMAIN>.iprova.nl/saml2info`
+ `https://<SUBDOMAIN>.iprova.be/saml2info`
+ `https://<SUBDOMAIN>.iprova.eu/saml2info`
![View the Zenya SAML2 info page](media/iprova-tutorial/information.png)
active-directory Jobscore Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/jobscore-tutorial.md
To configure Azure AD single sign-on with JobScore, perform the following steps:
![JobScore Domain and URLs single sign-on information](common/sp-signonurl.png) In the **Sign-on URL** text box, type a URL using the following pattern:
- `https://hire.jobscore.com/auth/adfs/<company name>`
+ `https://hire.jobscore.com/auth/adfs/<company id>`
> [!NOTE] > The value is not real. Update the value with the actual Sign-On URL. Contact [JobScore Client support team](mailto:support@jobscore.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
When you click the JobScore tile in the Access Panel, you should be automaticall
- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) -- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
active-directory Logzio Cloud Observability For Engineers Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/logzio-cloud-observability-for-engineers-tutorial.md
Previously updated : 04/08/2020 Last updated : 06/16/2021
In this tutorial, you'll learn how to integrate Logz.io - Azure AD Integration w
* Enable your users to be automatically signed-in to Logz.io - Azure AD Integration with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Logz.io - Azure AD Integration supports **IDP** initiated SSO
-* Once you configure Logz.io - Azure AD Integration you can enforce session control, which protect exfiltration and infiltration of your organizationΓÇÖs sensitive data in real-time. Session control extend from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
+* Logz.io - Azure AD Integration supports **IDP** initiated SSO.
-## Adding Logz.io - Azure AD Integration from the gallery
+## Add Logz.io - Azure AD Integration from the gallery
To configure the integration of Logz.io - Azure AD Integration into Azure AD, you need to add Logz.io - Azure AD Integration from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **Logz.io - Azure AD Integration** in the search box. 1. Select **Logz.io - Azure AD Integration** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Logz.io - Azure AD Integration
+## Configure and test Azure AD SSO for Logz.io - Azure AD Integration
Configure and test Azure AD SSO with Logz.io - Azure AD Integration using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Logz.io - Azure AD Integration.
-To configure and test Azure AD SSO with Logz.io - Azure AD Integration, complete the following building blocks:
+To configure and test Azure AD SSO with Logz.io - Azure AD Integration, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with Logz.io - Azure AD Integration, complete
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Logz.io - Azure AD Integration** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Logz.io - Azure AD Integration** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Set up single sign-on with SAML** page, enter the values for the following fields:
+1. On the **Set up single sign-on with SAML** page, perform the following steps:
- a. In the **Identifier** text box, type a URL using the following pattern:
+ a. In the **Identifier** text box, type a value using the following pattern:
`urn:auth0:logzio:CONNECTION-NAME` b. In the **Reply URL** text box, type a URL using the following pattern:
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Logz.io - Azure AD Integration**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, you create a user called Britta Simon in Logz.io - Azure AD Int
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Logz.io - Azure AD Integration tile in the Access Panel, you should be automatically signed in to the Logz.io - Azure AD Integration for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
-
-## Additional resources
--- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)--- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Logz.io - Azure AD Integration for which you set up the SSO.
-- [Try Logz.io - Azure AD Integration with Azure AD](https://aad.portal.azure.com/)
+* You can use Microsoft My Apps. When you click the Logz.io Azure AD Integration tile in the My Apps, you should be automatically signed in to the Logz.io Azure AD Integration for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)
+## Next steps
-- [How to protect Logz.io - Azure AD Integration with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)
+Once you configure Logz.io - Azure AD Integration, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Loop Flow Crm Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/loop-flow-crm-tutorial.md
Previously updated : 09/24/2020 Last updated : 06/16/2021
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Loop Flow CRM supports **SP and IDP** initiated SSO
+* Loop Flow CRM supports **SP and IDP** initiated SSO.
-## Adding Loop Flow CRM from the gallery
+## Add Loop Flow CRM from the gallery
To configure the integration of Loop Flow CRM into Azure AD, you need to add Loop Flow CRM from the gallery to your list of managed SaaS apps.
To configure the integration of Loop Flow CRM into Azure AD, you need to add Loo
1. In the **Add from the gallery** section, type **Loop Flow CRM** in the search box. 1. Select **Loop Flow CRM** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. - ## Configure and test Azure AD SSO for Loop Flow CRM Configure and test Azure AD SSO with Loop Flow CRM using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Loop Flow CRM.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the **Loop Flow CRM** application integration page, find the **Manage** section and select **single sign-on**. 1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
a. In the **Identifier** text box, type a URL using the following pattern: `https://<CUSTOMER_NAME>.loopworks.com`
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up single sign-on with SAML** page, In the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer. ![The Certificate download link](common/copy-metadataurl.png)+ ### Create an Azure AD test user In this section, you'll create a test user in the Azure portal called B.Simon.
In this section, you test your Azure AD single sign-on configuration with follow
#### IDP initiated:
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the Loop Flow CRM for which you set up the SSO
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Loop Flow CRM for which you set up the SSO.
-You can also use Microsoft Access Panel to test the application in any mode. When you click the Loop Flow CRM tile in the Access Panel, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Loop Flow CRM for which you set up the SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+You can also use Microsoft My Apps to test the application in any mode. When you click the Loop Flow CRM tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Loop Flow CRM for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
active-directory Postman Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/postman-tutorial.md
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Postman SSO
-To configure single sign-on on **Postman** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Postman support team](mailto:help@getpostman.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on the **Postman** side, you need to upload the downloaded **Federation Metadata XML** and configure the copied URLs from the Azure portal in Postman. To learn how to configure Postman SSO, see the [step-by-step guide](https://learning.postman.com/docs/administration/sso/admin-sso/).
### Create Postman test user
-In this section, a user called Britta Simon is created in Postman. Postman supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Postman, a new one is created after authentication.
+In this section, a user called Britta Simon is created in Postman. Postman supports just-in-time user provisioning, which can be enabled by selecting the [Automatically add new users](https://learning.postman.com/docs/administration/sso/admin-sso/#automatically-adding-new-users) checkbox. When this option is enabled, if a user doesn't already exist in Postman, a new one is created after authentication.
## Test SSO
active-directory Readcube Papers Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/readcube-papers-tutorial.md
Previously updated : 06/03/2021 Last updated : 06/07/2021
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure ReadCube Papers SSO
-To configure single sign-on on **ReadCube Papers** side, you need to send the **App Federation Metadata Url** to [ReadCube Papers support team](mailto:support@readcube.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on the **ReadCube Papers** side, you need to send the **App Federation Metadata URL** to the [ReadCube Papers support team](mailto:sso-support@readcube.com). They change this setting so that the SAML SSO connection works properly on both sides.
### Create ReadCube Papers test user
In this section, a user called Britta Simon is created in ReadCube Papers. ReadC
In this section, you test your Azure AD single sign-on configuration with the following options.
+> [!NOTE]
+> Before testing, please confirm with the [ReadCube Papers support team](mailto:sso-support@readcube.com) that SSO is set up on the ReadCube side.
+ * Click on **Test this application** in Azure portal. This will redirect to ReadCube Papers Sign-on URL where you can initiate the login flow. * Go to ReadCube Papers Sign-on URL directly and initiate the login flow from there.
active-directory Sailpoint Identitynow Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/sailpoint-identitynow-tutorial.md
In this tutorial, you'll learn how to integrate SailPoint IdentityNow with Azure
To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* SailPoint IdentityNow active subscription. If you do not have IdentityNow, please contact [SailPoint IdentityNow support team](mailto:support@sailpoint.com).
+* SailPoint IdentityNow active subscription. If you do not have IdentityNow, please contact [SailPoint IdentityNow support team](mailto:support@sailpoint.com).
## Scenario description
active-directory Samanage Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/samanage-provisioning-tutorial.md
If you select the **Sync all users and groups** option and configure a value for
## Change log
-* 09/14/2020 - Changed the company name in two SaaS tutorials from Samanage to SolarWinds Service Desk (previously Samanage) per https://github.com/ravitmorales.
+* 09/14/2020 - Changed the company name in two SaaS tutorials from Samanage to SolarWinds Service Desk (previously Samanage) per `https://github.com/ravitmorales`.
* 04/22/2020 - Updated authorization method from basic auth to long lived secret token. ## Additional resources
active-directory Sharepoint On Premises Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/sharepoint-on-premises-tutorial.md
$realm = "urn:sharepoint:federation"
$loginUrl = "https://login.microsoftonline.com/dc38a67a-f981-4e24-ba16-4443ada44484/wsfed" # Define the claim types used for the authorization
-$userIdentifier = New-SPClaimTypeMapping -IncomingClaimType `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name` -IncomingClaimTypeDisplayName "name" -LocalClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn"
+$userIdentifier = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name" -IncomingClaimTypeDisplayName "name" -LocalClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn"
$role = New-SPClaimTypeMapping "http://schemas.microsoft.com/ws/2008/06/identity/claims/role" -IncomingClaimTypeDisplayName "Role" -SameAsIncoming # Let SharePoint trust the Azure AD signing certificate
active-directory Standard For Success Accreditation Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/standard-for-success-accreditation-tutorial.md
Previously updated : 05/21/2021 Last updated : 06/18/2021
In this tutorial, you configure and test Azure AD SSO in a test environment.
* Standard for Success Accreditation supports **SP and IDP** initiated SSO.
-> [!NOTE]
-> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
- ## Add Standard for Success Accreditation from the gallery To configure the integration of Standard for Success Accreditation into Azure AD, you need to add Standard for Success Accreditation from the gallery to your list of managed SaaS apps.
To configure the integration of Standard for Success Accreditation into Azure AD
1. In the **Add from the gallery** section, type **Standard for Success Accreditation** in the search box. 1. Select **Standard for Success Accreditation** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant. - ## Configure and test Azure AD SSO for Standard for Success Accreditation Configure and test Azure AD SSO with Standard for Success Accreditation using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Standard for Success Accreditation.
Follow these steps to enable Azure AD SSO in the Azure portal.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
- In the **Reply URL** text box, type a URL using the following pattern:
+ a. In the **Identifier** text box, type a value using the following pattern:
+ `api://<ApplicationId>`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
`https://edu.sfsed.com/access/saml_consume?did=<INSTITUTION-ID>` 1. Click **Set additional URLs** and perform the following steps if you wish to configure the application in **SP** initiated mode:
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://edu.sfsed.com/access/saml_consume?did=<INSTITUTION-ID>` > [!NOTE]
- > These values are not real. Update these values with the actual Reply URL, Sign-on URL and Relay State. Contact [Standard for Success Accreditation Client support team](mailto:help_he@standardforsuccess.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL, Sign-on URL and Relay State. Contact [Standard for Success Accreditation Client support team](mailto:help_he@standardforsuccess.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. In the **SAML Signing Certificate** section, click **Edit** button to open **SAML Signing Certificate** dialog.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up Standard for Success Accreditation** section, copy the appropriate URL(s) based on your requirement. ![Copy configuration URLs](common/copy-configuration-urls.png)+ ### Create an Azure AD test user In this section, you'll create a test user in the Azure portal called B.Simon.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
d. Scroll to the bottom and click **Create User**. - ## Test SSO In this section, you test your Azure AD single sign-on configuration with the following options.
In this section, you test your Azure AD single sign-on configuration with follow
You can also use Microsoft My Apps to test the application in any mode. When you click the Standard for Success Accreditation tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Standard for Success Accreditation for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). - ## Next steps Once you configure Standard for Success Accreditation, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).--
active-directory Tribeloo Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/tribeloo-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Tribeloo for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Tribeloo.
+
+documentationcenter: ''
+
+writer: Zhchia
++
+ms.assetid: d1063ef2-5d39-4480-a1e2-f58ebe7f98c3
+++
+ na
+ms.devlang: na
+ Last updated : 06/07/2021+++
+# Tutorial: Configure Tribeloo for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Tribeloo and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Tribeloo](https://www.tribeloo.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in Tribeloo.
+> * Remove users in Tribeloo when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Tribeloo.
+> * [Single sign-on](tribeloo-tutorial.md) to Tribeloo (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator).
+* A [Tribeloo](https://www.tribeloo.com/) tenant.
+* A user account in Tribeloo with Admin permissions.
++
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Tribeloo](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Tribeloo to support provisioning with Azure AD
+
+Navigate to the [Tribeloo app](https://app.tribeloo.com/) and log in as a user with Admin permissions.
+1. Using the side menu (1), navigate to **Admin** (2), and select **User management** (3).
+
+ ![Access User Management](media/tribeloo-provisioning-tutorial/tribeloo-user-management.png)
+
+1. Select the **User provisioning** (1) tab. On this tab, you have access to the Tribeloo information that you need to configure the Azure AD integration:
+    1. **SCIM base url** (2)
+    1. **SCIM Bearer token** (3)
+1. Copy these values to the clipboard and paste them into the corresponding Azure AD fields (see Step 5). The Azure AD fields are named **Tenant URL** and **Secret Token**, respectively.
+
+ ![Tribeloo Provisioning Parameters](media/tribeloo-provisioning-tutorial/tribeloo-provisioning-parameters.png)
+
+1. On the **User provisioning** tab, you can now click the **Enable User provisioning** (1) button to enable user provisioning in Tribeloo.
+
+ ![Tribeloo Enable Provisioning](media/tribeloo-provisioning-tutorial/tribeloo-enable-provisioning.png)
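+
+Before wiring up Azure AD, you can optionally sanity-check the SCIM endpoint and bearer token. This is a hedged sketch, not a documented Tribeloo procedure: it assumes the endpoint implements the standard SCIM `/Users` resource, and both placeholders come from the **User provisioning** tab above.
+
+```azurecli-interactive
+# Query the SCIM endpoint with the Tribeloo-issued token (placeholders in angle brackets)
+az rest --method get \
+    --uri "<SCIM-base-url>/Users?startIndex=1&count=1" \
+    --headers "Authorization=Bearer <SCIM-bearer-token>" \
+    --skip-authorization-header
+```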
+
+## Step 3. Add Tribeloo from the Azure AD application gallery
+
+Add Tribeloo from the Azure AD application gallery to start managing provisioning to Tribeloo. If you have previously set up Tribeloo for SSO, you can use the same application. However, it is recommended that you create a separate app when initially testing the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* When assigning users and groups to Tribeloo, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
++
+## Step 5. Configure automatic user provisioning to Tribeloo
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Tribeloo based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Tribeloo in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+1. In the applications list, select **Tribeloo**.
+
+ ![The Tribeloo link in the Applications list](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+1. In the **Admin Credentials** section, input your Tribeloo **Tenant URL** and **Secret Token**. Click **Test Connection** to ensure Azure AD can connect to Tribeloo. If the connection fails, ensure your Tribeloo account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. In the **Mappings** section, select **Synchronize Azure Active Directory Users to Tribeloo**.
+
+1. Review the user attributes that are synchronized from Azure AD to Tribeloo in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Tribeloo for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Tribeloo API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|
+    |---|---|---|
+ |userName|String|&check;
+ |active|Boolean|
+ |displayName|String|
+ |name.givenName|String|
+ |name.familyName|String|
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Tribeloo, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Tribeloo by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+1. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
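+
+Provisioning events can also be pulled programmatically. As a sketch (assuming your signed-in account has permission to read audit logs), the provisioning logs are exposed through the Microsoft Graph beta endpoint:
+
+```azurecli-interactive
+# Retrieve recent provisioning events from Microsoft Graph (beta endpoint)
+az rest --method get \
+    --uri "https://graph.microsoft.com/beta/auditLogs/provisioning"
+```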
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
aks Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/concepts-security.md
When an AKS cluster is created or scaled up, the nodes are automatically deploye
### Node security patches #### Linux nodes
-The Azure platform automatically applies OS security patches to Linux nodes on a nightly basis. If a Linux OS security update requires a host reboot, it won't automatically reboot. You can either:
-* Manually reboot the Linux nodes.
-* Use [Kured][kured], an open-source reboot daemon for Kubernetes. Kured runs as a [DaemonSet][aks-daemonsets] and monitors each node for a file indicating that a reboot is required.
+Each evening, Linux nodes in AKS get security patches through their distro security update channel. This behavior is automatically configured as the nodes are deployed in an AKS cluster. To minimize disruption and potential impact to running workloads, nodes are not automatically rebooted if a security patch or kernel update requires it. For more information about how to handle node reboots, see [Apply security and kernel updates to nodes in AKS][aks-kured].
-Reboots are managed across the cluster using the same [cordon and drain process](#cordon-and-drain) as a cluster upgrade.
+Nightly updates apply security updates to the OS on the node, but the node image used to create nodes for your cluster remains unchanged. If a new Linux node is added to your cluster, the original image is used to create the node. This new node will receive all the security and kernel updates available during the automatic check every night but will remain unpatched until all checks and restarts are complete. You can use node image upgrade to check for and update node images used by your cluster. For more details on node image upgrade, see [Azure Kubernetes Service (AKS) node image upgrade][node-image-upgrade].
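+
+As an illustration (the resource names are placeholders), a node image upgrade can be triggered for a whole cluster with the Azure CLI:
+
+```azurecli-interactive
+# Upgrade only the node image, leaving the Kubernetes version unchanged
+az aks upgrade \
+    --resource-group myResourceGroup \
+    --name myAKSCluster \
+    --node-image-only
+```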
#### Windows Server nodes
For more information on core Kubernetes and AKS concepts, see:
[aks-concepts-scale]: concepts-scale.md [aks-concepts-storage]: concepts-storage.md [aks-concepts-network]: concepts-network.md
+[aks-kured]: node-updates-kured.md
[aks-limit-egress-traffic]: limit-egress-traffic.md [cluster-isolation]: operator-best-practices-cluster-isolation.md [operator-best-practices-cluster-security]: operator-best-practices-cluster-security.md
For more information on core Kubernetes and AKS concepts, see:
[nodepool-upgrade]: use-multiple-node-pools.md#upgrade-a-node-pool [authorized-ip-ranges]: api-server-authorized-ip-ranges.md [private-clusters]: private-clusters.md
-[network-policy]: use-network-policies.md
+[network-policy]: use-network-policies.md
+[node-image-upgrade]: node-image-upgrade.md
aks Node Updates Kured https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/node-updates-kured.md
Some security updates, such as kernel updates, require a node reboot to finalize
You can use your own workflows and processes to handle node reboots, or use `kured` to orchestrate the process. With `kured`, a [DaemonSet][DaemonSet] is deployed that runs a pod on each Linux node in the cluster. These pods in the DaemonSet watch for existence of the */var/run/reboot-required* file, and then initiate a process to reboot the nodes.
+### Node image upgrades
+
+Unattended upgrades apply updates to the Linux node OS, but the image used to create nodes for your cluster remains unchanged. If a new Linux node is added to your cluster, the original image is used to create the node. This new node will receive all the security and kernel updates available during the automatic check every night but will remain unpatched until all checks and restarts are complete.
+
+Alternatively, you can use node image upgrade to check for and update node images used by your cluster. For more details on node image upgrade, see [Azure Kubernetes Service (AKS) node image upgrade][node-image-upgrade].
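+
+For example (a sketch with placeholder resource names), a single node pool's image can be upgraded with the Azure CLI:
+
+```azurecli-interactive
+# Upgrade the node image for one node pool without changing the Kubernetes version
+az aks nodepool upgrade \
+    --resource-group myResourceGroup \
+    --cluster-name myAKSCluster \
+    --name mynodepool \
+    --node-image-only
+```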
+ ### Node upgrades There is an additional process in AKS that lets you *upgrade* a cluster. An upgrade is typically to move to a newer version of Kubernetes, not just apply node security updates. An AKS upgrade performs the following actions:
For AKS clusters that use Windows Server nodes, see [Upgrade a node pool in AKS]
[aks-ssh]: ssh.md [aks-upgrade]: upgrade-cluster.md [nodepool-upgrade]: use-multiple-node-pools.md#upgrade-a-node-pool
+[node-image-upgrade]: node-image-upgrade.md
aks Operator Best Practices Cluster Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/operator-best-practices-cluster-security.md
You can then upgrade your AKS cluster using the [az aks upgrade][az-aks-upgrade]
For more information about upgrades in AKS, see [Supported Kubernetes versions in AKS][aks-supported-versions] and [Upgrade an AKS cluster][aks-upgrade].
-## Process Linux node updates and reboots using kured
+## Process Linux node updates
-> **Best practice guidance**
->
-> While AKS automatically downloads and installs security fixes on each Linux node, it does not automatically reboot.
-> 1. Use `kured` to watch for pending reboots.
-> 1. Safely cordon and drain the node to allow the node to reboot.
-> 1. Apply the updates.
-> 1. Be as secure as possible with respect to the OS.
-
-For Windows Server nodes, regularly perform an AKS upgrade operation to safely cordon and drain pods and deploy updated nodes.
-
-Each evening, Linux nodes in AKS get security patches through their distro update channel. This behavior is automatically configured as the nodes are deployed in an AKS cluster. To minimize disruption and potential impact to running workloads, nodes are not automatically rebooted if a security patch or kernel update requires it.
-
-The open-source [kured (KUbernetes REboot Daemon)][kured] project by Weaveworks watches for pending node reboots. When a Linux node applies updates that require a reboot, the node is safely cordoned and drained to move and schedule the pods on other nodes in the cluster. Once the node is rebooted, it is added back into the cluster and Kubernetes resumes pod scheduling. To minimize disruption, only one node at a time is permitted to be rebooted by `kured`.
-
-![The AKS node reboot process using kured](media/operator-best-practices-cluster-security/node-reboot-process.png)
-
-If you want even closer control over reboots, `kured` can integrate with Prometheus to prevent reboots if there are other maintenance events or cluster issues in progress. This integration reduces complication by rebooting nodes while you are actively troubleshooting other issues.
+Each evening, Linux nodes in AKS get security patches through their distro update channel. This behavior is automatically configured as the nodes are deployed in an AKS cluster. To minimize disruption and potential impact to running workloads, nodes are not automatically rebooted if a security patch or kernel update requires it. For more information about how to handle node reboots, see [Apply security and kernel updates to nodes in AKS][aks-kured].
-For more information about how to handle node reboots, see [Apply security and kernel updates to nodes in AKS][aks-kured].
+### Node image upgrades
-## Next steps
+Unattended upgrades apply updates to the Linux node OS, but the image used to create nodes for your cluster remains unchanged. If a new Linux node is added to your cluster, the original image is used to create the node. This new node will receive all the security and kernel updates available during the automatic check every night but will remain unpatched until all checks and restarts are complete. You can use node image upgrade to check for and update node images used by your cluster. For more details on node image upgrade, see [Azure Kubernetes Service (AKS) node image upgrade][node-image-upgrade].
-This article focused on how to secure your AKS cluster. To implement some of these areas, see the following articles:
+## Process Windows Server node updates
-* [Integrate Azure Active Directory with AKS][aks-aad]
-* [Upgrade an AKS cluster to the latest version of Kubernetes][aks-upgrade]
-* [Process security updates and node reboots with kured][aks-kured]
+For Windows Server nodes, regularly perform a node image upgrade operation to safely cordon and drain pods and deploy updated nodes.
<!-- EXTERNAL LINKS -->
-[kured]: https://github.com/weaveworks/kured
[k8s-apparmor]: https://kubernetes.io/docs/tutorials/clusters/apparmor/ [seccomp]: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp [kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
This article focused on how to secure your AKS cluster. To implement some of the
[pod-security-contexts]: developer-best-practices-pod-security.md#secure-pod-access-to-resources [aks-ssh]: ssh.md [security-center-aks]: ../security-center/defender-for-kubernetes-introduction.md
+[node-image-upgrade]: node-image-upgrade.md
aks Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/planned-maintenance.md
az extension update --name aks-preview
To add a maintenance window, you can use the `az aks maintenanceconfiguration add` command. > [!IMPORTANT]
+> At this time, you must set `default` as the value for `--name`. Using any other name will prevent your maintenance window from running.
+>
> Planned Maintenance windows are specified in Coordinated Universal Time (UTC). ```azurecli-interactive
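# A sketch (resource names are placeholders): create a weekly maintenance window.
# The configuration name must be "default", as noted above.
az aks maintenanceconfiguration add -g MyResourceGroup --cluster-name myAKSCluster --name default --weekday Monday --start-hour 1
```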
api-management Api Management Using With Internal Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-using-with-internal-vnet.md
Use API Management in internal mode to:
+ **An active Azure subscription**. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
-+ **An Azure API Management instance**. For more information, see [Create an Azure API Management instance](get-started-create-service-instance.md).
++ **An Azure API Management instance (supported SKUs: Developer, Premium, and Isolated)**. For more information, see [Create an Azure API Management instance](get-started-create-service-instance.md). [!INCLUDE [api-management-public-ip-for-vnet](../../includes/api-management-public-ip-for-vnet.md)]
api-management Api Management Using With Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-using-with-vnet.md
When an API Management service instance is hosted in a VNET, the ports in the fo
* Enable [service endpoints][ServiceEndpoints] on the subnet in which the API Management service is deployed for: * Azure Sql * Azure Storage
- * Azure EventHub
- * Azure ServiceBus, and
+ * Azure EventHub, and
* Azure KeyVault. By enabling endpoints directly from the API Management-delegated subnet to these services, you can use the Microsoft Azure backbone network, providing optimal routing for service traffic. If you use service endpoints with a force tunneled API Management, the above Azure services traffic isn't force tunneled. The other API Management service dependency traffic is force tunneled and must not be lost. If lost, the API Management service would not function properly.
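
As an illustration (resource and subnet names are placeholders), the endpoints can be enabled on the delegated subnet with the Azure CLI:

```azurecli-interactive
# Enable service endpoints on the subnet that hosts API Management
az network vnet subnet update \
    --resource-group MyResourceGroup \
    --vnet-name MyVNet \
    --name ApimSubnet \
    --service-endpoints Microsoft.Sql Microsoft.Storage Microsoft.EventHub Microsoft.KeyVault
```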
api-management How To Deploy Self Hosted Gateway Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes.md
When you're automating token refresh, use [this management API operation](/rest/
Kubernetes [namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) help with dividing a single cluster among multiple teams, projects, or applications. Namespaces provide a scope for resources and names. They can be associated with a resource quota and access control policies. The Azure portal provides commands to create self-hosted gateway resources in the **default** namespace. This namespace is automatically created, exists in every cluster, and can't be deleted.
-Consider [creating and deploying](https://kubernetesbyexample.com/ns/) a self-hosted gateway into a separate namespace in production.
+Consider [creating and deploying](https://www.kubernetesbyexample.com/) a self-hosted gateway into a separate namespace in production.
### Number of replicas The minimum number of replicas suitable for production is two.
Consider [setting up local monitoring](how-to-configure-local-metrics-logs.md) t
## Next steps * To learn more about the self-hosted gateway, see [Self-hosted gateway overview](self-hosted-gateway-overview.md).
-* Learn [how to deploy API Management self-hosted gateway to Azure Arc enabled Kubernetes clusters](how-to-deploy-self-hosted-gateway-azure-arc.md).
+* Learn [how to deploy API Management self-hosted gateway to Azure Arc enabled Kubernetes clusters](how-to-deploy-self-hosted-gateway-azure-arc.md).
app-service Networking Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/networking-features.md
If you scan App Service, you'll find several ports that are exposed for inbound
[networkinfo]: ./environment/network-info.md [appgwserviceendpoints]: ./networking/app-gateway-with-service-endpoints.md [privateendpoints]: ./networking/private-endpoint.md
-[servicetags]: ../virtual-network/service-tags-overview.md
+[servicetags]: ../virtual-network/service-tags-overview.md
app-service Samples Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/samples-resource-manager-templates.md
To learn about the JSON syntax and properties for App Services resources, see [M
| [App on Linux with MySQL](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-linux-managed-mysql) | Deploys an App Service app on Linux with Azure Database for MySQL. | | [App on Linux with PostgreSQL](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-linux-managed-postgresql) | Deploys an App Service app on Linux with Azure Database for PostgreSQL. | |**App with connected resources**| **Description** |
-| [App with MySQL](https://github.com/Azure/azure-quickstart-templates/tree/master/101-webapp-managed-mysql)| Deploys an App Service app on Windows with Azure Database for MySQL. |
+| [App with MySQL](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-managed-mysql)| Deploys an App Service app on Windows with Azure Database for MySQL. |
| [App with PostgreSQL](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-managed-postgresql)| Deploys an App Service app on Windows with Azure Database for PostgreSQL. | | [App with a database in Azure SQL Database](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-sql-database)| Deploys an App Service app and a database in Azure SQL Database at the Basic service level. | | [App with a Blob storage connection](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-blob-connection)| Deploys an App Service app with an Azure Blob storage connection string. You can then use Blob storage from the app. |
attestation Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/overview.md
Client applications can be designed to take advantage of SGX enclaves by delegat
Intel® Xeon® Scalable processors only support [ECDSA-based attestation solutions](https://software.intel.com/content/www/us/en/develop/topics/software-guard-extensions/attestation-services.html#Elliptic%20Curve%20Digital%20Signature%20Algorithm%20(ECDSA)%20Attestation) for remotely attesting SGX enclaves. Utilizing ECDSA based attestation model, Azure Attestation supports validation of Intel® Xeon® E3 processors and Intel® Xeon® Scalable processor-based server platforms.
+> [!NOTE]
+> To perform attestation of Intel® Xeon® Scalable processor-based server platforms using Azure Attestation, users are expected to install [Azure DCAP version 1.10.0](https://github.com/microsoft/Azure-DCAP-Client) or higher.
+ ### Open Enclave

[Open Enclave](https://openenclave.io/sdk/) (OE) is a collection of libraries targeted at creating a single unified enclaving abstraction for developers to build TEE-based applications. It offers a universal secure app model that minimizes platform specificities. Microsoft views it as an essential stepping-stone toward democratizing hardware-based enclave technologies such as SGX and increasing their uptake on Azure.
automation Automation Use Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-use-azure-ad.md
Before installing the Azure AD modules on your computer:
1. Ensure that the Microsoft .NET Framework 3.5.x feature is enabled on your computer. It's likely that your computer has a newer version installed, but backward compatibility with older versions of the .NET Framework can be enabled or disabled.
-2. Install the 64-bit version of the [Microsoft Online Services Sign-in Assistant](https://www.microsoft.com/Download/details.aspx?id=28177).
+2. Install the 64-bit version of the [Microsoft Online Services Sign-in Assistant](/microsoft-365/enterprise/connect-to-microsoft-365-powershell?view=o365-worldwide#step-1-install-the-required-software-1).
3. Run Windows PowerShell as an administrator to create an elevated Windows PowerShell command prompt.
automation Update Agent Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/troubleshoot/update-agent-issues.md
The operating system check verifies whether the Hybrid Runbook Worker is running
### .NET 4.6.2
-The .NET Framework check verifies that the system has [.NET Framework 4.6.2](https://www.microsoft.com/en-us/download/details.aspx?id=53345) or later installed.
+The .NET Framework check verifies that the system has [.NET Framework 4.6.2](https://dotnet.microsoft.com/download/dotnet-framework/net462) or later installed.
### WMF 5.1
azure-cache-for-redis Cache Redis Cache Arm Provision https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-redis-cache-arm-provision.md
The following resources are defined in the template:
Resource Manager templates for the new [Premium tier](cache-overview.md#service-tiers) are also available.

* [Create a Premium Azure Cache for Redis with clustering](https://azure.microsoft.com/resources/templates/redis-premium-cluster-diagnostics/)
-* [Create Premium Azure Cache for Redis with data persistence](https://azure.microsoft.com/resources/templates/redis-premium-persistence/)
+* [Create Premium Azure Cache for Redis with data persistence](https://azure.microsoft.com/resources/templates/201-redis-premium-persistence/)
* [Create Premium Redis Cache deployed into a Virtual Network](https://azure.microsoft.com/resources/templates/redis-premium-vnet/)

To check for the latest templates, see [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/) and search for _Azure Cache for Redis_.
azure-maps How To Manage Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-manage-authentication.md
Title: Manage authentication
+ Title: Manage authentication in Microsoft Azure Maps
description: Become familiar with Azure Maps authentication. See which approach works best in which scenario. Learn how to use the portal to view authentication settings.
Previously updated : 06/12/2020 Last updated : 06/10/2021
+custom.ms: subject-rbac-steps
# Manage authentication in Azure Maps
-After you create an Azure Maps account, a client ID and keys are created to support Azure Active Directory (Azure AD) authentication and Shared Key authentication.
+When you create an Azure Maps account, keys and a client ID are generated. The keys and client ID are used to support Azure Active Directory (Azure AD) authentication and Shared Key authentication.
## View authentication details
-After you create an Azure Maps account, the primary and secondary keys are generated. We recommend that you use a primary key as a subscription key when you [use Shared Key authentication to call Azure Maps](./azure-maps-authentication.md#shared-key-authentication). You can use a secondary key in scenarios such as rolling key changes. For more information, see [Authentication in Azure Maps](./azure-maps-authentication.md).
+ >[!IMPORTANT]
+ >We recommend that you use the primary key as the subscription key when you [use Shared Key authentication to call Azure Maps](./azure-maps-authentication.md#shared-key-authentication). It's best to use the secondary key in scenarios like rolling key changes. For more information, see [Authentication in Azure Maps](./azure-maps-authentication.md).
-You can view your authentication details in the Azure portal. There, in your account, on the **Settings** menu, select **Authentication**.
+To view your Azure Maps authentication details:
-> [!div class="mx-imgBorder"]
-> ![Authentication details](./media/how-to-manage-authentication/how-to-view-auth.png)
+1. Sign in to the [Azure portal](https://portal.azure.com).
-## Discover category and scenario
+2. Navigate to the Azure portal menu. Select **All resources**, and then select your Azure Maps account.
-Depending on application needs there are specific pathways to securing the application. Azure AD defines categories to support a wide range of authentication flows. See [application categories](../active-directory/develop/authentication-flows-app-scenarios.md#application-categories) to understand which category the application fits.
+ :::image type="content" border="true" source="./media/how-to-manage-authentication/select-all-resources.png" alt-text="Select Azure Maps account.":::
+
+3. Under **Settings** in the left pane, select **Authentication**.
+
+ :::image type="content" border="true" source="./media/how-to-manage-authentication/view-authentication-keys.png" alt-text="Authentication details.":::
+
+## Choose an authentication category
+
+Depending on your application needs, there are specific pathways to application security. Azure AD defines specific authentication categories to support a wide range of authentication flows. To choose the best category for your application, see [application categories](../active-directory/develop/authentication-flows-app-scenarios.md#application-categories).
> [!NOTE]
> Even if you use shared key authentication, understanding categories and scenarios helps you to secure the application.
-## Determine authentication and authorization
+## Choose an authentication and authorization scenario
-The following table outlines common authentication and authorization scenarios in Azure Maps. The table provides a comparison of the types of protection each scenario offers.
+This table outlines common authentication and authorization scenarios in Azure Maps. Use the links to learn detailed configuration information for each scenario.
> [!IMPORTANT]
-> Microsoft recommends implementing Azure Active Directory (Azure AD) with Azure role-based access control (Azure RBAC) for production applications.
+> For production applications, we recommend implementing Azure AD with Azure role-based access control (Azure RBAC).
| Scenario | Authentication | Authorization | Development effort | Operational effort |
| -------- | -------------- | ------------- | ------------------ | ------------------ |
The following table outlines common authentication and authorization scenarios i
| [Web application with interactive single-sign-on](./how-to-secure-webapp-users.md) | Azure AD | High | High | Medium |
| [IoT device / input constrained device](./how-to-secure-device-code.md) | Azure AD | High | Medium | Medium |
-The links in the table take you to detailed configuration information for each scenario.
+## View built-in Azure Maps role definitions
+
+To view the built-in Azure Maps role definition:
+
+1. In the left pane, select **Access control (IAM)**.
-## View role definitions
+2. Select the **Roles** tab.
-To view Azure roles that are available for Azure Maps, go to **Access control (IAM)**. Select **Roles**, and then search for roles that begin with *Azure Maps*. These Azure Maps roles are the roles that you can grant access to.
+3. In the search box, enter **Azure Maps**.
-> [!div class="mx-imgBorder"]
-> ![View available roles](./media/how-to-manage-authentication/how-to-view-avail-roles.png)
+The results display the available built-in role definitions for Azure Maps.
+ ## View role assignments

To view users and apps that have been granted access for Azure Maps:
-> [!div class="mx-imgBorder"]
-> ![View users and apps that have been granted access](./media/how-to-manage-authentication/how-to-view-amrbac.png)
+1. In the left pane, select **Access control (IAM)**.
+
+2. Select the **Role assignments** tab.
+
+3. In the search box, enter **Azure Maps**.
+
+The results display the current Azure Maps role assignments.
+ ## Request tokens for Azure Maps
Request a token from the Azure AD token endpoint. In your Azure AD request, use
| Azure public cloud | `https://login.microsoftonline.com` | `https://atlas.microsoft.com/` | | Azure Government cloud | `https://login.microsoftonline.us` | `https://atlas.microsoft.com/` |
-For more information about requesting access tokens from Azure AD for users and service principals, see [Authentication scenarios for Azure AD](../active-directory/develop/authentication-vs-authorization.md) and view specific scenarios in the table of [Scenarios](./how-to-manage-authentication.md#determine-authentication-and-authorization).
+For more information about requesting access tokens from Azure AD for users and service principals, see [Authentication scenarios for Azure AD](../active-directory/develop/authentication-vs-authorization.md). To view specific scenarios, see [the table of scenarios](./how-to-manage-authentication.md#choose-an-authentication-and-authorization-scenario).
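+
+For illustration, here's a minimal sketch of requesting such a token with the Azure Identity client library for .NET; the tenant, client, and secret values are placeholders from your own app registration:
+
+```csharp
+using System;
+using System.Threading;
+using Azure.Core;
+using Azure.Identity;
+
+// Authority host from the table above; use AzureAuthorityHosts.AzureGovernment
+// for the Azure Government cloud.
+var options = new TokenCredentialOptions { AuthorityHost = AzureAuthorityHosts.AzurePublicCloud };
+
+var credential = new ClientSecretCredential(
+    "<tenant-id>", "<client-id>", "<client-secret>", options);
+
+// The Azure Maps resource from the table, expressed as a .default scope.
+AccessToken token = await credential.GetTokenAsync(
+    new TokenRequestContext(new[] { "https://atlas.microsoft.com/.default" }),
+    CancellationToken.None);
+
+Console.WriteLine($"Token expires on {token.ExpiresOn}");
+```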
## Manage and rotate shared keys
-Your Azure Maps subscription keys are similar to a root password for your Azure Maps account. Always be careful to protect your subscription keys. Use Azure Key Vault to manage and rotate your keys securely. Avoid distributing access keys to other users, hard-coding them, or saving them anywhere in plain text that is accessible to others. Rotate your keys if you believe they may have been compromised.
+Your Azure Maps subscription keys are similar to a root password for your Azure Maps account. Always be careful to protect your subscription keys. Use Azure Key Vault to securely manage and rotate your keys. Avoid distributing access keys to other users, hard-coding them, or saving them anywhere in plain text that's accessible to others. If you believe that your keys may have been compromised, rotate them.
> [!NOTE]
-> Microsoft recommends using Azure Active Directory (Azure AD) to authorize requests if possible, instead of Shared Key. Azure AD provides superior security and ease of use over Shared Key.
+> If possible, we recommend using Azure AD instead of Shared Key to authorize requests. Azure AD has better security than Shared Key, and it's easier to use.
### Manually rotate subscription keys
-Microsoft recommends that you rotate your subscription keys periodically to help keep your Azure Maps account secure. If possible, use Azure Key Vault to manage your access keys. If you are not using Key Vault, you will need to rotate your keys manually.
+To help keep your Azure Maps account secure, we recommend periodically rotating your subscription keys. If possible, use Azure Key Vault to manage your access keys. If you aren't using Key Vault, you'll need to manually rotate your keys.
Two subscription keys are assigned so that you can rotate your keys. Having two keys ensures that your application maintains access to Azure Maps throughout the process. To rotate your Azure Maps subscription keys in the Azure portal:

1. Update your application code to reference the secondary key for the Azure Maps account and deploy.
-2. Navigate to your Azure Maps account in the [Azure portal](https://portal.azure.com/).
+2. In the [Azure portal](https://portal.azure.com/), navigate to your Azure Maps account.
3. Under **Settings**, select **Authentication**.
4. To regenerate the primary key for your Azure Maps account, select the **Regenerate** button next to the primary key.
5. Update your application code to reference the new primary key and deploy.
6. Regenerate the secondary key in the same manner.

> [!WARNING]
-> Microsoft recommends using only one of the keys in all of your applications at the same time. If you use Key 1 in some places and Key 2 in others, you will not be able to rotate your keys without some applications losing access.
+> We recommend using only one of the keys in all of your applications at the same time. If you use Key 1 in some places and Key 2 in others, you won't be able to rotate your keys without some applications losing access.
## Next steps
-For more information, see [Azure AD and Azure Maps Web SDK](./how-to-use-map-control.md).
Find the API usage metrics for your Azure Maps account:

> [!div class="nextstepaction"]
> [View usage metrics](how-to-view-api-usage.md)
Find the API usage metrics for your Azure Maps account:
Explore samples that show how to integrate Azure AD with Azure Maps:

> [!div class="nextstepaction"]
-> [Azure AD authentication samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples)
+> [Azure AD authentication samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples)
azure-maps How To Secure Daemon App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-secure-daemon-app.md
Title: How to secure a daemon application
+ Title: How to secure a daemon application in Microsoft Azure Maps
-description: Use the Azure portal to manage authentication to configure a trusted daemon application.
+description: This article describes how to host daemon applications, such as background processes, timers, and jobs in a trusted and secure environment in Microsoft Azure Maps.
Previously updated : 06/12/2020 Last updated : 06/21/2021
+custom.ms: subject-rbac-steps
# Secure a daemon application
-The following guide is for background processes, timers, and jobs which are hosted in a trusted and secured environment. Examples include Azure Web Jobs, Azure Function Apps, Windows Services, and any other reliable background service.
+This article describes how to host daemon applications in a trusted and secure environment in Microsoft Azure Maps.
-> [!Tip]
-> Microsoft recommends implementing Azure Active Directory (Azure AD) and Azure role-based access control (Azure RBAC) for production applications. For an overview of concepts, see [Azure Maps Authentication](./azure-maps-authentication.md).
+The following are examples of daemon applications:
+
+- Azure Web Job
+- Azure Function App
+- Windows Service
+- A running and reliable background service
+
+## View Azure Maps authentication details
[!INCLUDE [authentication details](./includes/view-authentication-details.md)]
-## Scenario: Shared key authentication
+>[!IMPORTANT]
+>For production applications, we recommend implementing Azure AD and Azure role-based access control (Azure RBAC). For an overview of Azure AD concepts, see [Authentication with Azure Maps](azure-maps-authentication.md).
-After you create an Azure Maps account, the primary and secondary keys are generated. We recommend that you use the primary key as the subscription key when you [use shared key authentication to call Azure Maps](./azure-maps-authentication.md#shared-key-authentication). You can use a secondary key in scenarios such as rolling key changes. For more information, see [Authentication in Azure Maps](./azure-maps-authentication.md).
+## Scenario: Shared key authentication with Azure Key Vault
-### Securely store shared key
+Applications that use Shared Key authentication should store the keys in a secure store. This scenario describes how to safely store your application key as a secret in Azure Key Vault. Instead of storing the shared key in the application configuration, the application can retrieve it as an Azure Key Vault secret. To simplify key regeneration, we recommend that applications use one key at a time. Applications can then regenerate the unused key and deploy the regenerated key to Azure Key Vault while still maintaining current connections with one key. To learn how to configure Azure Key Vault, see [Azure Key Vault developer guide](../key-vault/general/developers-guide.md).
-The primary and secondary key allow authorization to all APIs for the Maps account. Applications should store the keys in a secure store such as Azure Key Vault. The application must retrieve the shared key as a Azure Key Vault secret to avoid storing the shared key in plain text in application configuration. To understand how to configure an Azure Key Vault, see [Azure Key Vault developer guide](../key-vault/general/developers-guide.md).
+>[!IMPORTANT]
+>This scenario indirectly accesses Azure Active Directory through Azure Key Vault. However, we recommend that you use Azure AD authentication directly. Using Azure AD directly avoids the additional complexity and operational requirements of using shared key authentication and setting up Key Vault.
The following steps outline this process:
-1. Create an Azure Key Vault.
-2. Create an Azure AD service principal by creating an App registration or managed identity, the created principal is responsible to access Azure Key Vault.
-3. Assign the service principal access to Azure Key secrets `Get` permission.
-4. Temporarily assign access to secrets `Set` permission for you as the developer.
-5. Set the shared key in the Key Vault secrets and reference the secret ID as configuration for the daemon application and remove your secrets `Set` permission.
-6. Implement Azure AD authentication in the daemon application to retrieve the shared key secret from Azure Key Vault.
-7. Create Azure Maps REST API request with shared key.
+1. [Create an Azure Key Vault](../key-vault/general/quick-create-portal.md).
+2. Create an [Azure AD service principal](../active-directory/fundamentals/service-accounts-principal.md) by creating an App registration or managed identity. The created principal is responsible for accessing the Azure Key Vault.
+3. Grant the service principal the Key Vault secrets `get` permission. For details about how to set permissions, see [Assign a Key Vault access policy using the Azure portal](../key-vault/general/assign-access-policy-portal.md).
+4. Temporarily grant yourself, as the developer, the secrets `set` permission.
+5. Set the shared key in the Key Vault secrets and reference the secret ID as configuration for the daemon application.
+6. Remove your secrets `set` permission.
+7. To retrieve the shared key secret from Azure Key Vault, implement Azure Active Directory authentication in the daemon application.
+8. Create an Azure Maps REST API request with the shared key.
+Now, the daemon application can retrieve the shared key from the Key Vault.
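+
+For illustration, a minimal sketch of step 7 using the Azure SDK for .NET; the vault URI and secret name are placeholders for your own Key Vault configuration:
+
+```csharp
+using System;
+using Azure.Identity;
+using Azure.Security.KeyVault.Secrets;
+
+// DefaultAzureCredential picks up a managed identity when hosted in Azure,
+// or developer credentials when running locally.
+var client = new SecretClient(
+    new Uri("https://<your-key-vault-name>.vault.azure.net/"),
+    new DefaultAzureCredential());
+
+// Retrieve the Azure Maps shared key stored as a Key Vault secret in step 5.
+KeyVaultSecret secret = client.GetSecret("<your-shared-key-secret-name>");
+
+// Use secret.Value as the subscription key on Azure Maps REST API requests.
+Console.WriteLine("Retrieved shared key from Key Vault.");
+```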
+
+> [!TIP]
+> If the app is hosted in the Azure environment, we recommend that you use a managed identity to reduce the cost and complexity of managing a secret for authentication. To learn how to set up a managed identity, see [Tutorial: Use a managed identity to connect Key Vault to an Azure web app in .NET](../key-vault/general/tutorial-net-create-vault-azure-web-app.md).
-> [!Tip]
-> If the app is hosted in Azure environment, you should implement a Managed Identity to reduce the cost and complexity of managing a secret to authenticate to Azure Key Vault. See the following Azure Key Vault [tutorial to connect via managed identity](../key-vault/general/tutorial-net-create-vault-azure-web-app.md).
+## Scenario: Azure AD role-based access control
-The daemon application is responsible for retrieving the shared key from a secure storage. The implementation with Azure Key Vault requires authentication through Azure AD to access the secret. Instead, we encourage direct Azure AD authentication to Azure Maps as a result of the additional complexity and operational requirements of using shared key authentication.
+After an Azure Maps account is created, the Azure Maps `Client ID` value is shown on the Azure portal authentication details page. This value identifies the account to use for REST API requests. It should be stored in the application configuration and retrieved before making HTTP requests. The goal of this scenario is to enable the daemon application to authenticate to Azure AD and call Azure Maps REST APIs.
-> [!IMPORTANT]
-> To simplify key regeneration, we recommend applications use one key at a time. Applications can then regenerate the unused key and deploy the new regenerated key to a secured secret store such as Azure Key Vault.
+> [!TIP]
+>To take advantage of managed identity benefits, we recommend hosting on Azure Virtual Machines, Virtual Machine Scale Sets, or App Services.
-## Scenario: Azure AD role-based access control
+### Host a daemon on Azure resources
-Once an Azure Maps account is created, the Azure Maps `x-ms-client-id` value is present in the Azure portal authentication details page. This value represents the account which will be used for REST API requests. This value should be stored in application configuration and retrieved prior to making HTTP requests. The objective of the scenario is to enable the daemon application to authenticate to Azure AD and call Azure Maps REST APIs.
+When your daemon runs on Azure resources, you can configure managed identities for low-cost, minimal-effort credential management.
-> [!Tip]
-> We recommend hosting on Azure Virtual Machines, Virtual Machine Scale Sets, or App Services to enable benefits of Managed Identity components.
+To enable application access to a managed identity, see [Overview of managed identities](../active-directory/managed-identities-azure-resources/overview.md).
-### Daemon hosted on Azure resources
+Some managed identity benefits are:
-When running on Azure resources, configure Azure managed identities to enable low cost, minimal credential management effort.
+- Azure system-managed X.509 certificate public key cryptography authentication.
+- Azure AD security with X.509 certificates instead of client secrets.
+- Azure manages and renews all certificates associated with the managed identity resource.
+- Credential operational management is simplified because managed identity removes the need for a secured secret store service, such as Azure Key Vault.
-See [Overview of Managed Identities](../active-directory/managed-identities-azure-resources/overview.md) to enable the application access to a Managed Identity.
+### Host a daemon on non-Azure resources
-Managed Identity benefits:
+When running on a non-Azure environment, managed identities aren't available. As such, you must configure a service principal through an Azure AD application registration for the daemon application.
-* Azure system managed X509 certificate public key cryptography authentication.
-* Azure AD security with X509 certificates instead of client secrets.
-* Azure manages and renews all certificates associated with the Managed Identity resource.
-* Simplified credential operational management by removing any need for a secured secret store service like Azure Key Vault.
+#### Create new application registration
-### Daemon hosted on non-Azure resources
+If you've already created your application registration, go to [Assign delegated API permissions](#assign-delegated-api-permissions).
-When running on a non-Azure environment Managed Identities are not available. Therefore you must configure a service principal through an Azure AD application registration for the daemon application.
+To create a new application registration:
-1. In the Azure portal, in the list of Azure services, select **Azure Active Directory** > **App registrations** > **New registration**.
+1. Sign in to the [Azure portal](https://portal.azure.com).
- > [!div class="mx-imgBorder"]
- > ![App registration](./media/how-to-manage-authentication/app-registration.png)
+2. Select **Azure Active Directory**.
-2. If you've already registered your app, then continue to the next step. If you haven't registered your app, then enter a **Name**, choose a **Support account type**, and then select **Register**.
+3. Under **Manage** in the left pane, select **App registrations**.
- > [!div class="mx-imgBorder"]
- > ![App registration details](./media/how-to-manage-authentication/app-create.png)
+4. Select the **+ New registration** tab.
-3. To assign delegated API permissions to Azure Maps, go to the application. Then under **App registrations**, select **API permissions** > **Add a permission**. Under **APIs my organization uses**, search for and select **Azure Maps**.
+ :::image type="content" border="true" source="./media/how-to-manage-authentication/app-registration.png" alt-text="View app registrations.":::
- > [!div class="mx-imgBorder"]
- > ![Add app API permissions](./media/how-to-manage-authentication/app-permissions.png)
+5. Enter a **Name**, and then select a **Support account type**.
-4. Select the check box next to **Access Azure Maps**, and then select **Add permissions**.
+ :::image type="content" border="true" source="./media/how-to-manage-authentication/app-create.png" alt-text="Create app registration.":::
- > [!div class="mx-imgBorder"]
- > ![Select app API permissions](./media/how-to-manage-authentication/select-app-permissions.png)
+6. Select **Register**.
-5. Complete the following steps to create a client secret or configure certificate.
+#### Assign delegated API permissions
- * If your application uses server or application authentication, then on your app registration page, go to **Certificates & secrets**. Then either upload a public key certificate or create a password by selecting **New client secret**.
+To assign delegated API permissions to Azure Maps:
- > [!div class="mx-imgBorder"]
- > ![Create a client secret](./media/how-to-manage-authentication/app-keys.png)
+1. If you haven't done so already, sign in to the [Azure portal](https://portal.azure.com).
- * After you select **Add**, copy the secret and store it securely in a service such as Azure Key Vault. Review [Azure Key Vault Developer Guide](../key-vault/general/developers-guide.md) to securely store the certificate or secret. You'll use this secret to get tokens from Azure AD.
+2. Select **Azure Active Directory**.
- > [!div class="mx-imgBorder"]
- > ![Add a client secret](./media/how-to-manage-authentication/add-key.png)
+3. Under **Manage** in the left pane, select **App registrations**.
-### Grant role-based access for the daemon application to Azure Maps
+4. Select your application.
-You grant *Azure role-based access control (Azure RBAC)* by assigning either the created Managed Identity or the service principal to one or more Azure Maps role definitions. To view Azure role definitions that are available for Azure Maps, go to **Access control (IAM)**. Select **Roles**, and then search for roles that begin with *Azure Maps*. These Azure Maps roles are the roles that you can grant access to.
+ :::image type="content" border="true" source="./media/how-to-manage-authentication/app-select.png" alt-text="Select app registrations.":::
-> [!div class="mx-imgBorder"]
-> ![View available roles](./media/how-to-manage-authentication/how-to-view-avail-roles.png)
+5. Under **Manage** in the left pane, select **API permissions**.
-1. Go to your **Azure Maps Account**. Select **Access control (IAM)** > **Role assignments**.
+6. Select **Add a permission**.
- > [!div class="mx-imgBorder"]
- > ![Grant access using Azure RBAC](./media/how-to-manage-authentication/how-to-grant-rbac.png)
+ :::image type="content" border="true" source="./media/how-to-manage-authentication/app-add-permissions.png" alt-text="Add app permission.":::
-2. On the **Role assignments** tab, **Add** a role assignment.
+7. Select the **APIs my organization uses** tab.
- > [!div class="mx-imgBorder"]
- > ![Screenshot shows the roll assignments with Add selected.](./media/how-to-manage-authentication/add-role-assignment.png)
+8. In the search box, enter **Azure Maps**.
-3. Select a built-in Azure Maps role definition such as **Azure Maps Data Reader** or **Azure Maps Data Contributor**. Under **Assign access to**, select **Azure AD user, group, or service principal** or Managed Identity with **User assigned managed identity** / **System assigned Managed identity**. Select the principal. Then select **Save**.
+9. Select **Azure Maps**.
- > [!div class="mx-imgBorder"]
- > ![How to add role assignment](./media/how-to-manage-authentication/how-to-add-role-assignment.png)
+ :::image type="content" border="true" source="./media/how-to-manage-authentication/app-permissions.png" alt-text="Request app permission.":::
-4. You can confirm the role assignment was applied on the role assignment tab.
+10. Select the **Access Azure Maps** check box.
-## Request token with Managed Identity
+11. Select **Add permissions**.
-Once a managed identity is configured for the hosting resource, use Azure SDK or REST API to acquire a token for Azure Maps, see details on [Acquire an access token](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md). Following the guide, the expectation is that an access token will be returned which can be used on REST API requests.
+ :::image type="content" border="true" source="./media/how-to-manage-authentication/select-app-permissions.png" alt-text="Select app API permissions.":::
-## Request token with application registration
+#### Create a client secret or configure certificate
-After you register your app and associate it with Azure Maps, you can request access tokens.
+To implement server or application-based authentication into your application, you can choose one of two options:
-* Azure AD resource ID `https://atlas.microsoft.com/`
-* Azure AD App ID
-* Azure AD Tenant ID
-* Azure AD App registration client secret
+- Upload a public key certificate.
+- Create a client secret.
-Request:
+##### Upload a public key certificate
-```http
-POST /<Azure AD Tenant ID>/oauth2/token HTTP/1.1
-Host: login.microsoftonline.com
-Content-Type: application/x-www-form-urlencoded
+To upload a public key certificate:
-client_id=<Azure AD App ID>&resource=https://atlas.microsoft.com/&client_secret=<client secret>&grant_type=client_credentials
-```
+1. Under **Manage** in the left pane, select **Certificates & secrets**.
+
+2. Select **Upload certificate**.
+ :::image type="content" border="true" source="./media/how-to-manage-authentication/upload-certificate.png" alt-text="Upload certificate.":::
+
+3. To the right of the text box, select the file icon.
+
+4. Select a *.crt*, *.cer*, or *.pem* file, and then select **Add**.
+
+ :::image type="content" border="true" source="./media/how-to-manage-authentication/upload-certificate-file.png" alt-text="Upload certificate file.":::
+
+##### Create a client secret
+
+To create a client secret:
+
+1. Under **Manage** in the left pane, select **Certificates & secrets**.
+
+2. Select **+ New client secret**.
+
+ :::image type="content" border="true" source="./media/how-to-manage-authentication/new-client-secret.png" alt-text="New client secret.":::
+
+3. Enter a description for the client secret.
+
+4. Select **Add**.
+
+ :::image type="content" border="true" source="./media/how-to-manage-authentication/new-client-secret-add.png" alt-text="Add new client secret.":::
+
+5. Copy the secret and store it securely in a service such as Azure Key Vault. We'll use this secret in the [Request token with application registration](#request-token-with-application-registration) section of this article.
+
+ :::image type="content" border="true" source="./media/how-to-manage-authentication/copy-client-secret.png" alt-text="Copy client secret.":::
+
+ >[!IMPORTANT]
+ >To securely store the certificate or secret, see the [Azure Key Vault Developer Guide](../key-vault/general/developers-guide.md). You'll use this secret to get tokens from Azure AD.
++
+### Request a token with managed identity
-Response:
+After a managed identity is configured for the hosting resource, you can use the Azure SDK or the REST API to acquire a token for Azure Maps. To learn how, see [Acquire an access token](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md).
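+
+For illustration, a minimal sketch with the Azure Identity library for .NET, assuming the hosting resource's managed identity has been granted an Azure Maps role:
+
+```csharp
+using System.Threading;
+using Azure.Core;
+using Azure.Identity;
+
+// ManagedIdentityCredential uses the identity configured on the hosting
+// resource (VM, App Service, and so on); no secret is stored in the app.
+var credential = new ManagedIdentityCredential();
+
+AccessToken token = await credential.GetTokenAsync(
+    new TokenRequestContext(new[] { "https://atlas.microsoft.com/.default" }),
+    CancellationToken.None);
+
+// Send token.Token as the bearer token on Azure Maps REST API requests.
+```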
+
+### Request token with application registration
+
+After you register your app and associate it with Azure Maps, you'll need to request an access token.
+
+To acquire the access token:
+
+1. If you haven't done so already, sign in to the [Azure portal](https://portal.azure.com).
+
+2. Select **Azure Active Directory**.
+
+3. Under **Manage** in the left pane, select **App registrations**.
+
+4. Select your application.
+
+5. On the **Overview** page, copy the **Application (client) ID** and the **Directory (tenant) ID**.
+
+ :::image type="content" border="true" source="./media/how-to-manage-authentication/get-token-params.png" alt-text="Copy token parameters.":::
+
+We'll use the [Postman](https://www.postman.com/) application to create the token request, but you can use a different API development environment.
+
+1. In the Postman app, select **New**.
+
+2. In the **Create New** window, select **Collection**.
+
+3. Select **New** again.
+
+4. In the **Create New** window, select **Request**.
+
+5. Enter a **Request name** for the request, such as *POST Token Request*.
+
+6. Select the collection you previously created, and then select **Save**.
+
+7. Select the **POST** HTTP method.
+
+8. Enter the following URL into the address bar. Replace `<Tenant ID>` with the Directory (tenant) ID, `<Client ID>` with the Application (client) ID, and `<Client Secret>` with your client secret:
+
+ ```http
+ https://login.microsoftonline.com/<Tenant ID>/oauth2/v2.0/token?response_type=token&grant_type=client_credentials&client_id=<Client ID>&client_secret=<Client Secret>&scope=api%3A%2F%2Fazmaps.fundamentals%2F.default
+ ```
+
+9. Select **Send**.
+
+10. You should see the following JSON response:
```json
{
    "token_type": "Bearer",
- "expires_in": "...",
- "ext_expires_in": "...",
- "expires_on": "...",
- "not_before": "...",
- "resource": "https://atlas.microsoft.com/",
- "access_token": "ey...gw"
+ "expires_in": 86399,
+ "ext_expires_in": 86399,
+ "access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Im5PbzNaRHJPRFhFSzFq..."
}
```
-See [Authentication scenarios for Azure AD](../active-directory/develop/authentication-vs-authorization.md), for more detailed examples.
+For more information about the authentication flow, see [OAuth 2.0 client credentials flow on the Microsoft identity platform](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md#first-case-access-token-request-with-a-shared-secret).
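+
+Once you have the access token, send it as a bearer token on Azure Maps REST API requests, together with the `x-ms-client-id` value from your Azure Maps account. A minimal sketch (the search query is illustrative):
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Net.Http.Headers;
+
+var http = new HttpClient();
+http.DefaultRequestHeaders.Authorization =
+    new AuthenticationHeaderValue("Bearer", "<access_token>");
+// Identifies the Azure Maps account to use for the request.
+http.DefaultRequestHeaders.Add("x-ms-client-id", "<azure-maps-client-id>");
+
+var response = await http.GetAsync(
+    "https://atlas.microsoft.com/search/address/json?api-version=1.0&query=400%20Broad%20St%2C%20Seattle%2C%20WA");
+Console.WriteLine(await response.Content.ReadAsStringAsync());
+```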
## Next steps
+For more detailed examples:
+> [!div class="nextstepaction"]
+> [Authentication scenarios for Azure AD](../active-directory/develop/authentication-vs-authorization.md)
Find the API usage metrics for your Azure Maps account:

> [!div class="nextstepaction"]
> [View usage metrics](how-to-view-api-usage.md)

Explore samples that show how to integrate Azure AD with Azure Maps:

> [!div class="nextstepaction"]
-> [Azure Maps samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples)
+> [Azure Maps samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples)
azure-maps How To Secure Spa App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-secure-spa-app.md
Title: How to secure a single page application with non-interactive sign-in
+ Title: How to secure a single page web application with non-interactive sign-in in Microsoft Azure Maps
-description: How to configure a single page application with non-interactive Azure role-based access control (Azure RBAC) and Azure Maps Web SDK.
+description: How to configure a single page web application with non-interactive Azure role-based access control (Azure RBAC) and Azure Maps Web SDK.
Previously updated : 06/12/2020 Last updated : 06/21/2021
-# How to secure a single page application with non-interactive sign-in
+# How to secure a single page web application with non-interactive sign-in
-The following guide pertains to an application using Azure Active Directory (Azure AD) to provide an access token to Azure Maps applications when the user can't sign in to Azure AD. This flow requires hosting of a web service which must be secured to only be accessed by the single page web application. There are multiple implementations which can accomplish authentication to Azure AD. This guide leverages the product, Azure Function to acquire access tokens.
+This article shows you how to secure a single page web application with Azure Active Directory (Azure AD) when the user is unable to sign in to Azure AD.
+
+To create this non-interactive authentication flow, we'll create an Azure Function secure web service that's responsible for acquiring access tokens from Azure AD. This web service will be available only to your single page web application.
[!INCLUDE [authentication details](./includes/view-authentication-details.md)]
The following guide pertains to an application using Azure Active Directory (Azu
## Create Azure Function
-Create a secured web service application which is responsible for authentication to Azure AD.
+To create a secured web service application that's responsible for authentication to Azure AD:
1. Create a function in the Azure portal. For more information, see [Create Azure Function](../azure-functions/functions-get-started.md).
-2. Configure CORS policy on the Azure function to be accessible by the single page web application. This will secure browser clients to the allowed origins of your web application. See [Add CORS functionality](../app-service/app-service-web-tutorial-rest-api.md#add-cors-functionality).
+2. Configure the CORS policy on the Azure function to be accessible by the single page web application. The CORS policy restricts browser clients to the allowed origins of your web application. For more information, see [Add CORS functionality](../app-service/app-service-web-tutorial-rest-api.md#add-cors-functionality).
3. [Add a system-assigned identity](../app-service/overview-managed-identity.md?tabs=dotnet#add-a-system-assigned-identity) on the Azure function to enable creation of a service principal to authenticate to Azure AD.
-4. Grant role-based access for the system-assigned identity to the Azure Maps account. See [Grant role-based access](#grant-role-based-access) for details.
+4. Grant role-based access for the system-assigned identity to the Azure Maps account. See [Grant role-based access](#grant-role-based-access-for-users-to-azure-maps) for details.
-5. Write code for the Azure function to obtain Azure Maps access tokens using system-assigned identity with one of the supported mechanisms or the REST protocol. See [Obtain tokens for Azure resources](../app-service/overview-managed-identity.md?tabs=dotnet#add-a-system-assigned-identity)
+5. Write code for the Azure function to obtain Azure Maps access tokens using the system-assigned identity with one of the supported mechanisms or the REST protocol. For more information, see [Obtain tokens for Azure resources](../app-service/overview-managed-identity.md?tabs=dotnet#add-a-system-assigned-identity).
A sample REST protocol example:
Create a secured web service application which is responsible for authentication
* [Create a function access key](../azure-functions/functions-bindings-http-webhook-trigger.md?tabs=csharp#authorization-keys) * [Secure HTTP endpoint](../azure-functions/functions-bindings-http-webhook-trigger.md?tabs=csharp#secure-an-http-endpoint-in-production) for the Azure function in production.
-
+ 7. Configure the web application with the Azure Maps Web SDK.

```javascript
Create a secured web service application which is responsible for authentication
});
```
-## Grant role-based access
-
-You grant *Azure role-based access control (Azure RBAC)* access by assigning the system-assigned identity to one or more Azure role definitions. To view Azure role definitions that are available for Azure Maps, go to **Access control (IAM)**. Select **Roles**, and then search for roles that begin with *Azure Maps*.
-
-1. Go to your **Azure Maps Account**. Select **Access control (IAM)** > **Role assignment**.
-
- > [!div class="mx-imgBorder"]
- > ![Grant access using Azure RBAC](./media/how-to-manage-authentication/how-to-grant-rbac.png)
-
-2. On the **Role assignments** tab, under **Role**, select a built in Azure Maps role definition such as **Azure Maps Data Reader** or **Azure Maps Data Contributor**. Under **Assign access to**, select **Function App**. Select the principal by name. Then select **Save**.
-
- * See details on [Assign Azure roles](../role-based-access-control/role-assignments-portal.md).
-
-> [!WARNING]
-> Azure Maps built-in role definitions provide a very large authorization access to many Azure Maps REST APIs. To restrict APIs access to a minimum, see [create a custom role definition and assign the system-assigned identity](../role-based-access-control/custom-roles.md) to the custom role definition. This will enable the least privilege necessary for the application to access Azure Maps.
## Next steps
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/action-groups.md
While setting up *Email ARM Role* you need to make sure below 3 conditions are m
### Function Calls an existing HTTP trigger endpoint in [Azure Functions](../../azure-functions/functions-get-started.md). To handle a request, your endpoint must handle the HTTP POST verb.
-When defining the Function action the the Function's httptrigger endpoint and access key are saved in the action definition. For example: https://azfunctionurl.azurewebsites.net/api/httptrigger?code=this_is_access_key. If you change the access key for the function you will need to remove and recreate the Function action in the Action Group.
+When defining the Function action, the Function's HTTP trigger endpoint and access key are saved in the action definition, for example, `https://azfunctionurl.azurewebsites.net/api/httptrigger?code=this_is_access_key`. If you change the access key for the function, you'll need to remove and recreate the Function action in the Action Group.
The number of Function actions you can have in an Action Group is limited.
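
As a sketch, here's a minimal .NET HTTP-triggered function that could receive the POST from an Action Group; the function name and logging are illustrative:

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class AlertHandler
{
    // The Action Group posts the alert payload to this endpoint.
    [FunctionName("httptrigger")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        ILogger log)
    {
        string payload = await new StreamReader(req.Body).ReadToEndAsync();
        log.LogInformation("Received alert payload: {payload}", payload);
        return new OkResult();
    }
}
```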
azure-monitor Alerts Common Schema Definitions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-common-schema-definitions.md
Any alert instance describes the resource that was affected and the cause of the
#### `monitoringService` = `Log Alerts V2`
+> [!NOTE]
+> Log alert rules from API version 2020-05-01 use this payload type, which only supports the common schema. Search results aren't embedded in the log alerts payload when using this version. Use [dimensions](./alerts-unified-log.md#split-by-alert-dimensions) to provide context for fired alerts. You can also use the `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get). If you must embed the results, use a logic app with the provided links to generate a custom payload.
+ **Sample values** ```json {
azure-monitor Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-overview.md
The consumption and management of alert instances requires the user to have the
You might want to query programmatically for alerts generated against your subscription, for example, to create custom views outside of the Azure portal or to analyze your alerts for patterns and trends.
-You can query for alerts generated against your subscriptions either by using the [Alert Management REST API](/rest/api/monitor/alertsmanagement/alerts) or by using the [Azure Resource Graph](../../governance/resource-graph/overview.md) and the [REST API for Resources](/rest/api/azureresourcegraph/resourcegraph(2019-04-01)/resources/resources).
+You can query for alerts generated against your subscriptions either by using the [Alert Management REST API](/rest/api/monitor/alertsmanagement/alerts) or by using the [Azure Resource Graph](../../governance/resource-graph/overview.md) and the [REST API for Resources](/rest/api/azureresourcegraph/resourcegraph(2020-04-01-preview)/resources/resources).
The Resource Graph REST API for Resources allows you to query for alert instances at scale. Resource Graph is recommended when you have to manage alerts generated across many subscriptions.
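
For illustration, a minimal sketch that lists alerts through the Alert Management REST API using an Azure AD token; the subscription ID is a placeholder, and the API version shown is an assumption to verify against the REST reference:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading;
using Azure.Core;
using Azure.Identity;

// Acquire a token for Azure Resource Manager.
var credential = new DefaultAzureCredential();
AccessToken token = await credential.GetTokenAsync(
    new TokenRequestContext(new[] { "https://management.azure.com/.default" }),
    CancellationToken.None);

var http = new HttpClient();
http.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", token.Token);

// List alerts in the subscription; api-version is an assumption.
var response = await http.GetAsync(
    "https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.AlertsManagement/alerts?api-version=2019-05-05-preview");
Console.WriteLine(await response.Content.ReadAsStringAsync());
```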
Smart groups are aggregations of alerts based on machine learning algorithms, wh
- [Learn about action groups](../alerts/action-groups.md) - [Managing your alert instances in Azure](./alerts-managing-alert-instances.md?toc=%2fazure%2fazure-monitor%2ftoc.json) - [Managing Smart Groups](./alerts-managing-smart-groups.md?toc=%2fazure%2fazure-monitor%2ftoc.json)-- [Learn more about Azure alerts pricing](https://azure.microsoft.com/pricing/details/monitor/)
+- [Learn more about Azure alerts pricing](https://azure.microsoft.com/pricing/details/monitor/)
azure-monitor Annotations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/annotations.md
Now, whenever you use the release template to deploy a new release, an annotatio
Select any annotation marker to open details about the release, including requestor, source control branch, release pipeline, and environment.
+## Classic annotations
+
+Release annotations are a feature of the cloud-based Azure Pipelines service of Azure DevOps.
+
+### Install the Annotations extension (one time)
+
+To create release annotations, you'll need to install the Release Annotations extension, one of the many Azure DevOps extensions available in the Visual Studio Marketplace.
+
+1. Sign in to your [Azure DevOps](https://azure.microsoft.com/services/devops/) project.
+
+1. On the Visual Studio Marketplace [Release Annotations extension](https://marketplace.visualstudio.com/items/ms-appinsights.appinsightsreleaseannotations) page, select your Azure DevOps organization, and then select **Install** to add the extension to your Azure DevOps organization.
+
+ ![Select an Azure DevOps organization and then select Install.](./media/annotations/1-install.png)
+
+You only need to install the extension once for your Azure DevOps organization. You can now configure release annotations for any project in your organization.
+
+### Configure classic release annotations
+
+Create a separate API key for each of your Azure Pipelines release templates.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and open the Application Insights resource that monitors your application. Or if you don't have one, [create a new Application Insights resource](./app-insights-overview.md).
+
+1. Open the **API Access** tab and copy the **Application Insights ID**.
+
+ ![Under API Access, copy the Application ID.](./media/annotations/2-app-id.png)
+
+1. In a separate browser window, open or create the release template that manages your Azure Pipelines deployments.
+
+1. Select **Add task**, and then select the **Application Insights Release Annotation** task from the menu.
+
+ ![Select Add Task and select Application Insights Release Annotation.](./media/annotations/3-add-task.png)
+
+ > [!NOTE]
+ > The Release Annotation task currently supports only Windows-based agents; it won't run on Linux, macOS, or other types of agents.
+
+1. Under **Application ID**, paste the Application Insights ID you copied from the **API Access** tab.
+
+ ![Paste the Application Insights ID](./media/annotations/4-paste-app-id.png)
+
+1. Back in the Application Insights **API Access** window, select **Create API Key**.
+
+ ![In the API Access tab, select Create API Key.](./media/annotations/5-create-api-key.png)
+
+1. In the **Create API key** window, type a description, select **Write annotations**, and then select **Generate key**. Copy the new key.
+
+ ![In the Create API key window, type a description, select Write annotations, and then select Generate key.](./media/annotations/6-create-api-key.png)
+
+1. In the release template window, on the **Variables** tab, select **Add** to create a variable definition for the new API key.
+
+1. Under **Name**, enter `ApiKey`, and under **Value**, paste the API key you copied from the **API Access** tab.
+
+ ![In the Azure DevOps Variables tab, select Add, name the variable ApiKey, and paste the API key under Value.](./media/annotations/7-paste-api-key.png)
+
+1. Select **Save** in the main release template window to save the template.
++
+ > [!NOTE]
+ > Limits for API keys are described in the [REST API rate limits documentation](https://dev.applicationinsights.io/documentation/Authorization/Rate-limits).
+ ## Next steps * [Create work items](./diagnostic-search.md#create-work-item)
azure-monitor Asp Net Troubleshoot No Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/asp-net-troubleshoot-no-data.md
Performance data (CPU, IO rate, and so on) is available for [Java web services](
* Check that you actually copied all the Microsoft.ApplicationInsights DLLs to the server, together with Microsoft.Diagnostics.Instrumentation.Extensions.Intercept.dll
* In your firewall, you might have to [open some TCP ports](./ip-addresses.md).
* If you have to use a proxy to send out of your corporate network, set [defaultProxy](/previous-versions/dotnet/netframework-1.1/aa903360(v=vs.71)) in Web.config
-* Windows Server 2008: Make sure you have installed the following updates: [KB2468871](https://support.microsoft.com/kb/2468871), [KB2533523](https://support.microsoft.com/kb/2533523), [KB2600217](https://web.archive.org/web/20150129090641/http://support.microsoft.com/kb/2600217).
+* Windows Server 2008: Make sure you have installed the following updates: [KB2468871](https://support.microsoft.com/kb/2468871), [KB2533523](https://support.microsoft.com/kb/2533523), [KB2600217](https://www.microsoft.com/download/details.aspx?id=28936).
## I used to see data, but it has stopped * Have you hit your monthly quota of data points? Open the Settings/Quota and Pricing to find out. If so, you can upgrade your plan, or pay for additional capacity. See the [pricing scheme](https://azure.microsoft.com/pricing/details/application-insights/).
azure-monitor Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/azure-ad-authentication.md
+
+ Title: Azure AD authentication for Application Insights (Preview)
+description: Learn how to enable Azure Active Directory (Azure AD) authentication to ensure that only authenticated telemetry is ingested in your Application Insights resources.
+ Last updated : 06/21/2021
+# Azure AD authentication for Application Insights (Preview)
+Application Insights now supports Azure Active Directory (Azure AD) authentication. By using Azure AD, you can now ensure that only authenticated telemetry is ingested in your Application Insights resources.
+
+Typically, using various authentication systems can be cumbersome and pose risk since it's difficult to manage credentials at a large scale. You can now choose to opt out of local authentication and ensure only telemetry that is exclusively authenticated using [Managed Identities](../../active-directory/managed-identities-azure-resources/overview.md) and [Azure Active Directory](../../active-directory/fundamentals/active-directory-whatis.md) is ingested in your Application Insights resource. This feature is a step to enhance the security and reliability of the telemetry used to make both critical operational (alerting, autoscale, and so on) and business decisions.
+
+> [!IMPORTANT]
+> Azure AD authentication is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+The following SDKs and scenarios aren't supported in the public preview:
+- [Application Insights Java 2.x SDK](java-2x-agent.md) – Azure AD authentication is only available for Application Insights Java agent 3.2.0 and later.
+- [ApplicationInsights JavaScript Web SDK](javascript.md).
+- [Application Insights OpenCensus Python SDK](opencensus-python.md) with Python version 3.4 and 3.5.
+- [Certificate/secret based Azure AD](../../active-directory/authentication/active-directory-certificate-based-authentication-get-started.md) isn't recommended for production. Use Managed Identities instead.
+- Codeless monitoring (for languages) that's on by default for App Service, VM/Virtual Machine Scale Sets, Azure Functions, and so on.
+- [Availability tests](availability-overview.md).
+
+## Prerequisites to enable Azure AD authentication ingestion
+
+- Familiarity with:
+ - [Managed identity](../../active-directory/managed-identities-azure-resources/overview.md).
+ - [Service principal](../../active-directory/develop/howto-create-service-principal-portal.md).
+ - [Assigning Azure roles](../../role-based-access-control/role-assignments-portal.md).
+- You have an "Owner" role on the resource group, which lets you grant access using [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
+
+## Configuring and enabling Azure AD based authentication
+
+1. Create an identity, if you don't already have one, using either a managed identity or a service principal:
+
+ 1. Using managed identity (Recommended):
+
[Set up a managed identity for your Azure service](../../active-directory/managed-identities-azure-resources/services-support-managed-identities.md) (VM, App Service, and so on).
+
+ 1. Using service principal (Not Recommended):
+
+ For more information on how to create an Azure AD application and service principal that can access resources, see [Create a service principal](../../active-directory/develop/howto-create-service-principal-portal.md).
+
+1. Assign role to the Azure Service.
+
Follow the steps in [Assign Azure roles](../../role-based-access-control/role-assignments-portal.md) to assign the "Monitoring Metrics Publisher" role on the target Application Insights resource to the Azure resource that sends the telemetry.
+
+ > [!NOTE]
> Although the "Monitoring Metrics Publisher" role name says metrics, it publishes all telemetry to the Application Insights resource.
+
+1. Follow the configuration guidance per language below.
+
+### [ASP.NET and .NET](#tab/net)
+
+> [!NOTE]
+> Support for Azure AD in the Application Insights .NET SDK is included starting with [version 2.18-Beta2](https://www.nuget.org/packages/Microsoft.ApplicationInsights/2.18.0-beta2).
+
+Application Insights .NET SDK supports the credential classes provided by [Azure Identity](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/identity/Azure.Identity#credential-classes).
+
+- `DefaultAzureCredential` is recommended for local development.
+- `ClientSecretCredential` is recommended for service principals.
+
+Below is an example of manually creating and configuring a `TelemetryConfiguration` using .NET:
+
+```csharp
+var config = new TelemetryConfiguration
+{
+ ConnectionString = "InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/"
+};
+var credential = new DefaultAzureCredential();
+config.SetAzureTokenCredential(credential);
+
+```
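+
+As a usage sketch, telemetry sent through a `TelemetryClient` built on this configuration is then authenticated with Azure AD; the event name below is illustrative:
+
+```csharp
+using Microsoft.ApplicationInsights;
+
+// Continues the snippet above; telemetry sent through this client is
+// authenticated with Azure AD.
+var client = new TelemetryClient(config);
+client.TrackEvent("AadAuthenticatedTelemetryTest");
+client.Flush();
+```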
+
+Below is an example of configuring the `TelemetryConfiguration` using ASP.NET Core:
+```csharp
+services.Configure<TelemetryConfiguration>(config =>
+{
+ var credential = new DefaultAzureCredential();
+ config.SetAzureTokenCredential(credential);
+});
+services.AddApplicationInsightsTelemetry(new ApplicationInsightsServiceOptions
+{
+ ConnectionString = "InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/"
+});
+```
+### [Node.js](#tab/nodejs)
+
+> [!NOTE]
+> Support for Azure AD in the Application Insights Node.js SDK is included starting with [version 2.1.0-beta.1](https://www.npmjs.com/package/applicationinsights/v/2.1.0-beta.1).
+
+The Application Insights Node.js SDK supports the credential classes provided by [Azure Identity](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/identity/identity#credential-classes).
+
+#### DefaultAzureCredential
+
+```javascript
+const appInsights = require("applicationinsights");
+const { DefaultAzureCredential } = require("@azure/identity");
+
+const credential = new DefaultAzureCredential();
+appInsights.setup("InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/").start();
+appInsights.defaultClient.aadTokenCredential = credential;
+
+```
+
+#### ClientSecretCredential
+
+```javascript
+const appInsights = require("applicationinsights");
+const { ClientSecretCredential } = require("@azure/identity");
+
+// Create a credential from a service principal (tenant ID, client ID, client secret).
+const credential = new ClientSecretCredential(
+    "<YOUR_TENANT_ID>",
+    "<YOUR_CLIENT_ID>",
+    "<YOUR_CLIENT_SECRET>"
+);
+appInsights.setup("InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/").start();
+appInsights.defaultClient.aadTokenCredential = credential;
+```
+
+### [Java](#tab/java)
+
+> [!NOTE]
+> Support for Azure AD in the Application Insights Java agent is included starting with [Java 3.2.0-BETA](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.2.0-BETA).
+
+1. [Configure your application with the Java agent.](java-in-process-agent.md#quickstart)
+
+ > [!IMPORTANT]
+   > Use the full connection string, which includes "IngestionEndpoint", when configuring your app with the Java agent. For example `InstrumentationKey=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX;IngestionEndpoint=https://XXXX.applicationinsights.azure.com/`.
+
+ > [!NOTE]
+ > For more information about migrating from 2.X SDK to 3.X Java agent, see [Upgrading from Application Insights Java 2.x SDK](java-standalone-upgrade-from-2x.md).
+
+1. Add the JSON configuration to the ApplicationInsights.json configuration file, depending on the authentication method you use. We recommend using managed identities.
+
+#### System-assigned Managed Identity
+
+Below is an example of how to configure the Java agent to use system-assigned managed identity for authentication with Azure AD.
+
+```JSON
+{
+ "connectionString": "App Insights Connection String with IngestionEndpoint",
+ "preview": {
+ "authentication": {
+ "enabled": true,
+ "type": "SAMI"
+ }
+ }
+}
+```
+
+#### User-assigned managed identity
+
+Below is an example of how to configure the Java agent to use user-assigned managed identity for authentication with Azure AD.
+
+```JSON
+{
+ "connectionString": "App Insights Connection String with IngestionEndpoint",
+ "preview": {
+ "authentication": {
+ "enabled": true,
+ "type": "UAMI",
+ "clientId":"<USER-ASSIGNED MANAGED IDENTITY CLIENT ID>"
+ }
+ }
+}
+```
+
+#### Client secret
+
+Below is an example of how to configure the Java agent to use a service principal for authentication with Azure AD. We recommend using this type of authentication only during development. The ultimate goal of the authentication feature is to eliminate secrets.
+
+```JSON
+{
+ "connectionString": "App Insights Connection String with IngestionEndpoint",
+ "preview": {
+ "authentication": {
+ "enabled": true,
+ "type": "CLIENTSECRET",
+ "clientId":"<YOUR CLIENT ID>",
+ "clientSecret":"<YOUR CLIENT SECRET>",
+ "tenantId":"<YOUR TENANT ID>"
+ }
+ }
+}
+```
+
+### [Python](#tab/python)
+
+> [!NOTE]
+> Azure AD authentication is only available for Python v2.7, v3.6, and v3.7. Support for Azure AD in the Application Insights OpenCensus Python SDK is included starting with beta version [opencensus-ext-azure 1.1b0](https://pypi.org/project/opencensus-ext-azure/1.1b0/).
+
+Construct the appropriate [credential](/python/api/overview/azure/identity-readme?view=azure-python#credentials) and pass it into the constructor of the Azure Monitor exporter. Make sure your connection string is set up with the instrumentation key and ingestion endpoint of your resource.
+
+The following types of authentication are supported by the OpenCensus Azure Monitor exporters. We recommend using managed identities in production environments.
+
+#### System-assigned managed identity
+
+```python
+from azure.identity import ManagedIdentityCredential
+
+from opencensus.ext.azure.trace_exporter import AzureExporter
+from opencensus.trace.samplers import ProbabilitySampler
+from opencensus.trace.tracer import Tracer
+
+credential = ManagedIdentityCredential()
+tracer = Tracer(
+ exporter=AzureExporter(credential=credential, connection_string="InstrumentationKey=<your-instrumentation-key>;IngestionEndpoint=<your-ingestion-endpoint>"),
+ sampler=ProbabilitySampler(1.0)
+)
+...
+
+```
+
+#### User-assigned managed identity
+
+```python
+from azure.identity import ManagedIdentityCredential
+
+from opencensus.ext.azure.trace_exporter import AzureExporter
+from opencensus.trace.samplers import ProbabilitySampler
+from opencensus.trace.tracer import Tracer
+
+credential = ManagedIdentityCredential(client_id="<client-id>")
+tracer = Tracer(
+ exporter=AzureExporter(credential=credential, connection_string="InstrumentationKey=<your-instrumentation-key>;IngestionEndpoint=<your-ingestion-endpoint>"),
+ sampler=ProbabilitySampler(1.0)
+)
+...
+
+```
+
+#### Client secret
+
+```python
+from azure.identity import ClientSecretCredential
+
+from opencensus.ext.azure.trace_exporter import AzureExporter
+from opencensus.trace.samplers import ProbabilitySampler
+from opencensus.trace.tracer import Tracer
+
+tenant_id = "<tenant-id>"
+client_id = "<client-id>"
+client_secret = "<client-secret>"
+
+credential = ClientSecretCredential(tenant_id=tenant_id, client_id=client_id, client_secret=client_secret)
+tracer = Tracer(
+ exporter=AzureExporter(credential=credential, connection_string="InstrumentationKey=<your-instrumentation-key>;IngestionEndpoint=<your-ingestion-endpoint>"),
+ sampler=ProbabilitySampler(1.0)
+)
+...
+```
++
+## Disable local authentication
+
+After Azure AD authentication is enabled, you can choose to disable local authentication. This allows you to ingest telemetry authenticated exclusively by Azure AD, and it impacts data access (for example, through API keys).
+
+You can disable local authentication by using the Azure portal or programmatically.
+
+### Azure portal
+
+1. From your Application Insights resource, select **Properties** under the *Configure* heading in the left-hand menu. Then select **Enabled (click to change)** if local authentication is enabled.
+
+ :::image type="content" source="./media/azure-ad-authentication/enabled.png" alt-text="Screenshot of Properties under the *Configure* selected and enabled (click to change) local authentication button.":::
+
+1. Select **Disabled** and apply changes.
+
+ :::image type="content" source="./media/azure-ad-authentication/disable.png" alt-text="Screenshot of local authentication with the enabled/disabled button highlighted.":::
+
+1. Once local authentication is disabled on your resource, you'll see the corresponding information in the **Overview** pane.
+
+ :::image type="content" source="./media/azure-ad-authentication/overview.png" alt-text="Screenshot of overview tab with the disabled(click to change) highlighted.":::
+
+### Programmatic enablement
+
+The `DisableLocalAuth` property is used to disable any local authentication on your Application Insights resource. When set to `true`, this property enforces that Azure AD authentication must be used for all access.
+
+Below is an example Azure Resource Manager template that you can use to create a workspace-based Application Insights resource with local auth disabled.
+
+```JSON
+{
+ "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "name": {
+ "type": "string"
+ },
+ "type": {
+ "type": "string"
+ },
+ "regionId": {
+ "type": "string"
+ },
+ "tagsArray": {
+ "type": "object"
+ },
+ "requestSource": {
+ "type": "string"
+ },
+ "workspaceResourceId": {
+ "type": "string"
+ },
+ "disableLocalAuth": {
+ "type": "bool"
+ }
+
+ },
+ "resources": [
+ {
+ "name": "[parameters('name')]",
+ "type": "microsoft.insights/components",
+ "location": "[parameters('regionId')]",
+ "tags": "[parameters('tagsArray')]",
+ "apiVersion": "2020-02-02-preview",
+ "dependsOn": [],
+ "properties": {
+ "Application_Type": "[parameters('type')]",
+ "Flow_Type": "Redfield",
+ "Request_Source": "[parameters('requestSource')]",
+ "WorkspaceResourceId": "[parameters('workspaceResourceId')]",
+ "DisableLocalAuth": "[parameters('disableLocalAuth')]"
+ }
+ }
+ ]
+}
+
+```
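+
+To toggle the setting on an existing resource instead of redeploying a template, one possibility is the generic Azure CLI resource command shown below (a sketch; the names are placeholders and the API version is an assumption):
+
+```azurecli
+az resource update --resource-group <resource-group> --name <application-insights-name> \
+  --resource-type "microsoft.insights/components" \
+  --api-version 2020-02-02-preview \
+  --set properties.DisableLocalAuth=true
+```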
+
+## Troubleshooting
+
+This section provides troubleshooting scenarios and steps that you can take to resolve issues before you raise a support ticket.
+
+### Ingestion HTTP errors
+
+The ingestion service will return specific errors, regardless of the SDK language. Network traffic can be collected using a tool such as Fiddler. You should filter traffic to the IngestionEndpoint value set in the connection string.
+
+#### HTTP/1.1 400 Incorrect API was used - v2 API does not support authentication
+
+This indicates that the Application Insights resource has been configured for Azure AD only, but the SDK hasn't been correctly configured and is sending to the incorrect API.
+
+> [!NOTE]
+> "v2/track" does not support Azure AD. When the SDK is correctly configured, telemetry will be sent to "v2.1/track".
+
+Next steps should be to review the SDK configuration.
+
+#### HTTP/1.1 401 Unauthorized - please provide the valid authorization token
+
+This indicates that the SDK has been correctly configured, but was unable to acquire a valid token. This may indicate an issue with Azure Active Directory.
+
+Next steps should be to identify exceptions in the SDK logs or network errors from Azure Identity.
+
+#### HTTP/1.1 403 Forbidden - provided credentials do not grant the access to ingest the telemetry into the component
+
+This indicates that the SDK has been configured with credentials that haven't been given permission to the Application Insights resource or subscription.
+
+Next steps should be to review the Application Insights resource's access control. The SDK must be configured with a credential that has been granted the "Monitoring Metrics Publisher" role.
+
+### [ASP.NET and .NET](#tab/net)
+
+#### Event Source
+
+The Application Insights .NET SDK emits error logs using event source. To learn more about collecting event source logs, see [Troubleshooting no data - collect logs with PerfView](asp-net-troubleshoot-no-data.md#PerfView).
+
+If the SDK fails to get a token, the exception message is logged as:
+`Failed to get AAD Token. Error message: `
+
+### [Node.js](#tab/nodejs)
+
+Internal logs can be turned on using the following setup. Once enabled, error logs will be shown in the console, including any errors related to Azure AD integration: for example, failure to generate a token when the wrong credentials are supplied, or errors when the ingestion endpoint fails to authenticate using the provided credentials.
+
+```javascript
+let appInsights = require("applicationinsights");
+appInsights.setup("InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/").setInternalLogging(true, true);
+```
+
+### [Java](#tab/java)
+
+#### HTTP Traffic
+
+You can inspect network traffic using a tool like Fiddler. To tunnel traffic through Fiddler, either add the following proxy settings to the configuration file:
+
+```JSON
+"proxy": {
+  "host": "localhost",
+  "port": 8888
+}
+```
+
+Or add the following JVM args when running your application: `-Djava.net.useSystemProxies=true -Dhttps.proxyHost=localhost -Dhttps.proxyPort=8888`
+
+If Azure AD is enabled in the agent, outbound traffic will include the HTTP header "Authorization".
++
+#### 401 Unauthorized
+
+If the following WARN message is seen in the log file, `WARN c.m.a.TelemetryChannel - Failed to send telemetry with status code: 401, please check your credentials`, it indicates the agent wasn't successful in sending telemetry. You probably haven't enabled Azure AD authentication on the agent, but your Application Insights resource is configured with `DisableLocalAuth: true`. Make sure you're passing in a valid credential and that it has permission to access your Application Insights resource.
++
+If using Fiddler, you might see the following response header: `HTTP/1.1 401 Unauthorized - please provide the valid authorization token`.
++
+#### CredentialUnavailableException
+
+If the following exception is seen in the log file, `com.azure.identity.CredentialUnavailableException: ManagedIdentityCredential authentication unavailable. Connection to IMDS endpoint cannot be established`, it indicates the agent wasn't successful in acquiring the access token. The probable reason is an invalid or wrong `clientId` in your user-assigned managed identity configuration.
++
+#### Failed to send telemetry
+
+If the following WARN message is seen in the log file, `WARN c.m.a.TelemetryChannel - Failed to send telemetry with status code: 403, please check your credentials`, it indicates the agent wasn't successful in sending telemetry. This might be because the provided credentials don't grant access to ingest telemetry into the component.
+
+If using Fiddler, you might see the following response header: `HTTP/1.1 403 Forbidden - provided credentials do not grant the access to ingest the telemetry into the component`.
+
+The root cause might be one of the following reasons:
+- You've created the resource with system-assigned managed identity enabled, or you've associated a user-assigned identity with the resource, but forgot to add the "Monitoring Metrics Publisher" role to the resource (if using SAMI) or to the user-assigned identity (if using UAMI).
+- You've provided the right credentials to get the access tokens, but the credentials don't belong to the right Application Insights resource. Make sure you see your resource (VM, App Service, and so on) or user-assigned identity with the "Monitoring Metrics Publisher" role in your Application Insights resource.
+
+#### Invalid TenantId
+
+If the following exception is seen in the log file, `com.microsoft.aad.msal4j.MsalServiceException: Specified tenant identifier '' is neither a valid DNS name, nor a valid external domain.`, it indicates the agent wasn't successful in acquiring the access token. The probable reason is an invalid or wrong `tenantId` in your client secret configuration.
+
+#### Invalid client secret
+
+If the following exception is seen in the log file, `com.microsoft.aad.msal4j.MsalServiceException: Invalid client secret is provided`, it indicates the agent wasn't successful in acquiring the access token. The probable reason is an invalid or wrong `clientSecret` in your client secret configuration.
++
+#### Invalid ClientId
+
+If the following exception is seen in the log file, `com.microsoft.aad.msal4j.MsalServiceException: Application with identifier '' was not found in the directory ''`, it indicates the agent wasn't successful in acquiring the access token. The probable reason is an invalid or wrong `clientId` in your client secret configuration.
+
+This can happen if the application has not been installed by the administrator of the tenant or consented to by any user in the tenant. You may have sent your authentication request to the wrong tenant.
+
+### [Python](#tab/python)
+
+#### Error starts with "credential error" (with no status code)
+
+Something is incorrect about the credential you're using, and the client isn't able to obtain a token for authorization. It's usually due to the credential lacking the required data for its state. An example would be passing in a system `ManagedIdentityCredential`, but the resource isn't configured to use system-assigned managed identity.
+
+#### Error starts with "authentication error" (with no status code)
+
+The client failed to authenticate with the given credential. This usually occurs when the credential used doesn't have the correct role assignments.
+
+#### I'm getting a status code 400 in my error logs
+
+You're probably missing a credential or your credential is set to `None`, but your Application Insights resource is configured with `DisableLocalAuth: true`. Make sure you're passing in a valid credential and that it has permission to access your Application Insights resource.
+
+#### I'm getting a status code 403 in my error logs
+
+This usually occurs when the provided credentials don't grant access to ingest telemetry for the Application Insights resource. Make sure your Application Insights resource has the correct role assignments.
++
+## Next Steps
+* [Monitor your telemetry in the portal](overview-dashboard.md).
+* [Diagnose with Live Metrics Stream](live-stream.md).
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/convert-classic-resource.md
Workspace-based Application Insights allows you to take advantage of all the lat
* [Customer-Managed Keys (CMK)](../logs/customer-managed-keys.md) provides encryption at rest for your data with encryption keys that only you have access to. * [Azure Private Link](../logs/private-link-security.md) allows you to securely link Azure PaaS services to your virtual network using private endpoints. * [Bring Your Own Storage (BYOS) for Profiler and Snapshot Debugger](./profiler-bring-your-own-storage.md) gives you full control over the encryption-at-rest policy, the lifetime management policy, and network access for all data associated with Application Insights Profiler and Snapshot Debugger.
-* [Capacity Reservation tiers](../logs/manage-cost-storage.md#pricing-model) enable you to save as much as 25% compared to the Pay-As-You-Go price.
+* [Commitment Tiers](../logs/manage-cost-storage.md#pricing-model) enable you to save as much as 30% compared to the Pay-As-You-Go price.
* Faster data ingestion via Log Analytics streaming ingestion. ## Migration process
azure-monitor Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/powershell.md
$Resource | Set-AzResource -Force
### Setting data retention using REST
-To get the current data retention for your Application Insights resource, you can use the OSS tool [ARMClient](https://github.com/projectkudu/ARMClient). (Learn more about ARMClient from articles by [David Ebbo](http://blog.davidebbo.com/2015/01/azure-resource-manager-client.html) and [Daniel Bowbyes](https://blog.bowbyes.co.nz/2016/11/02/using-armclient-to-directly-access-azure-arm-rest-apis-and-list-arm-policy-details/).) Here's an example using `ARMClient`, to get the current retention:
+To get the current data retention for your Application Insights resource, you can use the OSS tool [ARMClient](https://github.com/projectkudu/ARMClient). (Learn more about ARMClient from articles by [David Ebbo](http://blog.davidebbo.com/2015/01/azure-resource-manager-client.html) and Daniel Bowbyes.) Here's an example using `ARMClient`, to get the current retention:
```PS armclient GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/microsoft.insights/components/MyResourceName?api-version=2018-05-01-preview
Other automation articles:
* [Create an Application Insights resource](./create-new-resource.md#creating-a-resource-automatically) - quick method without using a template. * [Create web tests](../alerts/resource-manager-alerts-metric.md#availability-test-with-metric-alert) * [Send Azure Diagnostics to Application Insights](powershell-azure-diagnostics.md)
-* [Create release annotations](https://github.com/MohanGsk/ApplicationInsights-Home/blob/master/API/CreateReleaseAnnotation.ps1)
+* [Create release annotations](annotations.md)
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/manage-cost-storage.md
na Previously updated : 06/11/2021 Last updated : 06/21/2021
Higher usage is caused by one, or both of:
If you observe high data ingestion reported using the `Usage` records (see [below](#data-volume-by-solution)), but you don't observe the same results summing `_BilledSize` directly on the [data type](#data-volume-for-specific-events), it's possible you have significant late arriving data. [Here](#late-arriving-data) is more information on how to diagnose this.
+### Log Analytics Workspace Insights
+
+Start understanding your data volumes in the **Usage** tab of the [Log Analytics Workspace Insights workbook](log-analytics-workspace-insights-overview.md). On the **Usage Dashboard**, you can easily see:
+- Which data tables are ingesting the most data volume in the main table,
+- What are the top resources contributing data, and
+- What is the trend of data ingestion.
+
+You can pivot to the **Additional Queries** tab to easily execute more queries that are useful for understanding your data patterns.
+
+Learn more about the [capabilities of the Usage tab](log-analytics-workspace-insights-overview.md#usage-tab).
+
+While this workbook can answer many of these questions without even needing to run a query, to answer more specific questions or do deeper analyses, the queries in the next two sections will help get you started.
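+
+As a taste of those queries, the following sketch (assuming the standard `Usage` table schema, where `Quantity` is reported in MB) breaks down billable ingestion by data type over the past month:
+
+```kusto
+Usage
+| where TimeGenerated > ago(32d)
+| where IsBillable == true
+| summarize BillableDataGB = sum(Quantity) / 1000 by DataType
+| sort by BillableDataGB desc
+```
+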
+ ## Understanding nodes sending data To understand the number of nodes reporting heartbeats from the agent each day in the last month, use
Some suggestions for reducing the volume of logs collected include:
| Solution data from computers that don't need the solution | Use [solution targeting](../insights/solution-targeting.md) to collect data from only required groups of computers. | | Application Insights | Review options for [managing Application Insights data volume](../app/pricing.md#managing-your-data-volume) | | [SQL Analytics](../insights/azure-sql.md) | Use [Set-AzSqlServerAudit](/powershell/module/az.sql/set-azsqlserveraudit) to tune the auditing settings. |
-| Azure Sentinel | Review any [Sentinel data sources](../../sentinel/connect-data-sources.md) which you recently enabled as sources of additional data volume. |
+| Azure Sentinel | Review any [Sentinel data sources](../../sentinel/connect-data-sources.md) which you recently enabled as sources of additional data volume. Learn more about [managing Sentinel costs](../../sentinel/azure-sentinel-billing.md#manage-azure-sentinel-costs). |
### Getting nodes as billed in the Per Node pricing tier
azure-monitor Private Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/private-storage.md
Log Analytics relies on Azure Storage in various scenarios. This use is typicall
## Ingesting Azure Diagnostics extension logs (WAD/LAD) The Azure Diagnostics extension agents (also called WAD and LAD for Windows and Linux agents respectively) collect various operating system logs and store them on a customer-managed storage account. You can then ingest these logs into Log Analytics to review and analyze them. ### How to collect Azure Diagnostics extension logs from your storage account
-Connect the storage account to your Log Analytics workspace as a storage data source using [the Azure portal](../agents/diagnostics-extension-logs.md#collect-logs-from-azure-storage) or by calling the [Storage Insights API](/rest/api/loganalytics/storage%20insights/createorupdate).
+Connect the storage account to your Log Analytics workspace as a storage data source using [the Azure portal](../agents/diagnostics-extension-logs.md#collect-logs-from-azure-storage) or by calling the [Storage Insights API](/rest/api/loganalytics/storage-insights/create-or-update).
Supported data types: * Syslog
To replace a storage account used for ingestion,
When using your own storage account, retention is up to you. Log Analytics won't delete logs stored on your private storage. Instead, you should set up a policy to handle the load according to your preferences. #### Consider load
-Storage accounts can handle a certain load of read and write requests before they start throttling requests (For more information, see [Scalability and performance targets for Blob storage](../../storage/common/scalability-targets-standard-account.md)). Throttling affects the time it takes to ingest logs. If your storage account is overloaded, register an additional storage account to spread the load between them. To monitor your storage accountΓÇÖs capacity and performance review its [Insights in the Azure portal]( https://docs.microsoft.com/azure/azure-monitor/insights/storage-insights-overview).
+Storage accounts can handle a certain load of read and write requests before they start throttling requests (For more information, see [Scalability and performance targets for Blob storage](../../storage/common/scalability-targets-standard-account.md)). Throttling affects the time it takes to ingest logs. If your storage account is overloaded, register an additional storage account to spread the load between them. To monitor your storage accountΓÇÖs capacity and performance review its [Insights in the Azure portal](/azure/azure-monitor/insights/storage-insights-overview).
### Related charges Storage accounts are charged by the volume of stored data, the type of the storage, and the type of redundancy. For details see [Block blob pricing](https://azure.microsoft.com/pricing/details/storage/blobs) and [Table Storage pricing](https://azure.microsoft.com/pricing/details/storage/tables).
Storage accounts are charged by the volume of stored data, the type of the stora
## Next steps - Learn about [using Azure Private Link to securely connect networks to Azure Monitor](private-link-security.md)-- Learn about [Azure Monitor customer-managed keys](../logs/customer-managed-keys.md)
+- Learn about [Azure Monitor customer-managed keys](../logs/customer-managed-keys.md)
azure-monitor Partners https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/partners.md
Moogsoft runs in your Azure real-estate with integration to monitoring and autom
![NewRelic Logo](./media/partners/newrelic.png)
-[Newrelic documentation](https://newrelic.com/azure)
+[Newrelic documentation](https://newrelic.com/solutions/partners/azure)
## OpsGenie
azure-portal Set Preferences https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/set-preferences.md
Title: Manage Azure portal settings and preferences description: Change Azure portal settings such as default subscription/directory, timeouts, menu mode, contrast, theme, notifications, language/region and more. Previously updated : 06/17/2021 Last updated : 06/21/2021
The new subscription filtering experience can help you manage large numbers of s
:::image type="content" source="./media/set-preferences/azure-portal-subscription-filtering-opt-in.png" alt-text="Screenshot showing the opt-in option for the new subscription filter settings.":::
+> [!IMPORTANT]
+> If you have access to delegated subscriptions through [Azure Lighthouse](../lighthouse/overview.md), be sure that all directories and subscriptions are selected before you select the **Try it now** link, or else the new experience may not show all of the subscriptions to which you have access. If that happens, you can select **Switch back to the previous view** in the **Subscriptions + filters** pane, then repeat the opt in process with all directories and subscriptions selected. For more information, see [Work in the context of a delegated subscription](../lighthouse/how-to/view-manage-customers.md#work-in-the-context-of-a-delegated-subscription).
+ In the new experience, the **Subscriptions + filters** pane lets you create customized filters. When you activate one of your filters, the full portal experience will be scoped to show only the subscriptions to which the filter applies. You can do this by selecting **Activate** in the **Subscription + filters** pane, or in the **Subscriptions + filters** section of the overview pane. :::image type="content" source="./media/set-preferences/azure-portal-settings-filtering.png" alt-text="Screenshot showing the Subscriptions + filters settings pane.":::
azure-resource-manager Template Tutorial Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-tutorial-quickstart-template.md
This template works for deploying storage accounts and app service plans, but yo
1. Open [Azure Quickstart templates](https://azure.microsoft.com/resources/templates/) 1. In **Search**, enter _deploy linux web app_.
-1. Select the tile with the title **Deploy a basic Linux web app**. If you have trouble finding it, here's the [direct link](https://azure.microsoft.com/en-us/resources/templates/webapp-basic-linux/).
+1. Select the tile with the title **Deploy a basic Linux web app**. If you have trouble finding it, here's the [direct link](https://azure.microsoft.com/resources/templates/webapp-basic-linux/).
1. Select **Browse on GitHub**. 1. Select _azuredeploy.json_. 1. Review the template. In particular, look for the `Microsoft.Web/sites` resource.
azure-sql-edge Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/release-notes.md
This article describes what's new and what has changed with every new build of A
## Azure SQL Edge 1.0.4
-SQL engine build 15.0.2000.1558
+SQL engine build 15.0.2000.1559
### What's new?
SQL engine build 15.0.2000.1554
## Azure SQL Edge 1.0.2
-SQL engine build 15.0.2000.1554
+SQL engine build 15.0.2000.1557
### Fixes
azure-sql Features Comparison https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/features-comparison.md
The Azure platform provides a number of PaaS capabilities that are added as an a
| [VNet](../../virtual-network/virtual-networks-overview.md) | Partial, it enables restricted access using [VNet Endpoints](vnet-service-endpoint-rule-overview.md) | Yes, SQL Managed Instance is injected in customer's VNet. See [subnet](../managed-instance/transact-sql-tsql-differences-sql-server.md#subnet) and [VNet](../managed-instance/transact-sql-tsql-differences-sql-server.md#vnet) | | VNet Service endpoint | [Yes](vnet-service-endpoint-rule-overview.md) | No | | VNet Global peering | Yes, using [Private IP and service endpoints](vnet-service-endpoint-rule-overview.md) | Yes, using [Virtual network peering](https://techcommunity.microsoft.com/t5/azure-sql/new-feature-global-vnet-peering-support-for-azure-sql-managed/ba-p/1746913). |
-| [Private connectivity](../../private-link/private-link-overview.md) | Yes, using [Private Link](/database/private-endpoint-overview.md) | Yes, using VNet. |
+| [Private connectivity](../../private-link/private-link-overview.md) | Yes, using [Private Link](/azure/private-link/private-endpoint-overview) | Yes, using VNet. |
## Tools
azure-sql Hyperscale Named Replica Security Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/hyperscale-named-replica-security-configure.md
+
+ Title: Configure named replicas security to allow isolated access
+description: Learn the security considerations for configuring and managing a named replica so that a user can access the named replica but not other replicas.
+++++++ Last updated : 3/29/2021+
+# Configure Security to allow isolated access to Azure SQL Database Hyperscale Named Replicas
+
+This article describes the authentication requirements to configure an Azure SQL Hyperscale [named replica](service-tier-hyperscale-replicas.md) so that a user will be allowed access to specific replicas only. This scenario allows complete isolation of the named replica from the primary - as the named replica runs on its own compute node - and it is useful whenever isolated read-only access to an Azure SQL Hyperscale database is needed. Isolated, in this context, means that CPU and memory are not shared between the primary and the named replica, and queries running on the named replica will not use any compute resources of the primary or of any other replica.
+
+## Create a new login on the master database
+
+In the `master` database on the logical server hosting the primary database, execute the following to create a new login that will be used to manage access to the primary and the named replica:
+
+```sql
+create login [third-party-login] with password = 'Just4STRONG_PAZzW0rd!';
+```
+
+Now get the SID from the `sys.sql_logins` system view:
+
+```sql
+select [sid] from sys.sql_logins where name = 'third-party-login'
+```
+
+As a last action, disable the login. This will prevent the login from accessing any database on the server:
+
+```sql
+alter login [third-party-login] disable
+```
+
+As an optional step, in case there are concerns about the login getting enabled in any way, you can drop the login from the server via:
+
+```sql
+drop login [third-party-login]
+```
+
+## Create database user in the primary replica
+
+Once the login has been created, connect to the primary replica of the database, for example WideWorldImporters (you can find a sample script to restore it here: [Restore Database in Azure SQL](https://github.com/yorek/azure-sql-db-samples/tree/master/samples/01-restore-database)) and create the database user for that login:
+
+```sql
+create user [third-party-user] from login [third-party-login]
+```
+
+## Create a named replica
+
+Create a new Azure SQL logical server that will be used to isolate access to the database to be shared. Follow the instructions at [Create and manage servers and single databases in Azure SQL Database](single-database-manage.md) if you need help.
+
+For example, using the Azure CLI:
+
+```azurecli
+az sql server create -g MyResourceGroup -n MyPrimaryServer -l MyLocation --admin-user MyAdminUser --admin-password MyStrongADM1NPassw0rd!
+```
+
+Make sure the region you choose is the same as the primary server's region. Then create a named replica, for example with the Azure CLI:
+
+```azurecli
+az sql db replica create -g MyResourceGroup -n WideWorldImporters -s MyPrimaryServer --secondary-type Named --partner-database WideWorldImporters_NR --partner-server MySecondaryServer
+```
+
+## Create login in the named replica
+
+Connect to the `master` database on the logical server hosting the named replica. Add the login using the SID retrieved from the primary replica:
+
+```sql
+create login [third-party-login] with password = 'Just4STRONG_PAZzW0rd!', sid = 0x0...1234;
+```
+
+Done. Now the `third-party-login` login can connect to the named replica database, but will be denied access to the primary replica.
+
+## Test access
+
+You can try the security configuration by using any client tool to connect to the primary and the named replica. For example using `sqlcmd`, you can try to connect to the primary replica using the `third-party-login` user:
+
+```
+sqlcmd -S MyPrimaryServer.database.windows.net -U third-party-login -P Just4STRONG_PAZzW0rd! -d WideWorldImporters
+```
+
+This will result in an error, as the user is not allowed to connect to the server:
+
+```
+Sqlcmd: Error: Microsoft ODBC Driver 13 for SQL Server : Login failed for user 'third-party-login'. Reason: The account is disabled..
+```
+
+The same user can connect to the named replica instead:
+
+```
+sqlcmd -S MySecondaryServer.database.windows.net -U third-party-login -P Just4STRONG_PAZzW0rd! -d WideWorldImporters_NR
+```
+
+This time, the connection will succeed without errors.
++
+## Next steps
+
+Once you have set up security in this way, you can use the regular `grant`, `deny` and `revoke` commands to manage access to resources. Remember to use these commands on the primary replica: their effect will also be applied to all named replicas, allowing you to decide who can access what, just as it would happen normally.
+
+Remember that by default a newly created user has a very minimal set of permissions granted (for example they cannot access any user table), so if you want to allow `third-party-user` to access a table, you need to explicitly grant this permission:
+
+```sql
+grant select on [Application].[Cities] to [third-party-user]
+```
+
+Or you can add the user to the `db_datareader` [database role](/sql/relational-databases/security/authentication-access/database-level-roles) to allow access to all tables, or you can use [schemas](/sql/relational-databases/security/authentication-access/create-a-database-schema) to [allow access](/sql/t-sql/statements/grant-schema-permissions-transact-sql) to all tables in a schema.
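+
+For example, adding the user to that role is a one-line change (a sketch; run it on the primary replica, like the grant above):
+
+```sql
+alter role db_datareader add member [third-party-user]
+```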
+
+For more information:
+
+* Azure SQL logical Servers, see [What is a server in Azure SQL Database](logical-servers.md)
+* Managing database access and logins, see [SQL Database security: Manage database access and login security](logins-create-manage.md)
+* Database engine permissions, see [Permissions](/sql/relational-databases/security/permissions-database-engine)
+* Granting object permissions, see [GRANT Object Permissions](/sql/t-sql/statements/grant-object-permissions-transact-sql)
+++
azure-sql Maintenance Window https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/maintenance-window.md
The maintenance window feature allows you to configure maintenance schedule for
## Overview
-Azure periodically performs [planned maintenance](planned-maintenance.md) of SQL Database and SQL managed instance resources. During Azure SQL maintenance event, databases are fully available but can be subject to short reconfigurations within respective availability SLAs for [SQL Database](https://azure.microsoft.com/support/legal/sla/sql-database) and [SQL managed instance](https://azure.microsoft.com/support/legal/sla/azure-sql-sql-managed-instance).
+Azure periodically performs [planned maintenance](planned-maintenance.md) of SQL Database and SQL managed instance resources. During Azure SQL maintenance event, databases are fully available but can be subject to short reconfigurations within respective availability SLAs for [SQL Database](https://azure.microsoft.com/support/legal/sla/azure-sql-database) and [SQL managed instance](https://azure.microsoft.com/support/legal/sla/azure-sql-sql-managed-instance).
Maintenance window is intended for production workloads that are not resilient to database or instance reconfigurations and cannot absorb short connection interruptions caused by planned maintenance events. By choosing a maintenance window you prefer, you can minimize the impact of planned maintenance as it will be occurring outside of your peak business hours. Resilient workloads and non-production workloads may rely on Azure SQL's default maintenance policy.
azure-sql Service Tier Hyperscale Replicas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tier-hyperscale-replicas.md
+
+ Title: Hyperscale secondary replicas
+description: This article describes the different types of secondary replicas available in the Hyperscale service tier.
+++++++ Last updated : 6/9/2021++
+# Hyperscale secondary replicas
+
+As described in [Distributed functions architecture](service-tier-hyperscale.md), Azure SQL Database Hyperscale has two different types of compute nodes, also referred to as "replicas".
+- Primary: serves read and write operations
+- Secondary: provides read scale-out, high availability and geo-replication
+
+A secondary replica can be of three different types:
+
+- High Availability replica
+- Named replica (in Preview)
+- Geo-replica (in Preview)
+
+Each type has a different architecture, feature set, purpose, and cost. Based on the features you need, you may use just one or even all of the three together.
+
+## High Availability replica
+
+A High Availability (HA) replica uses the same page servers as the primary replica, so no data copy is required to add an HA replica. HA replicas are mainly used to provide high availability, as they act as a hot standby for failover purposes. If the primary replica becomes unavailable, failover to one of the existing HA replicas is automatic. The connection string doesn't need to change; during failover, applications may experience minimal downtime due to active connections being dropped. As usual for this scenario, proper connection retry logic is recommended. Several drivers already provide some degree of automatic retry logic.
+
+If you are using .NET, the [latest Microsoft.Data.SqlClient](https://devblogs.microsoft.com/azure-sql/configurable-retry-logic-for-microsoft-data-sqlclient/) library provides full native support for configurable automatic retry logic.
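+
+As an illustration only - a minimal sketch assuming Microsoft.Data.SqlClient 3.0 or later, with arbitrary option values:
+
+```csharp
+using System;
+using Microsoft.Data.SqlClient;
+
+// The feature was gated behind this safety switch while in preview.
+AppContext.SetSwitch("Switch.Microsoft.Data.SqlClient.EnableRetryLogic", true);
+
+var options = new SqlRetryLogicOption
+{
+    NumberOfTries = 5,                         // total attempts, including the first
+    DeltaTime = TimeSpan.FromSeconds(1),       // base gap between attempts
+    MaxTimeInterval = TimeSpan.FromSeconds(20) // upper bound on the gap between attempts
+};
+
+using var connection = new SqlConnection("<your-connection-string>");
+connection.RetryLogicProvider =
+    SqlConfigurableRetryFactory.CreateExponentialRetryProvider(options);
+connection.Open(); // transient errors, such as those during a failover, are retried automatically
+```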
+
+HA replicas use the same server and database name as the primary replica. Their service level objective is also always the same as for the primary replica. HA replicas are not manageable as a stand-alone resource from the portal or from any other tool or DMV.
+
+There can be zero to four HA replicas. Their number can be changed during the creation of a database, or after the database has been created, via the usual management endpoints and tools (for example: PowerShell, the Azure CLI, the portal, the REST API). Creating or removing HA replicas does not affect connections running on the primary replica.
+
+### Connecting to an HA replica
+
+In Hyperscale databases, the `ApplicationIntent` argument in the connection string used by the client dictates whether the connection is routed to the read-write primary replica or to a read-only HA replica. If `ApplicationIntent` is set to `ReadOnly` and the database doesn't have a secondary replica, the connection will be routed to the primary replica and will default to the `ReadWrite` behavior.
+
+```csharp
+-- Connection string with application intent
+Server=tcp:<myserver>.database.windows.net;Database=<mydatabase>;ApplicationIntent=ReadOnly;User ID=<myLogin>;Password=<myPassword>;Trusted_Connection=False; Encrypt=True;
+```
+
+Because all HA replicas of a given Hyperscale database are identical in their resource capacity, if more than one secondary replica is present, the read-intent workload is distributed across all available HA secondaries. When there are multiple HA replicas, keep in mind that each one could have different data latency with respect to data changes made on the primary. Each HA replica uses the same data as the primary on the same set of page servers. Local caches on each HA replica reflect the changes made on the primary via the transaction log service, which forwards log records from the primary replica to HA replicas. As a result, depending on the workload being processed by an HA replica, application of log records may happen at different speeds, and thus different replicas could have different data latency relative to the primary replica.
+
+## Named replica (in Preview)
+
+A named replica, just like an HA replica, uses the same page servers as the primary replica. Similar to HA replicas, there is no data copy needed to add a named replica.
+
+> [!NOTE]
+> For frequently asked questions on Hyperscale named replicas, see [Azure SQL Database Hyperscale named replicas FAQ](service-tier-hyperscale-named-replicas-faq.yml).
+
+The difference from HA replicas is that named replicas:
+
+- appear as regular (read-only) Azure SQL databases in the portal and in API (CLI, PowerShell, T-SQL) calls
+- can have a database name different from the primary replica, and optionally be located on a different logical server (as long as it is in the same region as the primary replica)
+- have their own service level objective, which can be set and changed independently from the primary replica
+- can number up to 30 for each primary replica
+- support different authentication and authorization for each named replica, by creating different logins on the logical servers hosting the named replicas
+
+The main goal of named replicas is to enable massive OLTP read scale-out scenarios and to improve Hybrid Transactional and Analytical Processing (HTAP) workloads. Examples of how to create such solutions are available here:
+
+- [OLTP scale-out sample](https://github.com/Azure-Samples/azure-sql-db-named-replica-oltp-scaleout)
+- [HTAP scale-out sample](https://github.com/Azure-Samples/azure-sql-db-named-replica-htap)
+
+Aside from the main scenarios listed above, named replicas offer flexibility and elasticity to also satisfy many other use cases:
+- [Access Isolation](hyperscale-named-replica-security-configure.md): grant a login access to a named replica only and deny it from accessing the primary replica or other named replicas.
+- Workload-Dependent Service Objective: as a named replica can have its own service level objective, it is possible to use different named replicas for different workloads and use cases. For example, one named replica could be used to serve Power BI requests, while another can be used to serve data to Apache Spark for Data Science tasks. Each one can have an independent service level objective and scale independently.
+- Workload-Dependent Routing: with up to 30 named replicas, it is possible to use named replicas in groups so that one application is isolated from another. For example, a group of four named replicas could be used to serve requests coming from mobile applications, while another group of two named replicas can be used to serve requests coming from a web application. This approach allows fine-grained tuning of performance and costs for each group.
+
+The following example creates named replica `WideWorldImporters_NR` for database `WideWorldImporters` with service level objective HS_Gen5_2. Both use the same logical server `MyServer`. If you prefer to use the REST API directly, that option is also available: [Databases - Create A Database As Named Replica Secondary](/rest/api/sql/2020-11-01-preview/databases/createorupdate#creates-a-database-as-named-replica-secondary).
+
+# [T-SQL](#tab/tsql)
+```sql
+ALTER DATABASE [WideWorldImporters]
+ADD SECONDARY ON SERVER [MyServer]
+WITH (SERVICE_OBJECTIVE = 'HS_Gen5_2', SECONDARY_TYPE = Named, DATABASE_NAME = [WideWorldImporters_NR]);
+```
+# [PowerShell](#tab/azure-powershell)
+```azurepowershell
+New-AzSqlDatabaseSecondary -ResourceGroupName "MyResourceGroup" -ServerName "MyServer" -DatabaseName "WideWorldImporters" -PartnerResourceGroupName "MyResourceGroup" -PartnerServerName "MyServer" -PartnerDatabaseName "WideWorldImporters_NR" -SecondaryType Named -SecondaryServiceObjectiveName HS_Gen5_2
+```
+# [Azure CLI](#tab/azure-cli)
+```azurecli
+az sql db replica create -g MyResourceGroup -n WideWorldImporters -s MyServer --secondary-type named --partner-database WideWorldImporters_NR --partner-server MyServer --service-objective HS_Gen5_2
+```
+++
+As there is no data movement involved, in most cases a named replica will be created in about a minute. Once the named replica is available, it will be visible from the portal or from any command-line tool like the Azure CLI or PowerShell. A named replica is usable as a regular database, with the exception that it is read-only.
+
+### Connecting to a named replica
+
+To connect to a named replica, you must use the connection string for that named replica. There is no need to specify the "ApplicationIntent" option, as named replicas are always read-only. Specifying it is still possible, but it will not have any additional effect.
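+
+For example (a sketch following the HA connection string above; the server and database names are placeholders):
+
+```csharp
+-- Connection string for a named replica; ApplicationIntent is not required
+Server=tcp:<myserver>.database.windows.net;Database=<mynamedreplica>;User ID=<myLogin>;Password=<myPassword>;Trusted_Connection=False; Encrypt=True;
+```
+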
+Just like for HA replicas, even though the primary, HA, and named replicas share the same data on the same set of page servers, caches on each named replica are kept in sync with the primary via the transaction log service, which forwards log records from the primary to named replicas. As a result, depending on the workload being processed by a named replica, application of the log records may happen at different speeds and thus different replicas could have different data latency relative to the primary replica.
+
+### Modifying a named replica
+
+You can define the service level objective of a named replica when you create it, via the `ALTER DATABASE` command or in any other supported way (the Azure CLI, PowerShell, the REST API). If you need to change the service level objective after the named replica has been created, you can do it using the regular `ALTER DATABASE … MODIFY` command on the named replica itself. For example, if `WideWorldImporters_NR` is the named replica of the `WideWorldImporters` database, you can do it as shown below.
+
+# [T-SQL](#tab/tsql)
+```sql
+ALTER DATABASE [WideWorldImporters_NR] MODIFY (SERVICE_OBJECTIVE = 'HS_Gen5_4')
+```
+# [PowerShell](#tab/azure-powershell)
+```azurepowershell
+Set-AzSqlDatabase -ResourceGroup "MyResourceGroup" -ServerName "MyServer" -DatabaseName "WideWorldImporters_NR" -RequestedServiceObjectiveName "HS_Gen5_4"
+```
+# [Azure CLI](#tab/azure-cli)
+```azurecli
+az sql db update -g MyResourceGroup -s MyServer -n WideWorldImporters_NR --service-objective HS_Gen5_4
+```
+++
+### Removing a named replica
+
+To remove a named replica, you drop it just like you would do with a regular database. Make sure you are connected to the `master` database of the server with the named replica you want to drop, and then use the following command:
+
+# [T-SQL](#tab/tsql)
+```sql
+DROP DATABASE [WideWorldImporters_NR];
+```
+# [PowerShell](#tab/azure-powershell)
+```azurepowershell
+Remove-AzSqlDatabase -ResourceGroupName "MyResourceGroup" -ServerName "MyServer" -DatabaseName "WideWorldImporters_NR"
+```
+# [Azure CLI](#tab/azure-cli)
+```azurecli
+az sql db delete -g MyResourceGroup -s MyServer -n WideWorldImporters_NR
+```
++
+> [!NOTE]
+> Named replicas will also be removed when the primary replica from which they have been created is deleted.
+
+### Known issues
+
+#### Partially incorrect data returned from sys.databases
+During public preview, row values returned from `sys.databases` for named replicas, in columns other than `name` and `database_id`, may be inconsistent and incorrect. For example, the `compatibility_level` column for a named replica could be reported as 140 even if the primary database from which the named replica has been created is set to 150. A workaround, when possible, is to get the same data using the system function `databasepropertyex`, which returns the correct data.
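+
+For example, the following query (a sketch; `Updateability` is one of the properties exposed by `databasepropertyex`) can be run on a named replica to confirm that it correctly reports itself as read-only:
+
+```sql
+SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability') AS [Updateability]
+```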
++
+## Geo-replica (in Preview)
+
+With [active geo-replication](active-geo-replication-overview.md), you can create a readable secondary replica of the primary Hyperscale database in the same or in a different region. Geo-replicas must be created on a different logical server. The database name of a geo-replica always matches the database name of the primary.
+
+When creating a geo-replica, all data is copied from the primary to a different set of page servers. A geo-replica does not share page servers with the primary, even if they are in the same region. This architecture provides the necessary redundancy for geo-failovers.
+
+Geo-replicas are primarily used to maintain a transactionally consistent copy of the database via asynchronous replication in a different geographical region for disaster recovery in case of a disaster or outage in the primary region. Geo-replicas can also be used for geographic read scale-out scenarios.
+
+With [active geo-replication on Hyperscale](active-geo-replication-overview.md), failover must be initiated manually. After failover, the new primary will have a different connection end point, referencing the logical server name hosting the new primary replica. For more information, see [active geo-replication](active-geo-replication-overview.md).
+
+Geo-replication for Hyperscale databases is currently in preview, with the following limitations:
+- Only one geo-replica can be created (in the same or different region).
+- Failover groups are not supported.
+- Planned failover is not supported.
+- Point-in-time restore of the geo-replica is not supported.
+- Creating a database copy of the geo-replica is not supported.
+- Secondary of a secondary (also known as "geo-replica chaining") is not supported.
+
+## Next steps
+
+- [Hyperscale service tier](service-tier-hyperscale.md)
+- [Active geo-replication](active-geo-replication-overview.md)
+- [Configure Security to allow isolated access to Azure SQL Database Hyperscale Named Replicas](hyperscale-named-replica-security-configure.md)
+- [Azure SQL Database Hyperscale named replicas FAQ](service-tier-hyperscale-named-replicas-faq.yml)
azure-sql Service Tier Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tier-hyperscale.md
Last updated 3/31/2021
# Hyperscale service tier Azure SQL Database is based on SQL Server Database Engine architecture that is adjusted for the cloud environment in order to ensure 99.99% availability even in the cases of infrastructure failures. There are three architectural models that are used in Azure SQL Database:
The Hyperscale service tier in Azure SQL Database provides the following additio
- Nearly instantaneous database backups (based on file snapshots stored in Azure Blob storage) regardless of size with no IO impact on compute resources - Fast database restores (based on file snapshots) in minutes rather than hours or days (not a size of data operation) - Higher overall performance due to higher log throughput and faster transaction commit times regardless of data volumes-- Rapid scale out - you can provision one or more read-only nodes for offloading your read workload and for use as hot-standbys
+- Rapid scale out - you can provision one or more [read-only replicas](service-tier-hyperscale-replicas.md) for offloading your read workload and for use as hot-standbys
- Rapid Scale up - you can, in constant time, scale up your compute resources to accommodate heavy workloads when needed, and then scale the compute resources back down when not needed.
-The Hyperscale service tier removes many of the practical limits traditionally seen in cloud databases. Where most other databases are limited by the resources available in a single node, databases in the Hyperscale service tier have no such limits. With its flexible storage architecture, storage grows as needed. In fact, Hyperscale databases aren't created with a defined max size. A Hyperscale database grows as needed - and you're billed only for the capacity you use. For read-intensive workloads, the Hyperscale service tier provides rapid scale-out by provisioning additional read replicas as needed for offloading read workloads.
+The Hyperscale service tier removes many of the practical limits traditionally seen in cloud databases. Where most other databases are limited by the resources available in a single node, databases in the Hyperscale service tier have no such limits. With its flexible storage architecture, storage grows as needed. In fact, Hyperscale databases aren't created with a defined max size. A Hyperscale database grows as needed - and you're billed only for the capacity you use. For read-intensive workloads, the Hyperscale service tier provides rapid scale-out by provisioning additional replicas as needed for offloading read workloads.
Additionally, the time required to create database backups or to scale up or down is no longer tied to the volume of data in the database. Hyperscale databases can be backed up virtually instantaneously. You can also scale a database in the tens of terabytes up or down in minutes. This capability frees you from concerns about being boxed in by your initial configuration choices.
Hyperscale service tier is only available in [vCore model](service-tiers-vcore.m
- **Compute**:
- The Hyperscale compute unit price is per replica. The [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/) price is applied to read scale replicas automatically. We create a primary replica and one read-only replica per Hyperscale database by default. Users may adjust the total number of replicas including the primary from 1-5.
+ The Hyperscale compute unit price is per replica. The [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/) price is applied to high-availability and named replicas automatically. We create a primary replica and one secondary [high-availability replica](service-tier-hyperscale-replicas.md) per Hyperscale database by default. Users may adjust the total number of high-availability replicas from 0-4, depending on the needed [SLA](https://azure.microsoft.com/support/legal/sla/sql-database/).
- **Storage**:
For more information about Hyperscale pricing, see [Azure SQL Database Pricing](
## Distributed functions architecture
-Unlike traditional database engines that have centralized all of the data management functions in one location/process (even so called distributed databases in production today have multiple copies of a monolithic data engine), a Hyperscale database separates the query processing engine, where the semantics of various data engines diverge, from the components that provide long-term storage and durability for the data. In this way, the storage capacity can be smoothly scaled out as far as needed (initial target is 100 TB). Read-only replicas share the same storage components so no data copy is required to spin up a new readable replica.
+Unlike traditional database engines that have centralized all of the data management functions in one location/process (even so-called distributed databases in production today have multiple copies of a monolithic data engine), a Hyperscale database separates the query processing engine, where the semantics of various data engines diverge, from the components that provide long-term storage and durability for the data. In this way, the storage capacity can be smoothly scaled out as far as needed (initial target is 100 TB). High-availability and named replicas share the same storage components, so no data copy is required to spin up a new replica.
The following diagram illustrates the different types of nodes in a Hyperscale database:
ALTER DATABASE [DB2] MODIFY (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen
GO ```
-## Connect to a read-scale replica of a Hyperscale database
-
-In Hyperscale databases, the `ApplicationIntent` argument in the connection string provided by the client dictates whether the connection is routed to the write replica or to a read-only secondary replica. If the `ApplicationIntent` set to `READONLY` and the database doesn't have a secondary replica, connection will be routed to the primary replica and defaults to `ReadWrite` behavior.
-
-```cmd
Connection string with application intent
-Server=tcp:<myserver>.database.windows.net;Database=<mydatabase>;ApplicationIntent=ReadOnly;User ID=<myLogin>;Password=<myPassword>;Trusted_Connection=False; Encrypt=True;
-```
-
-Hyperscale secondary replicas are all identical, using the same Service Level Objective as the primary replica. If more than one secondary replica is present, the workload is distributed across all available secondaries. Each secondary replica is updated independently. Thus, different replicas could have different data latency relative to the primary replica.
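To make the read-intent routing described above concrete, here's a minimal C# sketch. It assumes the `Microsoft.Data.SqlClient` package; the server, database, and credential values are placeholders.

```csharp
using Microsoft.Data.SqlClient;

// Build a connection string with ApplicationIntent=ReadOnly so the gateway
// routes the session to a readable secondary replica when one exists.
var builder = new SqlConnectionStringBuilder
{
    DataSource = "tcp:myserver.database.windows.net",  // placeholder server
    InitialCatalog = "mydatabase",                     // placeholder database
    UserID = "myLogin",                                // placeholder login
    Password = "myPassword",                           // placeholder password
    ApplicationIntent = ApplicationIntent.ReadOnly,
    Encrypt = true
};

using var connection = new SqlConnection(builder.ConnectionString);
connection.Open();  // routed to a secondary replica if one is available
```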
- ## Database high availability in Hyperscale
-As in all other service tiers, Hyperscale guarantees data durability for committed transactions regardless of compute replica availability. The extent of downtime due to the primary replica becoming unavailable depends on the type of failover (planned vs. unplanned), and on the presence of at least one secondary replica. In a planned failover (i.e. a maintenance event), the system either creates the new primary replica before initiating a failover, or uses an existing secondary replica as the failover target. In an unplanned failover (i.e. a hardware failure on the primary replica), the system uses a secondary replica as a failover target if one exists, or creates a new primary replica from the pool of available compute capacity. In the latter case, downtime duration is longer due to extra steps required to create the new primary replica.
+As in all other service tiers, Hyperscale guarantees data durability for committed transactions regardless of compute replica availability. The extent of downtime due to the primary replica becoming unavailable depends on the type of failover (planned vs. unplanned), and on the presence of at least one high-availability replica. In a planned failover (i.e. a maintenance event), the system either creates the new primary replica before initiating a failover, or uses an existing high-availability replica as the failover target. In an unplanned failover (i.e. a hardware failure on the primary replica), the system uses a high-availability replica as a failover target if one exists, or creates a new primary replica from the pool of available compute capacity. In the latter case, downtime duration is longer due to extra steps required to create the new primary replica.
For Hyperscale SLA, see [SLA for Azure SQL Database](https://azure.microsoft.com/support/legal/sla/sql-database/).
If you need to restore a Hyperscale database in Azure SQL Database to a region o
The Azure SQL Database Hyperscale tier is available in all regions but enabled by default in the regions listed below. If you want to create a Hyperscale database in a region where Hyperscale is not enabled by default, you can send an onboarding request via the Azure portal. For instructions, see [Request quota increases for Azure SQL Database](quota-increase-request.md). When submitting your request, use the following guidelines: - Use the [Region access](quota-increase-request.md#region) SQL Database quota type.-- In the description, add the compute SKU/total cores including readable replicas, and indicate that you are requesting Hyperscale capacity.
+- In the description, add the compute SKU/total cores including high-availability and named replicas, and indicate that you are requesting Hyperscale capacity.
- Also specify a projection of the total size of all databases over time in TB. Enabled Regions:
azure-sql Service Tiers General Purpose Business Critical https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tiers-general-purpose-business-critical.md
The following table describes the key differences between service tiers for the
| | SQL Managed Instance | [24 GB per vCore](../managed-instance/resource-limits.md#service-tier-characteristics) | N/A | Up to 4 TB - [limited by storage size](../managed-instance/resource-limits.md#service-tier-characteristics) |
| **Log write throughput** | SQL Database | [1.875 MB/s per vCore (max 30 MB/s)](resource-limits-vcore-single-databases.md#general-purposeprovisioned-computegen4) | 100 MB/s | [6 MB/s per vCore (max 96 MB/s)](resource-limits-vcore-single-databases.md#business-criticalprovisioned-computegen4) |
| | SQL Managed Instance | [3 MB/s per vCore (max 22 MB/s)](../managed-instance/resource-limits.md#service-tier-characteristics) | N/A | [4 MB/s per vCore (max 48 MB/s)](../managed-instance/resource-limits.md#service-tier-characteristics) |
-|**Availability**|All| 99.99% | [99.95% with one secondary replica, 99.99% with more replicas](service-tier-hyperscale-frequently-asked-questions-faq.yml#what-slas-are-provided-for-a-hyperscale-database) | 99.99% <br/> [99.995% with zone redundant single database](https://azure.microsoft.com/blog/understanding-and-leveraging-azure-sql-database-sla/) |
+|**Availability**|All| 99.99% | [99.95% with one secondary replica, 99.99% with more replicas](service-tier-hyperscale-frequently-asked-questions-faq.yml#what-slas-are-provided-for-a-hyperscale-database-) | 99.99% <br/> [99.995% with zone redundant single database](https://azure.microsoft.com/blog/understanding-and-leveraging-azure-sql-database-sla/) |
|**Backups**|All|RA-GRS, 7-35 days (7 days by default). Maximum retention for Basic tier is 7 days. | RA-GRS, 7 days, constant time point-in-time recovery (PITR) | RA-GRS, 7-35 days (7 days by default) |
|**In-memory OLTP** | | N/A | N/A | Available |
|**Read-only replicas**| | 0 built-in <br> 0 - 4 using [geo-replication](active-geo-replication-overview.md) | 0 - 4 built-in | 1 built-in, included in price <br> 0 - 4 using [geo-replication](active-geo-replication-overview.md) |
azure-sql Auditing Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/auditing-configure.md
f1_keywords:
Previously updated : 05/26/2020 Last updated : 06/21/2021 # Get started with Azure SQL Managed Instance auditing [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
Last updated 05/26/2020
- Helps you maintain regulatory compliance, understand database activity, and gain insight into discrepancies and anomalies that could indicate business concerns or suspected security violations. - Enables and facilitates adherence to compliance standards, although it doesn't guarantee compliance. For more information about Azure programs that support standards compliance, see the [Azure Trust Center](https://gallery.technet.microsoft.com/Overview-of-Azure-c1be3942), where you can find the most current list of compliance certifications.
+> [!IMPORTANT]
+> Auditing for Azure SQL Database, Azure Synapse and Azure SQL Managed Instance is optimized for availability and performance. During very high activity or high network load, Azure SQL Database, Azure Synapse and Azure SQL Managed Instance allow operations to proceed and may not record some audited events.
+ ## Set up auditing for your server to Azure Storage The following section describes the configuration of auditing on your managed instance.
azure-vmware Production Ready Deployment Steps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/production-ready-deployment-steps.md
Title: Plan the Azure VMware Solution deployment
description: This article outlines an Azure VMware Solution deployment workflow. The final result is an environment ready for virtual machine (VM) creation and migration. Previously updated : 05/13/2021 Last updated : 06/21/2021 # Plan the Azure VMware Solution deployment
The steps outlined give you a production-ready environment for creating virtual
## Request a host quota
-It's important to request a host quota early as you prepare to create your Azure VMware Solution resource. You can request a host quota now, so when the planning process is finished, you're ready to deploy the Azure VMware Solution private cloud. After the support team receives your request for a host quota, it takes up to five business days to confirm your request and allocate your hosts. If you have an existing Azure VMware Solution private cloud and want more hosts allocated, you complete the same process. For more information, see the following links, depending on the type of subscription you have:
+It's important to request a host quota early, so when the planning process is finished, you're ready to deploy your Azure VMware Solution private cloud.
+ - [EA customers](request-host-quota-azure-vmware-solution.md#request-host-quota-for-ea-customers) - [CSP customers](request-host-quota-azure-vmware-solution.md#request-host-quota-for-csp-customers)
+After the support team receives your request for a host quota, it takes up to five business days to confirm your request and allocate your hosts.
++ ## Identify the subscription Identify the subscription you plan to use to deploy Azure VMware Solution. You can either create a new subscription or reuse an existing one. >[!NOTE]
->The subscription must be associated with a Microsoft Enterprise Agreement or a Cloud Solution Provider Azure plan. For more information, see [How to enable Azure VMware Solution resource](deploy-azure-vmware-solution.md#step-1-register-the-microsoftavs-resource-provider).
+>The subscription must be associated with a Microsoft Enterprise Agreement or a Cloud Solution Provider Azure plan. For more information, see [Eligibility criteria](request-host-quota-azure-vmware-solution.md#eligibility-criteria).
## Identify the resource group
azure-vmware Tutorial Expressroute Global Reach Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-expressroute-global-reach-private-cloud.md
Title: Peer on-premises environments to Azure VMware Solution
description: Learn how to create ExpressRoute Global Reach peering to a private cloud in Azure VMware Solution. Previously updated : 05/14/2021 Last updated : 06/21/2021 # Peer on-premises environments to Azure VMware Solution In this step of the quick start, you'll connect Azure VMware Solution to your on-premises environment. ExpressRoute Global Reach connects your on-premises environment to your Azure VMware Solution private cloud. The ExpressRoute Global Reach connection is established between the private cloud ExpressRoute circuit and an existing ExpressRoute connection to your on-premises environments. -
->[!NOTE]
->You can connect through VPN, but that's out of scope for this quick start document.
-
-This tutorial results in a connection as shown in the diagram.
- :::image type="content" source="media/pre-deployment/azure-vmware-solution-on-premises-diagram.png" alt-text="Diagram showing ExpressRoute Global Reach on-premises network connectivity." lightbox="media/pre-deployment/azure-vmware-solution-on-premises-diagram.png" border="false":::
+>[!NOTE]
+>You can connect through VPN, but that's out of scope for this quick start guide.
-## Before you begin
-
-Before you enable connectivity between two ExpressRoute circuits using ExpressRoute Global Reach, review the documentation on how to [enable connectivity in different Azure subscriptions](../expressroute/expressroute-howto-set-global-reach-cli.md#enable-connectivity-between-expressroute-circuits-in-different-azure-subscriptions).
## Prerequisites+
+- Review the documentation on how to [enable connectivity in different Azure subscriptions](../expressroute/expressroute-howto-set-global-reach-cli.md#enable-connectivity-between-expressroute-circuits-in-different-azure-subscriptions).
- A separate, functioning ExpressRoute circuit used to connect on-premises environments to Azure, which is _circuit 1_ for peering. - Ensure that all gateways, including the ExpressRoute provider's service, supports 4-byte Autonomous System Number (ASN). Azure VMware Solution uses 4-byte public ASNs for advertising routes.
cdn Cdn Map Content To Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-map-content-to-custom-domain.md
This tutorial shows how to add a custom domain to an Azure Content Delivery Netw
The endpoint name in your CDN profile is a subdomain of azureedge.net. By default when delivering content, the CDN profile domain is included within the URL.
-For example, **https://contoso.azureedge.net/photo.png**.
+For example, `https://contoso.azureedge.net/photo.png`.
Azure CDN provides the option of associating a custom domain with a CDN endpoint. This option delivers content with a custom domain in your URL instead of the default domain.
cognitive-services Luis Concept Devops Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-concept-devops-automation.md
In your source code management (SCM) system, configure automated build pipelines
The **CI/CD workflow** combines two complementary development processes:
-* [Continuous Integration](/azure/devops/learn/what-is-continuous-integration) (CI) is the engineering practice of frequently committing code in a shared repository, and performing an automated build on it. Paired with an automated [testing](luis-concept-devops-testing.md) approach, continuous integration allows us to verify that for each update, the LUDown source is still valid and can be imported into a LUIS app, but also that it passes a group of tests that verify the trained app can recognize the intents and entities required for your solution.
+* [Continuous Integration](/devops/develop/what-is-continuous-integration) (CI) is the engineering practice of frequently committing code in a shared repository, and performing an automated build on it. Paired with an automated [testing](luis-concept-devops-testing.md) approach, continuous integration allows us to verify that for each update, the LUDown source is still valid and can be imported into a LUIS app, but also that it passes a group of tests that verify the trained app can recognize the intents and entities required for your solution.
-* [Continuous Delivery](/azure/devops/learn/what-is-continuous-delivery) (CD) takes the Continuous Integration concept further to automatically deploy the application to an environment where you can do more in-depth testing. CD enables us to learn early about any unforeseen issues that arise from our changes as quickly as possible, and also to learn about gaps in our test coverage.
+* [Continuous Delivery](/devops/deliver/what-is-continuous-delivery) (CD) takes the Continuous Integration concept further to automatically deploy the application to an environment where you can do more in-depth testing. CD enables us to learn early about any unforeseen issues that arise from our changes as quickly as possible, and also to learn about gaps in our test coverage.
The goal of continuous integration and continuous delivery is to ensure that "main is always shippable". For a LUIS app, this means that we could, if we needed to, take any version from the main branch LUIS app and ship it to production.
The [LUIS DevOps template repo](https://github.com/Azure-Samples/LUIS-DevOps-Tem
* Use the [LUIS DevOps template repo](https://github.com/Azure-Samples/LUIS-DevOps-Template) to apply DevOps with your own project. * [Source control and branch strategies for LUIS](luis-concept-devops-sourcecontrol.md)
-* [Testing for LUIS DevOps](luis-concept-devops-testing.md)
+* [Testing for LUIS DevOps](luis-concept-devops-testing.md)
cognitive-services Luis Concept Devops Sourcecontrol https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-concept-devops-sourcecontrol.md
A LUIS app in LUDown format is human readable, which supports the communication
## Versioning
-An application consists of multiple components that might include things such as a bot running in [Azure Bot Service](/azure/bot-service/bot-service-overview-introduction), [QnA Maker](https://www.qnamaker.ai/), [Azure Speech service](../speech-service/overview.md), and more. To achieve the goal of loosely coupled applications, use [version control](/azure/devops/learn/git/what-is-version-control) so that each component of an application is versioned independently, allowing developers to detect breaking changes or updates just by looking at the version number. It's easier to version your LUIS app independently from other components if you maintain it in its own repo.
+An application consists of multiple components that might include things such as a bot running in [Azure Bot Service](/azure/bot-service/bot-service-overview-introduction), [QnA Maker](https://www.qnamaker.ai/), [Azure Speech service](../speech-service/overview.md), and more. To achieve the goal of loosely coupled applications, use [version control](/devops/develop/git/what-is-version-control) so that each component of an application is versioned independently, allowing developers to detect breaking changes or updates just by looking at the version number. It's easier to version your LUIS app independently from other components if you maintain it in its own repo.
The LUIS app for the main branch should have a versioning scheme applied. When you merge updates to the `.lu` for a LUIS app into main, you'll then import that updated source into a new version in the LUIS app for the main branch.
cognitive-services Translator How To Install Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/containers/translator-how-to-install-container.md
There are several ways to validate that the container is running:
#### English &leftrightarrow; German
-Navigate to the swagger page: <http://localhost:5000/swagger/index.html>
+Navigate to the swagger page: `http://localhost:5000/swagger/index.html`
1. Select **POST /translate** 1. Select **Try it out**
cognitive-services Get Started With Form Recognizer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/quickstarts/get-started-with-form-recognizer.md
Extract text, tables, selection marks and structure from a document.
:::image type="content" source="../media/label-tool/layout-2.jpg" alt-text="Connection settings of Layout Form Recognizer tool.":::
-5. Select source url, paste the following url of the sample document https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/layout-page-001.jpg click the Fetch button.
+5. Select source URL, paste the following URL of the sample document `https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/layout-page-001.jpg`, and click the Fetch button.
1. Click "Run Layout" The Form Recognizer sample labeling tool will call the Analyze Layout API and analyze the document.
Extract text, tables and key value pairs from invoices, sales receipts, identity
4. Choose the file you would like to analyze from the below options: * A URL for an image of an invoice. You can use a [sample invoice document](https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/samples/sample_forms/forms/Invoice_1.pdf) for this quickstart.
- * A URL for an image of a receipt. You can use a [sample ID document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/id-us-driver-license-wa.jpg) for this quickstart.
+ * A URL for an image of an ID document. You can use a [sample ID document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/id-license.jpg) for this quickstart.
* A URL for an image of a receipt. You can use a [sample receipt image](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/contoso-allinone.jpg) for this quickstart. * A URL for an image of a business card. You can use a [sample business card image](https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/samples/sample_forms/business_cards/business-card-english.jpg) for this quickstart.
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/whats-new.md
NuGet package version 3.1.0-beta.4
* **New methods to analyze data from identity documents**:
- **[StartRecognizeIdDocumentsFromUriAsync](/dotnet/api/azure.ai.formrecognizer.formrecognizerclient.startrecognizeiddocumentsasync?view=azure-dotnet-preview&preserve-view=true)**
+ **[StartRecognizeIdDocumentsFromUriAsync]**
**[StartRecognizeIdDocumentsAsync](/dotnet/api/azure.ai.formrecognizer.formrecognizerclient.startrecognizeiddocumentsasync?view=azure-dotnet-preview&preserve-view=true)**
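As a sketch of how the new .NET methods might have been called at the time of this preview (the endpoint, key, and document URL are placeholders, and preview method and type names changed in later releases):

```csharp
using System;
using Azure;
using Azure.AI.FormRecognizer;
using Azure.AI.FormRecognizer.Models;

// Placeholder endpoint and key - substitute your Form Recognizer resource values.
var client = new FormRecognizerClient(
    new Uri("https://<your-resource>.cognitiveservices.azure.com/"),
    new AzureKeyCredential("<your-key>"));

var idDocumentUri = new Uri("https://<sample-id-document-url>");

// Start the long-running recognition operation and wait for the result.
var operation = await client.StartRecognizeIdDocumentsFromUriAsync(idDocumentUri);
Response<RecognizedFormCollection> response = await operation.WaitForCompletionAsync();

foreach (RecognizedForm form in response.Value)
{
    Console.WriteLine($"Recognized form type: {form.FormType}");
}
```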
Maven artifact package dependency version 3.1.0-beta.3
* **New methods to analyze data from identity documents**:
- **[beginRecognizeIdDocumentsFromUrl](/java/api/com.azure.ai.formrecognizer.formrecognizerclient.beginrecognizeiddocumentsfromurl?view=azure-java-preview&preserve-view=true)**
+ **[beginRecognizeIdDocumentsFromUrl]**
- **[beginRecognizeIdDocuments](/java/api/com.azure.ai.formrecognizer.formrecognizerclient.beginrecognizeiddocuments?view=azure-java-preview&preserve-view=true)**
+ **[beginRecognizeIdDocuments]**
For a list of field values, _see_ [Fields extracted](concept-identification-cards.md#fields-extracted) in our Form Recognizer documentation.
npm package version 3.1.0-beta.3
* New option `pages` supported by all form recognition methods (custom forms and all prebuilt models). The argument allows you to select individual or a range of pages for multi-page PDF and TIFF documents. For individual pages, enter the page number, for example, `3`. For a range of pages (like page 2 and pages 5-7) enter the page numbers and ranges separated by commas: `2, 5-7`.
-* Added support for a **[ReadingOrder](/javascript/api/@azure/ai-form-recognizer/readingorder?view=azure-node-preview&preserve-view=true)** type to the content recognition methods. This option enables you to control the algorithm that the service uses to determine how recognized lines of text should be ordered. You can specify which reading order algorithm, `basic` or `natural`, should be applied to order the extraction of text elements. If not specified, the default value is `basic`.
+* Added support for a **[ReadingOrder](/javascript/api/@azure/ai-form-recognizer/formreadingorder?view=azure-node-latest)** type to the content recognition methods. This option enables you to control the algorithm that the service uses to determine how recognized lines of text should be ordered. You can specify which reading order algorithm, `basic` or `natural`, should be applied to order the extraction of text elements. If not specified, the default value is `basic`.
* Split **[FormField](/javascript/api/@azure/ai-form-recognizer/formfield?view=azure-node-preview&preserve-view=true)** type into several different interfaces. This update should not cause any API compatibility issues except in certain edge cases (undefined valueType).
cognitive-services Text Analytics How To Entity Linking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/how-tos/text-analytics-how-to-entity-linking.md
Starting in `v3.1-preview.5`, The JSON response includes a `redactedText` proper
The API will attempt to detect the [listed entity categories](../named-entity-types.md?tabs=personal) for a given document language. If you want to specify which entities will be detected and returned, use the optional `piiCategories` parameter with the appropriate entity categories. This parameter can also let you detect entities that aren't enabled by default for your document language. The following example would detect a French driver's license number that might occur in English text, along with the default English entities. > [!TIP]
-> If you don't include `default` when specifying entity categories, The API will only return the entity cateogires you specify.
+> If you don't include `default` when specifying entity categories, the API will only return the entity categories you specify.
`https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1-preview.5/entities/recognition/pii?piiCategories=default,FRDriversLicenseNumber`
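As a hedged C# sketch of calling this endpoint with `HttpClient` (the subscription key header and `documents` body shape follow the Text Analytics REST conventions; the key and sample text are placeholders):

```csharp
using System;
using System.Net.Http;
using System.Text;

using var client = new HttpClient();
client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<your-key>"); // placeholder key

// Request the default categories plus the French driver's license category.
var endpoint = "https://<your-custom-subdomain>.cognitiveservices.azure.com" +
    "/text/analytics/v3.1-preview.5/entities/recognition/pii?piiCategories=default,FRDriversLicenseNumber";

var body = new StringContent(
    "{\"documents\":[{\"id\":\"1\",\"language\":\"en\",\"text\":\"<sample text>\"}]}",
    Encoding.UTF8, "application/json");

HttpResponseMessage response = await client.PostAsync(endpoint, body);
Console.WriteLine(await response.Content.ReadAsStringAsync());
```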
cognitive-services Tutorial Power Bi Key Phrases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/tutorials/tutorial-power-bi-key-phrases.md
In this tutorial, you'll learn how to:
To get started, open Power BI Desktop and load the comma-separated value (CSV) file `FabrikamComments.csv` that you downloaded in [Prerequisites](#Prerequisites). This file represents a day's worth of hypothetical activity in a fictional small company's support forum. > [!NOTE]
-> Power BI can use data from a wide variety of sources, such as Facebook or a SQL database. Learn more at [Facebook integration with Power BI](https://powerbi.microsoft.com/integrations/facebook/) and [SQL Server integration with Power BI](https://powerbi.microsoft.com/integrations/sql-server/).
+> Power BI can use data from a wide variety of web-based sources, such as SQL databases. See the [Power Query documentation](/power-query/connectors/) for more information.
In the main Power BI Desktop window, select the **Home** ribbon. In the **External data** group of the ribbon, open the **Get Data** drop-down menu and select **Text/CSV**.
communication-services Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/access-tokens.md
Deleted the identity with ID: 8:acs:4ccc92c8-9815-4422-bddc-ceea181dc774_0000000
If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](./create-communication-resource.md#clean-up-resources). - ## Next Steps In this quickstart, you learned how to:
communication-services Manage Teams Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/manage-teams-identity.md
The Administrator role has extended permissions in AAD. Members of this role can
Users must be authenticated against AAD applications with Azure Communication Service's `VoIP` permission. If you don't have an existing application that you would like to use for this quickstart, you can create a new application registration. The following application settings influence the experience:-- Property *Supported account types* defines whether the *Application* is single tenant ("Accounts in this organizational directory only") or multitenant ("Accounts in any organizational directory"). For this scenario, you can use multitenant.
+- Property *Supported account types* defines whether the *Application* is single tenant ("Accounts in this organizational directory only") or multi-tenant ("Accounts in any organizational directory"). For this scenario, you can use multi-tenant.
- *Redirect URI* defines the URI where the authentication request is redirected after authentication. For this scenario, you can use "Public client/native(mobile & desktop)" and fill in "http://localhost" as the URI.
-[Here you can find detailed documentation.](https://docs.microsoft.com/azure/active-directory/develop/quickstart-register-app#register-an-application).
+[Here you can find detailed documentation.](/azure/active-directory/develop/quickstart-register-app#register-an-application).
When the *Application* is registered, you'll see an identifier in the overview. This identifier will be used in the following steps: **Application (client) ID**.
When the *Application* is registered, you'll see an identifier in the overview.
In the *Authentication* pane of your *Application*, you can see the configured platform for *Public client/native(mobile & desktop)* with a Redirect URI pointing to *localhost*. At the bottom of the screen, you can find the toggle *Allow public client flows*, which for this quickstart will be set to **Yes**. ### 3. Verify application (Optional)
-In the *Branding* pane, you can verify your platform within Microsoft identity platform. This one time process will remove requirement for Fabrikam's admin to give admin consent to this application. You can find details on how to verify your application [here](https://docs.microsoft.com/azure/active-directory/develop/howto-configure-publisher-domain).
+In the *Branding* pane, you can verify your platform within the Microsoft identity platform. This one-time process removes the requirement for Fabrikam's admin to give admin consent to this application. You can find details on how to verify your application [here](/azure/active-directory/develop/howto-configure-publisher-domain).
### 4. Define Azure Communication Services' VoIP permission in application
Contoso's developer needs to set up *Client application* for authentication of u
Microsoft Authentication Library (MSAL) enables developers to acquire AAD user tokens from the Microsoft identity platform endpoint to authenticate users and access secure web APIs. It can be used to provide secure access to Azure Communication Services. MSAL supports many different application architectures and platforms including .NET, JavaScript, Java, Python, Android, and iOS.
-You can find more details how to set up different environments in public documentation. [Microsoft Authentication Library (MSAL) overview](https://docs.microsoft.com/azure/active-directory/develop/msal-overview).
+You can find more details on how to set up different environments in the public documentation: [Microsoft Authentication Library (MSAL) overview](/azure/active-directory/develop/msal-overview).
> [!NOTE] > The following sections describe how to exchange the AAD access token for a Teams access token in a .NET console application.
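As a sketch of the MSAL step in .NET - assuming the `Microsoft.Identity.Client` package, placeholder client and tenant IDs, and the Communication Services VoIP scope (treat the scope string as an assumption) - acquiring the AAD user token that the `ExchangeTeamsToken` call below consumes might look like this:

```csharp
using System;
using Microsoft.Identity.Client;

const string clientId = "<application-client-id>"; // placeholder
const string tenantId = "<tenant-id>";             // placeholder

IPublicClientApplication app = PublicClientApplicationBuilder
    .Create(clientId)
    .WithAuthority(AzureCloudInstance.AzurePublic, tenantId)
    .WithRedirectUri("http://localhost")  // matches the registered redirect URI
    .Build();

// Assumed scope string for the Communication Services VoIP permission.
string[] scopes = { "https://auth.msft.communication.azure.com/VoIP" };

AuthenticationResult aadUserToken = await app.AcquireTokenInteractive(scopes).ExecuteAsync();
Console.WriteLine($"AAD access token expires on: {aadUserToken.ExpiresOn}");
```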
var teamsAccessToken = identityClient.ExchangeTeamsToken(aadUserToken.AccessToke
Console.WriteLine("\nTeams access token expires on: " + teamsAccessToken.Value.ExpiresOn); ```
-If all conditions defined in the prerequirements are met, then you would get valid Teams access token valid for 24 hours.
+If all conditions defined in the requirements are met, you get a Teams access token that's valid for 24 hours.
#### Run the code Run the application from your application directory with the dotnet run command.
User represents the Fabrikam's users of Contoso's *Application*. User experience
With a valid Teams access token in the *Client application*, the developer can integrate the ACS Calling SDK and build a custom Teams endpoint. - ## Next steps In this quickstart, you learned how to:
communication-services Managed Identity From Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/managed-identity-from-cli.md
The Azure Identity SDK reads values from three environment variables at runtime
Once these variables have been set, you should be able to use the DefaultAzureCredential object in your code to authenticate to the service client of your choice. - ## Next steps > [!div class="nextstepaction"]
communication-services Handle Sms Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/telephony-sms/handle-sms-events.md
In this quickstart, you learned how to consume SMS events. You can receive SMS m
You may also want to: - - [Learn about event handling concepts](../../../event-grid/event-schema-communication-services.md) - [Learn about Event Grid](../../../event-grid/overview.md)
communication-services Download Recording File Sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/download-recording-file-sample.md
First, we'll create a webhook. Your Communication Services resource will use Eve
You can write your own custom webhook to receive these event notifications. It's important for this webhook to respond to inbound messages with the validation code to successfully subscribe the webhook to the event service.
-```
+```csharp
[HttpPost] public async Task<ActionResult> PostAsync([FromBody] object request) {
public async Task<ActionResult> PostAsync([FromBody] object request)
} ``` - The above code depends on the `Microsoft.Azure.EventGrid` NuGet package. To learn more about Event Grid endpoint validation, visit the [endpoint validation documentation](../../../event-grid/receive-events.md#endpoint-validation) We'll then subscribe this webhook to the `recording` event:
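To make the validation handshake described above concrete, here's a minimal sketch of a controller action that answers the subscription-validation event using the deserializer helpers in the `Microsoft.Azure.EventGrid` package; the controller name is a placeholder.

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.EventGrid;
using Microsoft.Azure.EventGrid.Models;

public class RecordingEventsController : ControllerBase  // placeholder name
{
    [HttpPost]
    public IActionResult Post([FromBody] object request)
    {
        // Deserialize the raw payload into typed Event Grid events.
        var subscriber = new EventGridSubscriber();
        EventGridEvent[] events = subscriber.DeserializeEventGridEvents(request.ToString());

        foreach (EventGridEvent eventGridEvent in events)
        {
            // Echo the validation code back to complete the handshake.
            if (eventGridEvent.Data is SubscriptionValidationEventData validationData)
            {
                return Ok(new SubscriptionValidationResponse
                {
                    ValidationResponse = validationData.ValidationCode
                });
            }
        }

        return Ok();
    }
}
```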
Your webhook will now be notified whenever your Communication Services resource
## Notification schema When the recording is available to download, your Communication Services resource will emit a notification with the following event schema. The document IDs for the recording can be fetched from the `documentId` fields of each `recordingChunk`.
-```
+```json
{ "id": string, // Unique guid for event "topic": string, // Azure Communication Services resource id
To download recorded media and metadata, use HMAC authentication to authenticate
Create an `HttpClient` and add the necessary headers using the `HmacAuthenticationUtils` provided below:
-```
+```csharp
var client = new HttpClient(); // Set Http Method
Create an `HttpClient` and add the necessary headers using the `HmacAuthenticati
// Hash the content of the request. var contentHashed = HmacAuthenticationUtils.CreateContentHash(serializedPayload);
- // Add HAMC headers.
+ // Add HMAC headers.
HmacAuthenticationUtils.AddHmacHeaders(request, contentHashed, accessKey, method); // Make a request to the Azure Communication Services APIs mentioned above
The below utilities can be used to manage your HMAC workflow.
**Create content hash**
-```
+```csharp
public static string CreateContentHash(string content) { var alg = SHA256.Create();
public static string CreateContentHash(string content)
**Add HMAC headers**
-```
+```csharp
public static void AddHmacHeaders(HttpRequestMessage requestMessage, string contentHash, string accessKey) { var utcNowString = DateTimeOffset.UtcNow.ToString("r", CultureInfo.InvariantCulture);
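Both utility bodies above are truncated here. As a hedged, self-contained sketch of what the pair might look like - the string-to-sign layout and the `x-ms-date`/`x-ms-content-sha256` signed headers follow the Communication Services HMAC-SHA256 convention, but treat the details as assumptions rather than the authoritative implementation:

```csharp
using System;
using System.Globalization;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Security.Cryptography;
using System.Text;

public static class HmacAuthenticationUtils
{
    public static string CreateContentHash(string content)
    {
        // SHA-256 hash of the request body, base64-encoded.
        using var alg = SHA256.Create();
        return Convert.ToBase64String(alg.ComputeHash(Encoding.UTF8.GetBytes(content)));
    }

    public static void AddHmacHeaders(HttpRequestMessage requestMessage, string contentHash,
        string accessKey, HttpMethod method)
    {
        var utcNowString = DateTimeOffset.UtcNow.ToString("r", CultureInfo.InvariantCulture);
        var uri = requestMessage.RequestUri;

        // Assumed string-to-sign: VERB, path+query, then date;host;content-hash.
        string stringToSign = $"{method.Method}\n{uri.PathAndQuery}\n{utcNowString};{uri.Authority};{contentHash}";

        using var hmac = new HMACSHA256(Convert.FromBase64String(accessKey));
        string signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));

        requestMessage.Headers.Add("x-ms-date", utcNowString);
        requestMessage.Headers.Add("x-ms-content-sha256", contentHash);
        requestMessage.Headers.Authorization = new AuthenticationHeaderValue(
            "HMAC-SHA256", $"SignedHeaders=x-ms-date;host;x-ms-content-sha256&Signature={signature}");
    }
}
```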
For more information, see the following articles:
- Check out our [web calling sample](../../samples/web-calling-sample.md) - Learn about [Calling SDK capabilities](./calling-client-samples.md?pivots=platform-web)-- Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)
+- Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)
communication-services Trusted Service Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/tutorials/trusted-service-tutorial.md
Title: Build a trusted authentication service using Azure Functions in Azure Communication Services
+ Title: Build a trusted user access service using Azure Functions in Azure Communication Services
-description: Learn how to create a trusted authentication service for Communication services with Azure Functions
--
+description: Learn how to create a trusted user access service for Communication services with Azure Functions
++
-# Build a trusted authentication service using Azure Functions
+# Build a trusted user access service using Azure Functions
+
+This article describes how to use Azure Functions to build a trusted user access service.
+
+> [!IMPORTANT]
+> The endpoint created at the end of this tutorial isn't secure. Be sure to read about the security details in the [Azure Function Security](https://docs.microsoft.com/azure/azure-functions/security-concepts) article. You need to add security to the endpoint to ensure bad actors can't provision tokens.
[!INCLUDE [Trusted Service JavaScript](./includes/trusted-service-js.md)]
+## Securing Azure Function
+
+As part of setting up a trusted service to provision access tokens for users, we need to take into account the security of that endpoint to make sure no bad actor can randomly create tokens for your service. Azure Functions provide built-in security features that you can use to secure the endpoint using different types of authentication policies. Read more about [Azure Function Security](https://docs.microsoft.com/azure/azure-functions/security-concepts).
+ ## Clean up resources If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. You can find out more about [cleaning up Azure Communication Service resources](../quickstarts/create-communication-resource.md#clean-up-resources) and [cleaning Azure Function Resources](../../azure-functions/create-first-function-vs-code-csharp.md#clean-up-resources). ## Next steps
+> [!div class="nextstepaction"]
+> [Learn about Azure Function Security](https://docs.microsoft.com/azure/azure-functions/security-concepts)
+ > [!div class="nextstepaction"] > [Add voice calling to your app](../quickstarts/voice-video-calling/getting-started-with-calling.md)
connectors Apis List https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/apis-list.md
The following table includes known issues for Logic Apps connectors.
## Next steps > [!div class="nextstepaction"]
-> [Create custom APIs you can call from Logic Apps](/logic-apps/logic-apps-create-api-app)
+> [Create custom APIs you can call from Logic Apps](../logic-apps/logic-apps-create-api-app.md)
cosmos-db Change Feed Pull Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/change-feed-pull-model.md
ms.devlang: dotnet Previously updated : 03/10/2021 Last updated : 06/04/2021
With the change feed pull model, you can consume the Azure Cosmos DB change feed at your own pace. As you can already do with the [change feed processor](change-feed-processor.md), you can use the change feed pull model to parallelize the processing of changes across multiple change feed consumers.
-> [!NOTE]
-> The change feed pull model is currently in [preview in the Azure Cosmos DB .NET SDK](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.17.0-preview) only. The preview is not yet available for other SDK versions.
- ## Comparing with change feed processor Many scenarios can process the change feed using either the [change feed processor](change-feed-processor.md) or the pull model. The pull model's continuation tokens and the change feed processor's lease container are both "bookmarks" for the last processed item (or batch of items) in the change feed.
Here are some key differences between the change feed processor and pull model:
| Ability to replay past changes | Yes, with push model | Yes, with pull model|
| Polling for future changes | Automatically checks for changes based on user-specified `WithPollInterval` | Manual |
| Behavior where there are no new changes | Automatically wait `WithPollInterval` and recheck | Must catch exception and manually recheck |
-| Process changes from entire container | Yes, and automatically parallelized across multiple threads/machine consuming from the same container| Yes, and manually parallelized using FeedTokens |
+| Process changes from entire container | Yes, and automatically parallelized across multiple threads/machines consuming from the same container| Yes, and manually parallelized using FeedRange |
| Process changes from just a single partition key | Not supported | Yes|
-| Support level | Generally available | Preview |
> [!NOTE] > Unlike when reading using the change feed processor, you must explicitly handle cases where there are no new changes.
FeedIterator iteratorForTheEntireContainer = container.GetChangeFeedStreamIterat
while (iteratorForTheEntireContainer.HasMoreResults) {
- try {
- FeedResponse<User> users = await iteratorForTheEntireContainer.ReadNextAsync();
+ FeedResponse<User> users = await iteratorForTheEntireContainer.ReadNextAsync();
- foreach (User user in users)
- {
- Console.WriteLine($"Detected change for user with id {user.id}");
- }
- }
- catch {
+ if (users.StatusCode == HttpStatusCode.NotModified)
+ {
Console.WriteLine($"No new changes"); await Task.Delay(TimeSpan.FromSeconds(5)); }
+ else
+ {
+ foreach (User user in users)
+ {
+ Console.WriteLine($"Detected change for user with id {user.id}");
+ }
+ }
} ```
FeedIterator<User> iteratorForPartitionKey = container.GetChangeFeedIterator<Use
while (iteratorForPartitionKey.HasMoreResults) {
- try {
- FeedResponse<User> users = await iteratorForThePartitionKey.ReadNextAsync();
+ FeedResponse<User> users = await iteratorForPartitionKey.ReadNextAsync();
- foreach (User user in users)
- {
- Console.WriteLine($"Detected change for user with id {user.id}");
- }
- }
- catch (CosmosException exception) when (exception.StatusCode == System.Net.HttpStatusCode.NotModified)
+ if (users.StatusCode == HttpStatusCode.NotModified)
{ Console.WriteLine($"No new changes"); await Task.Delay(TimeSpan.FromSeconds(5)); }
+ else
+ {
+ foreach (User user in users)
+ {
+ Console.WriteLine($"Detected change for user with id {user.id}");
+ }
+ }
} ```
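The two machine examples below read from `ranges[0]` and `ranges[1]`. Obtaining that list is a single call in the v3 .NET SDK (a sketch, assuming `container` is your `Container` instance):

```csharp
// Obtain the container's feed ranges so each machine can own one range.
IReadOnlyList<FeedRange> ranges = await container.GetFeedRangesAsync();
```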
Machine 1:
FeedIterator<User> iteratorA = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Beginning(ranges[0]), ChangeFeedMode.Incremental); while (iteratorA.HasMoreResults) {
- try {
- FeedResponse<User> users = await iteratorA.ReadNextAsync();
+ FeedResponse<User> users = await iteratorA.ReadNextAsync();
- foreach (User user in users)
- {
- Console.WriteLine($"Detected change for user with id {user.id}");
- }
- }
- catch (CosmosException exception) when (exception.StatusCode == System.Net.HttpStatusCode.NotModified)
+ if (users.StatusCode == HttpStatusCode.NotModified)
{ Console.WriteLine($"No new changes"); await Task.Delay(TimeSpan.FromSeconds(5)); }
+ else
+ {
+ foreach (User user in users)
+ {
+ Console.WriteLine($"Detected change for user with id {user.id}");
+ }
+ }
} ```
Machine 2:
FeedIterator<User> iteratorB = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Beginning(ranges[1]), ChangeFeedMode.Incremental); while (iteratorB.HasMoreResults) {
- try {
- FeedResponse<User> users = await iteratorA.ReadNextAsync();
+ FeedResponse<User> users = await iteratorB.ReadNextAsync();
- foreach (User user in users)
- {
- Console.WriteLine($"Detected change for user with id {user.id}");
- }
- }
- catch (CosmosException exception) when (exception.StatusCode == System.Net.HttpStatusCode.NotModified)
+ if (users.StatusCode == HttpStatusCode.NotModified)
{ Console.WriteLine($"No new changes"); await Task.Delay(TimeSpan.FromSeconds(5)); }
+ else
+ {
+ foreach (User user in users)
+ {
+ Console.WriteLine($"Detected change for user with id {user.id}");
+ }
+ }
} ``` ## Saving continuation tokens
-You can save the position of your `FeedIterator` by creating a continuation token. A continuation token is a string value that keeps of track of your FeedIterator's last processed changes. This allows the `FeedIterator` to resume at this point later. The following code will read through the change feed since container creation. After no more changes are available, it will persist a continuation token so that change feed consumption can be later resumed.
+You can save the position of your `FeedIterator` by obtaining the continuation token. A continuation token is a string value that keeps track of your FeedIterator's last processed changes. This allows the `FeedIterator` to resume at this point later. The following code will read through the change feed since container creation. After no more changes are available, it will persist a continuation token so that change feed consumption can be later resumed.
```csharp FeedIterator<User> iterator = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Beginning(), ChangeFeedMode.Incremental);
string continuation = null;
while (iterator.HasMoreResults) {
- try {
- FeedResponse<User> users = await iterator.ReadNextAsync();
- continuation = users.ContinuationToken;
+ FeedResponse<User> users = await iterator.ReadNextAsync();
- foreach (User user in users)
- {
- Console.WriteLine($"Detected change for user with id {user.id}");
- }
- }
- catch (CosmosException exception) when (exception.StatusCode == System.Net.HttpStatusCode.NotModified)
+ if (users.StatusCode == HttpStatusCode.NotModified)
{ Console.WriteLine($"No new changes");
- await Task.Delay(TimeSpan.FromSeconds(5));
- }
+ continuation = users.ContinuationToken;
+ // Stop the consumption since there are no new changes
+ break;
+ }
+ else
+ {
+ foreach (User user in users)
+ {
+ Console.WriteLine($"Detected change for user with id {user.id}");
+ }
+ }
}
-// Some time later
+// Some time later when I want to check changes again
FeedIterator<User> iteratorThatResumesFromLastPoint = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.ContinuationToken(continuation), ChangeFeedMode.Incremental); ```
cosmos-db Cosmosdb Monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cosmosdb-monitor-resource-logs.md
Use the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-se
> If you are using SQL API, we recommend setting the **export-to-resource-specific** property to **true**. ```azurecli-interactive
-az monitor diagnostic-settings create --resource /subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.DocumentDb/databaseAccounts/ --name {DIAGNOSTIC_SETTING_NAME} --export-to-resource-specific true --logs '[{"category": "QueryRuntimeStatistics","categoryGroup": null,"enabled": true,"retentionPolicy": {"enabled": false,"days": 0}}]' --workspace /subscriptions/{SUBSCRIPTION_ID}/resourcegroups/{RESOURCE_GROUP}/providers/microsoft.operationalinsights/workspaces/{WORKSPACE_NAME}"
+az monitor diagnostic-settings create --resource /subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.DocumentDb/databaseAccounts/{RESOURCE_NAME} --name {DIAGNOSTIC_SETTING_NAME} --export-to-resource-specific true --logs '[{"category": "QueryRuntimeStatistics","categoryGroup": null,"enabled": true,"retentionPolicy": {"enabled": false,"days": 0}}]' --workspace /subscriptions/{SUBSCRIPTION_ID}/resourcegroups/{RESOURCE_GROUP}/providers/microsoft.operationalinsights/workspaces/{WORKSPACE_NAME}
``` ## Next steps
az monitor diagnostic-settings create --resource /subscriptions/{SUBSCRIPTION_ID
* For more information on how to query AzureDiagnostics tables see [troubleshooting using AzureDiagnostics tables](cosmosdb-monitor-logs-basic-queries.md#azure-diagnostics-queries).
-* For detailed information about how to create a diagnostic setting by using the Azure portal, CLI, or PowerShell, see [create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) article.
+* For detailed information about how to create a diagnostic setting by using the Azure portal, CLI, or PowerShell, see [create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) article.
cosmos-db Index Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/index-policy.md
A container's indexing policy can be updated at any time [by using the Azure por
There is no impact to write availability during any index transformations. The index transformation uses your provisioned RUs but at a lower priority than your CRUD operations or queries.
-There is no impact to read availability when adding a new index. Queries will only utilize new indexes once the index transformation is complete. During the index transformation, the query engine will continue to use existing indexes, so you'll observe similar read performance during the indexing transformation to what you had observed before initiating the indexing change. When adding new indexes, there is also no risk of incomplete or inconsistent query results.
+There is no impact to read availability when adding new indexed paths. Queries will only utilize new indexed paths once an index transformation is complete. In other words, when adding a new indexed path, queries that benefit from that indexed path will have the same performance before and during the index transformation. After the index transformation is complete, the query engine will begin to use the new indexed paths.
-When removing indexes and immediately running queries that filter on the dropped indexes, there is not a guarantee of consistent or complete query results. If you remove multiple indexes and do so in one single indexing policy change, the query engine provides consistent and complete results throughout the index transformation. However, if you remove indexes through multiple indexing policy changes, the query engine will not provide consistent or complete results until all index transformations complete. Most developers do not drop indexes and then immediately try to run queries that utilize these indexes so, in practice, this situation is unlikely.
+When removing indexed paths, you should group all your changes into one indexing policy transformation. If you remove multiple indexes and do so in one single indexing policy change, the query engine provides consistent and complete results throughout the index transformation. However, if you remove indexes through multiple indexing policy changes, the query engine will not provide consistent or complete results until all index transformations complete. Most developers do not drop indexes and then immediately try to run queries that utilize these indexes so, in practice, this situation is unlikely.
+
+When you drop an indexed path, the query engine will immediately stop using it and instead do a full scan.
> [!NOTE] > Where possible, you should always try to group multiple indexing changes into one single indexing policy modification
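As a sketch of grouping changes into a single modification with the .NET v3 SDK (assuming `container` is your `Container` instance; the property paths are hypothetical):

```csharp
using Microsoft.Azure.Cosmos;

// Read the current container definition, batch every indexing change,
// then submit them together as one policy replace.
ContainerResponse containerResponse = await container.ReadContainerAsync();
ContainerProperties properties = containerResponse.Resource;

properties.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/newProperty/?" });  // hypothetical path
properties.IndexingPolicy.ExcludedPaths.Add(new ExcludedPath { Path = "/oldProperty/?" });  // hypothetical path

// A single ReplaceContainerAsync call triggers one index transformation.
await container.ReplaceContainerAsync(properties);
```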
cosmos-db Mongodb Time To Live https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-time-to-live.md
Time-to-live (TTL) functionality allows the database to automatically expire data. Azure Cosmos DB's API for MongoDB utilizes Cosmos DB's core TTL capabilities. Two modes are supported: setting a default TTL value on the whole collection, and setting individual TTL values for each document. The logic governing TTL indexes and per-document TTL values in Cosmos DB's API for MongoDB is the [same as in Cosmos DB](../cosmos-db/mongodb-indexing.md). ## TTL indexes
-To enable TTL universally on a collection, a ["TTL index" (time-to-live index)](../cosmos-db/mongodb-indexing.md) needs to be created. The TTL index is an index on the _ts field with an "expireAfterSeconds" value.
+To enable TTL universally on a collection, a ["TTL index" (time-to-live index)](../cosmos-db/mongodb-indexing.md) needs to be created. The TTL index is an index on the `_ts` field with an "expireAfterSeconds" value.
-Example:
-```JavaScript
+JavaScript example:
+
+```js
globaldb:PRIMARY> db.coll.createIndex({"_ts":1}, {expireAfterSeconds: 10}) { "_t" : "CreateIndexesResponse",
globaldb:PRIMARY> db.coll.createIndex({"_ts":1}, {expireAfterSeconds: 10})
The command in the above example will create an index with TTL functionality. Once the index is created, the database will automatically delete any documents in that collection that have not been modified in the last 10 seconds. > [!NOTE]
-> **_ts** is a Cosmos DB-specific field and is not accessible from MongoDB clients. It is a reserved (system) property that contains the timestamp of the document's last modification.
->
-
-Additionally, a C# example:
+> `_ts` is a Cosmos DB-specific field and is not accessible from MongoDB clients. It is a reserved (system) property that contains the timestamp of the document's last modification.
+
+Java example:
+
+```java
+MongoCollection collection = mongoDB.getCollection("collectionName");
+String index = collection.createIndex(Indexes.ascending("_ts"),
+new IndexOptions().expireAfter(10L, TimeUnit.SECONDS));
+```
+
+C# example:
```csharp var options = new CreateIndexOptions {ExpireAfter = TimeSpan.FromSeconds(10)};
cosmos-db Sql Api Sdk Java Spark V3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-java-spark-v3.md
developers to work with data using a variety of standard APIs, such as SQL, Mong
## Documentation -- [Getting started](https://github.com/Azure/azure-sdk-for-jav)-- [Catalog API](https://github.com/Azure/azure-sdk-for-jav)-- [Configuration Parameter Reference](https://github.com/Azure/azure-sdk-for-jav)
+- [Getting started](https://github.com/Azure/azure-sdk-for-jav)
+- [Catalog API](https://github.com/Azure/azure-sdk-for-jav)
+- [Configuration Parameter Reference](https://github.com/Azure/azure-sdk-for-jav)
## Version compatibility
cosmos-db Table Storage Design Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-storage-design-guide.md
Here are some general guidelines for designing Table storage queries. The filter
`$filter=PartitionKey eq 'Sales' and LastName eq 'Smith'`. * A *table scan* doesn't include the `PartitionKey`, and is inefficient because it searches all of the partitions that make up your table for any matching entities. It performs a table scan regardless of whether or not your filter uses the `RowKey`. For example: `$filter=LastName eq 'Jones'`.
-* Azure Table storage queries that return multiple entities sort them in `PartitionKey` and `RowKey` order. To avoid resorting the entities in the client, choose a `RowKey` that defines the most common sort order. Query results returned by the Azure Table API in Azure Cosmos DB aren't sorted by partition key or row key. For a detailed list of feature differences, see [differences between Table API in Azure Cosmos DB and Azure Table storage](/cosmos-db/table-api-faq#table-api-in-azure-cosmos-db-vs-azure-table-storage).
+* Azure Table storage queries that return multiple entities sort them in `PartitionKey` and `RowKey` order. To avoid resorting the entities in the client, choose a `RowKey` that defines the most common sort order. Query results returned by the Azure Table API in Azure Cosmos DB aren't sorted by partition key or row key. For a detailed list of feature differences, see [differences between Table API in Azure Cosmos DB and Azure Table storage](/azure/cosmos-db/table-storage-how-to-use-java).
Using an "**or**" to specify a filter based on `RowKey` values results in a partition scan, and isn't treated as a range query. Therefore, avoid queries that use filters such as: `$filter=PartitionKey eq 'Sales' and (RowKey eq '121' or RowKey eq '322')`.
Many designs must meet requirements to enable lookup of entities based on multip
Table storage returns query results sorted in ascending order, based on `PartitionKey` and then by `RowKey`. > [!NOTE]
-> Query results returned by the Azure Table API in Azure Cosmos DB aren't sorted by partition key or row key. For a detailed list of feature differences, see [differences between Table API in Azure Cosmos DB and Azure Table storage](/table-api-faq.yml#table-api-in-azure-cosmos-db-vs-azure-table-storage).
+> Query results returned by the Azure Table API in Azure Cosmos DB aren't sorted by partition key or row key. For a detailed list of feature differences, see [differences between Table API in Azure Cosmos DB and Azure Table storage](/azure/cosmos-db/table-storage-how-to-use-java).
Keys in Table storage are string values. To ensure that numeric values sort correctly, you should convert them to a fixed length, and pad them with zeroes. For example, if the employee ID value you use as the `RowKey` is an integer value, you should convert employee ID **123** to **00000123**.
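For example, a fixed-width conversion in C#:

```csharp
// Pad numeric IDs to a fixed width so they sort correctly as strings.
int employeeId = 123;
string rowKey = employeeId.ToString("D8");  // "00000123"
```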
The following patterns and guidance might also be relevant when implementing thi
Retrieve the *n* entities most recently added to a partition by using a `RowKey` value that sorts in reverse date and time order. > [!NOTE]
-> Query results returned by the Azure Table API in Azure Cosmos DB aren't sorted by partition key or row key. Thus, while this pattern is suitable for Table storage, it isn't suitable for Azure Cosmos DB. For a detailed list of feature differences, see [differences between Table API in Azure Cosmos DB and Azure Table Storage](/table-api-faq.yml#table-api-in-azure-cosmos-db-vs-azure-table-storage).
+> Query results returned by the Azure Table API in Azure Cosmos DB aren't sorted by partition key or row key. Thus, while this pattern is suitable for Table storage, it isn't suitable for Azure Cosmos DB. For a detailed list of feature differences, see [differences between Table API in Azure Cosmos DB and Azure Table Storage](/azure/cosmos-db/table-storage-how-to-use-java).
#### Context and problem A common requirement is to be able to retrieve the most recently created entities, for example the ten most recent expense claims submitted by an employee. Table queries support a `$top` query operation to return the first *n* entities from a set. There's no equivalent query operation to return the last *n* entities in a set.
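A common way to build such a reverse-sorting `RowKey` is to subtract the timestamp's ticks from the maximum tick value, so later timestamps produce lexically smaller keys. The following sketch assumes the `Azure.Data.Tables` .NET client and placeholder table and partition names:

```csharp
using System;
using System.Linq;
using Azure.Data.Tables;

// Later timestamps yield lexically smaller keys, so newest entities sort first.
string ReverseTicksRowKey(DateTime timestampUtc) =>
    (DateTime.MaxValue.Ticks - timestampUtc.Ticks).ToString("D19");

// With that key scheme, the ten most recent entities in a partition are
// simply the first ten results returned.
var client = new TableClient("<connection-string>", "expenseclaims");
var latestTen = client.Query<TableEntity>("PartitionKey eq 'employee-123'")
    .Take(10);
```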
cosmos-db Troubleshoot Java Async Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/troubleshoot-java-async-sdk.md
To identify which library brings in RxJava-1.2.2 run the following command next
```bash mvn dependency:tree ```
-For more information, see the [maven dependency tree guide](https://maven.apache.org/plugins/maven-dependency-plugin/examples/resolving-conflicts-using-the-dependency-tree.html).
+For more information, see the [maven dependency tree guide](https://maven.apache.org/plugins-archives/maven-dependency-plugin-2.10/examples/resolving-conflicts-using-the-dependency-tree.html).
Once you identify which other dependency of your project brings in RxJava-1.2.2 as a transitive dependency, you can modify that dependency in your pom file and exclude the RxJava transitive dependency:
cosmos-db Troubleshoot Java Sdk V4 Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/troubleshoot-java-sdk-v4-sql.md
To identify which of your project dependencies brings in an older version of som
```bash mvn dependency:tree ```
-For more information, see the [maven dependency tree guide](https://maven.apache.org/plugins/maven-dependency-plugin/examples/resolving-conflicts-using-the-dependency-tree.html).
+For more information, see the [maven dependency tree guide](https://maven.apache.org/plugins-archives/maven-dependency-plugin-2.10/examples/resolving-conflicts-using-the-dependency-tree.html).
Once you know which dependency of your project depends on an older version, you can modify the dependency on that library in your pom file and exclude the transitive dependency, following the example below (which assumes that *reactor-core* is the outdated dependency):
cost-management-billing Quick Create Budget Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/quick-create-budget-template.md
Budgets in Cost Management help you plan for and drive organizational accountabi
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-[![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fcreate-budget%2Fazuredeploy.json)
+[![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.consumption%2Fcreate-budget%2Fazuredeploy.json)
## Prerequisites
For more information about assigning permission to Cost Management data, see [As
The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/create-budget). One Azure resource is defined in the template:
1. Select the following image to sign in to Azure and open a template. The template creates a budget.
- [![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fcreate-budget%2Fazuredeploy.json)
+ [![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.consumption%2Fcreate-budget%2Fazuredeploy.json)
2. Select or enter the following values.
data-factory Compute Optimized Retire https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/compute-optimized-retire.md
Azure Data Factory and Azure Synapse Analytics data flows provide a low-code mec
| Compute Option | Performance | | :-- | :-- |
-| General Purpose Data Flows | Best performing runtime for data flows when working with large datasets and many calculations |
-| Memory Optimized Data Flows | Good for general use cases in production workloads |
+| General Purpose Data Flows | Good for general use cases in production workloads |
+| Memory Optimized Data Flows | Best performing runtime for data flows when working with large datasets and many calculations |
| Compute Optimized Data Flows | Not recommended for production workloads | ## Migration steps
data-factory Data Factory Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-factory-troubleshoot-guide.md
description: Learn how to troubleshoot external control activities in Azure Data
Previously updated : 04/30/2020 Last updated : 06/18/2021
The following table applies to U-SQL.
- **Recommendation**: Go to the Azure portal and find your storage, then copy-and-paste the connection string into your linked service and try again.
-### Error code: 2108
--- **Message**: `Error calling the endpoint '%url;'. Response status code: '%code;'`--- **Cause**: The request failed due to an underlying issue such as network connectivity, DNS failure, server certificate validation, or timeout.--- **Recommendation**: Use Fiddler/Postman to validate the request.- ### Error code: 2110 - **Message**: `The linked service type '%linkedServiceType;' is not supported for '%executorType;' activities.`
The following table applies to U-SQL.
- **Recommendation**: Use storage in another cloud and try again.
-### Error code: 2128
--- **Message**: `No response from the endpoint. Possible causes: network connectivity, DNS failure, server certificate validation or timeout.`--- **Cause**: Network connectivity, DNS failure, server certificate validation or timeout.--- **Recommendation**: Validate that the endpoint you are trying to hit is responding to requests. You may use tools like Fiddler/Postman.- ## Custom The following table applies to Azure Batch.
The following table applies to Azure Batch.
- **Cause**: This issue is due to either Network connectivity, a DNS failure, a server certificate validation, or a timeout. -- **Recommendation**: Validate that the endpoint you are trying to hit is responding to requests. You may use tools like **Fiddler/Postman**.
+- **Recommendation**: Validate that the endpoint you are trying to hit is responding to requests. You may use tools like **Fiddler/Postman/Netmon/Wireshark**.
### Error code: 2108
The following table applies to Azure Batch.
- **Cause**: The request failed due to an underlying issue such as network connectivity, a DNS failure, a server certificate validation, or a timeout. -- **Recommendation**: Use Fiddler/Postman to validate the request.
+- **Recommendation**: Use Fiddler/Postman/Netmon/Wireshark to validate the request.
#### More details To use **Fiddler** to create an HTTP session of the monitored web application:
data-factory Ssis Integration Runtime Diagnose Connectivity Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/ssis-integration-runtime-diagnose-connectivity-faq.md
Previously updated : 06/07/2020 Last updated : 06/21/2021 # Use the diagnose connectivity feature in the SSIS integration runtime
Use the following sections to learn about the most common errors that occur when
## Next steps -- [Deploy an SSIS project to Azure with SSMS](/sql/integration-services/ssis-quickstart-deploy-ssms)-- [Run SSIS packages in Azure with SSMS](/sql/integration-services/ssis-quickstart-run-ssms)-- [Schedule SSIS packages in Azure](/sql/integration-services/lift-shift/ssis-azure-schedule-packages-ssms)
+- [Migrate SSIS jobs with SSMS](https://docs.microsoft.com/azure/data-factory/how-to-migrate-ssis-job-ssms)
+- [Run SSIS packages in Azure with SSDT](https://docs.microsoft.com/azure/data-factory/how-to-invoke-ssis-package-ssdt)
+- [Schedule SSIS packages in Azure](https://docs.microsoft.com/azure/data-factory/how-to-schedule-azure-ssis-integration-runtime)
databox-online Azure Stack Edge Gpu Collect Virtual Machine Guest Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-collect-virtual-machine-guest-logs.md
Previously updated : 06/03/2021 Last updated : 06/21/2021 # Collect VM guest logs on an Azure Stack Edge Pro GPU device
databox-online Azure Stack Edge Gpu Troubleshoot Virtual Machine Gpu Extension Installation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-troubleshoot-virtual-machine-gpu-extension-installation.md
Previously updated : 06/02/2021 Last updated : 06/21/2021 # Troubleshoot GPU extension issues for GPU VMs on Azure Stack Edge Pro GPU
For installation steps, see [Install GPU extension](./azure-stack-edge-gpu-deplo
**Error description:** Extension provisioning failed during extension installation or while in the Enable state.
-1. Check the guest log for the associated error. <!--To collect the guest logs, see [Collect guest logs for VMs on an Azure Stack Edge Pro](azure-stack-edge-gpu-collect-virtual-machine-guest-logs.md).-->
+1. Check the guest log for the associated error. To collect the guest logs, see [Collect guest logs for VMs on an Azure Stack Edge Pro](azure-stack-edge-gpu-collect-virtual-machine-guest-logs.md).
On a Linux VM: * Look in `/var/log/waagent.log` or `/var/log/azure/nvidia-vmext-status`.
For installation steps, see [Install GPU extension](./azure-stack-edge-gpu-deplo
**Suggested solution:** To resolve the issue, do these steps:
-1. To find out what process is applying the lock, search the \var\log\azure\nvidia-vmext-status log for an error such as "dpkg is used by another process" or "Another app is holding yum lock".
+1. To find out what process is applying the lock, search the /var/log/azure/nvidia-vmext-status log for an error such as "dpkg is used by another process" or "Another app is holding `yum lock`".
1. Either wait for the process to finish, or end the process.
For installation steps, see [Install GPU extension](./azure-stack-edge-gpu-deplo
## Next steps -- [Install the GPU extension](./azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension.md?tabs=linux)<!--Temporary link until next one can be restored.-->
-<!-- Remove link while cmdlet is fixed. - [Collect guest logs, and create a Support package](azure-stack-edge-gpu-collect-virtual-machine-guest-logs.md)-->
+[Collect guest logs, and create a Support package](azure-stack-edge-gpu-collect-virtual-machine-guest-logs.md)
databox-online Azure Stack Edge Gpu Troubleshoot Virtual Machine Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-troubleshoot-virtual-machine-provisioning.md
Previously updated : 06/04/2021 Last updated : 06/21/2021 # Troubleshoot VM deployment in Azure Stack Edge Pro GPU
This article describes how to troubleshoot common errors when deploying virtual machines on an Azure Stack Edge Pro GPU device. The article provides guidance for investigating the most common issues that cause VM provisioning timeouts and issues during network interface and VM creation.
-To diagnose any VM provisioning failure, you'll review guest logs for the failed virtual machine. <!--For steps to collect VM guest logs and include them in a Support package, see [Collect guest logs for VMs on Azure Stack Edge Pro](azure-stack-edge-gpu-collect-virtual-machine-guest-logs.md).-->
+To diagnose any VM provisioning failure, you'll review guest logs for the failed virtual machine. For steps to collect VM guest logs and include them in a Support package, see [Collect guest logs for VMs on Azure Stack Edge Pro](azure-stack-edge-gpu-collect-virtual-machine-guest-logs.md).
For guidance on issues that prevent successful upload of a VM image before your VM deployment, see [Troubleshoot virtual machine image uploads in Azure Stack Edge Pro GPU](azure-stack-edge-gpu-troubleshoot-virtual-machine-image-upload.md).
If Kubernetes is enabled before the VM is created, Kubernetes will use all the a
## Next steps
-<!-- Remove link while cmdlet issue is fixed. - * [Collect a Support package that includes guest logs for a failed VM](azure-stack-edge-gpu-collect-virtual-machine-guest-logs.md)-->
+* [Collect a Support package that includes guest logs for a failed VM](azure-stack-edge-gpu-collect-virtual-machine-guest-logs.md)
* [Troubleshoot issues with a failed GPU extension installation](azure-stack-edge-gpu-troubleshoot-virtual-machine-gpu-extension-installation.md) * [Troubleshoot issues with Azure Resource Manager](azure-stack-edge-gpu-troubleshoot-azure-resource-manager.md)
databox Data Box Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-faq.md
A. To speed up the copy process:
- Copy files to the VM's disk.-->
+### Q. Can I use Data Box to import data to a storage account with private endpoints configured?
+A. Yes. You can import data to a storage account that has private endpoint connections enabled. To let the Data Box service import the data, select "Allow trusted Microsoft services to access this storage account" under the Networking section of the storage account.
++ ### Q. Can I use multiple storage accounts with Data Box? A. Yes. A maximum of 10 storage accounts, general purpose, classic, or blob storage are supported with Data Box. Both hot and cool blob are supported.
If you chose self-managed shipping, then you can pick up or drop off your Data B
- Review the [Data Box system requirements](data-box-system-requirements.md). - Understand the [Data Box limits](data-box-limits.md).-- Quickly deploy [Azure Data Box](data-box-quickstart-portal.md) in Azure portal.
+- Quickly deploy [Azure Data Box](data-box-quickstart-portal.md) in Azure portal.
defender-for-iot How To Identify Required Appliances https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-identify-required-appliances.md
Title: Identify required appliances description: Learn about hardware and virtual appliances for certified Defender for IoT sensors and the on-premises management console. Previously updated : 06/07/2021 Last updated : 06/21/2021
Defender for IoT supports both physical and virtual deployments.
This section provides an overview of physical sensor models that are available. You can purchase sensors with preconfigured software or purchase sensors that are not preconfigured.
-| Deployment type | Corporate | Enterprise | SMB rack mount| SMB ruggedized|
+| Deployment type | Corporate | Enterprise | SMB rack mount | SMB Ruggedized |
|--|--|--|--|--| | Image | :::image type="content" source="media/how-to-prepare-your-network/corporate-hpe-proliant-dl360-v2.png" alt-text="The corporate-level model."::: | :::image type="content" source="media/how-to-prepare-your-network/enterprise-and-smb-hpe-proliant-dl20-v2.png" alt-text="The enterprise-level model."::: | :::image type="content" source="media/how-to-prepare-your-network/enterprise-and-smb-hpe-proliant-dl20-v2.png" alt-text="The SMB-level model."::: | :::image type="content" source="media/how-to-prepare-your-network/office-ruggedized.png" alt-text="The SMB-ruggedized level model."::: | | Model | HPE ProLiant DL360 | HPE ProLiant DL20 | HPE ProLiant DL20 | HPE EL300 |
-| Monitoring ports | Up to 15 RJ45 or 8 OPT | Up to 8 RJ45 or 6 OPT | 4 RJ45 | Up to 5 |
+| Monitoring ports | Up to 15 RJ45 or 8 OPT | Up to 8 RJ45 or 6 OPT | Up to 4 RJ45 | Up to 5 RJ45 |
| Maximum bandwidth [1](#anchortext) | 3 Gb/sec | 1 Gb/sec | 200 Mb/Sec | 100 Mb/sec | | Maximum protected devices | 30,000 | 15,000 | 1,000 | 800 |
This section provides an overview of the virtual sensors that are available.
| Deployment type | Corporate | Enterprise | SMB | |--|--|--|--| | Maximum bandwidth | 2.5 Gb/sec | 800 Mb/sec | 160 Mb/sec |
-| Maximum protected devices | 30,000 | 10,000 | 2,500 |
+| Maximum protected devices | 30,000 | 10,000 | 800 |
## On-premises management console appliance
defender-for-iot How To Install Software https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-install-software.md
Title: Defender for IoT installation description: Learn how to install a sensor and the on-premises management console for Azure Defender for IoT. Previously updated : 06/07/2021 Last updated : 06/21/2021
The Defender for IoT appliance sensor connects to a SPAN port or network TAP and
The following rack mount appliances are available:
-| **Deployment type** | **Corporate** | **Enterprise** | **SMB** |**Line** |
+| **Deployment type** | **Corporate** | **Enterprise** | **SMB** |**SMB Ruggedized** |
|--|--|--|--|--|
-| **Model** | HPE ProLiant DL360 | Dell PowerEdge R340 XL | HPE ProLiant DL20 | HPE ProLiant DL20 |
-| **Monitoring ports** | up to 15 RJ45 or 8 OPT | up to 9 RJ45 or 6 OPT | up to 8 RJ45 or 6 OPT | 4 RJ45 |
-| **Max Bandwidth\*** | 3 Gb/Sec | 1 Gb/Sec | 1 Gb/Sec | 100 Mb/Sec |
-| **Max Protected Devices** | 30,000 | 10,000 | 15,000 | 1,000 |
+| **Model** | HPE ProLiant DL360 | HPE ProLiant DL20 | HPE ProLiant DL20 | HPE EL300 |
+| **Monitoring ports** | up to 15 RJ45 or 8 OPT | up to 8 RJ45 or 6 OPT | up to 4 RJ45 | Up to 5 RJ45 |
+| **Max Bandwidth\*** | 3 Gb/Sec | 1 Gb/Sec | 200 Mb/Sec | 100 Mb/Sec |
+| **Max Protected Devices** | 30,000 | 15,000 | 1,000 | 800 |
*Maximum bandwidth capacity might vary depending on protocol distribution.
The following rack mount appliances are available:
The following virtual appliances are available:
-| **Deployment type** | **Corporate** | **Enterprise** | **SMB** | **Line** |
-|--|--|--|--|--|
-| **Description** | Virtual appliance for corporate deployments | Virtual appliance for enterprise deployments | Virtual appliance for SMB deployments | Virtual appliance for line deployments |
-| **Max Bandwidth\*** | 2.5 Gb/Sec | 800 Mb/sec | 160 Mb/sec | 3 Mb/sec |
-| **Max protected devices** | 30,000 | 10,000 | 2,500 | 100 |
-| **Deployment Type** | Corporate | Enterprise | SMB | Line |
+| **Deployment type** | **Corporate** | **Enterprise** | **SMB** |
+|--|--|--|--|
+| **Description** | Virtual appliance for corporate deployments | Virtual appliance for enterprise deployments | Virtual appliance for SMB deployments |
+| **Max Bandwidth\*** | 2.5 Gb/Sec | 800 Mb/sec | 160 Mb/sec |
+| **Max protected devices** | 30,000 | 10,000 | 800 |
*Maximum bandwidth capacity might vary depending on protocol distribution.
digital-twins Concepts Ontologies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-ontologies.md
# What is an ontology?
-The vocabulary of an Azure Digital Twins solution is defined using [models](concepts-models.md), which describe the types of entity that exist in your environment.
+The vocabulary of an Azure Digital Twins solution is defined using [models](concepts-models.md), which describe the types of entities that exist in your environment.
Sometimes, when your solution is tied to a particular industry, it can be easier and more effective to start with a set of models for that industry that already exist, instead of authoring your own model set from scratch. These pre-existing model sets are called **ontologies**.
digital-twins Concepts Twins Graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-twins-graph.md
The result of this process is a set of nodes (the digital twins) connected via e
[!INCLUDE [visualizing with Azure Digital Twins explorer](../../includes/digital-twins-visualization.md)] + ## Create with the APIs This section shows what it looks like to create digital twins and relationships from a client application. It contains .NET code examples that utilize the [DigitalTwins APIs](/rest/api/digital-twins/dataplane/twins), to provide additional context on what goes on inside each of these concepts. ### Create digital twins
-Below is a snippet of client code that uses the [DigitalTwins APIs](/rest/api/digital-twins/dataplane/twins) to instantiate a twin of type Room.
+Below is a snippet of client code that uses the [DigitalTwins APIs](/rest/api/digital-twins/dataplane/twins) to instantiate a twin of type Room, with a `twinId` that you define when you create the twin.
You can initialize the properties of a twin when it is created, or set them later. To create a twin with initialized properties, create a JSON document that provides the necessary initialization values.
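As a hedged sketch of both points using the `Azure.DigitalTwins.Core` .NET SDK (the instance host name, twin ID, model ID, and initial property below are placeholder assumptions, not values from the original article):

```csharp
using System;
using Azure.DigitalTwins.Core;
using Azure.Identity;

var client = new DigitalTwinsClient(
    new Uri("https://<your-instance>.api.<region>.digitaltwins.azure.net"),
    new DefaultAzureCredential());

// The twinId ("myRoomId") is chosen here, at creation time.
var twin = new BasicDigitalTwin
{
    Id = "myRoomId",
    Metadata = { ModelId = "dtmi:example:Room;1" },   // placeholder model ID
    Contents = { { "Temperature", 70.0 } }            // optional initial value
};

await client.CreateOrReplaceDigitalTwinAsync("myRoomId", twin);
```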
digital-twins How To Manage Twin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-twin.md
To create a twin, you use the `CreateOrReplaceDigitalTwinAsync()` method on the
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_sample.cs" id="CreateTwinCall"::: To create a digital twin, you need to provide:
-* The desired ID for the digital twin
+* The desired ID for the digital twin, which you define at creation time
* The [model](concepts-models.md) you want to use Optionally, you can provide initial values for all properties of the digital twin. Properties are treated as optional and can be set later, but **they won't show up as part of a twin until they've been set.**
When a twin is created using this model, it's not necessary to instantiate the `
This can be done with a JSON Patch `add` operation, like this:
-```json
-[
- {
- "op": "add",
- "path": "/ObjectProperty",
- "value": {"StringSubProperty":"<string-value>"}
- }
-]
-```
>[!NOTE] > If `ObjectProperty` has more than one property, you should include all of them in the `value` field of this operation, even if you're only updating one:
This can be done with a JSON Patch `add` operation, like this:
After this has been done once, a path to `StringSubProperty` exists, and it can be updated directly from now on with a typical `replace` operation:
-```json
-[
- {
- "op": "replace",
- "path": "/ObjectProperty/StringSubProperty",
- "value": "<string-value>"
- }
-]
-```
Although the first step isn't necessary in cases where `ObjectProperty` was instantiated when the twin was created, it's recommended to use it every time you update a sub-property for the first time, since you may not always know for sure whether the object property was initially instantiated or not.
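Here's a minimal sketch of that two-step flow with the .NET SDK's `JsonPatchDocument`; the instance host name, twin ID, and property values are placeholders:

```csharp
using System;
using Azure;
using Azure.DigitalTwins.Core;
using Azure.Identity;

var client = new DigitalTwinsClient(
    new Uri("https://<your-instance>.api.<region>.digitaltwins.azure.net"),
    new DefaultAzureCredential());

// First update: the object property was never instantiated, so 'add' it
// whole, including every sub-property it contains.
var addPatch = new JsonPatchDocument();
addPatch.AppendAdd("/ObjectProperty", new { StringSubProperty = "initial" });
await client.UpdateDigitalTwinAsync("<twin-id>", addPatch);

// Later updates: the path now exists, so 'replace' the sub-property directly.
var replacePatch = new JsonPatchDocument();
replacePatch.AppendReplace("/ObjectProperty/StringSubProperty", "updated");
await client.UpdateDigitalTwinAsync("<twin-id>", replacePatch);
```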
digital-twins How To Set Up Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-set-up-instance-cli.md
In this section, you will **create a new instance of Azure Digital Twins** using
* A name for your instance. If your subscription has another Azure Digital Twins instance in the region that's already using the specified name, you'll be asked to pick a different name.
-Use these values in the following command to create the instance:
+Use these values in the following [az dt command](/cli/azure/dt?view=azure-cli-latest&preserve-view=true) to create the instance:
```azurecli-interactive az dt create --dt-name <name-for-your-Azure-Digital-Twins-instance> --resource-group <your-resource-group> --location <region>
event-grid Event Schema Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/event-schema-farmbeats.md
This article provides the properties and schema for Azure FarmBeats events. For
|Microsoft.AgFoodPlatform.FarmChanged| Published when a farm is created/updated/deleted. |Microsoft.AgFoodPlatform.BoundaryChanged|Published when a boundary is created/updated/deleted. |Microsoft.AgFoodPlatform.FieldChanged|Published when a field is created/updated/deleted.
-|Microsoft.AgFoodPlatform.SeasonalField Changed|Published when a seasonal field is created /updated/deleted.
+|Microsoft.AgFoodPlatform.SeasonalFieldChanged|Published when a seasonal field is created/updated/deleted.
|Microsoft.AgFoodPlatform.SeasonChanged|Published when a season is created/updated/deleted. |Microsoft.AgFoodPlatform.CropChanged|Published when a crop is created/updated/deleted. |Microsoft.AgFoodPlatform.CropVarietyChanged|Published when a crop variety is created/updated/deleted. |Microsoft.AgFoodPlatform.SatelliteDataIngestionJobStatusChange| Published when a satellite data ingestion job's status changes, for example, job is created, has progressed or completed.
-|Microsoft.AgFoodPlatform.WeatherDataIngestionJobStatusChange|Published when a satellite data ingestion job's status changes, for example, job is created, has progressed or completed.
-|Microsoft.AgFoodPlatform.FarmOperationDataIngestionJobStatusChange| Published when a satellite data ingestion job's status changes, for example, job is created, has progressed or completed.
+|Microsoft.AgFoodPlatform.WeatherDataIngestionJobStatusChange|Published when a weather data ingestion job's status changes, for example, job is created, has progressed or completed.
+|Microsoft.AgFoodPlatform.FarmOperationDataIngestionJobStatusChange| Published when a farm operations data ingestion job's status changes, for example, job is created, has progressed or completed.
|Microsoft.AgFoodPlatform.ApplicationDataChanged|Published when application data is created/updated/deleted. This event is associated with farm operations data. |Microsoft.AgFoodPlatform.HarvestingDataChanged|Published when harvesting data is created/updated/deleted. This event is associated with farm operations data. |Microsoft.AgFoodPlatform.TillageDataChanged|Published when tillage data is created/updated/deleted. This event is associated with farm operations data.
event-grid Install K8s Extension https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/kubernetes/install-k8s-extension.md
To establish a secure HTTPS communication with the Event Grid broker and Event G
:::image type="content" source="./media/install-k8s-extension/monitoring-page.png" alt-text="Install Event Grid extension - Monitoring page"::: 1. Select **Next: Tags** to navigate to the **Tags** page. 1. On the **Tags** page, do the following steps:
- 1. Define [tags](/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging), if necessary.
+ 1. Define [tags](/azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging), if necessary.
:::image type="content" source="./media/install-k8s-extension/tags-page.png" alt-text="Install Event Grid extension - Tags page"::: 1. Select **Review + create** at the bottom of the page.
event-hubs Event Hubs Geo Dr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-geo-dr.md
Title: Geo-disaster recovery - Azure Event Hubs| Microsoft Docs description: How to use geographical regions to fail over and perform disaster recovery in Azure Event Hubs Previously updated : 04/14/2021 Last updated : 06/21/2021 # Azure Event Hubs - Geo-disaster recovery
The Geo-disaster recovery feature of Azure Event Hubs is a disaster recovery sol
The disaster recovery feature implements metadata disaster recovery, and relies on primary and secondary disaster recovery namespaces.
-The Geo-disaster recovery feature is available for the [standard and dedicated SKUs](https://azure.microsoft.com/pricing/details/event-hubs/) only. You don't need to make any connection string changes, as the connection is made via an alias.
+The Geo-disaster recovery feature is available for the [standard, premium, and dedicated SKUs](https://azure.microsoft.com/pricing/details/event-hubs/) only. You don't need to make any connection string changes, as the connection is made via an alias.
The following terms are used in this article:
The following terms are used in this article:
## Supported namespace pairs The following combinations of primary and secondary namespaces are supported:
-| Primary namespace | Secondary namespace | Supported |
-| -- | -- | - |
-| Standard | Standard | Yes |
-| Standard | Dedicated | Yes |
-| Dedicated | Dedicated | Yes |
-| Dedicated | Standard | No |
+| Primary namespace tier | Allowed secondary namespace tier |
+| -- | -- |
+| Standard | Standard, Dedicated |
+| Premium | Premium |
+| Dedicated | Dedicated |
> [!NOTE] > You can't pair namespaces that are in the same dedicated cluster. You can pair namespaces that are in separate clusters.
The following combinations of primary and secondary namespaces are supported:
The following section is an overview of the failover process, and explains how to set up the initial failover.
-![1][]
+ ### Setup
If you initiate the failover, two steps are required:
> [!NOTE] > Only fail forward semantics are supported. In this scenario, you fail over and then re-pair with a new namespace. Failing back isn't supported the way it is in, for example, a SQL cluster.
-![2][]
## Management
Note the following considerations to keep in mind:
5. Synchronizing entities can take some time, approximately 50-100 entities per minute. ## Availability Zones
+Event Hubs supports [Availability Zones](../availability-zones/az-overview.md), providing fault-isolated locations within an Azure region. The Availability Zones support is only available in [Azure regions with availability zones](../availability-zones/az-region.md#azure-regions-with-availability-zones). Both metadata and data (events) are replicated across data centers in the availability zone.
-The Event Hubs Standard SKU supports [Availability Zones](../availability-zones/az-overview.md), providing fault-isolated locations within an Azure region.
-
-> [!NOTE]
-> The Availability Zones support for Azure Event Hubs Standard is only available in [Azure regions](../availability-zones/az-region.md) where availability zones are present.
-
-You can enable Availability Zones on new namespaces only, using the Azure portal. Event Hubs doesn't support migration of existing namespaces. You can't disable zone redundancy after enabling it on your namespace.
-
-When you use availability zones, both metadata and data (events) are replicated across data centers in the availability zone.
+When creating a namespace, you see the following highlighted message when you select a region that has availability zones.
-![3][]
## Private endpoints This section provides more considerations when using Geo-disaster recovery with namespaces that use private endpoints. To learn about using private endpoints with Event Hubs in general, see [Configure private endpoints](private-link-service.md).
Review the following samples or reference documentation.
- [TypeScript samples](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/eventhub/event-hubs/samples/typescript) - [REST API reference](/rest/api/eventhub/)
-[1]: ./media/event-hubs-geo-dr/geo1.png
[2]: ./media/event-hubs-geo-dr/geo2.png
-[3]: ./media/event-hubs-geo-dr/eh-az.png
+
firewall-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall-manager/overview.md
Previously updated : 04/29/2021 Last updated : 06/21/2021
Azure Firewall Manager has the following known issues:
|One secured virtual hub per region|You can't have more than one secured virtual hub per region.|Create multiple virtual WANs in a region.| |Base policies must be in same region as local policy|Create all your local policies in the same region as the base policy. You can still apply a policy that was created in one region on a secured hub from another region.|Investigating| |Filtering inter-hub traffic in secure virtual hub deployments|Secured Virtual Hub to Secured Virtual Hub communication filtering isn't yet supported. However, hub to hub communication still works if private traffic filtering via Azure Firewall isn't enabled.|Investigating|
-|Spokes in different region than the virtual hub|Spokes in different region than the virtual hub aren't supported.|Investigating<br><br>Create a hub per region and peer VNets in the same region as the hub.|
|Branch to branch traffic with private traffic filtering enabled|Branch to branch traffic isn't supported when private traffic filtering is enabled. |Investigating.<br><br>Don't secure private traffic if branch to branch connectivity is critical.| |All Secured Virtual Hubs sharing the same virtual WAN must be in the same resource group.|This behavior is aligned with Virtual WAN Hubs today.|Create multiple Virtual WANs to allow Secured Virtual Hubs to be created in different resource groups.| |Bulk IP address addition fails|The secure hub firewall goes into a failed state if you add multiple public IP addresses.|Add smaller public IP address increments. For example, add 10 at a time.|
firewall-manager Quick Secure Virtual Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall-manager/quick-secure-virtual-hub.md
For more information about Azure Firewall Manager, see [What is Azure Firewall M
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Ffwm-docs-qs%2Fazuredeploy.json)
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.network%2Ffwm-docs-qs%2Fazuredeploy.json)
## Prerequisites
This template creates a secured virtual hub using Azure Firewall Manager, along
The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/fwm-docs-qs/). Multiple Azure resources are defined in the template:
Deploy the ARM template to Azure:
1. Select **Deploy to Azure** to sign in to Azure and open the template. The template creates an Azure Firewall, a virtual WAN and virtual hub, the network infrastructure, and two virtual machines.
- [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Ffwm-docs-qs%2Fazuredeploy.json)
+ [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.network%2Ffwm-docs-qs%2Fazuredeploy.json)
2. In the portal, on the **Secured virtual hubs** page, type or select the following values: - Subscription: Select from existing subscriptions
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/overview.md
Title: Overview of Azure Blueprints description: Understand how the Azure Blueprints service enables you to create, define, and deploy artifacts in your Azure environment. Previously updated : 05/01/2021 Last updated : 06/21/2021 # What is Azure Blueprints?
+> [!IMPORTANT]
+> Azure Blueprints is currently in PREVIEW. The
+> [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)
+> include additional legal terms that apply to Azure features that are in beta, preview, or
+> otherwise not yet released into general availability.
+ Just as a blueprint allows an engineer or an architect to sketch a project's design parameters, Azure Blueprints enables cloud architects and central information technology groups to define a repeatable set of Azure resources that implements and adheres to an organization's standards,
industry Generate Soil Moisture Map In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industry/agriculture/generate-soil-moisture-map-in-azure-farmbeats.md
A farm is a geographical area of interest for which you want to create a soil mo
## Deploy sensors
-You should physically deploy soil moisture sensors on the farm. You can purchase soil moisture sensors from any of our approved partners - [Davis Instruments](https://www.davisinstruments.com/product/enviromonitor-gateway/) and [Teralytic](https://teralytic.com/). You should coordinate with your sensor provider to do the physical setup on your farm.
+You should physically deploy soil moisture sensors on the farm. You can purchase soil moisture sensors from any of our approved partners - [Davis Instruments](https://www.davisinstruments.com/products/enviromonitor-gateway-us-lte) and [Teralytic](https://teralytic.com/). You should coordinate with your sensor provider to do the physical setup on your farm.
## Get soil moisture sensor data from partner
industry Overview Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industry/agriculture/overview-azure-farmbeats.md
With the preview of Azure FarmBeats you can:
## Datahub The Azure FarmBeats Datahub is an API layer, which enables aggregation, normalization, and contextualization of various agriculture datasets across providers. You can use Azure FarmBeats to get:-- **Sensor data** from two sensor providers [Davis Instruments](https://www.davisinstruments.com/product/enviromonitor-gateway/), [Teralytic](https://teralytic.com/), [Pessl Instruments](https://metos.at/)
+- **Sensor data** from three sensor providers: [Davis Instruments](https://www.davisinstruments.com/products/enviromonitor-gateway-us-lte), [Teralytic](https://teralytic.com/), and [Pessl Instruments](https://metos.at/)
- **Satellite imagery** from European Space Agency's [Sentinel-2](https://sentinel.esa.int/web/sentinel/home) satellite mission - **Drone imagery** from three drone imagery providers [senseFly](https://www.sensefly.com/) , [SlantRange](https://slantrange.com/) , [DJI](https://dji.com/)
Azure FarmBeats is offered at no additional charge and you pay only for the Azur
## Next steps > [!div class="nextstepaction"]
-> [Install Azure FarmBeats](install-azure-farmbeats.md)
+> [Install Azure FarmBeats](install-azure-farmbeats.md)
iot-hub Iot Hub Live Data Visualization In Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-live-data-visualization-in-web-apps.md
In this article, you learn how to visualize real-time sensor data that your IoT
* An IoT hub under your subscription * A client application that sends messages to your IoT hub
+* [Node.js](https://nodejs.org) version 10.6 or later. To check your Node.js version, run `node --version`.
+ * [Download Git](https://www.git-scm.com/downloads) * The steps in this article assume a Windows development machine; however, you can easily perform these steps on a Linux system in your preferred shell.
Note down the name you choose, you'll need it later in this tutorial.
IoT hubs are created with several default access policies. One such policy is the **service** policy, which provides sufficient permissions for a service to read and write the IoT hub's endpoints. Run the following command to get a connection string for your IoT hub that adheres to the service policy: ```azurecli-interactive
-az iot hub show-connection-string --hub-name YourIotHub --policy-name service
+az iot hub connection-string show --hub-name YourIotHub --policy-name service
``` The connection string should look similar to the following:
key-vault Create Certificate Signing Request https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/certificates/create-certificate-signing-request.md
Example
This error might occur if **SubjectName** includes any special characters. See notes in the Azure portal and PowerShell instructions.
+- Error type **The CSR used to get your certificate has already been used. Please try to generate a new certificate with a new CSR.**
+ Go to the 'Advanced Policy' section of the certificate and check whether the 'reuse key on renewal' option is turned off.
## Next steps
Example
- [Key Vault Developer's Guide](../general/developers-guide.md) - [Azure Key Vault REST API reference](/rest/api/keyvault) - [Vaults - Create or Update](/rest/api/keyvault/vaults/createorupdate)-- [Vaults - Update Access Policy](/rest/api/keyvault/vaults/updateaccesspolicy)
+- [Vaults - Update Access Policy](/rest/api/keyvault/vaults/updateaccesspolicy)
key-vault Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/private-link.md
az network private-endpoint create --resource-group {RG} --vnet-name {vNet NAME}
az network private-endpoint show --resource-group {RG} --name {Private Endpoint Name} # Approve a Private Link Connection Request
-az keyvault private-endpoint-connection approve --approval-description {"OPTIONAL DESCRIPTION"} --resource-group {RG} --hsm-name {HSM NAME} --name {PRIVATE LINK CONNECTION NAME}
+az keyvault private-endpoint-connection approve --description {"OPTIONAL DESCRIPTION"} --resource-group {RG} --hsm-name {HSM NAME} --name {PRIVATE LINK CONNECTION NAME}
# Deny a Private Link Connection Request
-az keyvault private-endpoint-connection reject --rejection-description {"OPTIONAL DESCRIPTION"} --resource-group {RG} --hsm-name {HSM NAME} --name {PRIVATE LINK CONNECTION NAME}
+az keyvault private-endpoint-connection reject --description {"OPTIONAL DESCRIPTION"} --resource-group {RG} --hsm-name {HSM NAME} --name {PRIVATE LINK CONNECTION NAME}
# Delete a Private Link Connection Request az keyvault private-endpoint-connection delete --resource-group {RG} --hsm-name {HSM NAME} --name {PRIVATE LINK CONNECTION NAME}
Aliases: <your-hsm-name>.managed.azure.net
## Limitations and Design Considerations > [!NOTE]
-> The number of key vaults with private endpoints enabled per subscription is an adjustable limit. The limit shown below is the default limit. If you would like to request a limit increase for your service, please send an email to akv-privatelink@microsoft.com. We will approve these requests on a case by case basis.
+> The number of managed HSMs with private endpoints enabled per subscription is an adjustable limit. The limit shown below is the default limit. If you would like to request a limit increase for your subscription, please create an Azure support ticket. We will approve these requests on a case by case basis.
**Pricing**: For pricing information, see [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
lab-services Class Type React Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/class-type-react-windows.md
For example, if using the [Node.js Interactive Window](/visualstudio/javascript/
.npm install react-jsx ```
-To create your first Node.js with React app in Visual Studio, see [Tutorial: Create a Node.js and React app in Visual Studio](/visualstudio/javascript/tutorial-nodejs-with-react-and-jsx.md?view=vs-2019&preserve-view=true).
+To create your first Node.js with React app in Visual Studio, see [Tutorial: Create a Node.js and React app in Visual Studio](/visualstudio/javascript/tutorial-nodejs-with-react-and-jsx?view=vs-2019&preserve-view=true).
### Install debugger extensions
lighthouse View Manage Customers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/how-to/view-manage-customers.md
Title: View and manage customers and delegated resources in the Azure portal description: As a service provider or enterprise using Azure Lighthouse, you can view all of your delegated resources and subscriptions by going to My customers in the Azure portal. Previously updated : 03/12/2021 Last updated : 06/21/2021
If you then access a service which supports [cross-tenant management experiences
You can also access functionality related to delegated subscriptions or resource groups from within services that support cross-tenant management experiences by selecting the subscription or resource group from within that service.
+> [!TIP]
+> You can also [opt in to the new subscription filtering experience](../../azure-portal/set-preferences.md#opt-into-the-new-subscription-filtering-experience) to make your selections. If you do so, be sure that all directories and subscriptions are selected before you select the **Try it now** link, or else the new experience may not show all of the subscriptions to which you have access. If that happens, you can select **Switch back to the previous view** in the **Subscriptions + filters** pane, then repeat the opt-in process with all directories and subscriptions selected.
+>
+> :::image type="content" source="../media/azure-portal-subscription-filtering-opt-in-delegated.png" alt-text="Screenshot showing the opt-in selections for the new subscription filter settings.":::
+ ## Cloud Solution Provider (Preview) A separate **Cloud Solution Provider (Preview)** section of the **My customers** page shows billing info and resources for your CSP customers who have [signed the Microsoft Customer Agreement (MCA)](/partner-center/confirm-customer-agreement) and are [under the Azure plan](/partner-center/azure-plan-get-started). For more information, see [Get started with your Microsoft Partner Agreement billing account](../../cost-management-billing/understand/mpa-overview.md).
load-balancer Load Balancer Monitor Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-monitor-log.md
Metrics-to-logs export is enabled on a per-resource level. To enable these logs:
For metric export limitations, see the [Limitations](#limitations) section of this article.
-After you enable **AllMetrics** in the diagnostic settings of Standard Load Balancer, if you're using an event hub or Log Analytics workspace, these logs will be populated in the **AzureMonitor** table.
+After you enable **AllMetrics** in the diagnostic settings of Standard Load Balancer, if you're using an event hub or Log Analytics workspace, these logs will be populated in the **AzureMetrics** table.
If you're exporting to storage, connect to your storage account and retrieve the JSON log entries for event and health probe logs. After you download the JSON files, you can convert them to CSV and view them in Excel, Power BI, or any other data visualization tool.
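As a rough sketch of that JSON-to-CSV step in C# (assumptions: the exported file uses the common top-level `records` array, every record has the same fields, and no value needs CSV quoting):

```csharp
using System.IO;
using System.Linq;
using System.Text.Json;

// Flatten a downloaded diagnostic log file into a simple CSV.
using var doc = JsonDocument.Parse(File.ReadAllText("PT1H.json"));
var records = doc.RootElement.GetProperty("records").EnumerateArray().ToList();

using var csv = new StreamWriter("PT1H.csv");
var columns = records.First().EnumerateObject().Select(p => p.Name).ToList();
csv.WriteLine(string.Join(",", columns));

foreach (var record in records)
{
    csv.WriteLine(string.Join(",", columns.Select(c =>
        record.TryGetProperty(c, out var value) ? value.ToString() : "")));
}
```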
The metrics-to-logs export feature for Azure Load Balancer has the following lim
## Next steps * [Review the available metrics for your load balancer](./load-balancer-standard-diagnostics.md)
-* [Create and test queries by following Azure Monitor instructions](../azure-monitor/logs/log-query-overview.md)
+* [Create and test queries by following Azure Monitor instructions](../azure-monitor/logs/log-query-overview.md)
load-balancer Quickstart Load Balancer Standard Internal Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/quickstart-load-balancer-standard-internal-template.md
If you don't have an Azure subscription, create a [free account](https://azure.m
## Review the template
-The template used in this quickstart is from the [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/2-vms-internal-load-balancer).
+The template used in this quickstart is from the [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/2-vms-internal-load-balancer/).
:::code language="json" source="~/quickstart-templates/quickstarts/microsoft.compute/2-vms-internal-load-balancer/azuredeploy.json":::
logic-apps Logic Apps Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-azure-functions.md
Title: Add and call functions from Azure Logic Apps
-description: Call and run custom code in functions made in Azure from automated tasks and workflows in Azure Logic Apps
+ Title: Call Azure Functions from logic app workflows
+description: Run your own code in workflows created with Azure Logic Apps by creating and calling an Azure Function.
ms.suite: integration-- Previously updated : 10/01/2019++ Last updated : 06/14/2021
-# Call functions from Azure Logic Apps
+# Create and run your own code from workflows in Azure Logic Apps by using Azure Functions
-When you want to run code that performs a specific job in your logic apps, you can create your own function by using [Azure Functions](../azure-functions/functions-overview.md). This service helps you create Node.js, C#, and F# functions so you don't have to build a complete app or infrastructure to run code. You can also [call logic apps from inside functions](#call-logic-app). Azure Functions provides serverless computing in the cloud and is useful for performing tasks such as these examples:
+When you want to run code that performs a specific job in your logic app workflow, you can create a function by using [Azure Functions](../azure-functions/functions-overview.md). This service helps you create Node.js, C#, and F# functions so you don't have to build a complete app or infrastructure to run code. You can also [call logic app workflows from inside an Azure function](#call-logic-app). Azure Functions provides serverless computing in the cloud and is useful for performing certain tasks, for example:
* Extend your logic app's behavior with functions in Node.js or C#. * Perform calculations in your logic app workflow.
-* Apply advanced formatting or compute fields in your logic apps.
+* Apply advanced formatting or compute fields in your logic app workflows.
-To run code snippets without using Azure Functions, learn how to [add and run inline code](../logic-apps/logic-apps-add-run-inline-code.md).
+To run code snippets without using Azure Functions, learn how you can [add and run inline code](../logic-apps/logic-apps-add-run-inline-code.md).
> [!NOTE]
-> Integration between Logic Apps and Azure Functions currently doesn't work with Slots enabled.
+> Azure Logic Apps doesn't support using Azure Functions with deployment slots enabled. Although this scenario might sometimes work,
+> this behavior is unpredictable and might result in authorization problems when your workflow tries to call the Azure function.
## Prerequisites * An Azure subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
-* An function app, which is a container for a function that's created in Azure Functions, along with the function you create. If you don't have a function app, [create your function app first](../azure-functions/functions-get-started.md). You can then create your function either outside your logic app in the Azure portal, or [from inside your logic app](#create-function-designer) in the Logic App Designer.
+* A function app, which is a container for a function that's created using Azure Functions, along with the function that you create.
+
+ If you don't have a function app, [create your function app first](../azure-functions/functions-get-started.md). You can then create your function either outside your logic app in the Azure portal, or [from inside your logic app](#create-function-designer) in the workflow designer.
* When working with logic apps, the same requirements apply to function apps and functions whether they are existing or new:
To run code snippets without using Azure Functions, learn how to [add and run in
* Your function uses the **HTTP trigger** template.
- The HTTP trigger template can accept content that has `application/json` type from your logic app. When you add a function to your logic app, the Logic App Designer shows custom functions that are created from this template within your Azure subscription.
  The HTTP trigger template can accept content that has `application/json` type from your logic app; a minimal sketch of such a function appears after this list. When you add a function to your logic app, the workflow designer shows custom functions that are created from this template within your Azure subscription.
* Your function doesn't use custom routes unless you've defined an [OpenAPI definition](../azure-functions/functions-openapi-definition.md) (formerly known as a [Swagger file](https://swagger.io/)).
- * If you have an OpenAPI definition for your function, the Logic Apps Designer gives you a richer experience when your work with function parameters. Before your logic app can find and access functions that have OpenAPI definitions, [set up your function app by following these steps](#function-swagger).
+ * If you have an OpenAPI definition for your function, the workflow designer gives you a richer experience when you work with function parameters. Before your logic app can find and access functions that have OpenAPI definitions, [set up your function app by following these steps](#function-swagger).
* The logic app where you want to add the function, including a [trigger](../logic-apps/logic-apps-overview.md#logic-app-concepts) as the first step in your logic app
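To make the HTTP trigger requirement concrete, the following is a minimal sketch of a function that a logic app could call, written for the in-process .NET model; the function name and payload handling are illustrative assumptions, not part of the original article:

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class ProcessOrder
{
    // HTTP-triggered function created from the HTTP trigger template.
    // A logic app action posts application/json content to this endpoint.
    [FunctionName("ProcessOrder")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        ILogger log)
    {
        string body = await new StreamReader(req.Body).ReadToEndAsync();
        log.LogInformation("Payload from logic app: {body}", body);

        // Whatever is returned here becomes the action's output in the workflow.
        return new OkObjectResult(new { status = "processed" });
    }
}
```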
To run code snippets without using Azure Functions, learn how to [add and run in
## Find functions that have OpenAPI descriptions
-For a richer experience when you work with function parameters in the Logic Apps Designer, [generate an OpenAPI definition](../azure-functions/functions-openapi-definition.md), formerly known as a [Swagger file](https://swagger.io/), for your function. To set up your function app so your logic app can find and use functions that have Swagger descriptions, follow these steps:
+For a richer experience when you work with function parameters in the workflow designer, [generate an OpenAPI definition](../azure-functions/functions-openapi-definition.md), formerly known as a [Swagger file](https://swagger.io/), for your function. To set up your function app so your logic app can find and use functions that have Swagger descriptions, follow these steps:
1. Make sure that your function app is actively running.
Now that you've created your function in Azure, follow the steps to [add functio
## Create functions inside logic apps
-You can create functions directly from your logic app's workflow by using the built-in Azure Functions action in the Logic App Designer, but you can use this method only for functions written in JavaScript. For other languages, you can create functions through the Azure Functions experience in the Azure portal. For more information, see [Create your first function in the Azure portal](../azure-functions/functions-get-started.md).
+You can create functions directly from your logic app's workflow by using the built-in Azure Functions action in the workflow designer, but you can use this method only for functions written in JavaScript. For other languages, you can create functions through the Azure Functions experience in the Azure portal. For more information, see [Create your first function in the Azure portal](../azure-functions/functions-get-started.md).
However, before you can create your function in Azure, you must already have a function app, which is a container for your functions. If you don't have a function app, create that function app first. See [Create your first function in the Azure portal](../azure-functions/functions-get-started.md).
-1. In the [Azure portal](https://portal.azure.com), open your logic app in the Logic App Designer.
+1. In the [Azure portal](https://portal.azure.com), open your logic app in the designer.
1. To create and add your function, follow the step that applies to your scenario:
However, before you can create your function in Azure, you must already have a f
* Between existing steps in your logic app's workflow, move your mouse over the arrow, select the plus (+) sign, and then select **Add an action**.
-1. In the search box, enter "azure functions" as your filter. From the actions list, select the **Choose an Azure function** action, for example:
+1. In the search box, enter `azure functions`. From the actions list, select the action named **Choose an Azure function**, for example:
![Find functions in the Azure portal.](./media/logic-apps-azure-functions/find-azure-functions-action.png)
However, before you can create your function in Azure, you must already have a f
## Add existing functions to logic apps
-To call existing functions from your logic apps, you can add functions like any other action in the Logic App Designer.
+To call existing functions from your logic apps, you can add functions like any other action in the workflow designer.
-1. In the [Azure portal](https://portal.azure.com), open your logic app in the Logic App Designer.
+1. In the [Azure portal](https://portal.azure.com), open your logic app in the designer.
1. Under the step where you want to add the function, select **New step**.
-1. Under **Choose an action**, in the search box, enter "azure functions" as your filter. From the actions list, select the **Choose an Azure function** action.
+1. Under **Choose an action**, in the search box, enter `azure functions`. From the actions list, select the action named **Choose an Azure function**, for example:
![Find a function in Azure.](./media/logic-apps-azure-functions/find-azure-functions-action.png)
When you want to trigger a logic app from inside a function, the logic app must
## Enable authentication for functions
-To easily authenticate access to other resources that are protected by Azure Active Directory (Azure AD) without having to sign in and provide credentials or secrets, your logic app can use a [managed identity](../active-directory/managed-identities-azure-resources/overview.md) (formerly known as Managed Service Identity or MSI). Azure manages this identity for you and helps secure your credentials because you don't have to provide or rotate secrets. Learn more about [Azure services that support managed identities for Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication).
+Your logic app can use a [managed identity](../active-directory/managed-identities-azure-resources/overview.md) (formerly known as Managed Service Identity or MSI) if you want to easily authenticate access to resources protected by Azure Active Directory (Azure AD) without having to sign in and provide credentials or secrets. Azure manages this identity for you and helps secure your credentials because you don't have to provide or rotate secrets. Learn more about [Azure services that support managed identities for Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication).
If you set up your logic app to use the system-assigned identity or a manually created, user-assigned identity, the function in your logic app can also use that same identity for authentication. For more information about authentication support for functions in logic apps, see [Add authentication to outbound calls](../logic-apps/logic-apps-securing-a-logic-app.md#add-authentication-outbound).
Before you start this task, find and set aside these values for later use:
* To generate this object ID, [enable your logic app's system-assigned identity](../logic-apps/create-managed-service-identity.md#azure-portal-system-logic-app).
- * Otherwise, to find this object ID, open your logic app in the Logic App Designer. On your logic app menu, under **Settings**, select **Identity** > **System assigned**.
+ * Otherwise, to find this object ID, open your logic app in the designer. On your logic app menu, under **Settings**, select **Identity** > **System assigned**.
* The directory ID for your tenant in Azure Active Directory (Azure AD)
- To get your tenant's directory ID, you can run the [`Get-AzureAccount`](/powershell/module/servicemanagement/azure.service/get-azureaccount) Powershell command. Or, in the Azure portal, follow these steps:
+ To get your tenant's directory ID, you can run the [`Get-AzureAccount`](/powershell/module/servicemanagement/azure.service/get-azureaccount) PowerShell command. Or, in the Azure portal, follow these steps:
1. In the [Azure portal](https://portal.azure.com), find and select your function app.
Now you're ready to set up Azure AD authentication for your function app.
1. When you're done, select **OK**.
-1. Return to the Logic App Designer and follow the [steps to authenticate access with the managed identity](../logic-apps/create-managed-service-identity.md#authenticate-access-with-identity).
+1. Return to the designer and follow the [steps to authenticate access with the managed identity](../logic-apps/create-managed-service-identity.md#authenticate-access-with-identity).
## Next steps
-* Learn about [Logic Apps connectors](../connectors/apis-list.md)
+* Learn about [Logic Apps connectors](../connectors/apis-list.md)
logic-apps Logic Apps Enterprise Integration As2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-enterprise-integration-as2.md
This article shows how to add the AS2 encoding and decoding actions to an existi
## Sample
-To try deploying a fully operational logic app and sample AS2 scenario, see the [AS2 logic app template and scenario](https://azure.microsoft.com/resources/templates/logic-app-as2-send-receive).
+To try deploying a fully operational logic app and sample AS2 scenario, see the [AS2 logic app template and scenario](https://azure.microsoft.com/resources/templates/logic-app-as2-send-receive/).
## Connector reference
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/azure-machine-learning-release-notes.md
In this article, learn about Azure Machine Learning releases. For the full SDK
__RSS feed__: Get notified when this page is updated by copying and pasting the following URL into your feed reader: `https://docs.microsoft.com/api/search/rss?search=%22Azure+machine+learning+release+notes%22&locale=en-us`
-## 2021-05-25
-
-### Announcing the 2.0 CLI (preview) for Azure Machine Learning
+## 2021-06-21
-The `ml` extension to the Azure CLI is the next-generation interface for Azure Machine Learning. It enables you to train and deploy models from the command line, with features that accelerate scaling data science up and out while tracking the model lifecycle. [Install and get started](how-to-configure-cli.md).
+### Azure Machine Learning SDK for Python v1.31.0
++ **Bug fixes and improvements**
+ + **azureml-core**
+ + Improved documentation for platform property on Environment class
+ + Changed default AML Compute node scale down time from 120 seconds to 1800 seconds
+ + Updated default troubleshooting link displayed on the portal for troubleshooting failed runs to: https://aka.ms/azureml-run-troubleshooting
+ + **azureml-automl-runtime**
+ + Data Cleaning: Samples with target values in [None, "", "nan", np.nan] will be dropped prior to featurization and/or model training
+ + **azureml-interpret**
+ + Prevent flush task queue error on remote AzureML runs that use ExplanationClient by increasing timeout
+ + **azureml-pipeline-core**
+ + Add jar parameter to synapse step
+ + **azureml-train-automl-runtime**
+ + Fix high cardinality guardrails to be more aligned with docs
## 2021-06-07
The `ml` extension to the Azure CLI is the next-generation interface for Azure M
+ Support for custom defined quantiles during MM inference
+ Support for forecast_quantiles during batch inference.
+## 2021-05-25
+
+### Announcing the 2.0 CLI (preview) for Azure Machine Learning
+
+The `ml` extension to the Azure CLI is the next-generation interface for Azure Machine Learning. It enables you to train and deploy models from the command line, with features that accelerate scaling data science up and out while tracking the model lifecycle. [Install and get started](how-to-configure-cli.md).
### Azure Machine Learning SDK for Python v1.29.0

+ **Bug fixes and improvements**
machine-learning Vm Do Ten Things https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/data-science-virtual-machine/vm-do-ten-things.md
In this article, you'll learn how to use your DSVM to perform data science tasks
## Use Jupyter Notebooks
-The Jupyter Notebook provides a browser-based IDE for data exploration and modeling. You can use Python 2, Python 3, or R (both open source and Microsoft R Server) in a Jupyter Notebook.
+The Jupyter Notebook provides a browser-based IDE for data exploration and modeling. You can use Python 2, Python 3, or R in a Jupyter Notebook.
To start the Jupyter Notebook, select the **Jupyter Notebook** icon on the **Start** menu or on the desktop. In the DSVM command prompt, you can also run the command ```jupyter notebook``` from the directory where you have existing notebooks or where you want to create new notebooks.
After you start Jupyter, navigate to the `/notebooks` directory for example note
When you're in the notebook, you can explore your data, build the model, and test the model by using your choice of libraries.

## Explore data and develop models with Microsoft Machine Learning Server
+
+> [!NOTE]
+> Support for Machine Learning Server Standalone will end on July 1, 2021. We will remove it from the DSVM images after
+> June 30. Existing deployments will continue to have access to the software, but it will no longer be
+> supported after July 1, 2021.
+ You can use languages like R and Python to do your data analytics right on the DSVM. For R, you can use an IDE like RStudio that can be found on the start menu or on the desktop. Or you can use R Tools for Visual Studio. Microsoft has provided additional libraries on top of the open-source CRAN R to enable scalable analytics and the ability to analyze data larger than the memory size allowed in parallel chunked analysis.
machine-learning How To Attach Arc Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-attach-arc-kubernetes.md
Azure Arc enabled machine learning supports the following training scenarios:
## Prerequisites
-* An Azure subscription. If you don't have an Azure subscription [create a free account](https://aka.ms/AMLFree) before you begin.
-* Azure Arc enabled Kubernetes cluster. For more information, see the [Connect an existing Kubernetes cluster to Azure Arc quickstart guide](/azure-arc/kubernetes/quickstart-connect-cluster.md).
-* Fulfill [Azure Arc enabled Kubernetes cluster extensions prerequisites](/azure-arc/kubernetes/extensions#prerequisites).
+* An Azure subscription. If you don't have an Azure subscription [create a free account](https://azure.microsoft.com/free) before you begin.
+* Azure Arc enabled Kubernetes cluster. For more information, see the [Connect an existing Kubernetes cluster to Azure Arc quickstart guide](../azure-arc/kubernetes/quickstart-connect-cluster.md).
+* Fulfill [Azure Arc enabled Kubernetes cluster extensions prerequisites](../azure-arc/kubernetes/extensions.md#prerequisites).
* Azure CLI version >= 2.24.0
* Azure CLI k8s-extension extension version >= 0.4.3
* An Azure Machine Learning workspace. [Create a workspace](how-to-manage-workspace.md?tabs=python) before you begin if you don't have one already.
amlarc_compute_name = os.environ.get("AML_COMPUTE_CLUSTER_NAME", "amlarc-compute
resource_id = "/subscriptions/123/resourceGroups/rg/providers/Microsoft.Kubernetes/connectedClusters/amlarc-cluster"

if amlarc_compute_name in ws.compute_targets:
- compute_target = ws.compute_targets[amlarc_compute_name]
- if compute_target and type(compute_target) is KubernetesCompute:
+ amlarc_compute = ws.compute_targets[amlarc_compute_name]
+ if amlarc_compute and type(amlarc_compute) is KubernetesCompute:
print("found compute target: " + amlarc_compute_name) else: print("creating new compute target...")
machine-learning How To Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-private-link.md
Azure Private Link enables you to connect to your workspace using a private endp
[!INCLUDE [cli-version-info](../../includes/machine-learning-cli-version-1-only.md)]
-* If you plan on using a private endpoint enabled workspace with a customer-managed key, you must request this feature using a support ticket. For more information, see [Manage and increase quotas](how-to-manage-quotas.md#private-endpoint-and-private-dns-quota-increases).
- * You must have an existing virtual network to create the private endpoint in. You must also [disable network policies for private endpoints](../private-link/disable-private-endpoint-network-policy.md) before adding the private endpoint.

## Limitations
machine-learning How To Create Attach Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-attach-kubernetes.md
Result
1.16.13
```
-If you'd like to **programmatically check the available versions**, use the [Container Service Client - List Orchestrators](/rest/api/container-service/container%20service%20client/listorchestrators) REST API. To find the available versions, look at the entries where `orchestratorType` is `Kubernetes`. The associated `orchestrationVersion` entries contain the available versions that can be **attached** to your workspace.
+If you'd like to **programmatically check the available versions**, use the [Container Service Client - List Orchestrators](/rest/api/container-service/container-service-client/list-orchestrators) REST API. To find the available versions, look at the entries where `orchestratorType` is `Kubernetes`. The associated `orchestratorVersion` entries contain the available versions that can be **attached** to your workspace.
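If you want to script that check, a rough sketch using the REST API described above might look like the following — the subscription ID, region, access token, and `api-version` value are placeholders, and the response shape is taken from the linked reference:

```python
import requests

# Placeholders: fill in your subscription ID, region, and an ARM access token.
url = (
    "https://management.azure.com/subscriptions/<subscription-id>"
    "/providers/Microsoft.ContainerService/locations/<region>/orchestrators"
)
params = {"api-version": "2019-08-01", "resource-type": "managedClusters"}
headers = {"Authorization": "Bearer <access-token>"}

resp = requests.get(url, params=params, headers=headers)
resp.raise_for_status()

# Keep only Kubernetes entries; the entry marked default is used when creating.
for entry in resp.json()["properties"]["orchestrators"]:
    if entry["orchestratorType"] == "Kubernetes":
        print(entry["orchestratorVersion"], "(default)" if entry.get("default") else "")
```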
To find the default version that is used when **creating** a cluster through Azure Machine Learning, find the entry where `orchestratorType` is `Kubernetes` and `default` is `true`. The associated `orchestratorVersion` value is the default version. The following JSON snippet shows an example entry:
az aks get-credentials -g <rg> -n <aks cluster name>
* [Use Azure RBAC for Kubernetes authorization](../aks/manage-azure-rbac.md)
* [How and where to deploy a model](how-to-deploy-and-where.md)
-* [Deploy a model to an Azure Kubernetes Service cluster](how-to-deploy-azure-kubernetes-service.md)
+* [Deploy a model to an Azure Kubernetes Service cluster](how-to-deploy-azure-kubernetes-service.md)
machine-learning How To Create Workspace Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-workspace-template.md
For more information, see [Deploy an application with Azure Resource Manager tem
* To use a template from a CLI, you need either [Azure PowerShell](/powershell/azure/) or the [Azure CLI](/cli/azure/install-azure-cli).
-* Some scenarios require you to open a support ticket. For example, using a Private Link enabled workspace with a customer-managed key. For more information, see [Manage and increase quotas](how-to-manage-quotas.md#private-endpoint-and-private-dns-quota-increases).
-
## Limitations

[!INCLUDE [register-namespace](../../includes/machine-learning-register-namespace.md)]
machine-learning How To Manage Environments In Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-manage-environments-in-studio.md
To create an environment:
1. Select the **Create** button.

Create an environment by specifying one of the following (an SDK sketch follows the list):
-* Pip requirements [file](https://pip.pypa.io/stable/cli/pip_install/#requirements-file-format)
-* Conda yaml [file](https://conda.io/projects/conda/latest/user-guide/tasks/manage-environments.html#sharing-an-environment)
+* Pip requirements [file](https://pip.pypa.io/en/stable/cli/pip_install)
+* Conda yaml [file](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html)
* Docker [image](https://hub.docker.com/search?q=&type=image)
* [Dockerfile](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/)
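If you prefer to script these options instead of using the studio, the Python SDK exposes equivalent constructors. A minimal sketch — the environment names, file paths, and base image tag are placeholders:

```python
from azureml.core import Environment

# From a pip requirements file
pip_env = Environment.from_pip_requirements(name="my-pip-env", file_path="requirements.txt")

# From a conda specification file
conda_env = Environment.from_conda_specification(name="my-conda-env", file_path="environment.yml")

# From an existing Docker image
docker_env = Environment(name="my-docker-env")
docker_env.docker.base_image = "mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04"
```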
machine-learning How To Manage Quotas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-manage-quotas.md
Previously updated : 05/25/2021 Last updated : 06/14/2021
Azure uses limits and quotas to prevent budget overruns due to fraud, and to hon
> + Creating workspace-level quotas.
> + Viewing your quotas and limits.
> + Requesting quota increases.
-> + Private endpoint and DNS quotas.
Along with managing quotas, you can learn how to [plan and manage costs for Azure Machine Learning](concept-plan-manage-cost.md) or learn about the [service limits in Azure Machine Learning](resource-limits-quotas-capacity.md).
When you're requesting a quota increase, select the service that you have in min
> [!NOTE]
> [Free trial subscriptions](https://azure.microsoft.com/offers/ms-azr-0044p) are not eligible for limit or quota increases. If you have a free trial subscription, you can upgrade to a [pay-as-you-go](https://azure.microsoft.com/offers/ms-azr-0003p/) subscription. For more information, see [Upgrade Azure free trial to pay-as-you-go](../cost-management-billing/manage/upgrade-azure-subscription.md) and [Azure free account FAQ](https://azure.microsoft.com/free/free-account-faq).
-## Private endpoint and private DNS quota increases
-
-There are limits on the number of private endpoints and private DNS zones that you can create in a subscription.
-
-Azure Machine Learning creates resources in your (customer) subscription, but some scenarios create resources in a Microsoft-owned subscription.
-
- In the following scenarios, you might need to request a quota allowance in the Microsoft-owned subscription:
-
-* Azure Private Link enabled workspace with a customer-managed key (CMK)
-* Attaching a Private Link enabled Azure Kubernetes Service cluster to your workspace
-
-To request an allowance for these scenarios, use the following steps:
-
-1. [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md#create-a-support-request) and select the following options in the __Basics__ section:
-
- | Field | Selection |
- | -- | -- |
- | Issue type | **Technical** |
- | Service | **My services**. Then select __Machine Learning__ in the drop-down list. |
- | Problem type | **Workspace Configuration and Security** |
- | Problem subtype | **Private Endpoint and Private DNS Zone allowance request** |
-
-2. In the __Details__ section, use the __Description__ field to provide the Azure region and the scenario that you plan to use. If you need to request quota increases for multiple subscriptions, list the subscription IDs in this field.
-
-3. Select __Create__ to create the request.
--
## Next steps

+ [Plan and manage costs for Azure Machine Learning](concept-plan-manage-cost.md)
machine-learning How To Manage Rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-manage-rest.md
providers/Microsoft.MachineLearningServices/workspaces/{your-workspace-name}/com
-H "Authorization:Bearer {your-access-token}" ```
-To create or overwrite a named compute resource, you'll use a PUT request. In the following, in addition to the now-familiar substitutions of `your-subscription-id`, `your-resource-group`, `your-workspace-name`, and `your-access-token`, substitute `your-compute-name`, and values for `location`, `vmSize`, `vmPriority`, `scaleSettings`, `adminUserName`, and `adminUserPassword`. As specified in the reference at [Machine Learning Compute - Create Or Update SDK Reference](/rest/api/azureml/workspacesandcomputes/machinelearningcompute/createorupdate), the following command creates a dedicated, single-node Standard_D1 (a basic CPU compute resource) that will scale down after 30 minutes:
+To create or overwrite a named compute resource, you'll use a PUT request. In the following, in addition to the now-familiar substitutions of `your-subscription-id`, `your-resource-group`, `your-workspace-name`, and `your-access-token`, substitute `your-compute-name`, and values for `location`, `vmSize`, `vmPriority`, `scaleSettings`, `adminUserName`, and `adminUserPassword`. As specified in the reference at [Machine Learning Compute - Create Or Update SDK Reference](/rest/api/azureml/workspaces/createorupdate), the following command creates a dedicated, single-node Standard_D1 (a basic CPU compute resource) that will scale down after 30 minutes:
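As a companion to the curl command that follows, here is a rough Python (`requests`) sketch of the same PUT call — the JSON body shape and `api-version` are assumptions based on the linked reference, not verbatim from this article:

```python
import requests

# Placeholders follow the article's substitution names.
url = (
    "https://management.azure.com/subscriptions/<your-subscription-id>"
    "/resourceGroups/<your-resource-group>/providers/Microsoft.MachineLearningServices"
    "/workspaces/<your-workspace-name>/computes/<your-compute-name>"
)
body = {
    "location": "eastus",
    "properties": {
        "computeType": "AmlCompute",
        "properties": {
            "vmSize": "STANDARD_D1",
            "vmPriority": "Dedicated",
            # Scale to zero nodes after 30 minutes of idle time (ISO 8601 duration).
            "scaleSettings": {
                "minNodeCount": 0,
                "maxNodeCount": 1,
                "nodeIdleTimeBeforeScaleDown": "PT30M",
            },
        },
    },
}
resp = requests.put(
    url,
    params={"api-version": "2019-11-01"},
    headers={"Authorization": "Bearer <your-access-token>"},
    json=body,
)
print(resp.status_code)
```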
```bash
curl -X PUT \
machine-learning How To Secure Inferencing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-inferencing-vnet.md
Previously updated : 05/14/2021 Last updated : 06/14/2021
By default, AKS clusters have a control plane, or API server, with public IP add
After you create the private AKS cluster, [attach the cluster to the virtual network](how-to-create-attach-kubernetes.md) to use with Azure Machine Learning.
-> [!IMPORTANT]
-> Before using a private link enabled AKS cluster with Azure Machine Learning, you must open a support incident to enable this functionality. For more information, see [Manage and increase quotas](how-to-manage-quotas.md#private-endpoint-and-private-dns-quota-increases).
-
### Internal AKS load balancer

By default, AKS deployments use a [public load balancer](../aks/load-balancer-standard.md). In this section, you learn how to configure AKS to use an internal load balancer. An internal (or private) load balancer is used where only private IPs are allowed as frontend. Internal load balancers are used to load balance traffic inside a virtual network
machine-learning How To Train Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-cli.md
For instance, look at the `jobs/train/lightgbm/iris` project directory in the ex
```tree
.
-├── environment.yml
├── job-sweep.yml
├── job.yml
└── src
    └── main.py
```
-This directory contains two job files, a conda environment file, and a source code subdirectory `src`. While this example only has a single file under `src`, the entire subdirectory is recursively uploaded and available for use in the job.
+This directory contains two job files and a source code subdirectory `src`. While this example only has a single file under `src`, the entire subdirectory is recursively uploaded and available for use in the job.
The basic command job is configured via the `job.yml`:
While running this job locally is slower than running `python main.py` in a loca
> [Docker](https://docker.io) needs to be installed and running locally. Python needs to be installed in the job's environment. For local runs which use `inputs`, the Python package `azureml-dataprep` needs to be installed in the job's environment.

> [!TIP]
-> This will take a few minutes to pull the base Docker image and create the conda environment on top of it. Use prebuilt Docker images to avoid the image build time.
+> This will take a few minutes to pull the base Docker image. Use prebuilt Docker images to avoid the image build time.
## Create compute
machine-learning Resource Curated Environments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/resource-curated-environments.md
Last updated 4/2/2021
This article lists the curated environments in Azure Machine Learning. Curated environments are provided by Azure Machine Learning and are available in your workspace by default. They are backed by cached Docker images that use the latest version of the Azure Machine Learning SDK, reducing the run preparation cost and allowing for faster deployment time. Use these environments to quickly get started with various machine learning frameworks.

> [!NOTE]
-> This list is updated as of April 2021. Use the Python SDK or CLI to get the most updated list of environments and their dependencies. For more information, see the [environments article](./how-to-use-environments.md#use-a-curated-environment). Following the release of this new set, previous curated environments will be hidden but can still be used.
+> This list is updated as of April 2021. Use the Python [SDK](https://docs.microsoft.com/azure/machine-learning/how-to-use-environments), [CLI](https://docs.microsoft.com/cli/azure/ml/environment?view=azure-cli-latest#az_ml_environment_list), or Azure Machine Learning [studio](how-to-manage-environments-in-studio.md) to get the most updated list of environments and their dependencies. For more information, see the [environments article](how-to-use-environments.md#use-a-curated-environment). Following the release of this new set, previous curated environments will be hidden but can still be used.
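For example, a minimal SDK sketch that lists the curated environments registered in a workspace (assumes a workspace config file is present):

```python
from azureml.core import Environment, Workspace

ws = Workspace.from_config()

# Curated environment names are prefixed with "AzureML-".
for name in Environment.list(workspace=ws):
    if name.startswith("AzureML-"):
        print(name)
```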
+
## PyTorch
- AzureML-pytorch-1.7-ubuntu18.04-py37-cuda11-gpu
- - An environment for deep learning with PyTorch containing the Azure ML SDK and additional python packages.
- - PyTorch version: 1.7
- - Python version: 3.7
- - Base image: mcr.microsoft.com/azureml/openmpi4.1.0-cuda11.0.3-cudnn8-ubuntu18.04
- - CUDA version: 11.0.3
- - OpenMPI version: 4.1.0
- - Ubuntu version: 18.04
+ - An environment for deep learning with PyTorch containing the AzureML Python SDK and additional python packages.
+ - The following Dockerfile can be customized for your personal workflows:
+
+ ```dockerfile
+ FROM mcr.microsoft.com/azureml/openmpi4.1.0-cuda11.0.3-cudnn8-ubuntu18.04:20210615.v1
+
+ ENV AZUREML_CONDA_ENVIRONMENT_PATH /azureml-envs/pytorch-1.7
+
+ # Create conda environment
+ RUN conda create -p $AZUREML_CONDA_ENVIRONMENT_PATH \
+ python=3.7 \
+ pip=20.2.4 \
+ pytorch=1.7.1 \
+ torchvision=0.8.2 \
+ torchaudio=0.7.2 \
+ cudatoolkit=11.0 \
+ nvidia-apex=0.1.0 \
+ -c anaconda -c pytorch -c conda-forge
+
+ # Prepend path to AzureML conda environment
+ ENV PATH $AZUREML_CONDA_ENVIRONMENT_PATH/bin:$PATH
+
+ # Install pip dependencies
+ RUN HOROVOD_WITH_PYTORCH=1 \
+ pip install 'matplotlib>=3.3,<3.4' \
+ 'psutil>=5.8,<5.9' \
+ 'tqdm>=4.59,<4.60' \
+ 'pandas>=1.1,<1.2' \
+ 'scipy>=1.5,<1.6' \
+ 'numpy>=1.10,<1.20' \
+ 'azureml-core==1.30.0' \
+ 'azureml-defaults==1.30.0' \
+ 'azureml-mlflow==1.30.0' \
+ 'azureml-telemetry==1.30.0' \
+ 'tensorboard==2.4.0' \
+ 'tensorflow-gpu==2.4.1' \
+ 'onnxruntime-gpu>=1.7,<1.8' \
+ 'horovod[pytorch]==0.21.3' \
+ 'future==0.17.1'
+
+ # This is needed for mpi to locate libpython
+ ENV LD_LIBRARY_PATH $AZUREML_CONDA_ENVIRONMENT_PATH/lib:$LD_LIBRARY_PATH
+ ```
+
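One way to use a customized copy of the Dockerfile above is to register it as a new environment. A minimal sketch, assuming a recent `azureml-core` release that includes `Environment.from_dockerfile` (the environment name and file path are placeholders):

```python
from azureml.core import Environment, Workspace

ws = Workspace.from_config()

# Build an environment from a local, customized copy of the curated Dockerfile.
env = Environment.from_dockerfile(name="my-pytorch-1.7", dockerfile="./Dockerfile")
env.register(workspace=ws)
```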
+## LightGBM
+- AzureML-lightgbm-3.2-ubuntu18.04-py37-cpu
+ - An environment for machine learning with Scikit-learn, LightGBM, XGBoost, Dask containing the AzureML Python SDK and additional packages.
+ - The following Dockerfile can be customized for your personal workflows:
+
+ ```dockerfile
+ FROM mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210615.v1
+
+ ENV AZUREML_CONDA_ENVIRONMENT_PATH /azureml-envs/lightgbm
+
+ # Create conda environment
+ RUN conda create -p $AZUREML_CONDA_ENVIRONMENT_PATH \
+ python=3.7 pip=20.2.4
+
+ # Prepend path to AzureML conda environment
+ ENV PATH $AZUREML_CONDA_ENVIRONMENT_PATH/bin:$PATH
+
+ # Install pip dependencies
+ RUN HOROVOD_WITH_TENSORFLOW=1 \
+ pip install 'matplotlib>=3.3,<3.4' \
+ 'psutil>=5.8,<5.9' \
+ 'tqdm>=4.59,<4.60' \
+ 'pandas>=1.1,<1.2' \
+ 'numpy>=1.10,<1.20' \
+ 'scipy~=1.5.0' \
+ 'scikit-learn~=0.24.1' \
+ 'xgboost~=1.4.0' \
+ 'lightgbm~=3.2.0' \
+ 'dask~=2021.6.0' \
+ 'distributed~=2021.6.0' \
+ 'dask-ml~=1.9.0' \
+ 'adlfs~=0.7.0' \
+ 'azureml-core==1.30.0' \
+ 'azureml-defaults==1.30.0' \
+ 'azureml-mlflow==1.30.0' \
+ 'azureml-telemetry==1.30.0'
+
+ # This is needed for mpi to locate libpython
+ ENV LD_LIBRARY_PATH $AZUREML_CONDA_ENVIRONMENT_PATH/lib:$LD_LIBRARY_PATH
+ ```
## Sklearn
- AzureML-sklearn-0.24-ubuntu18.04-py37-cuda11-gpu
- - An environment for tasks such as regression, clustering, and classification with Scikit-learn. Contains the Azure ML SDK and additional python packages.
- - Scikit-learn version: 24.1
- - Python version: 3.7
- - Base image: mcr.microsoft.com/azureml/openmpi4.1.0-cuda11.0.3-cudnn8-ubuntu18.04
- - CUDA version: 11.0.3
- - OpenMPI version: 4.1.0
- - Ubuntu version: 18.04
+ - An environment for tasks such as regression, clustering, and classification with Scikit-learn. Contains the AzureML Python SDK and additional python packages.
+ - The following Dockerfile can be customized for your personal workflows:
+
+ ```dockerfile
+ FROM mcr.microsoft.com/azureml/openmpi4.1.0-cuda11.0.3-cudnn8-ubuntu18.04:20210615.v1
+
+ ENV AZUREML_CONDA_ENVIRONMENT_PATH /azureml-envs/sklearn-0.24.1
+
+ # Create conda environment
+ RUN conda create -p $AZUREML_CONDA_ENVIRONMENT_PATH \
+ python=3.7 pip=20.2.4
+
+ # Prepend path to AzureML conda environment
+ ENV PATH $AZUREML_CONDA_ENVIRONMENT_PATH/bin:$PATH
+
+ # Install pip dependencies
+ RUN pip install 'matplotlib>=3.3,<3.4' \
+ 'psutil>=5.8,<5.9' \
+ 'tqdm>=4.59,<4.60' \
+ 'pandas>=1.1,<1.2' \
+ 'scipy>=1.5,<1.6' \
+ 'numpy>=1.10,<1.20' \
+ 'azureml-core==1.30.0' \
+ 'azureml-defaults==1.30.0' \
+ 'azureml-mlflow==1.30.0' \
+ 'azureml-telemetry==1.30.0' \
+ 'scikit-learn==0.24.1'
+
+ # This is needed for mpi to locate libpython
+ ENV LD_LIBRARY_PATH $AZUREML_CONDA_ENVIRONMENT_PATH/lib:$LD_LIBRARY_PATH
+ ```
## TensorFlow
- AzureML-tensorflow-2.4-ubuntu18.04-py37-cuda11-gpu
- - An environment for deep learning with Tensorflow containing the Azure ML SDK and additional python packages.
- - Tensorflow version: 2.4
- - Python version: 3.7
- - Base image: mcr.microsoft.com/azureml/openmpi4.1.0-cuda11.0.3-cudnn8-ubuntu18.04
- - CUDA version: 11.0.3
- - OpenMPI version: 4.1.0
- - Ubuntu version: 18.04
+ - An environment for deep learning with Tensorflow containing the AzureML Python SDK and additional python packages.
+ - The following Dockerfile can be customized for your personal workflows:
+
+ ```dockerfile
+ FROM mcr.microsoft.com/azureml/openmpi4.1.0-cuda11.0.3-cudnn8-ubuntu18.04:20210615.v1
+
+ ENV AZUREML_CONDA_ENVIRONMENT_PATH /azureml-envs/tensorflow-2.4
+
+ # Create conda environment
+ RUN conda create -p $AZUREML_CONDA_ENVIRONMENT_PATH \
+ python=3.7 pip=20.2.4
+
+ # Prepend path to AzureML conda environment
+ ENV PATH $AZUREML_CONDA_ENVIRONMENT_PATH/bin:$PATH
+
+ # Install pip dependencies
+ RUN HOROVOD_WITH_TENSORFLOW=1 \
+ pip install 'matplotlib>=3.3,<3.4' \
+ 'psutil>=5.8,<5.9' \
+ 'tqdm>=4.59,<4.60' \
+ 'pandas>=1.1,<1.2' \
+ 'scipy>=1.5,<1.6' \
+ 'numpy>=1.10,<1.20' \
+ 'azureml-core==1.30.0' \
+ 'azureml-defaults==1.30.0' \
+ 'azureml-mlflow==1.30.0' \
+ 'azureml-telemetry==1.30.0' \
+ 'tensorboard==2.4.0' \
+ 'tensorflow-gpu==2.4.1' \
+ 'onnxruntime-gpu>=1.7,<1.8' \
+ 'horovod[tensorflow-gpu]==0.21.3'
+
+ # This is needed for mpi to locate libpython
+ ENV LD_LIBRARY_PATH $AZUREML_CONDA_ENVIRONMENT_PATH/lib:$LD_LIBRARY_PATH
+ ```
## Inference only curated environments and prebuilt docker images

- For inference-only curated environments and the MCR paths for prebuilt Docker images, see [prebuilt Docker images for inference](concept-prebuilt-docker-images-inference.md#list-of-prebuilt-docker-images-for-inference).
machine-learning Execute Data Science Tasks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/team-data-science-process/execute-data-science-tasks.md
After multiple models have been built, you usually need to have a system for reg
1. [Azure Machine Learning - model management service](../index.yml)
2. [ModelDB from MIT](https://people.csail.mit.edu/mvartak/papers/modeldb-hilda.pdf)
3. [SQL-server as a model management system](https://blogs.technet.microsoft.com/dataplatforminsider/2016/10/17/sql-server-as-a-machine-learning-model-management-system/)
-4. [Microsoft Machine Learning Server](/sql/advanced-analytics/r/r-server-standalone)
## 3. <a name='Deployment-3'></a> Deployment
There are various approaches and platforms to put models into production. Here a
- [Model deployment in Azure Machine Learning](../how-to-deploy-and-where.md)
- [Deployment of a model in SQL-server](/sql/advanced-analytics/tutorials/sqldev-py6-operationalize-the-model)
-- [Microsoft Machine Learning Server](/sql/advanced-analytics/r/r-server-standalone)

> [!NOTE]
> Prior to deployment, one has to ensure the latency of model scoring is low enough to use in production.
machine-learning Tutorial Create Secure Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-create-secure-workspace.md
+
+ Title: Create a secure workspace
+
+description: Create an Azure Machine Learning workspace and required Azure services inside a secure virtual network.
++++++ Last updated : 06/21/2021+++
+# How to create a secure workspace
+
+In this article, learn how to create and connect to a secure Azure Machine Learning workspace. A secure workspace uses Azure Virtual Network to create a security boundary around resources used by Azure Machine Learning.
+
+In this tutorial, you accomplish the following tasks:
+
+> [!div class="checklist"]
+> * Create an Azure Virtual Network (VNet) to __secure communications between services in the virtual network__.
+> * Create a Network Security Group (NSG) to __configure what network traffic is allowed into and out of the VNet__.
+> * Create an Azure Storage Account (blob and file) behind the VNet. This service is used as __default storage for the workspace__.
+> * Create an Azure Key Vault behind the VNet. This service is used to __store secrets used by the workspace__. For example, the security information needed to access the storage account.
+> * Create an Azure Container Registry (ACR). This service is used as a repository for Docker images. __Docker images provide the compute environments needed when training a machine learning model or deploying a trained model as an endpoint__.
+> * Create an Azure Machine Learning workspace.
+> * Create a jump box. A jump box is an Azure Virtual Machine that is behind the VNet. Since the VNet restricts access from the public internet, __the jump box is used as a way to connect to resources behind the VNet__.
+> * Configure Azure Machine Learning studio to work behind a VNet. The studio provides a __web interface for Azure Machine Learning__.
+> * Create an Azure Machine Learning compute cluster. A compute cluster is used when __training machine learning models in the cloud__. In configurations where Azure Container Registry is behind the VNet, it is also used to build Docker images.
+> * Connect to the jump box and use the Azure Machine Learning studio.
+
+## Prerequisites
+
+* Familiarity with Azure Virtual Networks and IP networking
+* While most of the steps in this article use the Azure portal or the Azure Machine Learning studio, some steps use the Azure CLI extension for Machine Learning.
+
+## Create a virtual network
+
+To create a virtual network, use the following steps:
+
+1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Virtual Network__ in the search field. Select the __Virtual Network__ entry, and then select __Create__.
++
+ :::image type="content" source="./media/tutorial-create-secure-workspace/create-resource-search-vnet.png" alt-text="The create resource UI search":::
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/create-resource-vnet.png" alt-text="Virtual network create":::
+
+1. From the __Basics__ tab, select the Azure __subscription__ to use for this resource and then select or create a new __resource group__. Under __Instance details__, enter a friendly __name__ for your virtual network and select the __region__ to create it in.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/create-vnet-basics.png" alt-text="Image of the basic virtual network config":::
+
+1. Select the __IP Addresses__ tab. The default settings should be similar to the following image:
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/create-vnet-ip-address-default.png" alt-text="Default IP Address screen":::
+
+ Use the following steps to configure the IP address and configure a subnet for training and scoring resources:
+
+ > [!TIP]
+ > While you can use a single subnet for all Azure ML resources, the steps in this article show how to create two subnets to separate the training & scoring resources.
+ >
+ > The workspace and other dependency services will go into the training subnet. They can still be used by resources in other subnets, such as the scoring subnet.
+
+ 1. Look at the default __IPv4 address space__ value. In the screenshot, the value is __172.17.0.0/16__. __The value may be different for you__. While you can use a different value, the rest of the steps in this tutorial are based on the 172.17.0.0/16 value.
+ 1. Select the __Default__ subnet and then select __Remove subnet__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/delete-default-subnet.png" alt-text="Screenshot of deleting default subnet":::
+
+ 1. To create a subnet to contain the workspace, dependency services, and resources used for training, select __+ Add subnet__ and use the following values for the subnet:
+ * __Subnet name__: Training
+ * __Subnet address range__: 172.17.0.0/24
+ * __Services__: Select the following
+ * __Microsoft.Storage__
+ * __Microsoft.KeyVault__
+ * __Microsoft.ContainerRegistry__
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/vnet-add-training-subnet.png" alt-text="Screenshot of Training subnet":::
+
+ 1. To create a subnet for compute resources used to score your models, select __+ Add subnet__ again, and use the following values:
+ * __Subnet name__: Scoring
+ * __Subnet address range__: 172.17.1.0/24
+ * __Services__: Select the following
+ * __Microsoft.Storage__
+ * __Microsoft.KeyVault__
+ * __Microsoft.ContainerRegistry__
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/vnet-add-scoring-subnet.png" alt-text="Screenshot of Scoring subnet":::
+
+1. Select __Security__. For __BastionHost__, select __Enable__. [Azure Bastion](../bastion/bastion-overview.md) provides a secure way to access the VM jump box you will create inside the VNet in a later step. Use the following values for the remaining fields:
+
+ * __Bastion name__: A unique name for this Bastion instance
+ * __AzureBastionSubnet address space__: 172.17.2.0/27
+ * __Public IP address__: Create a new public IP address.
+
+ Leave the other fields at the default values.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/create-bastion.png" alt-text="Screenshot of Bastion config":::
+
+1. Select __Review + create__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/create-vnet-ip-address-final.png" alt-text="Screenshot showing the review + create button":::
+
+1. Verify that the information is correct, and then select __Create__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/create-vnet-review.png" alt-text="Screenshot of the review page":::
+
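If you'd rather script this section, a minimal sketch using the `azure-mgmt-network` SDK creates the same address space and the Training and Scoring subnets (the Bastion host is omitted; names, region, and subscription ID are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Service endpoints applied to both subnets, matching the portal steps above.
endpoints = [
    {"service": "Microsoft.Storage"},
    {"service": "Microsoft.KeyVault"},
    {"service": "Microsoft.ContainerRegistry"},
]
poller = client.virtual_networks.begin_create_or_update(
    "<resource-group>",
    "<vnet-name>",
    {
        "location": "<region>",
        "address_space": {"address_prefixes": ["172.17.0.0/16"]},
        "subnets": [
            {"name": "Training", "address_prefix": "172.17.0.0/24", "service_endpoints": endpoints},
            {"name": "Scoring", "address_prefix": "172.17.1.0/24", "service_endpoints": endpoints},
        ],
    },
)
vnet = poller.result()
```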
+## Create network security groups
+
+Use the following steps to create a network security group (NSG) and add rules required for using Azure Machine Learning compute clusters and compute instances to train models:
+
+1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Network security group__. Select the __Network security group__ entry, and then select __Create__.
+1. From the __Basics__ tab, select the __subscription__, __resource group__, and __region__ you previously used for the virtual network. Enter a unique __name__ for the new network security group.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/create-nsg.png" alt-text="Image of the basic network security group config":::
+
+1. Select __Review + create__. Verify that the information is correct, and then select __Create__.
+
+### Apply security rules
+
+1. Once the network security group has been created, use the __Go to resource__ button and then select __Inbound security rules__. Select __+ Add__ to add a new rule.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/nsg-inbound-security-rules.png" alt-text="Add security rules":::
+
+1. Use the following values for the new rule, and then select __Add__ to add the rule to the network security group:
+ * __Source__: Service Tag
+ * __Source service tag__: BatchNodeManagement
+ * __Source port ranges__: *
+ * __Destination__: Any
+ * __Service__: Custom
+ * __Destination port ranges__: 29876-29877
+ * __Protocol__: TCP
+ * __Action__: Allow
+ * __Priority__: 1040
+ * __Name__: AzureBatch
+ * __Description__: Azure Batch management traffic
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/nsg-batchnodemanagement.png" alt-text="Image of the batchnodemanagement rule":::
++
+1. Select __+ Add__ to add another rule. Use the following values for this rule, and then select __Add__ to add the rule:
+ * __Source__: Service Tag
+ * __Source service tag__: AzureMachineLearning
+ * __Source port ranges__: *
+ * __Destination__: Any
+ * __Service__: Custom
+ * __Destination port ranges__: 44224
+ * __Protocol__: TCP
+ * __Action__: Allow
+ * __Priority__: 1050
+ * __Name__: AzureML
+ * __Description__: Azure Machine Learning traffic to compute cluster/instance
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/nsg-azureml.png" alt-text="Image of the azureml rule":::
+
+1. From the left navigation, select __Subnets__, and then select __+ Associate__. From the __Virtual network__ dropdown, select your network. Then select the __Training__ subnet. Finally, select __OK__.
+
+ > [!TIP]
+ > The rules added in this section only apply to training computes, so they do not need to be associated with the scoring subnet.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/nsg-associate-subnet.png" alt-text="Image of the associate config":::
+
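As a scripted alternative, here is a sketch of the AzureBatch inbound rule using `azure-mgmt-network`; the AzureML rule follows the same pattern with its own port, priority, and service tag. Names and the subscription ID are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
poller = client.security_rules.begin_create_or_update(
    "<resource-group>",
    "<nsg-name>",
    "AzureBatch",
    {
        "protocol": "Tcp",
        "source_address_prefix": "BatchNodeManagement",  # service tag
        "source_port_range": "*",
        "destination_address_prefix": "*",
        "destination_port_range": "29876-29877",
        "access": "Allow",
        "priority": 1040,
        "direction": "Inbound",
        "description": "Azure Batch management traffic",
    },
)
rule = poller.result()
```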
+## Create a storage account
+
+1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Storage account__. Select the __Storage Account__ entry, and then select __Create__.
+1. From the __Basics__ tab, select the __subscription__, __resource group__, and __region__ you previously used for the virtual network. Enter a unique __Storage account name__, and set __Redundancy__ to __Locally-redundant storage (LRS)__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/create-storage.png" alt-text="Image of storage account basic config":::
+
+1. From the __Networking__ tab, select __Private endpoint__ and then select __+ Add private endpoint__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/storage-enable-private-endpoint.png" alt-text="UI to add the blob private network":::
+
+1. On the __Create private endpoint__ form, use the following values:
+ * __Subscription__: The same Azure subscription that contains the previous resources you've created.
+ * __Resource group__: The same Azure resource group that contains the previous resources you've created.
+ * __Location__: The same Azure region that contains the previous resources you've created.
+ * __Name__: A unique name for this private endpoint.
+ * __Target sub-resource__: blob
+ * __Virtual network__: The virtual network you created earlier.
+ * __Subnet__: Training (172.17.0.0/24)
+ * __Private DNS integration__: Yes
+ * __Private DNS Zone__: privatelink.blob.core.windows.net
+
+ Select __OK__ to create the private endpoint.
+
+1. Select __Review + create__. Verify that the information is correct, and then select __Create__.
+
+1. Once the Storage Account has been created, select __Go to resource__:
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/storage-go-to-resource.png" alt-text="Go to new storage resource":::
+
+1. From the left navigation, select __Networking__, select the __Private endpoint connections__ tab, and then select __+ Private endpoint__:
+
+ > [!NOTE]
+ > While you created a private endpoint for Blob storage in the previous steps, you must also create one for File storage.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/storage-file-networking.png" alt-text="UI for storage account networking":::
+
+1. On the __Create a private endpoint__ form, use the same __subscription__, __resource group__, and __Region__ that you have used for previous resources. Enter a unique __Name__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/storage-file-private-endpoint.png" alt-text="UI to add the file private endpoint":::
+
+1. Select __Next : Resource__, and then set __Target sub-resource__ to __file__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/storage-file-private-endpoint-resource.png" alt-text="Add the subresource of 'file'":::
+
+1. Select __Next : Configuration__, and then use the following values:
+ * __Virtual network__: The network you created previously
+ * __Subnet__: Training
+ * __Integrate with private DNS zone__: Yes
+ * __Private DNS zone__: privatelink.file.core.windows.net
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/storage-file-private-endpoint-config.png" alt-text="UI to configure the file private endpoint":::
+
+1. Select __Review + Create__. Verify that the information is correct, and then select __Create__.
+
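For reference, a sketch of the blob private endpoint using `azure-mgmt-network`; the file endpoint is the same call with `group_ids=["file"]`. Resource names and IDs are placeholders, and the private DNS zone wiring is omitted:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Place the endpoint in the Training subnet created earlier.
subnet = client.subnets.get("<resource-group>", "<vnet-name>", "Training")
poller = client.private_endpoints.begin_create_or_update(
    "<resource-group>",
    "<storage-blob-pe>",
    {
        "location": "<region>",
        "subnet": {"id": subnet.id},
        "private_link_service_connections": [
            {
                "name": "<storage-blob-pe>",
                "private_link_service_id": "<storage-account-resource-id>",
                "group_ids": ["blob"],
            }
        ],
    },
)
endpoint = poller.result()
```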
+## Create a key vault
+
+1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Key Vault__. Select the __Key Vault__ entry, and then select __Create__.
+1. From the __Basics__ tab, select the __subscription__, __resource group__, and __region__ you previously used for the virtual network. Enter a unique __Key vault name__. Leave the other fields at the default value.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/create-key-vault.png" alt-text="Create a new key vault":::
+
+1. From the __Networking__ tab, select __Private endpoint__ and then select __+ Add__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/key-vault-networking.png" alt-text="Key vault networking":::
+
+1. On the __Create private endpoint__ form, use the following values:
+ * __Subscription__: The same Azure subscription that contains the previous resources you've created.
+ * __Resource group__: The same Azure resource group that contains the previous resources you've created.
+ * __Location__: The same Azure region that contains the previous resources you've created.
+ * __Name__: A unique name for this private endpoint.
+ * __Target sub-resource__: Vault
+ * __Virtual network__: The virtual network you created earlier.
+ * __Subnet__: Training (172.17.0.0/24)
+ * __Private DNS integration__: Yes
+ * __Private DNS Zone__: privatelink.vaultcore.azure.net
+
+ Select __OK__ to create the private endpoint.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/key-vault-private-endpoint.png" alt-text="Configure a key vault private endpoint":::
+
+1. Select __Review + create__. Verify that the information is correct, and then select __Create__.
+
+## Create a container registry
+
+1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Container Registry__. Select the __Container Registry__ entry, and then select __Create__.
+1. From the __Basics__ tab, select the __subscription__, __resource group__, and __location__ you previously used for the virtual network. Enter a unique __Registry name__ and set the __SKU__ to __Premium__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/create-container-registry.png" alt-text="Create a container registry":::
+
+1. From the __Networking__ tab, select __Private endpoint__ and then select __+ Add__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/container-registry-networking.png" alt-text="Container registry networking":::
+
+1. On the __Create private endpoint__ form, use the following values:
+ * __Subscription__: The same Azure subscription that contains the previous resources you've created.
+ * __Resource group__: The same Azure resource group that contains the previous resources you've created.
+ * __Location__: The same Azure region that contains the previous resources you've created.
+ * __Name__: A unique name for this private endpoint.
+ * __Target sub-resource__: registry
+ * __Virtual network__: The virtual network you created earlier.
+ * __Subnet__: Training (172.17.0.0/24)
+ * __Private DNS integration__: Yes
+ * __Private DNS Zone__: privatelink.azurecr.io
+
+ Select __OK__ to create the private endpoint.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/container-registry-private-endpoint.png" alt-text="Configure container registry private endpoint":::
+
+1. Select __Review + create__. Verify that the information is correct, and then select __Create__.
+1. After the container registry has been created, select __Go to resource__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/container-registry-go-to-resource.png" alt-text="Select 'go to resource'":::
+
+1. From the left of the page, select __Access keys__, and then enable __Admin user__. This setting is required when using Azure Container Registry inside a virtual network with Azure Machine Learning.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/container-registry-admin-user.png" alt-text="Screenshot of admin user toggle":::
+
+## Create a workspace
+
+1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Machine Learning__. Select the __Machine Learning__ entry, and then select __Create__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/machine-learning-create.png" alt-text="Screenshot of creating a new Machine Learning workspace resource":::
+
+1. From the __Basics__ tab, select the __subscription__, __resource group__, and __Region__ you previously used for the virtual network. Use the following values for the other fields:
+ * __Workspace name__: A unique name for your workspace.
+ * __Storage account__: Select the storage account you created previously.
+ * __Key vault__: Select the key vault you created previously.
+ * __Application insights__: Use the default value.
+ * __Container registry__: Use the container registry you created previously.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/create-machine-learning-workspace.png" alt-text="Basic workspace configuration":::
+
+1. From the __Networking__ tab, select __Private endpoint__ and then select __+ add__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/machine-learning-workspace-networking.png" alt-text="Workspace networking":::
+
+1. On the __Create private endpoint__ form, use the following values:
+ * __Subscription__: The same Azure subscription that contains the previous resources you've created.
+ * __Resource group__: The same Azure resource group that contains the previous resources you've created.
+ * __Location__: The same Azure region that contains the previous resources you've created.
+ * __Name__: A unique name for this private endpoint.
+ * __Target sub-resource__: amlworkspace
+ * __Virtual network__: The virtual network you created earlier.
+ * __Subnet__: Training (172.17.0.0/24)
+ * __Private DNS integration__: Yes
+ * __Private DNS Zone__: Leave the two private DNS zones at the default values of __privatelink.api.azureml.ms__ and __privatelink.notebooks.azure.net__.
+
+ Select __OK__ to create the private endpoint.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/machine-learning-workspace-private-endpoint.png" alt-text="Screenshot of workspace private network config":::
+
+1. Select __Review + create__. Verify that the information is correct, and then select __Create__.
+1. Once the workspace has been created, select __Go to resource__.
+1. From the __Settings__ section on the left, select __Private endpoint connections__ and then select the link in the __Private endpoint__ column:
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/workspace-private-endpoint-connections.png" alt-text="Screenshot of workspace private endpoint connections":::
+
+1. Once the private endpoint information appears, select __DNS configuration__ from the left of the page. Save the IP address and fully qualified domain name (FQDN) information on this page, as it will be used later.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/workspace-private-endpoint-dns.png" alt-text="screenshot of IP and FQDN entries":::
+
+> [!IMPORTANT]
+> There are still some configuration steps needed before you can fully use the workspace. However, these require you to connect to the workspace.
+
+## Enable studio
+
+Azure Machine Learning studio is a web-based application that lets you easily manage your workspace. However, it needs some extra configuration before it can be used with resources secured inside a VNet. Use the following steps to enable studio:
+
+1. From the Azure portal, select your storage account and then select __Access control (IAM)__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/storage-access-control.png" alt-text="screenshot of access control entry":::
+
+1. Select __+ Add__, and then __Add role assignment__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/storage-add-role.png" alt-text="Screenshot of + Add menu.":::
+
+1. From the Add role assignment dialog, set the __Role__ to __Storage Blob Data Contributor__ and then type the name of your Azure Machine Learning workspace in the __Select__ field. Select the item that appears and then select __Save__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/storage-add-blob-data-contributor.png" alt-text="Screenshot of adding storage blob data Contributor role":::
+
+1. When using an Azure Storage Account that has a private endpoint, add the workspace-managed identity as a __Reader__ for the storage private endpoint(s). From the Azure portal, select your storage account and then select __Networking__. Next, select __Private endpoint connections__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/storage-private-endpoint-select.png" alt-text="Screenshot of storage private endpoints":::
+
+1. For __each private endpoint listed__, use the following steps:
+
+ 1. Select the link in the __Private endpoint__ column.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/storage-private-endpoint-selected.png" alt-text="Screenshot of endpoints to select":::
+
+ 1. Select __Access control (IAM)__ from the left side. Select __+ Add__, and then __Add role assignment__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/storage-private-endpoint-add-role.png" alt-text="Screenshot of adding role":::
+
+ 1. From the Add role assignment dialog, set the __Role__ to __Reader__ and then type the name of your Azure Machine Learning workspace in the __Select__ field. Select the item that appears and then select __Save__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/storage-private-endpoint-add-workspace.png" alt-text="Screenshot of adding reader role":::
+
+## Connect to the workspace
+
+There are several ways that you can connect to the secured workspace. The steps in this article use a __jump box__, which is a virtual machine in the VNet. You can connect to it using your web browser and Azure Bastion. The following table lists several other ways that you might connect to the secure workspace:
+
+| Method | Description |
+| -- | -- |
+| [Azure VPN gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md) | Connects on-premises networks to the VNet over a private connection. Connection is made over the public internet. |
+| [ExpressRoute](https://azure.microsoft.com/services/expressroute/) | Connects on-premises networks into the cloud over a private connection. Connection is made using a connectivity provider. |
+
+> [!IMPORTANT]
+> When using a __VPN gateway__ or __ExpressRoute__, you will need to plan how name resolution works between your on-premises resources and those in the VNet. For more information, see [Use a custom DNS server](how-to-custom-dns.md).
+
+### Create a jump box (VM)
+
+Use the following steps to create a Data Science Virtual Machine for use as a jump box:
+
+1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Data science virtual machine__. Select the __Data science virtual machine - Windows__ entry, and then select __Create__.
+1. From the __Basics__ tab, select the __subscription__, __resource group__, and __Region__ you previously used for the virtual network. Provide a unique __Virtual machine name__, __Username__, and __Password__. Leave other fields at the default values.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/create-virtual-machine-basic.png" alt-text="Image of VM basic configuration":::
+
+1. Select __Networking__, and then select the __Virtual network__ you created earlier. Use the following information to set the remaining fields:
+
+ * Select the __Training__ subnet.
+ * Set the __Public IP__ to __None__.
+ * Leave the other fields at the default value.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/create-virtual-machine-network.png" alt-text="Image of VM network configuration":::
+
+1. Select __Review + create__. Verify that the information is correct, and then select __Create__.
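+
+If you'd rather script the jump box, the following Azure CLI sketch creates the VM with no public IP in the __Training__ subnet. The resource names, credentials, and image URN are placeholders; look up a current Data Science VM image first:
+
+```azurecli-interactive
+# List available Data Science VM images to find a current URN.
+az vm image list --publisher microsoft-dsvm --all --output table
+
+# Create the jump box in the Training subnet with no public IP address.
+az vm create --resource-group docs-ml-rg --name dsvm-jump-box \
+    --image <dsvm-image-urn> \
+    --vnet-name <your-vnet> --subnet Training \
+    --public-ip-address "" \
+    --admin-username azureuser --admin-password <secure-password>
+```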
++
+### Connect to the jump box
+
+1. Once the virtual machine has been created, select __Go to resource__.
+1. From the top of the page, select __Connect__ and then __Bastion__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/virtual-machine-connect.png" alt-text="Image of the connect/bastion UI":::
+
+1. Select __Use Bastion__ and provide your authentication information for the virtual machine. A connection is then established in your browser.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/use-bastion.png" alt-text="Image of use bastion dialog":::
+
+## Create a compute cluster and compute instance
+
+A compute cluster is used by your training jobs. A compute instance provides a Jupyter Notebook experience on a shared compute resource attached to your workspace.
+
+1. From an Azure Bastion connection to the jump box, open the __Microsoft Edge__ browser on the remote desktop.
+1. In the remote browser session, go to __https://ml.azure.com__. When prompted, authenticate using your Azure AD account.
+1. From the __Welcome to studio!__ screen, select the __Machine Learning workspace__ you created earlier and then select __Get started__.
+
+ > [!TIP]
+ > If your Azure AD account has access to multiple subscriptions or directories, use the __Directory and Subscription__ dropdown to select the one that contains the workspace.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/studio-select-workspace.png" alt-text="Screenshot of the select workspace dialog":::
+
+1. From studio, select __Compute__, __Compute clusters__, and then __+ New__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/studio-new-compute-cluster.png" alt-text="Screenshot of new compute cluster workflow":::
+
+1. From the __Virtual Machine__ dialog, select __Next__ to accept the default virtual machine configuration.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/studio-new-compute-vm.png" alt-text="Screenshot of compute cluster vm settings":::
+
+1. From the __Configure Settings__ dialog, enter __cpu-cluster__ as the __Compute name__. Set the __Subnet__ to __Training__ and then select __Create__ to create the cluster.
+
+ > [!TIP]
+ > Compute clusters dynamically scale the nodes in the cluster as needed. We recommend leaving the minimum number of nodes at 0 to reduce costs when the cluster is not in use.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/studio-new-compute-settings.png" alt-text="Screenshot of new compute cluster settings":::
+
+1. From studio, select __Compute__, __Compute instance__, and then __+ New__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/create-compute-instance.png" alt-text="Screenshot of new compute instance workflow":::
+
+1. From the __Virtual Machine__ dialog, select __Next__ to accept the default virtual machine configuration.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/create-compute-instance-vm.png" alt-text="Screenshot of compute instance vm settings":::
+
+1. From the __Configure Settings__ dialog, enter a unique __Compute name__, set the __Subnet__ to __Training__, and then select __Create__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/create-compute-instance-settings.png" alt-text="Screenshot of compute instance settings":::
+
+For more information on creating a compute cluster and compute instance, including how to do so with Python and the CLI, see the following articles (a brief CLI sketch follows the list):
+
+* [Create a compute cluster](how-to-create-attach-compute-cluster.md)
+* [Create a compute instance](how-to-create-manage-compute-instance.md)
+
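+For example, the equivalent cluster creation with the 1.0 (`azure-cli-ml`) CLI extension looks roughly like the following sketch; the flag names reflect the v1 extension, and all resource names are placeholders:
+
+```azurecli-interactive
+# Create a zero-minimum-node cluster in the Training subnet (sketch; names are placeholders).
+az ml computetarget create amlcompute --name cpu-cluster \
+    --vm-size STANDARD_DS3_V2 --min-nodes 0 --max-nodes 4 \
+    --vnet-name <your-vnet> --subnet-name Training \
+    --vnet-resourcegroup-name docs-ml-rg \
+    -g docs-ml-rg -w docs-ml-ws
+```
+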
+## Configure image builds
+
+When Azure Container Registry is behind the virtual network, Azure Machine Learning can't use it to directly build Docker images (used for training and deployment). Instead, configure the workspace to build images on the compute cluster you created earlier. Use the following steps to configure the workspace:
+
+1. Navigate to [https://shell.azure.com/](https://shell.azure.com/) to open the Azure Cloud Shell.
+1. From the Cloud Shell, use the following command to install the 1.0 CLI for Azure Machine Learning:
+
+ ```azurecli-interactive
+ az extension add -n azure-cli-ml
+ ```
+
+1. Use the following command to update the workspace so that it builds Docker images using the compute cluster. Replace `docs-ml-rg` with your resource group, `docs-ml-ws` with your workspace name, and `cpu-cluster` with the name of the compute cluster to use:
+
+ ```azurecli-interactive
+ az ml workspace update -g docs-ml-rg -w docs-ml-ws --image-build-compute cpu-cluster
+ ```
+
+ > [!NOTE]
+ > You can use the same compute cluster to train models and build Docker images for the workspace.
+
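+To confirm the change, you can query the workspace. Note that `imageBuildCompute` as the property name is an assumption here; inspect the full `show` output if the query returns nothing:
+
+```azurecli-interactive
+# Show the compute target configured for image builds (property name assumed).
+az ml workspace show -g docs-ml-rg -w docs-ml-ws --query imageBuildCompute
+```
+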
+## Use the workspace
+
+At this point, you can use studio to interactively work with notebooks on the compute instance and run training jobs on the compute cluster. For a tutorial on using the compute instance and compute cluster, see [run a Python script](tutorial-1st-experiment-hello-world.md).
+
+## Stop compute instance and jump box
+
+> [!WARNING]
+> While they're running (started), the compute instance and jump box continue to charge your subscription. To avoid excess costs, __stop__ them when they're not in use.
+
+The compute cluster dynamically scales between the minimum and maximum node count set when you created it. If you accepted the defaults, the minimum is 0, which effectively turns off the cluster when not in use.
+
+### Stop the compute instance
+
+From studio, select __Compute__, __Compute instance__, and then select the compute instance. Finally, select __Stop__ from the top of the page.
+
+### Stop the jump box
+
+To stop the jump box, select the virtual machine in the Azure portal and then use the __Stop__ button. When you're ready to use it again, use the __Start__ button to start it.
++
+You can also configure the jump box to automatically shut down at a specific time. To do so, select __Auto-shutdown__, __Enable__, set a time, and then select __Save__.
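+
+If you manage the jump box from the command line instead, deallocating the VM stops its compute charges. A sketch, assuming the placeholder names used earlier:
+
+```azurecli-interactive
+# Deallocate the jump box so it stops accruing compute charges.
+az vm deallocate --resource-group docs-ml-rg --name dsvm-jump-box
+
+# Start it again when you need it.
+az vm start --resource-group docs-ml-rg --name dsvm-jump-box
+```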
++
+## Clean up resources
+
+If you plan to continue using the secured workspace and other resources, skip this section.
+
+To delete all resources created in this tutorial, use the following steps:
+
+1. In the Azure portal, select __Resource groups__ on the far left.
+1. From the list, select the resource group that you created in this tutorial.
+1. Select __Delete resource group__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/delete-resources.png" alt-text="Screenshot of delete resource group button":::
+
+1. Enter the resource group name, then select __Delete__.
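+
+You can also delete everything from the Azure CLI by removing the resource group; `docs-ml-rg` is a placeholder for your resource group name:
+
+```azurecli-interactive
+az group delete --name docs-ml-rg --yes --no-wait
+```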
+
+## Next steps
+
+Now that you have created a secure workspace and can access studio, learn how to [run a Python script](tutorial-1st-experiment-hello-world.md) using Azure Machine Learning.
media-services Media Services Arm Template Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/media-services-arm-template-quickstart.md
az group delete --name {name of the resource group}
To learn more about using an ARM template by following the process of creating a template with parameters, variables, and more, try
> [!div class="nextstepaction"]
-> [Tutorial: Create and deploy your first ARM template](../../azure-resource-manager/templates/template-tutorial-create-first-template.md)
+> [Tutorial: Create and deploy your first ARM template](../../azure-resource-manager/templates/template-tutorial-create-first-template.md)
mysql 01 Mysql Migration Guide Intro https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/migrate/mysql-on-premises-azure-db/01-mysql-migration-guide-intro.md
Title: "MySQL on-premises to Azure Database for MySQL migration guide introduction"
+ Title: "Migrate MySQL on-premises to Azure Database for MySQL introduction"
description: "Migration guide from MySQL on-premises to Azure Data base for MySQL"
Previously updated : 06/14/2021 Last updated : 06/21/2021
-# MySQL on-premises to Azure Database for MySQL migration guide Introduction
+# Migrate MySQL on-premises to Azure Database for MySQL
This migration guide is designed to provide stackable and actionable information for MySQL customers and software integrators seeking to migrate MySQL workloads to [Azure Database for MySQL](../../overview.md). This guide gives applicable knowledge that applies to most cases and provides guidance that leads to the successful planning and execution of a MySQL migration to Azure.
In addition to the PaaS offering, it's still possible to run MySQL in Azure VMs.
This guide focuses entirely on migrating the on-premises MySQL workloads to the Platform as a Service Azure Database for MySQL offering due to its various advantages over Infrastructure as a Service (IaaS) such as scale-up and scale-out, pay-as-you-go, high availability, security, and manageability features.
+## Next steps
+ > [!div class="nextstepaction"]
+> [Representative Use Case](./02-representative-use-case.md)
mysql 02 Representative Use Case https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/migrate/mysql-on-premises-azure-db/02-representative-use-case.md
Title: "MySQL on-premises to Azure Database for MySQL migration guide Representative Use Case"
+ Title: "Migrate MySQL on-premises to Azure Database for MySQL: Representative Use Case"
description: "The following use case is based on a real-world customer scenario of an enterprise who migrated their MySQL workload to Azure Database for MySQL."
Previously updated : 06/14/2021 Last updated : 06/21/2021
-# MySQL on-premises to Azure Database for MySQL migration guide Representative Use Case
+# Migrate MySQL on-premises to Azure Database for MySQL: Representative Use Case
## Prerequisites
These stages include:
| Stage | Name | Activities |
|-|-|-|
-| 1 | Pre-migration | Assessment, Planning, Migration Method Evaluation, Application Implications, Test Plans, Performance Baselines |
-| 2 | Migration | Execute Migration, Execute Test Plans |
-| 3 | Post-migration | Business Continuity, Disaster Recovery, Management, Security, Performance Optimization, Platform modernization |
+| 1 | Pre-migration | Assessment, Planning, Migration Method Evaluation, Application Implications, Test Plans, Performance Baselines |
+| 2 | Migration | Execute Migration, Execute Test Plans |
+| 3 | Post-migration| Business Continuity, Disaster Recovery, Management, Security, Performance Optimization, Platform modernization |
WWI has several instances of MySQL running with varying versions ranging from 5.5 to 5.7. They would like to move their instances to the latest version as soon as possible but would like to ensure their applications can still work if they move to the newer versions. They're comfortable moving to the same version in the cloud and upgrading afterward, but they would prefer that path if they can accomplish two tasks at once.
They would also like to ensure that their data workloads are safe and available
WWI wants to start with a simple application for the first migration and then move to more business-critical applications in a later phase. This provides the team with the knowledge and experience they need to prepare and plan for those future migrations.
+## Next steps
+ > [!div class="nextstepaction"]
-> [Assessment](./03-assessment.md)
+> [Assessment](./03-assessment.md)
mysql 03 Assessment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/migrate/mysql-on-premises-azure-db/03-assessment.md
Title: "MySQL on-premises to Azure Database for MySQL migration guide assessment"
+ Title: "Migrate MySQL on-premises to Azure Database for MySQL: Assessment"
description: "Before jumping right into migrating a MySQL workload, there's a fair amount of due diligence that must be performed."
Previously updated : 06/14/2021 Last updated : 06/21/2021
-# MySQL on-premises to Azure Database for MySQL migration guide Assessment
+# Migrate MySQL on-premises to Azure Database for MySQL: Assessment
## Prerequisites
Many of the other items are operational aspects that administrators should becom
MySQL has a rich history starting in 1995. Since then, it has evolved into a widely used database management system. Azure Database for MySQL started with the support of MySQL version 5.6 and has continued to 5.7 and recently 8.0. For the latest on Azure Database for MySQL version support, reference [Supported Azure Database for MySQL server versions.](../../concepts-supported-versions.md) In the Post Migration Management section, we review how upgrades (such as 5.7.20 to 5.7.21) are applied to the MySQL instances in Azure. > [!NOTE]
-> The jump from 5.x to 8.0 was largely due to the Oracle acquisition of MySQL. To read more about MySQL history, navigate to the [MySQL wiki page. ](https://en.wikipedia.org/wiki/MySQL)
+> The jump from 5.x to 8.0 was largely due to the Oracle acquisition of MySQL. To read more about MySQL history, navigate to the [MySQL wiki page](https://en.wikipedia.org/wiki/MySQL).
Knowing the source MySQL version is essential. The applications using the system may be using database objects and features that are specific to that version. Migrating a database to a lower version could cause compatibility issues and loss of functionality. It's also recommended the data and application instance are thoroughly tested before migrating to a newer version as the features introduced could break your application.
To find useful table information, use this query:
```sql
SELECT
- tab.table_schema,
- tab.table_name,
- tab.engine as engine_type,
- tab.auto_increment,
- tab.table_rows,
- tab.create_time,
- tab.update_time,
- tco.constraint_type
- FROM information_schema.tables tab
- LEFT JOIN information_schema.table_constraints tco
- ON (tab.table_schema = tco.table_schema
- AND tab.table_name = tco.table_name
- )
+ tab.table_schema,
+ tab.table_name,
+ tab.engine as engine_type,
+ tab.auto_increment,
+ tab.table_rows,
+ tab.create_time,
+ tab.update_time,
+ tco.constraint_type
+ FROM information_schema.tables tab
+ LEFT JOIN information_schema.table_constraints tco
+ ON (tab.table_schema = tco.table_schema
+ AND tab.table_name = tco.table_name
+ )
WHERE tab.table_schema NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys')
Equipped with the assessment information (CPU, memory, storage, etc.), the migra
There are currently three tiers:
- - **Basic** : Workloads requiring light compute and I/O performance.
+ - **Basic**: Workloads requiring light compute and I/O performance.
- - **General Purpose** : Most business workloads requiring balanced compute and memory with scalable I/O throughput.
+ - **General Purpose**: Most business workloads requiring balanced compute and memory with scalable I/O throughput.
- - **Memory Optimized** : High-performance database workloads requiring in-memory performance for faster transaction processing and higher concurrency.
+ - **Memory Optimized**: High-performance database workloads requiring in-memory performance for faster transaction processing and higher concurrency.
The tier decision can be influenced by the RTO and RPO requirements of the data workload. When the data workload requires over 4 TB of storage, an extra step is required. Review and select [a region that supports](../../concepts-pricing-tiers.md#storage) up to 16 TB of storage.
Typically, the decision-making focuses on the storage and IOPS, or Input/output
After evaluating the entire WWI MySQL data workload, WWI determined they would need at least 4 vCores and 20 GB of memory, plus at least 100 GB of storage with a capacity of 450 IOPS. Because of the 450 IOPS requirement, they need to allocate at least 150 GB of storage because of the [Azure Database for MySQL IOPS allocation method](../../concepts-pricing-tiers.md#storage). Additionally, they require backup storage of up to 100% of the provisioned server storage, plus one read replica. They don't anticipate an outbound egress of more than 5 GB.
-Using the [Azure Database for MySQL pricing calculator](https://azure.microsoft.com/pricing/details/mysql/), WWI was able to determine the costs for the Azure Database for MySQL instance. As of 9/2020, the total costs of ownership (TCO) are displayed in the following table for the WWI Conference Database:
+Using the [Azure Database for MySQL pricing calculator](https://azure.microsoft.com/pricing/details/mysql/), WWI was able to determine the costs for the Azure Database for MySQL instance. As of 9/2020, the total costs of ownership (TCO) are displayed in the following table for the WWI Conference Database.
| Resource | Description | Quantity | Cost |
|-|-|-|-|
-| **Compute (General Purpose)** | 4 vCores, 20 GB | 1 @ $0.351/hr | $3074.76 / yr |
-| **Storage** | 5 GB | 12 x 150 @ $0.115 | $207 / yr |
-| **Backup** | Up to 100% of provisioned storage | No extra cost up to 100% of provisioned server storage | $0.00 / yr |
-| **Read Replica** | 1-second region replica | compute + storage | $3281.76 / yr |
-| **Network** | < 5GB/month egress | Free | |
-| **Total** | | | $6563.52 / yr |
+| **Compute (General Purpose)** | 4 vCores, 20 GB | 1 @ $0.351/hr | $3074.76 / yr |
+| **Storage** | 5 GB | 12 x 150 @ $0.115 | $207 / yr |
+| **Backup** | Up to 100% of provisioned storage| No extra cost up to 100% of provisioned server storage | $0.00 / yr |
+| **Read Replica** | 1-second region replica | compute + storage | $3281.76 / yr |
+| **Network** | < 5GB/month egress | Free | |
+| **Total** | | | $6563.52 / yr |
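+
+As a quick check of the table: the compute line works out to $0.351/hr × 8,760 hr/yr ≈ $3,074.76/yr, the read replica mirrors compute plus storage ($3,074.76 + $207 = $3,281.76), and the paid lines sum to the $6,563.52/yr total.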
-After reviewing the initial costs, WWI's CIO confirmed they are on Azure for a period much longer than 3 years. They decided to use 3-year [reserve instances](../../concept-reserved-pricing.md) to save an extra \~$4K/yr:
+After reviewing the initial costs, WWI's CIO confirmed they are on Azure for a period much longer than 3 years. They decided to use 3-year [reserved instances](../../concept-reserved-pricing.md) to save an extra ~$4K/yr.
| Resource | Description | Quantity | Cost |
|-|-|-|-|
-| **Compute (General Purpose)** | 4 vCores | 1 @ $0.1375/hr | $1204.5 / yr |
-| **Storage** | 5 GB | 12 x 150 @ $0.115 | $207 / yr |
-| **Backup** | Up to 100% of provisioned storage | No extra cost up to 100% of provisioned server storage | $0.00 / yr |
-| **Network** | < 5GB/month egress | Free | |
-| **Read Replica** | 1-second region replica | compute + storage | $1411.5 / yr |
-| **Total** | | | $2823 / yr |
+| **Compute (General Purpose)** | 4 vCores | 1 @ $0.1375/hr | $1204.5 / yr |
+| **Storage** | 5 GB | 12 x 150 @ $0.115 | $207 / yr |
+| **Backup** | Up to 100% of provisioned storage | No extra cost up to 100% of provisioned server storage | $0.00 / yr |
+| **Network** | < 5GB/month egress | Free | |
+| **Read Replica** | 1-second region replica | compute + storage | $1411.5 / yr |
+| **Total** | | | $2823 / yr |
As the table above shows, backups, network egress, and any read replicas must be considered in the total cost of ownership (TCO). As more databases are added, the storage and network traffic generated would be the only extra cost-based factor to consider.
Lastly, modify the server name in the application connection strings or switch t
## WWI scenario
-WWI started the assessment by gathering information about their MySQL data estate. They were able to compile the following:
+WWI started the assessment by gathering information about their MySQL data estate, as shown in the following table.
| Name | Source | Db Engine | Size | IOPS | Version | Owner | Downtime |
|-|-|-|-|-|-|-|-|
For the first phase, WWI focused solely on the ConferenceDB database. The team n
- Be prepared to make application changes.
+## Next steps
+ > [!div class="nextstepaction"]
-> [Planning](./04-planning.md)
+> [Planning](./04-planning.md)
mysql 04 Planning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/migrate/mysql-on-premises-azure-db/04-planning.md
Title: "MySQL on-premises to Azure Database for MySQL migration guide Planning"
+ Title: "Migrate MySQL on-premises to Azure Database for MySQL: Planning"
description: "an azure landing zone is the target environment defined as the final resting place of a cloud migration project."
Previously updated : 06/14/2021 Last updated : 06/21/2021
-# MySQL on-premises to Azure Database for MySQL migration guide Planning
+# Migrate MySQL on-premises to Azure Database for MySQL: Planning
## Prerequisites
The migration tool location determines the network connectivity requirements. As
| Migration Tool | Type | Location | Inbound Network Requirements | Outbound Network Requirements |
|-|-|-|-|-|
-| **Database Migration Service (DMS)** | Offline | Azure | Allow 3306 from external IP | A path to connect to the Azure MySQL database instance |
-| **Import/Export (MySQL Workbench, mysqldump)** | Offline | On-premises | Allow 3306 from internal IP | A path to connect to the Azure MySQL database instance |
-| **Import/Export (MySQL Workbench, mysqldump)** | Offline | Azure VM | Allow 3306 from external IP | A path to connect to the Azure MySQL database instance |
+| **Database Migration Service (DMS)** | Offline | Azure| Allow 3306 from external IP | A path to connect to the Azure MySQL database instance |
+| **Import/Export (MySQL Workbench, mysqldump)** | Offline| On-premises | Allow 3306 from internal IP | A path to connect to the Azure MySQL database instance |
+| **Import/Export (MySQL Workbench, mysqldump)** | Offline| Azure VM | Allow 3306 from external IP | A path to connect to the Azure MySQL database instance |
| **mydumper/myloader** | Offline | On-premises | Allow 3306 from internal IP | A path to connect to the Azure MySQL database instance |
| **mydumper/myloader** | Offline | Azure VM | Allow 3306 from external IP | A path to connect to the Azure MySQL database instance |
-| **binlog** | Offline | On-premises | Allow 3306 from external IP or private IP via Private endpoints | A path for each replication server to the master |
+| **binlog** | Offline | On-premises | Allow 3306 from external IP or private IP via Private endpoints | A path for each replication server to the master |
Other networking considerations include:
WWI originally wanted to test an online migration, but the required network setu
- Determine if you're going to use the online or offline data migration strategy. -- Decide on the SSL certificate strategy.
+- Decide on the SSL certificate strategy.
+## Next steps
+ > [!div class="nextstepaction"]
+> [Migration Methods](./05-migration-methods.md)
mysql 05 Migration Methods https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/migrate/mysql-on-premises-azure-db/05-migration-methods.md
Title: "MySQL on-premises to Azure Database for MySQL migration guide Migration Methods"
+ Title: "Migrate MySQL on-premises to Azure Database for MySQL: Migration Methods"
description: "Getting the data from the source to target will require using tools or features of MySQL to accomplish the migration." -+ Previously updated : 06/14/2021 Last updated : 06/21/2021
-# MySQL on-premises to Azure Database for MySQL migration guide Migration Methods
+# Migrate MySQL on-premises to Azure Database for MySQL: Migration Methods
## Prerequisites
MySQL Workbench provides a wizard-based UI to do full or partial export and impo
`mysqldump` is typically provided as part of the MySQL installation. It's a [client utility](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html) that can be run to create logical backups that equate to a set of SQL statements that can be replayed to rebuild the database to a point in time. `mysqldump` is not intended as a fast or scalable solution for backing up or migrating large amounts of data. Executing a large set of SQL insert statements can perform poorly due to the disk I/O required to update indexes. However, when combined with other tools that require the original schema, `mysqldump` is a great tool for generating the database and table schemas. The schemas can create the target landing zone environment.
-The `mysqldump` utility provides useful features during the data migration phase. Performance considerations need to be evaluated before running the utility. See [Performance considerations.](../../concepts-migrate-dump-restore.md#performance-considerations)
+The `mysqldump` utility provides useful features during the data migration phase. Performance considerations need to be evaluated before running the utility. See [Performance considerations](../../concepts-migrate-dump-restore.md#performance-considerations).
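+
+For example, a minimal export-and-replay round trip looks like the following sketch. Host names, user names, and the database name are placeholders, and the `user@server` login format assumes Azure Database for MySQL Single Server:
+
+```bash
+# Export schema and data from the source server.
+mysqldump -h onprem-host -u admin -p --databases conferencedb > conferencedb.sql
+
+# Replay the dump against the Azure Database for MySQL target.
+mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p < conferencedb.sql
+```
+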
### mydumper and myloader
There are many paths WWI can take to migrate their MySQL workloads. We've provid
| Objective | Description | Tool | Prerequisites | Advantages | Disadvantages |
|-|-|-|-|-|-|
-| **Fastest migration possible** | Parallel approach | mydumper and myloader | Linux | Highly parallelized | Target throttling |
-| **Online migration** | Keep the source up for as long as possible | binlog | None | Seamless | Extra processing and storage |
-| **Offline migration** | Keep the source up for as long as possible | Database Migration Service (DMS) | None | Repeatable process | Limited to data only, supports all MySQL versions |
+| **Fastest migration possible** | Parallel approach| mydumper and myloader | Linux | Highly parallelized | Target throttling |
+| **Online migration** | Keep the source up for as long as possible | binlog | None | Seamless | Extra processing and storage |
+| **Offline migration** | Keep the source up for as long as possible | Database Migration Service (DMS)| None | Repeatable process | Limited to data only, supports all MySQL versions |
| **Highly Customized Offline Migration** | Selectively export objects | mysqldump | None | Highly customizable | Manual |
| **Offline Migration Semi-automated** | UI-based export and import | MySQL Workbench | Download and Install | Semi-automated | Only common sets of switches are supported |
WWI has selected its conference database as its first migration workload. The wo
- Always verify if the data workload supports the method.
+## Next steps
+ > [!div class="nextstepaction"]
+> [Test Plans](./06-test-plans.md)
mysql 06 Test Plans https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/migrate/mysql-on-premises-azure-db/06-test-plans.md
Title: "MySQL on-premises to Azure Database for MySQL migration guide Test Plans"
+ Title: "Migrate MySQL on-premises to Azure Database for MySQL: Test Plans"
description: "WWI created a test plan that included a set of IT and the Business tasks. Successful migrations require all the tests to be executed."
Previously updated : 06/14/2021 Last updated : 06/21/2021
-# MySQL on-premises to Azure Database for MySQL migration guide Test Plans
+# Migrate MySQL on-premises to Azure Database for MySQL: Test Plans
## Prerequisites
The source database schema information was used to verify the target migration o
- Have a well-defined timeline of events for the migration.
+## Next steps
+ > [!div class="nextstepaction"]
+> [Performance Baselines](./07-performance-baselines.md)
mysql 07 Performance Baselines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/migrate/mysql-on-premises-azure-db/07-performance-baselines.md
Title: "MySQL on-premises to Azure Database for MySQL migration guide Performance Baselines"
+ Title: "Migrate MySQL on-premises to Azure Database for MySQL: Performance Baselines"
description: "Understanding the existing MySQL workload is one of the best investments that can be made to ensure a successful migration."
Previously updated : 06/14/2021 Last updated : 06/21/2021
-# MySQL on-premises to Azure Database for MySQL migration guide Performance Baselines
+# Migrate MySQL on-premises to Azure Database for MySQL: Performance Baselines
## Prerequisites
WWI reviewed their Conference database workload and determined it had a very sma
In reviewing the MySQL database, the MySQL 5.5 server is running with the default server parameters that are set during the initial install.
+## Next steps
+ > [!div class="nextstepaction"]
+> [Data Migration](./08-data-migration.md)
mysql 08 Data Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/migrate/mysql-on-premises-azure-db/08-data-migration.md
Title: "MySQL on-premises to Azure Database for MySQL migration guide Data Migration"
+ Title: "Migrate MySQL on-premises to Azure Database for MySQL: Data Migration"
description: "As a prudent step before upgrade or migrate data, export the database before the upgrade using MySQL Workbench or manually via the mysqldump command."
Previously updated : 06/14/2021 Last updated : 06/21/2021
-# MySQL on-premises to Azure Database for MySQL migration guide Data Migration
+# Migrate MySQL on-premises to Azure Database for MySQL: Data Migration
## Prerequisites
With the basic migration components in place, it's now possible to proceed with
- Make sure all tasks are documented and checked off as the migration is executed.
+## Next steps
+ > [!div class="nextstepaction"]
+> [Data Migration with MySQL Workbench](./09-data-migration-with-mySQL-workbench.md)
mysql 09 Data Migration With Mysql Workbench https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/migrate/mysql-on-premises-azure-db/09-data-migration-with-mysql-workbench.md
Title: "MySQL on-premises to Azure Database for MySQL migration guide Data Migration with MySQL Workbench"
+ Title: "Migrate MySQL on-premises to Azure Database for MySQL: Data Migration with MySQL Workbench"
description: "Follow all the steps in the Setup guide to create an environment to support the following steps."
Previously updated : 06/14/2021 Last updated : 06/21/2021
-# MySQL on-premises to Azure Database for MySQL migration guide Data Migration with MySQL Workbench
+# Migrate MySQL on-premises to Azure Database for MySQL: Data Migration with MySQL Workbench
## Prerequisites
az webapp restart -g $rgName -n $app_name
You've successfully completed an on-premises to Azure Database for MySQL migration!
+## Next steps
+ > [!div class="nextstepaction"]
+> [Post Migration Management](./10-post-migration-management.md)
mysql 10 Post Migration Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/migrate/mysql-on-premises-azure-db/10-post-migration-management.md
Title: "MySQL on-premises to Azure Database for MySQL migration guide Post Migration Management"
+ Title: "Migrate MySQL on-premises to Azure Database for MySQL: Post Migration Management"
description: "Once the migration has been successfully completed, the next phase it to manage the new cloud-based data workload resources."
Previously updated : 06/14/2021 Last updated : 06/21/2021
-# MySQL on-premises to Azure Database for MySQL migration guide Post Migration Management
+# Migrate MySQL on-premises to Azure Database for MySQL: Post Migration Management
## Prerequisites
The MySQL DBAs installed the Azure Database for [MySQL Azure PowerShell cmdlets]
- Set up notifications for maintenance events such as upgrades and patches. Notify users as necessary.
+## Next steps
+ > [!div class="nextstepaction"]
+> [Optimization](./11-optimization.md)
mysql 11 Optimization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/migrate/mysql-on-premises-azure-db/11-optimization.md
Title: "MySQL on-premises to Azure Database for MySQL migration guide Optimization"
+ Title: "Migrate MySQL on-premises to Azure Database for MySQL: Optimization"
description: "In addition to the audit and activity logs, server performance can also be monitored with Azure Metrics."
Previously updated : 06/14/2021 Last updated : 06/21/2021
-# MySQL on-premises to Azure Database for MySQL migration guide Optimization
+# Migrate MySQL on-premises to Azure Database for MySQL: Optimization
## Prerequisites
```kusto
AzureDiagnostics
| where Category == 'MySqlSlowLogs'
| project TimeGenerated, LogicalServerName_s, event_class_s, start_time_t, query_time_d, sql_text_s
| top 5 by query_time_d desc
```

## Query Performance Insight
They elected to monitor any potential issues for now and implement Azure Automat
- Consider moving regions if the users or application needs change.
+## Next steps
+ > [!div class="nextstepaction"]
+> [Business Continuity and Disaster Recovery (BCDR)](./12-business-continuity-and-disaster-recovery.md)
mysql 12 Business Continuity And Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/migrate/mysql-on-premises-azure-db/12-business-continuity-and-disaster-recovery.md
Title: "MySQL on-premises to Azure Database for MySQL migration guide Business Continuity and Disaster Recovery (BCDR)"
+ Title: "Migrate MySQL on-premises to Azure Database for MySQL: Business Continuity and Disaster Recovery (BCDR)"
description: "As with any mission critical system, having a backup and restore and a disaster recovery (BCDR) strategy is an important part of your overall system design."
Previously updated : 06/14/2021 Last updated : 06/21/2021
-# MySQL on-premises to Azure Database for MySQL migration guide Business Continuity and Disaster Recovery (BCDR)
+# Migrate MySQL on-premises to Azure Database for MySQL: Business Continuity and Disaster Recovery (BCDR)
## Prerequisites
Failover Steps:
- Implement a load-balancing strategy for applications for quick failover.
+## Next steps
+ > [!div class="nextstepaction"]
+> [Security](./13-security.md)
mysql 13 Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/migrate/mysql-on-premises-azure-db/13-security.md
Title: "MySQL on-premises to Azure Database for MySQL migration guide Security"
+ Title: "Migrate MySQL on-premises to Azure Database for MySQL: Security"
description: "Moving to a cloud-based service doesnΓÇÖt mean the entire internet has access to it always."
Previously updated : 06/14/2021 Last updated : 06/21/2021
-# MySQL on-premises to Azure Database for MySQL migration guide Security
+# Migrate MySQL on-premises to Azure Database for MySQL: Security
## Prerequisites
Review a set of potential [security baseline](/azure/mysql/security-baseline) ta
- Utilize private endpoints for workloads that don't travel over the Internet.
+## Next steps
+ > [!div class="nextstepaction"]
+> [Summary](./14-summary.md)
mysql 14 Summary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/migrate/mysql-on-premises-azure-db/14-summary.md
Title: "MySQL on-premises to Azure Database for MySQL migration guide Summary"
+ Title: "Migrate MySQL on-premises to Azure Database for MySQL: Summary"
description: "This document has covered several topics related to migrating an application from on-premises MySQL to Azure Database for MySQL."
Previously updated : 06/14/2021 Last updated : 06/21/2021
-# MySQL on-premises to Azure Database for MySQL migration guide Summary
+# Migrate MySQL on-premises to Azure Database for MySQL: Summary
## Prerequisites
mysql 15 Appendix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/migrate/mysql-on-premises-azure-db/15-appendix.md
Title: "MySQL on-premises to Azure Database for MySQL migration guide appendix"
+ Title: "MySQL on-premises to Azure Database for MySQL sample applications"
description: "Download extra documentation we created for this Migration Guide and learn how to configure."
Previously updated : 06/14/2021 Last updated : 06/21/2021
-# MySQL on-premises to Azure Database for MySQL migration guide appendix
-
-## Prerequisites
-
-[Summary](14-summary.md)
+# Migrate MySQL on-premises to Azure Database for MySQL sample applications
## Overview
network-watcher Network Watcher Nsg Flow Logging Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/network-watcher-nsg-flow-logging-azure-resource-manager.md
> - [Azure Resource Manager](network-watcher-nsg-flow-logging-azure-resource-manager.md)
-[Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/) is Azure's native and powerful way to manage your [infrastructure as code](/azure/devops/learn/what-is-infrastructure-as-code).
+[Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/) is Azure's native and powerful way to manage your [infrastructure as code](/devops/deliver/what-is-infrastructure-as-code).
This article shows how to enable [NSG Flow Logs](./network-watcher-nsg-flow-logging-overview.md) programmatically by using an Azure Resource Manager template and Azure PowerShell. We start by providing an overview of the properties of the NSG Flow Log object, followed by a few sample templates. Then we deploy the template using a local PowerShell instance.
network-watcher Network Watcher Nsg Flow Logging Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/network-watcher-nsg-flow-logging-overview.md
Flow logs are the source of truth for all network activity in your cloud environ
- NSG Flow Logs are written to storage accounts from where they can be accessed.-- You can export, process, analyze, and visualize Flow Logs using tools like TA, Splunk, Grafana, Stealthwatch, etc.
+- You can export, process, analyze, and visualize Flow Logs using tools like Traffic Analytics, Splunk, Grafana, Stealthwatch, etc.
## Log format
network-watcher Quickstart Configure Network Security Group Flow Logs From Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/quickstart-configure-network-security-group-flow-logs-from-arm-template.md
+ Title: 'Quickstart: Configure network security group flow logs by using an Azure Resource Manager template (ARM template)'
+description: Learn how to enable network security group (NSG) flow logs programmatically by using an Azure Resource Manager template (ARM template) and Azure PowerShell.
+++ Last updated : 01/07/2021+++
+ - subject-armqs
+ - mode-arm
+# Customer intent: I need to enable the network security group flow logs by using an Azure Resource Manager template.
++
+# Quickstart: Configure network security group flow logs by using an ARM template
+
+In this quickstart, you learn how to enable [network security group (NSG) flow logs](network-watcher-nsg-flow-logging-overview.md) by using an [Azure Resource Manager](../azure-resource-manager/management/overview.md) template (ARM template) and Azure PowerShell.
++
+We start with an overview of the properties of the NSG flow log object. We provide sample templates. Then, we use a local Azure PowerShell instance to deploy the template.
+
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal.
+
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.network%2Fnetworkwatcher-flowLogs-create%2Fazuredeploy.json)
+
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the template
+
+The template that we use in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/networkwatcher-flowlogs-create/).
++
+These resources are defined in the template:
+
+- [Microsoft.Storage/storageAccounts](/azure/templates/microsoft.storage/storageaccounts)
+- [Microsoft.Resources/deployments](/azure/templates/microsoft.resources/deployments)
+
+## NSG flow logs object
+
+The following code shows an NSG flow logs object and its parameters. To create a `Microsoft.Network/networkWatchers/flowLogs` resource, add this code to the resources section of your template:
+
+```json
+{
+ "name": "string",
+ "type": "Microsoft.Network/networkWatchers/flowLogs",
+ "location": "string",
+ "apiVersion": "2019-09-01",
+ "properties": {
+ "targetResourceId": "string",
+ "storageId": "string",
+ "enabled": "boolean",
+ "flowAnalyticsConfiguration": {
+ "networkWatcherFlowAnalyticsConfiguration": {
+ "enabled": "boolean",
+ "workspaceResourceId": "string",
+ "trafficAnalyticsInterval": "integer"
+ },
+ "retentionPolicy": {
+ "days": "integer",
+ "enabled": "boolean"
+ },
+ "format": {
+ "type": "string",
+ "version": "integer"
+ }
+ }
+ }
+}
+```
+
+For a complete overview of the NSG flow logs object properties, see [Microsoft.Network networkWatchers/flowLogs](/azure/templates/microsoft.network/networkwatchers/flowlogs).
+
+## Create your template
+
+If you're using ARM templates for the first time, see the following articles to learn more about ARM templates:
+
+- [Deploy resources with ARM templates and Azure PowerShell](../azure-resource-manager/templates/deploy-powershell.md#deploy-local-template)
+- [Tutorial: Create and deploy your first ARM template](../azure-resource-manager/templates/template-tutorial-create-first-template.md)
+
+The following example is a complete template. It's also the simplest version of the template. The example contains the minimum parameters that are passed to set up NSG flow logs. For more examples, see the overview article [Configure NSG flow logs from an Azure Resource Manager template](network-watcher-nsg-flow-logging-azure-resource-manager.md).
+
+### Example
+
+The following template enables flow logs for an NSG, and then stores the logs in a specific storage account:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "apiProfile": "2019-09-01",
+ "resources": [
+ {
+ "name": "NetworkWatcher_centraluseuap/Microsoft.NetworkDalanDemoPerimeterNSG",
+ "type": "Microsoft.Network/networkWatchers/FlowLogs/",
+ "location": "centraluseuap",
+ "apiVersion": "2019-09-01",
+ "properties": {
+ "targetResourceId": "/subscriptions/<subscription Id>/resourceGroups/DalanDemo/providers/Microsoft.Network/networkSecurityGroups/PerimeterNSG",
+ "storageId": "/subscriptions/<subscription Id>/resourceGroups/MyCanaryFlowLog/providers/Microsoft.Storage/storageAccounts/storagev2ira",
+ "enabled": true,
+ "flowAnalyticsConfiguration": {},
+ "retentionPolicy": {},
+ "format": {}
+ }
+ }
+ ]
+}
+```
+
+> [!NOTE]
+> - The resource name uses the format _ParentResource_ChildResource_. In our example, the parent resource is the regional Azure Network Watcher instance:
+> - **Format**: NetworkWatcher_RegionName
+> - **Example**: NetworkWatcher_centraluseuap
+> - `targetResourceId` is the resource ID of the target NSG.
+> - `storageId` is the resource ID of the destination storage account.
+
+## Deploy the template
+
+This quickstart assumes that you have an existing resource group and an NSG that you can enable flow logging on.
+
+You can save any of the example templates that are shown in this article locally as *azuredeploy.json*. Update the property values so they point to valid resources in your subscription.
+
+To deploy the template, run the following command in Azure PowerShell:
+
+```azurepowershell-interactive
+$context = Get-AzSubscription -SubscriptionId <subscription Id>
+Set-AzContext $context
+New-AzResourceGroupDeployment -Name EnableFlowLog -ResourceGroupName NetworkWatcherRG `
+ -TemplateFile "C:\MyTemplates\azuredeploy.json"
+```
+
+> [!NOTE]
+> These commands deploy a resource to the example NetworkWatcherRG resource group, and not to the resource group that contains the NSG.
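+
+If you use the Azure CLI instead of PowerShell, a sketch of the equivalent deployment, assuming the same resource group and a local *azuredeploy.json*:
+
+```azurecli-interactive
+az deployment group create --name EnableFlowLog \
+    --resource-group NetworkWatcherRG \
+    --template-file azuredeploy.json
+```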
+
+## Validate the deployment
+
+You have two options to see whether your deployment succeeded:
+
+- Your PowerShell console shows `ProvisioningState` as `Succeeded`.
+- Go to the [NSG flow logs portal page](https://ms.portal.azure.com/#blade/Microsoft_Azure_Network/NetworkWatcherMenuBlade/flowLogs) to confirm your changes.
+
+If there were issues with the deployment, see [Troubleshoot common Azure deployment errors with Azure Resource Manager](../azure-resource-manager/templates/common-deployment-errors.md).
+
+## Clean up resources
+
+You can delete Azure resources by using complete deployment mode. To delete a flow logs resource, specify a deployment in complete mode without including the resource you want to delete. Read more about [complete deployment mode](../azure-resource-manager/templates/deployment-modes.md#complete-mode).
+
+You also can disable an NSG flow log in the Azure portal:
+
+1. Sign in to the Azure portal.
+1. Select **All services**. In the **Filter** box, enter **network watcher**. In the search results, select **Network Watcher**.
+1. Under **Logs**, select **NSG flow logs**.
+1. In the list of NSGs, select the NSG for which you want to disable flow logs.
+1. Under **Flow logs settings**, select **Off**.
+1. Select **Save**.
+
+## Next steps
+
+In this quickstart, you learned how to enable NSG flow logs by using an ARM template. Next, learn how to visualize your NSG flow data by using one of these options:
+
+- [Microsoft Power BI](network-watcher-visualize-nsg-flow-logs-power-bi.md)
+- [Open-source tools](network-watcher-visualize-nsg-flow-logs-open-source-tools.md)
+- [Azure Traffic Analytics](traffic-analytics.md)
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
| Azure Search (Microsoft.Search/searchServices) / searchService | privatelink.search.windows.net | search.windows.net |
| Azure Container Registry (Microsoft.ContainerRegistry/registries) / registry | privatelink.azurecr.io | azurecr.io |
| Azure App Configuration (Microsoft.AppConfiguration/configurationStores) / configurationStores | privatelink.azconfig.io | azconfig.io |
-| Azure Backup (Microsoft.RecoveryServices/vaults) / vault | privatelink.{region}.backup.windowsazure.com | {region}.backup.windowsazure.com |
-| Azure Site Recovery (Microsoft.RecoveryServices/vaults) / vault | {region}.privatelink.siterecovery.windowsazure.com | {region}.hypervrecoverymanager.windowsazure.com |
+| Azure Backup (Microsoft.RecoveryServices/vaults) / AzureBackup | privatelink.{region}.backup.windowsazure.com | {region}.backup.windowsazure.com |
+| Azure Site Recovery (Microsoft.RecoveryServices/vaults) / AzureSiteRecovery | {region}.privatelink.siterecovery.windowsazure.com | {region}.hypervrecoverymanager.windowsazure.com |
| Azure Event Hubs (Microsoft.EventHub/namespaces) / namespace | privatelink.servicebus.windows.net | servicebus.windows.net |
| Azure Service Bus (Microsoft.ServiceBus/namespaces) / namespace | privatelink.servicebus.windows.net | servicebus.windows.net |
| Azure IoT Hub (Microsoft.Devices/IotHubs) / iotHub | privatelink.azure-devices.net<br/>privatelink.servicebus.windows.net<sup>1</sup> | azure-devices.net<br/>servicebus.windows.net |
private-multi-access-edge-compute-mec Affirmed Private Network Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-multi-access-edge-compute-mec/affirmed-private-network-service-overview.md
Title: 'What is Affirmed Private Network Service on Azure?' description: Learn about Affirmed Private Network Service solutions on Azure for private LTE/5G networks. Last updated 06/16/2021
private-multi-access-edge-compute-mec Deploy Affirmed Private Network Service Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-multi-access-edge-compute-mec/deploy-affirmed-private-network-service-solution.md
Title: 'Deploy Affirmed Private Network Service on Azure' description: Learn how to deploy the Affirmed Private Network Service solution on Azure. Last updated 06/16/2021
private-multi-access-edge-compute-mec Deploy Metaswitch Fusion Core Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-multi-access-edge-compute-mec/deploy-metaswitch-fusion-core-solution.md
Title: 'Deploy Fusion Core on an Azure Stack Edge device' description: Learn how to deploy cloud solutions from Microsoft Azure and Metaswitch Networks that can help future-proof your network, drive down costs, and create new business models and revenue streams. Last updated 06/16/2021
private-multi-access-edge-compute-mec Metaswitch Fusion Core Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-multi-access-edge-compute-mec/metaswitch-fusion-core-overview.md
Title: Fusion Core solution in Azure description: An overview of Fusion Core - a cloud native implementation of the 3GPP standards-defined 5G Next Generation Core (5G NGC or 5GC) that allows 5G network operators to aggregate data traffic from all end devices over multiple wireless and fixed access technologies.
private-multi-access-edge-compute-mec Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-multi-access-edge-compute-mec/overview.md
Title: 'Azure private multi-access edge compute' description: Learn about the Azure private multi-access edge compute (MEC) solution that brings together a portfolio of Microsoft compute, networking and application services managed from the cloud. Last updated 06/16/2021
private-multi-access-edge-compute-mec Partner Programs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-multi-access-edge-compute-mec/partner-programs.md
Title: 'Azure private multi-access edge compute partner solutions' description: Learn about Azure multi-access edge compute partner programs.
purview Create A Custom Classification And Classification Rule https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/create-a-custom-classification-and-classification-rule.md
Title: Create a custom classification and classification rule (preview) description: Learn how to create custom classifications to define data types in your data estate that are unique to your organization in Azure Purview.
purview Reference Purview Glossary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/reference-purview-glossary.md
Previously updated : 06/08/2021 Last updated : 06/21/2021

# Azure Purview product glossary
A path that defines the location of an asset within its data source.  
An entry in the Business glossary that defines a concept specific to an organization. Glossary terms can contain information on synonyms, acronyms, and related terms.
## Insights
An area within Azure Purview where you can view reports that summarize information about your data.
+## Integration runtime
+The compute infrastructure used to scan a data source.
## Lineage
How data transforms and flows as it moves from its origin to its destination. Understanding this flow across the data estate helps organizations see the history of their data, and aids in troubleshooting or impact analysis.
## Management Center
An area within Azure Purview where you can manage connections, users, roles, and
## On-premises data
Data that is in a data center controlled by a customer, for example, not in the cloud or offered as software as a service (SaaS).
## Owner
-An individual or group in charge of managing a data asset.  
+An individual or group in charge of managing a data asset.
+## Pattern rule
+A configuration that overrides how Azure Purview groups assets as resource sets and displays them within the catalog.
## Purview instance
A single Azure Purview resource.
## Registered source
Glossary terms that are linked to other terms within the organization.  
## Resource set
A single asset that represents many partitioned files or objects in storage. For example, Azure Purview stores partitioned Apache Spark output as a single resource set instead of unique assets for each individual file.
## Role
-Permissions assigned to a user within an Azure Purview instance. Roles, such as Purview Data Curator or Purview Data Reader, determine what can be done within the product. 
+Permissions assigned to a user within an Azure Purview instance. Roles, such as Purview Data Curator or Purview Data Reader, determine what can be done within the product.
## Scan
An Azure Purview process that examines a source or set of sources and ingests its metadata into the data catalog. Scans can be run manually or on a schedule using a scan trigger.
## Scan ruleset
A set of rules that define which data types and classifications a scan ingests into a catalog.
## Scan trigger
-A schedule that determines the recurrence of when a scan runs. 
+A schedule that determines the recurrence of when a scan runs.
+## Search relevance
+The scoring of data assets that determines the order in which search results are returned. Multiple factors determine an asset's relevance score.
+## Self-hosted integration runtime
+An integration runtime installed on an on-premises machine or virtual machine inside a private network that is used to connect to data on-premises or in a private network.
## Sensitivity label
Annotations that classify and protect an organization’s data. Azure Purview integrates with Microsoft Information Protection for creation of sensitivity labels.
## Sensitivity label report
A system where data is stored. Sources can be hosted in various places such
A categorization of the registered sources used in an Azure Purview instance, for example, Azure SQL Database, Azure Blob Storage, Amazon S3, or SAP ECC.
## Steward
An individual who defines the standards for a glossary term. They are responsible for maintaining quality standards, nomenclature, and rules for the assigned entity.
+## Term template
+A definition of attributes included in a glossary term. Users can either use the system-defined term template or create their own to include custom attributes.
## Next steps
To get started with Azure Purview, see [Quickstart: Create an Azure Purview account](create-catalog-portal.md).
purview Supported Classifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/supported-classifications.md
Title: List of supported classifications description: This page lists the supported system classifications in Azure Purview.
purview Tutorial Schemas And Classifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/tutorial-schemas-and-classifications.md
Title: 'Tutorial: Explore resource sets, details, schemas, and classifications in Azure Purview (preview)' description: This tutorial describes how to use resource sets, asset details, schemas, and classifications.
resource-mover Modify Target Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/resource-mover/modify-target-settings.md
This article describes how to modify destination settings when moving resources
## Modify VM settings
-When moving Azure VMs and associated resources, you can modify the destination settings.
+You can modify destination settings when moving Azure VMs and associated resources. We recommend:
-- We recommend that you only change destination settings after the move collection is validated.-- We recommend that you modify settings before preparing the resources, because some destination properties might be unavailable for edit after prepare is complete.-
-However:
-- If you're moving the source resource, you can usually modify destination settings until you start the initiate move process.-- If you assign an existing resource in the source region, you can modify destination settings until the move commit is complete.
+- That you only change destination settings after the move collection is validated. However:
+ - If you're moving the source resource, you can usually modify these settings until you start the initiate move process.
+ - If you assign an existing resource in the source region, you can modify destination settings until the move commit is complete.
+- That you modify settings before preparing the resources, because some destination properties might be unavailable for edit after prepare is complete.
### Settings you can modify

Configuration settings you can modify are summarized in the table.

**Resource** | **Options**
- | |
-**VM name** | Options:<br/><br/> - Create a new VM with the same name in the destination region.<br/><br/> - Create a new VM with a different name in the destination region.<br/><br/> - Use an existing VM in the destination region.<br/><br/> If you create a new VM, with the exception of the settings you modify, the new destination VM is assigned the same settings as the source.
+ |
+**VM name** | Destination options:<br/><br/> - Create a new VM with the same name in the destination region.<br/><br/> - Create a new VM with a different name in the destination region.<br/><br/> - Use an existing VM in the destination region.<br/><br/> If you create a new VM, with the exception of the settings you modify, the new destination VM is assigned the same settings as the source.
**VM availability zone** | The availability zone in which the destination VM will be placed. Select **Not applicable** if you don't want to change the source settings, or if you don't want to place the VM in an availability zone.
**VM SKU** | The [VM type](https://azure.microsoft.com/pricing/details/virtual-machines/series/) (available in the destination region) that will be used for the destination VM.<br/><br/> The selected destination VM shouldn't be smaller than the source VM.
-**VM availability set | The availability set in which the destination VM will be placed. Select **Not applicable** you donΓÇÖt want to change the source settings, or if you donΓÇÖt want to place the VM in an availability set.
+**VM availability set** | The availability set in which the destination VM will be placed. Select **Not applicable** if you don't want to change the source settings, or if you don't want to place the VM in an availability set.
**VM key vault** | The associated key vault when you enable Azure disk encryption on a VM.
**Disk encryption set** | The associated disk encryption set if the VM uses a customer-managed key for server-side encryption.
**Resource group** | The resource group in which the destination VM will be placed.
**Networking resources** | Options for network interfaces, virtual networks (VNets), and network security groups/network interfaces:<br/><br/> - Create a new resource with the same name in the destination region.<br/><br/> - Create a new resource with a different name in the destination region.<br/><br/> - Use an existing networking resource in the destination region.<br/><br/> If you create a new destination resource, with the exception of the settings you modify, it's assigned the same settings as the source resource.
**Public IP address name, SKU, and zone** | Specifies the name, [SKU](../virtual-network/public-ip-addresses.md#sku), and [zone](../virtual-network/public-ip-addresses.md#standard) for standard public IP addresses.<br/><br/> If you want it to be zone redundant, enter as **Zone redundant**.
-**Load balancer name, SKU, and zone ** | Specifies the name, SKU (Basic or Standard), and zone for the load balancer.<br/><br/> We recommend using Standard sKU.<br/><br/> If you want it to be zone redundant, specify as **Zone redundant**.
+**Load balancer name, SKU, and zone** | Specifies the name, SKU (Basic or Standard), and zone for the load balancer.<br/><br/> We recommend using Standard SKU.<br/><br/> If you want it to be zone redundant, specify as **Zone redundant**.
**Resource dependencies** | Options for each dependency:<br/><br/>- The resource uses source dependent resources that will move to the destination region.<br/><br/> - The resource uses different dependent resources located in the destination region. In this case, you can choose from any similar resources in the destination region.

### Edit VM destination settings
-If you don't want to dependent resources from the source region to the destination, you have a couple of other options:
+If you don't want to move dependent resources from the source region to the destination, you have a couple of other options:
- Create a new resource in the destination region. Unless you specify different settings, the new resource will have the same settings as the source resource.
- Use an existing resource in the destination region.
You modify the destination settings for an Azure SQL Database resource as follows
1. In **Across regions**, for the resource you want to modify, click the **Destination configuration** entry.
2. In **Configuration settings**, specify the destination settings summarized in the table above.
+## Modify settings in PowerShell
+
+You can modify settings in PowerShell.
+
+1) Retrieve the move resource for which you want to edit properties. For example, to retrieve a VM, run:
+
+ ```azurepowershell
+ $moveResourceObj = Get-AzResourceMoverMoveResource -MoveCollectionName "PS-centralus-westcentralus-demoRMS1" -ResourceGroupName "RG-MoveCollection-demoRMS" -Name "PSDemoVM"
+ ```
+2) Copy the resource setting to a target resource setting object.
+
+ ```azurepowershell
+ $TargetResourceSettingObj = $moveResourceObj.ResourceSetting
+ ```
+
+3) Set the parameter in the target resource setting object. For example, to change the name of the destination VM:
+
+ ```azurepowershell
+ $TargetResourceSettingObj.TargetResourceName="PSDemoVM-target"
+ ```
+
+4) Update the move resource destination settings. In this example, we change the name of the VM from *PSDemoVM* to *PSDemoVM-target*.
+
+ ```azurepowershell
+ Update-AzResourceMoverMoveResource -ResourceGroupName "RG-MoveCollection-demoRMS" -MoveCollectionName "PS-centralus-westcentralus-demoRMS" -SourceId "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PSDemoRM/providers/Microsoft.Compute/virtualMachines/PSDemoVM" -Name "PSDemoVM" -ResourceSetting $TargetResourceSettingObj
+ ```
+ **Output**
+ ![Output text after modifying destination settings](./media/modify-target-settings/update-settings.png)
## Next steps

[Move an Azure VM](tutorial-move-region-virtual-machines.md) to another region.
resource-mover Move Region Within Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/resource-mover/move-region-within-resource-group.md
In this article, learn how to move resources in a specific resource group to a different Azure region. In the resource group, you select the resources you want to move. Then, you move them using [Azure Resource Mover](overview.md).
-> [!IMPORTANT]
-> Azure Resource Mover is currently in public preview.
## Prerequisites

- You need *Owner* access on the subscription in which resources you want to move are located.
Prepare as follows:
1. In **Across regions**, select the source resource group > **Prepare**.
2. In **Prepare resources**, select **Prepare**.
-1.
+ ![Button to prepare the source resource group](./media/move-region-within-resource-group/prepare-source-resource-group.png)

During the Prepare process, Resource Mover generates Azure Resource Manager (ARM) templates using the resource group settings. Resources inside the resource group aren't affected.
Initiate the move as follows:
2. In **Move Resources**, select **Initiate move**. The resource group moves into an *Initiate move in progress* state.
3. After initiating the move, the target resource group is created, based on the generated ARM template. The source resource group moves into a *Commit move pending* state.
-![Status showing commit move](./media/move-region-availability-zone/commit-move-pending.png)
+ ![Status showing commit move](./media/move-region-availability-zone/commit-move-pending.png)
To commit and finish the move process:
resource-mover Remove Move Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/resource-mover/remove-move-resources.md
Remove multiple resources as follows:
1. Validate dependencies:
- ````azurepowershell-interactive
+ ```azurepowershell-interactive
$resp = Invoke-AzResourceMoverBulkRemove -ResourceGroupName "RG-MoveCollection-demoRMS" -MoveCollectionName "PS-centralus-westcentralus-demoRMS" -MoveResource $('psdemorm-vnet') -ValidateOnly
```
Remove multiple resources as follows:
2. Retrieve the dependent resources that need to be removed (along with our example virtual network psdemorm-vnet):
- ````azurepowershell-interactive
+ ```azurepowershell-interactive
$resp.AdditionalInfo[0].InfoMoveResource
```

**Output after running cmdlet**
- ![Output text after removing multiple resources from a move collection](./media/remove-move-resources/remove-multiple-get-dependencies.png)
+ ![Output text after retrieving dependent resources that need to be removed](./media/remove-move-resources/remove-multiple-get-dependencies.png)
3. Remove all resources, along with the virtual network:
- ````azurepowershell-interactive
+ ```azurepowershell-interactive
Invoke-AzResourceMoverBulkRemove -ResourceGroupName "RG-MoveCollection-demoRMS" -MoveCollectionName "PS-centralus-westcentralus-demoRMS" -MoveResource $('PSDemoVM','psdemovm111', 'PSDemoRM-vnet','PSDemoVM-nsg')
```
Remove multiple resources as follows:
Remove an entire move collection from the subscription, as follows:

1. Follow the instructions above to remove resources in the collection using PowerShell.
-2. Run:
+2. Remove a collection as follows:
```azurepowershell-interactive
Remove-AzResourceMoverMoveCollection -ResourceGroupName "RG-MoveCollection-demoRMS" -MoveCollectionName "PS-centralus-westcentralus-demoRMS"
```
resource-mover Tutorial Move Region Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/resource-mover/tutorial-move-region-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
Most move resources operations are the same whether using the Azure portal or PowerShell, with a couple of exceptions.
-**Operation** | **PowerShell** | **Portal**
+**Operation** | **Portal** | **PowerShell**
| |
**Create a move collection** | A move collection (a list of all the resources you're moving) is created automatically. Required identity permissions are assigned in the backend by the portal. | You use PowerShell cmdlets to:<br/><br/> - Create a resource group for the move collection and specify the location for it.<br/><br/> - Assign a managed identity to the collection.<br/><br/> - Add resources to the collection.
**Remove a move collection** | You can't directly remove a move collection in the portal. | You use a PowerShell cmdlet to remove a move collection.
Invoke-AzResourceMoverDiscard -ResourceGroupName "RG-MoveCollection-demoRMS" -Mo
## Delete source resources
-After committing the move, and verifying that resources work as expected in the target region, you can delete each source resource in the [Azure portal](../azure-resource-manager/management/manage-resources-portal.md#delete-resources), [using PowerShell](../azure-resource-manager/management/manage-resources-powershell.md#delete-resources), or [Azure CLI](../azure-resource-manager/management/manage-resources-cli.md#delete-resources).
+After committing the move, and verifying that resources work as expected in the target region, you can delete each source resource in the [Azure portal](../azure-resource-manager/management/manage-resources-portal.md#delete-resources), using [PowerShell](../azure-resource-manager/management/manage-resources-powershell.md#delete-resources), or using [Azure CLI](../azure-resource-manager/management/manage-resources-cli.md#delete-resources).
## Next steps
search Search Capacity Planning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-capacity-planning.md
Last updated 06/18/2021
# Estimate and manage capacity of an Azure Cognitive Search service
-Before [provisioning a search service](search-create-service-portal.md) and locking in a specific pricing tier, take a few minutes to understand how capacity works and how you might adjust replicas and partitions to accommodate workload fluctuation.
+Before you [create a search service](search-create-service-portal.md) and lock in a specific [pricing tier](search-sku-tier.md), take a few minutes to understand how capacity works and how you might adjust replicas and partitions to accommodate workload fluctuation.
In Azure Cognitive Search, capacity is based on *replicas* and *partitions*. Replicas are copies of the search engine. Partitions are units of storage. Each new search service starts with one each, but you can scale up each resource independently to accommodate fluctuations in indexing and query workloads. Adding either resource is an [added cost](search-sku-manage-costs.md).
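
For example, a hedged sketch of adjusting both values with the Az.Search PowerShell module (resource names are placeholders):

```azurepowershell
# Scale an existing service; billable search units = replicas x partitions.
Set-AzSearchService -ResourceGroupName "<resource-group>" -Name "<search-service>" `
    -ReplicaCount 2 -PartitionCount 2
```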
search Search Faq Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-faq-frequently-asked-questions.md
- Title: Frequently asked questions (FAQ)-
-description: Get answers to common questions about Microsoft Azure Cognitive Search service, a cloud hosted search service on Microsoft Azure.
------ Previously updated : 04/10/2020--
-# Azure Cognitive Search - frequently asked questions (FAQ)
-
- Find answers to commonly asked questions about concepts, code, and scenarios related to Azure Cognitive Search.
-
-## Platform
-
-### How is Azure Cognitive Search different from full text search in my DBMS?
-
-Azure Cognitive Search supports multiple data sources, [linguistic analysis for many languages](/rest/api/searchservice/language-support), [custom analysis for interesting and unusual data inputs](/rest/api/searchservice/custom-analyzers-in-azure-search), search rank controls through [scoring profiles](/rest/api/searchservice/add-scoring-profiles-to-a-search-index), and user-experience features such as typeahead, hit highlighting, and faceted navigation. It also includes other features, such as synonyms and rich query syntax, but those are generally not differentiating features.
-
-### Can I pause Azure Cognitive Search service and stop billing?
-
-You cannot pause the service. Computational and storage resources are allocated for your exclusive use when the service is created. It's not possible to release and reclaim those resources on-demand.
-
-## Indexing Operations
-
-### Move, backup, and restore indexes or index snapshots?
-
-During the development phase, you may want to move your index between search services. For example, you may use a Basic or Free pricing tier to develop your index, and then want to move it to the Standard or higher tier for production use.
-
-Or, you may want to backup an index snapshot to files that can be used to restore it later.
-
-You can do all these things with the **index-backup-restore** sample code in this [Azure Cognitive Search .NET sample repo](https://github.com/Azure-Samples/azure-search-dotnet-samples).
-
-You can also [get an index definition](/rest/api/searchservice/get-index) at any time using the Azure Cognitive Search REST API.
-
-There is currently no built-in index extraction, snapshot, or backup-restore feature in the Azure portal. However, we are considering adding the backup and restore functionality in a future release. If you want show your support for this feature, cast a vote on [User Voice](https://feedback.azure.com/forums/263029-azure-search/suggestions/8021610-backup-snapshot-of-index).
-
-### Can I restore my index or service once it is deleted?
-
-No, if you delete an Azure Cognitive Search index or service, it cannot be recovered. When you delete an Azure Cognitive Search service, all indexes in the service are deleted permanently. If you delete an Azure resource group that contains one or more Azure Cognitive Search services, all services are deleted permanently.
-
-Recreating resources such as indexes, indexers, data sources, and skillsets requires that you recreate them from code.
-
-To recreate an index, you must re-index data from external sources. For this reason, it is recommended that you retain a master copy or backup of the original data in another data store, such as Azure SQL Database or Cosmos DB.
-
-As an alternative, you can use the **index-backup-restore** sample code in this [Azure Cognitive Search .NET sample repo](https://github.com/Azure-Samples/azure-search-dotnet-samples) to back up an index definition and index snapshot to a series of JSON files. Later, you can use the tool and files to restore the index, if needed.
-
-### Can I index from SQL Database replicas (Applies to [Azure SQL Database indexers](./search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md))
-
-There are no restrictions on the use of primary or secondary replicas as a data source when building an index from scratch. However, refreshing an index with incremental updates (based on changed records) requires the primary replica. This requirement comes from SQL Database, which guarantees change tracking on primary replicas only. If you try using secondary replicas for an index refresh workload, there is no guarantee you get all of the data.
-
-## Search Operations
-
-### Can I search across multiple indexes?
-
-No, this operation is not supported. Search is always scoped to a single index.
-
-### Can I restrict search index access by user identity?
-
-You can implement [security filters](./search-security-trimming-for-azure-search.md) with `search.in()` filter. The filter composes well with [identity management services like Azure Active Directory(AAD)](./search-security-trimming-for-azure-search-with-aad.md) to trim search results based on defined user group membership.
-
-### Why are there zero matches on terms I know to be valid?
-
-The most common case is not knowing that each query type supports different search behaviors and levels of linguistic analyses. Full text search, which is the predominant workload, includes a language analysis phase that breaks down terms to root forms. This aspect of query parsing casts a broader net over possible matches, because the tokenized term matches a greater number of variants.
-
-Wildcard, fuzzy and regex queries, however, aren't analyzed like regular term or phrase queries and can lead to poor recall if the query does not match the analyzed form of the word in the search index. For more information on query parsing and analysis, see [query architecture](./search-lucene-query-architecture.md).
-
-### My wildcard searches are slow.
-
-Most wildcard search queries, like prefix, fuzzy and regex, are rewritten internally with matching terms in the search index. This extra processing of scanning the search index adds to latency. Further, broad search queries, like `a*` for example, that are likely to be rewritten with many terms can be very slow. For performant wildcard searches, consider defining a [custom analyzer](/rest/api/searchservice/custom-analyzers-in-azure-search).
-
-### Why is the search rank a constant or equal score of 1.0 for every hit?
-
-By default, search results are scored based on the [statistical properties of matching terms](search-lucene-query-architecture.md#stage-4-scoring), and ordered high to low in the result set. However, some query types (wildcard, prefix, regex) always contribute a constant score to the overall document score. This behavior is by design. Azure Cognitive Search imposes a constant score to allow matches found through query expansion to be included in the results, without affecting the ranking.
-
-For example, suppose an input of "tour*" in a wildcard search produces matches on "tours", "tourettes", and "tourmaline". Given the nature of these results, there is no way to reasonably infer which terms are more valuable than others. For this reason, we ignore term frequencies when scoring results in queries of types wildcard, prefix, and regex. Search results based on a partial input are given a constant score to avoid bias towards potentially unexpected matches.
-
-## Skillset Operations
-
-### Are there any tips or tricks to reduce cognitive services charges on ingestion?
-
-It is understandable that you don't want to execute built-in skills or custom skills more than is absolutely necessary, especially if you are dealing with millions of documents to process. With that in mind, we have added "incremental enrichment" capabilities to skillset execution. In essence, you can provide a cache location (a blob storage connection string) that will be used to store the output of "intermediate" enrichment steps. That allows the enrichment pipeline to be smart and apply only enrichments that are necessary when you modify your skillset. This will naturally also save indexing time as the pipeline will be more efficient.
-
-Learn more about [incremental enrichment](cognitive-search-incremental-indexing-conceptual.md)
-
-## Design patterns
-
-### What is the best approach for implementing localized search?
-
-Most customers choose dedicated fields over a collection when it comes to supporting different locales (languages) in the same index. Locale-specific fields make it possible to assign an appropriate analyzer. For example, assigning the Microsoft French Analyzer to a field containing French strings. It also simplifies filtering. If you know a query is initiated on a fr-fr page, you could limit search results to this field. Or, create a [scoring profile](/rest/api/searchservice/add-scoring-profiles-to-a-search-index) to give the field more relative weight. Azure Cognitive Search supports over [50 language analyzers](./search-language-support.md) to choose from.
-
-## Next steps
-
-Is your question about a missing feature or functionality? Request the feature on the [User Voice web site](https://feedback.azure.com/forums/263029-azure-search).
-
-## See also
-
- [StackOverflow: Azure Cognitive Search](https://stackoverflow.com/questions/tagged/azure-search)
- [How full text search works in Azure Cognitive Search](search-lucene-query-architecture.md)
- [What is Azure Cognitive Search?](search-what-is-azure-search.md)
search Search Security Api Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-security-api-keys.md
Previously updated : 04/08/2021 Last updated : 06/21/2021
-# Create and manage API keys for authentication to Azure Cognitive Search
+# Use API keys for Azure Cognitive Search authentication
-When connecting to a search service, all requests need to include a read-only API key that was generated specifically for your service. The API key is the sole mechanism for authenticating inbound requests to your search service endpoint and is required on every request.
+Key-based authentication uses access keys (or an *API key* as it's called in Cognitive Search) that are unique to your service to authenticate requests. Passing a valid API key on the request is considered proof that the request is from an authorized client. In Cognitive Search, key-based authentication is used for all inbound operations.
-+ In [REST solutions](search-get-started-rest.md), the `api-key` is typically specified in a request header
+> [!NOTE]
+> Alternative [role-based authentication](search-security-rbac.md) is currently limited to two scenarios: portal access, and outbound indexer data read operations.
+
+## Using API keys in search
+
+When connecting to a search service, all requests must include an API key that was generated specifically for your service. The API key is the sole mechanism for authenticating inbound requests to your search service endpoint and is required on every request.
++ In [REST solutions](search-get-started-rest.md), the API key is typically specified in a request header
++ In [.NET solutions](search-howto-dotnet-sdk.md), a key is often specified as a configuration setting and then passed as an [AzureKeyCredential](/dotnet/api/azure.azurekeycredential)
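+
+As an illustration of the REST case, here's a minimal sketch that passes the key in the `api-key` request header (the service, index, and key values are placeholders):
+
+```azurepowershell
+# A valid key authenticates the request; an invalid key returns 403 Forbidden.
+$headers = @{ "api-key" = "<query-api-key>" }
+$url = "https://<service-name>.search.windows.net/indexes/<index-name>/docs?api-version=2020-06-30&search=*"
+Invoke-RestMethod -Uri $url -Headers $headers -Method Get
+```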
You can view and manage API keys in the [Azure portal](https://portal.azure.com)
## What is an API key?
-An API key is a unique string composed of randomly generated numbers and letters that is passed on every request to the search service. The service will accept the request, if both the request itself and the key are valid.
+An API key is a unique string composed of randomly generated numbers and letters that is passed on every request to the search service. The service accepts the request if both the request itself and the key are valid.
Two types of keys are used to access your search service: admin (read-write) and query (read-only).
search Search Security Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-security-rbac.md
Title: Authorize access through Azure roles
+ Title: Azure role-based authorization
description: Azure role-based access control (Azure RBAC) in the Azure portal for controlling and delegating administrative tasks for Azure Cognitive Search management.
Previously updated : 05/28/2021 Last updated : 06/21/2021
-# Authorize access through Azure roles in Azure Cognitive Search
+# Use Azure role-based authentication in Azure Cognitive Search
-Azure provides a [global role-based authorization model](../role-based-access-control/role-assignments-portal.md) for all services managed through the portal or Resource Manager APIs. The authorization model provides Owner, Contributor, and Reader roles, which determine the level of *service administration* for Active Directory users, groups, and security principals assigned to each role. Cognitive Search uses these three roles to authorize access for search service administration.
+Azure provides a [global role-based authorization (RBAC) model](../role-based-access-control/role-assignments-portal.md) for all services managed through the portal or Resource Manager APIs. In Azure Cognitive Search, you can use RBAC in two scenarios:
-Cognitive Search does not support:
++ Portal access. Role membership determines the level of *service administration* rights.
-+ [Custom roles](../role-based-access-control/custom-roles.md).
-+ Role-based access control (Azure RBAC) over content-related operations, such as creating or querying an index, or any other object on the service.
++ Outbound indexer access to external Azure data sources. When you [configure a managed identity](search-howto-managed-identities-data-sources.md), you can use RBAC on external data services, such as Azure Blob Storage, to allow read operations from the trusted search service.
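+
+For the second scenario, a minimal sketch of granting the search service's managed identity read access to a storage account (the resource names and scope are placeholder assumptions, and the service is assumed to already have a system-assigned identity):
+
+```azurepowershell
+# Look up the search service resource, then assign a data-plane read role
+# to its managed identity at the storage account scope.
+$search = Get-AzResource -ResourceType "Microsoft.Search/searchServices" -Name "<search-service>"
+New-AzRoleAssignment -ObjectId $search.Identity.PrincipalId `
+    -RoleDefinitionName "Storage Blob Data Reader" `
+    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
+```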
- Authorization for performing content operations requires either an [admin API key or query API key](search-security-api-keys.md).
+RBAC scenarios that are **not** supported include:
-> [!Note]
-> For identity-based access over search results (sometimes referred to as row-level security), you can create security filters to trim results by identity, removing documents for which the requestor should not have access. For more information, see [Security filters](search-security-trimming-for-azure-search.md).
++ [Custom roles](../role-based-access-control/custom-roles.md)
-## Roles used in Cognitive Search
++ Inbound requests to the search service, such as creating or querying an index (use [key-based authentication](search-security-api-keys.md) instead)
-For Azure Cognitive Search, roles are associated with permission levels that support the following management tasks:
++ User-identity access over search results (sometimes referred to as row-level security)
+ For document-level security, you can create [security filters](search-security-trimming-for-azure-search.md) to trim results by identity, removing documents for which the requestor should not have access.
+
+## Azure roles used in Search
+
+Azure roles include Owner, Contributor, and Reader, where role membership consists of Azure Active Directory users and groups. In Azure Cognitive Search, roles are associated with permission levels that support the following management tasks:
| Role | Task |
| | |
-| Owner |Create or delete the service. Create, update, or delete any object on the service: API keys, indexes, synonym maps, indexers, indexer data sources, and skillsets. </br></br>Full access to all service information exposed in the portal or through the Management REST API, Azure PowerShell, or Azure CLI. </br></br>Assign role membership.</br></br>Subscription administrators and service owners have automatic membership in the Owners role. |
+| Owner |Create or delete the service. Create, update, or delete any object on the service: API keys, indexes, synonym maps, indexers, indexer data sources, and skillsets. </br></br>Full access to all service information exposed in the portal or through the Management REST API, Azure PowerShell, or Azure CLI. </br></br>Assign role membership. </br></br>Subscription administrators and service owners have automatic membership in the Owners role. |
| Contributor | Same level of access as Owner, minus role assignments. [Search Service Contributor](../role-based-access-control/built-in-roles.md#search-service-contributor) is equivalent to the generic Contributor built-in role. |
| Reader | Limited access to partial service information. In the portal, the Reader role can access information in the service Overview page, in the Essentials section and under the Monitoring tab. All other tabs and pages are off limits. </br></br>Under the Essentials section: resource group, status, location, subscription name and ID, tags, URL, pricing tier, replicas, partitions, and search units. </br></br>On the Monitoring tab, view service metrics: search latency, percentage of throttled requests, average queries per second. </br></br>There is no access to the Usage tab (storage, counts of indexes or indexers created on the service) or to any information in the Indexes, Indexers, Data sources, Skillsets, or Debug sessions tabs. |
Additionally, for content-related operations in the portal, such as creating or
+ [Manage using PowerShell](search-manage-powershell.md)
+ [Performance and optimization in Azure Cognitive Search](search-performance-optimization.md)
-+ [What is Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md).
++ [What is Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md)?
security Secure Design https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/develop/secure-design.md
Awareness of these security risks can help you make requirement and
design decisions that minimize these risks in your application. Thinking about security controls to prevent breaches is important.
-However, you also want to [assume a breach](/azure/devops/learn/devops-at-microsoft/security-in-devops)
+However, you also want to [assume a breach](/devops/operate/security-in-devops)
will occur. Assuming a breach helps answer some important questions about security in advance, so they don't have to be answered in an emergency:
you to gather operations data, like who is accessing the application.
In the following articles, we recommend security controls and activities that can help you develop and deploy secure applications.

- [Develop secure applications](secure-develop.md)
+- [Deploy secure applications](secure-deploy.md)
security Operational Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/operational-overview.md
DevOps enables teams to deliver more secure, higher-quality solutions faster and
Cloud platforms such as Microsoft Azure have removed traditional bottlenecks and helped commoditize infrastructure. Software reigns in every business as the key differentiator and factor in business outcomes. No organization, developer, or IT worker can or should avoid the DevOps movement.
-Mature DevOps practitioners adopt several of the following practices. These practices [involve people](/azure/devops/learn/what-is-devops-culture) to form strategies based on the business scenarios. Tooling can help automate the various practices.
+Mature DevOps practitioners adopt several of the following practices. These practices [involve people](/devops/what-is-devops) to form strategies based on the business scenarios. Tooling can help automate the various practices.
- [Agile planning and project management](https://www.visualstudio.com/learn/what-is-agile/) techniques are used to plan and isolate work into sprints, manage team capacity, and help teams quickly adapt to changing business needs.-- [Version control, usually with Git](/azure/devops/learn/git/what-is-git), enables teams located anywhere in the world to share source and integrate with software development tools to automate the release pipeline.-- [Continuous integration](/azure/devops/learn/what-is-continuous-integration) drives the ongoing merging and testing of code, which leads to finding defects early. Other benefits include less time wasted on fighting merge issues and rapid feedback for development teams.-- [Continuous delivery](/azure/devops/learn/what-is-continuous-delivery) of software solutions to production and testing environments helps organizations quickly fix bugs and respond to ever-changing business requirements.-- [Monitoring](/azure/devops/learn/what-is-monitoring) of running applications--including production environments for application health, as well as customer usage--helps organizations form a hypothesis and quickly validate or disprove strategies. Rich data is captured and stored in various logging formats.-- [Infrastructure as Code (IaC)](/azure/devops/learn/what-is-infrastructure-as-code) is a practice that enables the automation and validation of creation and teardown of networks and virtual machines to help with delivering secure, stable application hosting platforms.-- [Microservices](/azure/devops/learn/what-are-microservices) architecture is used to isolate business use cases into small reusable services. This architecture enables scalability and efficiency.
+- [Version control, usually with Git](/devops/develop/git/what-is-git), enables teams located anywhere in the world to share source and integrate with software development tools to automate the release pipeline.
+- [Continuous integration](/devops/develop/what-is-continuous-integration) drives the ongoing merging and testing of code, which leads to finding defects early. Other benefits include less time wasted on fighting merge issues and rapid feedback for development teams.
+- [Continuous delivery](/devops/deliver/what-is-continuous-delivery) of software solutions to production and testing environments helps organizations quickly fix bugs and respond to ever-changing business requirements.
+- [Monitoring](/devops/operate/what-is-monitoring) of running applications--including production environments for application health, as well as customer usage--helps organizations form a hypothesis and quickly validate or disprove strategies. Rich data is captured and stored in various logging formats.
+- [Infrastructure as Code (IaC)](/devops/deliver/what-is-infrastructure-as-code) is a practice that enables the automation and validation of creation and teardown of networks and virtual machines to help with delivering secure, stable application hosting platforms.
+- [Microservices](/devops/deliver/what-are-microservices) architecture is used to isolate business use cases into small reusable services. This architecture enables scalability and efficiency.
## Next steps
To learn about the Security and Audit solution, see the following articles:
- [Security and compliance](https://azure.microsoft.com/overview/trusted-cloud/)
- [Azure Security Center](../../security-center/security-center-introduction.md)
-- [Azure Monitor](../../azure-monitor/overview.md)
+- [Azure Monitor](../../azure-monitor/overview.md)
security Recover From Identity Compromise https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/recover-from-identity-compromise.md
+
+ Title: Use Microsoft and Azure security resources to help recover from systemic identity compromise | Microsoft Docs
+description: Learn how to use Microsoft and Azure security resources, such as Microsoft 365 Defender, Azure Sentinel, Azure Active Directory, and Azure Security Center, along with Microsoft recommendations, to secure your system against systemic identity compromises similar to the Nobelium attack (Solorigate) of December 2020.
+
+documentationcenter: na
++
+editor: ''
+++
+ms.devlang: na
+
+ na
+ Last updated : 06/17/2021
+# Recovering from systemic identity compromise
+
+This article describes Microsoft resources and recommendations for recovering from a systemic identity compromise attack against your organization, such as the [Nobelium](https://aka.ms/solorigate) attack of December 2020.
+
+The content in this article is based on guidance provided by Microsoft's Detection and Response Team (DART), which works to respond to compromises and help customers become cyber-resilient. For more guidance from the DART team, see their [Microsoft security blog series](https://www.microsoft.com/security/blog/microsoft-detection-and-response-team-dart-blog-series/).
+
+Many organizations have transitioned to a cloud-based approach for stronger security on their identity and access management. However, your organization may also have on-premises systems in place and use varying methods of hybrid architecture. This article acknowledges that systemic identity attacks affect cloud, on-premises, and hybrid systems, and provides recommendations and references for all of these environments.
+
+> [!IMPORTANT]
+> This information is provided as-is and constitutes generalized guidance; the ultimate determination about how to apply this guidance to your IT environment and tenant(s) must consider your unique environment and needs, which each Customer is in the best position to determine.
+>
+
+## About systemic identity compromise
+
+A systemic identity compromise attack on an organization occurs when an attacker successfully gains a foothold into the administration of an organization's identity infrastructure.
+
+If this has happened to your organization, you are in a race against the attacker to secure your environment before further damage can be done.
+
+- **Attackers with administrative control of an environment's identity infrastructure** can use that control to create, modify, or delete identities and identity permissions in that environment.
+
+ In an on-premises compromise, if trusted SAML token-signing certificates are *not* stored in an [HSM](/azure/key-vault/keys/hsm-protected-keys), the attack includes access to that trusted SAML token-signing certificate.
+
+- **Attackers can then use the certificate to forge SAML tokens** to impersonate any of the organization's existing users and accounts without requiring access to account credentials, and without leaving any traces.
+
+- **Highly privileged account access** can also be used to add attacker-controlled credentials to existing applications, enabling attackers to access your system undetected, such as calling APIs using those permissions.
+
+## Responding to the attack
++
+Responding to systemic identity compromises should include the steps shown in the following image and table:
+++
+|Step |Description |
+|||
+|**Establish secure communications** | An organization that has experienced a systemic identity compromise must assume that all communication is affected. Before taking any recovery action, you must ensure that the members of your team who are key to your investigation and response effort [can communicate securely](#establish-secure-communications). <br><br>*Securing communications must be your very first step so that you can proceed without the attacker's knowledge.*|
+|**Investigate your environment** | After you have secured communications on your core investigation team, you can start looking for initial access points and persistence techniques. [Identify your indications of compromise](#identify-indications-of-compromise), and then look for initial access points and persistence. At the same time, start [establishing continuous monitoring operations](#establish-continuous-monitoring) during your recovery efforts. |
+|**Improve security posture** | [Enable security features and capabilities](#improve-security-posture) following best practice recommendations for improved system security moving forward. <br><br>Make sure to continue your [continuous monitoring](#establish-continuous-monitoring) efforts as time goes on and the security landscape changes. |
+|**Regain / retain control** | You must regain administrative control of your environment from the attacker. After you have control again and have refreshed your system's security posture, make sure to [remediate or block](#remediate-and-retain-administrative-control) all possible persistence techniques and new initial access exploits. |
+| | |
+
+## Establish secure communications
+
+Before you start responding, you must be sure that you can communicate safely without the attacker eavesdropping. Make sure to isolate any communications related to the incident so that the attacker is not tipped off to your investigation and is taken by surprise by your response actions.
+
+For example:
+
+1. For initial one-on-one and group communications, you may want to use PSTN calls, conference bridges that are not connected to the corporate infrastructure, and end-to-end encrypted messaging solutions.
+
+ Communications outside these frameworks should be treated as compromised and untrusted, unless verified through a secure channel.
+
+2. After those initial conversations, you may want to create an entirely new Microsoft 365 tenant, isolated from the organization's production tenant. Create accounts only for key personnel who need to be part of the response.
+
+If you do create a new Microsoft 365 tenant, make sure to follow all best practices for the tenant, and especially for administrative accounts and rights. Limit administrative rights, with no trusts for outside applications or vendors.
+
+> [!IMPORTANT]
+> Make sure that you do not communicate about your new tenant on your existing, and potentially compromised, email accounts.
+
+For more information, see [Best practices for securely using Microsoft 365](https://www.microsoft.com/security/blog/2019/01/10/best-practices-for-securely-using-microsoft-365-the-cis-microsoft-365-foundations-benchmark-now-available/).
+
+## Identify indications of compromise
+
+We recommend that customers follow updates from system providers, including both Microsoft and any partners, implement any new detections and protections provided, and identify published indicators of compromise (IOCs).
+
+Check for updates in the following Microsoft security products, and implement any recommended changes:
+
+- [Azure Sentinel](/azure/sentinel/)
+- [Microsoft 365 security solutions and services](/microsoft-365/security/)
+- [Windows 10 Enterprise Security](/windows/security/)
+- [Microsoft Cloud App Security](/cloud-app-security/)
+
+Implementing new updates will help identify any prior campaigns and prevent future campaigns against your system. Keep in mind that lists of IOCs may not be exhaustive, and may expand as investigations continue.
+
+Therefore, we recommend also taking the following actions:
+
+- Make sure that you've applied the [Azure security benchmark documentation](/security/benchmark/azure/), and are monitoring compliance via [Azure Security Center](/azure/security-center/).
+
+- Incorporate threat intelligence feeds into your SIEM, such as by configuring Microsoft 365 data connectors in [Azure Sentinel](/azure/sentinel/import-threat-intelligence).
+
+For more information, see Microsoft's security documentation:
+
+- [Microsoft security documentation](/security/)
+- [Azure security documentation](/azure/security/)
+
+## Investigate your environment
+
+Once your incident responders and key personnel have a secure place to collaborate, you can start investigating the compromised environment.
+
+You'll need to balance getting to the bottom of every anomalous behavior and taking quick action to stop any further activity by the attacker. Any successful remediation requires an understanding of the initial method of entry and the persistence methods that the attacker used, as complete as possible at the time. Any persistence methods missed during the investigation can result in continued access by the attacker, and a potential recompromise.
+
+At this point, you may want to perform a risk analysis to prioritize your actions. For more information, see:
+
+- [Datacenter threat, vulnerability, and risk assessment](/compliance/assurance/assurance-threat-vulnerability-risk-assessment)
+- [Track and respond to emerging threats with threat analytics](/microsoft-365/security/defender-endpoint/threat-analytics)
+- [Threat and vulnerability management](/microsoft-365/security/defender-endpoint/next-gen-threat-and-vuln-mgt)
+
+Microsoft's security services provide extensive resources for detailed investigations. The following sections describe top recommended actions.
++
+> [!NOTE]
+> If you find that one or more of the listed logging sources is not currently part of your security program, we recommend configuring them as soon as possible to enable detections and future log reviews.
+>
+> Make sure to configure your log retention to support your organization's investigation goals going forward. Retain evidence as needed for legal, regulatory, or insurance purposes.
+>
+
+### Investigate and review cloud environment logs
+
+Investigate and review cloud environment logs for suspicious actions and attacker indications of compromise. For example, check the following logs:
+
+- [Unified Audit Logs (UAL)](/powershell/module/exchange/search-unifiedauditlog)
+- [Azure Active Directory (Azure AD) logs](/azure/active-directory/reports-monitoring/overview-monitoring)
+- [Microsoft Exchange on-premises logs](/exchange/mail-flow/transport-logs/transport-logs)
+- VPN logs, such as from [VPN Gateway](/azure/vpn-gateway/vpn-gateway-howto-setup-alerts-virtual-network-gateway-log)
+- Engineering system logs
+- Antivirus and endpoint detection logs
+
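+For example, a hedged sketch of querying the Unified Audit Log for recent directory role changes (requires an Exchange Online PowerShell session; the operation name shown is one illustrative filter to adapt to your investigation):
+
+```azurepowershell
+# Pull 30 days of Azure AD role-membership changes from the Unified Audit Log.
+Search-UnifiedAuditLog -StartDate (Get-Date).AddDays(-30) -EndDate (Get-Date) `
+    -RecordType AzureActiveDirectory -Operations "Add member to role." `
+    -ResultSize 1000 | Select-Object CreationDate, UserIds, Operations
+```
+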
+### Review endpoint audit logs
+
+Review your endpoint audit logs for on-premises changes, such as the following types of actions:
+
+- Group membership changes
+- New user account creation
+- Delegations within Active Directory
+
+Especially consider any of these changes that occur along with other typical signs of compromise or activity.
+
+### Review administrative rights in your environments
+
+Review administrative rights in both your cloud and on-premises environments. For example:
+
+|Environment |Description |
+|||
+|**All cloud environments** | - Review any privileged access rights in the cloud and remove any unnecessary permissions<br> - Implement Privileged Identity Management (PIM)<br> - Set up Conditional Access policies to limit administrative access during hardening |
+|**All on-premises environments** | - Review privileged access on-premises and remove unnecessary permissions<br> - Reduce membership of built-in groups<br> - Verify Active Directory delegations<br> - Harden your Tier 0 environment, and limit who has access to Tier 0 assets |
+|**All Enterprise applications** | Review for delegated permissions and consent grants that allow any of the following actions: <br><br> - Modifying privileged users and roles <br>- Reading or accessing all mailboxes <br>- Sending or forwarding email on behalf of other users <br>- Accessing all OneDrive or SharePoint site content <br>- Adding service principals that can read/write to the directory |
+|**Microsoft 365 environments** |Review access and configuration settings for your Microsoft 365 environment, including: <br>- SharePoint Online Sharing <br>- Microsoft Teams <br>- PowerApps <br>- Microsoft OneDrive for Business |
+| **Review user accounts in your environments** |- Review and remove guest user accounts that are no longer needed. <br>- Review email configurations for delegates, mailbox folder permissions, ActiveSync mobile device registrations, Inbox rules, and Outlook on the Web options. <br>- Review ApplicationImpersonation rights and reduce any use of legacy authentication as much as possible. <br>- Validate that MFA is enforced and that both MFA and self-service password reset (SSPR) contact information for all users is correct. |
+| | |
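+
+For example, the following sketch enumerates Inbox rules that forward or redirect mail, a common attacker persistence mechanism. It assumes the Exchange Online Management module and can take a while to run in large tenants.
+
+```powershell
+# A minimal sketch: list Inbox rules that forward or redirect mail.
+Connect-ExchangeOnline
+Get-Mailbox -ResultSize Unlimited | ForEach-Object {
+    Get-InboxRule -Mailbox $_.UserPrincipalName |
+        Where-Object { $_.ForwardTo -or $_.RedirectTo -or $_.ForwardAsAttachmentTo } |
+        Select-Object MailboxOwnerId, Name, Enabled, ForwardTo, RedirectTo
+}
+```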
+
+## Establish continuous monitoring
+
+There are several methods for detecting attacker behavior, depending on the security tools your organization has available for responding to the attack.
+
+For example, Microsoft security services may have specific resources and guidance that's relevant to the attack, as described in the sections below.
+
+> [!IMPORTANT]
+> If your investigation finds evidence of administrative permissions acquired through the compromise on your system, which have provided access to your organization's global administrator account and/or trusted SAML token-signing certificate, we recommend taking action to [remediate and retain administrative control](#remediate-and-retain-administrative-control).
+>
+
+### Monitoring with Azure Sentinel
+
+Azure Sentinel has many built-in resources to help in your investigation, such as hunting workbooks and analytics rules that can help detect attacks in relevant areas of your environment.
+
+For more information, see:
+
+- [Visualize and analyze your environment](/azure/sentinel/quickstart-get-visibility)
+- [Detect threats out of the box](/azure/sentinel/tutorial-detect-threats-built-in)
+
+### Monitoring with Microsoft 365 Defender
+
+We recommend that you check Microsoft Defender for Endpoint and Microsoft Defender Antivirus for specific guidance relevant to your attack.
+
+Check for other examples of detections, hunting queries, and threat analytics reports in the Microsoft security center, such as in Microsoft 365 Defender, Microsoft Defender for Identity, and Microsoft Cloud App Security. To ensure coverage, make sure that you install the [Microsoft Defender for Identity agent](/defender-for-identity/install-step4) on ADFS servers in addition to all domain controllers.
+
+For more information, see:
+
+- [Track and respond to emerging threats with threat analytics](/windows/security/threat-protection/microsoft-defender-atp/threat-analytics)
+- [Understand the analyst report in threat analytics](/windows/security/threat-protection/microsoft-defender-atp/threat-analytics-analyst-reports)
+
+### Monitoring with Azure Active Directory
+
+Azure Active Directory sign-in logs can show whether multi-factor authentication is being used correctly. Access sign-in logs directly from the Azure Active Directory area in the Azure portal, use the **Get-AzureADAuditSignInLogs** cmdlet, or view them in the **Logs** area of Azure Sentinel.
+
+For example, search or filter the results for when the **MFA results** field has a value of **MFA requirement satisfied by claim in the token**. If your organization uses ADFS and the claims logged are not included in the ADFS configuration, these claims may indicate attacker activity.
+
+Search or filter your results further to exclude extra noise. For example, you may want to include results only from federated domains. If you find suspicious sign-ins, drill down even further based on IP addresses, user accounts, and so on.
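+
+The following hedged sketch shows one way to surface such sign-ins with the **Get-AzureADAuditSignInLogs** cmdlet (AzureADPreview module); the date filter and output fields are illustrative assumptions.
+
+```powershell
+# A minimal sketch, assuming the AzureADPreview module and an admin session.
+Connect-AzureAD
+
+# Pull recent sign-ins; adjust the date filter to your investigation window.
+$signIns = Get-AzureADAuditSignInLogs -Filter "createdDateTime gt 2021-06-01"
+
+# Flag sign-ins where the MFA requirement was satisfied by a claim in the token.
+$signIns |
+    Where-Object { $_.Status.AdditionalDetails -eq 'MFA requirement satisfied by claim in the token' } |
+    Select-Object CreatedDateTime, UserPrincipalName, IpAddress, AppDisplayName
+```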
+
+The following table describes more methods for using Azure Active Directory logs in your investigation:
+
+|Method |Description |
+|||
+|**Analyze risky sign-in events** | Azure Active Directory and its Identity Protection platform may generate risk events associated with the use of attacker-generated SAML tokens. <br><br>These events might be labeled as *unfamiliar properties*, *anonymous IP address*, *impossible travel*, and so on. <br><br>We recommend that you closely analyze all risk events associated with accounts that have administrative privileges, including any that may have been automatically dismissed or remediated. For example, a risk event for an anonymous IP address might be automatically remediated because the user passed MFA. <br><br>Make sure to use [ADFS Connect Health](/azure/active-directory/hybrid/how-to-connect-health-adfs) so that all authentication events are visible in Azure AD. |
+|**Detect domain authentication properties** | Any attempt by the attacker to manipulate domain authentication policies will be recorded in the Azure Active Directory Audit logs, and reflected in the Unified Audit log. <br><br> For example, review any events associated with **Set domain authentication** in the Unified Audit Log, Azure AD Audit logs, and / or your SIEM environment to verify that all activities listed were expected and planned. |
+|**Detect credentials for OAuth applications** | Attackers who have gained control of a privileged account may search for an application with the ability to access any user's email in the organization, and then add attacker-controlled credentials to that application. <br><br>For example, you may want to search for any of the following activities, which would be consistent with attacker behavior (see the sketch after this table): <br>- Adding or updating service principal credentials <br>- Updating application certificates and secrets <br>- Adding an app role assignment grant to a user <br>- Adding an OAuth2PermissionGrant |
+|**Detect e-mail access by applications** | Search for access to email by applications in your environment. For example, use the [Microsoft 365 Advanced Auditing features](/microsoft-365/compliance/mailitemsaccessed-forensics-investigations) to investigate compromised accounts. |
+|**Detect non-interactive sign-ins to service principals** | The Azure Active Directory sign-in reports provide details about any non-interactive sign-ins that used service principal credentials. For example, you can use the sign-in reports to find valuable data for your investigation, such as an IP address used by the attacker to access email applications. |
+| | |
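+
+As one concrete starting point for the OAuth application checks above, the following sketch queries the Azure AD audit logs for recent service principal credential additions. It assumes the AzureADPreview module, and the activity name is an assumption to adapt to your tenant.
+
+```powershell
+# A minimal sketch: list recent service principal credential additions.
+Connect-AzureAD
+Get-AzureADAuditDirectoryLogs -Filter "activityDisplayName eq 'Add service principal credentials'" |
+    Select-Object ActivityDateTime, InitiatedBy, TargetResources
+```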
+
+## Improve security posture
+
+If a security event has occurred in your systems, we recommend that you reflect on your current security strategy and priorities.
+
+Incident responders are often asked to provide recommendations on what investments the organization should prioritize, now that it's been faced with new threats.
+
+In addition to the recommendations documented in this article, we recommend that you consider prioritizing the areas of focus that are responsive to the post-exploitation techniques used by this attacker and the common security posture gaps that enable them.
+
+The following sections list recommendations to improve both general and identity security posture.
+
+### Improve general security posture
+
+We recommend the following actions to strengthen your general security posture:
+
+- **Review [Microsoft Secure Score](/microsoft-365/security/mtp/microsoft-secure-score)** for security fundamentals recommendations customized for the Microsoft products and services you consume.
+
+- **Ensure that your organization has EDR and SIEM solutions in place**, such as [Microsoft 365 Defender](/microsoft-365/security/defender/microsoft-365-defender) and [Azure Sentinel](/azure/sentinel/overview).
+
+- **Review Microsoft's [Enterprise access model](/security/compass/privileged-access-access-model)**.
+
+### Improve identity security posture
+
+We recommend the following actions to strengthen your identity-related security posture:
+
+- **Review Microsoft's [Five steps to securing your identity infrastructure](steps-secure-identity.md)**, and prioritize the steps as appropriate for your identity architecture.
+
+- **[Consider migrating to Azure AD Security Defaults](/azure/active-directory/fundamentals/concept-fundamentals-security-defaults)** for your authentication policy.
+
+- **Eliminate your organization's use of legacy authentication**, if systems or applications still require it. For more information, see [Block legacy authentication to Azure AD with Conditional Access](/azure/active-directory/conditional-access/block-legacy-authentication).
+
+ > [!NOTE]
+ > The Exchange Team is planning to [disable Basic Authentication for the EAS, EWS, POP, IMAP, and RPS protocols](https://developer.microsoft.com/en-us/office/blogs/deferred-end-of-support-date-for-basic-authentication-in-exchange-online/) in the second half of 2021.
+ >
+ > As a point of clarity, Security Defaults and Authentication Policies are separate but provide complementary features.
+ >
+ > We recommend that customers use Authentication Policies to turn off Basic Authentication for a subset of Exchange Online protocols or to gradually turn off Basic Authentication across a large organization.
+ >
+
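+    As a hedged illustration, the following sketch creates an Exchange Online authentication policy, which blocks Basic Authentication by default for new policies, and makes it the organization default; the policy name is a placeholder.
+
+    ```powershell
+    # A minimal sketch, assuming the Exchange Online Management module.
+    Connect-ExchangeOnline
+    New-AuthenticationPolicy -Name "Block Basic Auth"
+    Set-OrganizationConfig -DefaultAuthenticationPolicy "Block Basic Auth"
+    ```
+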
+- **Treat your ADFS infrastructure and AD Connect infrastructure as a Tier 0 asset**.
+
+- **Restrict local administrative access to the system**, including the account that is used to run the ADFS service.
+
+ The least privilege necessary for the account running ADFS is the *Log on as a Service* User Right Assignment.
+
+- **Restrict administrative access to limited users and from limited IP address ranges** by using Windows Firewall policies for Remote Desktop.
+
+ We recommend that you set up a Tier 0 jump box or equivalent system.
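+
+    For example, here is a sketch of a Windows Firewall rule that scopes inbound RDP to a single management subnet. It assumes a default inbound-block posture, and the address range is a hypothetical placeholder.
+
+    ```powershell
+    # A minimal sketch: restrict inbound RDP (TCP 3389) to a management subnet.
+    New-NetFirewallRule -DisplayName 'Allow RDP from management subnet only' `
+        -Direction Inbound -Protocol TCP -LocalPort 3389 `
+        -RemoteAddress '10.10.0.0/24' -Action Allow
+    ```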
+
+- **Block all inbound SMB access** to the systems from anywhere in the environment. For more information, see [Beyond the Edge: How to Secure SMB Traffic in Windows](https://techcommunity.microsoft.com/t5/itops-talk-blog/beyond-the-edge-how-to-secure-smb-traffic-in-windows/ba-p/1447159). We also recommend that you stream the Windows Firewall logs to a SIEM for historical and proactive monitoring.
+
+- If you are using a Service Account and your environment supports it, **migrate from a Service Account to a group managed service account (gMSA)**. If you cannot move to a gMSA, rotate the password on the Service Account to a complex password.
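+
+    For example, a minimal sketch of creating a gMSA for the ADFS service, assuming the ActiveDirectory RSAT module, an existing KDS root key, and a hypothetical security group named *ADFS-Servers*:
+
+    ```powershell
+    # Create the gMSA and allow the ADFS servers group to retrieve its password.
+    New-ADServiceAccount -Name 'gmsa-adfs' -DNSHostName 'gmsa-adfs.contoso.com' `
+        -PrincipalsAllowedToRetrieveManagedPassword 'ADFS-Servers'
+
+    # On each ADFS server, install the account before reconfiguring the service.
+    Install-ADServiceAccount -Identity 'gmsa-adfs'
+    ```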
+
+- **Ensure Verbose logging is enabled on your ADFS systems**. For example, run the following commands:
+
+ ```powershell
+ Set-AdfsProperties -AuditLevel verbose
+ Restart-Service -Name adfssrv
+    Auditpol.exe /set /subcategory:"Application Generated" /failure:enable /success:enable
+ ```
+
+## Remediate and retain administrative control
+
+If your investigation has identified that the attacker has administrative control in the organization's cloud or on-premises environment, you must regain control in a way that ensures the attacker can't maintain persistence.
+
+This section provides possible methods and steps to consider when building your administrative control recovery plan.
+
+> [!IMPORTANT]
+> The exact steps required in your organization will depend on what persistence you've discovered in your investigation, and how confident you are that your investigation was complete and has discovered all possible entry and persistence methods.
+>
+> Ensure that any actions taken are performed from a trusted device, built from a [clean source](/security/compass/privileged-access-access-model). For example, use a fresh, [privileged access workstation](/security/compass/privileged-access-deployment).
+>
+
+The following sections provide these types of recommendations for remediating and retaining administrative control:
+
+- Removing trust on your current servers
+- Rotating your SAML token-signing certificate, or replacing your ADFS servers if needed
+- Specific remediation activities for cloud or on-premises environments
+
+### Remove trust on your current servers
+
+If your organization has lost control of the token-signing certificates or federated trust, the most assured approach is to remove trust, and switch to cloud-mastered identity while remediating on-premises.
+
+Removing trust and switching to cloud-mastered identity requires careful planning and an in-depth understanding of the business operation effects of isolating identity. For more information, see [Protecting Microsoft 365 from on-premises attacks](/azure/active-directory/fundamentals/protect-m365-from-on-premises-attacks).
+
+### Rotate your SAML token-signing certificate
+
+If your organization decides *not* to [remove trust](#remove-trust-on-your-current-servers) while recovering administrative control on-premises, you'll have to rotate your SAML token-signing certificate after you've regained administrative control on-premises and blocked the attacker's ability to access the signing certificate again.
+
+Rotating the token-signing certificate a single time still allows the previous token-signing certificate to work. Continuing to allow previous certificates to work is a built-in functionality for normal certificate rotations, which permits a grace period for organizations to update any relying party trusts before the certificate expires.
+
+If there was an attack, you don't want the attacker to retain access at all. Make sure to use the following steps to ensure that the attacker doesn't maintain the ability to forge tokens for your domain.
+
+> [!CAUTION]
+> The last step in this procedure logs users out of their phones, current webmail sessions, and any other items that are using the associated tokens and refresh tokens.
+>
+
+> [!TIP]
+> Performing these steps in your ADFS environment creates both a primary and secondary certificate, and automatically promotes the secondary certificate to primary after a default period of 5 days.
+>
+> If you have Relying Party Trusts, this may have effects 5 days after the initial ADFS environment change, and should be accounted for in your plan. You can also resolve this by replacing the primary certificate a third time, using the **Urgent** flag again, and removing the secondary certificate or turning off automatic certificate rotation.
+>
+
+**To fully rotate the token-signing certificate, and prevent new token forging by an attacker**
+
+1. Check to make sure that your **AutoCertificateRollover** parameter is set to **True**:
+
+ ``` powershell
+ Get-AdfsProperties | FL AutoCert*, Certificate*
+ ```
+ If **AutoCertificateRollover** isn't set to **True**, set the value as follows:
+
+ ``` powershell
+ Set-ADFSProperties -AutoCertificateRollover $true
+ ```
+
+1. Connect to the Microsoft Online Service:
+
+ ``` powershell
+ Connect-MsolService
+ ```
+
+1. Run the following command and make a note of your on-premises and cloud token signing certificate thumbprint and expiration dates:
+
+ ``` powershell
+ Get-MsolFederationProperty -DomainName <domain>
+ ```
+
+ For example:
+
+ ```powershell
+ ...
+ [Not Before]
+ 12/9/2020 7:57:13 PM
+
+ [Not After]
+ 12/9/2021 7:57:13 PM
+
+ [Thumbprint]
+ 3UD1JG5MEFHSBW7HEPF6D98EI8AHNTY22XPQWJFK6
+ ```
+
+1. Replace the primary token signing certificate using the **Urgent** switch. This command causes ADFS to replace the primary certificate immediately, without making it a secondary certificate:
+
+ ```powershell
+ Update-AdfsCertificate -CertificateType Token-Signing -Urgent
+ ```
+
+1. Create a secondary token signing certificate, without the **Urgent** switch. This command allows for two on-premises token signing certificates before syncing with the Azure cloud.
+
+ ```powershell
+ Update-AdfsCertificate -CertificateType Token-Signing
+ ```
+
+1. Update the cloud environment with both the primary and secondary certificates on-premises to immediately remove the cloud published token signing certificate.
+
+ ```powershell
+ Update-MsolFederatedDomain -DomainName <domain>
+ ```
+
+ > [!IMPORTANT]
+ > If this step is not performed using this method, the old token signing certificate may still be able to authenticate users.
+
+1. To ensure that these steps have been performed correctly, verify that the certificate displayed earlier in step 3 is now removed:
+
+ ```powershell
+ Get-MsolFederationProperty -DomainName <domain>
+ ```
+
+1. Revoke your refresh tokens via PowerShell, to prevent access with the old tokens.
+
+ For more information, see:
+
+ - [Revoke user access in Azure Active Directory](/azure/active-directory/enterprise-users/users-revoke-access)
+ - [Revoke-AzureADUserAllRefreshToken PowerShell docs](/powershell/module/azuread/revoke-azureaduserallrefreshtoken)
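+
+    As a hedged sketch, assuming the AzureAD module and hypothetical account names:
+
+    ```powershell
+    # Revoke all refresh tokens for each account identified in the investigation.
+    Connect-AzureAD
+    $accounts = @('admin1@contoso.com', 'admin2@contoso.com')  # hypothetical
+    foreach ($upn in $accounts) {
+        Revoke-AzureADUserAllRefreshToken -ObjectId (Get-AzureADUser -ObjectId $upn).ObjectId
+    }
+    ```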
+
+### Replace your ADFS servers
+
+If, instead of [rotating your SAML token-signing certificate](#rotate-your-saml-token-signing-certificate), you decide to replace the ADFS servers with clean systems, you'll need to remove the existing ADFS from your environment, and then build a new one.
+
+For more information, see [Remove a configuration](/azure/active-directory/cloud-provisioning/how-to-configure#remove-a-configuration).
+
+### Cloud remediation activities
+
+In addition to the recommendations listed earlier in this article, we also recommend the following activities for your cloud environments:
+
+|Activity |Description |
+|||
+|**Reset passwords** | Reset passwords on any [break-glass accounts](/azure/active-directory/roles/security-emergency-access) and reduce the number of break-glass accounts to the absolute minimum required. |
+|**Restrict privileged access accounts** | Ensure that service and user accounts with privileged access are cloud-only accounts, and do not use on-premises accounts that are synced or federated to Azure Active Directory. |
+|**Enforce MFA** | Enforce Multi-Factor Authentication (MFA) across all elevated users in the tenant. We recommend enforcing MFA across all users in the tenant. |
+|**Limit administrative access** | Implement [Privileged Identity Management](/azure/active-directory/privileged-identity-management/pim-configure) (PIM) and conditional access to limit administrative access. <br><br>For Microsoft 365 users, implement [Privileged Access Management](https://techcommunity.microsoft.com/t5/microsoft-security-and/privileged-access-management-in-office-365-is-now-generally/ba-p/261751) (PAM) to limit access to sensitive abilities, such as eDiscovery, Global Admin, Account Administration, and more. |
+|**Review / reduce delegated permissions and consent grants** | Review and reduce all Enterprise Applications delegated permissions or [consent grants](/graph/auth-limit-mailbox-access) that allow any of the following functionalities: <br><br>- Modification of privileged users and roles <br>- Reading, sending email, or accessing all mailboxes <br>- Accessing OneDrive, Teams, or SharePoint content <br>- Adding Service Principals that can read/write to the directory <br>- Application Permissions versus Delegated Access |
+| | |
+
+### On-premises remediation activities
+
+In addition to the recommendations listed earlier in this article, we also recommend the following activities for your on-premises environments:
+
+|Activity |Description |
+|||
+|**Rebuild affected systems** | Rebuild systems that were identified as compromised by the attacker during your investigation. |
+|**Remove unnecessary admin users** | Remove unnecessary members from Domain Admins, Backup Operators, and Enterprise Admin groups. For more information, see [Securing Privileged Access](/security/compass/overview). |
+|**Reset passwords to privileged accounts** | Reset passwords of all privileged accounts in the environment. <br><br>**Note**: Privileged accounts are not limited to built-in groups, but can also be groups that are delegated access to server administration, workstation administration, or other areas of your environment. |
+|**Reset the krbtgt account** | Reset the **krbtgt** account twice using the [New-KrbtgtKeys](https://github.com/microsoft/New-KrbtgtKeys.ps1/blob/master/New-KrbtgtKeys.ps1) script. <br><br>**Note**: If you are using Read-Only Domain Controllers, you will need to run the script separately for Read-Write Domain Controllers and for Read-Only Domain Controllers. |
+|**Schedule a system restart** | After you validate that no persistence mechanisms created by the attacker exist or remain on your system, schedule a system restart to assist with removing memory-resident malware. |
+|**Reset the DSRM password** | Reset each domain controller's DSRM (Directory Services Restore Mode) password to something unique and complex. |
+| | |
+
+### Remediate or block persistence discovered during investigation
+
+Investigation is an iterative process. You'll need to balance the organizational desire to remediate anomalies as you identify them against the chance that remediation alerts the attacker to your detection and gives them time to react.
+
+For example, an attacker who becomes aware of the detection might change techniques or create more persistence.
+
+Make sure to remediate any persistence techniques that you've identified in earlier stages of the investigation.
+
+### Remediate user and service account access
+
+In addition to the recommended actions listed above, we recommend that you consider the following steps to remediate and restore user accounts:
+
+- **Enforce conditional access based on trusted devices**. If possible, we recommend that you enforce *location-based conditional access* to suit your organizational requirements.
+
+- **Reset passwords** after eviction for any user accounts that may have been compromised. Make sure to also implement a mid-term plan to reset credentials for all accounts in your directory.
+
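+    For example, a hedged sketch that resets a single account to a random password and forces a change at next sign-in (AzureAD module assumed; the UPN is hypothetical):
+
+    ```powershell
+    # Generate a random password and force a change at next sign-in.
+    $newPassword = ConvertTo-SecureString -String (New-Guid).Guid -AsPlainText -Force
+    Set-AzureADUserPassword -ObjectId 'user@contoso.com' -Password $newPassword `
+        -ForceChangePasswordNextLogin $true
+    ```
+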
+- **Revoke refresh tokens** immediately after rotating your credentials.
+
+ For more information, see:
+
+ - [Revoke user access in an emergency in Azure Active Directory](/azure/active-directory/enterprise-users/users-revoke-access)
+ - [Revoke-AzureADUserAllRefreshToken PowerShell documentation](/powershell/module/azuread/revoke-azureaduserallrefreshtoken)
+
+## Next steps
+
+- **Get help from inside Microsoft products**, including the Microsoft 365 security center, Microsoft 365 Security & Compliance center, and Microsoft Defender Security Center by selecting the **Help** (**?**) button in the top navigation bar.
+
+- **For deployment assistance**, contact us at [FastTrack](https://fasttrack.microsoft.com)
+
+- **If you have product support-related needs**, file a Microsoft support case at https://support.microsoft.com/contactus.
+
+ > [!IMPORTANT]
+ > If you believe you have been compromised and require assistance through an incident response, open a **Sev A** Microsoft support case.
+ >
security Technical Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/technical-capabilities.md
For organizations that need to secure access from multiple workstations located
For organizations that need to secure access from one workstation located on-premises to Azure, use [Point-to-Site VPN](../../vpn-gateway/vpn-gateway-howto-point-to-site-classic-azure-portal.md).
-Larger data sets can be moved over a dedicated high-speed WAN link such as [ExpressRoute](https://azure.microsoft.com/services/expressroute/). If you choose to use ExpressRoute, you can also encrypt the data at the application-level using [SSL/TLS](https://web.archive.org/web/20150221085231/http://support.microsoft.com:80/kb/257591) or other protocols for added protection.
+Larger data sets can be moved over a dedicated high-speed WAN link such as [ExpressRoute](https://azure.microsoft.com/services/expressroute/). If you choose to use ExpressRoute, you can also encrypt the data at the application-level using SSL/TLS or other protocols for added protection.
If you are interacting with Azure Storage through the Azure portal, all transactions occur via HTTPS. [Storage REST API](/rest/api/storageservices/) over HTTPS can also be used to interact with [Azure Storage](https://azure.microsoft.com/services/storage/) and [Azure SQL Database](https://azure.microsoft.com/services/sql-database/).
Resource Manager provides several benefits:
## Next step
-The [Azure Security Benchmark](../benchmarks/introduction.md) program includes a collection of security recommendations you can use to help secure the services you use in Azure.
+The [Azure Security Benchmark](../benchmarks/introduction.md) program includes a collection of security recommendations you can use to help secure the services you use in Azure.
sentinel Azure Sentinel Billing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/azure-sentinel-billing.md
Manage data ingestion and retention:
- [Optimize Log Analytics costs with dedicated clusters](#optimize-log-analytics-costs-with-dedicated-clusters). - [Separate non-security data in a different workspace](#separate-non-security-data-in-a-different-workspace). - [Reduce long-term data retention costs with Azure Data Explorer (ADX)](#reduce-long-term-data-retention-costs-with-adx).
+- [Use Data Collection Rules for your Windows Security Events](#use-data-collection-rules-for-your-windows-security-events).
Understand, monitor, and alert for data ingestion and cost changes:
With ADX, you can store data at a lower price, but still explore the data using
For more information, see [Integrate Azure Data Explorer for long-term log retention](store-logs-in-azure-data-explorer.md).
+#### Use data collection rules for your Windows Security Events
+
+The [Windows Security Events connector](connect-windows-security-events.md?tabs=LAA) enables you to stream security events from any computer running Windows Server that's connected to your Azure Sentinel workspace, whether it's a physical or virtual server, hosted on-premises or in any cloud. This connector includes support for the Azure Monitor agent, which uses data collection rules to define the data to collect from each agent.
+
+Data collection rules enable you to manage collection settings at scale, while still allowing unique, scoped configurations for subsets of machines. For more information, see [Configure data collection for the Azure Monitor agent](../azure-monitor/agents/data-collection-rule-azure-monitor-agent.md).
+
+Besides the predefined sets of events that you can select to ingest, such as All events, Minimal, or Common, data collection rules enable you to build custom filters and select specific events to ingest. The Azure Monitor agent uses these rules to filter the data at the source, and then ingests only the events you've selected, leaving everything else behind. Selecting specific events to ingest can help you optimize your costs.
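+
+Before you put a custom filter in a data collection rule, you can sanity-check the XPath query locally with **Get-WinEvent** on a Windows machine; in this hedged sketch, event ID 4624 (successful sign-in) is just an illustrative example.
+
+```powershell
+# Validate an XPath filter against the local Security log before using it in a DCR.
+$XPath = '*[System[(EventID=4624)]]'
+Get-WinEvent -LogName 'Security' -FilterXPath $XPath -MaxEvents 5
+```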
+ ### Understand, monitor, and alert for changes in data ingestion and costs Use the following methods to understand, monitor, and alert for changes in your Azure Sentinel workspace.
For example, to see charts of your daily costs for a certain time frame:
You could also apply further controls. For example, to view only the costs associated with Azure Sentinel, select **Add filter**, select **Service name**, and then select the service names **sentinel**, **log analytics**, and **azure monitor**. ## Next steps
-For more tips on reducing Log Analytics data volume, see [Tips for reducing data volume](../azure-monitor/logs/manage-cost-storage.md#tips-for-reducing-data-volume).
+For more tips on reducing Log Analytics data volume, see [Tips for reducing data volume](../azure-monitor/logs/manage-cost-storage.md#tips-for-reducing-data-volume).
service-fabric Cluster Security Certificate Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/cluster-security-certificate-management.md
At this point, a certificate exists in the vault, ready for consumption. Onward
### Certificate provisioning We mentioned a 'provisioning agent', which is the entity that retrieves the certificate, inclusive of its private key, from the vault and installs it on to each of the hosts of the cluster. (Recall that Service Fabric does not provision certificates.) In our context, the cluster will be hosted on a collection of Azure VMs and/or virtual machine scale sets. In Azure, provisioning a certificate from a vault to a VM/VMSS can be achieved with the following mechanisms - assuming, as above, that the provisioning agent was previously granted 'get' permissions on the vault by the vault owner: - ad-hoc: an operator retrieves the certificate from the vault (as pfx/PKCS #12 or pem) and installs it on each node
- - as a virtual machine scale set 'secret' during deployment: the Compute service retrieves, using its first party identity on behalf of the operator, the certificate from a template-deployment-enabled vault and installs it on each node of the virtual machine scale set ([like so](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-faq.yml#certificates)); note this allows the provisioning of versioned secrets only
+ - as a virtual machine scale set 'secret' during deployment: the Compute service retrieves, using its first party identity on behalf of the operator, the certificate from a template-deployment-enabled vault and installs it on each node of the virtual machine scale set ([like so](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-faq#certificates)); note this allows the provisioning of versioned secrets only
- using the [Key Vault VM extension](../virtual-machines/extensions/key-vault-windows.md); this allows the provisioning of certificates using version-less declarations, with periodic refreshing of observed certificates. In this case, the VM/VMSS is expected to have a [managed identity](../virtual-machines/security-policy.md#managed-identities-for-azure-resources), an identity that has been granted access to the vault(s) containing the observed certificates.
-The ad-hoc mechanism is not recommended for multiple reasons, ranging from security to availability, and won't be discussed here further; for details, refer to [certificates in virtual machine scale sets](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-faq.yml#certificates).
+The ad-hoc mechanism is not recommended for multiple reasons, ranging from security to availability, and won't be discussed here further; for details, refer to [certificates in virtual machine scale sets](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-faq#certificates).
The VMSS-/Compute-based provisioning presents security and availability advantages, but it also presents restrictions. It requires - by design - declaring certificates as versioned secrets, which makes it suitable only for clusters secured with certificates declared by thumbprint. In contrast, the Key Vault VM extension-based provisioning will always install the latest version of each observed certificate, which makes it suitable only for clusters secured with certificates declared by subject common name. To emphasize, do not use an autorefresh provisioning mechanism (such as the KVVM extension) for certificates declared by instance (that is, by thumbprint) - the risk of losing availability is considerable.
For Microsoft-internal PKIs, please consult the internal documentation on the en
*A*: Obtain a certificate with the intended subject, and add it to the cluster's definition as a secondary, by thumbprint. Once the upgrade completed successfully, initiate another cluster configuration upgrade to convert the certificate declaration to common name. [Image1]:./media/security-cluster-certificate-mgmt/certificate-journey-thumbprint.png
-[Image2]:./media/security-cluster-certificate-mgmt/certificate-journey-common-name.png
+[Image2]:./media/security-cluster-certificate-mgmt/certificate-journey-common-name.png
service-fabric Service Fabric Cluster Creation Via Arm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-cluster-creation-via-arm.md
Use the following commands to create a cluster secured with a system generated s
### Use the default cluster template that ships in the module
-Use the following command to create a cluster quickly, by specifying minimal parameters, using the default template.
+You can use either the following PowerShell or Azure CLI commands to create a cluster quickly using the default template.
-The template that is used is available on the [Azure Service Fabric template samples : windows template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/5-VM-Windows-1-NodeTypes-Secure-NSG)
- and [Ubuntu template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/5-VM-Ubuntu-1-NodeTypes-Secure)
+The default template used is available here for [Windows](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/5-VM-Windows-1-NodeTypes-Secure-NSG)
+ and here for [Ubuntu](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/5-VM-Ubuntu-1-NodeTypes-Secure).
-The following command can create either Windows or Linux clusters, you need to specify the OS accordingly. The PowerShell/CLI commands also output the certificate in the specified *CertificateOutputFolder*; however, make sure certificate folder already created. The command takes in other parameters such as VM SKU as well.
+The following commands can create either Windows or Linux clusters, depending on how you specify the OS parameter. Both the PowerShell and CLI commands output the certificate in the specified *CertificateOutputFolder* (make sure the certificate folder location you specify already exists before running the command).
> [!NOTE]
-> The following PowerShell command only works with the Azure PowerShell `Az` module. To check the current version of Azure Resource Manager PowerShell version, run the following PowerShell command "Get-Module Az". Follow [this link](/powershell/azure/install-Az-ps) to upgrade your Azure Resource Manager PowerShell version.
->
->
+> The following PowerShell command only works with the Azure PowerShell `Az` module. To check the current version of Azure Resource Manager PowerShell version, run the following PowerShell command "Get-Module Az". Follow [this link](/powershell/azure/install-Az-ps) to upgrade your Azure Resource Manager PowerShell version.
Deploy the cluster using PowerShell:
az sf cluster create --resource-group $resourceGroupName --location $resourceGro
## Create a new cluster using your own X.509 certificate
-Use the following command to create cluster, if you have a certificate that you want to use to secure your cluster with.
+If you have an existing certificate that you want to use to secure your cluster, you can create the cluster with the following command.
If this is a CA signed certificate that you will end up using for other purposes as well, we recommend that you put the key vault into its own distinct resource group. This action lets you remove the compute and storage resource groups, including the resource group that contains your Service Fabric cluster, without losing your keys and secrets. **The resource group that contains your key vault *must be in the same region* as the cluster that is using it.** ### Use the default five node, one node type template that ships in the module
-The template that is used is available on the [Azure samples : Windows template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/5-VM-Windows-1-NodeTypes-Secure-NSG)
- and [Ubuntu template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/5-VM-Ubuntu-1-NodeTypes-Secure)
+
+The default template used is available here for [Windows](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/5-VM-Windows-1-NodeTypes-Secure-NSG)
+ and here for [Ubuntu](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/5-VM-Ubuntu-1-NodeTypes-Secure).
Deploy the cluster using PowerShell:
At this point, you have a secure cluster running in Azure. Next, [connect to you
For the JSON syntax and properties to use a template, see [Microsoft.ServiceFabric/clusters template reference](/azure/templates/microsoft.servicefabric/clusters). <!-- Links -->
-[azure-powershell]:https://docs.microsoft.com/powershell/azure/install-Az-ps
-[azure-CLI]:https://docs.microsoft.com/cli/azure/get-started-with-azure-cli
+[azure-powershell]:/powershell/azure/install-Az-ps
+[azure-CLI]:/cli/azure/get-started-with-azure-cli
[service-fabric-cluster-security]: service-fabric-cluster-security.md [customize-your-cluster-template]: service-fabric-cluster-creation-create-template.md
service-fabric Service Fabric Cluster Resource Manager Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-cluster-resource-manager-metrics.md
this.Partition.ReportLoad(new List<LoadMetric> { new LoadMetric("CurrentConnecti
A service can report on any of the metrics defined for it at creation time. If a service reports load for a metric that it is not configured to use, Service Fabric ignores that report. If there are other metrics reported at the same time that are valid, those reports are accepted. Service code can measure and report all the metrics it knows how to, and operators can specify the metric configuration to use without having to change the service code. ## Reporting load for a partition
-The previous section describes how service replicas or instances report load themselves. There is an additional option to dynamically report load with FabricClient. When reporting load for a partition, you may report for multiple partitions at once.
+The previous section describes how service replicas or instances report load themselves. There is an additional option to dynamically report load for a partition's replicas or instances through the Service Fabric API. When reporting load for a partition, you may report for multiple partitions at once.
Those reports will be used in exactly the same way as load reports that are coming from the replicas or instances themselves. Reported values will be valid until new load values are reported, either by the replica or instance or by reporting a new load value for a partition.
With this API, there are multiple ways to update load in the cluster:
- Both stateless and stateful services can update the load of all its secondary replicas or instances. - Both stateless and stateful services can update the load of a specific replica or instance on a node.
-It is also possible to combine any of those updates per partition at the same time.
+It is also possible to combine any of those updates per partition at the same time. A combination of load updates for a particular partition is specified through the PartitionMetricLoadDescription object, which contains the corresponding list of load updates, as shown in the example below. Load updates are represented through MetricLoadDescription objects, each of which can contain a _current_ or _predicted_ load value for a metric, specified by the metric name.
+
+> [!NOTE]
+> _Predicted metric load values_ are currently a _preview feature_. Predicted load values can be reported, but the feature that consumes them on the Service Fabric side is not yet enabled.
+>
Updating loads for multiple partitions is possible with a single API call, in which case the output will contain a response per partition. If a partition update is not successfully applied for any reason, updates for that partition will be skipped, and a corresponding error code for the targeted partition will be provided:
Updating loads for multiple partitions is possible with a single API call, in wh
- ReplicaDoesNotExist - Secondary replica or instance does not exist on a specified node. - InvalidOperation - Could happen in two cases: updating load for a partition that belongs to the System application, or updating predicted load when that feature is not enabled.
-If some of those errors are returned, you can update the input for a specific partition and retry the update for a specific partition.
+If some of those errors are returned, you can update the input for a specific partition and retry the update for it.
Code:
service-fabric Service Fabric Diagnostics Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-diagnostics-overview.md
Additionally, we even let users override health for entities. If your applicatio
### Watchdogs
-Generally, a watchdog is a separate service that can watch health and load across services, ping endpoints, and report health for anything in the cluster. This can help prevent errors that would not be detected based on the view of a single service. Watchdogs are also a good place to host code that performs remedial actions without user interaction (for example, cleaning up log files in storage at certain time intervals). You can find a sample watchdog service implementation [here](https://github.com/Azure-Samples/service-fabric-watchdog-service).
+Generally, a watchdog is a separate service that watches health and load across services, pings endpoints, and reports unexpected health events in the cluster. This can help prevent errors that may not be detected based only on the performance of a single service. Watchdogs are also a good place to host code that performs remedial actions that don't require user interaction, such as cleaning up log files in storage at certain time intervals. If you want a fully implemented, open-source Service Fabric watchdog service that includes an easy-to-use extensibility model and runs in both Windows and Linux clusters, see the [FabricObserver](https://github.com/Azure-Samples/service-fabric-watchdog-service) project. FabricObserver is production-ready software. We encourage you to deploy FabricObserver to your test and production clusters and extend it to meet your needs, either through its plug-in model (the recommended approach) or by forking it and writing your own built-in observers.
## Infrastructure (performance) monitoring Now that we've covered the diagnostics in your application and the platform, how do we know the hardware is functioning as expected? Monitoring your underlying infrastructure is a key part of understanding the state of your cluster and your resource utilization. Measuring system performance depends on many factors that can be subjective depending on your workloads. These factors are typically measured through performance counters. These performance counters can come from a variety of sources including the operating system, the .NET framework, or the Service Fabric platform itself. Some scenarios in which they would be useful are
service-fabric Service Fabric Get Started Eclipse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-get-started-eclipse.md
Title: Azure Service Fabric plug-in for Eclipse description: Learn about getting started with Azure Service Fabric in Java using eclipse and the Service Fabric provided plug-in. -- Last updated 04/06/2018--+ # Service Fabric plug-in for Eclipse Java application development
Install Eclipse Neon or later from the [Eclipse site](https://www.eclipse.org).
- To check for and install updates for Eclipse, go to **Help** > **Check for Updates**. Install the Service Fabric plug-in, in Eclipse, go to **Help** > **Install New Software**.
-1. In the **Work with** box, enter https:\//dl.microsoft.com/eclipse.
+1. In the **Work with** box, enter `https://servicefabricdownloads.blob.core.windows.net/eclipse/`.
2. Click **Add**. ![Service Fabric plug-in for Eclipse][sf-eclipse-plugin-install]
If you already have the Service Fabric plug-in installed, install the latest ver
3. Once you update the Service Fabric plug-in, also refresh the Gradle project. Right click **build.gradle**, then select **Refresh**. > [!NOTE]
-> If installing or updating the Service Fabric plug-in is slow, it might be due to an Eclipse setting. Eclipse collects metadata on all changes to update sites that are registered with your Eclipse instance. To speed up the process of checking for and installing a Service Fabric plug-in update, go to **Available Software Sites**. Clear the check boxes for all sites except for the one that points to the Service Fabric plug-in location (https:\//dl.microsoft.com/eclipse/azure/servicefabric).
+> If installing or updating the Service Fabric plug-in is slow, it might be due to an Eclipse setting. Eclipse collects metadata on all changes to update sites that are registered with your Eclipse instance. To speed up the process of checking for and installing a Service Fabric plug-in update, go to **Available Software Sites**. Clear the check boxes for all sites except for the one that points to the Service Fabric plug-in location (*https://servicefabricdownloads.blob.core.windows.net/eclipse/*).
> [!NOTE] >If Eclipse isn't working as expected on your Mac, or requires you to run it as super user, go to the **ECLIPSE_INSTALLATION_PATH** folder and navigate to the subfolder **Eclipse.app/Contents/MacOS**. Start Eclipse by running `./eclipse`.
service-fabric Service Fabric Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-versions.md
Support for Service Fabric on a specific OS ends when support for the OS version
| Ubuntu 18.04 | April 2028 | <a href="https://wiki.ubuntu.com/Releases">Ubuntu lifecycle</a>| | Ubuntu 16.04 | April 2024 | <a href="https://wiki.ubuntu.com/Releases">Ubuntu lifecycle</a>|
-## Supported .NET runtimes
-
-The following table lists the .NET runtimes supported by Service Fabric:
-
-| Service Fabric runtime | Supported .NET runtimes for Windows |Supported .NET runtimes for Linux |
-| | | |
-| 8.0 CU1 | .NET 5.0, >= .NET Core 2.1, All >= .NET Framework 4.5 | >= .NET Core 2.1|
-| 8.0 RTO | .NET 5.0, >= .NET Core 2.1, All >= .NET Framework 4.5 | >= .NET Core 2.1|
- ## Service Fabric version name and number reference The following table lists the version names of Service Fabric and their corresponding version numbers.
site-recovery Azure To Azure Troubleshoot Replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/azure-to-azure-troubleshoot-replication.md
Following are some of the most common issues.
### App-consistency not enabled on Linux servers
-**How to fix** : Azure Site Recovery for Linux Operation System supports application custom scripts for app-consistency. The custom script with pre and post options will be used by the Azure Site Recovery Mobility Agent for app-consistency. [Here](/azure/site-recovery/site-recovery-faq.yml#replication) are the steps to enable it.
+**How to fix** : Azure Site Recovery for the Linux operating system supports application custom scripts for app-consistency. The custom script with pre and post options will be used by the Azure Site Recovery Mobility Agent for app-consistency. [Here](/azure/site-recovery/site-recovery-faq#replication) are the steps to enable it.
### More causes because of VSS-related issues:
Restart the following
- VSS service. - Azure Site Recovery VSS Provider.-- VDS service.
+- VDS service.
site-recovery Site Recovery Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/site-recovery-sql.md
BCDR technologies Always On, active geo-replication, and auto-failover groups ha
[Create a recovery plan](site-recovery-create-recovery-plans.md) with application and web tier virtual machines. The following steps show how to add failover of the database tier:
-1. Import the scripts to fail over SQL Availability Group in both a [Resource Manager virtual machine](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/asr-automation-recovery/scripts/ASR-SQL-FailoverAG.ps1) and a [classic virtual machine](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/asr-automation-recovery/scripts/ASR-SQL-FailoverAGClassic.ps1). Import the scripts into your Azure Automation account.
+1. Import the scripts to fail over SQL Availability Group in both a [Resource Manager virtual machine](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/demos/asr-automation-recovery/scripts/ASR-SQL-FailoverAG.ps1) and a [classic virtual machine](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/demos/asr-automation-recovery/scripts/ASR-SQL-FailoverAGClassic.ps1). Import the scripts into your Azure Automation account.
[![Image of a "Deploy to Azure" logo](https://azurecomcdn.azureedge.net/mediahandler/acomblog/media/Default/blog/c4803408-340e-49e3-9a1f-0ed3f689813d.png)](https://aka.ms/asr-automationrunbooks-deploy)
spring-cloud Concepts Blue Green Deployment Strategies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/concepts-blue-green-deployment-strategies.md
This article describes the blue-green deployment support in Azure Spring Cloud.
-Azure Spring Cloud (Standard tier and higher) permits two deployments for every app, only one of which receives production traffic. This pattern is commonly known as blue-green deployment. Azure Spring Cloud's support for blue-green deployment, together with a [Continuous Delivery (CD)](/azure/devops/learn/what-is-continuous-delivery) pipeline and rigorous automated testing, allows agile application deployments with high confidence.
+Azure Spring Cloud (Standard tier and higher) permits two deployments for every app, only one of which receives production traffic. This pattern is commonly known as blue-green deployment. Azure Spring Cloud's support for blue-green deployment, together with a [Continuous Delivery (CD)](/devops/deliver/what-is-continuous-delivery) pipeline and rigorous automated testing, allows agile application deployments with high confidence.
## Alternating deployments
spring-cloud How To Bind Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/how-to-bind-cosmos.md
Prerequisites:
* A deployed Azure Spring Cloud instance. Follow our [quickstart on deploying via the Azure CLI](./quickstart.md) to get started. * An Azure Cosmos DB account with a minimum permission level of Contributor.
-## Bind Azure Cosmos DB
-
-Azure Cosmos DB has five different API types that support binding. The following procedure shows how to use them:
-
-1. Create an Azure Cosmos DB database. Refer to the quickstart on [creating a database](../cosmos-db/create-cosmosdb-resources-portal.md) for help.
-
-1. Record the name of your database. For this procedure, the database name is **testdb**.
+## Prepare your Java project
1. Add one of the following dependencies to your Azure Spring Cloud application's pom.xml file. Choose the dependency that is appropriate for your API type.
Azure Cosmos DB has five different API types that support binding. The following
</dependency> ```
-1. Use `az spring-cloud app update` to update the current deployment, or use `az spring-cloud app deployment create` to create a new deployment. These commands will either update or create the application with the new dependency.
+1. Update the current app by running `az spring-cloud app deploy`, or create a new deployment for this change by running `az spring-cloud app deployment create`.
+
+## Bind your app to the Azure Cosmos DB
+
+#### [Service Binding](#tab/Service-Binding)
+Azure Cosmos DB has five different API types that support binding. The following procedure shows how to use them:
+
+1. Create an Azure Cosmos DB database. Refer to the quickstart on [creating a database](../cosmos-db/create-cosmosdb-resources-portal.md) for help.
+
+1. Record the name of your database. For this procedure, the database name is **testdb**.
1. Go to your Azure Spring Cloud service page in the Azure portal. Go to **Application Dashboard** and select the application to bind to Azure Cosmos DB. This application is the same one you updated or deployed in the previous step.
Azure Cosmos DB has five different API types that support binding. The following
azure.cosmosdb.database=testdb ```
+#### [Terraform](#tab/Terraform)
+The following Terraform script shows how to set up an Azure Spring Cloud app with Azure Cosmos DB MongoDB API.
+```terraform
+provider "azurerm" {
+ features {}
+}
+
+variable "application_name" {
+ type = string
+ description = "The name of your application"
+ default = "demo-abc"
+}
+
+resource "azurerm_resource_group" "example" {
+ name = "example-resources"
+ location = "West Europe"
+}
+
+resource "azurerm_cosmosdb_account" "cosmosdb" {
+ name = "cosmosacct-${var.application_name}-001"
+ resource_group_name = azurerm_resource_group.example.name
+ location = azurerm_resource_group.example.location
+ offer_type = "Standard"
+ kind = "MongoDB"
+
+ consistency_policy {
+ consistency_level = "Session"
+ }
+
+ geo_location {
+ failover_priority = 0
+ location = azurerm_resource_group.example.location
+ }
+}
+
+resource "azurerm_cosmosdb_mongo_database" "cosmosdb" {
+ name = "cosmos-${var.application_name}-001"
+ resource_group_name = azurerm_cosmosdb_account.cosmosdb.resource_group_name
+ account_name = azurerm_cosmosdb_account.cosmosdb.name
+}
+
+resource "azurerm_spring_cloud_service" "example" {
+ name = "${var.application_name}"
+ resource_group_name = azurerm_resource_group.example.name
+ location = azurerm_resource_group.example.location
+}
+
+resource "azurerm_spring_cloud_app" "example" {
+ name = "${var.application_name}-app"
+ resource_group_name = azurerm_resource_group.example.name
+ service_name = azurerm_spring_cloud_service.example.name
+ is_public = true
+ https_only = true
+}
+
+resource "azurerm_spring_cloud_java_deployment" "example" {
+ name = "default"
+ spring_cloud_app_id = azurerm_spring_cloud_app.example.id
+ cpu = 2
+ memory_in_gb = 4
+ instance_count = 2
+ jvm_options = "-XX:+PrintGC"
+ runtime_version = "Java_11"
+
+ environment_variables = {
+ "azure.cosmosdb.uri" : azurerm_cosmosdb_account.cosmosdb.connection_strings[0]
+ "azure.cosmosdb.database" : azurerm_cosmosdb_mongo_database.cosmosdb.name
+ }
+}
+
+resource "azurerm_spring_cloud_active_deployment" "example" {
+ spring_cloud_app_id = azurerm_spring_cloud_app.example.id
+ deployment_name = azurerm_spring_cloud_java_deployment.example.name
+}
+```
+
+## Next steps
-In this article, you learned how to bind your Azure Spring Cloud application to an Azure Cosmos DB database. To learn more about binding services to your application, see [Bind to an Azure Cache for Redis cache](./how-to-bind-redis.md).
+In this article, you learned how to bind your Azure Spring Cloud application to an Azure Cosmos DB database. To learn more about binding services to your application, see [Bind to an Azure Cache for Redis cache](./how-to-bind-redis.md).
spring-cloud How To Bind Redis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/how-to-bind-redis.md
Instead of manually configuring your Spring Boot applications, you can automatic
If you don't have a deployed Azure Spring Cloud instance, follow the steps in the [quickstart on deploying an Azure Spring Cloud app](./quickstart.md).
-## Bind Azure Cache for Redis
-
+## Prepare your Java project
1. Add the following dependency to your project's pom.xml file: ```xml
If you don't have a deployed Azure Spring Cloud instance, follow the steps in th
1. Update the current deployment using `az spring-cloud app update` or create a new deployment using `az spring-cloud app deployment create`. +
+## Bind your app to the Azure Cache for Redis
+
+#### [Service Binding](#tab/Service-Binding)
1. Go to your Azure Spring Cloud service page in the Azure portal. Go to **Application Dashboard** and select the application to bind to Azure Cache for Redis. This application is the same one you updated or deployed in the previous step. 1. Select **Service binding** and select **Create service binding**. Fill out the form, being sure to select the **Binding type** value **Azure Cache for Redis**, your Azure Cache for Redis server, and the **Primary** key option.
If you don't have a deployed Azure Spring Cloud instance, follow the steps in th
spring.redis.password=abc****** spring.redis.ssl=true ```-
+#### [Terraform](#tab/Terraform)
+
+The following Terraform script shows how to set up an Azure Spring Cloud app with Azure Cache for Redis.
+
+```terraform
+provider "azurerm" {
+ features {}
+}
+
+variable "application_name" {
+ type = string
+ description = "The name of your application"
+ default = "demo-abc"
+}
+
+resource "azurerm_resource_group" "example" {
+ name = "example-resources"
+ location = "West Europe"
+}
+
+resource "azurerm_redis_cache" "redis" {
+ name = "redis-${var.application_name}-001"
+ resource_group_name = azurerm_resource_group.example.name
+ location = azurerm_resource_group.example.location
+ capacity = 0
+ family = "C"
+ sku_name = "Standard"
+ enable_non_ssl_port = false
+ minimum_tls_version = "1.2"
+}
+
+resource "azurerm_spring_cloud_service" "example" {
+ name = "${var.application_name}"
+ resource_group_name = azurerm_resource_group.example.name
+ location = azurerm_resource_group.example.location
+}
+
+resource "azurerm_spring_cloud_app" "example" {
+ name = "${var.application_name}-app"
+ resource_group_name = azurerm_resource_group.example.name
+ service_name = azurerm_spring_cloud_service.example.name
+ is_public = true
+ https_only = true
+}
+
+resource "azurerm_spring_cloud_java_deployment" "example" {
+ name = "default"
+ spring_cloud_app_id = azurerm_spring_cloud_app.example.id
+ cpu = 2
+ memory_in_gb = 4
+ instance_count = 2
+ jvm_options = "-XX:+PrintGC"
+ runtime_version = "Java_11"
+
+ environment_variables = {
+ "spring.redis.host" = azurerm_redis_cache.redis.hostname
+ "spring.redis.password" = azurerm_redis_cache.redis.primary_access_key
+ "spring.redis.port" = "6380"
+ "spring.redis.ssl" = "true"
+ }
+}
+
+resource "azurerm_spring_cloud_active_deployment" "example" {
+ spring_cloud_app_id = azurerm_spring_cloud_app.example.id
+ deployment_name = azurerm_spring_cloud_java_deployment.example.name
+}
+```
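If you use this script, you can apply it with the standard Terraform workflow; this sketch assumes Terraform is installed and authenticated to your Azure subscription.

```bash
terraform init                   # download the azurerm provider
terraform plan -out main.tfplan  # preview the resources to be created
terraform apply main.tfplan      # create the cache, service, app, and deployment
```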
+ ## Next steps
-In this article, you learned how to bind your Azure Spring Cloud application to Azure Cache for Redis. To learn more about binding services to your application, see [Bind to an Azure Database for MySQL instance](./how-to-bind-mysql.md).
+In this article, you learned how to bind your Azure Spring Cloud application to Azure Cache for Redis. To learn more about binding services to your application, see [Bind to an Azure Database for MySQL instance](./how-to-bind-mysql.md).
spring-cloud How To Deploy In Azure Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/how-to-deploy-in-azure-virtual-network.md
Select the virtual network **azure-spring-cloud-vnet** you previously created.
![Screenshot that shows the Access control screen.](./media/spring-cloud-v-net-injection/access-control.png)
-1. Assign the *Owner* role to the **Azure Spring Cloud Resource Provider**. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+1. Assign the *Owner* role to the **Azure Spring Cloud Resource Provider**. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md#step-2-open-the-add-role-assignment-pane).
-You can also do this step by running the following Azure CLI command:
+ ![Screenshot that shows owner assignment to resource provider.](./media/spring-cloud-v-net-injection/assign-owner-resource-provider.png)
-```azurecli
-VIRTUAL_NETWORK_RESOURCE_ID=`az network vnet show \
- --name ${NAME_OF_VIRTUAL_NETWORK} \
- --resource-group ${RESOURCE_GROUP_OF_VIRTUAL_NETWORK} \
- --query "id" \
- --output tsv`
-
-az role assignment create \
- --role "Owner" \
- --scope ${VIRTUAL_NETWORK_RESOURCE_ID} \
- --assignee e8de9221-a19c-4c81-b814-fd37c6caf9d2
-```
+ You can also do this step by running the following Azure CLI command:
+
+ ```azurecli
+ VIRTUAL_NETWORK_RESOURCE_ID=`az network vnet show \
+ --name ${NAME_OF_VIRTUAL_NETWORK} \
+ --resource-group ${RESOURCE_GROUP_OF_VIRTUAL_NETWORK} \
+ --query "id" \
+ --output tsv`
+
+ az role assignment create \
+ --role "Owner" \
+ --scope ${VIRTUAL_NETWORK_RESOURCE_ID} \
+ --assignee e8de9221-a19c-4c81-b814-fd37c6caf9d2
+ ```
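As a quick check, you can list the assignment on the virtual network scope; this sketch reuses the `VIRTUAL_NETWORK_RESOURCE_ID` variable from the previous command.

```azurecli
# Verify the Owner role assignment on the virtual network (reuses the variable above).
az role assignment list \
    --scope ${VIRTUAL_NETWORK_RESOURCE_ID} \
    --role "Owner" \
    --output table
```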
## Deploy an Azure Spring Cloud instance
static-web-apps Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/configuration.md
Common use cases for wildcard routes include:
Single Page Applications often rely on client-side routing. These client-side routing rules update the browser's window location without making requests back to the server. If you refresh the page, or navigate directly to URLs generated by client-side routing rules, a server-side fallback route is required to serve the appropriate HTML page (which is generally the _index.html_ for your client-side app).
-You can configure your app to use rules that implement a fallback route as shown in the following example that uses a path wildcard with file filter:
+You can define a fallback rule by adding a `navigationFallback` section. The following example returns _/index.html_ for all static file requests that do not match a deployed file.
+
+```json
+{
+ "navigationFallback": {
+    "rewrite": "/index.html"
+ }
+}
+```
+
+You can control which requests return the fallback file by defining a filter. In the following example, requests for certain routes in the _/images_ folder and all files in the _/css_ folder are excluded from returning the fallback file.
```json {
Given the example file structure below, the following outcomes are possible with this
| _/css/global.css_ | The stylesheet file | `200` | | Any other file outside the _/images_ or _/css_ folders | The _/index.html_ file | `200` |
+> [!IMPORTANT]
+> If you are migrating from the deprecated [_routes.json_](https://github.com/Azure/static-web-apps/wiki/routes.json-reference-(deprecated)) file, do not include the legacy fallback route (`"route": "/*"`) in the [routing rules](#routes).
+ ## Global headers The `globalHeaders` section provides a set of [HTTP headers](https://developer.mozilla.org/docs/Web/HTTP/Headers) applied to each response, unless overridden by a [route header](#route-headers) rule, otherwise the union of both the headers from the route and the global headers is returned.
static-web-apps Deploy Nextjs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/deploy-nextjs.md
Title: "Tutorial: Deploy static-rendered Next.js websites on Azure Static Web Apps" description: "Generate and deploy Next.js dynamic sites with Azure Static Web Apps." -+ Last updated 05/08/2020-+
static-web-apps Deploy Nuxtjs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/deploy-nuxtjs.md
Title: "Tutorial: Deploy server-rendered Nuxt.js websites on Azure Static Web Apps" description: "Generate and deploy Nuxt.js dynamic sites with Azure Static Web Apps." -+ Last updated 05/08/2020-+
static-web-apps Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/getting-started.md
Next, open Visual Studio Code and go to **File > Open Folder** to open the clone
:::image type="content" source="media/getting-started/extension-create-button.png" alt-text="Application name":::
-1. The command palate opens at the top of the editor and prompts you to select a subscription name.
+1. The command palette opens at the top of the editor and prompts you to select a subscription name.
Select your subscription and press <kbd>Enter</kbd>.
storage Storage Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-plan-manage-costs.md
Title: Plan and manage costs for Azure Blob storage
-description: Learn how to plan for and manage costs for Azure Blob storage by using cost analysis in Azure portal.
+ Title: Plan and manage costs for Azure Blob Storage
+description: Learn how to plan for and manage costs for Azure Blob Storage by using cost analysis in Azure portal.
Previously updated : 11/13/2020 Last updated : 06/21/2021
-# Plan and manage costs for Azure Blob storage
+# Plan and manage costs for Azure Blob Storage
-This article helps you plan and manage costs for Azure Blob storage. First, estimate costs by using the Azure pricing calculator. After you create your storage account, optimize the account so that you pay only for what you need. Use cost management features to set budgets and monitor costs. You can also review forecasted costs, and monitor spending trends to identify areas where you might want to act.
+This article helps you plan and manage costs for Azure Blob Storage. First, estimate costs by using the Azure pricing calculator. After you create your storage account, optimize the account so that you pay only for what you need. Use cost management features to set budgets and monitor costs. You can also review forecasted costs, and monitor spending trends to identify areas where you might want to act.
-Keep in mind that costs for Blob storage are only a portion of the monthly costs in your Azure bill. Although this article explains how to estimate and manage costs for Blob storage, you're billed for all Azure services and resources used for your Azure subscription, including the third-party services. After you're familiar with managing costs for Blob storage, you can apply similar methods to manage costs for all the Azure services used in your subscription.
+Keep in mind that costs for Blob Storage are only a portion of the monthly costs in your Azure bill. Although this article explains how to estimate and manage costs for Blob Storage, you're billed for all Azure services and resources used for your Azure subscription, including the third-party services. After you're familiar with managing costs for Blob Storage, you can apply similar methods to manage costs for all the Azure services used in your subscription.
## Estimate costs
Use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculato
4. Modify the remaining options to see their effect on your estimate.
- > [!NOTE]
- > You can pay for Azure Blob storage charges with your Azure Prepayment (previously called monetary commitment) credit. However, you can't use Azure Prepayment credit to pay for charges for third party products and services including those from the Azure Marketplace.
+## Understand the full billing model for Azure Blob Storage
+
+Azure Blob Storage runs on Azure infrastructure that accrues costs when you deploy new resources. Be aware that additional infrastructure costs might also accrue.
+
+### How you're charged for Azure Blob Storage
+
+When you create or use Blob Storage resources, you'll be charged for the following meters:
+
+| Meter | Unit |
+| --- | --- |
+| Data storage | Per GB / per month|
+| Operations | Per transaction |
+| Data transfer | Per GB |
+| Metadata | Per GB / per month<sup>1</sup> |
+| Blob index tags | Per tag<sup>2</sup> |
+| Change feed | Per logged change<sup>2</sup> |
+| Encryption scopes | Per month<sup>2</sup> |
+| Query acceleration | Per GB scanned & Per GB returned |
+
+<sup>1</sup> Applies only to accounts that have a hierarchical namespace.<br />
+<sup>2</sup> Applies only if you enable the feature.<br />
+
+Data traffic might also incur networking costs. For details, see [Bandwidth pricing](https://azure.microsoft.com/pricing/details/data-transfers/).
+
+At the end of your billing cycle, the charges for each meter are summed. Your bill or invoice shows a section for all Azure Blob Storage costs. There's a separate line item for each meter.
+
+Data storage and metadata are billed per GB on a monthly basis. For data and metadata stored for less than a month, you can estimate the impact on your monthly bill by calculating the cost of each GB per day. You can use a similar approach to estimate the cost of encryption scopes that are in use for less than a month. The number of days in any given month varies. Therefore, to obtain the best approximation of your costs in a given month, make sure to divide the monthly cost by the number of days that occur in that month.
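As a worked example with a hypothetical unit price (substitute the rate from the pricing page): 100 GB stored for 10 days of a 30-day month at $0.0184 per GB per month comes to roughly $0.61.

```bash
# Hypothetical unit price of $0.0184 per GB per month; not a published rate.
echo "scale=4; 0.0184 * 100 * 10 / 30" | bc   # ≈ 0.6133 USD
```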
+
+### Finding the unit price for each meter
+
+To find unit prices, open the correct pricing page. If you've enabled the hierarchical namespace feature on your account, see the [Azure Data Lake Storage Gen2 pricing](https://azure.microsoft.com/pricing/details/storage/data-lake/) page. If you haven't enabled this feature, see the [Block blob pricing](https://azure.microsoft.com/pricing/details/storage/blobs/) page.
+
+On the pricing page, apply the appropriate redundancy, region, and currency filters. Prices for each meter appear in a table. Prices differ based on other settings in your account, such as data redundancy options, access tier, and performance tier.
+
+### Flat namespace accounts and transaction pricing
+
+Clients can make a request by using either the Blob Storage endpoint or the Data Lake Storage endpoint of your account. To learn more about storage account endpoints, see [Storage account endpoints](storage-account-overview.md#storage-account-endpoints).
+
+Transaction prices that appear in the [Block blob pricing](https://azure.microsoft.com/pricing/details/storage/blobs/) page apply only to requests that use the Blob Storage endpoint (for example, `https://<storage-account>.blob.core.windows.net`). The listed prices do not apply to requests that use the Data Lake Storage Gen2 endpoint (for example, `https://<storage-account>.dfs.core.windows.net`). For the transaction price of those requests, open the [Azure Data Lake Storage Gen2 pricing](https://azure.microsoft.com/pricing/details/storage/data-lake/) page and select the **Flat Namespace** option.
+
+> [!div class="mx-imgBorder"]
+> ![flat namespace option](media/storage-plan-manage-costs/select-flat-namespace.png)
+
+Requests to the Data Lake Storage Gen2 endpoint can originate from any of the following sources:
+
+- Workloads that use the [Azure Blob File System (ABFS) driver](https://hadoop.apache.org/docs/stable/hadoop-azure/abfs.html).
+
+- REST calls that use the [Azure Data Lake Store REST API](/rest/api/storageservices/data-lake-storage-gen2).
+
+- Applications that use Data Lake Storage Gen2 APIs from an Azure Storage client library.
++
+### Using Azure Prepayment with Azure Blob Storage
+
+You can pay for Azure Blob Storage charges with your Azure Prepayment (previously called monetary commitment) credit. However, you can't use Azure Prepayment credit to pay for charges for third party products and services including those from the Azure Marketplace.
## Optimize costs
This section covers each option in more detail.
You can save money on storage costs for blob data with Azure Storage reserved capacity. Azure Storage reserved capacity offers you a discount on capacity for block blobs and for Azure Data Lake Storage Gen2 data in standard storage accounts when you commit to a reservation for either one year or three years. A reservation provides a fixed amount of storage capacity for the term of the reservation. Azure Storage reserved capacity can significantly reduce your capacity costs for block blobs and Azure Data Lake Storage Gen2 data.
-To learn more, see [Optimize costs for Blob storage with reserved capacity](../blobs/storage-blob-reserved-capacity.md).
+To learn more, see [Optimize costs for Blob Storage with reserved capacity](../blobs/storage-blob-reserved-capacity.md).
#### Organize data into access tiers You can reduce costs by placing blob data into the most cost-effective access tiers. Choose from three tiers that are designed to optimize your costs around data use. For example, the *hot* tier has a higher storage cost but lower access cost. Therefore, if you plan to access data frequently, the hot tier might be the most cost-efficient choice. If you plan to access data less frequently, the *cool* or *archive* tier might make the most sense because it raises the cost of accessing data while reducing the cost of storing data.
-To learn more, see [Azure Blob storage: hot, cool, and archive access tiers](../blobs/storage-blob-storage-tiers.md?tabs=azure-portal).
+To learn more, see [Azure Blob Storage: hot, cool, and archive access tiers](../blobs/storage-blob-storage-tiers.md?tabs=azure-portal).
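For example, you might change a single blob's tier with the Azure CLI; the account, container, and blob names here are placeholders.

```azurecli
# Move one blob to the archive tier (placeholder names).
az storage blob set-tier \
    --account-name <storage-account> \
    --container-name <container> \
    --name archived-report.pdf \
    --tier Archive
```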
#### Automatically move data between access tiers Use lifecycle management policies to periodically move data between tiers to save the most money. These policies move data by using rules that you specify. For example, you might create a rule that moves blobs to the archive tier if that blob hasn't been modified in 90 days. By creating policies that adjust the access tier of your data, you can design the least expensive storage options for your needs.
-To learn more, see [Manage the Azure Blob storage lifecycle](../blobs/storage-lifecycle-management-concepts.md?tabs=azure-portal)
+To learn more, see [Manage the Azure Blob Storage lifecycle](../blobs/storage-lifecycle-management-concepts.md?tabs=azure-portal)
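As a sketch, a rule like the 90-day example above, saved locally in a hypothetical `policy.json` file, might be applied as follows.

```azurecli
# Apply a lifecycle management policy from a local JSON file (placeholder names).
az storage account management-policy create \
    --account-name <storage-account> \
    --resource-group <resource-group> \
    --policy @policy.json
```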
## Create budgets
You can also [export your cost data](../../cost-management-billing/costs/tutoria
## Next steps - Learn more on how pricing works with Azure Storage. See [Azure Storage Overview pricing](https://azure.microsoft.com/pricing/details/storage/).-- [Optimize costs for Blob storage with reserved capacity](../blobs/storage-blob-reserved-capacity.md).
+- [Optimize costs for Blob Storage with reserved capacity](../blobs/storage-blob-reserved-capacity.md).
- Learn [how to optimize your cloud investment with Azure Cost Management](../../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn more about managing costs with [cost analysis](../../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn about how to [prevent unexpected costs](../../cost-management-billing/cost-management-billing-overview.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
storage File Sync Networking Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/file-sync/file-sync-networking-endpoints.md
switch($azureEnvironment) {
"AzureUSGovernment" { $storageSyncSuffix = "afs.azure.us"
+ }
+
+ "AzureChinaCloud" {
+ $storageSyncSuffix = "afs.azure.cn"
} default {
storage File Sync Networking Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/store-cleardb-faq.md
See [ClearDB](https://w2.cleardb.net/) for the latest information on that servic
You have several other options for hosting MySQL in Azure: * [Azure Database for MySQL](https://azure.microsoft.com/services/mysql/)
-* [MySQL cluster running on an Azure VM](https://github.com/azure/azure-quickstart-templates/tree/master/mysql-replication)
+* [MySQL cluster running on an Azure VM](https://github.com/azure/azure-quickstart-templates/tree/master/application-workloads/mysql/mysql-replication)
* [Single instance of MySQL running on an Azure VM](/previous-versions/azure/virtual-machines/windows/classic/mysql-2008r2?toc=%2fazure%2fvirtual-machines%2fwindows%2fclassic%2ftoc.json)
This depends on the type of subscription you are using. Here are some commonly u
* [Enterprise Agreement (EA)](https://azure.microsoft.com/pricing/enterprise-agreement/): EA customers are billed against their EA each quarter for all of their Azure Marketplace (third-party) purchases on a separate, consolidated invoice. You are billed outside the Azure Prepayment (previously called monetary commitment) for any marketplace purchases. Please note that, at this time, Azure Store is not available to customers enrolled in Azerbaijan, Croatia, Norway and Puerto Rico. ## Why was I charged $3.50 for a Web app + MySQL from the Azure Marketplace?
-The default database option is Titan, which is $3.50. We donΓÇÖt show the cost during database creation, and you may mistakenly purchase a database you didnΓÇÖt intend to. We are trying to find a way to improve the experience but until then you must check all your selected pricing tiers for web app and database before clicking **Create** and starting the deployment of the resources.
+The default database option is Titan, which is $3.50. We don't show the cost during database creation, and you may mistakenly purchase a database you didn't intend to. We are trying to find a way to improve the experience but until then you must check all your selected pricing tiers for web app and database before clicking **Create** and starting the deployment of the resources.
## I am running MySQL on my own Azure virtual machine. Can I connect my Azure web app to my database? Yes. You can connect your web app to your database as long as your Azure VM has given remote access to your web app. For more information, see [Install MySQL on a virtual machine](/previous-versions/azure/virtual-machines/windows/classic/mysql-2008r2?toc=%2fazure%2fvirtual-machines%2fwindows%2fclassic%2ftoc.json).
Use Basic or a higher pricing tier for Web Apps. For ClearDB, we recommend eithe
## How do I upgrade my ClearDB database from one plan to another? In the [Azure portal](https://portal.azure.com), you can scale up a ClearDB shared hosting database. Read this [article](https://blogs.msdn.microsoft.com/appserviceteam/2016/10/06/upgrade-your-cleardb-mysql-database-in-azure-portal/) to learn more. We currently don't support upgrade for ClearDB Premium clusters in the Azure portal.
-## I canΓÇÖt see my ClearDB database in Azure portal?
+## Why can't I see my ClearDB database in the Azure portal?
If you created a ClearDB database in classic, you will not be able to see your database in the [Azure portal](https://portal.azure.com). There is no work-around for this scenario. ## Who do I contact for support when my database is down?
storsimple Storsimple Virtual Array Failover Dr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storsimple/storsimple-virtual-array-failover-dr.md
Perform the following steps to restore the device to a target StorSimple virtual
1. Select and click the StorSimple device that was used as the target device for the failover process. 2. Go to **Settings > Management > Shares** (or **Volumes** if iSCSI server). In the **Shares** blade, you can view all the shares (volumes) from the old device. ![Screenshot of the Devices blade. The target device is listed with a status of Online.](./media/storsimple-virtual-array-failover-dr/failover9.png)
-14. You will need to [create a DNS alias](https://web.archive.org/web/20150307000707/http://support.microsoft.com:80/kb/168322) so that all the applications that are trying to connect can get redirected to the new device.
+14. You will need to [create a DNS alias](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc772053(v=ws.11)?redirectedfrom=MSDN) so that all the applications that are trying to connect can get redirected to the new device.
## Errors during DR
If there are StorSimple devices that were registered just before a disaster occu
## Next steps
-Learn more about how to [administer your StorSimple Virtual Array using the local web UI](storsimple-ova-web-ui-admin.md).
+Learn more about how to [administer your StorSimple Virtual Array using the local web UI](storsimple-ova-web-ui-admin.md).
synapse-analytics Overview Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/overview-faq.md
- Title: FAQ - Azure Synapse Analytics
-description: FAQ for Azure Synapse Analytics
----- Previously updated : 10/25/2020----
-# Azure Synapse Analytics frequently asked questions
-
-In this guide, you'll find the most frequently asked questions for Azure Synapse Analytics.
-
-## General
-
-### Q: How can I use RBAC roles to secure my workspace?
-
-A: Azure Synapse introduces a number of roles and scopes to assign them on that will simplify securing your workspace.
-
-Synapse RBAC roles:
-* Synapse Administrator
-* Synapse SQL Administrator
-* Synapse Spark Administrator
-* Synapse Contributor (preview)
-* Synapse Artifact Publisher (preview)
-* Synapse Artifact User (preview)
-* Synapse Compute Operator (preview)
-* Synapse Credential User (preview)
-
-To secure your Synapse workspace, assign the RBAC Roles to these RBAC scopes:
-* Workspaces
-* Spark pools
-* Integration runtimes
-* Linked services
-* Credentials
-
-Additionally, with dedicated SQL pools you have all the same security features that you know and love.
-
-### Q: How do I control dedicated SQL pools, serverless SQL pools, and serverless Spark pools?
-
-A: As a starting point, Azure Synapse works with the built-in cost analysis and cost alerts available at the Azure subscription level.
--- Dedicated SQL pools - you have direct visibility into the cost and control over the cost, because you create and specify the sizes of dedicated SQL pools. You can further control your which users can create or scale dedicated SQL pools with Azure RBAC roles.--- Serverless SQL pools - you have monitoring and cost management controls that let you cap spending at a daily, weekly, and monthly level. [See Cost management for serverless SQL pool](./sql/data-processed.md) for more information. --- Serverless Spark pools - you can restrict who can create Spark pools with Synapse RBAC roles. -
-### Q: Will Synapse workspace support folder organization of objects and granularity at GA?
-
-A: Synapse workspaces supports user-defined folders.
-
-### Q: Can I link more than one Power BI workspace to a single Azure Synapse Workspace?
-
-A: Currently, you can only link a single Power BI workspace to an Azure Synapse Workspace.
-
-### Q: Is Synapse Link to Cosmos DB GA?
-
-A: Synapse Link for Apache Spark is GA. Synapse Link for serverless SQL pool is in Public Preview.
-
-### Q: Does Azure Synapse workspace Support CI/CD?
-
-A: Yes! All Pipeline artifacts, notebooks, SQL scripts, and Spark job definitions will reside in Git. All pool definitions will be stored in Git as ARM Templates. Dedicated SQL pool objects (schemas, tables, views, etc.) will be managed with database projects with CI/CD support.
-
-## Pipelines
-
-### Q: How do I ensure I know what credential is being used to run a pipeline?
-
-A: Each activity in a Synapse Pipeline is executed using the credential specified inside the linked service.
-
-### Q: Are SSIS IRs supported in Synapse Integrate?
-
-A: Not at this time.
-
-### Q: How do I migrate existing pipelines from Azure Data Factory to an Azure Synapse workspace?
-
-A: At this time, you must manually recreate your Azure Data Factory pipelines and related artifacts by exporting the JSON from the original pipeline and importing it into your Synapse workspace.
-
-## Apache Spark
-
-### Q: What is the difference between Apache Spark for Synapse and Apache Spark?
-
-A: Apache Spark for Synapse is Apache Spark with added support for integrations with other services (AAD, AzureML, etc.) and additional libraries (mssparktuils, Hummingbird) and pre-tuned performance configurations.
-
-Any workload that is currently running on Apache Spark will run on Apache Spark for Azure Synapse without change.
-
-### Q: What versions of Spark are available?
-
-A: Azure Synapse Apache Spark fully supports Spark 2.4. For a full list of core components and currently supported version see [Apache Spark version support](./spark/apache-spark-version-support.md).
-
-### Q: Is there an equivalent of DButils in Azure Synapse Spark?
-
-A: Yes, Azure Synapse Apache Spark provides the **mssparkutils** library. For full documentation of the utility see [Introduction to Microsoft Spark utilities](./spark/microsoft-spark-utilities.md).
-
-### Q: How do I set session parameters in Apache Spark?
-
-A: To set session parameters, use %%configure magic available. A session restart is required for the parameters to take effect.
-
-### Q: How do I set cluster level parameters in a serverless Spark pool?
-
-A: To set cluster level parameters, you can provide a spark.conf file for the Spark pool. This pool will then honor the parameters past in the config file.
-
-### Q: Can I run a multi-user Spark Cluster in Azure Synapse Analytics?
-
-A: Azure Synapse provides purpose-built engines for specific use cases. Apache Spark for Synapse is designed as a job service and not a cluster model.
-There are two scenarios where people ask for a multi-user cluster model.
-
-**Scenario #1: Many users accessing a cluster for serving data for BI purposes.**
-
-The easiest way of accomplishing this task is to cook the data with Spark and then take advantage of the serving capabilities of Synapse SQL to that they can connect Power BI to those datasets.
-
-**Scenario #2: Having multiple developers on a single cluster to save money.**
-
-To satisfy this scenario, you should give each developer a serverless Spark pool that is set to use a small number of Spark resources. Since serverless Spark pools don't cost anything until they are actively used, this minimizes the cost when there are multiple developers. The pools share metadata (Spark tables) so they can easily work with each other.
-
-### Q: How do I include, manage, and install libraries?
-
-A: You can install external packages via a requirements.txt file while creating the Spark pool, from the synapse workspace, or from the Azure portal. See [Manage libraries for Apache Spark in Azure Synapse Analytics](./spark/apache-spark-azure-portal-add-libraries.md).
-
-## Dedicated SQL Pools
-
-### Q: What are the functional differences between dedicated SQL pools and serverless pools?
-
-A: You can find a full list of differences in [T-SQL feature differences in Synapse SQL](./sql/overview-features.md).
-
-### Q: Now that Azure Synapse is GA, how do I move my dedicated SQL pools that were previously standalone into Azure Synapse?
-
-A: There is no "move" or "migration."
-You can choose to enable new workspace features on your existing pools. If you do, there are no breaking changes; instead, you'll be able to use new features such as Synapse Studio, Spark, and serverless SQL pools.
-
-### Q: What is the default deployment of dedicated SQL pools now?
-
-A: By Default, all new dedicated SQL pools will be deployed to a workspace; however, if you need to you can still create a dedicated SQL pool (formerly SQL DW) in a standalone form factor.
-
-## Next steps
-
-* [Get started with Azure Synapse Analytics](get-started.md)
-* [Create a workspace](quickstart-create-workspace.md)
-* [Use serverless SQL pool](quickstart-sql-on-demand.md)
synapse-analytics Apache Spark Azure Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-azure-log-analytics.md
spark.synapse.logAnalytics.keyVault.linkedServiceName <LINKED_SERVICE_NAME>
| spark.synapse.logAnalytics.keyVault.name | - | Azure Key vault name for the Azure Log Analytics ID and key | | spark.synapse.logAnalytics.keyVault.key.workspaceId | SparkLogAnalyticsWorkspaceId | Azure Key vault secret name for the Azure Log Analytics workspace ID | | spark.synapse.logAnalytics.keyVault.key.secret | SparkLogAnalyticsSecret | Azure Key vault secret name for the Azure Log Analytics workspace key |
-| spark.synapse.logAnalytics.keyVault.uriSuffix | ods.opinsights.azure.com | The destination Azure Log Analytics workspace [URI suffix][uri_suffix]. If your Azure Log Analytics Workspace is not in Azure global, you need to update the URI suffix according to the respective cloud. |
+| spark.synapse.logAnalytics.uriSuffix | ods.opinsights.azure.com | The destination Azure Log Analytics workspace [URI suffix][uri_suffix]. If your Azure Log Analytics Workspace is not in Azure global, you need to update the URI suffix according to the respective cloud. |
> [!NOTE]
-> - For Azure China clouds, the "spark.synapse.logAnalytics.keyVault.uriSuffix" parameter should be "ods.opinsights.azure.cn".
-> - For Azure Gov clouds, the "spark.synapse.logAnalytics.keyVault.uriSuffix" parameter should be "ods.opinsights.azure.us".
+> - For Azure China clouds, the "spark.synapse.logAnalytics.uriSuffix" parameter should be "ods.opinsights.azure.cn".
+> - For Azure Gov clouds, the "spark.synapse.logAnalytics.uriSuffix" parameter should be "ods.opinsights.azure.us".
[uri_suffix]: ../../azure-monitor/logs/data-collector-api.md#request-uri
synapse-analytics Active Directory Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/active-directory-authentication.md
# Use Azure Active Directory Authentication for authentication with Synapse SQL
-Azure Active Directory authentication is a mechanism that connects to [Azure Synapse Analytics](../overview-faq.md) by using identities in Azure Active Directory (Azure AD).
+Azure Active Directory authentication is a mechanism that connects to [Azure Synapse Analytics](../overview-faq.yml) by using identities in Azure Active Directory (Azure AD).
With Azure AD authentication, you can centrally manage user identities that have access to Azure Synapse to simplify permission management. Benefits include the following:
synapse-analytics Query Cosmos Db Analytical Store https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/query-cosmos-db-analytical-store.md
For querying Azure Cosmos DB, the full [SELECT](/sql/t-sql/queries/select-transa
In this article, you'll learn how to write a query with a serverless SQL pool that will query data from Azure Cosmos DB containers that are enabled with Azure Synapse Link. You can then learn more about building serverless SQL pool views over Azure Cosmos DB containers and connecting them to Power BI models in [this tutorial](./tutorial-data-analyst.md). This tutorial uses a container with an [Azure Cosmos DB well-defined schema](../../cosmos-db/analytical-store-introduction.md#schema-representation).
+## Prerequisites
+
+- Make sure that you have prepared the analytical store:
+ - Enable analytical store on [your Cosmos DB containers](../quickstart-connect-synapse-link-cosmos-db.md#enable-azure-cosmos-db-analytical-store).
+ - Get the connection string with a read-only key that you will use to query the analytical store (see the sketch after this list).
+ - Get the read-only [key that will be used to access the Cosmos DB container](../../cosmos-db/database-security.md#primary-keys).
+- Make sure that you have applied all [best practices](best-practices-serverless-sql-pool.md), such as:
+ - Ensure that your Cosmos DB analytical storage is in the same region as serverless SQL pool.
+ - Ensure that the client application (Power BI, Analysis service) is in the same region as serverless SQL pool.
+ - If you are returning a large amount of data (bigger than 80 GB), consider using a caching layer such as Analysis Services, and load partitions smaller than 80 GB into the Analysis Services model.
+ - If you are filtering data using string columns, make sure that you are using the `OPENROWSET` function with the explicit `WITH` clause that has the smallest possible types (for example, don't use VARCHAR(1000) if you know that the property has up to 5 characters).
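A minimal sketch of retrieving the read-only keys with the Azure CLI follows; the account and resource group names are placeholders.

```azurecli
# List only the read-only keys for the Cosmos DB account (placeholder names).
az cosmosdb keys list \
    --name <cosmos-account> \
    --resource-group <resource-group> \
    --type read-only-keys
```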
+ ## Overview Serverless SQL pool enables you to query Azure Cosmos DB analytical storage using the `OPENROWSET` function.
For more information about the SQL types that should be used for Azure Cosmos DB
## Create view
-Creating views in the master or default databases is not recommended or supported. So you need to create an user database for your views.
+Creating views in the master or default databases is not recommended or supported. So you need to create a user database for your views.
Once you identify the schema, you can prepare a view on top of your Azure Cosmos DB data. You should place your Azure Cosmos DB account key in a separate credential and reference this credential from `OPENROWSET` function. Do not keep your account key in the view definition.
For more information, see the following articles:
- [Use Power BI and serverless SQL pool with Azure Synapse Link](../../cosmos-db/synapse-link-power-bi.md) - [Create and use views in a serverless SQL pool](create-use-views.md) - [Tutorial on building serverless SQL pool views over Azure Cosmos DB and connecting them to Power BI models via DirectQuery](./tutorial-data-analyst.md)
+- Visit [Synapse link for Cosmos DB self-help page](resources-self-help-sql-on-demand.md#cosmos-db) if you are getting some errors or experiencing performance issues.
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
The value specified in the `WITH` clause doesn't match the underlying Cosmos
If you are experiencing some unexpected performance issues, make sure that you applied the best practices, such as: - Make sure that you have placed the client application, serverless pool, and Cosmos DB analytical storage in [the same region](best-practices-serverless-sql-pool.md#colocate-your-cosmosdb-analytical-storage-and-serverless-sql-pool).
+- Make sure that you are using the `WITH` clause with [optimal data types](best-practices-serverless-sql-pool.md#use-appropriate-data-types).
- Make sure that you are using [Latin1_General_100_BIN2_UTF8 collation](best-practices-serverless-sql-pool.md#use-proper-collation-to-utilize-predicate-pushdown-for-character-columns) when you filter your data using string predicates. - If you have repeating queries that might be cached, try to use [CETAS to store query results in Azure Data Lake Storage](best-practices-serverless-sql-pool.md#use-cetas-to-enhance-query-performance-and-joins).
virtual-machines Oracle Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/oracle-create-upload-vhd.md
Preparing an Oracle Linux 7 virtual machine for Azure is very similar to Oracle
cat > /etc/cloud/cloud.cfg.d/91-azure_datasource.cfg <<EOF datasource_list: [ Azure ] datasource:
- Azure:
- apply_network_config: False
+ Azure:
+ apply_network_config: False
EOF if [[ -f /mnt/resource/swapfile ]]; then
Preparing an Oracle Linux 7 virtual machine for Azure is very similar to Oracle
16. Click **Action -> Shut Down** in Hyper-V Manager. Your Linux VHD is now ready to be uploaded to Azure. ## Next steps
-You're now ready to use your Oracle Linux .vhd to create new virtual machines in Azure. If this is the first time that you're uploading the .vhd file to Azure, see [Create a Linux VM from a custom disk](upload-vhd.md#option-1-upload-a-vhd).
+You're now ready to use your Oracle Linux .vhd to create new virtual machines in Azure. If this is the first time that you're uploading the .vhd file to Azure, see [Create a Linux VM from a custom disk](upload-vhd.md#option-1-upload-a-vhd).
virtual-machines Run Command https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/run-command.md
The Run Command feature uses the virtual machine (VM) agent to run shell scripts
## Benefits
-You can access your virtual machines in multiple ways. Run Command can run scripts on your virtual machines remotely by using the VM agent. You use Run Command through the Azure portal, [REST API](/rest/api/compute/virtual%20machines%20run%20commands/runcommand), or [Azure CLI](/cli/azure/vm/run-command#az_vm_run_command_invoke) for Linux VMs.
+You can access your virtual machines in multiple ways. Run Command can run scripts on your virtual machines remotely by using the VM agent. You use Run Command through the Azure portal, [REST API](/rest/api/compute/virtual-machines-run-commands/run-command), or [Azure CLI](/cli/azure/vm/run-command#az_vm_run_command_invoke) for Linux VMs.
This capability is useful in all scenarios where you want to run a script within a virtual machine. It's one of the only ways to troubleshoot and remediate a virtual machine that doesn't have the RDP or SSH port open because of improper network or administrative user configuration.
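For instance, this Azure CLI sketch (placeholder names) runs a shell command on a Linux VM through the VM agent.

```azurecli
# Run a shell command inside the VM; output is returned in the response.
az vm run-command invoke \
    --resource-group <resource-group> \
    --name <vm-name> \
    --command-id RunShellScript \
    --scripts "echo Hello from Run Command"
```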
virtual-machines Run Scripts In Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/run-scripts-in-vm.md
The [Custom Script Extension](../extensions/custom-script-linux.md) is primarily
The [Run Command](run-command.md) feature enables virtual machine and application management and troubleshooting using scripts, and is available even when the machine is not reachable, for example if the guest firewall doesn't have the RDP or SSH port open. * Run scripts in Azure virtual machines.
-* Can be run using [Azure portal](run-command.md), [REST API](/rest/api/compute/virtual%20machines%20run%20commands/runcommand), [Azure CLI](/cli/azure/vm/run-command#az_vm_run_command_invoke), or [PowerShell](/powershell/module/az.compute/invoke-azvmruncommand)
+* Can be run using [Azure portal](run-command.md), [REST API](/rest/api/compute/virtual-machines-run-commands/run-command), [Azure CLI](/cli/azure/vm/run-command#az_vm_run_command_invoke), or [PowerShell](/powershell/module/az.compute/invoke-azvmruncommand)
* Quickly run a script and view output and repeat as needed in the Azure portal. * Script can be typed directly or you can run one of the built-in scripts. * Run PowerShell script in Windows machines and Bash script in Linux machines.
Learn more about the different features that are available to run scripts and co
* [Custom Script Extension](../extensions/custom-script-linux.md) * [Run Command](run-command.md) * [Hybrid Runbook Worker](../../automation/automation-hybrid-runbook-worker.md)
-* [Serial console](/troubleshoot/azure/virtual-machines/serial-console-linux)
+* [Serial console](/troubleshoot/azure/virtual-machines/serial-console-linux)
virtual-machines Vm Usage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/vm-usage.md
To begin, [download your usage details](../cost-management-billing/manage/downlo
| Resource Group | The resource group in which the deployed resource is running in. For more information, see [Azure Resource Manager overview.](../azure-resource-manager/management/overview.md)|`MyRG`| | Instance ID | The identifier for the resource. The identifier contains the name you specify for the resource when it was created. For VMs, the Instance ID will contain the SubscriptionId, ResourceGroupName, and VMName (or scale set name for scale set usage).| `/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/ resourceGroups/MyRG/providers/Microsoft.Compute/virtualMachines/MyVM1`<br><br>or<br><br>`/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/ resourceGroups/MyRG/providers/Microsoft.Compute/virtualMachineScaleSets/MyVMSS1`| | Tags| Tag you assign to the resource. Use tags to group billing records. Learn how to tag your Virtual Machines using the [CLI](./tag-cli.md) or [PowerShell](./tag-portal.md) This is available for Resource Manager VMs only.| `{"myDepartment":"RD","myUser":"myName"}`|
-| Additional Info | Service-specific metadata. For VMs, we populate the following data in the additional info field: <br><br> Image Type- specific image that you ran. Find the full list of supported strings below under Image Types.<br><br> Service Type: the size that you deployed.<br><br> VMName: name of your VM. This field is only populated for scale set VMs. If you need your VM Name for scale set VMs, you can find that in the Instance ID string above.<br><br> UsageType: This specifies the type of usage this represents.<br><br> ComputeHR is the Compute Hour usage for the underlying VM, like Standard_D1_v2.<br><br> ComputeHR_SW is the premium software charge if the VM is using premium software, like Microsoft R Server. | Virtual Machines<br>`{"ImageType":"Canonical","ServiceType":"Standard_DS1_v2","VMName":"", "UsageType":"ComputeHR"}`<br><br>Virtual Machine Scale Sets<br> `{"ImageType":"Canonical","ServiceType":"Standard_DS1_v2","VMName":"myVM1", "UsageType":"ComputeHR"}`<br><br>Premium Software<br> `{"ImageType":"","ServiceType":"Standard_DS1_v2","VMName":"", "UsageType":"ComputeHR_SW"}` |
+| Additional Info | Service-specific metadata. For VMs, we populate the following data in the additional info field: <br><br> Image Type- specific image that you ran. Find the full list of supported strings below under Image Types.<br><br> Service Type: the size that you deployed.<br><br> VMName: name of your VM. This field is only populated for scale set VMs. If you need your VM Name for scale set VMs, you can find that in the Instance ID string above.<br><br> UsageType: This specifies the type of usage this represents.<br><br> ComputeHR is the Compute Hour usage for the underlying VM, like Standard_D1_v2.<br><br> ComputeHR_SW is the premium software charge if the VM is using premium software. | Virtual Machines<br>`{"ImageType":"Canonical","ServiceType":"Standard_DS1_v2","VMName":"", "UsageType":"ComputeHR"}`<br><br>Virtual Machine Scale Sets<br> `{"ImageType":"Canonical","ServiceType":"Standard_DS1_v2","VMName":"myVM1", "UsageType":"ComputeHR"}`<br><br>Premium Software<br> `{"ImageType":"","ServiceType":"Standard_DS1_v2","VMName":"", "UsageType":"ComputeHR_SW"}` |
## Image Type For some images in the Azure gallery, the image type is populated in the Additional Info field. This enables users to understand and track what they have deployed on their Virtual Machine. The following values are populated in this field based on the image you have deployed:
Microsoft.ClassicCompute represents classic resources deployed via the Azure Ser
### Why is the InstanceID field blank for my Virtual Machine usage? If you deploy via the classic deployment model, the InstanceID string is not available. ### Why are the tags for my VMs not flowing to the usage details?
-Tags only flow to you the Usage CSV for Resource Manager VMs only. Classic resource tags are not available in the usage details.
+Tags flow to the Usage CSV for Resource Manager VMs only. Classic resource tags are not available in the usage details.
### How can the consumed quantity be more than 24 hours in one day? In the Classic model, billing for resources is aggregated at the Cloud Service level. If you have more than one VM in a Cloud Service that uses the same billing meter, your usage is aggregated together. VMs deployed via Resource Manager are billed at the VM level, so this aggregation will not apply. ### Why is pricing not available for DS/FS/GS/LS sizes on the pricing page?
virtual-machines Prepare For Upload Vhd Image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/prepare-for-upload-vhd-image.md
In particular, Sysprep requires the drives to be fully decrypted before executio
### Generalize a VHD
+>[!NOTE]
+> If you're creating a generalized image from an existing Azure VM, we recommend removing the VM extensions
+> before running Sysprep.
+ >[!NOTE] > After you run `sysprep.exe` in the following steps, turn off the VM. Don't turn it back on until > you create an image from it in Azure.
virtual-machines Run Command https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/run-command.md
The Run Command feature uses the virtual machine (VM) agent to run PowerShell sc
## Benefits
-You can access your virtual machines in multiple ways. Run Command can run scripts on your virtual machines remotely by using the VM agent. You use Run Command through the Azure portal, [REST API](/rest/api/compute/virtual%20machines%20run%20commands/runcommand), or [PowerShell](/powershell/module/az.compute/invoke-azvmruncommand) for Windows VMs.
+You can access your virtual machines in multiple ways. Run Command can run scripts on your virtual machines remotely by using the VM agent. You use Run Command through the Azure portal, [REST API](/rest/api/compute/virtual-machines-run-commands/run-command), or [PowerShell](/powershell/module/az.compute/invoke-azvmruncommand) for Windows VMs.
This capability is useful in all scenarios where you want to run a script within a virtual machine. It's one of the only ways to troubleshoot and remediate a virtual machine that doesn't have the RDP or SSH port open because of improper network or administrative user configuration.
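The equivalent Azure CLI sketch for a Windows VM uses the `RunPowerShellScript` command ID (placeholder names; the `Invoke-AzVMRunCommand` PowerShell cmdlet behaves similarly).

```azurecli
# Run a PowerShell command inside a Windows VM through the VM agent.
az vm run-command invoke \
    --resource-group <resource-group> \
    --name <vm-name> \
    --command-id RunPowerShellScript \
    --scripts "Get-Service | Where-Object Status -eq 'Running'"
```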
virtual-machines Run Scripts In Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/run-scripts-in-vm.md
The [Custom Script Extension](../extensions/custom-script-windows.md) is primari
The [Run Command](run-command.md) feature enables virtual machine and application management and troubleshooting using scripts, and is available even when the machine is not reachable, for example if the guest firewall doesn't have the RDP or SSH port open. * Run scripts in Azure virtual machines.
-* Can be run using [Azure portal](run-command.md), [REST API](/rest/api/compute/virtual%20machines%20run%20commands/runcommand), [Azure CLI](/cli/azure/vm/run-command#az_vm_run_command_invoke), or [PowerShell](/powershell/module/az.compute/invoke-azvmruncommand)
+* Can be run using [Azure portal](run-command.md), [REST API](/rest/api/compute/virtual-machines-run-commands/run-command), [Azure CLI](/cli/azure/vm/run-command#az_vm_run_command_invoke), or [PowerShell](/powershell/module/az.compute/invoke-azvmruncommand)
* Quickly run a script and view output and repeat as needed in the Azure portal. * Script can be typed directly or you can run one of the built-in scripts. * Run PowerShell script in Windows machines and Bash script in Linux machines.
Learn more about the different features that are available to run scripts and co
* [Custom Script Extension](../extensions/custom-script-windows.md) * [Run Command](run-command.md) * [Hybrid Runbook Worker](../../automation/automation-hybrid-runbook-worker.md)
-* [Serial console](/troubleshoot/azure/virtual-machines/serial-console-windows)
+* [Serial console](/troubleshoot/azure/virtual-machines/serial-console-windows)
virtual-machines Oracle Reference Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/oracle/oracle-reference-architecture.md
During the initial request, the application server connects to the shard directo
When deploying your Oracle workloads to Azure, Microsoft takes care of all host OS-level patching. Any planned OS-level maintenance is communicated to customers in advance to allow the customer to prepare for this planned maintenance. Two servers from two different Availability Zones are never patched simultaneously. See [Manage the availability of virtual machines](../../availability.md) for more details on VM maintenance and patching.
-Patching your virtual machine operating system can be automated using [Azure Automation Update Management](../../../automation/update-management/overview.md). Patching and maintaining your Oracle database can be automated and scheduled using [Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines) or [Azure Automation Update Management](../../../automation/update-management/overview.md) to minimize downtime. See [Continuous Delivery and Blue/Green Deployments](/azure/devops/learn/what-is-continuous-delivery) to understand how it can be used in the context of your Oracle databases.
+Patching your virtual machine operating system can be automated using [Azure Automation Update Management](../../../automation/update-management/overview.md). Patching and maintaining your Oracle database can be automated and scheduled using [Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines) or [Azure Automation Update Management](../../../automation/update-management/overview.md) to minimize downtime. See [Continuous Delivery and Blue/Green Deployments](/devops/deliver/what-is-continuous-delivery) to understand how it can be used in the context of your Oracle databases.
## Architecture and design considerations